Drawing a parallel between machine learning and human learning helps in designing new models.
It is striking that, in the field of neural networks, learning is almost always treated as a "one-shot" mechanism: "learn everything at once or die" seems to be the underlying slogan...
However, learning one thing after another, in a progressive manner, is what makes learning complex tasks possible. We were among the few to demonstrate this empirically in neural network learning.
We have drawn a parallel between connectionist (neural) learning and theories of human learning (automatization theory, developmental theory). Although controversial, this approach enabled us to design new connectionist architectures and models. One of these models is able to learn successive tasks and, when useful, reuse previously learned skills for a new learning task.
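The idea of reusing a previously learned skill as a building block for a new task can be illustrated with a deliberately tiny sketch (this is a hypothetical toy example, not the actual architecture described here): a perceptron first learns AND, then its frozen output is fed in as an extra feature when learning XOR. XOR is not linearly separable from the raw inputs alone, but it becomes learnable once the earlier skill is available as a feature.

```python
def train_perceptron(samples, epochs=200, lr=0.1):
    """Classic perceptron rule; samples are (inputs, target) pairs."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, t in samples:
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = t - y
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    # Return the learned decision function with weights frozen.
    return lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]

# Task 1: learn AND (linearly separable, so the perceptron converges).
and_skill = train_perceptron([(x, x[0] & x[1]) for x in inputs])

# Task 2: learn XOR, reusing the frozen AND skill as a third input.
# With the AND feature, XOR is linearly separable: XOR = x1 + x2 - 2*AND.
xor_skill = train_perceptron(
    [((x[0], x[1], and_skill(x)), x[0] ^ x[1]) for x in inputs]
)

for x in inputs:
    print(x, "->", xor_skill((x[0], x[1], and_skill(x))))
```

Trained in sequence, the second task succeeds precisely because the first skill is reused; the same perceptron trained on XOR from scratch, without the extra feature, would fail.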
Each individual has his or her own history... why shouldn't neural nets?