The essence of learning is to pinpoint which components in the information-processing pipeline are responsible for an error in the output. In artificial intelligence, this is achieved by backpropagation: the error is propagated backwards through the network to determine how each of the model's parameters should be adjusted to reduce it. Many researchers believe that the brain employs a similar learning principle.
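To make this concrete, here is a minimal sketch of backpropagation: a two-layer network compares its output with a target and propagates the error backwards to work out how much each connection weight contributed to it. The network size, input, target, and learning rate below are arbitrary illustrative choices, not details from the article.

```python
# Minimal backpropagation sketch (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(3, 2))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(1, 3))   # hidden -> output weights

x = np.array([[0.5], [-0.2]])             # example input
t = np.array([[1.0]])                     # desired output (target)

for step in range(200):
    # Forward pass: compute the network's output.
    h = np.tanh(W1 @ x)                   # hidden activity
    y = W2 @ h                            # network output
    e = y - t                             # output error

    # Backward pass: propagate the error to assign blame to each weight.
    dW2 = e @ h.T
    dh = W2.T @ e * (1 - h**2)            # error reaching the hidden layer
    dW1 = dh @ x.T

    # Adjust parameters in the direction that reduces the error.
    W1 -= 0.1 * dW1
    W2 -= 0.1 * dW2

print(y.item())  # close to the target 1.0 after training
```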
However, the biological brain outperforms current machine learning systems in important ways. For example, we can learn new information after seeing it just once, whereas artificial systems must be trained on the same material hundreds of times before they learn it. Furthermore, we can learn new information while retaining the knowledge we already have, whereas in artificial neural networks, learning new information often interferes with existing knowledge and degrades it rapidly.
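The interference effect described above can be seen even in a toy model. The sketch below (the tasks, sizes, and learning schedule are made-up assumptions) trains a single weight matrix on task A, then on task B alone, and measures how performance on task A degrades.

```python
# Toy illustration of interference between tasks (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(2, 4))

# Two conflicting association tasks: same inputs, different targets.
X = rng.normal(size=(4, 20))
T_a = rng.normal(size=(2, 20))            # task A targets
T_b = rng.normal(size=(2, 20))            # task B targets

def train(W, X, T, steps=500, lr=0.01):
    # Plain gradient descent on the mean squared error.
    for _ in range(steps):
        W -= lr * (W @ X - T) @ X.T / X.shape[1]
    return W

def err(W, T):
    return float(np.mean((W @ X - T) ** 2))

W = train(W, X, T_a)
print("task A error after learning A:", err(W, T_a))   # low
W = train(W, X, T_b)
print("task A error after learning B:", err(W, T_a))   # much higher
```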
These observations motivated the researchers to identify the fundamental principle that the brain employs during learning. They examined existing sets of mathematical equations describing changes in the behaviour of neurons and in the synaptic connections between them. They analysed and simulated these information-processing models and found that they employ a fundamentally different learning principle from the one used by artificial neural networks.
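The article does not reproduce these equations, but a common feature of such models is that neural activity settles first and synaptic change follows, rather than the error being pushed backwards through fixed activity as in backpropagation. The sketch below is a heavily simplified, assumption-laden illustration of that two-phase idea, using a linear predictive-coding-style network with arbitrary sizes and step sizes; it is not the study's actual model.

```python
# Hedged sketch of "activity settles first, synapses change second"
# (all numerical choices here are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(2)
W1 = rng.normal(scale=0.5, size=(3, 2))   # input -> hidden synapses
W2 = rng.normal(scale=0.5, size=(1, 3))   # hidden -> output synapses

x = np.array([[0.5], [-0.2]])             # sensory input
t = np.array([[1.0]])                     # desired output

for trial in range(200):
    # Phase 1: let hidden activity relax towards a configuration that is
    # consistent with both the input-driven prediction and the target.
    p = W1 @ x                            # predicted hidden activity
    h = p.copy()                          # activity starts at its prediction
    for _ in range(50):
        e_h = h - p                       # error on hidden neurons
        e_y = t - W2 @ h                  # error on output neurons
        h += 0.1 * (W2.T @ e_y - e_h)     # settle activity before plasticity

    # Phase 2: synapses change locally so the settled activity (and the
    # target) become the network's new prediction.
    W1 += 0.05 * (h - p) @ x.T
    W2 += 0.05 * (t - W2 @ h) @ h.T

print((W2 @ (W1 @ x)).item())  # forward prediction, close to the target 1.0
```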
Read the full article on the University of Oxford website.