The research, titled “Hippocampus supports multi-task reinforcement learning under partial observability,” was led by Dabal Pedamonti, Samia Mohinta (Cambridge) and Rui Ponte Costa, in collaboration with researchers at the Medical University of Vienna (Hugo Malagon-Vina) and the University of Bern (Stephane Ciocchi). The team combined behavioural experiments in rodents, computational modelling and neural recordings to uncover how hippocampal circuits enable learning in complex environments.
In the experiments, animals switched between two spatial strategies, one based on body movements (egocentric) and the other on environmental cues (allocentric), in mazes where not all cues were visible. Deep learning models inspired by the hippocampus showed that recurrent, memory-based connections were essential for learning under such "partial observability": models lacking recurrence failed to adapt when information was missing or ambiguous.
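Why recurrence matters here can be illustrated with a toy sketch (our own illustration, not the study's model): a cue shown only at the start of a maze determines the rewarded turn at a later junction, but the observation at the junction itself looks identical in both conditions. A memoryless, reactive policy must therefore make the same choice regardless of the hidden cue, whereas a policy with a recurrent hidden state can carry the cue forward to the decision point. All names below are hypothetical.

```python
def make_episode(cue):
    # Observation sequence: the cue is visible only at t=0, then the
    # animal sees an ambiguous corridor and an ambiguous junction.
    return [cue, "corridor", "junction"]

def memoryless_policy(obs):
    # A reactive policy sees only the current observation. "junction"
    # looks the same under both cues, so it must pick one fixed turn.
    return "left" if obs == "junction" else "forward"

class RecurrentPolicy:
    # Minimal "recurrence": a hidden state carries the cue through time.
    def __init__(self):
        self.hidden = None

    def act(self, obs):
        if obs in ("cue_left", "cue_right"):
            self.hidden = obs  # store the cue in memory
        if obs == "junction":
            return "left" if self.hidden == "cue_left" else "right"
        return "forward"

def final_choice(act_fn, cue):
    # Run one episode and return the action taken at the junction.
    choice = None
    for obs in make_episode(cue):
        choice = act_fn(obs)
    return choice

# The memoryless policy makes the same turn whatever the hidden cue was;
# the recurrent policy's turn tracks the cue it saw at the start.
print(final_choice(memoryless_policy, "cue_left"),
      final_choice(memoryless_policy, "cue_right"))   # left left
print(final_choice(RecurrentPolicy().act, "cue_left"),
      final_choice(RecurrentPolicy().act, "cue_right"))  # left right
```

The sketch mirrors the qualitative finding only: without a state that persists across timesteps, no amount of tuning lets the reactive policy solve both cue conditions, which is why the models lacking recurrence failed when information was missing.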
Recordings from hippocampal neurons mirrored the model's internal dynamics: both encoded information about the current strategy, task timing and upcoming decisions. This close match suggests that recurrence in the hippocampus helps the brain infer hidden aspects of the environment and select the appropriate strategy.
Read the full story on the Department of Physiology, Anatomy and Genetics website.
