The spatially-embedded
Recurrent Neural Network
A model to reveal widespread links between structural and functional neuroscience findings
Jascha Achterberg*, Danyal Akarca*, DJ Strouse, John Duncan, Duncan E. Astle
Learn more:
Preprint available on bioRxiv: Spatially-embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings (https://www.biorxiv.org/content/10.1101/2022.11.17.516914v1)
Github repository with example implementations: https://github.com/8erberg/spatially-embedded-RNN
Summary of key findings below
We are currently expanding our model with additional biophysical constraints. An example implementation using spiking neural networks is already available on Github. Please get in touch if you are interested in collaborating on seRNN-related work.
Online Lecture
Project overview
Summary
RNNs faced with task control, structural costs and communication constraints configure themselves to exhibit brain-like structural and functional properties.
These spatially-embedded RNNs show:
A sparse modular small-world connectome
Spatial organization of their units according to their function
An energy-efficient mixed selective code
Convergence in parameter space
Background & question
Because they are exposed to the same basic forces and optimization problems, brains commonly converge on similar features in their structural topology and function [1].
Can we observe this convergence in a recurrent neural network (RNN) as it is optimized for task control (a one-choice inference task) under structural cost and communication constraints?
Our approach
Spatially-embedded recurrent neural networks (seRNNs) are characterized by a special regularization function that embeds their units in a 3D box space and imposes local communication constraints.
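To make the idea concrete, here is a minimal NumPy sketch of a distance-weighted L1 penalty of the kind described above: units are assigned coordinates in a 3D box, and each recurrent weight is penalized in proportion to the Euclidean distance it spans, so long-range connections cost more than local ones. The grid shape, coordinate scheme, and exact penalty form are illustrative assumptions, not the published seRNN implementation (see the Github repository for that).

```python
import numpy as np

def unit_coordinates(shape=(5, 5, 4)):
    """Place recurrent units on a regular 3D grid inside a box.

    Returns an (n_units, 3) array of coordinates; the grid shape
    here is an illustrative assumption.
    """
    xs, ys, zs = np.indices(shape)
    return np.stack([xs.ravel(), ys.ravel(), zs.ravel()], axis=1).astype(float)

def spatial_l1_penalty(W, coords, strength=1e-3):
    """L1 penalty on recurrent weights, scaled by the Euclidean
    distance between the units each weight connects."""
    diffs = coords[:, None, :] - coords[None, :, :]
    dist = np.linalg.norm(diffs, axis=-1)   # pairwise unit distances
    return strength * np.sum(np.abs(W) * dist)

coords = unit_coordinates((5, 5, 4))        # 100 units in a 3D box
W = np.random.default_rng(0).normal(size=(100, 100))
penalty = spatial_l1_penalty(W, coords)     # added to the task loss
```

During training, this penalty would be added to the task loss, pushing the network toward sparse, spatially local connectivity.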
We trained a population of 1000 seRNNs (10 epochs) and compared them to 1000 L1-regularized RNNs (baseline models). In both populations we varied the regularization strength systematically.
Structural findings
As in empirical brain networks, seRNNs configured themselves to exhibit a sparse modular small-world topology [2].
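One common way to probe for such topology, sketched below with synthetic data, is to threshold a trained weight matrix into a binary connectome and compare its clustering against a density-matched shuffled graph, an ingredient of standard small-worldness measures. This is an illustrative sketch, not the paper's analysis pipeline; the threshold density and shuffle baseline are assumptions.

```python
import numpy as np

def binarize(W, density=0.1):
    """Keep the strongest |weights| at the given connection density."""
    A = np.abs(W.copy())
    np.fill_diagonal(A, 0)
    thresh = np.quantile(A[A > 0], 1 - density)
    return (A >= thresh).astype(int)

def clustering(A):
    """Mean clustering coefficient of an undirected binary graph."""
    A = ((A + A.T) > 0).astype(int)            # symmetrize
    deg = A.sum(axis=1)
    triangles = np.diag(A @ A @ A) / 2          # triangles through each node
    possible = deg * (deg - 1) / 2              # possible triangles per node
    mask = possible > 0
    return float(np.mean(triangles[mask] / possible[mask]))

rng = np.random.default_rng(0)
W = rng.normal(size=(100, 100))                 # stand-in for trained weights
A = binarize(W, density=0.1)
A_rand = rng.permuted(A.ravel()).reshape(A.shape)  # density-matched shuffle
ratio = clustering(A) / clustering(A_rand)      # >1 indicates excess clustering
```

A small-world network would show clustering well above the shuffled baseline while keeping short path lengths; the random weights here are only a placeholder for trained seRNN weights.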
Structure-function findings
Mirroring neural tuning [3], functionally similar seRNN units clustered in space. As in the brain, task-related information shows an organized spatial configuration.
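This structure-function relationship can be tested with a simple pairwise comparison: if functionally similar units sit close together, their tuning similarity should correlate negatively with their spatial distance. The sketch below uses synthetic activity and coordinates as placeholders; it illustrates the test, not the authors' exact analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
coords = rng.uniform(size=(50, 3))       # placeholder unit positions in a box
activity = rng.normal(size=(50, 200))    # placeholder units x task timepoints

# Functional similarity: correlation between unit activity profiles.
sim = np.corrcoef(activity)

# Spatial distance between every pair of units.
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)

# Correlate similarity with distance over unique pairs only.
iu = np.triu_indices(50, k=1)
r = np.corrcoef(sim[iu], dist[iu])[0, 1]  # negative r => similar units cluster
```

With random placeholder data r will hover near zero; in a network whose functionally similar units cluster in space, it would be reliably negative.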
Functional findings
seRNNs exhibit a mixed selective [4] and low-energy demand code for solving the task.
Convergent outcomes
Findings emerge in unison in a subgroup of seRNNs within a "sweet spot" [5] in the regularization strength and training duration parameter space.
Conclusions
Seemingly unrelated neuroscientific findings can be attributed to the same optimization process.
seRNNs can serve as model systems to bridge between structural and functional research communities to move neuroscientific understanding and AI forward.
[1] van den Heuvel MP, et al. Trends in Cognitive Sciences. 2016.
[2] Bullmore & Sporns. Nature Reviews Neuroscience. 2012.
[3] Thompson & Fransson. Scientific Reports. 2018.
[4] Fusi, et al. Current Opinion in Neurobiology. 2016.
[5] Akarca, et al. Nature Communications. 2021.
We thank UKRI MRC (JA, DA, DEA, JD), Gates Cambridge Scholarship (JA), Cambridge Vice Chancellor’s Scholarship (DA) and DeepMind (DS) for funding.