Abstract
We present a new algorithm, based on a random model, for efficiently simulating large brain neuronal networks. The model parameters (mean firing rate, number of neurons, synaptic connection probability, and postsynaptic duration) are easy to calibrate from real experimental data. Under a time-asynchrony assumption, both computational and memory complexities are proved to be linear in the number of neurons. These results are validated experimentally by sequential simulations of millions of neurons and billions of synapses, completed in a few minutes on a single-processor desktop computer.
Significance Statement
Time asynchrony refers to models that prevent any two neurons from firing at exactly the same time (up to numerical precision). Time-asynchronous brain models have a substantial numerical advantage over others: they can be simulated without massive parallelization or GPUs. Indeed, this work exploits the time-asynchrony property to derive new simulation algorithms whose computational and memory complexities can be estimated beforehand. Applications to realistic sets of parameters show that these algorithms can easily reach the size of a small monkey brain on an ordinary single-processor desktop computer.
Competing Interest Statement
The authors have declared no competing interest.