The computational principles of neural circuitry: what matters most?
Summary
Many animals evolved extensive neural circuitry at considerable metabolic cost. The behavior supported by this complex circuitry exhibits capabilities beyond those of humankind’s most advanced engineered systems. Manufacturers struggle to automate many simple tasks, so humans still outnumber robots at most factories. Automakers struggle to create self-driving cars, even though rodents and insects navigate the world with high reliability using objectively inferior sensors. What mechanisms account for the unbeaten computational power of neural circuitry? This workshop will focus on that question. Part of the work will be a critique of the fundamental ideas underpinning popular neuroscience research on neural computation, including synaptic plasticity and single-cell computation. The rest will focus on identifying under-addressed questions about computation in neural circuitry, and underexplored areas that could hold important principles of neural computation.
For example, is synaptic plasticity a distraction? Neuroscience has devoted substantial resources to elucidating the principles of synaptic plasticity. Learning rules and molecular components have been investigated, but mostly in ex vivo preparations. The results have inspired theoretical work and models of synaptic plasticity. One example is spike-timing-dependent plasticity (STDP), a tremendously popular idea that is nonetheless unlikely to operate in vivo. Is the field of synaptic plasticity built on a shaky foundation?
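To make the object of this critique concrete, the following is a minimal sketch of the pairwise additive form of STDP commonly used in models: potentiation when a presynaptic spike precedes a postsynaptic spike, depression when the order is reversed. The amplitudes and time constants are illustrative placeholders, not empirically fitted values.

```python
import numpy as np

def stdp_update(w, pre_spikes, post_spikes, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pairwise additive STDP over all pre/post spike pairs (times in ms).
    Parameter values are illustrative, not fitted to data."""
    dw = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt > 0:        # pre before post -> potentiation
                dw += a_plus * np.exp(-dt / tau_plus)
            elif dt < 0:      # post before pre -> depression
                dw -= a_minus * np.exp(dt / tau_minus)
    return float(np.clip(w + dw, w_min, w_max))

# A synapse whose presynaptic spikes tend to precede postsynaptic ones
# is slightly potentiated relative to its starting weight of 0.5.
print(stdp_update(0.5, pre_spikes=[10.0, 50.0], post_spikes=[15.0, 58.0]))
```

The in vivo objection is precisely that this tidy dependence on millisecond-scale pre/post timing is rarely the dominant factor in intact, behaving circuits.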
Despite the limitations and problems of the synaptic plasticity field, the essential concept has proven to have tremendous computational power. Deep learning, especially when deployed at industrial scale, has shown that a greatly simplified model of neural circuitry can achieve computational performance beyond any other algorithm designed to date. A critical component of deep learning is an analog of synaptic plasticity; thus, substantial computational power can be realized using adjustable weights in a network. What bounds can we place on this aspect of neural circuitry? For example, based on ultrastructural data, synapses likely have only about 5 bits of precision in strength. Moreover, some basic wiring principles can bestow computational features that depend only weakly on precise synaptic strengths, e.g., the integration of simple cells in the primary visual cortex to create complex cells.
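A rough illustration of both points, limited weight precision and wiring that is insensitive to it, is sketched below under standard textbook assumptions: simple cells modeled as Gabor filters and a complex cell as the classical energy model pooling a quadrature pair. The filter parameters and the quantization helper are illustrative choices, not measured quantities.

```python
import numpy as np

def gabor(size=16, sf=0.2, theta=0.0, phase=0.0):
    """Gabor filter: a standard model of a V1 simple-cell receptive field."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * (size / 4) ** 2))
    return envelope * np.cos(2 * np.pi * sf * xr + phase)

def quantize(w, bits=5):
    """Round weights to 2**bits levels across their range (assumed precision limit)."""
    levels = 2 ** bits - 1
    lo, hi = w.min(), w.max()
    return np.round((w - lo) / (hi - lo) * levels) / levels * (hi - lo) + lo

def complex_cell(image, bits=None):
    """Energy-model complex cell: pool the squared responses of a quadrature
    pair of simple cells, yielding a phase-invariant orientation response."""
    pair = [gabor(phase=p) for p in (0.0, np.pi / 2)]
    if bits is not None:
        pair = [quantize(g, bits) for g in pair]
    return sum(np.sum(g * image) ** 2 for g in pair)

# Probe with grating-like patches at several phases: the pooled response is
# nearly constant across phase, and nearly unchanged by 5-bit quantization.
for phase in (0.0, np.pi / 3, np.pi / 2):
    img = gabor(phase=phase)  # reuse the Gabor as a stimulus patch
    print(round(complex_cell(img), 3), round(complex_cell(img, bits=5), 3))
```

The phase invariance here comes from the pooling wiring itself, not from finely tuned weights, which is the sense in which the computational feature depends only weakly on synaptic precision.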
Looking beyond synaptic plasticity, what principles are we missing as we elucidate the mechanisms that underpin the computational power of neural circuitry? Many behaviors are innate or instinctual, relying little on plasticity. Perhaps we should understand more about the baseline wiring, constrained by the genetic bottleneck. Also, individual cells have considerable potential computational power; how much of it is feasibly realized in vivo?
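One way to make the single-cell question concrete is the two-layer abstraction of a pyramidal neuron, in which each dendritic branch applies its own nonlinearity before the soma sums the branch outputs. The sketch below uses random, purely illustrative weights and is meant only to show how a single cell can already behave like a small network.

```python
import numpy as np

def two_layer_neuron(inputs, branch_weights, soma_weights):
    """Two-layer neuron abstraction: each dendritic branch applies a sigmoidal
    nonlinearity to its own synaptic sum; the soma combines branch outputs.
    All weights here are illustrative, not derived from data."""
    branch_drive = branch_weights @ inputs            # per-branch synaptic sums
    branch_out = 1.0 / (1.0 + np.exp(-branch_drive))  # branch nonlinearity
    return float(soma_weights @ branch_out)

rng = np.random.default_rng(0)
x = rng.random(12)                 # presynaptic activity on 12 synapses
Wb = rng.normal(size=(4, 12))      # 4 dendritic branches, 12 synapses each
ws = rng.normal(size=4)            # branch-to-soma coupling
print(two_layer_neuron(x, Wb, ws))
```

How much of this network-within-a-cell capacity is actually exercised in vivo remains an open, under-addressed question for the workshop.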