A new neuron model from the Flatiron Institute’s Center for Computational Neuroscience could revolutionize AI by representing neuronal function more accurately, potentially leading to more advanced and efficient artificial neural networks.
Advances in artificial intelligence (AI) could see a significant leap forward thanks to a groundbreaking neuron model developed by researchers at the Flatiron Institute’s Center for Computational Neuroscience (CCN). The new model presents a more nuanced understanding of how neurons operate, potentially overcoming the limitations of the 1960s-era models that currently underpin AI technologies like ChatGPT.
The CCN’s innovative model treats neurons not just as passive relays of input but as tiny “controllers” capable of influencing their surroundings. This contrasts sharply with traditional models where information flows in a single direction, imposing a rigid framework that fails to capture the complex interactions seen in real neural networks.
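For context, the “passive relay” picture is essentially the artificial neuron that has anchored AI since the perceptron era: a weighted sum of inputs pushed through a nonlinearity, with nothing flowing back toward the inputs. A minimal sketch of that classic model (a standard textbook construction, not code from the paper):

```python
import math

def classic_neuron(inputs, weights, bias):
    """1960s-style point neuron: weighted sum of inputs, then a nonlinearity.
    Information flows strictly forward; the neuron cannot influence its inputs."""
    drive = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-drive))  # sigmoid squashing function

print(classic_neuron([0.5, -1.2, 0.3], weights=[0.8, 0.1, -0.4], bias=0.2))
```

The CCN proposal, by contrast, closes the loop: a neuron’s output feeds back to shape the very signals it receives.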
“Neuroscience has advanced quite a bit in these past 60 years, and we now recognize that previous models of neurons are rather rudimentary,” Dmitri Chklovskii, a group leader at the CCN and senior author of the paper, said in a news release. “A neuron is a much more complex device — and much smarter — than this overly simplified model.”
The updated model, presented in the journal Proceedings of the National Academy of Sciences, suggests that neurons exert more control over their environment than previously thought. This new understanding could lead to more sophisticated artificial neural networks that come closer to the brain’s efficiency.
Despite the impressive achievements of current AI, including natural language processing and image recognition, there remain significant challenges.
“The current applications can give you wrong answers, or hallucinate, and training them requires a lot of energy; they’re very expensive. There are all these problems that the human brain seems to avoid. If we were to understand how the brain actually does this, we could build better AI,” Chklovskii added.
The model was inspired by the large-scale circuits in the brain, which are organized into feedback loops to maintain stability — much like a thermostat controls room temperature. Applying these principles at the level of individual neurons was both unexpected and revolutionary.
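The thermostat analogy corresponds to a simple negative-feedback loop: measure the quantity you care about, compare it with a setpoint, and act against the error. A toy version of that loop (a generic control-loop illustration, not the paper’s model):

```python
def thermostat_step(temperature, setpoint, gain=0.3):
    """Proportional negative feedback: the action opposes the current error."""
    error = setpoint - temperature
    return temperature + gain * error  # heating/cooling nudges temp toward setpoint

temp = 15.0
for _ in range(20):
    temp = thermostat_step(temp, setpoint=21.0)
print(round(temp, 2))  # settles near 21.0 despite starting far away
```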
“People thought of the brain as a whole or even parts of the brain as being a controller, but no one suggested that a single neuron could do that,” said Chklovskii. “Control is a computationally intensive task. It’s hard to think of a neuron as having enough computational capacity.”
Beyond offering insights into the brain’s efficiency, the new model also sheds light on previously unexplained phenomena, such as the role of noise in neural transmissions. This randomness, often seen at synapses, appears to be crucial in allowing neurons to adapt to changing environments, enhancing their performance.
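One way to see why randomness could help rather than hurt: noise supplies the exploration that trial-and-error adaptation needs. The perturbation-style toy below (an illustration of that general principle, not the mechanism proposed in the paper) re-tunes a control weight after its environment abruptly changes:

```python
import random

random.seed(0)

def plant(action, gain):
    """The environment the neuron acts on; 'gain' is unknown and can drift."""
    return gain * action

# Tune a control weight w so the plant's output matches the target,
# using only noisy trial and error: perturb, compare errors, keep improvements.
w, gain, target = 0.1, 2.0, 1.0
for trial in range(2000):
    if trial == 1000:
        gain = 0.5  # the environment changes mid-run
    noise = random.gauss(0.0, 0.05)  # synaptic-style randomness
    err_now = abs(target - plant(w * target, gain))
    err_try = abs(target - plant((w + noise) * target, gain))
    if err_try < err_now:  # the random perturbation found a better weight
        w += noise
print(round(w * gain, 3))  # ~1.0: control re-tuned itself after the change
```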
The implications of this research extend beyond immediate AI applications. By providing a more accurate representation of neuronal behavior, the findings could also inform neurological studies and therapeutic strategies for brain-related diseases.
Chklovskii and his team plan to expand their research, analyzing neurons that might not fit this new model, such as those found in the retina. These neurons, although possibly unable to control their inputs, may still operate on similar principles of prediction and control.
“Control and prediction are actually very related,” Chklovskii added. “You cannot control efficiently without predicting the impact of your actions in the world.”
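In textbook terms, that is the logic of model-based control: an internal model predicts what each candidate action would do, and the controller acts on the prediction. A bare-bones sketch (a generic control-theory illustration, not the paper’s algorithm):

```python
def internal_model(state, action):
    """The controller's prediction of an action's effect (assumed accurate here)."""
    return state + 0.8 * action  # predicted next state

def choose_action(state, goal, candidates):
    """Pick the action whose *predicted* outcome lands closest to the goal."""
    return min(candidates, key=lambda a: abs(goal - internal_model(state, a)))

state, goal = 0.0, 5.0
candidates = [x * 0.1 for x in range(-50, 51)]  # actions from -5.0 to 5.0
for _ in range(5):
    state = internal_model(state, choose_action(state, goal, candidates))
print(round(state, 2))  # ends near 5.0 by predicting outcomes before acting
```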
The Flatiron Institute, the research division of the Simons Foundation, continues to push boundaries in scientific research through computational methods. Their ongoing work at the CCN promises to deepen our understanding of brain function, both in health and disease, which could pave the way for the next generation of AI technologies.