The human brain is an extraordinarily complex organ, often considered one of the most intricate and sophisticated systems in the known universe. The brain is organized hierarchically, with lower-level sensory processing areas sending information to higher-level cognitive and decision-making regions. This hierarchy allows for the integration of complex knowledge and behaviors. The brain processes information in parallel, with different regions and networks working simultaneously on various aspects of perception, cognition, and motor control. This parallel processing contributes to its efficiency and adaptability.
Can this kind of hierarchical organization and parallel processing be adapted to deep learning? Researchers at the University of Copenhagen think so: they present a graph neural network–based encoding in which the growth of a policy network is controlled by another neural network running in each neuron. They call it a Neural Developmental Program (NDP).
Some biological processes map a compact genotype to a much larger phenotype. Inspired by this, researchers have created indirect encoding methods. In an indirect encoding, the description of the solution is compressed, allowing information to be reused, so the final solution can contain many more components than the description itself. However, these encodings (particularly the developmental family of indirect encodings) must first be grown through a developmental process.
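To make the genotype-to-phenotype idea concrete, here is a minimal, hypothetical sketch (not the NDP itself) in which a three-number genotype is expanded into a 32×32 weight matrix by reusing the same parameters across coordinates:

```python
import numpy as np

def indirect_decode(genotype, out_shape=(32, 32)):
    """Expand a small genotype into a much larger weight matrix (phenotype).

    Each weight is produced by reusing the same few genotype parameters as
    coefficients of a simple coordinate-based pattern, so the phenotype grows
    without growing the description.
    """
    a, b, c = genotype  # only three numbers describe 32 x 32 = 1024 weights
    rows = np.linspace(-1.0, 1.0, out_shape[0])[:, None]
    cols = np.linspace(-1.0, 1.0, out_shape[1])[None, :]
    return np.tanh(a * rows + b * cols + c * rows * cols)

weights = indirect_decode(np.array([0.5, -0.3, 0.8]))
print(weights.shape)  # (32, 32): 1024 phenotype values from a 3-number genotype
```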
The NDP architecture comprises a multilayer perceptron (MLP) and a graph cellular automaton (GNCA), which updates the node embeddings after each message-passing step during the developmental phase. In general, cellular automata are mathematical models consisting of a grid of cells, each in one of several states; they evolve in discrete time steps according to a set of rules that determine how the cells' states change over time.
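As a rough illustration of the idea (a sketch with made-up dimensions and weights, not the authors' implementation), a graph cellular automaton repeatedly refreshes every node's state from its neighbours using one shared rule:

```python
import numpy as np

rng = np.random.default_rng(0)

def message_passing_step(node_states, adjacency, W_msg, W_update):
    """One graph-cellular-automaton step: each node aggregates its neighbours'
    states and passes the result through a shared update rule."""
    messages = adjacency @ (node_states @ W_msg)        # aggregate neighbour information
    return np.tanh(node_states @ W_update + messages)   # update every node with the same rule

# Toy graph: 4 nodes with 8-dimensional states, ring connectivity (assumed sizes).
states = rng.normal(size=(4, 8))
adjacency = np.array([[0, 1, 0, 1],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0]], dtype=float)
W_msg = rng.normal(scale=0.1, size=(8, 8))
W_update = rng.normal(scale=0.1, size=(8, 8))

states = message_passing_step(states, adjacency, W_msg, W_update)
print(states.shape)  # node states refreshed after one developmental step
```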
In the NDP, the same model is applied to every node, so the number of parameters is constant with respect to the size of the graph it operates on. This gives the NDP an advantage: it can operate on neural networks of arbitrary size or architecture. The NDP can also be trained with any black-box optimization algorithm to satisfy a given objective function, allowing the grown networks to solve classification and reinforcement learning tasks and to exhibit particular topological properties.
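The sketch below illustrates these two properties together: one shared set of parameters governs the growth of a graph of any size, and those parameters are tuned by a black-box optimizer. The dimensions, replication rule, objective, and random-search optimizer are all placeholder assumptions, not the method used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
STATE_DIM = 8  # assumed node-state size

def grow(params, steps=5):
    """Grow a graph from a single node: at each step a shared rule updates node
    states, and a shared replication head decides which nodes add a child.
    The parameter count never changes, whatever size the graph reaches."""
    W_update, w_replicate = params
    states = [rng.normal(size=STATE_DIM)]
    for _ in range(steps):
        states = [np.tanh(W_update @ s) for s in states]           # shared update rule
        for s in list(states):
            if w_replicate @ s > 0.0:                               # shared replication decision
                states.append(s + 0.1 * rng.normal(size=STATE_DIM))
    return states

def fitness(params):
    # Stand-in objective: in practice this would be the downstream task reward.
    return -abs(len(grow(params)) - 16)

# Black-box optimization by simple random search over the shared parameters.
best_params = (rng.normal(scale=0.2, size=(STATE_DIM, STATE_DIM)),
               rng.normal(scale=0.2, size=STATE_DIM))
best_fit = fitness(best_params)
for _ in range(200):
    candidate = tuple(p + 0.05 * rng.normal(size=p.shape) for p in best_params)
    f = fitness(candidate)
    if f > best_fit:
        best_params, best_fit = candidate, f
print(best_fit, len(grow(best_params)))
```

Any gradient-free optimizer (evolution strategies, CMA-ES, and so on) could stand in for the random search here, since only the objective value is needed.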
The researchers also evaluated a differentiable NDP by comparing models trained and tested with different numbers of growth steps. They observed that for most tasks, performance decreased after a certain number of growth steps, as the grown networks became larger. An automated method for deciding when to stop growing would therefore be needed, and the authors note that such automation would be an important addition to the NDP. In the future, they also want to incorporate activity-dependent and reward-modulated growth and adaptation techniques into the NDP.
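One simple way such an automated stopping criterion could look is a patience-based rule; this is a hypothetical sketch, not something proposed in the paper, and `evaluate(steps)` is an assumed callback returning task performance for a network grown for that many developmental steps:

```python
def pick_growth_steps(evaluate, max_steps=20, patience=3):
    """Grow one step at a time and stop once performance has not improved
    for `patience` consecutive steps; return the best step count found."""
    best_score, best_steps, since_best = float("-inf"), 0, 0
    for steps in range(1, max_steps + 1):
        score = evaluate(steps)
        if score > best_score:
            best_score, best_steps, since_best = score, steps, 0
        else:
            since_best += 1
            if since_best >= patience:
                break
    return best_steps, best_score

print(pick_growth_steps(lambda s: -(s - 7) ** 2))  # (7, 0) with this toy score curve
```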
Check out the paper. All credit for this research goes to the researchers of this project.
Arshad is an intern at MarktechPost. He is currently pursuing his Master's degree in Physics at the Indian Institute of Technology Kharagpur. He believes that understanding things at the fundamental level leads to new discoveries, which in turn advance technology, and he is passionate about understanding nature with the help of tools such as mathematical models, machine learning models, and artificial intelligence.