Designing next-generation deep learning models is an incredibly complex challenge that researchers have been tackling using an approach called Neural Architecture Search (NAS). The goal of NAS is to automate the discovery of optimal neural network architectures for a given task by evaluating thousands of candidate architectures against a performance metric such as accuracy on a validation data set.
However, previous NAS methods faced significant bottlenecks due to the need to exhaustively train each candidate architecture, making the process extremely computationally expensive and time-consuming. Researchers have proposed several techniques, such as weight sharing, differentiable search spaces, and predictor-based methods, to accelerate NAS, but computational complexity remained a major obstacle.
This article presents NASGraph (shown in Figure 1), an innovative method that dramatically reduces the computational burden of neural architecture search. Instead of fully training each candidate architecture, NASGraph converts them into graph representations and uses graph metrics to efficiently estimate their performance.
Specifically, the neural network is first divided into graph blocks, each of which contains layers such as convolutions and activations. For each block, the technique determines how strongly each input channel contributes to each output channel by performing a single forward pass. Channels are then assigned to nodes, and these contribution scores form the weighted edges of the graph representation.
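As a rough illustration of this step, the sketch below probes a toy convolution-plus-activation block one input channel at a time with an all-ones input and records the mean response of each output channel. The block definition, probe input, and aggregation are illustrative assumptions, not the authors' exact procedure.

```python
import torch
import torch.nn as nn

# Hypothetical graph block: a convolution followed by an activation (illustration only).
block = nn.Sequential(nn.Conv2d(4, 8, kernel_size=3, padding=1), nn.ReLU())

def channel_contributions(block, in_channels, spatial=8):
    """Estimate how strongly each input channel drives each output channel
    using one forward pass per input channel (a sketch, not the paper's exact recipe)."""
    rows = []
    with torch.no_grad():
        for i in range(in_channels):
            x = torch.zeros(1, in_channels, spatial, spatial)
            x[:, i] = 1.0                              # activate only input channel i
            y = block(x)                               # a single forward step through the block
            rows.append(y.abs().mean(dim=(0, 2, 3)))   # mean response of each output channel
    return torch.stack(rows)                           # shape: (in_channels, out_channels)

contrib = channel_contributions(block, in_channels=4)
# contrib[i, j] becomes the weighted edge from input-channel node i to output-channel node j.
```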
Once the architecture is represented as a graph, NASGraph computes the average degree (the average number of connections per node) as a training-free indicator of architecture quality. To accelerate the process further, the researchers also introduce surrogate models with reduced computational requirements.
These NASGraph(h, c, m) surrogate models have fewer channels h, fewer search cells c per module, and fewer modules m. Their systematic study, following the convention of EcoNAS, shows that these computationally reduced configurations trade a small amount of ranking accuracy for significant speedups.
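Once the graph is built, scoring it is inexpensive. The toy example below computes the average degree of a small weighted graph with NetworkX; the edge list is invented, and whether edges are kept for every nonzero contribution or only above some threshold is a simplification here.

```python
import networkx as nx

# Toy weighted graph standing in for an architecture's graph representation:
# nodes are channels, edge weights are the contribution scores from the previous step.
G = nx.Graph()
G.add_weighted_edges_from([
    (0, 4, 0.8), (0, 5, 0.1),
    (1, 4, 0.3), (2, 5, 0.9), (3, 6, 0.5),
])

# Average degree: twice the number of edges divided by the number of nodes.
avg_degree = 2 * G.number_of_edges() / G.number_of_nodes()
print(f"average degree = {avg_degree:.2f}")  # a higher score is taken to indicate a better architecture
```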
To evaluate NASGraph, the team tested it on multiple NAS benchmarks, including NAS-Bench-101, NAS-Bench-201, and TransNAS-Bench-101, comparing the rankings produced by the average degree metric against ground-truth performance and against other training-free NAS methods. The average degree metric showed a strong correlation with actual architecture performance, outperforming previous training-free NAS methods, and exhibited low bias toward particular operations relative to the ground-truth rankings. Moreover, combining this graph measure with other training-free metrics, such as Jacobian covariance, further boosted ranking quality, achieving new state-of-the-art Spearman rank correlations exceeding 0.8 on datasets such as CIFAR-10, CIFAR-100, and ImageNet-16-120.
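For readers who want to run this kind of evaluation on their own candidate pool, the sketch below computes Spearman rank correlations with SciPy and shows one simple way to combine two training-free metrics by summing their ranks. The scores are made-up numbers, and the rank-sum combination is only one plausible scheme, not necessarily the one used in the paper.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical scores for a handful of candidate architectures (illustrative numbers only).
avg_degree_scores = np.array([1.4, 2.1, 0.9, 1.8, 2.5])
jacob_cov_scores  = np.array([0.7, 0.9, 0.4, 0.8, 0.95])
ground_truth_acc  = np.array([71.2, 73.8, 68.5, 72.9, 74.6])

# Rank correlation of a single training-free metric with the true accuracies.
rho, _ = spearmanr(avg_degree_scores, ground_truth_acc)
print(f"average degree vs. accuracy: Spearman rho = {rho:.2f}")

# One simple way to combine two training-free metrics: sum their ranks.
combined_rank = (np.argsort(np.argsort(avg_degree_scores))
                 + np.argsort(np.argsort(jacob_cov_scores)))
rho_comb, _ = spearmanr(combined_rank, ground_truth_acc)
print(f"combined metric vs. accuracy: Spearman rho = {rho_comb:.2f}")
```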
In conclusion, NASGraph presents a paradigm shift in neural architecture search by leveraging an ingenious graph-based approach. By avoiding the need to train candidate architectures, it overcomes a major computational bottleneck that plagued previous NAS methods. With its strong performance, low bias, data-independent nature, and remarkable efficiency, NASGraph could catalyze a new era of rapid neural architecture exploration and the discovery of powerful AI models across diverse applications.
Check out the Paper. All credit for this research goes to the researchers of this project.
Vineet Kumar is a Consulting Intern at MarktechPost. He is currently pursuing his bachelor's degree at the Indian Institute of Technology (IIT), Kanpur. He is a machine learning enthusiast and is passionate about research and the latest advances in Deep Learning, Computer Vision, and related fields.