

Poster

Dependency-aware Differentiable Neural Architecture Search

Buang Zhang · Xinle Wu · Hao Miao · Bin Yang · Chenjuan Guo

# 30
Strong Double Blind: this paper was not made available on public preprint services during the review process.
Wed 2 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Neural architecture search (NAS) has reduced the burden of manual design by automatically constructing neural network architectures, and differentiable NAS approaches such as DARTS have gained popularity for their search efficiency. Despite achieving promising performance, DARTS-series methods still suffer from two issues: 1) they do not explicitly establish dependencies between edges, potentially leading to suboptimal performance; and 2) the high degree of parameter sharing results in inaccurate performance evaluations of subnets. To tackle these issues, we propose to explicitly model dependencies between different edges in order to construct a high-performance architecture distribution. Specifically, we model the architecture distribution in DARTS as a multivariate normal distribution with a learnable mean vector and correlation matrix, representing the base architecture weights of each edge and the dependencies between different edges, respectively. We then sample architecture weights from this distribution and alternately train these learnable parameters and the network weights by gradient descent. Using the learned dependencies, we dynamically prune the search space to alleviate inaccurate evaluation by sharing weights only among high-performance architectures. In addition, we identify good motifs by analyzing the learned dependencies, which can guide human experts in manually designing high-performance neural architectures. Extensive experiments and competitive results on multiple NAS benchmarks demonstrate the effectiveness of our method.
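
As a rough illustration of the idea sketched in the abstract, the snippet below (a minimal PyTorch sketch, not the authors' code) treats the flattened per-edge architecture logits as a single multivariate normal with a learnable mean vector and a learnable lower-triangular factor that induces the dependency structure between edges, draws reparameterized samples, and alternates gradient updates between the distribution parameters and the supernet weights. All names and dimensions here (ArchDistribution, candidate_ops, supernet_loss, the toy sizes and learning rates) are hypothetical placeholders.

```python
# Illustrative sketch only: multivariate-normal architecture distribution
# over DARTS-style edge/op weights, with alternating updates.
import torch
import torch.nn as nn


class ArchDistribution(nn.Module):
    """Multivariate normal over flattened (edge, op) architecture logits."""

    def __init__(self, num_edges: int, num_ops: int):
        super().__init__()
        dim = num_edges * num_ops
        self.num_edges, self.num_ops = num_edges, num_ops
        self.mean = nn.Parameter(torch.zeros(dim))  # base architecture weights
        # Unconstrained matrix; its lower-triangular part with a positive
        # diagonal acts as a Cholesky-like factor, so the implied covariance
        # L @ L.T encodes dependencies between edges.
        self.scale_raw = nn.Parameter(torch.eye(dim) * 0.1)

    def cholesky_factor(self) -> torch.Tensor:
        strict_lower = torch.tril(self.scale_raw, diagonal=-1)
        diag = torch.nn.functional.softplus(torch.diagonal(self.scale_raw))
        return strict_lower + torch.diag(diag)

    def sample(self) -> torch.Tensor:
        """Reparameterized sample of per-edge op weights (softmax over ops)."""
        eps = torch.randn_like(self.mean)
        logits = self.mean + self.cholesky_factor() @ eps
        return torch.softmax(logits.view(self.num_edges, self.num_ops), dim=-1)


# Toy stand-in for a weight-sharing supernet: a chain of edges, each a
# mixture of candidate operations (as in DARTS).
num_edges, num_ops = 4, 3
dist = ArchDistribution(num_edges, num_ops)
candidate_ops = nn.ModuleList(
    nn.ModuleList(nn.Linear(16, 16) for _ in range(num_ops)) for _ in range(num_edges)
)


def supernet_loss(arch_weights: torch.Tensor) -> torch.Tensor:
    x = torch.randn(8, 16)
    for e in range(num_edges):
        # Mixed operation: weighted sum of candidate op outputs on edge e.
        x = sum(w * op(x) for w, op in zip(arch_weights[e], candidate_ops[e]))
    return x.pow(2).mean()


arch_opt = torch.optim.Adam(dist.parameters(), lr=3e-4)
net_opt = torch.optim.SGD(candidate_ops.parameters(), lr=0.025)

for step in range(10):
    # 1) update supernet weights with a sampled (detached) architecture
    net_opt.zero_grad()
    supernet_loss(dist.sample().detach()).backward()
    net_opt.step()

    # 2) update the distribution (mean vector + dependency factor)
    arch_opt.zero_grad()
    supernet_loss(dist.sample()).backward()
    arch_opt.step()
```

In this reading, the learned off-diagonal entries of the factor are what capture how the choice of operation on one edge correlates with choices on other edges; the paper additionally uses those learned dependencies to prune the search space and restrict weight sharing to high-performance architectures, which the toy loop above does not attempt to reproduce.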
