Efficient Resource-Aware Neural Architecture Search with a Neuro-Symbolic Approach
Elena Bellodi; Davide Bertozzi; Alice Bizzarri; Michele Favalli; Michele Fraccaroli; Riccardo Zese
2023
Abstract
Hardware-aware Neural Architecture Search (NAS) is gaining momentum as a way to deploy deep learning on edge devices with limited computing capabilities. Incorporating device-related objectives, such as affordable floating-point operations, latency, power, and memory usage, into the optimization process makes the search for the most efficient neural architecture more complicated, since both model accuracy and hardware cost must guide the search. The main concern with most state-of-the-art hardware-aware NAS strategies is that they also propose for evaluation network models that are trivially infeasible given the capabilities of the hardware platform at hand. Moreover, previously generated models are frequently not exploited to intelligently generate new ones, leading to computational costs that are prohibitive for practical use. This paper aims to boost the computational efficiency of hardware-aware NAS by means of a neuro-symbolic framework revolving around a Probabilistic Inductive Logic Programming module that defines and exploits a set of symbolic rules. This component learns and refines the probabilities associated with the rules, allowing the framework to adapt and improve over time, thus quickly narrowing down the search space toward the most promising neural architectures.

File | Size | Format
---|---|---
full-text.pdf (full text, editorial version; access restricted to archive administrators) | 470.51 kB | Adobe PDF

License: NOT PUBLIC - private/restricted access
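The abstract describes two mechanisms: a symbolic check that discards architectures trivially infeasible for the target hardware before any training, and probabilistic rules whose weights are refined from evaluated models to steer sampling. The sketch below illustrates that idea in minimal form; all names (`OPS`, `FLOPS`, `BUDGET`, the update rule) are illustrative assumptions, not the paper's actual search space, rule language, or PILP machinery.

```python
import random

# Hypothetical search-space primitives (assumed for illustration only).
OPS = ["conv3x3", "conv5x5", "depthwise", "skip"]
FLOPS = {"conv3x3": 90e6, "conv5x5": 250e6, "depthwise": 20e6, "skip": 0.0}
BUDGET = 400e6  # assumed FLOPs budget of the target edge device

# Probabilistic "rules": weight that including an op leads to a promising model.
rule_prob = {op: 0.5 for op in OPS}

def feasible(arch):
    """Symbolic hard constraint: reject architectures that trivially
    exceed the device budget, without ever training them."""
    return sum(FLOPS[op] for op in arch) <= BUDGET

def sample_arch(n_layers=3, rng=random):
    """Sample layer ops with probability proportional to current rule weights,
    so the search gradually concentrates on promising regions."""
    total = sum(rule_prob.values())
    weights = [rule_prob[op] / total for op in OPS]
    return [rng.choices(OPS, weights=weights)[0] for _ in range(n_layers)]

def update_rules(arch, reward, lr=0.1):
    """Refine rule weights from an evaluated model (reward in [0, 1]):
    ops in good models gain weight, ops in bad models lose it."""
    for op in set(arch):
        p = rule_prob[op]
        rule_prob[op] = min(1.0, max(0.01, p + lr * (reward - p)))
```

In a loop, one would repeatedly `sample_arch()`, skip candidates failing `feasible()`, train and score the rest, and call `update_rules()` with the observed accuracy; the feasibility filter avoids wasting training on models that cannot fit the device, which is the inefficiency the paper targets.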
Documents in SFERA are protected by copyright and all rights are reserved, unless otherwise indicated.