Minimizing Energy Consumption of Deep Learning Models by Energy-Aware Training

Maura Pintor, Ambra Demontis, Battista Biggio, Fabio Roli
2023-01-01

Abstract

Deep learning models have grown substantially in their number of parameters, which increases the number of operations executed during inference. This growth significantly raises energy consumption and prediction latency. In this work, we propose EAT, a gradient-based algorithm that aims to reduce energy consumption during model training. To this end, we leverage a differentiable approximation of the $$\ell_0$$ norm and use it as a sparsity penalty added to the training loss. Through an experimental analysis on three datasets and two deep neural networks, we demonstrate that our energy-aware training algorithm EAT trains networks with a better trade-off between classification performance and energy efficiency.
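To make the idea in the abstract concrete, the following is a minimal PyTorch sketch of training with a differentiable ℓ0 surrogate as a sparsity penalty on the loss. It is not the authors' implementation: the rational surrogate t²/(t² + σ), the choice to penalize hidden activations rather than weights, and the names l0_approx, energy_aware_loss, lam, and sigma are all assumptions made for illustration.

# Hedged sketch, not the EAT reference code. Assumes the penalty is the
# rational l0 surrogate sum_j t_j^2 / (t_j^2 + sigma), applied here to
# hidden activations; the abstract does not fix these details.
import torch
import torch.nn as nn
import torch.nn.functional as F

def l0_approx(t: torch.Tensor, sigma: float = 1e-4) -> torch.Tensor:
    # Differentiable surrogate of ||t||_0: each entry contributes
    # t^2 / (t^2 + sigma), close to 1 when |t| >> sqrt(sigma) and
    # close to 0 as t -> 0.
    sq = t.pow(2)
    return (sq / (sq + sigma)).sum()

class TwoLayerNet(nn.Module):
    def __init__(self, in_dim=784, hidden=256, classes=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, classes)

    def forward(self, x):
        h = F.relu(self.fc1(x))      # activations to be sparsified
        return self.fc2(h), h

def energy_aware_loss(logits, h, y, lam=1e-3):
    # Cross-entropy plus the lambda-weighted sparsity penalty;
    # lam trades classification performance against sparsity.
    ce = F.cross_entropy(logits, y)
    penalty = l0_approx(h) / h.numel()   # normalize per activation
    return ce + lam * penalty

# One illustrative training step on random data.
model = TwoLayerNet()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
logits, h = model(x)
loss = energy_aware_loss(logits, h, y)
opt.zero_grad()
loss.backward()
opt.step()

The design intuition is that driving more activations to exactly zero lets sparsity-aware hardware skip the corresponding operations, so a smaller (approximate) ℓ0 count serves as a proxy for lower inference energy.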
2023
ISBN: 978-3-031-43152-4 (print); 978-3-031-43153-1 (online)
Files in This Item:

File: 978-3-031-43153-1.pdf
Description: publisher's version, PDF
Access: archive managers only
Size: 1.06 MB
Format: Adobe PDF

File: ICIAP2023___Energy_Aware_Training.pdf
Description: post-print version
Access: under embargo until 06/09/2024
Size: 572.03 kB
Format: Adobe PDF

