Improving Fast Minimum-Norm Attacks with Hyperparameter Optimization

Giuseppe Floris; Raffaele Mura; Luca Scionis; Giorgio Piras; Maura Pintor; Ambra Demontis; Battista Biggio

2023-01-01

Abstract

Evaluating the adversarial robustness of machine-learning models using gradient-based attacks is challenging. In this work, we show that hyperparameter optimization can improve fast minimum-norm attacks by automating the selection of the loss function, the optimizer, and the step-size scheduler, along with the corresponding hyperparameters. Our extensive evaluation involving several robust models demonstrates the improved efficacy of fast minimum-norm attacks when hyped up with hyperparameter optimization. We release our open-source code at https://github.com/pralab/HO-FMN.
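The abstract describes automating the choice of the attack's loss function, optimizer, and step-size scheduler (with their hyperparameters). The snippet below is a minimal, self-contained sketch of that idea, not the actual HO-FMN implementation: it uses Optuna to search a small configuration space (cross-entropy vs. difference-of-logits loss, SGD vs. Adam, cosine vs. constant step size, learning rate) and scores each trial by the median adversarial perturbation norm found by a heavily simplified gradient-based attack on a toy model. The functions `simplified_min_norm_attack` and `difference_of_logits`, as well as the toy model and data, are illustrative assumptions rather than the paper's code.

```python
# Hedged sketch: hyperparameter optimization over an attack's configuration.
# Not the HO-FMN codebase; the attack below is a toy stand-in for FMN.
import optuna
import torch
import torch.nn as nn


def difference_of_logits(logits, y):
    """Difference-of-logits loss: true-class logit minus best competing logit."""
    true = logits.gather(1, y.unsqueeze(1)).squeeze(1)
    other = logits.clone()
    other.scatter_(1, y.unsqueeze(1), float("-inf"))  # mask the true class
    return (true - other.max(dim=1).values).sum()


def simplified_min_norm_attack(model, x, y, loss_name, opt_name, sched_name,
                               lr, steps=50):
    """Very simplified attack: descend the chosen loss and keep, per sample,
    the smallest L2 perturbation that is adversarial (inf if none is found)."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt_cls = {"sgd": torch.optim.SGD, "adam": torch.optim.Adam}[opt_name]
    opt = opt_cls([delta], lr=lr)
    if sched_name == "cosine":
        sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=steps)
    else:  # constant step size
        sched = torch.optim.lr_scheduler.LambdaLR(opt, lambda _: 1.0)

    best = torch.full((x.shape[0],), float("inf"))
    for _ in range(steps):
        logits = model(x + delta)
        if loss_name == "ce":
            loss = -nn.functional.cross_entropy(logits, y)  # maximize CE
        else:
            loss = difference_of_logits(logits, y)          # minimize DL
        opt.zero_grad()
        loss.backward()
        opt.step()
        sched.step()
        with torch.no_grad():
            adv = model(x + delta).argmax(1) != y
            norms = delta.flatten(1).norm(dim=1)
            best = torch.where(adv, torch.minimum(best, norms), best)
    return best


def objective(trial):
    cfg = dict(
        loss_name=trial.suggest_categorical("loss", ["ce", "dl"]),
        opt_name=trial.suggest_categorical("optimizer", ["sgd", "adam"]),
        sched_name=trial.suggest_categorical("scheduler", ["cosine", "constant"]),
        lr=trial.suggest_float("lr", 1e-3, 1.0, log=True),
    )
    norms = simplified_min_norm_attack(model, x, y, **cfg)
    # Median perturbation norm over the batch: smaller means a stronger attack.
    return norms.clamp(max=10.0).median().item()


torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 10))
x, y = torch.randn(16, 20), torch.randint(0, 10, (16,))

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)
print("best configuration:", study.best_params)
```

With a real minimum-norm attack and robust models in place of the toy stand-ins, the same loop would search over the configuration space described in the abstract, using the median perturbation norm found on a validation batch as the objective to minimize.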
Issue Date: 2023
ISBN: 978-2-87587-088-9
Keywords: Machine Learning; Adversarial Machine Learning; Optimization
Files in This Item:

File: ES2023-164 (1).pdf
Type: publisher's version
Format: Adobe PDF
Size: 1.69 MB
Access: restricted (archive managers only)

File: 2310.08177.pdf
Type: pre-print version
Format: Adobe PDF
Size: 443.53 kB
Access: open access

