Robust-by-Design Classification via Unitary-Gradient Neural Networks

Brau F.
2023-01-01

Abstract

The use of neural networks in safety-critical systems requires safe and robust models, due to the existence of adversarial attacks. Knowing the minimal adversarial perturbation of any input x, or, equivalently, knowing the distance of x from the classification boundary, allows evaluating the classification robustness, providing certifiable predictions. Unfortunately, state-of-the-art techniques for computing such a distance are computationally expensive and hence not suited for online applications. This work proposes a novel family of classifiers, namely Signed Distance Classifiers (SDCs), that, from a theoretical perspective, directly output the exact distance of x from the classification boundary, rather than a probability score (e.g., SoftMax). SDCs represent a family of robust-by-design classifiers. To practically address the theoretical requirements of an SDC, a novel network architecture named Unitary-Gradient Neural Network is presented. Experimental results show that the proposed architecture approximates a signed distance classifier, hence allowing an online certifiable classification of x at the cost of a single inference.
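
The following minimal sketch (illustrative names, toy linear boundary, not the paper's Unitary-Gradient architecture) shows why a signed distance classifier makes certification cheap: for a linear boundary w·x + b = 0, the normalized discriminant f(x) = (w·x + b)/||w|| has unit gradient norm and |f(x)| is the exact L2 distance of x from the boundary, so a single forward pass yields both the predicted class and a certified robustness radius.

# Minimal sketch (assumption: not the authors' implementation) of how an
# SDC enables certification with a single inference. For a linear boundary
# w.x + b = 0, the normalized discriminant is an exact signed distance
# classifier; |f(x)| is then a certified L2 robustness radius.
import numpy as np

def linear_sdc(x, w, b):
    """Signed distance of x from the hyperplane w.x + b = 0."""
    return (np.dot(w, x) + b) / np.linalg.norm(w)

def certify(f_x, epsilon):
    """Prediction is certifiably robust to any L2 perturbation of norm
    at most epsilon whenever the distance to the boundary exceeds it."""
    return abs(f_x) > epsilon

# Toy example: 2-D input, linear decision boundary (illustrative values).
w, b = np.array([3.0, 4.0]), -1.0
x = np.array([2.0, 1.0])

f_x = linear_sdc(x, w, b)      # signed distance: (6 + 4 - 1) / 5 = 1.8
label = int(f_x > 0)           # the sign of f gives the predicted class
print(f"class={label}, certified radius={abs(f_x):.2f}")
print("robust to eps=0.5:", certify(f_x, 0.5))   # True, since 1.8 > 0.5
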
2023
English
Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023
AAAI Press
37
12
14729
14737
9
37th AAAI Conference on Artificial Intelligence, AAAI 2023
Anonymous peer reviewers
2023
USA
Scientific
no
4 Contribution in Conference Proceedings (Proceeding)::4.1 Contribution in conference proceedings
Brau, F.; Rossolini, G.; Biondi, A.; Buttazzo, G.
273
4
4.1 Contribution in conference proceedings
none
info:eu-repo/semantics/conferencePaper