Securing Machine Learning against Adversarial Attacks

DEMONTIS, AMBRA
2018-03-26

Abstract

Machine learning techniques are nowadays widely used in different application domains, ranging from computer vision to computer security, even though it has been shown that they are vulnerable to well-crafted attacks performed by skilled attackers. These include evasion attacks, aimed at misleading detection at test time, and poisoning attacks, in which malicious samples are injected into the training data to compromise the learning procedure. Different defenses have been proposed so far; however, most of them are computationally expensive, and it is not clear under which attack conditions they can be considered optimal. Moreover, there is a lack of a security evaluation methodology that allows the security of different classifiers to be compared. This thesis aims to contribute to the study of machine learning system security. We first provide an adversarial framework that can be used to perform the security evaluation of different classifiers. We then use this framework to assess the security of different machine learning systems, focusing on systems with limited hardware resources. This analysis reveals an interesting relationship between sparsity and security. Next, we propose a poisoning attack that, compared to state-of-the-art ones, can be exploited against a broader class of classifiers (including neural networks). Finally, we provide theoretically well-founded and efficient countermeasures, demonstrating their effectiveness on two case studies involving Android malware detection and robot vision.
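To illustrate the kind of test-time attack the abstract refers to, the sketch below shows a gradient-based evasion attack against a simple linear classifier. This is a minimal, illustrative example only, not the attack formulation used in the thesis; all names and parameters (evade, w, b, eps, steps) are assumptions made for the sketch.

```python
import numpy as np

# Minimal sketch of a gradient-based evasion attack on a linear classifier
# f(x) = w.x + b. The attacker perturbs a sample x within an L2 budget eps
# to lower the classifier's score and slip past detection at test time.
# Names and parameters here are illustrative, not taken from the thesis.

def evade(x, w, b, eps=1.0, steps=50):
    """Gradient descent on the score w.x + b, projected onto an L2 ball."""
    x0 = x.copy()
    x_adv = x.copy()
    step_size = eps / steps
    for _ in range(steps):
        # For a linear model, the gradient of the score w.r.t. x is simply w.
        x_adv = x_adv - step_size * w / (np.linalg.norm(w) + 1e-12)
        # Project back onto the L2 ball of radius eps around the original sample.
        delta = x_adv - x0
        norm = np.linalg.norm(delta)
        if norm > eps:
            x_adv = x0 + delta * (eps / norm)
    return x_adv

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=10)    # toy weight vector of the target classifier
    b = 0.0
    x = rng.normal(size=10) + w  # a sample the classifier flags as malicious
    x_adv = evade(x, w, b, eps=2.0)
    print("score before:", w @ x + b)
    print("score after: ", w @ x_adv + b)
```

Running the sketch shows the classifier's score dropping after the perturbation, which is the essence of evasion; a poisoning attack instead manipulates the training data before the model is learned.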
Files in This Item:

File: tesi.pdf (doctoral thesis)
Size: 8.64 MB
Format: Adobe PDF
Access: open access

