Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid

Melis, Marco; Demontis, Ambra; Biggio, Battista; Fumera, Giorgio; Roli, Fabio
2018-01-01

Abstract

Deep neural networks have been widely adopted in recent years, exhibiting impressive performance in several application domains. It has, however, been shown that they can be fooled by adversarial examples, i.e., images altered by barely perceivable adversarial noise, carefully crafted to mislead classification. In this work, we aim to evaluate the extent to which robot-vision systems embodying deep-learning algorithms are vulnerable to adversarial examples, and we propose a computationally efficient countermeasure that mitigates this threat by rejecting the classification of anomalous inputs. We then provide a clearer understanding of the safety properties of deep networks through an intuitive empirical analysis, showing that the mapping learned by such networks essentially violates the smoothness assumption of learning algorithms. We finally discuss the main limitations of this work, including the creation of real-world adversarial examples, and sketch promising research directions.
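As a rough illustration of the two ideas in the abstract, the following Python sketch shows (a) a generic gradient-sign perturbation that crafts an adversarial example and (b) a simple rejection rule that abstains on anomalous inputs. This is a hypothetical sketch using a standard PyTorch classifier and a softmax-confidence threshold; it is not the authors' actual iCub pipeline, whose rejection mechanism operates differently.

import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.01):
    """Craft a barely perceivable perturbation that increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that most increases the classification loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

def classify_with_reject(model, image, threshold=0.9):
    """Reject inputs whose top softmax score falls below the threshold."""
    with torch.no_grad():
        probs = F.softmax(model(image), dim=1)
    confidence, prediction = probs.max(dim=1)
    if confidence.item() < threshold:
        return None  # abstain: input looks anomalous, possibly adversarial
    return prediction.item()

In this kind of scheme, the rejection threshold trades off security against usability: raising it rejects more adversarial inputs at the cost of abstaining on more legitimate ones.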
Files in This Item:

File: melis17-vipar.pdf (post-print version)
Size: 3.23 MB
Format: Adobe PDF
Access: archive administrators only; a copy may be requested

