Detecting Attacks Against Deep Reinforcement Learning for Autonomous Driving

MAURA PINTOR; ANGELO SOTGIU; AMBRA DEMONTIS; BATTISTA BIGGIO
2023-01-01

Abstract

With the advent of deep reinforcement learning, we witness the spread of novel autonomous driving agents that learn how to drive safely among humans. However, skilled attackers might steer the decision-making process of these agents through minimal perturbations applied to the readings of their hardware sensors. These perturbations force the behavior of the victim agent to change unexpectedly, increasing the likelihood of crashes by inhibiting its braking capability or coercing it into constantly changing lanes. To counter these phenomena, we propose a detector that can be mounted on autonomous driving cars to spot the presence of ongoing attacks. The detector first profiles the agent's behavior without attacks by looking at the representation learned during training. Once deployed, the detector discards all the decisions that deviate from the regular driving pattern. We empirically highlight the detection capabilities of our work by testing it against unseen attacks deployed with increasing strength.
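The detection scheme the abstract describes — profiling the representations the agent produces during attack-free driving, then rejecting decisions whose representations deviate from that profile — can be illustrated with a minimal sketch. This is not the paper's implementation: the class name, the centroid-plus-percentile-threshold criterion, and all parameters are assumptions chosen for illustration.

```python
import numpy as np

class DeviationDetector:
    """Hypothetical sketch: flag decisions whose learned representation
    lies too far from the profile of attack-free driving behavior."""

    def __init__(self, percentile=99.0):
        self.percentile = percentile  # fraction of clean behavior to accept
        self.centroid = None
        self.threshold = None

    def fit(self, clean_features):
        # clean_features: (n_samples, n_dims) representations collected
        # from the policy network during attack-free driving episodes.
        X = np.asarray(clean_features, dtype=float)
        self.centroid = X.mean(axis=0)
        dists = np.linalg.norm(X - self.centroid, axis=1)
        # Distances beyond this percentile of the clean profile are
        # treated as anomalous.
        self.threshold = np.percentile(dists, self.percentile)
        return self

    def is_attack(self, feature):
        # True if the current decision's representation deviates from
        # the regular driving pattern; such decisions are discarded.
        dist = np.linalg.norm(np.asarray(feature, dtype=float) - self.centroid)
        return dist > self.threshold
```

At deployment, `fit` would run once on representations logged during clean driving, and `is_attack` would gate each decision online; the actual detector in the paper operates on the representation learned during training rather than on this illustrative distance rule.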
ISBN: 979-8-3503-0377-3
Files in This Item:

ICMLC___Detecting_Attacks_against_Deep_Reinforcement_Learning_Policies.pdf
Access: open access
Description: preprint
Type: pre-print version
Size: 1.76 MB
Format: Adobe PDF
editorial_version_detecting_attacks.pdf
Access: archive administrators only
Description: editorial version
Type: editorial version
Size: 2.17 MB
Format: Adobe PDF

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
