Machine Learning Security Against Data Poisoning: Are We There Yet?

Demontis, Ambra; Biggio, Battista; Roli, Fabio
2024-01-01

Abstract

Poisoning attacks compromise the training data utilized to train machine learning (ML) models, diminishing their overall performance, manipulating predictions on specific test samples, and implanting backdoors. This article thoughtfully explores these attacks while discussing strategies to mitigate them through fundamental security principles or by implementing defensive mechanisms tailored for ML.
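As a purely illustrative sketch (not taken from the article), the Python snippet below simulates the simplest form of availability poisoning described in the abstract, label flipping, on a toy scikit-learn classifier. The dataset sizes, the 30% poisoning rate, and the choice of LogisticRegression are arbitrary assumptions made only for the demonstration.

# Illustrative sketch: a label-flipping poisoning attack that corrupts
# part of the training data and degrades overall test performance.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy binary classification task (assumed sizes, not from the article).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Model trained on clean data.
clean_clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:", clean_clf.score(X_test, y_test))

# Attacker flips the labels of a fraction of the training set
# (an availability attack that lowers accuracy on all test samples).
poison_rate = 0.3  # assumed fraction of poisoned training points
n_poison = int(poison_rate * len(y_train))
idx = rng.choice(len(y_train), size=n_poison, replace=False)
y_poisoned = y_train.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned_clf = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("poisoned accuracy:", poisoned_clf.score(X_test, y_test))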
Language: English
Volume: 57
Issue: 3
Pages: 26-34 (9 pages)
Peer review: anonymous experts
Scientific
Keywords: Computational modeling; Training data; Machine learning; Predictive models; Data models; Computer security
Authors: Cinà, Antonio Emanuele; Grosse, Kathrin; Demontis, Ambra; Biggio, Battista; Roli, Fabio; Pelillo, Marcello
1.1 Journal article
info:eu-repo/semantics/article
1 Contribution in journal::1.1 Journal article
262
6
partially_open
Files in This Item:

Machine_Learning_Security_Against_Data_Poisoning_Are_We_There_Yet.pdf
Access: archive administrators only
Type: publisher's version
Size: 1.12 MB
Format: Adobe PDF
preprint-version-Machine_Learning_Security_Against_Data_Poisoning_Are_We_There_Yet.pdf
Access: open access
Type: pre-print version
Size: 1.36 MB
Format: Adobe PDF

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
