Machine Learning Security Against Data Poisoning: Are We There Yet?
Demontis, Ambra; Biggio, Battista; Roli, Fabio
2024-01-01
Abstract
Poisoning attacks compromise the training data used to train machine learning (ML) models, degrading their overall performance, manipulating predictions on specific test samples, and implanting backdoors. This article reviews these attacks and discusses strategies to mitigate them, either through fundamental security principles or through defensive mechanisms tailored to ML.