Discriminazioni algoritmiche?
Gianmarco Gometz
2022-01-01
Abstract
The operation of today’s machine-learning-based AI systems makes it difficult to determine whether a given algorithmic response, placed at the basis of choices, decisions and policies with legal effects on individuals, is censurable as directly or statistically discriminatory — that is, whether a characteristic protected by anti-discrimination law served as the reason, motive or cause of a disadvantageous treatment. If, however, the training data and the algorithms are available, it is sometimes possible to re-run them and check whether they would have produced the same outputs had the subjects been of a different race, sex, religion, sexual orientation, etc., thereby allowing the invalidation of decisions based on elements that the law prohibits from grounding unequal treatment disadvantageous to those affected. The costs of this approach, however, become extremely onerous, and perhaps unsustainable, when the elements the system uses to make its predictions are drawn not from “static” datasets but from massive streams of continuously updated data.
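The counterfactual re-run the abstract describes can be sketched in a few lines. The following is a minimal, purely illustrative example: the scoring rule, attribute names and thresholds are all assumptions invented for the sketch, not the paper’s method or any real system’s logic. It shows the core idea — hold every input fixed, swap only the protected attribute, and check whether the decision changes.

```python
# Hypothetical sketch of a counterfactual audit: swap only the protected
# attribute and compare outputs. The toy scoring rule below is an invented
# assumption for illustration, not a real model.

PROTECTED_ATTRIBUTE = "sex"  # could equally be race, religion, orientation, ...

def decision(applicant: dict) -> bool:
    """Toy stand-in for an ML model's output (True = favourable treatment)."""
    score = applicant["income"] / 1000 + applicant["years_employed"]
    # Deliberately biased rule, for illustration: the protected attribute leaks in.
    if applicant["sex"] == "F":
        score -= 5
    return score >= 40

def counterfactual_flip(applicant: dict, attribute: str, alternatives) -> list:
    """Re-run the decision for each alternative value of the protected attribute."""
    results = []
    for value in alternatives:
        variant = dict(applicant, **{attribute: value})  # copy with one field swapped
        results.append((value, decision(variant)))
    return results

applicant = {"sex": "F", "income": 38000, "years_employed": 4}
outcomes = counterfactual_flip(applicant, PROTECTED_ATTRIBUTE, ["F", "M"])

# If the outcomes differ while everything except the protected attribute is
# held fixed, the decision is a candidate for censure as directly discriminatory.
flagged = len({d for _, d in outcomes}) > 1
```

As the abstract notes, this kind of check is only feasible when the training data and model are available and static; with continuously updated data streams, every audit would have to be repeated against a moving target.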
File: G.Gometz-Discriminazioni algoritmiche-27-32.pdf (open access, editorial version, Adobe PDF, 66.84 kB)
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.