Do Graph Neural Networks Build Fair User Models? Assessing Disparate Impact and Mistreatment in Behavioural User Profiling

Purificato E.; Boratto L.; De Luca E. W.
2022-01-01

Abstract

Recent approaches to behavioural user profiling employ Graph Neural Networks (GNNs) to turn users' interactions with a platform into actionable knowledge. The effectiveness of an approach is usually assessed from an accuracy-based perspective, evaluating its capability to predict user features such as gender or age. In this work, we perform a beyond-accuracy analysis of state-of-the-art approaches to assess the presence of disparate impact and disparate mistreatment, meaning that users characterised by a given sensitive feature are unintentionally, but systematically, classified worse than their counterparts. Our analysis on two real-world datasets shows that different user profiling paradigms can impact fairness results. The source code and the preprocessed datasets are available at: https://github.com/erasmopurif/do_gnns_build_fair_models.
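For context, disparate impact is commonly measured as a gap in positive-outcome rates across groups defined by a sensitive attribute, while disparate mistreatment refers to gaps in error rates (false positive / false negative rates) between those groups. The sketch below is a minimal, illustrative way to compute such gaps for a binary user-attribute classifier; it is not the paper's implementation, and the function names, variable names, and toy data are assumptions introduced here.

```python
# Minimal sketch (illustrative only, not the paper's implementation):
# fairness checks for a binary attribute classifier, assuming binary
# ground-truth labels y, predictions y_hat, and a binary sensitive
# attribute s (e.g., gender). All names here are assumptions.
import numpy as np

def disparate_impact_ratio(y_hat, s):
    """Ratio of positive-prediction rates between the two groups.
    Values well below 1.0 indicate that one group receives the
    positive outcome systematically less often (disparate impact)."""
    rate_a = y_hat[s == 0].mean()
    rate_b = y_hat[s == 1].mean()
    return min(rate_a, rate_b) / max(rate_a, rate_b)

def disparate_mistreatment_gaps(y, y_hat, s):
    """Absolute gaps in false-positive and false-negative rates between
    the two groups; non-zero gaps mean one group is misclassified more
    often than the other (disparate mistreatment)."""
    gaps = {}
    for name, mask in (("fpr_gap", y == 0), ("fnr_gap", y == 1)):
        err_a = (y_hat[mask & (s == 0)] != y[mask & (s == 0)]).mean()
        err_b = (y_hat[mask & (s == 1)] != y[mask & (s == 1)]).mean()
        gaps[name] = abs(err_a - err_b)
    return gaps

# Toy usage with random data standing in for GNN-predicted user attributes.
rng = np.random.default_rng(42)
y = rng.integers(0, 2, size=1000)
s = rng.integers(0, 2, size=1000)
y_hat = rng.integers(0, 2, size=1000)
print(disparate_impact_ratio(y_hat, s))
print(disparate_mistreatment_gaps(y, y_hat, s))
```

In a study like this one, the predictions would come from the GNN-based profiling model on held-out users, with the gaps reported per sensitive attribute rather than from random toy data.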
Year: 2022
Language: English
Source: International Conference on Information and Knowledge Management, Proceedings
ISBN: 9781450392365
Publisher: Association for Computing Machinery
Pages: 4399-4403 (5 pages)
Conference: 31st ACM International Conference on Information and Knowledge Management, CIKM 2022
Peer review: anonymous expert reviewers
Conference dates: 17-21 October 2022
Conference location: Atlanta, GA, USA
Nature: scientific
Keywords: Fairness; Graph neural networks; User models; User profiling
Type: 4 Contribution in conference proceedings::4.1 Conference proceedings contribution
Authors: Purificato, E.; Boratto, L.; De Luca, E. W.
Access: open
Document type: info:eu-repo/semantics/conferencePaper
Files in this record:
File: 3511808.3557584.pdf (open access)
Type: publisher's version
Size: 937.72 kB
Format: Adobe PDF

