Exploring Algorithmic Fairness in Deep Speaker Verification

Gianni Fenu; H. Lafhouli; Mirko Marras
2020-01-01

Abstract

To allow individuals to complete voice-based tasks (e.g., sending messages or making payments), modern automated systems must match the speaker’s voice to a unique digital identity representation for verification. Despite the increasing accuracy achieved so far, it remains under-explored how the decisions made by such systems may be influenced by the inherent characteristics of the individual under consideration. In this paper, we investigate how state-of-the-art speaker verification models are susceptible to unfairness towards legally-protected classes of individuals characterized by a common sensitive attribute (i.e., gender, age, language). To this end, we first assembled a voice dataset designed to include and identify various demographic classes. Then, we conducted a performance analysis at different levels, from equal error rates to verification score distributions. Experiments show that individuals belonging to certain demographic groups systematically experience higher error rates, highlighting the need for fairer speaker recognition models and, by extension, for proper evaluation frameworks.
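The equal error rate (EER) mentioned in the abstract is the operating point at which the false acceptance rate (FAR) equals the false rejection rate (FRR) over genuine and impostor trial scores. The sketch below is not taken from the paper; it is a minimal illustration of how an EER can be estimated from two score sets, using synthetic scores in place of real verification outputs:

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Estimate the EER: the threshold where the false acceptance
    rate (impostors accepted) meets the false rejection rate
    (genuine speakers rejected)."""
    genuine = np.asarray(genuine_scores)
    impostor = np.asarray(impostor_scores)
    # Candidate thresholds: every observed score, sorted.
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    # Pick the threshold where the two error rates are closest.
    idx = np.argmin(np.abs(far - frr))
    return (far[idx] + frr[idx]) / 2.0

# Toy example with synthetic, well-separated score distributions
# (real systems would use cosine or PLDA scores per trial).
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 1000)   # same-speaker trial scores
impostor = rng.normal(0.2, 0.1, 1000)  # different-speaker trial scores
eer = equal_error_rate(genuine, impostor)
```

A group-level fairness analysis like the one described would compute this quantity separately per demographic group (e.g., per gender or language) and compare the resulting EERs.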
ISBN: 978-3-030-58810-6; 978-3-030-58811-3
Keywords: Algorithmic fairness; Deep learning; Speaker recognition
Files in This Item:
File: iccsa-marras.pdf (post-print version)
Size: 404.55 kB
Format: Adobe PDF
Access: archive managers only; request a copy

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
