Considerations for applying logical reasoning to explain neural network outputs

Cau, F. M.; Spano, L. D.

2020-01-01

Abstract

We discuss the impact of presenting explanations for Artificial Intelligence (AI) decisions powered by neural networks, according to three types of logical reasoning (inductive, deductive, and abductive). We start from examples in the existing literature on explaining artificial neural networks. We find that abductive reasoning is (unintentionally) the most common default in user testing for comparing the quality of explanation techniques. We discuss whether this may be because this reasoning type balances the technical challenges of generating explanations with their effectiveness. Finally, by illustrating how the original (abductive) explanation can be converted into the remaining two reasoning types, we identify the considerations needed to support these transformations.
English
XAI.it 2020: Proceedings of the Italian Workshop on Explainable Artificial Intelligence, co-located with the 19th International Conference of the Italian Association for Artificial Intelligence (AIxIA 2020). Online event, November 25-26, 2020.
CEUR-WS
Volume: 2742
Pages: 96-103 (8 pages)
2020 Italian Workshop on Explainable Artificial Intelligence, XAI.it 2020
Contribution
Anonymous experts
25-26 November 2020
Virtual, Online
International
Scientific
Explainable User Interfaces; Reasoning; XAI
4 Contribution in Conference Proceedings::4.1 Contribution in conference proceedings
Cau, F. M.; Spano, L. D.; Tintarev, N.
4.1 Contribution in conference proceedings
open
info:eu-repo/semantics/conferencePaper
Files in This Item:
File: short3.pdf (open access)
Type: published version (versione editoriale)
Format: Adobe PDF
Size: 204.39 kB
