Research Report, Year: 2022

Explainability of Extension-Based Semantics

Abstract

This paper defines visual explanations for the Verification Problem in argumentation, that is, explanations of why a set of arguments is or is not acceptable under a given semantics. These explanations rely upon the modularity of the acceptability semantics, and they take the form of subgraphs of the original argumentation graph. Graph properties that these subgraphs satisfy, depending on whether or not the set is acceptable, are established. Properties of the proposed explanations are addressed, and the potential of the modularity of the approach is highlighted. Note that this research report is the complete version of a paper submitted to a conference. In this complete version, the reader can find the proofs of the results given in the submitted paper.
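For readers unfamiliar with the Verification Problem mentioned in the abstract, the sketch below illustrates it in Python for two classical Dung semantics (admissible and stable): given an argumentation framework AF = (A, R) and a set S of arguments, decide whether S is an extension under the chosen semantics. This is a minimal illustration under standard textbook definitions; the function names and the three-argument example are assumptions made here, and the code does not implement the subgraph-based visual explanations proposed in the report.

```python
# Illustrative sketch (not the report's construction): verifying whether a set
# of arguments S is an extension of AF = (A, R) under two Dung semantics.

def attackers(R, a):
    """Arguments that attack argument a under the attack relation R."""
    return {x for (x, y) in R if y == a}

def is_conflict_free(R, S):
    """S contains no internal attack."""
    return not any((x, y) in R for x in S for y in S)

def defends(R, S, a):
    """Every attacker of a is counter-attacked by some member of S."""
    return all(attackers(R, b) & S for b in attackers(R, a))

def is_admissible(R, S):
    """Verification under admissible semantics."""
    return is_conflict_free(R, S) and all(defends(R, S, a) for a in S)

def is_stable(A, R, S):
    """Verification under stable semantics: S attacks every argument outside S."""
    return is_conflict_free(R, S) and all(attackers(R, a) & S for a in A - S)

# Hypothetical example: a attacks b, b attacks c.
A = {"a", "b", "c"}
R = {("a", "b"), ("b", "c")}
print(is_admissible(R, {"a", "c"}))  # True: conflict-free, and a defends c against b
print(is_stable(A, R, {"a", "c"}))   # True: b, the only outside argument, is attacked
print(is_admissible(R, {"b"}))       # False: b is attacked by a and left undefended
```

In the report, an explanation of such a verdict is not just a Boolean answer but a subgraph of the argumentation graph witnessing, for instance, which attacks make the set conflicting or which attackers are left undefended.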
Main file

rapport_IRIT_RR_2022_05_FR (1).pdf (204.6 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03657060, version 1 (02-05-2022)

Identifiers

  • HAL Id: hal-03657060, version 1

Cite

Sylvie Doutre, Théo Duchatelle, Marie-Christine Lagasquie-Schiex. Explainability of Extension-Based Semantics. [Research Report] IRIT/RR--2022--05--FR, IRIT - Institut de Recherche en Informatique de Toulouse. 2022, pp.1-20. ⟨hal-03657060⟩
