Conference Poster, Year: 2024

A semiotic-based framework to assess mental models of XAI systems

Clément Arlotti
Nicolas Heulot

Abstract

The rapid growth of eXplainable Artificial Intelligence (XAI) in many industrial sectors stresses the need for user-centered explanations to ensure trustworthy operational use [1,2,3]. To address a broader audience, these systems should ultimately adapt to the way their users think. However, it remains a challenge to describe how human stakeholders picture such systems in order to characterize their mental models. Many user-centered design studies for AI systems [4,1,5,6,7] tend either to build on frameworks of human explanation from the social sciences or to study empirically how explanation features impact user interaction with AI, but they lack systematic conceptual tools to connect the two sides. We therefore propose to investigate the potential of the vast body of work stemming from Peirce's semiotic theory (the systematic study of representation and interpretation processes [8,9,10]) to connect and consolidate these existing concepts into one unifying framework. We show how fundamental semiotic concepts can be used to describe three key aspects of mental models that constitute the interpretation process: representing, explaining, and understanding. We then gather interdisciplinary elements to embody formal aspects of the semiotic theory. This allows an operational assessment of the different types of relations human stakeholders have with an XAI system, given their particular background knowledge, goals, and interests. To study these divergent ways of thinking about and interacting with the machine, we notably leverage the concept of mental model [2,11] and use it to characterize the gap between the designers' intended purpose of the system and the in-field user experience as a mental model misalignment. Finally, to test the framework's applicability, we carried out interview workshops with designers and users of an industrial XAI system. In particular, we assessed the framework's ability to delineate consistent stakeholder profile tendencies based on semiotic categories. In doing so, we aim to highlight the potential of existing work in semiotics for connecting interdisciplinary concepts into a unifying user-centered assessment framework.

References:

[1] Liao QV, Varshney KR. Human-centered explainable AI (XAI): From algorithms to user experiences. arXiv preprint arXiv:2110.10790. 2021.
[2] Rutjes H, Willemsen M, IJsselsteijn W. Considerations on explainable AI and users' mental models. In: Where is the Human? Bridging the Gap Between AI and HCI. Glasgow: Association for Computing Machinery; 2019.
[3] Páez A. The Pragmatic Turn in Explainable Artificial Intelligence (XAI). Minds and Machines. 2019 Sep;29(3):441-59.
[4] Liao QV, Gruen D, Miller S. Questioning the AI: informing design practices for explainable AI user experiences. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems; 2020. p. 1-15.
[5] Liao QV, Subramonyam H, Wang J, Wortman Vaughan J. Designerly understanding: Information needs for model transparency to support design ideation for AI-powered user experience. In: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems; 2023. p. 1-21.
[6] Wright AP, Wang ZJ, Park H, Guo G, Sperrle F, El-Assady M, et al. A comparative analysis of industry human-AI interaction guidelines. arXiv preprint arXiv:2010.11761. 2020.
[7] Yildirim N, Pushkarna M, Goyal N, Wattenberg M, Viégas F. Investigating How Practitioners Use Human-AI Guidelines: A Case Study on the People + AI Guidebook. In: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems; 2023. p. 1-13.
[8] Peirce CS, Hoopes J. Peirce on signs: writings on semiotic. Reprint ed. Chapel Hill: University of North Carolina Press; 2006.
[9] Nöth W. The semiotics of models. Sign Systems Studies. 2018 May;46(1):7-43.
Main file
15_03Poster_EN_HyCHA2024_logo_confiance.pdf (648.56 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04520749, version 1 (25-03-2024)

Identifiers

  • HAL Id: hal-04520749, version 1

Cite

Clément Arlotti, Nicolas Heulot. A semiotic-based framework to assess mental models of XAI systems. HyCHA 2024 Hybridation Connaissances, Humain et Apprentissage Statistique, Mar 2024, Gif-sur-Yvette, France. ⟨hal-04520749⟩