Peeking Outside the Black-Box: AI Explainability Requirements beyond Interpretability
- Authored by
- Jakob Droste, Hannah Deters, Ronja Fuchs, Kurt Schneider
- Abstract
With the rise of artificial intelligence (AI) in society, more people are coming into contact with complex and opaque software systems in their daily lives. These black-box systems are typically hard to understand and therefore hard for end-users to trust. Research in eXplainable Artificial Intelligence (XAI) has shown that explanations have the potential to address this opacity by making systems more transparent and understandable. However, the line between interpretability and explainability is blurry at best. While there are many definitions of explainability in XAI, most do not look beyond the justification of outputs, i.e., beyond providing interpretability. Meanwhile, contemporary research outside of XAI has adopted wider definitions of explainability and examined system aspects other than algorithms and their outputs. In this position paper, we argue that requirements engineers for AI systems need to consider explainability requirements beyond interpretability. To this end, we present a hypothetical scenario in the medical sector that demonstrates a variety of explainability requirements which are typically not considered by XAI researchers. This contribution aims to start a discussion in the XAI community and to motivate AI engineers to look outside the black-box when eliciting explainability requirements.
- Organisation(s)
- Software Engineering Section
- Type
- Conference contribution
- Volume
- 3672
- Publication date
- 2024
- Publication status
- Published
- Peer reviewed
- Yes
- ASJC Scopus subject areas
- General Computer Science
- Electronic version(s)
- https://ceur-ws.org/Vol-3672/RE4AI-paper2.pdf (Access: Open)