Explainability Requirements for Time Series Forecasts

A Study in the Energy Domain

Authored by
Jakob Droste, Ronja Fuchs, Hannah Deters, Jil Klunder, Kurt Schneider
Abstract

With the rise of artificial intelligence in industry, many companies rely on machine learning methods such as time series forecasting. By processing data from the past, such systems can provide predictions for future data. In practice, however, there is often skepticism about the quality of the forecasts. Explainability has been identified as a means to address this skepticism and foster trust. While there are already different methods to explain time series forecasts, it is unclear which of these explanations are actually useful for stakeholders. To investigate the need for explanations for time series forecasts, we conducted a study at a mid-sized German company in the energy domain. Throughout the study, 23 participants were shown five examples of different explanation types. For each type of explanation, we tested whether it actually helped our participants to better understand the forecasts. We found that visual explanations, including decision trees and feature importance charts, improved domain experts' understanding of time series forecasts. Textual explanations tended to lead to confusion rather than empowerment. While the exact findings and preferable types of explanations may vary between companies, our concrete results can provide a starting point for in-depth analyses in other environments.

Organizational unit(s)
Software Engineering Group
External organization(s)
Kraft-Wärme-Kopplung GmbH
Type
Article in conference proceedings
Pages
229-239
Number of pages
11
Publication date
2024
Publication status
Published
Peer-reviewed
Yes
ASJC Scopus subject areas
General Computer Science, General Engineering, Strategy and Management
Electronic version(s)
https://doi.org/10.1109/RE59067.2024.00030 (Access: Closed)