AI Strategy and Concepts Bibliography

WIKINDX Resources

Mehdiyev, N., & Fettke, P. (2021). Explainable artificial intelligence for process mining: A general overview and application of a novel local explanation approach for predictive process monitoring. Interpretable Artificial Intelligence: A Perspective of Granular Computing, 1–28. 
Resource type: Journal Article
BibTeX citation key: Mehdiyev2021
Categories: Artificial Intelligence, Cognitive Science, Complexity Science, Computer Science, Data Sciences, Decision Theory, General, Mathematics
Subcategories: Analytics, Behavioral analytics, Big data, Chaos theory, Decision making, Deep learning, Forecasting, Human decision-making, Informatics, Machine learning, Markov models, Q-learning, Systems theory
Creators: Fettke, Mehdiyev
Collection: Interpretable Artificial Intelligence: A Perspective of Granular Computing
Abstract
Contemporary process-aware information systems can record the activities generated during process execution. To leverage these fine-granular, process-specific data, process mining has recently emerged as a promising research discipline. An important branch of process mining, predictive business process management, pursues the objective of generating forward-looking, predictive insights to shape business processes. In this study, we propose a conceptual framework designed to establish and promote an understanding of the decision-making environment, the underlying business processes, and the nature of the user characteristics when developing explainable business process prediction solutions. With regard to the theoretical and practical implications of the framework, the study then proposes a novel local post-hoc explanation approach for a deep learning classifier that is expected to help domain experts justify the model's decisions. In contrast to popular perturbation-based local explanation approaches, this study defines the local regions from the validation dataset using the intermediate latent space representations learned by the deep neural network. To validate the applicability of the proposed explanation method, real-life process log data delivered by Volvo IT Belgium's incident management system are used. The adopted deep learning classifier achieves good performance, with an area under the ROC curve of 0.94. The generated local explanations are also visualized and presented with relevant evaluation measures, which are expected to increase users' trust in the black-box model.
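The core idea the abstract describes, defining the local region for an explanation by nearest neighbors in the classifier's latent space rather than by input perturbation, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy "network" (a fixed random projection encoder plus a logistic head), the neighborhood size `k`, and the linear surrogate are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained deep classifier (hypothetical weights):
# a fixed encoder producing latent representations, and a logistic head.
W_latent = rng.normal(size=(10, 4))   # encoder weights
w_out = rng.normal(size=4)            # output-head weights

def encode(X):
    """Intermediate latent-space representation of the 'network'."""
    return np.tanh(X @ W_latent)

def predict_proba(X):
    """Black-box probability output of the 'network'."""
    z = encode(X) @ w_out
    return 1.0 / (1.0 + np.exp(-z))

# Validation set and the instance whose prediction we want to explain.
X_val = rng.normal(size=(200, 10))
x = rng.normal(size=10)

# 1. Define the local region in LATENT space (not by perturbing inputs):
#    the k validation points closest to x's latent representation.
k = 25
dists = np.linalg.norm(encode(X_val) - encode(x[None, :]), axis=1)
neighbors = X_val[np.argsort(dists)[:k]]

# 2. Fit an interpretable linear surrogate on that local region.
y_local = predict_proba(neighbors)
A = np.hstack([neighbors, np.ones((k, 1))])   # append intercept column
coef, *_ = np.linalg.lstsq(A, y_local, rcond=None)

# Per-feature local importance weights around x (intercept dropped).
importance = coef[:-1]
```

The surrogate's coefficients then serve as the local explanation of the black-box prediction at `x`; the paper's evaluation measures and visualizations would be applied on top of such weights.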
  