AI Bibliography

Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., & Kagal, L. (2018). Explaining explanations: An approach to evaluating interpretability of machine learning. arXiv preprint arXiv:1806.00069. 
Resource type: Preprint
BibTeX citation key: Gilpin2018
Categories: Artificial Intelligence, Cognitive Science, Computer Science, Data Sciences, Decision Theory, Engineering, General
Subcategories: Autonomous systems, Decision making, Deep learning, Human decision-making, Human factors engineering, Machine learning, Psychology of human-AI interaction, Social cognition
Creators: Bajwa, Bau, Gilpin, Kagal, Specter, Yuan
Collection: arXiv preprint arXiv:1806.00069
Abstract
There has recently been a surge of work in explanatory artificial intelligence (XAI). This research area tackles the important problem that complex machines and algorithms often cannot provide insights into their behavior and thought processes. XAI allows users and parts of the internal system to be more transparent, providing explanations of their decisions in some level of detail. These explanations are important for ensuring algorithmic fairness, identifying potential bias or problems in the training data, and ensuring that the algorithms perform as expected. However, explanations produced by these systems are neither standardized nor systematically assessed. In an effort to create best practices and identify open challenges, we provide our definition of explainability and show how it can be used to classify existing literature. We discuss why current approaches to explanatory methods, especially for deep neural networks, are insufficient. Finally, based on our survey, we conclude with suggested future research directions for explanatory artificial intelligence.
  
WIKINDX 6.7.0 | Total resources: 1621 | Username: -- | Bibliography: WIKINDX Master Bibliography | Style: American Psychological Association (APA)