IJCAI/ECAI 2018 Workshop on
Explainable Artificial Intelligence (XAI)


Venue: Stockholmsmässan | Room: K2 | Times: 0830-1500 and 1730-1900
13 July 2018 | Stockholm, Sweden
http://home.earthlink.net/~dwaha/research/meetings/faim18-xai


Description

Explainable AI (XAI) systems embody explanation processes that allow users to gain insight into the system's models and decisions, with the intent of improving the user's performance on a related task. For example, an XAI system could allow a delivery drone to explain (to its remote operator) why it is operating normally and the situations in which it will deviate (e.g., to avoid placing fragile packages in unsafe locations), thus allowing an operator to better manage a set of such drones. Likewise, an XAI decision aid could explain its recommendation for an aggressive surgical intervention (e.g., in reaction to a patient's recent health patterns and medical breakthroughs) so that a doctor can provide better care. The XAI system's models could be learned and/or hand-coded, and could be used for a wide variety of analysis or synthesis tasks. However, while users of many applications (e.g., those related to autonomous control, or to medical or financial decision making) require understanding before committing to decisions with inherent risk, most AI systems do not support robust explanation processes. Addressing this challenge has become more urgent with the increasing reliance on learned models in deployed applications.

This raises several questions, such as: how should explainable models be designed? How should user interfaces communicate decision making? What types of user interactions should be supported? How should explanation quality be measured? These questions are of interest to researchers, practitioners, and end-users, independent of what AI techniques are used. Solutions can draw from several disciplines, including cognitive science, human factors, and psycholinguistics.

This workshop will provide a forum for learning about exciting research on interactive XAI methods, highlighting and documenting promising approaches, and encouraging further work, thereby fostering connections among researchers interested in AI (e.g., causal modeling, computational analogy, constraint reasoning, intelligent user interfaces, ML, narrative intelligence, planning), human-computer interaction, cognitive modeling, and cognitive theories of explanation and transparency. While sharing an interest in technical methods with other workshops, the XAI Workshop will focus on agent explanation problems, motivated by human-machine teaming needs. This topic is of particular importance to (1) deep learning techniques (given their many recent real-world successes and black-box models) and (2) other types of ML and knowledge acquisition models, but also to (3) symbolic and logical methods, to facilitate their use in applications where supporting explanations is critical.

This is the Second XAI Workshop; the First XAI Workshop was held at IJCAI-17. XAI-18 will be coordinated among a set of four workshops.

We encourage XAI Workshop attendees to participate in these other, highly related events.

Topic Areas of Interest

XAI should interest researchers studying the following topics (among others):

Agenda (Tentative as of 6 July 2018!)

Date: Friday, 13 July 2018

Invited Speakers

Jörg Cassens | Cristina Conati | Ken Forbus | Andrew Gordon | Mohan Sridharan

Accepted Papers (Poster Presentations)

  1. Hybrid Data-Expert Explainable Beer Style Classifier
    Jose M. Alonso, Alejandro Ramos-Soto, Ciro Castiello, and Corrado Mencar

  2. Explanations for Temporal Recommendations
    Homanga Bharadhwaj and Shruti Joshi

  3. Towards Providing Explanations for AI Planner Decisions
    Rita Borgo, Michael Cashmore, and Daniele Magazzeni

  4. Human-Aware Planning Revisited: A Tale of Three Models
    Tathagata Chakraborti, Sarath Sreedharan, and Subbarao Kambhampati

  5. Explanatory Predictions with Artificial Neural Networks and Argumentation
    Oana Cocarascu, Kristijonas Čyras, and Francesca Toni

  6. ScenarioNet: An Interpretable Data-Driven Model for Scene Understanding
    Zachary Daniels and Dimitris Metaxas

  7. Explaining Deep Adaptive Programs via Reward Decomposition
    Martin Erwig, Alan Fern, Magesh Murali, and Anurag Koul

  8. Generating Natural Language Explanations for Visual Question Answering using Scene Graphs and Visual Attention
    Shalini Ghosh, Giedrius Burachas, Arijit Ray, and Avi Ziskind

  9. Interpretable Self-Labeling Semi-Supervised Classifier
    Isel Grau, Dipankar Sengupta, María Matilde García Lorenzo, and Ann Nowé

  10. How Explainable Plans Can Make Planning Faster
    Antoine Gréa, Laetitia Matignon, and Samir Aknine

  11. A Qualitative Analysis of Search Behavior: A Visual Approach
    Ian Howell, Robert Woodward, Berthe Y. Choueiry, and Hongfeng Yu

  12. Toward Learning Finite State Representations of Recurrent Policy Networks
    Anurag Koul, Sam Greydanus, and Alan Fern

  13. Interpretable Neuronal Circuit Policies for Reinforcement Learning Environments
    Mathias Lechner, Ramin M. Hasani, and Radu Grosu

  14. Exploring Explainable Artificial Intelligence and Autonomy through Provenance
    Crisrael Lucero, Braulio Coronado, Oliver Hui, and Douglas Lange

  15. Explore, Exploit, and Explain: Personalizing Explainable Recommendations with Bandits
    James McInerney, Benjamin Lacker, Samantha Hansen, Karl Higley, Hugues Bouchard, Alois Gruson, and Rishabh Mehrotra

  16. Declarative Description: The Meeting Point of Artificial Intelligence, Deep Neural Networks, and Human Intelligence
    Zoltán Á. Milacski, Kinga Bettina Faragó, Áron Fóthi, Viktor Varga, and András Lőrincz

  17. Assisted and Incremental Medical Diagnosis using Explainable Artificial Intelligence
    Isaac Monteath and Raymond Sheh

  18. From Context Mediation to Declarative Values and Explainability
    Grzegorz J. Nalepa, Martijn van Otterlo, Szymon Bobek, and Martin Atzmueller

  19. Local Interpretable Model-Agnostic Explanations of Bayesian Predictive Models via Kullback-Leibler Projections
    Tomi Peltola

  20. Explanatory Masks for Neural Network Interpretability
    Lawrence Phillips, Garrett Goh, and Nathan Hodas

  21. Transparency Communication for Reinforcement Learning in Human Robot Interactions
    David Pynadath, Ning Wang, and Michael Barnes

  22. Analyzing Hypersensitive AI: Instability in Corporate-Scale Machine Learning
    Michaela Regneri, Malte Hoffmann, Jurij Kost, Niklas Pietsch, Timo Schulz, and Sabine Stamm

  23. A Survey of Interpretability and Explainability in Human-Agent Systems
    Avi Rosenfeld and Ariella Richardson

  24. A Symbolic Approach to Explaining Bayesian Network Classifiers
    Andy Shih, Arthur Choi, and Adnan Darwiche

  25. Towards a Taxonomy for Interpretable and Interactive Machine Learning
    Elio Ventocilla, Tove Helldin, Maria Riveiro, Juhee Bae, Veselka Boeva, Göran Falkman, and Niklas Lavesson

  26. Explainable Security
    Luca Viganò and Daniele Magazzeni

  27. Contrastive Explanations for Reinforcement Learning in terms of Expected Consequences
    J.S. van der Waa, Jurriaan van Diggelen, Karel van den Bosch, and Mark Neerincx

  28. Show Me What You've Learned: Applying Cooperative Machine Learning for the Semi-Automated Annotation of Social Signals
    Johannes Wagner, Tobias Baur, Dominik Schiller, Yue Zhang, Björn Schuller, Michel Valstar, and Elisabeth André

  29. Learning Attributions Grounded in Existing Facts for Robust Visual Explanation
    Yulong Wang, Xiaolin Hu, and Hang Su

  30. Explicating Feature Contribution using Random Forest Proximity Distances
    Leanne S. Whitmore, Anthe George, and Corey M. Hudson

Paper Submissions

We welcome and encourage submissions relevant to the topic of Explainable AI (XAI). Authors may submit long papers (6 pages, plus up to one page of references) or short papers (4 pages, plus up to one page of references). (Note: Submissions do not need to be anonymous.)

All papers should be typeset in the IJCAI style. Accepted papers will be published on the workshop website. Papers must be submitted in PDF format via the EasyChair system (https://easychair.org/conferences/?conf=xai18).

Dates

Organizers

Program Committee

Related Work

Some related work is listed below. Additional suggestions are welcome.

Related Events

FAQ

News