IJCAI 2017 Workshop on
Explainable Artificial Intelligence (XAI)
Venue: TBD | Room: TBD
19, 20, or 21 August 2017 | Melbourne, Australia
Explainable Artificial Intelligence (XAI) concerns, in part, the
challenge of shedding light on opaque machine learning (ML) models in
contexts where transparency is important, whether those models are
used to solve analysis tasks (e.g., classification) or synthesis tasks
(e.g., planning, design). Indeed, most ML research focuses on
prediction tasks but rarely on providing explanations or justifications
for the resulting predictions. Yet users of many applications (e.g., in
autonomous control, medicine, finance, investment) require
understanding before committing to decisions with inherent risk. For
example, a delivery drone should explain to its remote operator why it
is operating normally or why it has suspended its behavior (e.g., to
avoid placing its fragile package in an unsafe location), and an
intelligent decision aid should explain its recommendation of an
aggressive medical intervention (e.g., in reaction to a patient's
recent health patterns). Addressing this challenge has grown more
urgent with the increasing reliance on learned models in deployed
applications.
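As a concrete, minimal illustration of what "shedding light" on an
opaque model can mean in practice, the sketch below (our own example,
assuming scikit-learn and a standard benchmark dataset chosen purely
for illustration) trains an interpretable surrogate tree to mimic a
black-box classifier, so that approximate decision rules can be read
directly:

    # Minimal illustrative sketch: approximate an opaque classifier with an
    # interpretable surrogate whose decision rules are human-readable.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_breast_cancer()  # stand-in dataset, chosen for illustration
    X, y = data.data, data.target

    # Opaque model: an ensemble whose individual predictions are hard to trace.
    black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # Interpretable surrogate: a shallow tree trained to mimic the black box's
    # predictions (not the true labels), yielding readable if-then rules.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    print(export_text(surrogate, feature_names=list(data.feature_names)))

Such a surrogate is only an approximation of the black box, which is
precisely why questions of explanation quality (discussed below) matter.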
The need for interpretable models exists independently of how a model
was acquired (e.g., it may have been hand-crafted or interactively
elicited without using ML techniques). This raises several questions,
such as: How should explainable models be designed? How should user
interfaces communicate decision making? What types of user
interactions should be supported? How should explanation quality be
measured? And what can be learned from XAI research that has not
involved ML?
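One common (though partial) answer to the question of measuring
explanation quality is "fidelity": how often an explanation (here, a
surrogate model, under the same illustrative assumptions as the sketch
above) agrees with the black box it explains on held-out data:

    # Minimal illustrative sketch: measure explanation quality as "fidelity",
    # the agreement between a surrogate and the black box it explains.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, _ = train_test_split(X, y, random_state=0)

    black_box = RandomForestClassifier(n_estimators=200, random_state=0)
    black_box.fit(X_train, y_train)

    # Train the surrogate to mimic the black box's predictions, then score
    # agreement on data neither model was fit to.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X_train, black_box.predict(X_train))
    fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
    print(f"Surrogate fidelity on held-out data: {fidelity:.2%}")

Fidelity is, of course, only one proxy; human-subject measures of
usefulness and trust are equally relevant to the questions above.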
This workshop will provide a forum for sharing and learning about
recent research on interactive XAI methods, highlighting and
documenting promising approaches, and encouraging further work,
thereby fostering connections among researchers interested in ML (and
AI more generally), human-computer interaction, cognitive modeling,
and cognitive theories of explanation and transparency. While it
shares an interest in technical methods with other workshops, the XAI
Workshop will have a distinct focus on agent explanation problems,
which are also seen as necessary requirements for human-machine
teaming. This topic is of particular importance to (1) deep learning
techniques (given their many recent real-world successes and their
black-box models), (2) other types of ML and knowledge acquisition
models, and (3) applications of symbolic logical methods in settings
where supporting explanations is critical.
XAI should interest researchers studying the topics listed below (among
others). In accord with the theme of IJCAI-17, we are particularly
interested in work involving autonomous (and semi-autonomous) decision
making.
- Machine learning (e.g., deep, reinforcement, statistical relational, transfer)
- Cognitive architectures
- Commonsense reasoning
- Decision making
- Episodic reasoning
- Intelligent agents (e.g., planning and acting, goal reasoning)
- Knowledge acquisition
- Narrative intelligence
- Temporal reasoning
- After action reporting
- Ambient intelligence
- Autonomous control
- Caption generation
- Computer games
- Image processing (e.g., security/surveillance tasks)
- Information retrieval and reuse
- Intelligent Decision Aids
- Intelligent tutoring
- Plan replay
- Recommender systems
- User modeling
- Visual question-answering
The morning session will begin with a short presentation on the
workshop's motivation, highlighting the potential benefits of XAI
methods and summarizing existing related work. We will host (1)
invited talks by senior XAI researchers who can provide broad visions
of what XAI techniques can accomplish, discuss their recent related
work, or discuss important open research issues; (2) oral or poster
presentations of accepted submissions; (3) demonstrations of XAI
systems; and (4) a panel of active XAI researchers who will be asked
to discuss key issues that require attention and promising research
directions. We will end with a summary that categorizes the workshop's
contributions, lists identified open research areas, and provides
pointers to further information. Time will be reserved throughout the
workshop for discussion sessions on selected focal topics.
Details on the agenda, when available, will be posted here.
We are currently contacting potential invited speakers, and
tentatively list notional (unconfirmed) speakers below.
- Jessie Chen (ARL, USA)
- Trevor Darrell (UCB, USA)
- Dave Gunning (DARPA, USA)
- Daniele Magazzeni (King's College London, UK)
- Foster Provost (NYU, USA)
- Darryn Reid (DSTG Edinburgh, Australia): Uncertainty, Resource Allocation, and Trust
- Raymond Sheh (CU, Australia)
- Mohan Sridharan (U. Auckland, New Zealand)
- Manuela Veloso (CMU, USA)
- Yezhou Yang (ASU, USA)
As mentioned, our agenda will primarily include invited
speakers. However, we warmly welcome contributions that, for example,
describe prior or ongoing work on XAI, identify key issues that
require further research, or highlight challenges of interest to XAI
researchers and practitioners along with plans for addressing
them. In particular, we welcome the following types of submissions:
- Theoretical analyses
- Empirical analyses (human subject or simulation)
- System demonstrations
- Position papers
- Descriptions of planned research or applications
Self-contained submissions must be no longer than five pages (i.e.,
four pages for the main text and one additional page for references
only) in PDF format for letter-size (8.5 x 11 in.) paper. Please use
the provided templates, and include author names, affiliations, and
email addresses on the first page. Submissions should be made through
EasyChair.
Papers will be subject to peer review by the workshop's program
committee. Selection criteria include originality of ideas,
correctness, clarity and significance of results (or other
contribution), and quality of presentation.
Important dates:
- 1 June 2017: Paper submissions due
- 15 June 2017: Reviews due to EasyChair
- 18 June 2017: Review feedback sent to authors
- 9 July 2017: Camera-ready versions due
- 19, 20, or 21 August 2017: XAI Workshop
Below is a sample of related work. Additional suggestions are welcome.
- Cheng, H., et al. (2014). SRI-Sarnoff AURORA at TRECVID 2014: Multimedia event detection and recounting.
- Doshi-Velez, F., & Kim, B. (2017). A roadmap for a rigorous science of interpretability. (arXiv:1702.08608)
- Elhoseiny, M., Liu, J., Cheng, H., Sawhney, H., & Elgammal, A. (2016). Zero-shot event detection by multimodal distributional semantic embedding of videos. Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (pp. 3478-3486). Phoenix, AZ: AAAI Press.
- Hendricks, L.A., Akata, Z., Rohrbach, M., Donahue, J., Schiele, B., & Darrell, T. (2016). Generating visual explanations. (arXiv:1603.08507v1)
- Kofod-Petersen, A., Cassens, J., & Aamodt, A. (2008). Explanatory capabilities in the CREEK knowledge-intensive case-based reasoner. Frontiers in Artificial Intelligence and Applications, 173, 28-35.
- Kulesza, T., Burnett, M., Wong, W. K., & Stumpf, S. (2015). Principles of explanatory debugging to personalize interactive machine learning. Proceedings of the Twentieth International Conference on Intelligent User Interfaces (pp. 126-137). Atlanta, GA: ACM Press.
- Lake, B.M., Salakhutdinov, R., & Tenenbaum, J.B. (2015). Human-level concept learning through probabilistic program induction. Science, 350, 1332-1338.
- Langley, P., Meadows, B., Sridharan, M., & Choi, D. (2017). Explainable agency for intelligent autonomous systems. Proceedings of the Twenty-Ninth Annual Conference on Innovative Applications of Artificial Intelligence. San Francisco, CA: AAAI Press.
- Letham, B., Rudin, C., McCormick, T., & Madigan, D. (2015). Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model. Annals of Applied Statistics, 9(3), 1350-1371.
- Lombrozo, T. (2012). Explanation and abductive inference. In Oxford Handbook of Thinking and Reasoning (pp. 260-276).
- Martens, D., & Provost, F. (2014). Explaining data-driven document classifications. MIS Quarterly, 38(1), 73-99.
- Ribeiro, M.T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?" Explaining the predictions of any classifier. Human Centered Machine Learning: Papers from the CHI Workshop. (arXiv:1602.04938v1)
- Rosenthal, S., Selvaraj, S.P., & Veloso, M. (2016). Verbalization: Narration of autonomous mobile robot experience. Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence. New York, NY: AAAI Press.
- Sheh, R.K. (2017). "Why did you do that?" Explainable intelligent robots. In K. Talamadupula, S. Sohrabi, L. Michael, & B. Srivastava (Eds.) Human-Aware Artificial Intelligence: Papers from the AAAI Workshop (Technical Report WS-17-11). San Francisco, CA: AAAI Press.
- Shwartz-Ziv, R., & Tishby, N. (2017). Opening the black box of deep neural networks via information. (arXiv:1703.00810 [cs.LG])
- Si, Z., & Zhu, S. (2013). Learning AND-OR templates for object recognition and detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(9), 2189-2205.
- Sørmo, F., Cassens, J., & Aamodt, A. (2005). Explanation in case-based reasoning: Perspectives and goals. Artificial Intelligence Review, 24(2), 109-143.
- Swartout, W., Paris, C., & Moore, J. (1991). Explanations in knowledge systems: Design for explainable expert systems. IEEE Expert, 6(3), 58-64.
- van Lent, M., Fisher, W., & Mancuso, M. (2004). An explainable artificial intelligence system for small-unit tactical behavior. Proceedings of the Nineteenth National Conference on Artificial Intelligence (pp. 900-907). San Jose, CA: AAAI Press.
- Zahavy, T., Zrihem, N.B., & Mannor, S. (2017). Graying the black box: Understanding DQNs. (arXiv:1602.02658 [cs.LG])
- Who is the point of contact for this workshop?
- David W. Aha; please send any communications/questions to him at [david.aha (at) nrl.navy.mil]
- 3 March 2017: Web site is live