IJCAI 2017 Workshop on
Explainable Artificial Intelligence (XAI)

Venue: TBD | Room: TBD
20 August 2017 | Melbourne, Australia



Explainable Artificial Intelligence (XAI) concerns, in part, the challenge of shedding light on opaque machine learning (ML) models in contexts where transparency is important, whether those models are used for analysis tasks (e.g., classification) or synthesis tasks (e.g., planning, design). Most ML research focuses on prediction and rarely on providing explanations or justifications for predictions. Yet users of many applications (e.g., in autonomous control, medicine, finance, or investment) require understanding before committing to decisions with inherent risk. For example, a delivery drone should explain (to its remote operator) why it is operating normally or why it has suspended its behavior (e.g., to avoid placing a fragile package in an unsafe location), and an intelligent decision aid should explain its recommendation of an aggressive medical intervention (e.g., in reaction to a patient's recent health patterns). Addressing this challenge has grown more urgent as deployed applications increasingly rely on learned models.

The need for interpretable models exists independently of how the models were acquired (e.g., they may have been hand-crafted or interactively elicited without using ML techniques). This raises several questions: How should explainable models be designed? How should user interfaces communicate decision making? What types of user interactions should be supported? How should explanation quality be measured? And what can be learned from research on XAI that has not involved ML?

This workshop will provide a forum for sharing and learning about recent research on interactive XAI methods, highlighting and documenting promising approaches, and encouraging further work, thereby fostering connections among researchers interested in ML (and AI more generally), human-computer interaction, cognitive modeling, and cognitive theories of explanation and transparency. While it shares an interest in technical methods with other workshops, the XAI Workshop will have a distinct focus on agent explanation problems, which are also seen as necessary requirements for human-machine teaming. This topic is of particular importance to (1) deep learning techniques, given their many recent real-world successes and black-box models; (2) other types of ML and knowledge acquisition models; and (3) symbolic and logical methods, to facilitate their use in applications where supporting explanations is critical.

Topic Areas of Interest

XAI should interest researchers studying the topics listed below (among others). In accord with the theme of IJCAI-17, we are particularly interested in work involving autonomous (and semi-autonomous) decision making.


Agenda

The morning session will begin with a short presentation on the workshop's motivation, highlighting the potential benefits of using XAI methods and summarizing existing related work and efforts. We will host (1) invited talks by senior XAI researchers (e.g., who can provide broad visions for what XAI techniques can accomplish, discuss their recent related work, or discuss important open research issues), (2) oral or poster presentations of accepted submissions, (3) demonstrations of XAI systems, and (4) a panel of active XAI researchers who will be asked to discuss key issues that require attention and promising research directions. We will end with a summary that categorizes the workshop's contributions, lists identified open research areas, and provides pointers for further information. Time will be reserved throughout the workshop for discussion sessions on selected focal topics.

Details on the agenda, when available, will be posted here.

Invited Speakers

Trevor Darrell | Dave Gunning | Rao Kambhampati | Freddy Lecue | Dan Magazzeni | Darryn Reid | Raymond Sheh | Barry Smyth | Mohan Sridharan

Paper Submissions

Our agenda will primarily feature invited speakers, but we warmly welcome contributions that, for example, describe prior or ongoing work on XAI, identify key issues that require further research, or highlight relevant challenges of interest to XAI researchers and practitioners, along with plans for addressing them. Submissions must be self-contained and no longer than five pages (i.e., four pages for the main text and one additional page for references only) in PDF format for letter-size (8.5 x 11 inch) paper. Please use IJCAI-17's templates, and include author names, affiliations, and email addresses on the first page. Submissions should be made through EasyChair.

Papers will be subject to peer review by the workshop's program committee. Selection criteria include originality of ideas, correctness, clarity, significance of results (or other contributions), and quality of presentation.



Program Committee

Related Work

Below is a sample of related work. Additional suggestions are welcome.

Related Events