2nd ICAPS Workshop on Explainable Planning (XAIP-2019)

Berkeley, CA, 12 July 2019.

The proceedings are available for download!

Mission:

Explainable Artificial Intelligence (XAI) concerns the challenge of shedding light on opaque models in contexts where transparency is important, i.e., where these models are used to solve analysis or synthesis tasks. In particular, as AI is increasingly adopted in application solutions, the challenge of supporting interaction with humans is becoming more apparent. In part this is to support integrated working styles, in which humans and intelligent systems cooperate in problem-solving, but it is also a necessary step in building trust as humans delegate greater competence and responsibility to such systems. The challenge is to find effective ways to characterise, and to communicate, the foundations of AI-driven behaviour when the algorithms that drive it are far from transparent to humans. While XAI at large is primarily concerned with learning-based approaches, model-based approaches are well suited -- arguably better suited -- to explanation, and Explainable AI Planning (XAIP) can play an important role in addressing complex decision-making procedures.

After the success of previous workshops on XAI and XAIP, the mission of this workshop is to mature and broaden the XAIP community, fostering continued exchange on XAIP topics at ICAPS.

Topics:

Topics of interest include (but are not limited to):

Submissions:

Given the novelty of XAIP as an area, we invite several different kinds of submissions:


Please format submissions in AAAI style (see the instructions in the Author Kit at http://www.aaai.org/Publications/Templates/AuthorKit19.zip). Authors considering submitting papers rejected from the main conference should do their utmost to address the comments given by the ICAPS reviewers. Please do not submit papers that have already been accepted for the main conference.

Every submission will be reviewed by members of the program committee according to the usual criteria, such as relevance to the workshop, significance of the contribution, and technical quality. Authors can select whether they want their submissions to be single-blind or double-blind (recommended for IJCAI dual submissions) at the time of submission. The OpenReview forum will also allow the public to comment on and provide feedback about the papers (but authors will remain anonymous to the public).
Submit your papers here: https://openreview.net/group?id=icaps-conference.org/ICAPS/2019/Workshop/XIAP

Submissions sent to other conferences are allowed, provided this does not interfere with those venues' submission rules. Submissions under double-blind review at another conference (in particular IJCAI-19) must be anonymous.

The workshop is meant to be an open and inclusive forum, and we encourage papers that report on work in progress or that do not fit the mold of a typical conference paper.

At least one author of each accepted paper must attend the workshop in order to present the paper. Authors must register for the ICAPS main conference in order to attend the workshop. There will be no separate workshop-only registration.

Important Dates:

  • Paper submission: April 15, 2019 (UTC-12)
  • Notification: May 15, 2019
  • ICAPS early registration: May 17, 2019
  • Camera-ready submission: TBD
  • Date of Workshop: July 12, 2019
Invited Speaker:

Robert R. Hoffman, Institute for Human and Machine Cognition

Macrocognition: Foundations for Planning and Explanation

Macrocognition is how cognition adapts to complexity. The historical roots of macrocognition reach back to the late 1800s, and the essentials of the paradigm have been fairly well specified.
The models of sensemaking, flexecution, coordination, re-learning, and mental projection help clarify the differences between macrocognitive and microcognitive approaches. Microcognitive models are based on causal chains with distinct start and stop (or input-output) points, whereas macrocognitive models are cyclical and closed-loop. Microcognitive models are useful in hindsight, to tell stories; macrocognitive models are transcendent and anticipatory.
The primary macrocognitive functions correspond with the "Families of Laws of Complex Cognitive Systems" developed by David Woods. The Families are based on five fundamental bounds on complex human-machine work systems.
Noteworthy aspects of macrocognition are pertinent to planning systems technology:

These macrocognitive concepts and models have implications for Explainable AI (XAI) systems. If we present a user with an AI planning system that explains how it works, how do we know whether the explanation works, whether the user has made sense of the AI, and whether the user is able to flexecute with it? In other words, how do we know that an XAI system is any good? Key concepts of measurement include specific methods for evaluating: (1) the goodness of explanations, (2) whether users are satisfied by explanations, (3) how well users understand the AI systems, (4) how curiosity motivates the search for explanations, (5) whether the user's trust and reliance on the AI are appropriate, and, finally, (6) how well the human-XAI work system performs.

Program:

8:30 - 8:45 Introduction
8:45 - 9:45 Invited Talk by Robert Hoffman
Macrocognition: Foundations for Planning and Explanation
9:45 - 10:30 Session 1
Design for Interpretability
Anagha Kulkarni, Sarath Sreedharan, Sarah Keren, Tathagata Chakraborti, David Smith, Subbarao Kambhampati
Towards Explainable Planning as a Service
Michael Cashmore, Anna Collins, Benjamin Krarup, Senka Krivic, Daniele Magazzeni, David Smith
Varieties of Explainable Agency
Pat Langley
Challenges of Explaining Control
Adrian Agogino, Ritchie Lee, Dimitra Giannakopoulou
10:30 - 11:00 BREAK
11:00 - 12:00 Session 2
Combining Cognitive and Affective Measures with Epistemic Planning for Explanation Generation
Ronald P. A. Petrick, Sara Dalzel-Job, Robin L. Hill
Feature-directed Active Learning for Learning User Preferences
Sriram Gopalakrishnan, Utkarsh Soni, Subbarao Kambhampati
Online Explanation Generation for Human-Robot Teaming
Mehrdad Zakershahrak, Ze Gong, Akkamahadevi Hanni, Yu Zhang
Model-Free Model Reconciliation
Sarath Sreedharan, Alberto Olmo, Aditya Prasad Mishra, Subbarao Kambhampati
Human-Understandable Explanations of Infeasibility for Resource-Constrained Scheduling Problems
Niklas Lauffer, Ufuk Topcu
A General Framework for Synthesizing and Executing Self-Explaining Plans for Human-AI Interaction
Sarath Sreedharan, Tathagata Chakraborti, Christian Muise, Subbarao Kambhampati
12:00 - 12:30 Posters for sessions 1 and 2
12:30 - 14:00 LUNCH
14:00 - 15:00 Session 3
Towards Model-Based Contrastive Explanations for Explainable Planning
Benjamin Krarup, Michael Cashmore, Daniele Magazzeni, Tim Miller
Explaining the Space of Plans through Plan-Property Dependencies
Rebecca Eifler, Michael Cashmore, Jörg Hoffmann, Daniele Magazzeni, Marcel Steinmetz
A Preliminary Logic-based Approach for Explanation Generation
Stylianos Loukas Vasileiou, William Yeoh, Tran Cao Son
Bayesian Inference of Temporal Specifications to Explain How Plans Differ
Joseph Kim, Christian Muise, Ankit Shah, Shubham Agarwal, Julie Shah
Why Can't You Do That HAL? Explaining Unsolvability of Planning Tasks
Sarath Sreedharan, Siddharth Srivastava, David Smith, Subbarao Kambhampati
Towards an argumentation-based approach to explainable planning
Anna Collins, Daniele Magazzeni, Simon Parsons
15:00 - 15:30 Posters for session 3
15:30 - 16:00 BREAK
16:00 - 17:00 Session 4
When Agents Talk Back: Rebellious Explanations
Ben Wright, Mark Roberts, David W. Aha, Ben Brumback
(How) Can AI Bots Lie?
Tathagata Chakraborti, Subbarao Kambhampati
Domain-independent Plan Intervention When Users Unwittingly Facilitate Attacks
Sachini Weerawardhana, Darrell Whitley, Mark Roberts
Robust Goal Recognition with Operator-Counting Heuristics
Felipe Meneguzzi, André Grahl Pereira, Ramon F. Pereira
Branching-Bounded Contingent Planning via Belief Space Search
Kevin McAreavey, Kim Bauters, Weiru Liu, Jun Hong
17:00 - 17:30 Posters for session 4

Accepted Papers:

Sister Workshops:

Please consider our sister workshops at AAMAS and IJCAI!
EXTRAAMAS (at AAMAS)
XAI (at IJCAI)

Organizing Chairs:

Program Committee: