
Reframing Explanation as an Interactive Medium: The EQUAS (Explainable QUestion Answering System) Project
  • Dhruv Batra, Georgia Tech (Corresponding Author: [email protected])
  • William Ferguson, Raytheon BBN Technologies
  • Raymond Mooney, The University of Texas at Austin
  • Devi Parikh, Georgia Tech
  • Antonio Torralba, Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory
  • David Bau, Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory
  • David Diller, Raytheon BBN Technologies
  • Joshua Fasching, Raytheon BBN Technologies
  • Jaden Fiotto-Kaufman, Raytheon BBN Technologies
  • Yash Goyal, Georgia Tech
  • Jeff Miller, Raytheon BBN Technologies
  • Kerry Moffitt, Raytheon BBN Technologies
  • Alex Montes De Oca, Raytheon BBN Technologies
  • Ramprasaath R. Selvaraju, Georgia Tech
  • Ayush Shrivastava, Georgia Tech
  • Jialin Wu, The University of Texas at Austin

Abstract

This letter provides a retrospective analysis of our team's research performed under the DARPA Explainable Artificial Intelligence (XAI) project. We began by exploring salience maps, English sentences, and lists of feature names for explaining the behavior of deep-learning-based discriminative systems, especially visual question answering systems. We demonstrated limited positive effects from statically presenting explanations along with system answers, for example when teaching people to identify bird species. Meanwhile, many XAI performers were obtaining better results when users interacted with explanations. This motivated us to evolve the notion of explanation as an interactive medium, usually between humans and AI systems but sometimes within the software system itself. We realized that interacting via explanations could enable people to task and adapt ML agents. We added affordances for editing explanations and modified the ML system to act in accordance with those edits, producing an interpretable interface to the agent. Through this interface, editing an explanation can adapt a system's performance to new, modified purposes. This deep tasking, wherein the agent knows its objective and the explanation for that objective, will be critical to enabling higher levels of autonomy.
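As a minimal, hypothetical sketch of the explanation-editing idea (the function names and the multiplicative-mask scheme below are illustrative assumptions, not the EQUAS implementation): a salience map over image regions is exposed to the user, the user's edits become a mask, and the model's attention is re-weighted so that its subsequent answers reflect the edited explanation.

import numpy as np

def normalize(weights):
    # Renormalize non-negative weights into a distribution; fall back to
    # uniform if the user suppresses everything.
    total = weights.sum()
    return weights / total if total > 0 else np.full_like(weights, 1.0 / weights.size)

def apply_explanation_edit(attention, edit_mask):
    # attention : (n_regions,) model-produced attention over image regions
    # edit_mask : (n_regions,) user edits in [0, 1]; 0 suppresses a region,
    #             1 leaves it untouched, intermediate values down-weight it.
    return normalize(attention * edit_mask)

# Example: the model attends mostly to region 0, but the user edits the
# explanation to rule that region out (e.g., "ignore the background").
attention = np.array([0.6, 0.3, 0.1])
edit_mask = np.array([0.0, 1.0, 1.0])   # suppress region 0
print(apply_explanation_edit(attention, edit_mask))  # -> [0.   0.75 0.25]

In this toy formulation, the edited explanation is itself the tasking signal: the same artifact the system uses to justify an answer becomes the channel through which a person modifies the system's objective.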
Publication History

11 Jun 2021: Submitted to Applied AI Letters
18 Jun 2021: Submission Checks Completed
18 Jun 2021: Assigned to Editor
22 Jun 2021: Reviewer(s) Assigned
17 Jul 2021: Review(s) Completed, Editorial Evaluation Pending
26 Jul 2021: Editorial Decision: Revise Minor
13 Oct 2021: 1st Revision Received
15 Oct 2021: Submission Checks Completed
15 Oct 2021: Assigned to Editor
25 Oct 2021: Reviewer(s) Assigned
12 Nov 2021: Review(s) Completed, Editorial Evaluation Pending
12 Nov 2021: Editorial Decision: Accept