Explainable Artificial Intelligence (XAI) refers to artificial intelligence (AI) systems that are transparent and understandable to humans. The goal of XAI is to develop AI models that can provide clear explanations of their decision-making processes so that humans can trust and verify their outputs.

An explainable system should be able to answer questions such as why it reached a certain conclusion, why it did not choose another path, why it succeeded or failed, and how an error can be corrected.

XAI is a relatively new field of study; the term was coined by DARPA (the Defense Advanced Research Projects Agency) when it launched its Explainable Artificial Intelligence research program.

"New machine-learning systems will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future."

- Dr. Matt Turek, DARPA

In 2020, the National Institute of Standards and Technology (NIST) published its Four Principles of Explainable Artificial Intelligence, which outlines the following fundamentals for XAI:

Explanation: A system delivers or contains accompanying evidence or reason(s) for outputs and/or processes.

Meaningful: A system provides explanations that are understandable to the intended consumer(s).

Explanation Accuracy: An explanation correctly reflects the reason for generating the output and/or accurately reflects the system’s process.

Knowledge Limits: A system only operates under conditions for which it was designed and when it reaches sufficient confidence in its output.
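
The Knowledge Limits principle can be made concrete with a small sketch: a classifier that abstains whenever its top-class probability falls below a confidence threshold. The dataset, model, and 0.75 cutoff below are illustrative assumptions, not part of the NIST report.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

CONFIDENCE_THRESHOLD = 0.75  # illustrative cutoff, not prescribed by NIST

for probs in model.predict_proba(X_test[:5]):
    top = probs.argmax()
    if probs[top] >= CONFIDENCE_THRESHOLD:
        print(f"predict class {top} (confidence {probs[top]:.2f})")
    else:
        # The system declines to answer rather than guess outside its limits.
        print(f"abstain: confidence {probs[top]:.2f} is below the knowledge limit")
```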

The efforts of DARPA, NIST, and others have set a precedent for XAI research, encouraging a shift away from the black box and toward AI transparency.

As it stands today, explainability is not inherent in many state-of-the-art AI systems, and applying explainability techniques that satisfy the criteria above to existing models is proving extremely challenging. Indeed, the most widely adopted AI techniques, tools, and systems are also the most opaque. Some experts argue that the unexplainable ‘black box’ may be necessary to maintain high accuracy and that retrofitting explainability onto these sub-symbolic systems would only hinder performance.
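
To see what such retrofitting involves, here is a minimal sketch of one common post-hoc technique, a global surrogate: an interpretable decision tree is trained to mimic a black-box model's predictions, and the tree's rules then serve as an approximate explanation. The particular models and dataset are illustrative choices, not a prescribed method.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# Stand-in for an opaque production model.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Fit the surrogate on the black box's outputs, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box. A surrogate
# only "explains" the black box to the extent that it faithfully mimics it.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

The gap this illustrates is exactly the Explanation Accuracy principle: the readable tree is only an approximation, and any disagreement with the black box is explanation error.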

AI explainability, however, is a key component of trustworthy, production-ready systems.

There are a number of applications across healthcare, finance, defense, aerospace, and other industries where explainability is required before AI can satisfy regulators. All of these applications (e.g., medical diagnosis and treatment, financial risk management, predictive maintenance for military equipment, and autonomous flight) stand to benefit greatly from artificial intelligence, but the lack of explainability raises reasonable ethical concerns that keep AI out of production in these and similar areas where it would be responsible for high-stakes decision support.

While excellent benchmark performance may be sufficient for many AI applications, trust and transparency are equally important metrics for solving our most complex problems. Explainability is the first step toward implementing ethical, unbiased AI systems that can be employed in ways that benefit society. It is also worth noting that there are technologies available today with explainability built in: systems that learn directly from data as well as expert input and perform on par with popular black-box systems.
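
As a simple illustration of the explainable-by-design idea, consider a sparse linear model whose coefficients are themselves the explanation; no post-hoc technique is needed. This is an illustrative stand-in for the built-in-explainability systems the text alludes to, not a reference to any specific product.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()

# L1 regularization drives most weights to zero, leaving a short,
# human-readable list of the features the model actually uses.
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
).fit(data.data, data.target)

coefs = model.named_steps["logisticregression"].coef_[0]
top5 = sorted(zip(data.feature_names, coefs), key=lambda t: -abs(t[1]))[:5]
for name, w in top5:
    print(f"{name:25s} weight {w:+.3f}")  # each weight is directly inspectable
```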

References:
1. DARPA - Explainable Artificial Intelligence (XAI)
2. NIST - Four Principles of Explainable Artificial Intelligence
3. Explainable Artificial Intelligence: An Analytical Review