
August 27, 2020

NIST’s Four Principles for Explainable Artificial Intelligence (XAI)


The National Institute of Standards and Technology (NIST) recently proposed four principles for explainable artificial intelligence (XAI). NIST’s XAI principles provide a framework for assessing the trustworthiness of AI solutions and can be a useful guide for developing and operating them.

NIST highlights the importance of leveraging interdisciplinary skills when building and assessing AI solutions, “including the fields of computer science, engineering, and psychology.” This is crucial. At Excella, we recognize that effective AI solutions require cross-functional teams that can translate between the underlying business opportunity and the technical work while eliminating unanticipated or undesirable bias in the underlying data. AI is more than a technical challenge; building trustworthy solutions that can be relied upon is an interdisciplinary effort.

NIST’s four principles for XAI reflect this perspective. They are:

  • Explanation: Systems deliver accompanying evidence or reason(s) for all outputs.
  • Meaningful: Systems provide explanations that are understandable to individual users.
  • Explanation Accuracy: The explanation correctly reflects the system’s process for generating the output.
  • Knowledge Limits: The system only operates under conditions for which it was designed or when the system reaches a sufficient confidence in its output.
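
To make the first three principles concrete, here is a minimal sketch in Python, assuming a scikit-learn logistic regression on a public dataset; the model choice, wording, and the `predict_with_explanation` helper are illustrative assumptions, not a method NIST prescribes.

```python
# Sketch of the Explanation / Meaningful / Explanation Accuracy
# principles: every output ships with reasons a user can read.
# Assumes scikit-learn; all names here are illustrative.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
model = LogisticRegression(max_iter=1000).fit(X, data.target)

def predict_with_explanation(x, top_k=3):
    """Return a label plus the features that most influenced it."""
    label = data.target_names[model.predict([x])[0]]
    # For a linear model, coefficient * feature value is exactly each
    # feature's contribution to the decision score, so the explanation
    # faithfully reflects how the output was produced.
    contributions = model.coef_[0] * x
    top = np.argsort(np.abs(contributions))[::-1][:top_k]
    reasons = [
        f"'{data.feature_names[i]}' pushed the score "
        f"{'toward' if contributions[i] > 0 else 'away from'} "
        f"'{data.target_names[1]}' ({contributions[i]:+.2f})"
        for i in top
    ]
    return label, reasons

label, reasons = predict_with_explanation(X[0])
print(f"Prediction: {label}")
for reason in reasons:
    print(f"  - {reason}")
```

For a linear model the explanation is exact, satisfying Explanation Accuracy by construction; for more complex models, post-hoc tools such as SHAP or LIME approximate the same idea, and explanation accuracy becomes a property to verify rather than a given.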

In effect, these principles boil down to two straightforward questions:

  • Can the system explain how it reached its conclusions in ways users can understand?
  • Can the system fail gracefully when asked to perform a task it wasn’t designed for?
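
The second question maps onto the Knowledge Limits principle. Continuing the sketch above, a minimal way to fail gracefully is to abstain whenever the model’s confidence falls below a threshold; the 0.9 cutoff and the `predict_or_abstain` helper are illustrative assumptions a real system would calibrate and harden for its domain.

```python
# Sketch of the Knowledge Limits principle: answer only under
# conditions the system was designed for; otherwise say so.
# Reuses `model`, `data`, and `X` from the sketch above.
def predict_or_abstain(x, threshold=0.9):
    """Return a prediction only when confidence clears the threshold."""
    proba = model.predict_proba([x])[0]
    if proba.max() < threshold:
        # Failing gracefully: report uncertainty instead of guessing.
        return f"No answer: confidence {proba.max():.2f} is below {threshold:.2f}."
    return f"{data.target_names[proba.argmax()]} (confidence {proba.max():.2f})"

print(predict_or_abstain(X[0]))
```

Abstaining is not the only option; out-of-distribution checks or routing to a human reviewer serve the same principle. The point is that a declined answer is itself a transparent, trustworthy output.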

If a system can do these things, then it will have the necessary transparency to create trust in its behavior and confidence in its conclusions. As we explain in our Introduction to XAI, transparency is key to mitigating the risk of unintended consequences when building and deploying AI solutions.

AI solutions can be enormously complex, but that doesn’t mean they have to be inexplicable black boxes. NIST’s four principles show how AI solutions can be explainable and inspire trust and confidence. Building these solutions requires great skill and diverse experience. At Excella, we have the expertise to build on NIST’s foundation and deliver trustworthy XAI solutions.

Curious about XAI? Learn how interpretability and explainability are key to staying accountable to customers, building trust, and making decisions with confidence in our Introduction to XAI eBook.


