
June 07, 2021

Why Government Agencies Need to Incorporate Explainable AI in 2021


In a world fueled by digital data, the use of artificial intelligence is prolific, from automating human processes to discovering hidden insights at scale and speed. Machines can perform many tasks far more efficiently and reliably than humans, and everyday life increasingly resembles science fiction as a result. This inevitably sparks concern about the controls, or lack thereof, available to inspect these advanced technologies and ensure they are used responsibly.

Consumers want reassurance about ethical use and fairness related to AI. Businesses need to mitigate the risk of unintended consequences when employing these advanced, complex solutions. Enter: Explainable AI, or XAI, an attempt to create transparency in the “black box” of artificial intelligence.

Can you confidently answer the simple questions below about your current AI solutions?

  • Why did the AI model make a specific decision or prediction?
  • When the result is unexpected, why did the model choose that outcome instead?
  • How much confidence can be placed in the AI model results?
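
If you can't, a first step is to instrument your models with basic explainability checks. The sketch below is a minimal illustration of the first and third questions, assuming a Python scikit-learn workflow; the public dataset and simple model are placeholders chosen for illustration, not anything from this article.

    # A minimal sketch of two basic XAI checks, assuming a scikit-learn
    # workflow. The dataset and model are illustrative placeholders.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Train a simple classifier on a public tabular dataset.
    data = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, random_state=0
    )
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # "How much confidence can be placed in the result?"
    # predict_proba exposes the model's own probability estimate.
    proba = model.predict_proba(X_test[:1])[0]
    print(f"Predicted class {proba.argmax()} with probability {proba.max():.2f}")

    # "Why did the model make a specific decision?"
    # Permutation importance measures how much shuffling each feature
    # degrades accuracy on held-out data, ranking the inputs that
    # drive the model's behavior.
    result = permutation_importance(
        model, X_test, y_test, n_repeats=10, random_state=0
    )
    ranked = sorted(
        zip(data.feature_names, result.importances_mean),
        key=lambda pair: pair[1],
        reverse=True,
    )
    for name, importance in ranked[:5]:
        print(f"{name}: {importance:.4f}")

Note that permutation importance explains the model's overall behavior rather than a single prediction; per-decision attribution tools such as SHAP or LIME address the individual-prediction questions more directly.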

Continue reading on NextGov.

Learn more about Explainable AI via our complimentary XAI eBook.

