
Why Government Agencies Need to Incorporate Explainable AI in 2021


June 07, 2021


In a world fueled by digital data, the use of artificial intelligence is prolific—from the automation of human processes to discovering hidden insights at scale and speed. Machines can do many tasks far more efficiently and reliably than humans, resulting in everyday life that increasingly resembles science fiction. This inevitably sparks concern about controls—or lack thereof—to inspect and ensure these advanced technologies are used responsibly.

Consumers want reassurance about ethical use and fairness related to AI. Businesses need to mitigate the risk of unintended consequences when employing these advanced, complex solutions. Enter: Explainable AI, or XAI, an attempt to create transparency in the “black box” of artificial intelligence.

Can you confidently answer the simple questions below about your current AI solutions?

  • Why did the AI model make a specific decision or prediction?
  • When the result is unexpected, why did the model choose that alternative?
  • How much confidence can be placed in the AI model results?
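These questions are exactly what XAI techniques aim to answer. As a minimal sketch of the idea (the model, weights, and applicant data below are entirely hypothetical, not drawn from any agency system), a linear model's prediction can be decomposed into per-feature contributions, making "why did the model decide this?" directly inspectable:

```python
# Minimal sketch: explain a linear model's score by ranking each
# feature's contribution. All weights and inputs are illustrative.

def explain_prediction(weights, bias, features):
    """Return the model score and each feature's contribution,
    ranked by absolute impact (largest first)."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, ranked

# Hypothetical loan-screening model and applicant
weights = {"income": 0.8, "debt": -1.2, "tenure": 0.3}
applicant = {"income": 2.0, "debt": 1.5, "tenure": 4.0}

score, ranked = explain_prediction(weights, bias=-0.5, features=applicant)
for name, contribution in ranked:
    print(f"{name}: {contribution:+.2f}")
```

For more complex, non-linear models, tools such as SHAP and LIME generalize this same idea: attributing a prediction to the input features that drove it.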

Continue reading on NextGov.

Learn more about Explainable AI via our complimentary XAI eBook.

Download XAI eBook