

June 07, 2021

Why Government Agencies Need to Incorporate Explainable AI in 2021



Consumers want reassurance about ethical use and fairness related to AI.

In a world fueled by digital data, the use of artificial intelligence is prolific, from automating human processes to discovering hidden insights at scale and speed. Machines can perform many tasks far more efficiently and reliably than humans, and everyday life increasingly resembles science fiction as a result. This inevitably sparks concern about the controls, or lack thereof, in place to inspect these advanced technologies and ensure they are used responsibly.

Consumers want reassurance about ethical use and fairness related to AI. Businesses need to mitigate the risk of unintended consequences when employing these advanced, complex solutions. Enter: Explainable AI, or XAI, an attempt to create transparency in the “black box” of artificial intelligence.

Can you confidently answer the simple questions below about your current AI solutions?

  • Why did the AI model make a specific decision or prediction?
  • When the result is unexpected, why did the model make that choice instead of the expected one?
  • How much confidence can be placed in the AI model results?
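A common first step toward answering these questions is to measure how strongly each input influences a model's predictions. The sketch below is a minimal illustration, assuming a scikit-learn classifier trained on a stock dataset (both are placeholders, not any particular agency system); it uses permutation importance to surface the features the model leans on most.

```python
# Minimal XAI-style sketch: which inputs does the model actually rely on?
# The dataset and model below are illustrative placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the drop in accuracy:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

top = sorted(zip(X.columns, result.importances_mean), key=lambda item: -item[1])[:5]
for name, importance in top:
    print(f"{name}: {importance:.3f}")
```

Feature-importance scores like these do not fully open the black box, but they give reviewers a concrete, repeatable answer to what drove a model's behavior, which is where most XAI efforts begin.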

Continue reading on NextGov.

Learn more about Explainable AI via our complimentary XAI eBook.

Download XAI eBook
