
June 22, 2023

When (and Why) AI Needs a Human in the Loop



Written by

Excella

In the past five years, artificial intelligence (AI) adoption has more than doubled, fueled in part by the pandemic. A growing number of AI solutions are also free to use, with content generator ChatGPT garnering the most attention of late. Considering the many benefits of AI, it’s no surprise that businesses are flocking to the technology, free or otherwise. But while off-the-shelf AI models can be fine for inconsequential or low-risk tasks, greater caution is warranted when they are applied to higher-stakes use cases.

Even when companies tap contractors to build AI solutions specifically for their business, risks of bias, security threats, and more remain. Too often, these solutions are built and deployed without proper due diligence or oversight. Despite the promises of artificial intelligence, many applications still require humans in the loop. Let’s take a look at the three stages of an AI project and the role humans should play in each.

1. Determining the use case

In the EU, a law has been proposed to regulate AI precisely because of the risks it can pose. Should the law pass, it will be the first major regulation of the technology to be put in place. In a nutshell, the law aims to differentiate between AI that creates unacceptable risk (for which use will be prohibited), high risk (for which use will be regulated), and low risk. These tiers represent a useful framework for any company determining whether AI is applicable for the use case at hand.

AI is best suited for repetitive tasks with well-defined goals. In the fraud detection context, this often means using AI/ML to find anomalies and patterns in vast data sets. A machine can sort through this information much faster than a human tasked with reviewing thousands of documents manually. But once the algorithm homes in on a particular subset—say, those with the highest likelihood of fraud—expert review is still required. “The AI can act as an automated research tool for investigators by highlighting patterns. It makes the investigators’ jobs more efficient, but it does not replace their judgment or adherence to protocols,” said Mary Scott Sanders, AI/ML Solutions Architect at Excella.
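To make that division of labor concrete, here is a minimal sketch of the triage pattern, assuming transaction data already prepared as a pandas DataFrame; the scikit-learn anomaly detector and the review fraction are illustrative choices, not a prescribed implementation.

```python
# Minimal sketch: surface the most anomalous transactions for human investigators.
# Assumes a pandas DataFrame of numeric transaction features; the review fraction
# is illustrative, not a recommended setting.
import pandas as pd
from sklearn.ensemble import IsolationForest

def flag_for_review(transactions: pd.DataFrame, review_fraction: float = 0.01) -> pd.DataFrame:
    """Score transactions and return the slice most worth a human look."""
    model = IsolationForest(contamination=review_fraction, random_state=0)
    # In scikit-learn's convention, lower score_samples values are more anomalous.
    scores = model.fit(transactions).score_samples(transactions)
    n_review = max(1, int(len(transactions) * review_fraction))
    return transactions.assign(anomaly_score=scores).nsmallest(n_review, "anomaly_score")
```

The model only prioritizes the queue; the decision to pursue a case stays with the investigator.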

2. Data collection and model design

When companies create their own algorithms, they must be diligent about what data they are trained on. Regarding data collection, human oversight means ensuring that each subset of the target audience is represented appropriately. Models simply average the data that is fed to them, with no awareness of what is important from a human moral or ethical perspective. If your model is going to provide results across demographics, for instance, you should inspect your dataset to see whether each group is represented fairly. Various forms of sampling exist to help ensure data is representative—but the development team must be proactive about applying them. “There are some data points that should be avoided altogether,” says Mary Scott. Zip code, for example, could provide predictive power for a fraud detection model, but because of the United States’ history of racial segregation, using zip code as an input will result in biased output.
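As a rough illustration of that kind of human-driven data check, the sketch below assumes a pandas DataFrame with a hypothetical demographic_group column and a zip_code field to be excluded; the 5% minimum-share threshold is arbitrary and would be set by the team.

```python
# Minimal sketch of a pre-training data review. The demographic_group and zip_code
# column names are hypothetical, and the minimum-share threshold is arbitrary.
import pandas as pd

PROXY_FEATURES = ["zip_code"]  # features known to encode protected attributes

def prepare_training_data(df: pd.DataFrame, min_share: float = 0.05) -> pd.DataFrame:
    """Warn about underrepresented groups and drop known proxy features."""
    shares = df["demographic_group"].value_counts(normalize=True)
    underrepresented = shares[shares < min_share]
    if not underrepresented.empty:
        # A human decides what to do next: resample, collect more data, or document the gap.
        print(f"Underrepresented groups (<{min_share:.0%} of rows): {underrepresented.to_dict()}")
    return df.drop(columns=PROXY_FEATURES, errors="ignore")
```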

After a model is trained, even if it is trained on a representative dataset, it can have disparate impacts across demographic groups. This necessitates additional inspection and can be addressed using methods to enforce fairness. At each step, humans are responsible for inspecting and addressing any moral or ethical issues that can arise when developing and implementing AI.
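One simple way humans can inspect for disparate impact is to compare the model’s positive-prediction rate across groups, as in the sketch below; the widely cited 80% rule of thumb is a heuristic, not a definition of fairness.

```python
# Minimal sketch: compare positive-prediction rates across demographic groups.
# predictions is a 0/1 Series of model outputs; groups is an aligned Series of group labels.
import pandas as pd

def disparate_impact_ratio(predictions: pd.Series, groups: pd.Series) -> float:
    """Ratio of the lowest group's positive rate to the highest group's."""
    rates = predictions.groupby(groups).mean()
    return float(rates.min() / rates.max())
```

A ratio well below roughly 0.8 is a common signal to dig deeper or apply fairness-enforcing methods before deployment.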

3. Monitoring for bias and security

After an AI model has been deployed, human expertise is still required. Because models continue to learn and the data they see evolves, humans should be in the loop to monitor for any bias that is introduced over time. AI tasked with evaluating text, for example, may become inaccurate or biased as the way people use language continuously evolves. Thus, even once the model has been trained and deployed, human oversight is required to monitor for disparate outcomes across demographic groups.
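In practice, that monitoring can be as simple as periodically comparing per-group outcome rates in production against a baseline, as in this hypothetical sketch; how the rates are gathered and the tolerance value are assumptions.

```python
# Minimal sketch of a recurring outcome check. baseline and recent are hypothetical
# Series of positive-outcome rates indexed by demographic group, pulled from production logs.
import pandas as pd

def groups_drifting(baseline: pd.Series, recent: pd.Series, tolerance: float = 0.05) -> pd.Series:
    """Return groups whose outcome rate has moved more than `tolerance` since baseline."""
    delta = (recent - baseline).abs()
    return delta[delta > tolerance]
```

Any flagged group should trigger a human review of recent decisions, not an automatic adjustment.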

Unfortunately, research suggests most organizations fail to take the steps necessary to monitor for bias. According to a recent survey, just one-quarter of organizations are taking steps to reduce bias in their AI models, while just one-third are tracking performance variations and model drift. Last but not least, continuous monitoring should also take place from a security perspective. AI is fueled by data, so organizations must be sure to maintain good cybersecurity hygiene, including auditing and encryption. Never simply assume your AI is secure.
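For drift specifically, a lightweight starting point is a Population Stability Index (PSI) check comparing a feature’s training distribution with recent production data, as sketched below; the bin count and the alert threshold are illustrative assumptions.

```python
# Minimal sketch: Population Stability Index (PSI) for one numeric feature,
# comparing training data ("expected") with recent production data ("actual").
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    edges = np.histogram_bin_edges(expected, bins=bins)
    e = np.histogram(expected, edges)[0] / len(expected) + 1e-6  # smooth empty bins
    a = np.histogram(np.clip(actual, edges[0], edges[-1]), edges)[0] / len(actual) + 1e-6
    return float(np.sum((a - e) * np.log(a / e)))
```

A common rule of thumb is that a PSI above roughly 0.2 signals drift worth a human look.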

The bottom line is that AI is not ready – and may never be – to fly solo when high risk is involved. Without proper oversight, relying on AI can be akin to diagnosing yourself using information found online. Instead, use AI as a tool that you provide to your human experts to make better decisions. With this approach, you will be better positioned to reap the many benefits of AI without exposing your organization to unnecessary risk.

