The concept of artificial intelligence (AI) has been around since the middle of the 20th century, but recent applications have generated unprecedented buzz. From ChatGPT to deepfakes, AI has quickly sidled its way into everyday life and caused a fair amount of handwringing in the process.
Of course, not all artificial intelligence is created equal. Machines and algorithms are well-suited to perform certain tasks quickly and reliably, but only if they are built and trained with ethics in mind. Let’s dive into three components of ethical, responsible artificial intelligence.
1. Human oversight
Artificial intelligence is particularly well-suited to quickly detecting and highlighting patterns in large data sets. However, humans must remain in the loop to ensure this data is used ethically. Before deploying AI technology, humans must be involved in designing the models to ensure the data is collected and used intentionally and fairly. They must also actively monitor for bias as the AI algorithm learns and evolves. Because bias is unavoidable, it must be acknowledged, accepted, and mitigated so users can apply ethical standards when making data-driven decisions. For example:
- Risk-assessment algorithms used to forecast whether people previously arrested will be arrested again have demonstrated racial bias, while
- Models used to approve credit limits have demonstrated gender bias, giving women lower borrowing limits.
Having human oversight in designing and monitoring the model and outputs is critical to minimize bias and help users make fair and ethical decisions.
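One concrete form this oversight can take is routinely comparing a model's decision rates across demographic groups. The sketch below is a minimal, hypothetical example (the group labels, decision log, and 0.8 threshold are illustrative assumptions, the last being a common rule of thumb, not a legal standard) of how a monitoring step might surface the kind of credit-limit disparity described above.

```python
from collections import defaultdict

def approval_rates_by_group(decisions):
    """Compute the approval rate for each demographic group.

    `decisions` is a list of (group, approved) pairs, where `approved`
    is True when the model granted the application.
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group approval rate.

    A common rule of thumb flags ratios below 0.8 for human review.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical decisions logged from a deployed model.
log = [("A", True), ("A", True), ("A", False), ("A", True),
       ("B", True), ("B", False), ("B", False), ("B", False)]
rates = approval_rates_by_group(log)
ratio = disparate_impact_ratio(rates)
if ratio < 0.8:
    print(f"Possible bias: disparate impact ratio {ratio:.2f}")
```

A check like this does not remove bias on its own; it gives the humans in the loop a trigger for review before the model's outputs cause harm.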
According to a recent survey, more rigorous oversight tends to breed greater success with AI. Whether the situation calls for model monitoring or a human-in-the-loop approach to decision making, training both the humans who will oversee the technology and the AI model itself can lead to more ethical outcomes. Of companies that have achieved what they consider “real success” with AI, more than 90 percent say they conduct ethics training for their technologists.
2. Explainability
Explainability, another cornerstone of ethical AI, refers to how easily a user can understand how a model reaches its conclusions. For AI to be used ethically and responsibly, even a non-expert must be able to understand why the model produced a given output. Explainability is not a yes/no property, however; it falls on a spectrum, and where a model sits on that spectrum can significantly affect outcomes. As part of the design process, stakeholders must have explicit conversations about how explainable the AI model needs to be for the specific use case and what the implications would be if the model is not transparent enough.
Consider the following use case: the Department of Health and Human Services (HHS) is the largest grant-making organization in the federal government, and its Office of Inspector General (OIG) oversees those programs. The OIG’s Grants Analytics Portal (GAP) brings together several data sets about grants and uses an AI model to flag grants for OIG agents, evaluators, and auditors. To analyze the flagged data for fraud, waste, or abuse, users must be able to understand why a grant was flagged. In use cases like this, where a lack of explainability and transparency could lead to harm or legal action, explainability is non-negotiable.
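To make the explainability spectrum concrete, here is a deliberately simple sketch of what a fully explainable flagging rule looks like. This is not HHS OIG's actual model; the feature names, weights, and threshold are hypothetical. The point is that every flag decomposes into named, auditable contributions, which is exactly what a black-box model cannot offer a reviewer.

```python
# Hypothetical, simplified flagging rule: scores a grant on a few
# named risk features so a reviewer can see exactly why it was flagged.
WEIGHTS = {
    "late_financial_reports": 2.0,
    "prior_audit_findings": 3.0,
    "spend_rate_anomaly": 1.5,
}
THRESHOLD = 3.0

def explain_flag(grant):
    """Return (flagged, contributions), where `contributions` maps each
    feature to its share of the risk score, making the decision auditable."""
    contributions = {f: WEIGHTS[f] * grant.get(f, 0) for f in WEIGHTS}
    score = sum(contributions.values())
    return score >= THRESHOLD, contributions

flagged, why = explain_flag({"prior_audit_findings": 1, "spend_rate_anomaly": 1})
# `why` shows each feature's contribution, so a reviewer can trace the flag.
```

A transparent scoring rule like this will often be less accurate than a complex model; choosing where on that trade-off a system should sit is precisely the design conversation stakeholders need to have.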
3. Security
Another ethical concern surrounding artificial intelligence is security. After just a few months in the spotlight, a bug in ChatGPT exposed user data, a reminder that organizations relying on an outside model must still manage their own security testing and risk.
Additionally, organizations must be mindful of how external data inputs can introduce security vulnerabilities. Adversarial machine learning attacks are a good example: bad actors search for small perturbations to a model’s inputs that produce undesired outputs (e.g., tricking an image classifier by changing a single pixel of the input image). If the model learns from an online data source, attackers may also try to poison that source, as happened with Microsoft’s Tay chatbot in 2016.
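The mechanics of an adversarial input attack can be shown on a toy model. The sketch below uses a made-up two-feature linear classifier, not a real neural network: it nudges the input along the direction that most increases the score (a simplified version of gradient-based attacks such as FGSM) until the predicted class flips, even though the input has barely changed.

```python
def classify(weights, x, bias=0.0):
    """Toy linear classifier: returns 1 if w.x + b >= 0, else 0."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score >= 0 else 0

def adversarial_perturbation(weights, x, bias=0.0, step=0.01):
    """Nudge the input along the score's gradient (the weight vector)
    until the predicted class flips. Returns the perturbed input, or
    None if no flip is found within the step budget."""
    original = classify(weights, x, bias)
    direction = 1 if original == 0 else -1  # push the score across 0
    x_adv = list(x)
    for _ in range(10_000):
        x_adv = [xi + direction * step * w for xi, w in zip(x_adv, weights)]
        if classify(weights, x_adv, bias) != original:
            return x_adv
    return None

w = [0.6, -0.4]
x = [1.0, 2.0]                      # score = -0.2, classified as 0
x_adv = adversarial_perturbation(w, x)
# x_adv stays close to x yet is classified as 1
```

Real attacks on deep networks work the same way in principle, but use the network's gradients and constrain the perturbation so it remains imperceptible to humans.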
When considering the use of open-source data, organizations must weigh the risks of making data acquisition decisions based on expediency or necessity. They must also determine what guardrails are needed to monitor the real-world impact and behavior of outside data sources within the AI model, so that ethical decision-making is not compromised.
With all of this in mind, security must remain a priority when developing ethical AI. By incorporating robust DataOps and MLOps into the model development process, alongside sufficient bias and security training for both the model and its users, organizations can significantly reduce their vulnerability to attacks.
The bottom line
With any cutting-edge technology, a certain level of judiciousness is warranted. Artificial intelligence holds tremendous promise for everything from automation to decision making, but only if it is designed and used with ethics in mind. The European Union, for one, has released ethical guidelines for what it considers “trustworthy AI,” and other governing bodies are sure to follow. But rather than waiting for the many global players to weigh in, or, worse yet, facing legal action of their own, organizations should protect themselves and their users by prioritizing ethics now as they begin to benefit from the growing array of AI applications.