
The Importance of Ethical AI

As we integrate intelligent technologies further into our lives and society, an informed understanding of how they work, the data they collect, and what drives the actions they take becomes increasingly important.

What is Included in a Responsible and Ethical AI Solution?

  • A detailed assessment of potential risks
  • A defined plan of action to avoid or minimize those identified risks
  • Transparency built into the solution's execution, leaving no inexplicable black boxes
  • Assurance that humans remain in the loop whenever decisions are made

Excella’s Ethics Guidelines

Excella uses the following guidelines when building AI solutions to ensure a responsible and ethical outcome.

Guidelines to Assess the Ethical Impacts of an AI Project:

What is the intended use case for the AI solution?

Example: Potential for the solution to be used to manipulate a person's behavior in ways that may result in negative physical or psychological consequences.

Example: Potential to use the solution intentionally or unintentionally to exploit vulnerable population groups.

Example: A solution that has an impact on a person’s health and safety, such as a medical device or critical infrastructure.

What is the potential impact of the AI solution on individual and/or community welfare?

Example: Potential for the solution to be used to control population groups without their participation or knowledge.

Example: What unintended consequences, positive or negative, could this solution have?

What is the potential for bias in the AI solution?

Example: Use cases or population groups that are over- or underrepresented in the training data (based on expected production use).

Example: Bias introduced over time in production with no mechanism to monitor or detect it.
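
For illustration, here is a minimal Python sketch of how the first kind of check might look: comparing how often each group appears in the training data against its expected share of production traffic. The group labels, expected shares, and the 50% flagging threshold are all illustrative assumptions, not a prescribed standard.

    from collections import Counter

    # Hypothetical group labels attached to each training record; the
    # categories and values here are illustrative assumptions.
    training_groups = ["18-34", "18-34", "18-34", "35-54", "35-54", "18-34"]

    # Assumed share of each group in expected production traffic.
    expected_share = {"18-34": 0.40, "35-54": 0.35, "55+": 0.25}

    counts = Counter(training_groups)
    total = sum(counts.values())

    for group, share in expected_share.items():
        observed = counts.get(group, 0) / total
        # Flag groups whose training share falls well below their production share.
        if observed < 0.5 * share:
            print(f"Underrepresented: {group} "
                  f"(train {observed:.0%} vs. expected {share:.0%})")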

Guidelines for Building a Responsible and Ethical AI Solution:

Solution is Designed and Built to Be Explainable and Transparent 

All code and workflows can be interpreted and understood without advanced technical knowledge. This often requires additional steps in the design and build of the solution to ensure explainability is achieved and maintained.
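
As one hedged example of what such a step can look like, the Python sketch below ranks input features by permutation importance using scikit-learn, producing a plain-language summary a non-specialist can review. The model, data, and feature names are illustrative stand-ins, not a specific implementation.

    # A minimal explainability sketch: rank features by how much shuffling
    # each one degrades model performance (permutation importance).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    X, y = make_classification(n_samples=500, n_features=4, random_state=0)
    feature_names = ["age", "income", "tenure", "usage"]  # hypothetical names

    model = RandomForestClassifier(random_state=0).fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

    # A plain-language ranking that non-specialists can review.
    ranked = sorted(zip(feature_names, result.importances_mean),
                    key=lambda pair: -pair[1])
    for name, score in ranked:
        print(f"{name}: importance {score:.3f}")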

Use MLOps Principles to Ensure the Solution is Sustainable and Secure 

MLOps brings the principles of DevOps to the deployment, monitoring, and governance of AI solutions through a combination of tools, technologies, and practices. We aim to make AI model implementation fast, reliable, and repeatable by automating tests and deployment pipelines, standardizing infrastructure across pre-production and production environments, building in solid security throughout, and automating monitoring to validate that the model continues to work as expected.
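
For instance, one small piece of such a pipeline might be an automated quality gate like the Python sketch below: a test that blocks deployment if a candidate model falls below an assumed accuracy bar. The dataset, model, and threshold are placeholders; a real pipeline would load versioned artifacts instead.

    # Sketch of an automated quality gate, runnable under pytest.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    ACCURACY_BAR = 0.80  # assumed acceptance threshold

    def test_model_meets_accuracy_bar():
        X, y = make_classification(n_samples=1000, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        accuracy = model.score(X_test, y_test)
        # A CI pipeline would fail the build here and block deployment.
        assert accuracy >= ACCURACY_BAR, f"accuracy {accuracy:.2f} below bar"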

Solution Addresses Risks of Potential Bias 

To mitigate the risk of bias in AI solutions, we include processes to assess data quality and confirm that training data appropriately represents all target audiences; testing, tracking, and alerts for deliberate attempts to expose model bias; bias detection and remediation plans; and monitoring for model bias over time.
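
One simple metric that such monitoring could track, sketched below in Python under illustrative assumptions, is the demographic parity gap: the largest difference in positive-prediction rate between any two groups. The predictions, group labels, and alert threshold are placeholders.

    # Bias-monitoring sketch: compare positive-outcome rates across groups.
    def demographic_parity_gap(predictions, groups):
        """Largest gap in positive-prediction rate between any two groups."""
        rates = {}
        for pred, group in zip(predictions, groups):
            totals = rates.setdefault(group, [0, 0])  # [positives, count]
            totals[0] += pred
            totals[1] += 1
        shares = {g: pos / n for g, (pos, n) in rates.items()}
        return max(shares.values()) - min(shares.values())

    preds  = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["A", "A", "A", "B", "B", "B", "B", "B"]
    gap = demographic_parity_gap(preds, groups)
    if gap > 0.2:  # assumed alert threshold for production monitoring
        print(f"Bias alert: positive-rate gap of {gap:.0%} between groups")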

Privacy and Consent Are Confirmed for All Data Used in the Solution

We recommend actively notifying users when they are interacting with an AI system so they are aware of it. We also advocate telling all users when their personal data is collected by a solution and for what purpose.
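
A consent check can be as simple as the Python sketch below, which keeps only records whose owners explicitly agreed to the stated purpose. The record structure and purpose names are illustrative assumptions.

    # Consent-filtering sketch: only consented records reach the pipeline.
    from dataclasses import dataclass

    @dataclass
    class UserRecord:
        user_id: str
        consented_purposes: frozenset  # purposes the user explicitly agreed to

    def records_usable_for(records, purpose):
        """Keep only records with explicit consent for this purpose."""
        return [r for r in records if purpose in r.consented_purposes]

    records = [
        UserRecord("u1", frozenset({"model_training", "analytics"})),
        UserRecord("u2", frozenset({"analytics"})),
    ]
    print([r.user_id for r in records_usable_for(records, "model_training")])
    # -> ['u1']; u2 never reaches the training pipeline.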

Build in ‘Human in the Loop’ Safeguards for Appropriate Oversight 

We design and build AI solutions that maintain an appropriate level of human oversight at all times, including a human override capability should unexpected results or actions occur.  
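
As a minimal illustration, the Python sketch below routes any prediction below an assumed confidence threshold to a human review queue instead of acting on it automatically; the threshold and queue are placeholders for whatever oversight process a real solution defines.

    # Human-in-the-loop sketch: defer low-confidence decisions to a person.
    REVIEW_THRESHOLD = 0.90  # assumed confidence bar for automation

    def route_decision(prediction, confidence, review_queue):
        if confidence >= REVIEW_THRESHOLD:
            return prediction  # confident enough to automate
        review_queue.append((prediction, confidence))
        return None  # defer to a human reviewer

    queue = []
    print(route_decision("approve", 0.97, queue))  # -> 'approve'
    print(route_decision("deny", 0.62, queue))     # -> None, queued for review
    print(queue)                                   # [('deny', 0.62)]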

Benefits

Legislation to regulate artificial intelligence appears inevitable as a way to protect the interests and safety of all citizens and organizations (with defined consequences if breached). For example, the European Union has already introduced a draft AI Act proposing a legal framework for AI, and in the U.S., NIST is developing an Artificial Intelligence Risk Management Framework to improve the management of AI-related risks to individuals, organizations, and society. Until then, adopting voluntary ethics standards to create responsible and ethical AI solutions is a first step organizations can take toward risk mitigation. Do you know the ethical impacts of your AI solution?
