As AI and AI applications look set to become even more commonplace — forecasters predict the global AI market will be worth $1.8tn by 2030 — all eyes are now on its governance.

Although this field of computer science has advanced many areas of our daily lives, it is not without flaws. Artificial intelligence bias is a very real shortcoming, and it can have far-reaching consequences.

In this blog, we'll explore how to mitigate AI bias to help you drive ethical practices in your organization.

What is AI bias?

As we revealed previously, inadequate testing of AI models and a lack of diversity within teams allow unconscious biases to creep into AI algorithms and machine learning models.

For a more in-depth look, revisit our previous blog on how AI bias happens.

Examples of the potentially damaging effects of AI bias on certain groups and demographics were highlighted in a US Department of Commerce study, which found that facial recognition algorithms have a propensity to misidentify African-American people and other people of color.

When used for law enforcement purposes, biased forms of this technology could have a harmful impact on the lives of these groups, potentially leading to wrongful arrests — and in turn, resulting in a lack of trust in the criminal justice system.

Another example of AI-based discrimination was revealed in a study by UC Berkeley, which found that biased algorithms used by financial services companies meant borrowers from minority communities, including Latino and Black customers, were routinely charged higher interest rates for mortgages.

Humans have a role to play in mitigating AI bias

As humans choose the data that machine learning algorithms use and decide how the results of those algorithms will be applied, human biases are responsible for artificial intelligence bias.

And so, it is also humans who must champion ethical AI and work towards bias mitigation.

Mitigating bias and limiting the potentially adverse impact of automated decisions in real-world situations is now a priority for data science leaders, policymakers, and key decision-makers.

Legislation is already underway in Europe in the form of the European Union’s Artificial Intelligence Act, an attempt to introduce a common regulatory and legal framework for AI. Combined with the EU’s General Data Protection Regulation, Digital Services Act, and Digital Markets Act, the AI Act aims to improve public confidence and trust in technology.

How key stakeholders can mitigate AI bias

There are several approaches that AI professionals who build machine learning models and algorithms — as well as organizations that buy and use AI, and governments that regulate it — can take to mitigate bias, and limit its potentially adverse impact.

Some recommended methods and approaches include:

  • Explainability: Explainable AI (XAI) is a methodology and set of processes that provides transparent reasons for outcomes that affect customers in industries such as financial services. Explanations accompany the AI/ML output to address concerns and challenges.
    Lenders or credit underwriters need to ensure that protected attributes such as ancestry, color, disability, ethnicity, gender, or gender identity are not being used in machine learning models, whether directly or through proxies.
  • Human-in-the-loop: This method combines supervised machine learning and active learning with humans involved in the training and testing stages of building an algorithm. By bringing together human and machine intelligence, a continuous feedback loop is created, which enables the algorithm to deliver better results each time.
    This methodology can be used for any deep learning AI project including natural language processing (NLP) and alongside content moderation systems to analyze user-generated content.
  • Pre-processing algorithms: Pre-processing algorithms are bias mitigation algorithms applied to training data with the aim of improving fairness metrics. Because they intervene at the earliest point in the pipeline, they offer the most flexibility to correct bias.
    Pre-processing algorithms can be employed whenever a user is allowed to modify the training data before model building.
    Note, however, that pre-processing cannot target fairness metrics that depend on model predictions, such as predictive parity, because no model exists yet at that stage.
  • Choosing machine learning training data that is suitable, large, and representative enough to neutralize (at least to a meaningful extent) different kinds of AI bias, such as prejudice bias, measurement bias, and sample bias.
  • Rigorously testing the output (results) of machine learning systems and verifying that it does not incorporate AI bias rooted in the data sets or the algorithm itself.
  • Continuously monitoring machine learning systems in production, in order to keep AI bias from creeping in over time.
  • Using resources to visually probe the behavior of trained machine learning models (e.g., Google’s What-If Tool).
  • Establishing data collection methods that take into consideration different opinions on data point labeling options. More inclusion means greater machine learning model flexibility — and a reduced likelihood of AI bias.
  • Fully understanding the training data used for machine learning models, including what it includes, where it comes from, and how it was generated. Training data sets that contain inaccurate or unfair classes and labels are a major source of AI bias.
  • Continuously monitoring the machine learning model and allocating resources (priority, people, time, budget) to making ongoing improvements based on feedback and observations.
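To make the explainability point concrete: one practical pre-flight check is to screen candidate features for strong correlation with a protected attribute, flagging likely proxies before they ever reach a model. The sketch below is purely illustrative — the feature names, toy data, and 0.8 threshold are assumptions, not taken from any real lending system.

```python
# Hypothetical proxy-screening sketch: flag features that correlate strongly
# with a protected attribute before they reach a model.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def flag_proxies(features, protected, threshold=0.8):
    """Return the names of features whose |correlation| with the protected
    attribute exceeds the threshold, so they can be reviewed as proxies."""
    return [name for name, values in features.items()
            if abs(pearson(values, protected)) > threshold]

# Toy data: 'zip_segment' tracks the protected attribute exactly,
# while 'income' barely correlates with it at all.
protected = [0, 0, 1, 1, 0, 1, 0, 1]
features = {
    "zip_segment": [0, 0, 1, 1, 0, 1, 0, 1],
    "income":      [55, 60, 58, 62, 57, 61, 59, 54],
}
print(flag_proxies(features, protected))  # → ['zip_segment']
```

A flagged feature is not automatically disallowed — the point is to trigger human review of whether it encodes a protected class.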
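The human-in-the-loop bullet above rests on active learning: the model routes the examples it is least certain about to a human labeler, and the corrected labels feed the next training round. A minimal uncertainty-sampling sketch, with hypothetical item names and probabilities chosen only for illustration:

```python
# Minimal active-learning step: pick the k unlabeled items whose predicted
# positive-class probability is closest to 0.5 (i.e., the model is least
# sure), and hand those to a human reviewer.

def uncertainty_sample(items, predict_proba, k=2):
    """Return the k items with predicted probability nearest 0.5."""
    return sorted(items, key=lambda x: abs(predict_proba(x) - 0.5))[:k]

# Stand-in for a trained model's probability estimates (assumed values).
scores = {"a": 0.95, "b": 0.48, "c": 0.10, "d": 0.55}

print(uncertainty_sample(list(scores), scores.get))  # → ['b', 'd']
```

In a real pipeline the human labels for the selected items would be appended to the training set, the model retrained, and the loop repeated — the continuous feedback loop the bullet describes.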
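One well-known pre-processing technique is reweighing: each training example gets a weight so that, after weighting, group membership and label are statistically independent. This is a simplified sketch of that idea (the group/label data is a toy example, and production work would typically use a maintained library rather than hand-rolled weights):

```python
# Simplified 'reweighing' pre-processing sketch: weight each example by
# P(group) * P(label) / P(group, label), which balances favorable-outcome
# rates across groups in the weighted training set.
from collections import Counter

def reweigh(groups, labels):
    """Return one weight per training example."""
    n = len(labels)
    p_group = Counter(groups)
    p_label = Counter(labels)
    p_joint = Counter(zip(groups, labels))
    return [
        (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy data: group "a" gets the favorable label (1) more often than group "b".
groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 1, 0, 1, 0, 0]

weights = reweigh(groups, labels)
print([round(w, 6) for w in weights])  # → [0.75, 0.75, 1.5, 1.5, 0.75, 0.75]
```

After weighting, each group's weighted favorable-outcome rate is equal, which is exactly what the model-building step then trains on.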
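Rigorously testing model output usually means computing fairness metrics on predictions broken down by group. One of the simplest is the demographic parity difference — the gap between the highest and lowest positive-prediction rates across groups. A small sketch (the metric choice and toy data are illustrative; real audits use several metrics side by side):

```python
# Audit sketch: measure the gap in positive-prediction rates across groups.
# A value near 0 means the model grants favorable outcomes at similar rates.

def demographic_parity_difference(predictions, groups):
    """Max minus min positive-prediction rate across groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values())

# Toy predictions: group "x" is approved 75% of the time, group "y" only 25%.
predictions = [1, 1, 1, 0, 0, 1, 0, 0]
groups = ["x", "x", "x", "x", "y", "y", "y", "y"]

print(demographic_parity_difference(predictions, groups))  # → 0.5
```

Run as part of a test suite, a check like this can gate deployment whenever the gap exceeds an agreed threshold — and rerun on production traffic, it doubles as the continuous monitoring the later bullets call for.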

Mitigating AI bias is an ongoing practice

In addition to the recommendations above, the U.S. National Institute of Standards and Technology (NIST) recommends that data scientists and other AI professionals look beyond machine learning processes and training data for potential sources of AI bias.

This wider scope should include broader societal context, which shapes and influences how the technology is conceived and developed. According to Reva Schwartz, a principal investigator of AI bias at NIST:

“Context is everything. AI systems do not operate in isolation. They help people make decisions that directly affect other people’s lives. If we are to develop trustworthy AI systems, we need to consider all the factors that can chip away at the public’s trust in AI. Many of these factors go beyond the technology itself to the impacts of the technology, and the comments we received from a wide range of people and organizations emphasized this point.”

It is also important for organizations — both those that build AI systems and those that deploy them — to see the general public as an ally in the fight against AI bias, similar to how cybersecurity professionals actively encourage and enable users to spot and report vulnerabilities (through “bug bounty” and other programs).

For example, one AI expert urges organizations to create a grievance process that lets individuals who believe they or others have been harmed (or could be harmed) by biased AI decision-making escalate their concerns about inequities in an organized, efficient, and solution-focused way.

Since all parties have the same objectives — accuracy, appropriateness, and fairness — the engagement and dialogue should be collaborative, rather than adversarial.

The final word

There are many positive — and in some cases profound — use cases for AI. For example, AI is driving breakthroughs in medical research and healthcare, establishing access to education for millions of people in developing nations, and helping combat climate change. However, AI is not a panacea.

Like many other technologies, it is developing and evolving in ways that we can anticipate, and probably in other ways that we cannot. Working earnestly at all levels to identify, reduce, and ideally prevent potential biases is a critically important way to help ensure that the ongoing story of AI is characterized by solutions instead of setbacks.

To learn more about AI, explore the Sitecore Knowledge Center.