There is a great deal about artificial intelligence (AI) to be impressed with and inspired by, from its role in driving groundbreaking medical and scientific initiatives to the opportunities presented by the emergence of generative AI.

And on a somewhat less profound (but still enjoyable and exciting) level, AI systems and AI-driven tools and technologies are making it faster, easier, and more rewarding to do everything from booking a vacation to choosing the next TV series to binge.

However, despite these advancements in the use of AI and automation, there is growing concern and controversy about the possibility, and in some cases the verified existence, of what is known as AI bias. This is why more and more governments and organizations are now calling for investment in responsible and ethical AI.

In this blog, we dive deeper into this area of data science, exploring the basics of AI bias, how AI bias happens, different types of AI bias, and examples of AI bias.

What is AI bias?

Although the engine that enables and drives AI is complex, understanding the essence of AI bias is simple.

AI bias, which is also known as machine learning bias or algorithm bias, occurs when inherent and erroneous assumptions in the machine learning process skew the output, influencing or limiting the results we are offered.

AI bias, regardless of intent, ultimately makes the result (however that may be defined) anything from questionable and problematic on one end of the spectrum to unreliable, prejudicial, or discriminatory toward certain groups of people on the other. It is also important to realize that AI bias is not an incidental issue: it is a primary problem that will by no means ‘fix itself’ over time.

Gartner estimates that 85% of AI projects produce false results due to bias built into the data or the algorithms, or bias held by the professionals managing those deployments.

And a team of researchers from the University of Southern California’s Information Sciences Institute analyzed two AI databases (ConceptNet and GenericsKB) and discovered bias in up to 38.6% of the ‘facts’ used by AI.

Types of AI bias

There are several types of AI bias:

  • Algorithm bias, which is rooted in inherent problems with the algorithm that performs the calculations that, in turn, generate the machine learning computations.
  • Sample bias, which is caused by problems with the dataset used to train the machine learning model, often because the data is not sufficiently large or representative to effectively ‘teach’ the model (a minimal representativeness check is sketched after this list).
  • Prejudice bias, which occurs when the data used to train the learning model reflects discriminatory, prejudicial, or stereotypical assumptions.
  • Measurement bias, which happens when the underlying data reflects problems with how it was assessed or measured.
  • Exclusion bias, which occurs when essential information is left out of the data that the machine learning model uses (this can happen unintentionally, or if the modelers erroneously fail to recognize relevant data as meaningful).
  • Selection bias, which is similar to sample bias, and occurs when the training dataset is not selected in a way that adequately represents the real-world population the model will serve.
  • Recall bias, which manifests in the data labeling phase, and causes labels to be inconsistently applied due to subjective observations.

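To make sample and selection bias a little more concrete, here is a minimal sketch in Python (using pandas, with a made-up dataset and a hypothetical `gender` column; none of this comes from a specific project) that compares group shares in a training set against the shares in the population the model is meant to serve. Large gaps are one warning sign that the data may not be representative.

```python
import pandas as pd

def representation_gap(train_df: pd.DataFrame, population_shares: dict, column: str) -> pd.DataFrame:
    """Compare group shares in the training data against known population shares.

    Large absolute gaps suggest possible sample/selection bias for that group.
    """
    train_shares = train_df[column].value_counts(normalize=True)
    rows = []
    for group, pop_share in population_shares.items():
        train_share = float(train_shares.get(group, 0.0))
        rows.append({
            "group": group,
            "train_share": round(train_share, 3),
            "population_share": pop_share,
            "gap": round(train_share - pop_share, 3),
        })
    return pd.DataFrame(rows)

# Hypothetical example: a resume-screening training set vs. the applicant population.
train = pd.DataFrame({"gender": ["male"] * 80 + ["female"] * 20})
print(representation_gap(train, {"male": 0.55, "female": 0.45}, "gender"))
```
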
Before moving on in our discussion, it is important to emphasize that the very nature of bias (not just as it applies to AI) depends on individuals or groups being unaware that it exists.

Take exclusion bias, for example, which as noted happens when essential information is left out of the data used by the machine learning model. Why can’t we simply tell modelers, researchers, and other contributors and stakeholders to ensure that their data is sufficiently inclusive?

The answer, of course, is that these people do not know or do not believe that the data is exclusionary; or at least, they do not know or believe that it is exclusionary enough to undermine the machine learning model — and ultimately create AI bias. The same conundrum applies to the other types of AI bias: in most cases, it is not deliberate or explicit. It is embedded and systemic, especially in the case of prejudice bias.

How does AI bias happen?

There are several ways that AI models can become biased, such as:

  • Not enough training data, or not enough good training data, for specific groups.
  • The inconvenient truth that human beings are fundamentally biased — and consequently, so is the data that AI uses for training models.
  • ‘Cleaning’ data to remove bias is extremely difficult to do. For example, while attempts have been made to remove certain data points (e.g., age and race), this has not prevented models from using correlated attributes (e.g., neighborhood and education) as proxies (a brief illustration appears after this list).
  • AI professionals are not an especially diverse group. According to Mitra Best, Technology Impact Leader at PwC US: “Humans write the algorithms to make certain choices: what insights to value, what conclusions to draw, and what actions to take.
     
    Since the AI research community suffers from a dearth in diversity, the biases of the majority, who tend to share certain dominant perspectives, assumptions, and stereotypes, can seep into AI models, inadvertently discriminating against certain groups.”
  • Stringent privacy regulations make it difficult to perform external audits. For example, many organizations are forbidden from sharing the customer data they used to train and develop models for artificial intelligence systems, and those that aren’t forbidden have little incentive to do so.
  • While most people inside and outside the AI modeling world believe that ethical AI and fairness are important, there can be widespread disagreement on what, exactly, fairness means.
     
    In fact, researchers have identified more than 20 definitions of 'fairness' proposed in the past few years alone. What may be deemed equitable by one AI professional or group may be declared inequitable by another.
  • The statistical properties of data can change over time, which causes the machine learning model to become increasingly less accurate and, in some cases, to behave in unexpected and unpredictable ways. This is known as model drift (a simple drift check is sketched below).
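
To illustrate the proxy problem mentioned a few bullets above, here is a minimal, hypothetical sketch: even after a sensitive attribute such as race is dropped from the training data, a remaining feature (here a made-up `neighborhood` column) can still encode it, which a simple association check can reveal.

```python
import pandas as pd

# Hypothetical data: the sensitive attribute is dropped before training,
# but 'neighborhood' is strongly associated with it and acts as a proxy.
df = pd.DataFrame({
    "race":         ["A", "A", "A", "B", "B", "B", "A", "B"],
    "neighborhood": ["north", "north", "north", "south", "south", "south", "north", "south"],
})

# A simple association check: how well does the remaining feature predict the dropped one?
crosstab = pd.crosstab(df["neighborhood"], df["race"], normalize="index")
print(crosstab)
# If each neighborhood maps almost entirely to one group, dropping 'race'
# achieves little: the model can recover it from 'neighborhood'.
```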

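And to make model drift concrete, here is a minimal sketch (on made-up data, using SciPy's two-sample Kolmogorov-Smirnov test as one of several possible checks) that compares the distribution of a feature at training time against the distribution seen in production. A significant shift is a signal to investigate, and possibly retrain, the model.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical feature values: what the model saw at training time vs. in production.
training_values = rng.normal(loc=50.0, scale=10.0, size=5_000)
production_values = rng.normal(loc=58.0, scale=12.0, size=5_000)  # distribution has shifted

# Two-sample Kolmogorov-Smirnov test: a small p-value means the two samples
# are unlikely to come from the same distribution, i.e. the data has drifted.
statistic, p_value = ks_2samp(training_values, production_values)
print(f"KS statistic={statistic:.3f}, p-value={p_value:.3g}")
if p_value < 0.01:
    print("Warning: possible drift; consider auditing or retraining the model.")
```
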
What are some examples of AI bias?

If left unchecked, AI bias has been shown to deliver negative rather than neutral consequences. Here are some real-world examples that demonstrate the harmful impact of AI bias:

  • The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) is an American case management and decision support tool. It is designed to help judges more accurately predict which criminals are more likely to re-offend.
     
    However, an analysis of the COMPAS algorithm uncovered racial bias: black defendants were inaccurately and unfairly assessed as higher risk than white defendants, and were more often misclassified as likely to re-offend.
  • Many jurisdictions around the world use predictive policing tools such as PredPol to help determine the most effective way to allocate police resources across a geographic area. In theory, this approach is designed to remove human bias from the decision-making process.
     
    However, an analysis of PredPol revealed that such systems are susceptible to disparities and “runaway feedback loops,” which means that police are repeatedly sent back to the same, mostly minority, neighborhoods, regardless of the actual crime rate.
  • In 2014, Amazon introduced an AI-driven recruitment system with a worthy objective: eliminate bias in the recruiting process, and create a truly level playing field that would identify the best candidates based on their skills, experience, knowledge, etc.
     
    Unfortunately, about a year after launch, Amazon’s own machine learning specialists discovered that the system was biased against resumes from women, because the data used to train the model came overwhelmingly from men, women having been historically underrepresented in technical roles.
     
    As such, the model determined that male applicants “must be preferred,” and acted accordingly. Amazon attempted to eliminate the AI bias, but was unsuccessful, and ultimately shut down the system a few years later.
  • In 2019, researchers discovered that an algorithm used by US hospitals to predict which patients were likely to require additional medical care was biased in favor of white patients over black patients (i.e., white patients were disproportionately predicted to require additional medical care).
     
    This was because the algorithm took past healthcare expenditures into consideration, and historically white patients have spent more on healthcare than black patients. The bias has since been mitigated, but only after the problem was uncovered. Had it remained hidden, the algorithm would almost certainly still be churning out flawed ‘intelligence’ today, potentially putting lives at risk.
  • In 2015, Joy Buolamwini, a researcher at MIT, discovered bias in facial recognition software when the technology was unable to detect her face until she covered her dark skin with a white mask. She attributed the issue to a lack of diverse training data, as such systems are largely trained on white male faces.

As AI gathers momentum and use cases increase, it is the responsibility of legislators, data scientists, and tech companies to work towards responsible AI by rooting out these inequities. Fortunately, there are ways to mitigate AI bias.