Exploring Artificial Intelligence [5]

May 29, 2025

Bias within AI

Bias in AI is both a technical issue and a societal mirror. Following on from my last post in the Exploring AI series, where I shared my own thoughts, questions, and what I’d been learning, this article goes deeper into one of the most urgent challenges in today’s AI landscape.

As AI systems increasingly shape our lives, from hiring decisions to healthcare, difficult questions arise: Who benefits? Who gets left behind? And how do we build systems that reflect the values we want, not the inequalities we inherit?

This blog doesn’t pretend to provide definitive answers. Instead, it offers real-world examples, expert insights, and diverse perspectives to help you navigate the complexity of bias in AI, and to encourage the kind of thoughtful reflection and conversation that lets you draw your own conclusions.


The Main Concerns

A central question emerges: if we attempt to filter out bias entirely, do we create fairer AI systems, or do we risk losing valuable context about how human values and morals have evolved, and how they might continue to evolve?

Societal norms have shifted dramatically in recent generations. Attitudes once considered mainstream are now seen as harmful; previously marginalised values are now celebrated. This raises a fundamental question for AI development: Should we program AI to reflect today’s standards, or should we give it the context to understand how and why those standards changed?


Understanding Bias in AI

AI itself isn’t biased—it learns from data that often is. Bias stems from the information AI is trained on, not from the learning machine itself.

Bias in the context of AI refers to a systematic skew in outputs resulting in unfair treatment of individuals or groups. This can come from:

  • Training data: Historical datasets may reflect past inequalities, discriminatory language, or underrepresentation of certain groups.
  • Algorithmic design: The way models are built can unintentionally reinforce certain preferences or exclude edge cases.
  • Deployment context: Even neutral systems can produce biased outcomes when placed in complex, real-world settings.

Bias is often unintentional, but the impact can be significant, especially when AI is used in hiring, law enforcement, healthcare, or public services.
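
To make the training data point concrete, here is a minimal sketch using scikit-learn and an entirely synthetic hiring dataset (the groups, coefficients, and numbers are all invented for illustration) of how a model trained on historically skewed decisions reproduces that skew:

```python
# Minimal illustration: a model trained on historically skewed hiring data
# reproduces that skew. All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)          # skill is distributed identically in both groups

# Historical hiring decisions: equal skill, but group B was hired less often.
hired = (skill + np.where(group == 1, -0.8, 0.0) + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([skill, group])  # group is (naively) included as a feature
model = LogisticRegression().fit(X, hired)

preds = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted hire rate = {preds[group == g].mean():.2f}")

# The model learns the historical penalty against group B even though skill is
# identical across groups: the bias lives in the data, not in the algorithm.
```

Simply dropping the group column rarely fixes this in practice, because other features can act as proxies for group membership.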


Programming Bias Out vs. Teaching What Went Wrong 

Most modern AI systems learn from vast datasets that have been scraped from books, websites, and social media. These contain a wealth of human insight but also reflect inequalities and prejudices, past and present.

One approach argues that AI should be trained to understand historical context rather than avoid it.

Algorithmic bias, like human bias, results in unfairness. However, algorithms, like viruses, can spread bias on a massive scale at a rapid pace.
(This expresses Joy Buolamwini's position on the dangers of unexamined algorithmic systems)

In contrast, others believe certain content is too dangerous to include - even with context.

If the input data is biased, the output will amplify these biases.
(Timnit Gebru emphasises the risks of training models on harmful or incomplete data and argues that some content should be excluded to prevent reinforcing existing power imbalances.)

Both perspectives raise valid concerns. Programming out all bias could create blind spots, but teaching harmful histories could unintentionally normalise them, especially since AI lacks the ability to critically reflect on context as humans do.

This is why, despite their different views, both stress that human insight and ethical reflection remain essential. AI can assist with decision-making, but it should not replace the human responsibility to interpret, question, and lead with values.

A Century in Perspective: Values Change

Looking back over 100 years, many practices now widely considered unjust—such as racial segregation, criminalisation of homosexuality, and exclusion of women from voting or employment—were once standard in many parts of the world.

An AI system trained on data that includes these eras could misinterpret these norms as current and valid unless guided to understand how society has moved on and why.

Looking forward raises a different challenge: What might future generations view as ethical blind spots in today’s data?

  • Could AI unknowingly reinforce exclusion in employment or finance?
  • Might underrepresentation of marginalised voices limit its usefulness?
  • Could cultural dominance in training data skew its responses for non-Western users?

Real-World Examples of AI Bias
Facial Recognition and Invisibility

In 2018, the MIT Media Lab published the “Gender Shades” study, which found commercial facial recognition software had:

  • Error rates of up to 34% for darker-skinned women
  • Error rates of less than 1% for lighter-skinned men

The disparity wasn’t caused by malicious coding, but by incomplete training datasets that failed to represent a diverse population. This resulted in AI systems that quite literally didn’t “see” certain people accurately.
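
At its core, the Gender Shades methodology is a disaggregated evaluation: the same error metric is computed separately for each demographic subgroup rather than averaged over everyone. A minimal sketch of the idea, using pandas and illustrative counts rather than the study’s actual raw data:

```python
# Disaggregated evaluation: report the error rate per subgroup instead of a
# single aggregate number. Counts below are illustrative, not the study's data.
import pandas as pd

results = pd.DataFrame({
    "subgroup":  ["darker_female", "darker_male", "lighter_female", "lighter_male"],
    "n_samples": [300, 300, 300, 300],
    "n_errors":  [104, 25, 21, 2],
})

results["error_rate"] = results["n_errors"] / results["n_samples"]
print(results[["subgroup", "error_rate"]])

# The aggregate figure hides the gap that the per-subgroup breakdown exposes.
overall = results["n_errors"].sum() / results["n_samples"].sum()
print(f"overall error rate: {overall:.1%}")
```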

Language Models and Cultural Representation

Language-based AI tools also show how uneven training data can affect outputs. Researchers have found that AI often:

  • Associates certain professions with specific genders, as probed in the sketch after this list
  • Struggles with non-Western dialects or cultural references
  • Provides less accurate or useful answers to underrepresented user groups
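
One common way researchers probe the profession and gender association mentioned above is to compare how close a profession’s embedding sits to gendered words. The sketch below uses tiny made-up vectors purely for illustration; in practice you would load embeddings from a real model, and audits such as WEAT formalise this comparison across many word sets:

```python
# Crude association probe: does a profession's vector sit closer to "he" or
# to "she"? The 3-d embeddings below are made up purely for illustration.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

emb = {
    "he":       np.array([0.9, 0.1, 0.0]),
    "she":      np.array([0.1, 0.9, 0.0]),
    "engineer": np.array([0.8, 0.2, 0.1]),
    "nurse":    np.array([0.2, 0.8, 0.1]),
}

for profession in ("engineer", "nurse"):
    skew = cosine(emb[profession], emb["he"]) - cosine(emb[profession], emb["she"])
    print(f"{profession}: he-vs-she skew = {skew:+.2f}")

# A positive skew means the profession sits closer to "he" in this space.
```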

In Kenya, for example, language models were found to be less effective due to gaps in training on local dialects and context-specific knowledge, thus limiting their value for the very people they were meant to support.

Women’s Health and Hidden Censorship

In 2025, a campaign by CensHERship revealed that 90% of surveyed women’s health organisations had experienced some form of online content restriction or moderation. This wasn’t the result of deliberate exclusion from datasets, but of automated moderation tools, often driven by AI classifiers, that had learned from training data in which such terms are disproportionately flagged as inappropriate.

  • Posts were flagged for using medical terms like “vagina” or “menstruation”
  • Educational images of breast exams were removed as “explicit”
  • Advertisements for women’s health products were rejected, while comparable men’s health ads ran freely

These issues highlight how AI moderation systems, if trained on datasets that treat women’s health terms as taboo or explicit, can reinforce stigma or suppress vital health information. Brands such as Bodyform and Daye have publicly called out these imbalances, while formal complaints have been filed with the European Commission.
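
A practical way to test for this kind of moderation bias is a term-swap audit: run matched sentences that differ only in the health term through the classifier and compare flag rates. In the sketch below, flag_probability is a hypothetical placeholder, not a real library call; you would replace it with whatever moderation model or API you actually use:

```python
# Term-swap audit for a moderation classifier: compare flag rates on matched
# sentences that differ only in the (legitimate) term being discussed.

def flag_probability(text: str) -> float:
    # Hypothetical placeholder: swap in your real moderation model or API.
    # The return values below simply mimic the behaviour reported above.
    sensitive = ("menstruation", "vagina", "breast")
    return 0.9 if any(term in text for term in sensitive) else 0.1

health_terms = ["menstruation", "vagina", "breast exams"]
neutral_terms = ["headaches", "knee pain", "eye tests"]
template = "Here is educational information about {}."

def average_flag_rate(terms):
    return sum(flag_probability(template.format(t)) for t in terms) / len(terms)

# A large gap between the two rates suggests the classifier has learned to
# treat routine women's health language as explicit.
print(f"health terms flagged:  {average_flag_rate(health_terms):.0%}")
print(f"neutral terms flagged: {average_flag_rate(neutral_terms):.0%}")
```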


How Different Cultures Impact AI Training Data

Cultural diversity, or the lack of it, in training data significantly shapes how AI systems interpret, respond, and perform across different societies.

  • Bias and Representation: Western-centric datasets may lead to systems that overlook or misinterpret non-Western customs and perspectives.
  • Contextual Understanding: Language, social norms, and moral frameworks vary widely between cultures. AI often struggles to interpret idioms, hierarchical speech patterns, or culturally specific behaviours.

“It’s not just about removing bias, but about deciding what values you want your system to have.”
“We need to be explicit about the values we want our AI systems to embody.”
“Bias is not just a technical problem, it’s a value-laden problem.”
Margaret Mitchell, Computer Scientist


Diverse Perspectives on Solutions

There’s no single fix. Different stakeholders offer different strategies:

  • Developers focus on tools like dataset audits (a small example follows this list), inclusive testing, and explainable models.
  • Regulators aim to categorise risks and enforce transparency, as seen in the EU AI Act.
  • Communities advocate for representation and inclusion in the design process, ensuring those most affected by bias have influence over how systems are built.
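
Picking up the first of those strategies, a dataset audit can start very simply: compare each group’s share of the training data with its share of a reference population. A minimal sketch with invented figures (the group names and proportions are placeholders):

```python
# A very small dataset audit: how well is each group represented relative to
# a reference population? All figures are invented for illustration.
import pandas as pd

dataset_counts  = pd.Series({"group_a": 7200, "group_b": 1900, "group_c": 900})
reference_share = pd.Series({"group_a": 0.55, "group_b": 0.30, "group_c": 0.15})

dataset_share = dataset_counts / dataset_counts.sum()
audit = pd.DataFrame({
    "dataset_share": dataset_share.round(3),
    "reference_share": reference_share,
    "representation_ratio": (dataset_share / reference_share).round(2),
})
print(audit)

# Ratios well below 1.0 flag under-represented groups that may need targeted
# data collection before training, testing, or deployment.
```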

Where Do We Go From Here?

AI bias isn’t a bug to fix and forget. It’s a constant tension between progress and responsibility, between the promise of innovation and the lessons of history.

What can we do?

  • Design inclusively
  • Audit frequently
  • Involve diverse voices early and often
  • Balance harm prevention with historical awareness

Fair AI isn’t born from code alone—it’s built through choices. Yours. Ours.


Join the Conversation

Fairness in AI isn’t a fixed formula - it’s a global conversation. So what does it mean to you? Tag us on social and add your voice.

Missed our last post on AI ethics? Read it here.

Found this thought-provoking? Share it. The more voices we can include, the less bias we can build—and the fairer our future AI can be.

