AI in the enterprise

9 min read

Tackling AI bias

A guide to fairness, accountability, and responsibility

William Ruzvidzo

AI bias can perpetuate and amplify stereotypes, skew decision-making, and even result in unfair practices. For businesses, it’s a serious issue that goes beyond ethics to affect the very essence of decision-making, threatening reputations and posing potential legal risks.

Here’s a little test to illustrate just how biased AI can be.

Question: what stands out to you in these AI-generated images of lawyers?

A board of 18 AI-generated images of lawyers, all male and mostly of Caucasian background.

Answer: there’s no woman or person of color in sight.

What about in these images of nurses? Notice any men?

A board of 18 AI-generated images of nurses, all women.

And what about here? How many women can you count among these scientists?

A board of 18 AI-generated images of scientists, all male and mostly of Caucasian background.

Zero. Zip. Zilch. Nada. That’s despite the fact that 43.1% of all scientists are women.

These images, taken from an online tool that examines biases in popular AI image-generating models, reveal the dark side of AI. For all of AI’s brilliance and transformative potential, the specter of bias casts a long shadow.

It’s a sobering reality that must be faced as businesses increasingly adopt AI into their operations.

Let’s confront this shadowy aspect of AI together. We’ll also provide guidelines for mitigating AI bias, ensuring that as your business ventures further into the realm of AI, you do so with fairness, accountability, and responsibility at the forefront.

Summarized by Writer

  • AI bias is when AI algorithms, models, and datasets are created with built-in assumptions that lead to unfair or inaccurate results.
  • AI biases often show up as unfair or incorrect portrayals of people based on race or gender.
  • Common types of AI bias include historical, sampling, labeling, and confirmation bias.
  • Tools and strategies for preventing AI bias include: training AI systems on diverse datasets, identifying and minimizing biases in AI models, regularly auditing and updating AI systems, including diverse teams in the design and decision-making process, and establishing a corporate AI policy.
  • Developers, data scientists, and business leaders carry a significant role in ensuring AI operates equitably and justly.

What is AI bias?

AI bias is when AI algorithms, models, and datasets are created with built-in assumptions that lead to unfair or inaccurate results. This bias can come from how data is collected, coding errors, or even the influence of a particular team or individual.

Alarmingly, about 65% of executives believe that data bias exists in their organizations. The consequences of this oversight are far-reaching, potentially leading to skewed decision-making, discriminatory practices, damaged reputations, and even legal consequences. Case in point: A lawsuit was recently filed against Workday Inc., alleging that the company’s artificial intelligence systems and screening tools are biased against Black, disabled, and older applicants.

The last thing you want for your company is a lawsuit or a tarnished reputation due to unchecked AI bias.

For businesses using AI, actively tackling and lessening these biases is vital. It’s not just about fairness and equality; it’s also about your company’s long-term success and credibility.

The most common types of AI bias

AI biases often show up as unfair or incorrect portrayals of people based on race or gender, mirroring long-standing prejudices and accepted societal beliefs.

By identifying the presence of these different types of AI bias, whether they lurk in the corners of your datasets or subtly shape the algorithms’ decision-making processes, you can make important strides toward creating AI applications that are more equitable and fair.

Historical bias

This occurs when the training data used for an AI system reflects historical prejudices, societal norms, or long-standing inequities.

Example:

In 2016, a ProPublica investigation found that COMPAS, a risk-assessment algorithm used in US courts, was almost twice as likely to falsely flag Black defendants as future criminals compared to their white counterparts. On the flip side, it was more likely to incorrectly predict that white defendants were low risk.

Sampling bias

This arises when the data used to train an AI system isn’t representative of the whole population the system will serve.

Example:

In 2018, Amazon’s AI hiring tool was found to be biased against female job candidates. It was trained on ten years of resumes that were mostly from men. This caused the algorithm to favor male candidates over female ones, making the hiring process unfair.

Labeling bias

This occurs when a machine learning algorithm labels data inaccurately because the dataset it was trained on is biased.

Example:

Google came under fire back in 2015 when it was discovered that the image-labeling algorithm in Google Photos was tagging Black people as gorillas.

Confirmation bias

This occurs when an AI system is trained in a way that confirms pre-existing beliefs or assumptions, often overlooking contradictory data.

Example:

Facebook is designed to profit from humans’ confirmation bias. Its News Feed algorithm shows users more of the content they already engage with. This can create an “echo chamber” where users are shown only what they want to see, regardless of its accuracy or social impact. This can reinforce harmful views or stereotypes, as users may not get the chance to explore different viewpoints or challenge their own beliefs.

Tools and strategies for preventing AI bias

Once you’ve identified the ugly specter of AI bias, the next step is to confront it head-on and even prevent its resurgence. This calls for the right tools and techniques, from sourcing diverse and representative datasets to employing rigorous testing and validation methods. Every step refines your AI, making it more equitable and effective, ensuring it’s a helpful ally rather than a harmful entity.

Train AI systems on diverse data sets 

The first line of defense against bias is using a diverse and representative dataset for training your AI models. As Dr. Timnit Gebru aptly points out, “If we don’t have the right strategies in place to design and sanitize our sources of data, we will propagate inaccuracies and biases that could target certain groups.”

In essence, if the training data is biased, the AI will likely be biased as well. Therefore, machine learning algorithms should include a diverse set of data points that represent different demographics, backgrounds, and perspectives.

For example, let’s say you are training a sentiment analysis model to detect customer satisfaction from reviews. The training data should include comments from a variety of customers from different backgrounds, ages, genders, and ethnicities.

By using diverse datasets and implementing the right strategies to ‘sanitize’ these data sources, as Dr. Gebru suggests, you can reduce the risk of your AI making incorrect decisions due to implicit biases or lack of context.
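
As a rough sketch of what that auditing step can look like in practice, the Python snippet below checks a hypothetical review dataset for demographic balance and naively oversamples underrepresented groups. The file name and columns (gender, age_group) are illustrative placeholders, not from a real project.

```python
import pandas as pd

# Hypothetical training data for a sentiment model: one row per review,
# with demographic attributes attached where customers chose to share them.
reviews = pd.read_csv("reviews.csv")  # columns: text, label, gender, age_group

# 1. Audit: how is each demographic group represented in the training set?
for col in ["gender", "age_group"]:
    print(reviews[col].value_counts(normalize=True).round(3))

# 2. A naive mitigation: oversample underrepresented groups so each gender
#    appears equally often. (Collecting more real data is usually better
#    than resampling, but this illustrates the idea.)
target = reviews["gender"].value_counts().max()
balanced = (
    reviews.groupby("gender", group_keys=False)
    .apply(lambda g: g.sample(n=target, replace=True, random_state=0))
)
print(balanced["gender"].value_counts())
```

Resampling is only one option; whether it’s appropriate depends on your data and use case, which is why the audit step comes first.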

Identify and minimize biases in AI models

Spotting and reducing biases in AI models is another key element of ensuring fairness, trust, and accuracy. You can use tools like IBM’s AI Fairness 360 or Google’s What-If Tool to detect potential biases and flag questionable results that may lead to unfair decisions.

If your business finds biases in its algorithms, you can reduce or eliminate them by fine-tuning: continuing to train the model on new, more varied data so it adapts and sheds some of the biases baked into the original training data.
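
To make this concrete, here’s a minimal sketch using the open-source AI Fairness 360 Python library mentioned above. It computes disparate impact (the ratio of favorable-outcome rates between groups) and applies reweighing as one possible preprocessing mitigation. The dataset, file name, and column names are hypothetical.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical hiring data: all-numeric columns, with 'hired' as the
# label (1 = favorable) and 'gender' as the protected attribute.
df = pd.read_csv("applicants.csv")  # columns: gender (0/1), years_experience, hired (0/1)

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["gender"],
)
privileged = [{"gender": 1}]
unprivileged = [{"gender": 0}]

# Disparate impact: the ratio of favorable-outcome rates between the
# unprivileged and privileged groups. Values well below 1.0 (0.8 is a
# common rule of thumb) flag potential bias.
metric = BinaryLabelDatasetMetric(
    dataset, privileged_groups=privileged, unprivileged_groups=unprivileged
)
print("Disparate impact:", metric.disparate_impact())

# One preprocessing mitigation: reweigh training examples so favorable
# outcomes are balanced across groups before the model is (re)trained.
reweigher = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
transformed = reweigher.fit_transform(dataset)  # carries per-row instance weights
```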

Regularly audit and update your AI systems 

It’s important to audit your algorithms regularly to ensure they’re not producing any unfair outcomes or disparities over time. For example, you can use fairness metrics to measure how fair the AI system is, i.e., whether it discriminates against certain groups.

Your audits should also check to see if any new datasets or adjustments have been added, which could introduce bias into the model’s results.
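
As one illustration of operationalizing this, a scheduled audit job might recompute a simple fairness metric, such as the gap in approval rates between groups, over each month of logged decisions, and raise an alert when it drifts past a threshold. The log format and the 0.1 threshold below are assumptions made for the sake of the example.

```python
import pandas as pd

# Hypothetical log of model decisions, appended to over time.
log = pd.read_csv("decision_log.csv")  # columns: month, gender, approved (0/1)

for month, batch in log.groupby("month"):
    rates = batch.groupby("gender")["approved"].mean()
    gap = rates.max() - rates.min()  # gap in approval rates between groups
    status = "ALERT: investigate" if gap > 0.1 else "ok"
    print(f"{month}: approval-rate gap = {gap:.3f} [{status}]")
```

The right metric and threshold depend on your domain; the point is that the check runs on a schedule, not just once at launch.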

Include diverse teams in the design and decision-making process

Tech has a diversity problem. That’s not exactly a news flash; it’s been obvious for years. According to the AI Now Institute’s 2019 report Discriminating Systems: Gender, Race, and Power in AI, over 80% of AI professors are male. Additionally, Latinx and Black workers make up only 7% and 9% of the STEM workforce, respectively, even though they represent 18.5% and 13.4% of the U.S. population.

Given these numbers, it’s hardly surprising that AI systems frequently exhibit biases. Leaving specific groups and communities out of the data they’re trained on leads to skewed algorithms.

As computer science professor Jim Boerkoel notes in a Forbes interview, “If the population that is creating the technology is homogeneous, we’re going to get technology that is designed by and works well for that specific population. Even if they have good intentions of serving everyone, their innate biases drive them to design toward what they are most familiar with.”

To address this, it’s crucial to include teams that are diverse in age, gender, race, ethnicity, experience, and intellectual background in the design and decision-making processes. Such teams can not only foresee but also mitigate potential fairness issues, instilling more objectivity and fairness in AI systems.

Establish a corporate AI policy

Create a corporate AI policy on how data is collected, stored, used, and analyzed. Your corporate policy should emphasize ethical AI practices, responsibility, transparency, and a commitment to minimizing AI bias. This ensures that all teams are following best practices when it comes to the ethical use of collected data. It also helps protect customer data and promote fairness in your company’s processes.

The road ahead: achieving a bias-free, fair, and equitable AI

Artificial Intelligence holds tremendous potential as a powerful ally in various aspects of business. However, if mismanaged, it can also become a malevolent force that disproportionately impacts women and minorities.

The task of mitigating AI bias doesn’t rest on the shoulders of users, but squarely on those who develop and deploy these systems. Developers, data scientists, and business leaders all play a significant role in ensuring AI operates equitably and justly. From training datasets to fine-tuning models, every step in developing AI requires a vigilant eye for potential bias.

Choosing your AI technology partner is also of paramount importance. Ensure that you’re working with an AI company whose models are trained on curated datasets (like the Writer LLM, Palmyra) and have low toxicity scores.

For leaders contemplating AI adoption, we recommend our comprehensive guide on integrating AI into your enterprise. It covers how to identify and prioritize use cases, prepare for change management, introduce governance workflows, and safeguard brand reputation while employing generative AI. With AI at the heart of your operations, you can drive success while ensuring fairness and equity.