
Bias in Machine Learning and AI 

Presenting new ethical challenges in business

AI and Machine Learning (ML) technology has become a major part of the armoury for many industries, from private companies and financial services to healthcare and government.

 

These data-driven tools are used to make increasingly important decisions that can have far-reaching impacts on individuals and societies, both positive and negative.

 

As these solutions have evolved and become more widely used, new human rights issues have been brought to light as biases are uncovered within the decision-making systems we design. These biases also create legal, ethical and brand-reputation risks for the organisations involved.

What is bias?

 

Bias is a disproportionate weight in favour of or against an idea or thing, usually in a way that is closed-minded, prejudicial, or unfair.

 

Bad data used to train ML models can contain implicit racial, gender or ideological biases. These can enter indirectly, through historical institutional bias that has gone unnoticed, or through poor data practices.

A few well known examples of how bias could impact you include:

  • The COMPAS algorithm is widely used in the US to guide sentencing by predicting the likelihood of a criminal re-offending. In May 2016 it was reported that COMPAS was racially biased, systematically over-estimating the risk of re-offending for black defendants.

  • The PredPol algorithm predicts when and where crimes are likely to happen. It was found to unfairly target certain neighbourhoods, sending a disproportionate number of officers to areas with a high proportion of people from racial minorities.

  • Google's online advertising system has even been shown to target high-income jobs to men much more often than to women.

 

How does bias creep into AI solutions?

 

In many positive cases, AI can work well to reduce humans’ subjective interpretation of data.  It can do this because ML models learn to consider only the variables that improve their predictive accuracy, based on the training data used.

 

This is fine if the training data is sound, but in reality many models end up being trained on data containing human decisions, or on data that reflects second-order effects of societal or historical inequities. Although these causes are unintentional, the bias still occurs, and the risks and results remain the same.

 

Optimising models in today's environment means more than maximising predictive performance; models must also comply with ethical and legal principles.

 

Why should we be doing something about it?

 

There are many clear reasons to uncover and rectify bias. Above all, we have both a moral and legal obligation to treat others fairly, without bias or discrimination.

 

On a professional or branding level, businesses need to consider bias for other important reasons too:

  • Customer Experience and Brand Trust: bias in AI systems can not only erode trust between humans and the machines that learn from them, but also cause immense damage to brands, whether the bias is intentional or not.

  • Regulatory Sanctions: regulators want to ensure that citizens are treated fairly by regulated commercial entities, so that, for example, no individual or group is unfairly refused credit on the basis of their sex, race, or similar characteristics. The reach and powers of regulators are increasing, as is their ability to impose sanctions and fines.

  • Quality & Accuracy: the more we come to rely on models to make decisions, the greater the imperative to know how they work and confirm efficacy.  Without a solid understanding of how a model works the potential to unwittingly increase business risk or erode margin increases.

 

Who should be concerned?

 

All sectors should be concerned about the risks of bias. Whether in Health, Financial Services, Consumer Brands, Human Resources, or the Judiciary, bias can become a challenging risk to overcome.

 

Models are being created everywhere but three areas we are paying close attention to include:

  • Financial Services models for mortgage rates and approvals, savings and loan rates, credit card fraud protection and virtual assistants providing automated advice

  • Insurance models across all product types be it car, home, life and health

  • Gaming and other closely regulated services where protecting consumers is a key responsibility.

 

 

The BA Solution

 

At Beyond Analysis we believe passionately in the power of data to do good. Ensuring our clients fully understand how their models operate and perform, and have the information to make the best ethical decisions, is a critical part of our value to them.

 

Our Bias Solution helps businesses create the transparency their models require through independent validation. We address the two main challenges: ensuring the right stakeholders fully understand how a model works, and fixing the underlying data.

 

1. Data Bias

The underlying data is often the source of the issue and so our solution interrogates all aspects of the data inputs for bias.  We consider the following issues:

  • Training data that contains human decisions or reflects second-order effects of societal or historical inequalities

  • Poor sampling techniques

  • User generated data

  • Statistical correlations that are unacceptable or illegal.
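One simple check along these lines is to look for features that correlate strongly with a protected attribute and may act as proxies for it. The sketch below is a minimal illustration on a hypothetical applicant dataset; the feature names, values and the 0.8 cut-off are all assumptions for the example, not part of any production tooling.

```python
import statistics

# Hypothetical applicant dataset (illustrative values only).
# "group" encodes a protected attribute as 0/1; the features are made up.
rows = [
    {"group": 0, "postcode_score": 0.90, "income": 52.0},
    {"group": 0, "postcode_score": 0.80, "income": 61.0},
    {"group": 0, "postcode_score": 0.85, "income": 58.0},
    {"group": 1, "postcode_score": 0.30, "income": 55.0},
    {"group": 1, "postcode_score": 0.20, "income": 60.0},
    {"group": 1, "postcode_score": 0.25, "income": 57.0},
]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

group = [row["group"] for row in rows]
THRESHOLD = 0.8  # illustrative cut-off, not a legal or statistical standard

for feature in ("postcode_score", "income"):
    corr = pearson(group, [row[feature] for row in rows])
    flag = "POTENTIAL PROXY" if abs(corr) > THRESHOLD else "ok"
    print(f"{feature}: r = {corr:+.2f} ({flag})")
```

In this toy example the postcode-derived score tracks the protected attribute almost perfectly, so it would be flagged for review even though the attribute itself never appears as a model input.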

2. Individual Understanding

Enabling non-technical employees to build a working understanding of AI is important, so they can see how unintentional bias arises in their models.

 

Growing awareness, and giving everyone an accurate and measurable view of the relative importance and significance of each model feature, is a first step.
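One widely used, model-agnostic way to produce such a view is permutation importance: shuffle one feature's values across rows and measure how much predictive accuracy drops. The sketch below is a minimal illustration using a toy dataset and a stand-in "model" (a simple threshold rule); in practice the same loop would wrap a real trained model.

```python
import random

random.seed(0)

# Toy dataset: the label depends only on x1; x2 is pure noise.
X = [(random.random(), random.random()) for _ in range(200)]
y = [1 if x1 > 0.5 else 0 for x1, _ in X]

def model(x):
    """Stand-in for a trained model: thresholds the first feature."""
    return 1 if x[0] > 0.5 else 0

def accuracy(xs, ys):
    return sum(model(x) == t for x, t in zip(xs, ys)) / len(ys)

baseline = accuracy(X, y)

def permutation_importance(feature_idx, n_repeats=10):
    """Average drop in accuracy when one feature's column is shuffled."""
    drops = []
    for _ in range(n_repeats):
        col = [x[feature_idx] for x in X]
        random.shuffle(col)
        shuffled = [
            tuple(col[i] if j == feature_idx else v for j, v in enumerate(x))
            for i, x in enumerate(X)
        ]
        drops.append(baseline - accuracy(shuffled, y))
    return sum(drops) / n_repeats

for idx, name in enumerate(["x1", "x2"]):
    print(f"{name}: importance = {permutation_importance(idx):.3f}")
```

Here shuffling x1 destroys most of the model's accuracy while shuffling x2 changes nothing, which is exactly the kind of measurable, non-technical-friendly evidence of what a model actually relies on.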

 

An additional layer of independently verified testing of the model's function and outputs provides the validation needed internally to mitigate internal and institutional bias.

To approach these challenges Beyond Analysis has built a four-step Model Validation process. Throughout this process bias is reviewed across the spectrum of where it can occur: before, during and after modelling. This means that from the data inputs, through to model performance and finally the predictions, bias is identified and rectified.

  1. Independent review of your models to identify and assess bias
  2. Facilitated internal conversations about bias and its outcomes
  3. Targeted modifications to the model or data sources to address bias
  4. A process of regular review and assessment to check for bias, with internal reporting
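A recurring check of the kind described in the final step can be as simple as monitoring outcome rates by group. The sketch below applies the "four-fifths rule" heuristic for disparate impact to hypothetical model outputs; the group names, data and the 0.8 threshold are illustrative assumptions, not a statement of any legal standard.

```python
# Hypothetical monitoring data: (group, model decision) pairs,
# where 1 means an approval. Values are illustrative only.
predictions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def approval_rate(group):
    outcomes = [out for g, out in predictions if g == group]
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(disadvantaged, advantaged):
    """Ratio of approval rates; values below 0.8 are a common warning sign."""
    return approval_rate(disadvantaged) / approval_rate(advantaged)

ratio = disparate_impact_ratio("group_b", "group_a")
print(f"approval ratio: {ratio:.2f}")
if ratio < 0.8:
    print("WARNING: potential disparate impact; flag for review")
```

Run on a regular schedule against fresh predictions, a check like this turns the review step into a measurable, reportable control rather than a one-off audit.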

About Beyond Analysis

 

Beyond Analysis is an award-winning data science, analytics and strategic data solutions and consulting business. 

  

We help our clients use data to drive efficiency gains and business improvements through a better understanding of their customers and business operations. Using the latest in artificial intelligence and machine learning, we model and forecast behaviour and share the resulting insights, so that clients can take action across their business for competitive advantage.


Reach us at info@beyondanalysis.net

© 2020 Beyond Analysis Ltd