
Bias Examples

Bias in policing - Durham

Durham Constabulary in the UK was using an algorithm designed to help it make custody decisions, and was forced to alter it amid concerns that it could discriminate against poor people.

Bias in politics - Google

House Republicans told Google CEO Sundar Pichai that he needs to do something about what they consider to be a liberal bias inside his company's ranks, saying Google has suppressed conservative views on its search and video platforms.

Bias in parole decisions - US

Researchers at Rice University’s School of Social Sciences reviewed research on various methods for assessing risk among accused or convicted criminals. Actuarial and algorithmic models are used to assess these risks, alongside the professional judgment of parole officers, correctional officers, and psychiatrists. Findings showed that actuarial risk assessments can reduce discrepancies in how the system assesses and treats individuals, but can also exacerbate existing inequalities, particularly on the basis of socioeconomic status or race.

Bias in the workplace - performance management

Gender bias takes many forms in the workplace and affects both feedback and performance reviews. Women are commonly perceived to be less suitable, and less capable, in traditionally masculine roles.

Bias at Facebook

Facebook has acknowledged that it needs to do more to combat racism on its platforms and is setting up two groups to examine its policies and algorithms.

Bias in immigration controls

The UK government's online passport photo checker was found to show bias against dark-skinned women, whose photos were rejected at higher rates than those of other users.

Bias in search - Google

An investigation by the UK's Observer found that Google’s search algorithm appeared to be systematically promoting information that was either false or slanted with an extreme right-wing bias on subjects as varied as climate change and homosexuality. The research found that Google’s search engine prominently suggested neo-Nazi websites and antisemitic writing.

Bias in recruitment - CV screening

Empirical studies have found that various CV screening systems have resulted in employers granting interviews at different rates to candidates with identical CVs but with names that reflect different racial or ethnic groups.

Dealing with racial bias at Starbucks

After a highly publicized act of racial discrimination by a Starbucks employee against two African American men in one of its stores in 2018, the company closed its 8,000 U.S. coffee shops for a day of unconscious bias training. The company also revised store policies and employee training practices. Harvard Business School professors Francesca Gino and Katherine Coffman discuss what we can learn about unconscious bias in corporate culture.

Bias in credit - Apple

A US financial regulator has opened an investigation into claims Apple's credit card offered different credit limits for men and women. It follows complaints, including from Apple's co-founder Steve Wozniak, that algorithms used to set limits might be inherently biased against women.

Bias in social care - Allegheny Tool

The Allegheny Family Screening Tool is a model designed to assist social workers and courts in deciding whether a child should be removed from their family because of abusive circumstances. Bias appeared in the model through the use of a public dataset that reflected societal factors: middle-class families have a greater ability to “hide” abuse by using private health providers. As a result, the data contained referrals from non-white, lower socioeconomic groups over three times as often.

Bias online - sexual orientation

In 2013 Tumblr announced it was banning terms such as #gay and #bisexual in order to help filter out adult content and porn.

Bias in credit applications - Banking

A study found that female executives were treated less fairly than their male counterparts when it came to accessing bank loans. Male executives were 5% more likely to get a loan approved for their business than female executives, and the women who did succeed in getting a loan paid on average a 0.5% higher interest rate. Yet the average female-run, venture-backed company starts with a third less capital and achieves annual revenues 12% higher than those run by men.

Bias in medical care - Skin cancer detection

Studies have found skin cancer–detection algorithms are less accurate when used on dark-skinned patients because AI models were trained mostly on images of light-skinned patients.

Bias in the travel sector - holidays

In a detailed academic study of travel behaviours over the past decade, it was observed that travellers are influenced by common biases at all stages of travel: pre-trip, on-site, and post-trip. Major drivers cited include the nature of images used to market destinations and products, the ranking of search results for holiday queries, and the use of feedback rankings.

Bias in medical research - COVID-19

Doctors of Indian origin have warned of racial bias in medical research in the UK. According to Public Health England (PHE), those from black, Asian and minority ethnic (BAME) backgrounds are at increased risk of poor outcomes from COVID-19.

Bias in CV screening - Amazon

Reuters reported in 2018 that an AI recruiting system designed to streamline Amazon's recruitment process by reading resumes and selecting the best-qualified candidates was unfairly favouring men over women. It transpired that the model had been trained to replicate the hiring decisions Amazon had made over the preceding years, so it inadvertently replicated the biases in those decisions.
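A minimal sketch of how this kind of feedback loop arises (the data, features and labels below are synthetic and hypothetical, not Amazon's actual system): a screening model trained on historical hiring decisions reproduces whatever bias those decisions encode, even when the protected attribute itself is never a feature.

```python
# Minimal sketch of how a screening model inherits historical bias.
# All data, features and labels here are synthetic and hypothetical;
# this is not Amazon's actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical features: years of experience, plus a proxy feature
# correlated with gender (e.g. membership of a women's group).
experience = rng.normal(5, 2, n)
gender = rng.integers(0, 2, n)                    # 0 = male, 1 = female
proxy = ((gender == 1) & (rng.random(n) < 0.4)).astype(float)

# Historical labels: past recruiters hired largely on experience but
# systematically down-weighted women. The bias lives in these labels.
hired = (experience - 1.5 * gender + rng.normal(0, 1, n) > 5).astype(int)

# The model never sees gender, only the proxy feature, yet it learns
# to penalise the proxy because the labels encode the old bias.
X = np.column_stack([experience, proxy])
model = LogisticRegression().fit(X, hired)
print("coefficients [experience, proxy]:", model.coef_[0])
# The proxy coefficient comes out negative: the bias is replicated.
```

The point of the sketch is that removing gender from the inputs does not remove the bias; the model recovers it through correlated features, which is reportedly what happened with resume phrases such as "women's".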

Bias in job ads - Google

A 2014 study of Google Ads found that ads for high-paying jobs were served more often to men than to women. This happened because Google allowed advertisers to target ads for high-paying jobs to men only, so the training data for the targeting algorithms was inherently biased.

Bias in education - exam grading

A global exam grading algorithm came under fire for suspected bias: students and experts said the formula the International Baccalaureate program used to generate grades may have been discriminatory.

Bias in healthcare allocation - USA

In 2019, research by the University of California examined an AI system being used to decide what care more than 200 million patients in the US received. Black patients were found to be receiving a lower standard of care because they were allocated lower risk scores based on the predicted cost of their care. Ability to pay had become a determining factor in the model, outweighing the higher medical risk factors that should have ensured they received the right level of care. Model adjustments enabled the level of bias to be reduced by 84%.
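A minimal sketch of the proxy problem described above, with made-up numbers rather than the actual model: when the training target is cost instead of health need, a group that generates lower costs at the same level of illness receives lower risk scores and is selected for extra care less often.

```python
# Sketch of label-proxy bias: a "risk" model trained to predict cost
# rather than health need. All numbers are synthetic, for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 10_000

illness = rng.gamma(2.0, 2.0, n)       # true health need
group = rng.integers(0, 2, n)          # 1 = group with less access to care

# For the same illness, group 1 generates lower costs (less access,
# less ability to pay), so cost systematically understates their need.
access = np.where(group == 1, 0.6, 1.0)
past_cost = illness * access + rng.normal(0, 0.5, n)
future_cost = illness * access + rng.normal(0, 0.5, n)

# The model's "risk score" is simply predicted future cost.
model = LinearRegression().fit(past_cost.reshape(-1, 1), future_cost)
risk = model.predict(past_cost.reshape(-1, 1))

# Patients above the 90th percentile of risk get extra care.
selected = risk > np.quantile(risk, 0.9)
for g in (0, 1):
    m = group == g
    print(f"group {g}: mean illness {illness[m].mean():.2f}, "
          f"share selected {selected[m].mean():.1%}")
# Both groups are equally ill on average, but group 1 is selected far
# less often because cost stood in for need.
```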

Bias in facial recognition - Sexual orientation

A study from Stanford University found that a computer algorithm could correctly distinguish between gay and straight men 81% of the time, and between gay and straight women 74% of the time. The finding raised questions about the biological origins of sexual orientation, the ethics of facial-detection technology, and the potential for this kind of software to violate people’s privacy or be abused for anti-LGBT purposes.

Bias in advertising - Facebook

In 2019, Facebook was found to have allowed its advertisers to deliberately target adverts according to gender, race and religion, all of which are protected classes under US law. Job adverts for roles in nursing or secretarial work were suggested primarily to women, whereas job ads for janitors and taxi drivers were shown to a higher number of men, in particular men from minority backgrounds. The algorithm had learned that ads for real estate attained better engagement when shown to white people, with the result that they were no longer shown to minority groups.

Bias in beauty contests - Beauty.ai

In 2016, the Beauty.AI website used AI algorithms to judge beauty contests. The algorithmic judges rated people with light skin as much more attractive than people with dark skin.

Bias in sentencing - USA

COMPAS (which stands for Correctional Offender Management Profiling for Alternative Sanctions) is used to predict the likelihood of a criminal reoffending, providing a guide to sentencing decisions. Analysis showed that its predictions were no more accurate than those of untrained people recruited online, and that black defendants were twice as likely to be misclassified as white defendants.
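The disparity reported here can be made concrete by comparing error rates per group. A minimal sketch with made-up predictions (not COMPAS data): the relevant metric is the false positive rate, i.e. how often defendants who did not reoffend were nevertheless flagged as high risk.

```python
# Per-group false positive rates: the metric behind the COMPAS
# findings. The arrays below are made up for illustration.
import numpy as np

reoffended = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0])  # ground truth
predicted  = np.array([1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0])  # model output
group      = np.array(["A"] * 6 + ["B"] * 6)

for g in ("A", "B"):
    # False positives: flagged as high risk among those who did not reoffend.
    did_not_reoffend = (group == g) & (reoffended == 0)
    fpr = predicted[did_not_reoffend].mean()
    print(f"group {g}: false positive rate {fpr:.0%}")
# Here group A's rate (50%) is double group B's (25%): exactly the
# "twice as likely to be misclassified" pattern described above.
```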

Bias in facial recognition - Social

In another striking illustration of algorithmic AI bias, the American Civil Liberties Union (ACLU) studied Amazon’s AI-based “Rekognition” facial recognition software. The ACLU showed that Rekognition falsely matched 28 US Congress members with a database of criminal mugshots. According to the ACLU, “Nearly 40 percent of Rekognition’s false matches in our test were of people of color, even though they make up only 20 percent of Congress.”
