Confronting bias in artificial intelligence

02 September 2020

Whether it’s which show to watch next on your favourite streaming service or the search results Google returns, millions of processes a day are automated by computers. These computers use algorithms (sets of instructions or models) to perform tasks extremely quickly and at massive scale. However, the instructions they are given can be biased.

While the consequences of a biased movie recommendation may be minimal, in areas such as risk assessment, hiring and promotion policy, and the judicial system, algorithms can magnify, at scale, the societal biases contained within their instructions.

For example, when a tech giant introduced a credit card, a large number of instances were reported of women being given significantly lower credit limits than men, despite the couples often sharing the exact same finances. New York state regulators are now investigating the algorithm for discrimination.

To promote a fair society, and to gain widespread trust in and buy-in to these systems, assessing (and mitigating) bias is crucial.

As the use of machine learning and modelling becomes widespread, the Australian Federal Government, in its Artificial Intelligence (AI) Ethics Principles document [1], has outlined eight principles on which AI models should be built:

  • Human, social and environmental wellbeing

  • Human-centred values

  • Fairness

  • Privacy protection and security

  • Reliability and safety

  • Transparency and explainability

  • Contestability

  • Accountability

While these principles are guidelines rather than mandatory requirements, Denmark has recently passed landmark legislation that will require companies, by law, to create and define AI ethics statements adhering to similar principles.

With these principles in mind, which key areas should you consider during data analytics and modelling implementation?

1. The data that the model/algorithm is based on

A recent article authored by Dr Alex Antic [2] notes that “human bias begets data bias, which begets model bias, which begets human bias…”, describing a self-reinforcing feedback cycle. This means the data the algorithm is built on should itself be assessed for potential bias, as sketched below.
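As a minimal illustration of such a data-level check (not taken from the article; the column names, data, and threshold are assumptions), one can compare positive-label rates across groups defined by a sensitive attribute before any model is trained:

```python
import pandas as pd

# Hypothetical training data: the "gender" and "approved" columns and
# their values are illustrative assumptions, not from the article.
df = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "approved": [0,   1,   0,   0,   1,   1,   0,   1],
})

# Positive-label (approval) rate per group in the raw data.
rates = df.groupby("gender")["approved"].mean()
print(rates)

# Disparate impact ratio: lowest group rate divided by highest group rate.
# A common heuristic (the "four-fifths rule") flags ratios below 0.8 as
# warranting further investigation.
ratio = rates.min() / rates.max()
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential label bias in the training data - investigate before modelling.")
```

A gap like this in the raw labels does not prove discrimination on its own, but it is exactly the kind of signal that should prompt a closer look before the data is used to train a model.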

2. Development of the model/algorithm

When developing a data analytics pipeline, companies should be aware of the areas at highest risk of bias. This includes engaging in fact-based dialogue about bias and about what a “fair” algorithm looks like.

Companies should consider how the real-world outcomes of the model could be fed back into the pipeline, both to further improve the model’s accuracy and to identify biased outcomes. A number of tools have been developed to help assess the level of bias in a model; a minimal sketch of two such checks follows.
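As an illustrative sketch (the data and function names below are assumptions, not from the article), two widely used model-level checks can be computed directly from held-out predictions: the demographic parity difference (the gap in positive-prediction rates between groups) and the true positive rate gap (an equalised-odds-style check). Purpose-built libraries such as Fairlearn and IBM’s AIF360 package these and many related metrics.

```python
import numpy as np

# Hypothetical held-out results: true labels, model predictions, and a
# sensitive attribute (all illustrative assumptions).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 1, 0])
group  = np.array(["F", "F", "F", "F", "M", "M", "M", "M"])

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def true_positive_rate_gap(y_true, y_pred, group):
    """Gap in recall (TPR) between groups - an equalised-odds-style check."""
    tprs = []
    for g in np.unique(group):
        # Among the truly positive cases in this group, how many did
        # the model correctly predict as positive?
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return max(tprs) - min(tprs)

print(f"Demographic parity difference: {demographic_parity_difference(y_pred, group):.2f}")
print(f"True positive rate gap:        {true_positive_rate_gap(y_true, y_pred, group):.2f}")
```

Note that these two definitions of “fairness” can disagree on the same model, which is why the fact-based dialogue described above about which definition suits the application matters as much as the measurement itself.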

3. Implementation of the model/algorithm

The global machine learning market was valued at US$1.58B in 2017 and is expected to reach US$20.83B by 2024 [3]. This rapid growth in the value and uptake of machine learning means that companies looking to adopt or expand their machine learning capabilities should take the issues raised here into consideration as early as possible.