The ethics of AI: Q&A with Associate Professor Sarah Kelly

Image: Getty Images / Design Cells / Yuichiro Chino

There is growing evidence and concern that the algorithms and data underpinning AI can produce bias and ethical injustice. Associate Professor Sarah Kelly discusses the governance and data management considerations necessary to ensure the ethical implementation of AI.

Imagine you’re responsible for programming an autonomous vehicle and are faced with a moral dilemma. You need to write the algorithm that will make decisions for the car, anticipating situations in which each possible choice carries different moral implications.

For example, a runner trips and falls onto the road, and the only way for the autonomous car to avoid hitting them is to swerve into oncoming traffic. Should you tell the car to continue on its original path, harming and probably killing the runner? Or should it swerve to protect the runner, putting its own occupants at risk, even though there’s no way to predict what that action will lead to?

Which is the more ethical option? Or, more simply: what is the right thing to do?

Such ethical dilemmas have been studied extensively in moral psychology research since 2001, but they have also become significant to the design of autonomous vehicles and other forms of Artificial Intelligence (AI), many of which are already being integrated into our societies.

One example of machine learning already in use is the COVIDSafe tracing app, which helps identify COVID-19 clusters. AI and worldwide ‘big data’ from the health sector are also being used to accelerate the development of a COVID-19 vaccine.

AI can even help banks decide who to extend a loan to. From these examples, it’s clear that many AI applications have high-stakes consequences for the individuals involved, so the ethical and regulatory considerations related to the development of AI decision making should be a priority.

Yet increasingly, AI algorithms have also begun to determine fairness for us. They are now responsible for deciding who sees housing ads and even who gets hired or fired. However, it isn’t CEOs or customer experience teams defining the fairness of AI decisions. This often still sits with software engineers, who are being asked to articulate what it means to be fair in the code they build for AI and the data from which it learns.

Image: Getty Images/ Artur Debat

Aerial view of a car driving on a road with a grid overlaid.

According to Associate Professor Sarah Kelly, regulators around the world are now grappling with the question: how can you mathematically quantify fairness?

“While AI provides great opportunity for efficient decision making and prediction, it also presents ethical dilemmas associated with defining the fairness measures that must underpin AI code and with ensuring the data it learns from is unbiased.

“AI can be just as flawed and biased as human decision making if it is not designed and regulated correctly.”
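
As a concrete illustration of what ‘mathematically quantifying fairness’ can mean in practice, here is a minimal sketch (not from the interview) of one widely used metric, demographic parity: the gap in favourable-outcome rates between two groups. The loan decisions below are entirely hypothetical.

```python
# Minimal sketch of one common fairness metric: demographic parity.
# All data is hypothetical, for illustration only.

def selection_rate(decisions):
    """Fraction of decisions that were positive (e.g. loan approved)."""
    return sum(decisions) / len(decisions)

# Hypothetical loan decisions (1 = approved, 0 = declined) for two groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]
group_b = [1, 0, 0, 0, 1, 0, 0, 0]

rate_a = selection_rate(group_a)
rate_b = selection_rate(group_b)

# Demographic parity difference: 0.0 would mean equal approval rates.
parity_gap = abs(rate_a - rate_b)
print(f"Approval rates: A={rate_a:.2f}, B={rate_b:.2f}, gap={parity_gap:.2f}")
```

As the interview goes on to note, reducing fairness to a single number like this trades away nuance; such metrics are one input to an ethical assessment, not the whole of it.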

Associate Professor Sarah Kelly presenting at The University of Queensland Business School.

Regulation, trust and ethics, and governance expert, Associate Professor Sarah Kelly.

Sarah shares some insights into the ethical considerations businesses, regulators and citizens need to weigh up when developing and adopting AI.

Why is ethics a concern with AI?

Ethics is a concern with AI mainly because AI wasn’t designed with ethics in mind. Designers are now reverse-engineering AI data sets to better incorporate ethics as a priority.

On a framework level, AI raises concerns under three ethical principles: transparency, accountability and fairness (freedom from bias).

There is a lack of public understanding about when, where and how AI is being used, and about the evidence, principles and assumptions on which AI predictions and decisions are based. This contributes to a lack of transparency.

Another ethical concern is that it generally isn’t established who is responsible for AI decisions or for the flow-on effects those decisions can have, meaning that there’s a lack of accountability.

The third concern, bias, arises because AI can only be as ethical as the people who select its training data, program it and audit it.

Are there examples of companies that have used AI with unintentional ethical implications?

"One example is how Amazon stopped using a hiring algorithm after finding it favoured applicants based on words like 'executed' or 'captured' that were more commonly found on men’s resumes."

Another way bias can creep into the design of AI is through flawed data sampling, such as when groups are over- or underrepresented in the training data. For example, one study found that facial analysis technologies had higher error rates for minorities, particularly minority women, potentially because they were underrepresented in the training data.
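
As a hedged sketch of how such sampling bias might be caught before training, the snippet below simply tallies group representation in a hypothetical training set. The group names, proportions and the 15% flagging threshold are illustrative assumptions, not a standard.

```python
from collections import Counter

# Hypothetical check for sampling bias before training a facial-analysis
# model. Group names and the 15% flag threshold are illustrative only.
training_labels = ["group_1"] * 800 + ["group_2"] * 150 + ["group_3"] * 50

counts = Counter(training_labels)
total = len(training_labels)
for group, n in sorted(counts.items()):
    share = n / total
    flag = "  <- possibly underrepresented" if share < 0.15 else ""
    print(f"{group}: {n} samples ({share:.0%}){flag}")
```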

Who should be responsible for the ethics of AI?

Interestingly, robots are often perceived as more trustworthy than humans due to their neutrality. But whenever you turn philosophical notions of fairness into mathematical expressions, they lose nuance, flexibility and some degree of accuracy in decisions.

We need to look at how to encourage a power shift in the decision-making process when it comes to the design of more ethical AI. Currently, the people who make such decisions are mainly computer scientists, rather than the CEOs, customer experience teams, trained ethicists, lawyers, policy makers or citizens most affected by these decisions.

Image: Getty Images / spencer_whalin

Facial recognition on a smartphone

We also need to consider whether regulation is helpful and how this might work in practice.

"For example, should the law dictate the ethical standards of all autonomous vehicles or should autonomous car owners or drivers determine the ethical values of their car?"

AI can predict anti-social behaviour in citizens, but should the entities holding these results have an obligation to prevent the harm they predict? Who should be held accountable for these predictions and possible consequent actions?

Artificial Intelligence monitoring traffic

Image: Getty Images / Dong Wenjie

What are some governance concerns of AI?

AI surveillance tools such as facial recognition, smart city platforms and smart policing are increasingly being used by states and organisations around the world to monitor and target citizen actions to meet policy objectives, like reducing energy consumption and traffic congestion. In some cases, this surveillance is legal, but in others it is ethically questionable.

What steps are already being taken to make AI more ethical?

In recent years, there has been increased research on the topic of fairness and bias in machine learning models. This is not surprising, as fairness is a complex and multi-faceted concept that depends on culture and context.

Even as fairness definitions and metrics evolve, researchers have made progress on a wide variety of techniques to help AI systems meet them. Examples include processing data beforehand (pre-processing), altering system-made decisions afterwards (post-processing) and incorporating fairness definitions into the training process itself (in-processing).
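
As one illustration of the first family (processing data beforehand), here is a minimal sketch of reweighing, a well-known pre-processing technique that reweights training examples so that group membership and the favourable outcome become statistically independent. The tiny dataset is hypothetical and only meant to show the idea.

```python
from collections import Counter

# Illustrative 'pre-processing' example: reweighing training examples so
# that group membership and the favourable outcome become statistically
# independent. The tiny (group, label) dataset below is hypothetical.
samples = [
    ("a", 1), ("a", 1), ("a", 1), ("a", 0),
    ("b", 1), ("b", 0), ("b", 0), ("b", 0),
]

n = len(samples)
group_freq = Counter(g for g, _ in samples)
label_freq = Counter(y for _, y in samples)
joint_freq = Counter(samples)

# Weight = P(group) * P(label) / P(group, label): combinations that are
# rarer than independence predicts get up-weighted, common ones down-weighted.
weights = {
    (g, y): (group_freq[g] / n) * (label_freq[y] / n) / (joint_freq[(g, y)] / n)
    for (g, y) in joint_freq
}

for (g, y), w in sorted(weights.items()):
    print(f"group={g}, label={y}: weight={w:.2f}")
```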

How can companies ensure their automated AI technology can be trusted?

Here are seven steps I recommend to help mitigate AI bias:

1. Identify when AI can correct bias and when humans can correct AI (often AI mirrors flaws that already exist in human decision making).

2. Introduce algorithmic audit processes before, during and after processing (a minimal audit sketch follows this list).

3. Employ an algorithmic ethicist.

4. Educate yourself and your employees by engaging in discussions and continuing to learn about ethical best practice in the AI industry.

5. Promote the diversity of employees within the AI field.

6. Critique decision making through an ethical lens and test how humans perceive the fairness of the outcomes.

7. Reflect as a citizen and not as a consumer.
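
To make step 2 concrete, below is a minimal, hypothetical sketch of one check an algorithmic audit might run at each stage: the disparate impact ratio, often compared against the ‘four-fifths rule’ from US employment guidance. The data, group names and exact threshold are illustrative assumptions, not a prescribed audit standard.

```python
# Hypothetical audit helper for step 2: the same check can run on training
# data (pre), validation predictions (during) and live decisions (post).
# The 0.8 threshold echoes the 'four-fifths rule' from US employment guidance.

def disparate_impact_ratio(outcomes_by_group):
    """Ratio of the lowest group selection rate to the highest."""
    rates = [sum(v) / len(v) for v in outcomes_by_group.values()]
    return min(rates) / max(rates)

def audit(stage, outcomes_by_group, threshold=0.8):
    ratio = disparate_impact_ratio(outcomes_by_group)
    status = "OK" if ratio >= threshold else "FLAG FOR REVIEW"
    print(f"[{stage}] disparate impact ratio = {ratio:.2f} -> {status}")

# Illustrative favourable-outcome flags (1 = favourable) at two stages.
audit("pre-processing", {"a": [1, 1, 0, 1], "b": [1, 0, 1, 1]})
audit("post-processing", {"a": [1, 1, 1, 1], "b": [1, 0, 0, 0]})
```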

Image: Getty Images / Atem Peretaitko

Businesswoman watching a technology display

With AI embedded in all sectors, it is critical to have a deep understanding of how it operates, how it is programmed and who is accountable for it, in order to ensure fairness and transparency in AI outcomes. These outcomes affect us as societies, organisations and citizens.

“At the core of AI, there needs to be a diversity of human input, consensus and monitoring of fairness measures."

Learn more about the ethical considerations of AI with a Master of Business Analytics

Register now