How to avoid algorithmic decision-making mistakes: lessons from the Robodebt debacle

Video Credit: Adobe Stock / flashmovie #216890559

The unprecedented amount of data generated in society today has incentivised governments to automate citizen-facing services with algorithmic decision-making systems.

While there are benefits to using algorithmic decision-making, including efficiency, cost savings and operational transparency, its use can also have unintended negative consequences. This was the case with Centrelink’s Online Compliance Intervention program – otherwise known as Robodebt.

Image credit: Adobe Stock / zenzen #297527000

Robodebt: what happened?

In June 2021, a Federal Court Judge approved a settlement worth at least A$1.8 billion for people wrongly pursued by the federal government's Robodebt scheme.

The court found that, during the Robodebt scheme, the Commonwealth had unlawfully raised A$1.73 billion in debts against 433,000 people. Of this, A$751 million was wrongly recovered from 381,000 people. Settlement payments to eligible group members involved in the Robodebt class action are due to be finalised by May 2022.

Previously, the use of decision-making algorithms to identify and recover welfare support overpayments still involved human oversight. Robodebt, by contrast, was an automated debt assessment and recovery program that relied solely on a data-matching system to issue debt notices to welfare recipients.

It followed an algorithm that compared welfare recipients’ records with income data from the tax office, worked out the amount of overpayment and pursued refunds where relevant.

However, inherent flaws in the algorithmic decision-making program set off a cycle of increasing distress among affected citizens, especially those in vulnerable positions. It also disrupted the work of Centrelink employees, who came under unprecedented strain from the surge of contact with distressed citizens.

Ultimately, while the program was launched with financial savings in mind, it resulted in no financial gain and significant costs. Sustaining the program severely tarnished the reputation of Centrelink and eroded public trust in the government’s ability to manage social services.

How did this application of algorithmic decision-making technology go so awry? And what can organisations learn from this debacle to ensure they don’t repeat past mistakes?

Algorithmic decision-making

Data-driven decision making and autonomous systems have opened a world of possibilities previously unfathomable to organisations. The trouble arises when data and automation are used as a silver bullet to address complex problems without appropriate human oversight and intervention.

After researching this case, University of Queensland (UQ) Business School experts Dr Tapani Rinta-Kahila, Dr Ida Asadi Someh, Professor Nicole Gillespie and Professor Marta Indulska, together with Professor Shirley Gregor from the Australian National University, found that both organisational and technical failures contributed to the many flaws and ultimate downfall of the Robodebt scheme.

Learn to lead conversations around algorithmic decision-making with UQ's Master of Business Analytics.

Image credit: Adobe Stock / lidiia #364816477

Why was the Robodebt scheme established?

Reducing national debt has been an objective of Australian political parties for years. One strategy to reduce debt was to recoup welfare support overpayments. Between 2010 and 2013, it was estimated that over 860,000 recipients of Government benefits owed the Government an average of A$1400 due to discrepancies in their accounts.

How did Robodebt work?

Using automated technology, the Robodebt scheme was designed to balance the budget by clawing back A$2.3 billion in these welfare overpayments. But ultimately, flaws in the algorithmic decision-making system ended up causing severe distress to citizens and welfare agency staff. The debacle also caused widespread distrust in, and economic penalties for, the Australian Government. How could this situation have been avoided?

Why did the Robodebt scheme fail?

According to Dr Rinta-Kahila and colleagues, one way to understand why the Robodebt scheme failed is to recognise how organisations and their staff members are constrained by limits. These limits exist within and outside the organisation. For instance, the design of a technical system limits its performance and capability; managerial policies set limits to what workers can do; and biases caused by cultural and socio-political conditions can limit managers’ ability for rational decision-making.

1. Technical flaws: a simplistic algorithm that couldn’t cope with real-life complexity

One design flaw of the algorithm used to make debt collection decisions was that it drew data from two different government systems: one belonging to Centrelink and the other to the Australian Taxation Office (ATO). The two systems recorded citizens’ income data in different formats – Centrelink’s system used fortnightly figures, while the ATO’s stored annual income data.

The algorithm averaged a citizen’s annual earnings reported to the ATO across fortnights, matched the averaged figures against the welfare benefits received, and calculated potential overpayments from that comparison. Because the formula used fortnightly averages instead of actual earnings in the fortnight in question, it produced exaggerated or entirely false debts.
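
To make the averaging flaw concrete, here is a minimal sketch in Python. The benefit rate, income-free threshold and taper rate are invented for illustration (they are not Centrelink’s actual parameters); the point is simply that smearing annual income evenly across 26 fortnights manufactures a ‘debt’ for anyone whose earnings were lumpy, such as a casual worker.

```python
# Minimal sketch of the averaging flaw. All parameters are hypothetical.

FORTNIGHTS_PER_YEAR = 26
INCOME_FREE_AREA = 450.0   # hypothetical fortnightly earnings threshold
TAPER_RATE = 0.5           # hypothetical benefit reduction per dollar above it
BASE_RATE = 550.0          # hypothetical full fortnightly benefit

def entitlement(fortnight_income: float) -> float:
    """Benefit payable for one fortnight, given actual earnings in it."""
    excess = max(0.0, fortnight_income - INCOME_FREE_AREA)
    return max(0.0, BASE_RATE - TAPER_RATE * excess)

# A casual worker earned $26,000, but all of it in 10 fortnights.
actual = [2600.0] * 10 + [0.0] * 16

# What they were correctly paid, based on actual fortnightly earnings:
paid = sum(entitlement(income) for income in actual)

# Robodebt-style calculation: average the ATO's annual figure over 26
# fortnights, as if income had been earned evenly all year.
averaged = 26000.0 / FORTNIGHTS_PER_YEAR          # $1,000 every fortnight
assumed = sum(entitlement(averaged) for _ in range(FORTNIGHTS_PER_YEAR))

print(f"Correctly paid:          ${paid:,.2f}")            # $8,800.00
print(f"Entitlement if averaged: ${assumed:,.2f}")         # $7,150.00
print(f"Phantom debt raised:     ${paid - assumed:,.2f}")  # $1,650.00
```

Under these invented parameters, the worker was paid exactly what the rules entitled them to, yet the averaged recalculation raises a A$1,650 debt that never existed.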

Other ways the simplistic algorithm failed to account for real-life complexities included:

  • Struggling to account for different spellings of employer names across the two databases (see the sketch below).
  • Being unable to account for citizens’ unique work-history circumstances, such as casual work.

As a result, a significant number of citizens received a 'Robodebt notice' that didn’t reflect what they owed to the government (or whether they owed anything at all).
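
The employer-name issue is just as easy to reproduce. The sketch below uses an invented employer name to show how a naive exact-string comparison treats one employer as two, and how even crude normalisation (which the system evidently lacked) closes much of the gap. A failed match can make income reported to the ATO look undeclared to Centrelink.

```python
import re

# Invented records: the same employer as it might appear in two databases.
centrelink_employer = "Acme Pty. Ltd."
ato_employer = "ACME PTY LTD"

# Naive exact comparison, as a simplistic matcher might perform:
print(centrelink_employer == ato_employer)   # False: same employer, no match

def normalise(name: str) -> str:
    """Crude normalisation: uppercase, drop punctuation, squeeze whitespace."""
    cleaned = re.sub(r"[^A-Z0-9 ]", "", name.upper())
    return re.sub(r"\s+", " ", cleaned).strip()

print(normalise(centrelink_employer) == normalise(ato_employer))  # True
```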

"The debt identification process relied on the automatic matching of two incompatible datasets. As such, the debt sums were based on pure speculation by the system. The debt collection process, in turn, resembled extortion – citizens were effectively scared to accept the debt and pay up. And many did,” says Dr Rinta-Kahila.

Learn how to develop an information resilience plan to avoid data errors.

2. Complex, confusing interfaces and lack of explanation

The interface of the Robodebt system (debt recovery letters and myGov portal) aggravated the debacle in several ways:

  • Not explaining the debt-recovery process, how the debt being recovered was calculated, or that the calculation could be inaccurate.
  • Not explaining that citizens could ask for an extension or be assisted by a compliance officer if they had problems paying the debt.
  • Omitting the 1800 compliance helpline telephone number from the debt letter.
  • Making debts difficult to understand and contest via a complex, hard-to-use online portal that was especially challenging for people with limited access to online technologies.

3. Limited human oversight

The Robodebt system shifted the responsibility of calculating potential overpayments from humans to an algorithm. It also transferred the responsibility of validating the algorithm’s calculations from Centrelink staff to the ordinary citizens who were affected.

While the new system largely automated a very manual process and reduced costs, removing human agency and oversight meant that there were virtually no safeguards in place to counterbalance the algorithm’s flaws.

Previously, human caseworkers used a data-matching system to identify potential debtors and also manually checked information for accuracy before contacting the recipient or their employer to clarify any discrepancies. The automated system independently estimated welfare overpayments and sent debt notification letters to citizens without human scrutiny. There were no checks of accuracy with the recipient or employer.

Before 2016, Centrelink had legally assumed responsibility for establishing the existence of a debt before pursuing it, and it provided support for citizens who had questions or wanted to resolve discrepancies. The Robodebt scheme effectively shifted this responsibility to the individual, requiring them to obtain old bank statements or salary receipts from previous employers if they disagreed with the debt claim.

"Before Robodebt, human caseworkers had leveraged a data-matching system to identify potential debtors. The system’s approximations had been treated as ‘raw data’ that needed to be validated through manual investigation; for example, by contacting the citizen or their employer to clarify any discrepancies. This process changed with Robodebt to full automation that removed human safeguards.”
Dr Rinta-Kahila
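
That pre-2016 safeguard maps naturally onto a simple gating pattern. The following sketch is a hypothetical design, not the agency’s actual code: the algorithm’s estimate is stored as an unverified lead, and the system refuses to issue any notice a human has not confirmed.

```python
# Hypothetical design sketch of a human-in-the-loop safeguard: the matcher's
# output is an unverified lead that a caseworker must confirm before any
# debt notice can be issued.

from dataclasses import dataclass

@dataclass
class DebtLead:
    citizen_id: str
    estimated_debt: float    # raw output of the data-matching algorithm
    verified: bool = False
    confirmed_debt: float = 0.0

def caseworker_verify(lead: DebtLead, confirmed_debt: float) -> None:
    # A human checks payslips or bank records before anything goes out.
    lead.confirmed_debt = confirmed_debt
    lead.verified = True

def issue_notice(lead: DebtLead) -> str | None:
    if not lead.verified:
        raise PermissionError("No notice without human verification")
    if lead.confirmed_debt <= 0:
        return None   # the algorithm's estimate did not survive scrutiny
    return f"Citizen {lead.citizen_id} owes ${lead.confirmed_debt:,.2f}"

lead = DebtLead("C-001", estimated_debt=1650.0)   # inflated by averaging
caseworker_verify(lead, confirmed_debt=0.0)       # human finds no real debt
print(issue_notice(lead))                         # None: no letter is sent
```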

4. Lack of governance or best practice

Centrelink has been criticised for not involving relevant stakeholders, such as the Digital Transformation Agency (DTA), the ATO, legal experts and domain specialists, in the development of the Robodebt scheme. In fact, the DTA said it had been “locked out” of the process entirely. Additionally, little testing or piloting was conducted when the new system was implemented.

5. Political motivations

UQ’s experts argue that the government decision-makers responsible for rolling out the program exhibited tunnel vision. They framed welfare non-compliance as a major societal problem and saw welfare recipients as suspects of intentional fraud. Balancing the budget by cracking down on the alleged fraud had been one of the ruling party’s central campaign promises.

As such, there was a strong focus on meeting financial targets with little concern over the main mission of the welfare agency and potentially detrimental effects on individual citizens. This tunnel vision resulted in politicians’ and Centrelink management’s inability or unwillingness to critically evaluate and foresee the program’s impact, despite warnings. And there were warnings.

For instance, when a risk management plan was made, it recognised only the risk of insufficient resources, overlooking the scheme’s potential impact on service delivery, citizen experience or reputational harm. The Australian Council of Social Services (ACOSS) also warned that any automated and aggressive debt recovery program “could lead to significant hardship for vulnerable people”.

Even when it became clear the system was flawed, the economic and political incentives and path dependencies prevented government leaders from making any major changes — this ‘locked’ them on a path toward failure.

Image credit: Adobe Stock / rootstocks #345328536 & Adobe Stock / Photo Sesaon #483684256 & Adobe Stock / cherezoff #98367318 & Adobe Stock / rogerphoto #198900001


How can AI and algorithms be used ethically in decision-making?

The Robodebt case warns managers against considering algorithmic decision-making systems as a silver bullet that will yield automatic benefits. To use artificial intelligence and algorithmic decision-making systems successfully, governments and managers need to be sensitive to the context in which the system is being deployed and attentive to any negative feedback signals. Significant efforts need to be invested in testing the robustness and accuracy of algorithms both prior to release and periodically throughout their lifecycles. This way, managers can ensure the algorithm is fit-for-purpose and works as intended.
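
What might such testing look like in practice? One simple form is a regression harness that replays the algorithm against historical, human-verified cases before release. The sketch below is a hypothetical illustration: debt_algorithm is a stand-in with invented parameters, and verified_cases are made-up ground-truth records.

```python
# Hypothetical pre-release check: replay the algorithm against cases whose
# outcomes humans have already verified, and block deployment on disagreement.

def debt_algorithm(annual_income: float, benefits_paid: float) -> float:
    """Stand-in for the system under test: the debt it would raise."""
    averaged = annual_income / 26
    entitled = max(0.0, 550.0 - 0.5 * max(0.0, averaged - 450.0)) * 26
    return max(0.0, benefits_paid - entitled)

# Invented ground truth: (annual income, benefits paid, human-verified debt).
verified_cases = [
    (26000.0, 8800.0, 0.0),      # casual worker, correctly paid
    (12000.0, 14300.0, 0.0),     # steady low earner, correctly paid
    (26000.0, 10000.0, 1200.0),  # genuine, human-confirmed overpayment
]

TOLERANCE = 1.0   # dollars of disagreement tolerated per case

failures = [
    (annual, paid, true_debt, debt_algorithm(annual, paid))
    for annual, paid, true_debt in verified_cases
    if abs(debt_algorithm(annual, paid) - true_debt) > TOLERANCE
]

print(f"Disagrees with human review on {len(failures)}/{len(verified_cases)} cases")
if failures:
    print("Do not deploy: the algorithm contradicts verified outcomes")
```

Run against these invented cases, an averaging-based algorithm disagrees with the human-verified outcome in all three; a check this simple, run before release, would have flagged the core flaw.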

Further, managers should understand that automation in the absence of human agency can have drastic unintended consequences for different stakeholders, especially in the context of social service delivery. Instead of disempowering their domain-expert staff, a wiser path is to pursue efficiency by using technology to augment them. This involves investing in staff’s technical skills and their understanding of algorithmic fairness and ethics.

“We hope this debacle has awakened both government and businesses to the risks of using data and algorithms to automate public services. These technologies can benefit society by producing notable cost savings and service improvements, but only if they are implemented responsibly.”
Dr Rinta-Kahila

Learn the data management and AI skills to lead conversations around algorithmic decision-making with UQ's Master of Business Analytics program.

Insights provided by UQ Business School Trust, Ethics and Governance Alliance experts

Dr Tapani Rinta-Kahila is a lecturer in the Business Information Systems discipline at UQ Business School.

Professor Nicole Gillespie is a Professor of Management at UQ Business School.

Dr Ida Asadi Someh is a senior lecturer in the Business Information Systems discipline at UQ Business School.

Professor Marta Indulska is a Professor and Leader of the Business Information Systems discipline at UQ Business School.