Trust issues: a roadmap for building confidence in AI

From detecting fraud and correcting typos to powering search engines and saving money, Artificial Intelligence (AI) is expanding its reach across business, government and our personal lives.

An artificial brain. Image: UQ

Yet as AI’s reach grows, public confidence that it is being developed and used in an ethical and trustworthy manner remains low.

A world-first research project led by the UQ Business School in partnership with KPMG has discovered 72 per cent of people don’t trust AI.

Organisational trust is an issue that Business School Professor of Management Nicole Gillespie has studied for more than 20 years and one that forms a key pillar of the global values of professional services firm KPMG.

Bonded by this shared interest, the two institutions formalised a partnership in 2019, creating the KPMG Chair in Organisational Trust.

They embarked on a series of research projects examining how organisations can build and sustain the trust of their stakeholders and design trustworthy systems.

A concept design of artificial intelligence. Image: UQ

KPMG Futures Partner in Charge James Mabbott said the partnership was a joint response to a growing awareness and understanding that organisational trust demanded attention and resources.

“Our goal in coming together was to combine KPMG’s business expertise with UQ’s research into organisational trust, to support organisations to take a long-term, strategic whole-of-business approach to restore and strengthen trust,” he said. 

The results of their groundbreaking study Trust in Artificial Intelligence – which surveyed more than 6000 people in Australia, the US, Canada, Germany and the UK – together with their recent report Achieving Trustworthy AI offer a clear, practical roadmap for organisations to build trust in their AI use.

“Our five-country study was designed to provide an evidence-based deep-dive into people’s trust in AI systems,” Professor Gillespie said.

“The findings highlight several strategies organisations can use to enhance trust in AI, including establishing independent ethical review boards and appropriate governance mechanisms to ensure the risks of AI systems are identified and mitigated before and during use.”

The UQ-led research project also showed that organisations can bolster trust by proactively building employee and customer understanding of AI systems.

“Understanding AI is a key driver of trust, yet most people report limited understanding,” Professor Gillespie said.

“When people understand AI, they can make more informed choices, recognise and identify issues and intervene before they escalate.

“AI has many benefits, both to organisations and society, but it is also a technology that poses unique risks and challenges, such as explaining how it generates recommendations and maintaining appropriate oversight of automated decisions.

“Many organisations are still at an early stage of maturity in establishing the necessary technical and governance foundations to ensure ethical AI use.

“As with any new, powerful technology, trust is critical to its acceptance.”

What will the future of AI look like? Image: Adobe Stock/Maxima

Mr Mabbott said AI was described by World Economic Forum founder Klaus Schwab as a “fusion of technologies that is blurring the lines between the physical, digital and biological spheres”.

“This fusion speaks to the more sophisticated opportunities for AI such as precision medicine, autonomous vehicles, digital twins and augmented humans; a fusion that will be much harder to realise, slower to achieve and economically expensive if we don’t trust in the technology,” he said.

“To build trust in AI, organisations and governments must meet society’s expectations of trustworthy and ethical AI, build strong and robust regulatory frameworks and help educate Australians to strengthen our AI literacy.”

In addition to making the Trust in Artificial Intelligence report publicly accessible – and delivering insights via webinars, published articles, executive education, presentations and industry engagement – Professor Gillespie’s research team is working with KPMG to develop practical toolkits for organisations.

“This research doesn’t stop at publication,” Professor Gillespie said.

“We are developing tools and maturity indexes to help organisations identify their strengths and areas of development in building and sustaining trust with their stakeholders and supporting trustworthy conduct in line with best practice.”

“The development of new business models, products and services can happen at speed when trust is high and stalls considerably when trust is low,” Mr Mabbott added.

“In a world where the pace of change seems to be ever-increasing and new technologies abound, trust is the flywheel around which adoption and uptake will accelerate.

“Our work with UQ is one way to support the successful implementation of these technologies while strengthening trust between all stakeholders.”


Faculty of Business, Economics and Law
Business School

Professor Nicole Gillespie. Image: UQ

Professor Nicole Gillespie is the co-lead of UQ Business School’s Trust, Ethics and Governance Research Hub.

Email: n.gillespie@business.uq.edu.au
Twitter: @DrNGillespie
LinkedIn: Professor Nicole Gillespie

Professor Nicole Gillespie in the University of Queensland's Great Court. Image: UQ