Momentum – the Business School Magazine

 

Trust in the machine: a guide to building workplace trust in AI

Featured UQ Business School experts: Dr Steve Lockey and Professor Nicole Gillespie


From chatbots and search engines to fraud detection and traffic navigation, Artificial Intelligence (AI) is ingrained in most industries and organisations.

While global dependency on this new technology is ballooning, humans are still suspicious of AI, particularly in the workplace. The question is – how can organisations build trust in their use of AI at work and encourage employees to buy in?

Findings from Trust in Artificial Intelligence: A global study, a world-first research partnership between The University of Queensland (UQ) and KPMG Australia, may hold the key.

Led by UQ Business School’s KPMG Chair in Organisational Trust Professor Nicole Gillespie, Postdoctoral Research Fellow Dr Steve Lockey, Dr Caitlin Curtis and Dr Javad Pool, the study found only 40% of Australians trust AI in the workplace.

Among workers’ chief concerns were losing jobs to automation, dehumanising decision-making, and murky regulation.
 

Taking a global perspective

The research team surveyed more than 17,000 people from 17 countries in the first global deep dive into public trust and attitudes towards the use of AI.

“Until now, little was known about how trust in AI is experienced by people in different countries or what influences this trust,” Professor Gillespie said.

“Our study takes a global perspective, which is important given AI systems are not bounded by physical borders and are rapidly being deployed and used across the globe.”

Dr Lockey said the report highlighted the critical roles that education, awareness and engagement play in the swiftly evolving technology.

“There has been such a step-change in AI over the past few months that 2023 does seem like a watershed year for its mainstream adoption,” Dr Lockey said.

“Given the increased interest in AI since ChatGPT was released late in 2022, there will undoubtedly be a lot of new AI applications developed in the coming months, too.”
 

Awareness builds trust

The study shows fewer than one in four Australians know if AI is used in their workplace.

Additionally, only one in five Australians report using AI at work regularly, suggesting many are unaware of everyday applications of AI at work, such as facial recognition and email filters.

James Mabbott, KPMG Futures Partner in Charge and a key advisor on the study, said these results underlined the importance of workplace education.

“The reality is that almost every organisation will likely already be using these tools in areas such as cybersecurity, knowledge management, sales and marketing, customer service delivery and business performance,” Mr Mabbott said.

Professor Gillespie said the research revealed that the more people feel they understand AI, the more they trust it.

“As people become more familiar with AI and how it works – and the more they use it at work – the more likely they are to trust and accept it and recognise its benefits,” she said.
 

Keeping the human in human resources

Australians are notably less comfortable with AI use across human resource management, particularly around monitoring, evaluating, and recruiting employees.

Dr Lockey said that people may see AI as “a blunt tool” when it comes to people management.

“Our broader research suggests that people believe AI lacks nuance and the ability to ‘read between the lines’,” he said.

“AI is currently very good at mechanical tasks but not so good in contexts that require uniquely human capabilities – AI doesn’t have empathy, imagination or the ability to reflect.”

Professor Gillespie said people overwhelmingly preferred that humans were involved in making critical decisions about others.

“People worry that their unique capabilities, contributions and circumstances may not be taken into account by AI, that there might be bias in the system and that AI may not be as accurate as humans in taking context into account.”


Read: Avoiding algorithmic decision-making mistakes – lessons from the Robodebt debacle

Augmentation, not automation

Once consigned to the realm of science fiction, the idea of robots – or AI applications – replacing humans in the workforce is now a stark concern for many employees.

Of those surveyed, 42% said they were worried AI would replace jobs in their industry, and 71% disagreed or were unsure that AI would create more jobs than it eliminates.

Dr Lockey conceded that dystopian depictions of AI in popular culture and media were shaping people’s understanding of the technology and its capabilities.

“Academics and futurists, too, have suggested that AI will lead to job displacement at scale in certain industries,” he said.

“However, recent empirical research suggests that this isn’t happening; AI is being used to augment employees rather than replace them.”

Professor Gillespie added that the ability of AI to automate previously labour-intensive tasks also fuelled job redundancy fears, particularly among manual workers.

“We find that most people are comfortable with AI-human collaboration in managerial decision-making and prefer AI involvement to sole human decision-making, with the caveat that humans retain equal or greater input,” she said.

“This finding carries an important implication for technology leaders: while full automation may maximise efficiency and cost reduction, it can undermine trust and acceptance. Balance is required.”

 

5 steps to building trust in AI in the workplace

1. Improve transparency

Organisations and business leaders must first build trust with employees before they, in turn, begin building trust with AI.

Mr Mabbott said demystifying the technology was key.

“By informing your people about where and how you’re using these technologies, the governance processes you have in place and the benefits your organisation seeks to deliver, you can uplift both knowledge and awareness.”

2. Prioritise staff engagement

The research team is united in its assertion that an engaged workforce directly correlates to higher AI trust levels.

“If leaders engage their staff in the processes involved in AI deployment in their organisations — from pre- to post-use — this will engender trust,” Dr Lockey said.

“This process involves taking a collaborative approach to AI implementation that actively consults and involves co-design from employees across multiple business units.”

Mr Mabbott added that several key components were needed to win the hearts and minds of employees and customers.

“We need good governance processes, we need to experiment with the technology in controlled environments, and we need to involve our people and engage them in the process of identifying meaningful applications and benefits,” he said.

3. Establish a robust governance model

In the absence of a comprehensive legislative framework, there are still practical steps that organisations can put in place to support trustworthy AI.

“We still have gaps when it comes to regulating AI technologies,” Dr Lockey said.

“But, in an organisational context, we know what good governance looks like in terms of processes, controls and review mechanisms… and there is nothing to prevent an organisation from putting these in place today.”

Professor Gillespie noted that achieving trusted and trustworthy AI in the workplace requires a whole-of-business approach.

“To help organisations on their journey to ensuring the trustworthy use of AI, we developed a model and practical guidance in our UQ-KPMG report on achieving trustworthy AI,” Professor Gillespie said.

Mr Mabbott said organisations must show leadership in meeting the changing expectations and requirements around responsible and ethical AI use.

“Organisations have a range of tools to gather employee opinions about AI use: pulse surveys, interviews, focus groups, intranet sites, and town halls,” he said.

“Further, CSIRO recently launched a Responsible AI Network for Australian organisations that want practical guidance about how to deploy AI responsibly.”

4. Train and upskill team members

Employees tasked with integrating AI into their workplace must be trained to effectively use and understand the systems.

“Organisations need to upskill staff to be able to engage with AI or retrain them to develop other skills should AI genuinely be able to do their jobs,” Dr Lockey said.

Professor Gillespie said training also empowered employees to seize the benefits of AI and feel comfortable using it as a resource.

“Importantly, it also helps them play an active role in identifying and managing the risks associated with AI, which is particularly important when AI is used in service delivery,” she said.

5. Encourage intergenerational mentoring

The study reveals almost two-thirds of generation Z and millennials trust and feel comfortable using AI at work.

The research team recommended enlisting this demographic of digital natives to share knowledge and support older generations.

“Organisations could engage in reverse mentoring — particularly if older employees have more senior positions — to benefit both parties,” Dr Lockey said.

“For example, younger employees could educate their older colleagues and explain AI’s benefits, while older colleagues could provide the benefit of their experience.”

Learn more and read the full Report, Global Executive Summary, and Individual Country Insights

Learn more about how UQ Business School Trust, Ethics and Governance Alliance researchers are enhancing our understanding of public trust in AI

 

Engage with us

Professor Nicole Gillespie


Professor Nicole Gillespie is the KPMG Chair in Organisational Trust and Professor of Management at UQ Business School. Professor Gillespie leads the School’s Trust, Ethics and Governance Alliance. Her research focuses on building, preserving and repairing trust in organisations and emerging technologies, as well as in contexts when trust is challenged, such as after a trust failure.

 

Dr Steve Lockey


Dr Steve Lockey is a Postdoctoral Research Fellow at UQ Business School. Dr Lockey is a member of the School’s Trust, Ethics and Governance Alliance. His research focuses on topics such as organisational trust and trust repair, and perceptions of emerging technologies in the workplace and society.

 

 

James Mabbott


James Mabbott is Partner in Charge for KPMG Futures. Mr Mabbott leads KPMG’s investment into future technologies, which includes a focus on quantum technologies, AI and robotics, web 3.0 technologies and venture partnerships. Mr Mabbott and his team work at the intersection of signals of change across society, technology, environment, economics and politics to identify and understand emerging trends and their potential to reshape and reimagine our world.