Human at the helm:
experts share the leadership qualities
no AI algorithm can replace
Featured UQ experts Professor Marta Indulska, Associate Professor Chad Chiu and industry expert Dr Gayan Benedict
To say AI is advancing swiftly is an understatement – from telehealth apps using AI for patient triage to logistics firms automating supply chains, Australian organisations are embracing AI at pace.
Recent data from the Australian Bureau of Statistics shows businesses have more than doubled their investment in AI, allocating $668.3 million to AI research and development in 2023-24 – a sign of rapid growth.
Nowhere is this transformation felt more keenly than in the workplace.
From streamlining operations to redefining roles and reshaping organisational structures, AI is rewriting the rules of business.
But amid this technological revolution, the real challenge for leaders isn’t simply keeping up: it’s harnessing AI’s momentum to build more adaptive, human-centred cultures rather than allowing the technology to erode them.
As AI applications automate routine tasks and influence key business decisions, the human dimensions of leadership – empathy, creativity and accountability – become even more critical.
UQ Business School experts Professor Marta Indulska and Associate Professor Chad Chiu join UQ alum and industry expert Dr Gayan Benedict to share their insights in a thought-provoking Q&A about how to lead with humanity in the age of AI.
Technology perspective
Professor Marta Indulska, UQ Business School
With a background in computer science and information systems, Professor Indulska has long been intrigued by the business impact of emerging technologies. She believes transparency, trust and ethical judgement are now essential additions to every leader’s AI toolkit.

Q. Which uniquely human skills will remain essential for leaders over the next 5 years as AI becomes more integrated into the workplace?
Creativity is an innate skill that AI won’t replace. While Generative AI (GenAI) output can seem impressive, it is ultimately a synthesis of existing content. As AI becomes more pervasive, leaders need to foster innovative environments where technologies like AI are applied to generate the most value and create a competitive advantage. This means using AI to augment human creativity, not replace it.
Equally important is the human ability for ethical judgement. It will become imperative that leaders establish clear ethical boundaries and adequate oversight for the responsible development and use of AI.
Relationship building and communication skills are crucial for leaders as AI becomes widely adopted. We’ve seen the consequences of unchecked AI use play out in the media, from consultants preparing reports with AI-generated errors to newspapers publishing AI-assisted reading lists with books that didn’t exist. Instances like these highlight a lack of transparency between contractual partners and a poor understanding of how we might use AI to assist work.
Q. How can leaders integrate human-centred principles into the design and deployment of AI systems?
Leaders who embed effective governance mechanisms within an organisation have the best chance of ensuring responsible, transparent and well-informed AI use. This commitment means constantly monitoring AI model performance, ongoing human oversight and open conversations about AI use. There’s a critical need for higher levels of AI literacy in the workplace, so upskilling employees is a key priority.
Explainability and escalation mechanisms are also crucial features. AI systems should provide explanations of how they came to a particular recommendation, and humans should have access to escalation mechanisms to adjust the AI output if needed. Accountability is also essential – when AI makes mistakes, a human has to be held accountable.
Q. What practical steps can leaders take to lead with humanity in an increasingly automated world?

It’s difficult to lead confidently in the age of AI without first understanding the technology. A certain level of AI fluency is needed to lead effectively in this new era.
1. Dedicate time each week to learn
Subscribe to podcasts, commit to a short course, read a book, talk to your teams and ask them to explain concepts you don’t understand.
2. Never blindly trust an AI system
The technology offers tremendous value, but over-reliance is dangerous. Question the assumptions of the system, the quality of the data it’s trained on and the potential impacts of any decisions.
3. Model transparency by being open about how you use AI in your work and decisions
This strategy will help you identify any areas of personal discomfort regarding responsible use and encourage others to be forthcoming about their AI use.
Organisational leadership perspective
Associate Professor Chad Chiu, UQ Business School
As a behavioural scientist specialising in leadership and organisational behaviour, humanity and humility sit at the core of Dr Chiu’s work. He believes these qualities are essential for leaders to maintain a people-centred focus as they integrate AI into their organisations.
Q. Looking ahead to the next 5 years, which human capabilities will remain vital for leaders as AI becomes more deeply embedded in the workplace?
By definition, leadership is a social influence process through which some individuals (leaders) motivate and enable others (followers) to achieve a common goal. We can’t automate this human process.
Even as AI becomes more integrated into workplaces, leaders will still need uniquely human skills:

- Sense-making
Interpreting complex and uncertain environments and identifying what truly matters.
- Empathic communication
Articulating shared goals and aligning diverse people behind them.
- Relationship-building
Cultivating trust and inclusion by building connections beyond their supervised group or their organisation.
- Moral discernment
Balancing efficiency and ethics when technology may optimise for only one.
These abilities connect people to purpose, which is something no existing algorithm can replicate.
Q. What cultural shifts are essential for organisations to lead responsibly and humanely through AI-driven transformation?
We should use AI to amplify human judgement and connection, not as an excuse for organisations to assign more direct reports to managers and push people to their cognitive or emotional limits. The technology can process information, but only humans can interpret meaning, build trust and nurture commitment.
The challenge for modern organisations is to ensure that AI frees leaders to lead more thoughtfully, rather than becoming faster administrators of larger teams.
To achieve this goal, organisations could embrace a broader definition of efficiency and performance, one that recognises the importance of relational quality, trust, belonging and psychological safety, rather than focusing narrowly on measurable outputs or financial returns.
Without this broader view, companies risk falling into a bottom-line mentality where short-term productivity gains come at the cost of long-term engagement and innovation. Leaders should be encouraged to question and interpret technology, exercising curiosity, moral judgement and empathy in every decision.
Q. What practical steps can leaders take at an individual level to lead with humanity in an increasingly automated world?
For leaders to embody the broader definition of efficiency and performance, they must value human connection, ethical reflection and sustainable well-being alongside measurable results.
1. Practise intentional humility
Openly acknowledge personal limitations, recognise others’ strengths and demonstrate a willingness to learn. These behaviours build trust and psychological safety, which are essential foundations for teams navigating rapid technological change.
2. Humanise your data
Before acting on data-driven recommendations, ask how each algorithmic decision might affect the people behind the numbers. This approach prevents over-reliance on systems that may unintentionally reinforce bias or neglect human nuance.
3. Prioritise presence and relationships
Use the time saved through automation to strengthen interpersonal connections through mentoring, coaching and informal check-ins. Face-to-face contact is a powerful source of trust and engagement.
4. Treat people as individuals, not categories
Take the time to understand each person’s unique motivations, constraints and values. Personalised leadership decisions are more effective and equitable.
5. Invest in self-care as a strategic resource
Effective leadership depends on sustained energy and emotional balance. Leaders who recharge themselves are better able to make sound judgements, model empathy and avoid burnout.
Industry perspective
Dr Gayan Benedict, Partner at PwC and Industry Research Fellow at MIT's Center for Information Systems Research
Dr Benedict is an experienced technology leader and industry expert. The UQ Bachelor of Commerce and Laws alum is a Partner at PwC, having held senior leadership roles at the Reserve Bank of Australia, Westpac and Salesforce. Central to his focus are the risks and opportunities of AI, as well as the critical need to keep “humans at the helm” to ensure human accountability as companies and broader society scale their adoption of AI.
Q. In an era of increasing workplace automation, which human function will remain critical for leaders over the next 5 years?

The one thing you can't delegate to AI is human accountability. In fact, you have to elevate it.
Generally, accountability sits at the very top of an organisation, and with AI, we need to ensure this continues and that this accountability is traceable through to AI-driven decisions, actions and outcomes across an organisation. There can’t be any room for a situation where AI leads to a bad decision, or performs a task in a way that causes harm, and we’re left wondering who was accountable for that outcome.
Wherever AI has operated without a ‘human-in-the-loop’, it’s nearly always ended up in tears.
One thing humans do that AI machines can't is take accountability for the broader outcomes that are developed and achieved. As leaders, we need to understand the AI we are accountable for – and importantly – exercise the right diligence to ensure it’s performing within our risk appetite, and employing the controls we expect to see in place. Knowing how to lead and govern machines will be a key skill for leaders in the coming years.
Q. How can organisations build principles, controls and leadership structures that ensure human accountability without slowing innovation?
If you want to scale AI, you have to scale AI governance – and that starts with making accountability explicit. For centuries, human accountability was often implicit, with the main challenge being identifying where the buck stopped. However, as we hand off work that humans previously did to AI, the question becomes: who is accountable for the decisions and actions that AI takes?
What we've seen previously with new technology innovations is someone being allocated executive accountability – a Chief Information Officer, a Chief Digital Officer and now, a Chief AI Officer. Often, that's because you need someone at an executive level who can set a direction and who’s also accountable for delivering it.
Organisations need to make this principle of accountability abundantly clear to staff. They may use a tool, but ultimately, it’s their name and reputation on the line, reinforcing this need for explicit accountability. We can’t risk the responsibility for AI actions falling between the cracks. That only ever ends badly when things go wrong.
This approach is much more pressing as we move towards agentic AI. Unlike GenAI – which creates and summarises content – agentic AI is autonomous and can reason, make decisions in dynamic situations and then act digitally.
Q. What practical steps can leaders take to lead with humanity in an increasingly automated world?
1. Become fluent in AI
It’s hard to lead an organisation or a team that's been disrupted by AI if your understanding of the technology stops at the basics. Over the years, leaders have had to elevate their knowledge in key domains – from financial literacy to health and safety, risk and cyber – and now it’s AI’s turn. You can’t delegate fluency in these strategic domains to others, and AI is no different.
2. Lead by experiencing change
Leaders must determine the implications of AI for their own roles to better empathise with how the technology will transform the roles of their team and customers.
3. Future-proof your workforce
Encourage AI use from the bottom up, not just dictated by central teams. The time for command-and-control, top-down AI micro-management is passing. Delegating decision rights is key to unlocking scale, provided you focus on embedding controls and elevating your team’s fluency in the risks and opportunities AI presents.
4. Balance opportunity with risk
Focus on educating staff and leadership on AI’s risks, controls and benefits. This knowledge will help leaders ensure their people are less likely to make inadvertent mistakes while taking advantage of the innovation now possible.
5. Invest in and experiment with AI
It’s crucial to invest in the technology. The initial attitude of many organisations was to say, “you can’t use AI”, but that’s not realistic anymore. It's almost impossible to buy technology that doesn't feature AI in some way. Instead, work out how to introduce the technology safely into the organisation, then encourage staff to experiment and extract value.
The shared perspective
“The future is already here. It's just not evenly distributed.”
According to Dr Benedict, this quote by science fiction writer William Gibson exemplifies the integration of AI into the business world.
“Uneven AI adoption is very much the case in organisations – some are very progressive and considered in their approach, others are lagging behind,” Dr Benedict said.
“AI isn’t something you can delegate anymore; the emphasis has shifted from building the technology to using it. If you have a mobile phone, you have access to a powerful technology, and you need to be accountable for how you use it.”

The experts agree that knowledge and understanding are key to leading organisations through AI’s fast-paced workplace transformation.
“AI is a double-edged sword – the technology comes with significant benefits as well as significant risks,” Professor Indulska cautioned.
She said the better equipped leaders were to understand the implications and applications of AI, the more they could harness its potential to lead with empathy.
Additionally, AI has the potential to reshape leadership priorities and keep people at the centre of an organisation.
“Many task-oriented responsibilities can now be automated or augmented through AI. This shift means that, theoretically, leaders should have more capacity to focus on people-oriented functions,” Dr Chiu said.
To thrive, organisational culture must evolve to balance AI innovation with trust, transparency and inclusion, ensuring technology empowers rather than replaces people. If business leaders adopt and adapt, they can position themselves and their organisations for future success.
Professor Marta Indulska 
Professor Marta Indulska is the Director of Research at UQ Business School. For more than 2 decades, the information systems scholar has examined how data and technology are managed and applied in organisations. Professor Indulska’s current research focuses on AI, from how businesses plan for its adoption to how it’s reshaping the data science profession.
Associate Professor Chad Chiu
Associate Professor Chad Chiu is a behavioural scientist specialising in leadership and organisational behaviour. His work explores how humility, openness and teachability foster trust in leaders. His research also examines workplace influence, from the power of shared leadership models to how leaders can leverage social capital and connections to amplify their impact.
Dr Gayan Benedict
Dr Gayan Benedict is a Partner at PwC, with previous experience in leadership roles at the Reserve Bank of Australia, Westpac and Salesforce. He contributes to global industry insight as a research fellow at MIT's Center for Information Systems Research, as Chair of Standards Australia’s Committee on Blockchain and DLT, and as a 2023 Fulbright Scholar.