Momentum - The Business School Magazine

 

 

Navigating misinformation, echo chambers and fake news

3 ways your organisation can build digital trust and protect its reputation 

 

Fake news bots. Misinformation. Online extremism. Echo chambers.

They’re all hallmarks of the digital era: an era in which conspiracy theories on everything from the COVID-19 pandemic to climate change have spread like wildfire across social media, forums and chats.

The explosion of misinformation and fake news has had an undeniable impact on democracy, our trust in traditional news media and science, and the business world. Organisations targeted by online misinformation and disinformation campaigns have suffered serious consequences, including falling share prices and a loss of customer trust.

On the other hand, the digital era empowers us to connect with customers and transform how we do business. We can use social media platforms and emerging technologies to create social change and drive business growth.

So, how can organisations navigate the digital landscape while building customer confidence and protecting their reputation? Trust experts from The University of Queensland (UQ) Business School and industry share their insights and top tips to combat misinformation and fake news.

What’s the difference between misinformation, disinformation and fake news?

Misinformation is misleading or false information that’s spread without the intent to deceive others. Case in point: when your well-meaning relative shares an article about unproven COVID-19 treatments on Facebook.  

Dr Marten Risius, researcher at UQ Business School

Conversely, UQ Business School researcher Dr Marten Risius says that disinformation is false information deliberately crafted to deceive, often mimicking real news. Disinformation on social media is also commonly referred to as ‘fake news’. 

“Some people argue that you can share fake news unintentionally, but its key definition is the deliberate spread of false information,” Marten says.  

Look no further than the 2016 United States presidential election, when networks of automated bots and fake accounts were used to share negative messages and manipulate public opinion about the candidates via Facebook and Twitter.

“In 2016, we saw the deployment of Russian bots, fake accounts and remote disinformation,” Marten says.

“Suddenly, you could create different personas or even entire news organisations to share fake news online.

“Bots still play a role today, but it’s not the only issue anymore. We’ve witnessed a societal change where real people are openly sharing fake news – it’s become a cultural phenomenon.”

What are the implications of misinformation and disinformation for the business world?  

According to Donna Kramer, co-founder of PR agency Aruga, online misinformation and disinformation can be deeply detrimental to organisations of all sizes.

“Examples that I’ve seen include unfair online reviews or someone taking comments made by a company spokesperson out of context,” she says.

“The greatest challenge posed by misinformation and fake news is the erosion of customer confidence in an organisation and its leaders. Loss of customer confidence can have many negative impacts, including reduced patronage and sales, increased negative word-of-mouth and a lack of trust in the organisation and the people who run the business.

“Of all the impacts, a lack of trust is the hardest to rebuild – it can take a significant amount of time and effort.”

The proof is in the pudding – or, more accurately, the research. Marten’s work shows that misinformation, disinformation and ‘online radicalisation’ pose significant challenges not only for society but also for businesses.

“Think about the 2021 ‘Reddit revolt’, when Reddit users came together to drive up the GameStop stock price and caused multibillion-dollar losses for short-selling hedge funds,” Marten says.

“This incident demonstrated the impact that users can have by exchanging ideas and radicalising online against an organisation they perceive as hostile.

Professor Nicole Gillespie

“Similarly, if an online mob goes after you because of misinformation or disinformation that others have spread about your organisation, you’re in trouble.”

Alarmingly, some organisations are also starting to use AI-driven disinformation tactics against competitors to tarnish their reputations or manipulate stock prices.

In 2020, the Financial Times reported that Vietnamese telecommunications company Viettel coordinated fake Facebook accounts and pages that “posed as customers critical of the company’s rivals, and spread fake news of alleged business failures and market exits.”

UQ Business School researcher and KPMG Chair in Organisational Trust Professor Nicole Gillespie says these unethical tactics stir up public concern and raise questions about the trustworthiness and regulation of AI systems.

“Our research shows that the responsible and ethical use of AI is critical for maintaining customer trust and organisational reputation,” Nicole says.

“We’ve seen governments and organisations across the globe suffer reputational damage due to trust failings in their use of AI and automated decision making. Typically, this happens because the outcomes were either biased, inaccurate, or the data used breached privacy or was used without consent.”

 

Integrate ethical AI practices into your organisation with a Master of Business Analytics

How can organisations combat online misinformation, build customer trust, and use AI and other technologies for good? 

1. Build customer trust in AI systems

If your business wants to see the benefits of AI, Nicole says you must proactively build trust in AI systems.

“Trust needs to be earned, and the best way to do this and protect your reputation is by ensuring your organisation’s AI-enabled services and products are designed, developed and used in a responsible and trustworthy way,” she says.

“One key practice that supports trust is being purpose-led: using AI to help solve important problems and challenges, with clear benefits to end users.

“People are more trusting of AI applications and more comfortable with the use of their data when it’s for a good cause and they can see some reciprocity and benefit to them or society more broadly. A great example is the use of AI to enable better diagnosis and treatment of disease and enhance healthcare.”

Organisations can also adopt assurance mechanisms that signal their use of AI is responsible and ethical, such as establishing AI ethical review boards and processes, adhering to national standards, and training employees in the ethical use of AI, Nicole says.

2. Address fake news directly

Donna believes that media and communication training is crucial for any organisation hoping to protect its reputation and offset the impact of online misinformation and fake news.

“Key spokespeople need to be trained in answering difficult questions and communicating the true information quickly and diligently,” she says.

“When we media train our clients, we practise the organisation’s key messages and values. The only thing we practise more are the answers to the questions they don’t want to be asked, as it’s smart to hope for the best but be prepared for the worst.”

Once their key spokespeople are media-ready, Donna says affected organisations can address misinformation or disinformation head on and counter it by supplying the correct information – provided it doesn’t further inflame the situation.

“It’s important that this happens swiftly and with authenticity by the key spokesperson of the organisation,” she says.

“This can be a labour-intensive approach as it requires all key audiences – and most importantly clients and customers – to receive the same information at the same time.”

3. Use fake news mechanisms to your advantage

While the technology used to spread misinformation and fake news poses a threat to all organisations, it can also be leveraged for good.

“We’re working on AI bots that can post fact-checking comments and counter-narratives on extremist or fringe content – so, using the extremists’ own tools against them,” Marten says.

“Businesses can also leverage the same mechanisms used for spreading fake news to their advantage.”

Donna agrees, noting that Aruga is often enlisted by organisations to counter negative online reviews left by disgruntled past employees.

“We’ve done this by running incentive campaigns to grow the number of authentic customer reviews – both glowing and constructive,” she says.

“An increase in online reviews paints a more authentic and well-rounded picture of the organisation’s customer experience.

“Simultaneously, we provide opportunities for customers to provide real-time feedback so they can talk directly with a representative of the organisation rather than post online.”

However, if your organisation operates in bad faith in the digital space, it can become an easy target for online, radicalised movements.

“The hedge funds targeted by the Reddit GameStop movement – they’re the kind of corporations that people already hate,” Marten warns.  

“You’ll bring the wrath upon yourself from these online movements if you behave unethically.”

Learn more about the complex trust, ethics and governance challenges currently facing industry, government, not-for-profits, and society with UQ Business School's Trust, Ethics and Governance Alliance (TEGA).

Dr Marten Risius and Professor Nicole Gillespie are trust experts at the UQ Business School Trust, Ethics and Governance Alliance (TEGA) research hub. Nicole is also the KPMG Chair in Organisational Trust and co-leader of TEGA. Her research focuses on trust development and repair in organisational contexts, and in contexts where trust is challenged. Marten is a lecturer in the Business Information Systems discipline, and his research interests are in the areas of social media and blockchain technologies.
