Lack of trust holding back AI innovation: Study

Australian partners say transparency, guardrails and tangible business value will help improve trust in AI systems.


Organisations stand to benefit from AI, yet a lack of trust is holding back adoption. In particular, Australians are less trusting of and less positive about AI than people in most other countries, according to a global KPMG study.

Some 78 percent of Australians are concerned about a range of negative outcomes such as inaccuracy, misinformation and manipulation, deskilling and loss of privacy, the survey found.

Scott McGowan, CEO of Cosol, a global provider of end-to-end asset management solutions, said, “The statistics reflect what many Australians are feeling, particularly among staff, due to a perceived impact on traditional work that’s causing uncertainty.”

Fostering trust is central to improving acceptance and adoption of AI, but businesses must tackle transparency issues, demonstrate business value and manage risk to gain buy-in.

“Trust is earned when AI systems are transparent, ethical and consistently deliver value,” McGowan said.

To improve transparency, businesses need to encourage stakeholders to be involved in AI development processes and address any concerns with open communication.

McGowan suggests organisations embrace AI literacy training during onboarding, establish AI champions to train staff and develop internal roadmaps to help improve levels of trust.

“It’s all about how businesses can deliver true value — and that begins with leading by example and showcasing ethical AI implementations,” he said.

Fred Thiele, CISO at Interactive, said explaining how data is used, for what purpose, and where training data comes from should be central to efforts to bolster trust.

This includes generative AI tools providing good references for every answer and being clear about the reasoning they used to get there.

“Seeing that 'thought process' helps build trust. It’s also important that these systems can take feedback into account — for example, if a user says, ‘I would have thought differently about this part,’ that should inform future responses,” he said.

Likewise, security and privacy are integral to improving trust and AI adoption.

“As Australia tightens its laws on data privacy, enterprises need to know if there is any data leakage within the company. This also extends to individual use of AI, which would need to go through a security review,” Thiele added.

Strengthening governance policies will help identify the relative risks of different AI implementations.

“For example, implementing a tool like Microsoft Copilot may present low safety risk but requires consideration of data privacy and security. On the other hand, replacing a human-monitored process with machine learning decision-making carries a different set of risks that need thoughtful evaluation,” said McGowan.

For its part, the government has a role to play in helping improve trust in AI through regulations around data use and storage, for example, although some in the industry are wary of the heavy hand of government stifling innovation.

“We need to be careful not to over-regulate, as it stifles innovation and makes countries or states difficult to do business with. That said, good regulation can enable innovation, with a set of guardrails to help address concerns of people using GenAI,” Thiele said.

CEOs remain unconvinced about C-level expertise

The adoption of AI also faces a trust issue at the leadership level. CEOs believe AI technology will define the next era of business, but doubt their technology experts have the knowledge and capabilities to drive business outcomes, according to a recent Gartner survey.

More than three-quarters of CEOs surveyed view AI as a transformative way of working, yet they lack faith in their technologists, including CIOs, CISOs and CDOs, to hire skilled people in the numbers required or to calculate business value and outcomes.

To measure value, Thiele recommended identifying common use cases and extrapolating from metrics such as average time saved per use. Multiplying that by how often it’s used per day provides data to help calculate tangible benefits.

“You can also track before-and-after stats on substantial use cases. For example, before an AI RFP tool, average RFP completion time was X hours and now it’s Z hours — that’s tangible value you can point to,” Thiele said.
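Thiele's two approaches can be sketched as simple arithmetic. This is a minimal illustration, not a prescribed method; the function names and the sample figures (minutes saved, uses per day, team size, RFP hours) are hypothetical, chosen only to show the shape of the calculation.

```python
def daily_hours_saved(minutes_saved_per_use: float, uses_per_day: int, users: int) -> float:
    """Estimated hours saved per day across a team for one common use case:
    average time saved per use, multiplied by daily frequency and headcount."""
    return minutes_saved_per_use * uses_per_day * users / 60


def before_after_saving(before_hours: float, after_hours: float, runs_per_month: int) -> float:
    """Hours saved per month when a substantial task (e.g. an RFP response)
    drops from before_hours to after_hours per run."""
    return (before_hours - after_hours) * runs_per_month


# Illustrative numbers only: 20 staff each saving 6 minutes, 5 times a day.
team_saving = daily_hours_saved(minutes_saved_per_use=6, uses_per_day=5, users=20)
print(team_saving)  # 10.0 hours per day

# Illustrative before/after: RFPs drop from 12 hours to 4, three per month.
rfp_saving = before_after_saving(before_hours=12, after_hours=4, runs_per_month=3)
print(rfp_saving)  # 24 hours per month
```

Multiplying the result by a loaded hourly rate converts the time saved into a dollar figure leadership can weigh against the tool's cost.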

To make the most of the opportunities AI will bring, businesses must focus on upskilling and improving AI savviness across all mission-critical activities as a priority, Gartner suggests.

“One effective strategy is to have a 'champion' or 'guide' group embedded within each business unit; they can become evangelists for AI and help identify specific, valuable use cases within their teams,” Thiele told CRN.

Overall, AI adoption requires a culture of trust, innovation and continuous improvement.

“To remain competitive, organisations will need to demonstrate their responsible AI policies and show investment in educating staff to better understand AI and capitalise on the productivity gains that the technology promises,” Thiele concluded.
