Why does trustworthy AI matter?
AI Systems Are Both Social and Technical
It's tempting to think of AI as a purely technical phenomenon — algorithms crunching data to produce outputs. But that framing misses the point entirely.
AI systems are fundamentally social too. Humans write, curate, and label the training data. Humans fine-tune the models. Humans write the prompts. And humans decide what to do with the outputs. At every stage, human judgment, values, and decisions shape what the technology produces and how it affects the world.
Adoption Is Stalling Because of a Trust Gap
Despite the hype, AI adoption remains surprisingly low, and recent data suggests it may even be declining among large enterprises.
The U.S. Census Bureau's Business Trends and Outlook Survey shows that only about 9–10% of U.S. businesses currently use AI. Among large companies (250+ employees), adoption dropped from roughly 13.5% in June 2025 to around 12% by late August 2025, the first consistent decline since tracking began.
Meanwhile, MIT's NANDA initiative found that 95% of enterprise AI pilots fail to deliver measurable returns. The culprit isn't the technology itself; it's an execution gap driven by brittle implementations, inflated expectations, and a fundamental lack of trust in AI outputs.
A Melbourne Business School global study of over 48,000 people across 47 countries confirms the pattern: 66% of people now use AI with some regularity, yet fewer than half (46%) are willing to trust it. People have become less trusting and more worried about AI as adoption has increased.
The conclusion is stark: without trust, adoption hits a ceiling.
The Characteristics of a Trustworthy AI System
So what makes an AI system trustworthy? The National Institute of Standards and Technology (NIST) has identified seven essential characteristics:
Valid and Reliable — The system performs consistently across conditions and delivers dependable outputs.
Safe — It minimises risks and avoids causing unintended harm to users and communities.
Secure and Resilient — It withstands adversarial attacks, maintains data integrity, and recovers from disruptions.
Accountable and Transparent — Clear mechanisms exist for taking responsibility for AI decisions and communicating how the system works.
Explainable and Interpretable — Users can understand how decisions are made and why specific outputs are produced.
Privacy-Enhanced — The system respects human autonomy and protects personal data through anonymity, consent, and control.
Fair with Harmful Bias Managed — The system actively identifies and mitigates discriminatory outcomes.
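To make these characteristics actionable, here is a minimal sketch of how a team might encode them as a pre-deployment review checklist. The seven characteristic names come from NIST; everything else (the `TrustworthinessReview` class, the all-or-nothing `ready_to_deploy` gate, and the example system name) is an illustrative assumption, not part of any NIST tooling.

```python
from dataclasses import dataclass, field

# The seven trustworthiness characteristics, named as in the NIST list above.
# The review class and pass/fail logic below are a hypothetical sketch,
# not official NIST tooling.
NIST_CHARACTERISTICS = [
    "Valid and Reliable",
    "Safe",
    "Secure and Resilient",
    "Accountable and Transparent",
    "Explainable and Interpretable",
    "Privacy-Enhanced",
    "Fair with Harmful Bias Managed",
]

@dataclass
class TrustworthinessReview:
    """Records a yes/no assessment for each characteristic of one system."""
    system_name: str
    results: dict[str, bool] = field(default_factory=dict)

    def record(self, characteristic: str, satisfied: bool) -> None:
        if characteristic not in NIST_CHARACTERISTICS:
            raise ValueError(f"Unknown characteristic: {characteristic}")
        self.results[characteristic] = satisfied

    def gaps(self) -> list[str]:
        """Characteristics not yet assessed, or assessed as unsatisfied."""
        return [c for c in NIST_CHARACTERISTICS if not self.results.get(c, False)]

    def ready_to_deploy(self) -> bool:
        # Treat every characteristic as table stakes: all must be satisfied.
        return not self.gaps()

# Example: a review that blocks deployment until harmful bias is managed.
review = TrustworthinessReview("loan-approval-model")  # hypothetical system
for characteristic in NIST_CHARACTERISTICS:
    review.record(characteristic, satisfied=True)
review.record("Fair with Harmful Bias Managed", satisfied=False)

print(review.ready_to_deploy())  # False
print(review.gaps())             # ['Fair with Harmful Bias Managed']
```

The all-must-pass gate is a deliberate design choice that mirrors the argument below: each characteristic is a baseline expectation, so a single unmanaged gap is enough to withhold trust.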
Would You Hire a Human Without These Characteristics?
Here's a useful thought experiment: imagine you're hiring for a critical role at your organisation. Would you hire someone who:
Couldn't be relied upon to perform consistently?
Posed safety risks to your team or customers?
Was vulnerable to manipulation and couldn't recover from setbacks?
Refused to explain their decisions or take responsibility for mistakes?
Couldn't articulate their reasoning in a way you could understand?
Showed no respect for privacy or confidentiality?
Demonstrated persistent bias against certain groups?
Of course not. These are baseline expectations for any trusted colleague, advisor, or employee.
Why should we accept anything less from AI systems that increasingly make or influence consequential decisions in healthcare, finance, hiring, criminal justice, and beyond?
Trustworthy AI isn't a nice-to-have. It's table stakes for AI that actually gets adopted.