Without trust, AI is just technology; with trust, it becomes Transformation
- Miriam Mukasa - Inclusive Leadership & AI

- Nov 19

For senior leaders, the challenge is clear: trust is the missing ingredient in AI adoption. The organisations that will thrive are those embedding Transparency, Explainability, and Accountability into their AI strategies.
A Lesson in Trust from the Park
A few weeks ago, while walking with a friend in a beautiful park and enjoying the last rays of summer, a dog trotted over to greet us. My heart instantly melted. My friend Susan*, equally fond of dogs, joined me in fussing over him (let’s call him Rex), and soon we were both speaking fluent "dog".
It was the perfect end to a wonderful day with someone whose trust I value deeply, despite our relatively short history. Perhaps that trust was born of the fact that we were introduced by a mutual friend we both adore.
After a few minutes, Rex left us in search of a new canine companion. As he bounded towards another dog, the owner abruptly turned and shooed him away. Susan was taken aback, having wrongly assumed that this owner would welcome Rex as warmly as she had.
Connecting the Dots to AI
So, you may be wondering what this dog story has to do with AI. Bear with me as I bring this analogy home.
I explained to Susan that while the dog owner seemed unfriendly, perhaps she was simply being cautious. Her dog was much smaller, and some owners are understandably protective. Susan had grown up surrounded by animals; this was her lived experience. And therein lies the lesson.
Having raised a large(ish) dog myself, I know that trust is not always granted immediately. It often has to be earned, usually through conversation, observation and reassurance.
And so it is with AI.
Why Trust in AI is Low
Generative AI frequently produces hallucinations, reflects data and algorithmic bias, and is often marketed in ways that heighten anxiety.
Too often, the narrative is that AI will not only take your job, but that you’ll probably be the one to train it. Social media is awash with “AI First” messaging, celebrating the reduction or redundancy of human roles within a year or two. Is it any wonder then that AI is viewed with trepidation and adoption numbers remain stubbornly low?
A 2023 KPMG study found that 97% of respondents endorse the principles of trustworthy AI, while 71% expect external, independent oversight.
The message is clear: trust, or the lack of it, is a huge factor holding AI adoption back.
The Case for Real‑World Impact
The case for AI is also being argued poorly, and those who put themselves at the frontline of the AI story are often not trusted.
AI has the potential to transform healthcare, education, climate science and more. Yet too many developers focus on hype over substance, preaching mostly to “similar-to-me” audiences. If they cannot read the room, is it any wonder their messages fail to land with the wider public, most of whom are focused on real-world problems and have little interest in tech evangelism?
Just as my friend Susan misjudged the dog owner’s reaction (she grew up surrounded by animals and was a confident animal handler), many AI developers fail to appreciate that not everyone takes to technology as they do. This disconnect may explain why many AI advocates appear out of touch with reality on the ground.
Economics and Adoption Barriers
Unlike SaaS, the unit economics of AI mean its ecosystems cannot survive solely on “similar to me” or niche demographics. The vast sums invested, particularly as hyperscalers purchase and hoard chips, mean AI requires global adoption to be viable. Even then, there will be many casualties along the way, including companies building GPT wrappers.
Unlike mobile phones, where the benefits were immediately visible (despite their initial 'Hooray Henry' reputation), few people have seen AI solve a clearly defined user problem. Instead, they see US$ billions poured into technology seemingly designed to replace them.
AI promises much, yet it often feels like a toddler: you constantly have to monitor it, even when it behaves, because you’re never quite sure what it will do next.
Globally, trust in AI is low, with the technology posing risks and challenges that many people and organisations are unwilling to bear.
Distrust in AI is linked to adoption barriers. If people don’t trust AI, they will not accept it in the workplace or wider society.
Building Trust in AI
AI developers must devote more energy and focus to building trust. The current “trust me bro” attitude isn’t cutting through. Furthermore, no one is interested in techbro vs techbro cage fights.
For adoption rates to increase, users must be confident that AI is being developed and deployed responsibly, with Transparency, Explainability and Accountability at the core of every AI initiative.
Real‑World Applications
AI has enormous capability; people now need to see and hear about more real-world applications, such as:
- AI analysing and interpreting MRIs
- AI personalising medicines
- AI optimising agricultural resources such as water and fertiliser
- AI assessing machinery safety in the context of health & safety
- AI enhancing food quality through supply chain traceability
Inclusion and Recognition
Furthermore, including people from diverse backgrounds and experiences will ensure AI is seen and heard by wider audiences.
The Global South (often overlooked in technological revolutions) plays a key role in the development of AI through data labelling and annotation. While rarely acknowledged, these so-called "invisible workers" contribute significantly to making AI safe for users. Ignoring the Global South will do little to build trust. This is not just a moral imperative but a business one too. After all, users and companies in the Global South were among the leading pioneers of mobile money, years ahead of Apple Pay and Google Pay.
The Way Forward
The way forward is clear: AI marketing needs to focus on solutions to real-world problems. Only then will an anxious public begin to feel confident about the technology.
*Name changed for privacy.
If you’d like to learn more about our bespoke AI Trust Frameworks or Leadership Training Programmes, or would like guidance on where to start, please contact me here on LinkedIn. You can also visit ExecutiveGlobalCoaching.com to learn more about what we do.
Subscribe below to receive an alert as soon as I publish new editions of my LinkedIn newsletters, or to read previous editions:
Responsible AI (this one) - Putting People and Culture at the Heart of AI Strategy
#trustworthyAI #ResponsibleAi #humancentredai #ethicalai #futureofwork #emotionalintelligence #culturalintelligence #inclusiveleadership #diversityinai #inclusiveai