
AI Backlash – Why Many Leaders Seem So Out of Touch


Alt-text: Two people sitting in front of a laptop, both looking concerned


Switch on Bloomberg, CNN or CNBC, or glance at the financial press, and you’d be forgiven for thinking AI isn’t just here – it’s everywhere. The prevailing narrative suggests that every organisation is embracing it, and that soon, it will be coming for your job.


CEOs are increasingly talking about “AI-first” organisations, often with little consideration for how this message is being interpreted (or received) by their employees. 

In addition, AI is largely marketed as a tool for efficiency and productivity, with promises that it will eliminate mundane tasks. So, hooray. Adopt AI and you’ll have more time for … what, exactly?


All this comes at a time when governments and organisations are spending hundreds of billions of US dollars on AI, while employees are being told budgets are tight and headcounts are being reduced.


Are people losing their jobs to make way for AI, whose total cost may end up higher than human labour while depreciating faster than a car driven out of a showroom?


Take data centres, for example. They’re often pitched as job creators, but the real question is, how many people are actually needed to operate one? 


The reality is that many of these data centres (frequently criticised for the constant hum that keeps locals awake, as well as their energy and water consumption) are underutilised. Yet further investment is forecast for 2026 and beyond to meet an “expected” flood of AI demand which, for now, remains elusive.


The unanswered question remains: what’s in it for workers? 


Beyond job losses, environmental strain and deepening societal inequalities, the benefits of AI remain unclear to many. These conversations are often avoided or glossed over, which is precisely why trust in AI remains low.


AI Has Capability – But Where Are the Relevant Applications?


AI is undoubtedly an exciting and dynamic technology. From fusing science and engineering, to connecting modalities, to training neural networks inspired by biological neurons, to analysing MRIs, AI could be a force for good in the wider world, not just at work.


Yet in its current form, AI is far from the holy grail it's often portrayed to be. Its reliance on narrow, generalised datasets limits its effectiveness, particularly for products and services designed for diverse local and global audiences and consumers.


Furthermore, many of the latest AI advancements are hardly a priority for people navigating real-world personal and professional challenges in 2025.


Despite its potential, many AI applications lack relevance. 


This has led to growing resistance, with employees reluctant to engage. It doesn’t help that many leaders investing in AI rarely take time to understand existing workflows, processes or the workarounds their teams rely on, or struggle with, on a daily basis.


People are already exhausted trying to make current software work, and often the only reason payroll deadlines are met is that Sally manually enters a postcode the system refuses to accept. Now she’s expected to adopt and train a new AI tool which, she’s told, could replace her in less than two years?

For many leaders, the logic remains: “If our competitor is doing it, then we should too.” 

And so the cycle continues - funding AI tools while neglecting to explain and reassure staff about their future or, at the very least, adequately prepare them for this “AI-first” future.


Expecting employees to adopt AI without answering their “What’s in it for me?” is not only short-sighted but risks being interpreted as out of touch and insensitive.


Marketing or pitching your organisation as “AI-first” and then wondering why employees are wary is like wondering why turkeys won’t vote for Christmas.


The Disconnect Is Growing


Many leaders appear unaware of the mindset shift that has taken place over the last five years. 


AI is being launched into a world grappling with economic, geopolitical, health and societal crises. An AI-powered Word document that saves ten minutes a week is hardly top of mind for most employees - even if, yes, time saved compounds.

Imagine car salespeople pitching vehicles as a way to get to work faster so you can spend more time in one-on-ones with Andrew in accounts. And then using data from these meetings to train Ziggy, the new AI tool. 


And yes, this sounds absurd - though it’s not far from how AI is currently being marketed.


The AI Pushback Is Real


Is it any wonder, then, that as AI continues to proliferate, so does the pushback? Many view AI as a tool that exacerbates existing income and societal inequalities, benefiting those who are already wealthy.


Selling organisations as “AI-first” may draw praise in some circles, but for many, it signals replacement rather than opportunity. Duolingo learned this lesson earlier this year.

Interestingly, many people do in fact use AI tools of their own volition – just not at work. 

It’s also true that resistance to new technology isn’t new, but AI brings additional concerns: hallucination, bias, IP violations, the wellbeing of "invisible" workers in the Global South, digital surveillance, privacy breaches and more. Unlike traditional software, AI has the ability to permeate entire organisations.


Concerns Beyond the Workplace


In addition to work-related concerns, broader societal issues remain.


Narrow datasets and algorithmic bias have already led to flawed AI outputs, including in healthcare, where misdiagnoses can, and do, disproportionately affect people of colour due to under-representation in training data.


Environmental concerns also loom large, with the carbon footprint of training large AI models raising serious questions.


Studies show that few people believe AI will benefit them personally – and few leaders have taken the time to explain how it might.


The focus remains on organisational gain, not individual value. This may explain why employees report few tangible benefits from AI tools. Perhaps it’s also because they’re rarely involved in the design, development and/or deployment process, leading to friction downstream when AI meets workplace reality.


Handling AI Conversations Better


Leaders must approach AI discussions with openness and accessibility. Just as they make themselves available to the printed press and digital media, CEOs should also make themselves available to their staff, whether through town-hall-style meetings or in smaller groups such as ERGs and taskforces.


That said, there may be situations in which the executive bench is not best placed to “sell” AI tools.


In these cases, organisations need internal AI advocates - “influencers”, so to speak - trusted colleagues who can speak credibly, peer to peer.


Just as millions prefer to learn Python or cooking from their favourite YouTuber despite having access to experts at work or school, employees may be more receptive to those they trust, within their own ranks.


Leaders must also recognise that for many, work is more than just a pay cheque every month - it’s also about identity and purpose. 


This is why empathy and patience are essential, and why leaders must be willing to answer uncomfortable questions. CHROs can play a vital role in facilitating these conversations.


The Way Forward


AI is here to stay. It’s already integrated into many of the systems and tools we use daily, both at work and at home, and it was, in fact, here long before OpenAI launched ChatGPT.


What may change is how it shows up.


Some research labs, such as DeepSeek - until recently a relatively unknown Chinese AI lab - are exploring innovative algorithmic strategies that don’t rely solely on massive hardware investments, though they still depend on foundation models for now.


Successful AI deployment requires investment not only in technology but, more importantly, in people.


Despite AI’s immense potential in healthcare, education, agriculture, climate science and more, the public narrative remains dominated by gimmicks - AI-generated cartoons of pink elephants cycling, tech bros chasing one another in nine-second video clips. The novelty soon wears off, and attention drifts from the serious work AI could be doing.

This disconnect also reinforces the perception that those developing and promoting AI are often out of touch with everyday people. 

This is why the AI ecosystem urgently needs diversity - not just of identity but of mindset too.

It's the only way to build products and services that are relevant to the huge customer and user base out there and, more importantly, to build trust.


Building Trust in AI – Some Key Steps


  • Introduce AI as a tool to enhance human capability - not eliminate jobs

  • Integrate AI Literacy into ongoing development programmes, such as workplace culture and leadership training, rather than offering it as a one-off standalone technical course. Treating AI as just another workplace tool may reduce apprehension and suspicion

  • Demand transparency and explainability from AI developers - opaque “black boxes” don’t inspire confidence

  • Involve employees from design through to deployment, especially those most impacted, both positively and negatively

  • Ensure your data represents everyone - trustworthy AI starts with good, diverse data

  • Include AI Literacy training in onboarding and professional development, focusing on opportunities as well as risks and limitations

  • Avoid premature hype. Perhaps one big mistake many AI developers made (and continue to make) is publicising systems before they’re ready for prime time. The ‘fail fast, learn faster’ approach may work for consumer tech, but not for systems that claim to replace or diagnose humans


Diversity in the AI ecosystem is no longer a “nice to have”.

It's both a commercial and an ethical imperative - the only route to building trust, adoption and, ultimately, ROI.


If you’d like to learn more about embedding AI Literacy and Responsible AI in your organisation, please feel free to contact me on LinkedIn or visit ExecutiveGlobalCoaching.com to find out how we work.


Subscribe below to receive an alert as soon as I publish new editions of my LinkedIn newsletters, or to read previous editions:

  1. Responsible AI (this one) - Putting People and Culture at the Heart of AI Strategy

  2. Leading with Emotional Intelligence (EQ)

  3. Inclusive Leadership in the era of AI

 
 
 



© 2025 ExecutiveGlobalCoaching.com
