
Limitations of AI - What AI Cannot Do

Image created in Canva / Alt-text: A wireframe digital head with overlaid text about AI limitations.

AI is everywhere, from boardroom strategy to operational efficiency. At the moment, AI seems to be flavour of the month (every month), despite scepticism and lack of trust among many users.


It’s also true that many C-suite leaders are under pressure to “do something with AI.” This often leads organisations to leap into AI implementation without fully understanding what they’re signing up for, only to surface months later admitting there’s no measurable ROI. There are several reasons for this, which I’ll explore in future posts.


As someone who has high hopes for AI, I remain cautious and wary of the current hype, particularly when the media, and those who know better, rarely push back.


While many AI developers and vendors promise transformation, few leaders challenge these claims. Too often, they’re taken at face value, even though the developers themselves often cannot explain why their systems produce the outputs they do.


We’re often told the sky’s the limit when it comes to AI, but is this really true?


Today I’ll share a few reasons why this is not yet the case. 


AI’s limitations, which I’ve captured below, are not just technical; they carry ethical, cultural and reputational implications. In other words, organisations that rely on AI as the first customer or candidate touchpoint risk unintentionally excluding potential clients and talent, while exposing themselves to legal, financial, and reputational risk.


What AI Can Do


AI has transformative potential, serving as a versatile tool that can automate repetitive tasks and augment human intelligence.


When combined with AR, it becomes a powerful learning and training tool in fields such as medicine, education, health and safety, aviation, and inclusion. When trained responsibly and tasked with addressing real-world challenges such as disease diagnosis or climate science, AI can indeed be a game changer.


However, in a world increasingly shaped by algorithms, it’s our human judgement, Emotional Intelligence (EQ), and Cultural Intelligence (CQ) that truly set us apart.


What AI Cannot Do

Despite some impressive achievements so far, AI’s capability is limited in several critical ways, as outlined below (this list is not exhaustive):


  1. Probability, not Inclusion - AI is a statistical inference machine, built for probability, not inclusion

  2. It cannot understand context – All AI can do is analyse patterns; it does not comprehend meaning

  3. It has zero Emotional Intelligence (EQ) - AI doesn’t empathise, nor does it care about your feelings. While many humans lack EQ too, AI’s scalability makes this particularly risky, especially if/when used in front-facing roles

  4. It lacks Cultural Intelligence (CQ) – Just as it cannot empathise, AI cannot differentiate between cultural norms or idiosyncrasies. Once again, while many humans also lack CQ skills, AI’s ability to scale amplifies the risk, especially when its decisions affect people, not just processes

  5. Limited data diversity – AI often relies on small or generalised datasets, which restricts its capabilities and effectiveness, particularly for products and services designed for diverse, local and global audiences and consumers

  6. Lack of Explainability – Even AI developers themselves struggle to explain why their systems generate certain outputs. Imagine being a recruiting manager challenged to justify why Candidate A was rejected. Could you defend your AI’s decision?

  7. Opaque Systems - Transparency requires you to be able to answer basic questions such as: Who built the system? What data was used? Can users understand why they were rejected or even profiled? Transparency is an important stepping stone to building trust. Opaque “black boxes” don’t inspire confidence

  8. Lack of Accountability – When AI fails or causes harm, few are held accountable. Until someone is, these models will continue to operate unchecked

  9. Poor Transferability - AI trained in one domain often performs poorly in another without retraining

  10. Algorithmic bias – Developers’ backgrounds, assumptions, values and blind spots shape what AI prioritises. Guardrails alone cannot fully address these biases. Emotionally intelligent design, oversight and intervention remain essential

  11. Low Trust – Despite all the hype and media headlines, public trust in AI remains low. This limits its learning potential, as AI systems depend on widespread, responsible use to improve

  12. Model Drift – Unbeknown to many users, AI models degrade over time as real-world and organisational data diverge from the training data. This is why continuous monitoring is essential to keep systems relevant

  13. High Energy Consumption - Training large models consumes vast energy, raising sustainability concerns. Some argue this alone could become a major constraint on AI’s future growth.

  14. Weak Leadership - Until leaders start to question, challenge and demand accountability, transparency and human-centred design, AI's limitations will persist. Strong, confident and self-aware leaders acknowledge their own blind spots around equity and inclusion. By building on their Emotional and Cultural Intelligence, they gain the foresight to require that AI developers and suppliers deliver transparency, explainability and fairness. They also understand the importance of testing AI systems with their own data and involving diverse teams - not just in identity, but in thought
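To make the model drift point (item 12) concrete: the idea is simply that the data a model sees in production slowly stops resembling the data it was trained on, and you only notice if you measure it. Below is a minimal, hypothetical sketch in Python of one common monitoring approach, comparing a live feature’s average against its training-time baseline; the numbers, threshold and function names are illustrative assumptions, not any particular vendor’s method.

```python
import statistics

def drift_score(train_values, live_values):
    """How many training-time standard deviations the live mean
    has shifted away from the training mean (0.0 = no shift)."""
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    if sigma == 0:
        return 0.0
    return abs(statistics.mean(live_values) - mu) / sigma

# Illustrative data: a feature that averaged ~0.50 during training
# now averages ~0.61 in production, so the world has moved.
train = [0.48, 0.50, 0.52, 0.49, 0.51, 0.50]
live = [0.60, 0.63, 0.58, 0.62, 0.61, 0.64]

ALERT_THRESHOLD = 2.0  # assumption: flag shifts beyond 2 standard deviations
score = drift_score(train, live)
if score > ALERT_THRESHOLD:
    print(f"Drift detected (score={score:.1f}): review or retrain the model")
```

Run continuously against live data, a check like this turns “the model has quietly gone stale” into an alert a team can act on, which is exactly the kind of ongoing oversight leaders should be asking their AI suppliers about.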


Understanding AI’s limitations helps mitigate risk, manage expectations and remind us that, as imperfect as we are, humans who lead with Emotional and Cultural Intelligence will have the upper hand when it comes to navigating AI, because they’re the ones who will be asking the right questions.


As with many things in life, success won’t come from those who adopt AI first, but from those who prepare best.


If your organisation is exploring how to embed Responsible and Inclusive AI, and you’d like guidance on where to start, what to prioritise and how to prepare your people (including leadership), feel free to reach out to me here on LinkedIn. You can also visit ExecutiveGlobalCoaching.com to learn more about what we do.


Subscribe below to receive an alert as soon as I publish new editions of my LinkedIn newsletters or, to read previous editions:


  1. Responsible AI - (this one) Putting People and Culture at the heart of AI Strategy

  2. Inclusive Leadership in the era of AI

  3. Leading with Emotional Intelligence (EQ)





© 2025 ExecutiveGlobalCoaching.com
