
Building AI through the lens of inclusion

Updated: Sep 1


Image: A group of diverse individuals in an office setting participating in a meeting, with a woman writing on a glass wall covered in sticky notes.


As more and more organisations look to integrate AI systems into their tech stacks, it is crucial to address the potential risks that come with them.


Rolling out responsible AI involves far more than buying off-the-shelf AI tools and then running the odd prompt-training class or two. While there are many upsides, integrating AI tools into an organisation's tech stack also carries risk.


Large Language Models (LLMs), among the most popular AI models, are not inherently inclusive in their design. Instead, they are trained to recognise patterns, and their outputs depend on the data used during training, which often includes content scraped from the internet: social media, as well as some questionable subreddits.


Therefore, to mitigate risks and foster responsible AI implementation, the following considerations are essential:


(1) DIVERSE EXPERTISE – Include diverse domain experts, particularly HR and DEI departments, throughout the entire AI lifecycle: from problem identification through deployment, all the way to impact assessment. This ensures that, from the get-go, AI is designed and built for all users.


(2) FRAMEWORK ADOPTION – Follow a step-by-step data development framework, such as the one proposed by Drs. Khan and Hanna in 'The Subject and Stages of AI Dataset Development' (2022). This framework builds human intervention into each stage and helps demystify the AI "black box". Following such a framework (which may be better suited to B2B small language models (SLMs)) helps maintain transparency and accountability during AI development, because outputs can be verified and certified at each stage.
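For tech teams, the idea of stage-gated data development with human sign-off can be sketched in a few lines. This is a minimal illustration only: the stage names below are generic placeholders, not quoted from Khan and Hanna's framework.

```python
# Illustrative sketch: a dataset pipeline that cannot advance to the
# next stage until a human approves the current one. Stage names are
# placeholders, not the actual stages from Khan & Hanna (2022).

STAGES = [
    "problem definition",
    "data collection",
    "annotation",
    "evaluation",
    "deployment",
]

def run_pipeline(approve) -> list[str]:
    """Advance stage by stage; stop at the first stage a human rejects."""
    completed = []
    for stage in STAGES:
        if not approve(stage):
            break  # human intervention: halt before problems propagate
        completed.append(stage)
    return completed
```

The point of the pattern is that a rejection at any stage stops the pipeline, so problematic data never silently flows into deployment.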


(3) AI LITERACY – Organisations must build or adopt 'lifelong learning' cultures rather than the 'one-and-done' staff and leadership training cultures that are often at play. With little or no AI literacy training, people risk falling behind, or even putting themselves and the organisation at risk, if they are unable to differentiate between what is and isn't AI. At its core, AI literacy is the set of skills that enables us to navigate and interact responsibly with AI: knowing when to use it and when not to, how to judge and correct its output, and how to adapt as it changes. It's about knowing what AI is, its capabilities, its limitations, and its potential impact on the organisation, as well as on society as a whole.


(4) ANNOTATOR WELL-BEING – Data annotators play a vital role in ensuring AI systems are safe for users. However, these workers, often based in the Global South, are poorly paid, work under tremendous pressure, and regularly have to review disturbing content; as a result, many now suffer from post-traumatic stress disorder (PTSD). The conditions under which AI annotators work have led to comparisons with sweatshops. This is why, before developing or purchasing AI tools, DEI/HR/L&D departments should use (leadership) training to raise awareness among AI project leaders, helping them make informed decisions when engaging with tech developers and/or third-party AI suppliers. It also equips inclusive leaders to ask those developers and suppliers the right and relevant questions: where the data is sourced from, who is impacted, and details about the annotators and the conditions under which they work. The data development framework allows human intervention at the stages where problematic outputs are encountered. Failure to do so could leave organisations in unfortunate sweatshop-type situations similar to those faced by fashion houses.


(5) HUMAN-IN-THE-LOOP – While AI systems continue to advance, most still require human oversight to operate responsibly and to mitigate risks. In addition, no third-party supplier knows or understands your culture or organisation the way that you do. Ethical and inclusive AI calls for human intervention, particularly from individuals who understand the importance of building AI with an inclusion lens to reflect diverse perspectives and priorities.
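In practice, human-in-the-loop often means routing uncertain AI outputs to a person before they reach users. Here is a hedged, minimal sketch of that routing pattern; the function names, threshold value, and queue are illustrative assumptions, not a real product API.

```python
# Minimal human-in-the-loop sketch (illustrative only; the threshold
# and names below are assumptions an organisation would set itself).

REVIEW_THRESHOLD = 0.85  # below this confidence, a person must check

review_queue = []  # items awaiting a human reviewer

def route_output(text: str, confidence: float) -> str:
    """Decide whether an AI output ships directly or goes to a reviewer."""
    if confidence >= REVIEW_THRESHOLD:
        return "auto-approve"
    return "human-review"

def handle(text: str, confidence: float) -> str:
    decision = route_output(text, confidence)
    if decision == "human-review":
        review_queue.append(text)  # a person sees it before users do
    return decision

handle("Routine meeting summary", 0.95)   # ships directly
handle("Sensitive HR recommendation", 0.60)  # held for human review
```

The threshold itself is a policy decision, which is exactly where diverse domain experts (point 1) should have a say.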


(6) GUARDRAILS – Have diverse domain experts work closely with your AI developers and procurement departments to build or purchase AI systems that, subject to some baseline constraints, align with your organisational values.


Building AI through the lens of inclusion brings diverse perspectives and expertise into the development, training, and application of LLMs and other AI models. This approach helps ensure we create AI systems that are equitable, unbiased, and serve the needs of users from all backgrounds.

 ****************************************

FYI – If you'd like to explore ways of making your AI systems more equitable and accountable, or are simply curious, do watch my short (<4 min) videos, created for non-technical leaders and with takeaways for tech teams too. Jump straight to the videos below:

1. Who we are & what we do – Watch

2. Data & Machine Learning Framework – Watch

3. AI’s role in advancing inclusion – Watch


Sign up below for my free LinkedIn newsletters (or read previous editions) and receive alerts as soon as new editions are published:



ABOUT ME

Helping leaders design, build, or purchase AI that works for everyone

MIRIAM MUKASA – Consultant specialising in 'Inclusive Leadership & AI', advising C-level executives, leaders, and those in succession on navigating the dynamic intersection of leadership, technology, and inclusivity. Learn more at: https://www.executiveglobalcoaching.com/

 
 
 



© 2025 ExecutiveGlobalCoaching.com
