3 July 2025 | Dr Jo Kandola PhD

Is AI Perpetuating Common Practice in DEI?

I’m a big fan of AI – when used correctly. Over the past year, I’ve experimented with it extensively: testing its capabilities, exploring how it might streamline content creation, and observing how it interprets diversity, equity, and inclusion (DEI). What I’ve discovered has been equal parts fascinating and concerning. Artificial Intelligence, at its core, is a pattern recogniser. It analyses vast amounts of data, finds trends, and reproduces what it sees. And therein lies the problem for DEI: AI doesn’t distinguish between what is common and what is good.
Common Practice ≠ Best Practice

DEI is a field driven by nuance, evidence, and evolving understanding. It’s not about simply doing what most people are doing – it’s about doing what’s right, what’s effective, and what’s ethical. But AI doesn’t make that distinction. It doesn’t analyse critical research or measure efficacy; it picks up on the most repeated phrases, the loudest headlines, and the most widely circulated practices. And unfortunately, in the DEI space, the most common practices aren’t always the most helpful.

Take, for example, unconscious bias (UB) training. AI will frequently tell you that UB training doesn’t work. Why? Because that’s what the headlines say. Because some organisations have rolled out poorly executed one-off sessions and seen no change. But dig deeper into the research and you’ll find that context matters – well-designed UB programmes that are sustained, nuanced, and paired with systemic interventions can make a difference. AI doesn’t dig that deep. It mirrors public perception, not peer-reviewed evidence.

When AI Writes Your Bio (That You Didn’t Write)

The issue isn’t just what AI leaves out. It’s also what it makes up.

When testing its summarisation abilities, I once asked an AI tool to describe my work around menopause and bias. What it came up with blew me away – articulate, insightful, persuasive. The only problem? I didn’t say even half the things it attributed to me. Worse still, when I asked it to provide references or links to support the content, it cited podcasts I’d never been on and articles I’d never written.

This isn’t just inconvenient – it’s dangerous. AI’s fabricated references may look convincing, but they erode trust, misinform readers, and risk positioning professionals as experts in things they haven’t touched. In a field as sensitive and impactful as DEI, this matters.

The Rise of AI-Generated DEI Content

Within organisations, many DEI professionals are now turning to AI to write workshop content – on topics like inclusive leadership (IL) or bias. And while it’s understandable (time pressures, resource constraints, and the demand to scale content), it’s also deeply problematic.

The result is often vanilla content: generalised, surface-level information that sounds right but lacks rigour. Content that ticks boxes but doesn’t challenge thinking. Content that sounds inclusive but excludes marginalised perspectives. Content that uses the right buzzwords but misses the science that should underpin it.

Let me be clear – being a DEI expert is not easy. It requires staying on top of the research, continuously reflecting on your own assumptions, engaging with lived experiences, and applying critical thinking every step of the way. I’ll admit, even I find it hard to keep up with the pace of academic publishing and new insights. But that’s the job. If you’re going to do it, you need to read, learn, and think. You need to apply judgement – not just generate text.

AI Can Support, but It Can’t Lead

This isn’t a defensive post. I’m not dismissing AI outright. It has its place.

AI can be a great tool for brainstorming ideas, summarising key points (with fact-checking!), reformatting content, or helping to overcome writer’s block. But it cannot replace thinking. It cannot understand nuance, context, or the lived experience of discrimination. It doesn’t grasp the systemic nature of inequality or how interventions land differently for different people. It can’t tell you what it feels like to be excluded – or how belonging really shows up in a team.

DEI is about real people, real emotions, and real organisational structures. It demands a depth of empathy and a level of insight that AI is simply not equipped for.

Why This Matters Now

In a world increasingly driven by speed and scale, there’s a temptation to use AI to do more, faster. And yes, it can be seductive to believe that AI can write your workshops, blog posts, or strategy decks for you. But if we allow AI to lead the charge on DEI, we risk replicating the very problems we’re trying to solve.

We risk centring the loudest voices, not the most marginalised. We risk pushing generic solutions when what’s needed is bespoke intervention. We risk mistaking polish for progress.

So, here’s my ask to those using AI in DEI work: use it responsibly. Treat it as a co-pilot, not a replacement. Question what it says. Challenge its outputs. Bring your own critical thinking to the table. Because DEI deserves more than copy-paste inclusion. It deserves care, thought, and genuine expertise.