If you’ve spent any time online lately, you’ve likely felt the buzz around AI image generation. Generative AI is transforming how we create, communicate, and imagine – and it’s growing fast. ChatGPT alone now counts more than 700 million monthly active users, while tools like Gemini are being used for everything from viral TikTok videos to professional content creation.
For those working in or with a creative studio, this shift is seismic. Image-generating tools built on large language models (LLMs) are becoming increasingly lifelike, reshaping the visual stories that define our world – from advertising campaigns and brand identities to graphic design and entertainment.
But as AI becomes a new lens through which we see the world, a crucial question emerges: who gets seen?
To explore this, Berlew’s creative studio asked AI to generate images of people in everyday UK roles – from teachers and nurses to CEOs, construction workers, and graphic designers – and then analysed the features of those who appeared.
We compared those results with real UK workforce demographics to understand just how accurately (or inaccurately) AI represents modern Britain, and what that means for the future of creativity and representation.
Why does representation in AI imagery matter?
AI doesn’t just reflect the world – it’s now in a position to help shape it. The images created by generative AI are already influencing how we see different jobs, genders, and ethnicities across marketing, media, and everyday storytelling. When these systems replicate or amplify bias, they reinforce narrow ideas of who belongs where – who looks like a leader, who looks like a carer, and ultimately, who gets to be seen.
At Berlew, the Nottingham design agency, our digital design services exist to tell stories through visuals – shaping brand identities and helping people connect. Generative AI is now part of that toolkit for every brand, offering new ways to imagine, illustrate, and ideate. But with that creative power comes responsibility.
If the imagery we produce – even unintentionally – reinforces outdated stereotypes, we’re not just designing visuals; we’re designing narratives. For brands that want to be authentic, understanding how AI “sees” the world is essential to building the one we want to live in.
We conducted this study to understand the role of design in shaping culture. By analysing how AI depicts everyday British roles, we’re exploring both the biases embedded in technology and the social stories being automated – often invisibly – into our visual world.
While we only requested five images per profession – and it’s true that running dozens or even hundreds of prompts could yield a more balanced set – it’s arguably the first few results that matter most. In reality, those are the images people are most likely to see, share, and subconsciously absorb.
So, we asked one LLM to create five lifelike images representing people in each of the occupations below, as they might appear in 2025. Here’s what we found…
Industry analysis
Doctors – What AI gets wrong about the UK’s medical workforce

In reviewing five AI-generated images of UK doctors, we observed a skewed representation: three of the images featured men and two featured women; two depicted individuals from Black, Asian or minority ethnic (BAME) backgrounds, while three were white.
Yet according to the latest General Medical Council (GMC) data, women now account for 50.04% of doctors in the UK (164,440 women vs 164,195 men), and just over half of the workforce identifies as BAME. This mismatch between AI-generated imagery and real-world demographics highlights how the technology can fail to reflect the actual diversity of the profession.
With only five images to represent the split, showing three men and two women does not mirror the real-world proportions: rounding a 50.04% female share across five images gives three women and two men, and a just-over-half BAME share gives three BAME doctors rather than two.
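To make that rounding explicit, here is a minimal sketch in Python – the helper is our own illustration of the arithmetic, not part of any image tool:

```python
# Minimal sketch: nearest-integer expectation for a five-image sample.

def expected_count(share: float, sample_size: int = 5) -> int:
    """Round a workforce share to a whole number of images."""
    return round(share * sample_size)

# GMC figures cited above: 50.04% of UK doctors are women,
# and just over half identify as BAME.
print(expected_count(0.5004))  # -> 3 of 5 images would show women
print(expected_count(0.51))    # -> 3 of 5 images would show BAME doctors
```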
When seen through that lens, the AI’s output noticeably underrepresents women and people of colour. The imbalance suggests that generative models may still be drawing on outdated or biased visual data – portraying doctors through a historical or stereotypical lens rather than mirroring today’s workforce – distortions that can then feed into our assumptions and wider conversations.
Graphic designers – AI’s creative bias

In analysing five AI-generated images representing UK graphic designers, a different – but equally revealing – bias emerged. All five depictions were of white individuals: three men and two women, despite the fact that women make up around 60% of graphic designers.
According to Design Council research, 13% of designers identify as being from Black, Asian, or minority ethnic (BAME) backgrounds – a figure that, while still below national averages, represents meaningful diversity within the field.
Yet the AI-generated imagery not only skews male – depicting women less frequently than the industry data reflects – but also includes no one from a BAME background. The resulting visuals reproduce an outdated stereotype of the creative industry as predominantly white and male, overlooking the progress toward gender balance and inclusivity that the sector has made.
This kind of visual bias matters deeply in creative professions. Designers are the storytellers and image-makers who shape how the world looks – and when AI portrays that community through such a narrow lens, it inadvertently reinforces the very inequities that design is often trying to challenge.
Teachers – Who’s shaping the next generation?

In our analysis of AI-generated imagery depicting UK teachers, the results again revealed a subtle but significant distortion. Across five generated images, three portrayed men and two portrayed women, with two individuals appearing to be from Black, Asian or minority ethnic (BAME) backgrounds.
When set against official data, this visual profile diverges sharply from reality. According to the Department for Education’s 2024/25 workforce statistics, the teaching profession in the UK remains predominantly female, with 76% of teachers identifying as women – a proportion that has remained consistent over time.
In contrast, the AI-generated images suggested near gender parity, substantially underrepresenting women in a field where they make up more than three-quarters of the workforce.
When AI tools reimagine teachers through a lens that underplays women’s dominance in the profession or flattens the diversity of ethnicity, it can subtly rewrite the story of who leads and inspires the next generation. The result isn’t just an inaccurate visual; it’s a cultural misrepresentation of one of society’s most essential roles.
Nurses – Britain’s diverse population, but does AI show it?

In our test of AI-generated imagery for UK nurses, the results painted an overly uniform picture. All five generated images depicted white women, with no visible ethnic diversity and no male representation at all.
At first glance, this output aligns partially with reality – nursing in the UK is indeed a female-dominated profession, with around 89% of registered nurses identifying as women and 11% as men, according to data from the Nursing and Midwifery Council (NMC). However, the AI imagery fell short in one crucial area: ethnicity.
In England, approximately 34% of nurses identify as being from Black, Asian, or minority ethnic (BAME) backgrounds, making healthcare one of the most diverse workforces in the UK. Yet in our test, every generated image portrayed white women, completely erasing this diversity.
Nurses are the backbone of the healthcare system, but also a reflection of Britain’s multicultural reality. When AI tools default to a single profile, they can undermine visibility for the thousands of men and ethnically diverse professionals who make up today’s NHS.
By failing to reflect this balance, generative AI doesn’t just miss demographic accuracy – it also misses an opportunity to celebrate the inclusivity, representation, and modern identity of the UK’s nursing workforce.
Surgeons – How AI sees the occupation

In our review of AI-generated images representing UK surgeons, the results offered a slightly more mixed – but still imperfect – picture. Of the five images produced, three depicted men and two depicted women, while two individuals appeared to be from Black, Asian or minority ethnic (BAME) backgrounds.
Compared with real-world data, this outcome aligns more closely than most. According to the latest figures, around 67% of surgeons in the UK are men and 33% are women, meaning surgery remains one of the more male-dominated branches of medicine. A 3:2 male-to-female split is, in fact, the nearest five-image approximation of that ratio – though it still reads as slightly more balanced than the 67:33 reality.
However, when it comes to ethnicity, the pattern is more promising – at least on the surface. The UK surgical workforce is 57% white and 43% from ethnic minority backgrounds, reflecting one of the most diverse medical specialisms. The AI-generated imagery, with 2 out of 5 images featuring BAME individuals, therefore slightly underrepresents this diversity.
Writers – When AI authors the author stereotype

AI produced a relatively balanced set of images of writers – three women and two men, with one individual appearing to be from a Black, Asian or minority ethnic (BAME) background. At first glance, this looks close to reality – and, unusually for this study, it largely is.
According to the latest UK labour data, 60% of authors, writers and translators are women, while 40% are men. This makes writing one of the few creative professions where women form the majority – and the AI’s 3:2 female split mirrors that 60:40 ratio almost exactly, one of the few cases in our test where the output tracked the workforce data.
Care worker – AI sees balance, reality doesn’t

The results revealed a mix of accuracy and omission for care workers. Of the five images reviewed, three featured women and two featured men, with two individuals appearing to be from Black, Asian or minority ethnic (BAME) backgrounds.
While this may initially appear balanced, the reality tells a different story. According to the latest workforce data, the UK’s care sector is overwhelmingly female, with around 81% of care workers identifying as women and 19% as men.
In terms of ethnicity, the AI’s depiction was closer to the real-world picture, though still exaggerated. Around 21% of care workers identify as being from BAME backgrounds, compared with 78% who are white – so two BAME individuals in a set of five modestly overstates, rather than erases, that diversity.
Police officers – A dated uniform

AI skewed heavily towards one dominant profile in our research. Of the five generated images, four portrayed white men and one a white woman, with no visible ethnic diversity across the set.
When compared with official workforce data, this represents an overemphasis on gender imbalance. According to the latest Home Office figures, the UK police force is 65% male and 35% female, meaning women are underrepresented in the AI imagery. On ethnicity, 8% of police officers in England and Wales identify as belonging to an ethnic minority group – including 5.9% Asian and 3.5% Black officers, while 85% identify as white. That share is small enough that a five-image sample could plausibly show no minority officers, but the all-white set still compounds the overall uniformity.
In this case, the AI amplified historical stereotypes – defaulting to white, male authority figures – rather than reflecting the ongoing efforts to diversify UK policing. The resulting images evoke a traditional, hierarchical view of law enforcement that doesn’t fully acknowledge the sector’s evolving demographics or its commitment to inclusion.
This visual bias is particularly significant in policing, where representation directly affects trust, legitimacy, and community connection. If AI-generated imagery continues to reproduce outdated stereotypes, it risks reinforcing the very perceptions that modern policing is actively working to change.
By presenting a version of the police that looks more like the past than the present, AI imagery underscores how machine-generated visuals can quietly re-entrench the status quo – even in professions striving hardest to transform it.
Data scientist – The algorithm’s own blind spot

Data scientists have hugely shaped the way we live and work in recent years – and, in many cases, they’re the very people whose data helps train the large language models (LLMs) now generating the images we see.
However, bias in our research was once again stark. All five generated images featured white men, with no visible representation of women or ethnic diversity.
While the gender imbalance may partially reflect real-world trends, the imagery still exaggerates the disparity. According to the latest industry data, around 78% of data scientists in the UK are men and 22% are women – a clear gender gap, but not total exclusion. The AI-generated results, however, presented a wholly homogeneous image of the field.
The absence of ethnic diversity was equally striking. Although official statistics on ethnicity within the data science workforce are less comprehensive, UK tech sector studies indicate that ethnic minority professionals are consistently underrepresented but present – making AI’s portrayal of an all-white field both inaccurate and regressive.
The results suggest that generative models are not simply reproducing existing workforce demographics; they risk amplifying those inequalities. In doing so, AI imagery reinforces one of the most persistent stereotypes in technology – that data science is a white, male domain – and overlooks the growing diversity within the UK’s tech and analytics community.
For a profession that shapes the very algorithms behind AI itself, the visuals here highlight the recursive nature of bias in technology: when the creators of algorithms are misrepresented by the algorithms themselves, it exposes how deeply cultural assumptions are baked into our visual understanding of expertise, intelligence, and innovation.
CEOs – The enduring archetype of power

In our analysis of AI-generated imagery representing UK CEOs, we prompted the model to show a lifelike image of a chief executive of a UK FTSE 350 company, and the results were strikingly uniform. All five images depicted white men, with no visible diversity in either gender or ethnicity.
When measured against real-world data, this outcome reflects – but also amplifies – existing inequalities. According to recent figures, only 21 of the FTSE 350 companies are led by women, representing just 6% of chief executive roles. While this highlights a clear gender imbalance at the top of British business, the AI’s portrayal takes that imbalance to an extreme – producing an image of leadership that is exclusively white and male.
Ethnic diversity within the UK’s executive landscape is similarly limited, though improving slowly. Yet the AI-generated imagery failed to show any sign of that progress, and the result is a visual narrative that not only mirrors existing disparities but also cements them – depicting leadership as something that still looks overwhelmingly like the past, not the future.
This matters because CEOs are powerful symbols of aspiration, authority, and success. When generative AI presents leadership in such a narrow way, it risks reinforcing long-standing cultural biases about who “looks like” a leader – biases that organisations and society are actively trying to dismantle.
Members of Parliament – AI’s parliament is stuck in the past

In our analysis of AI-generated imagery depicting UK Members of Parliament (MPs), the results leaned heavily toward one archetype. Of the five images generated, four portrayed white men and one a white woman, with no visible ethnic diversity across the set.
This doesn’t accurately reflect modern reality. Following the July 2024 general election, the UK achieved a record level of female representation, with 263 women now serving as MPs – accounting for 40.5% of the House of Commons. This marks a steady upward trend in gender balance and a more representative political landscape than in previous decades.
However, the AI-generated imagery appeared frozen in time, defaulting to a vision of political power that is overwhelmingly male and exclusively white. This not only underplays women’s growing presence in Parliament but also erases the increasing ethnic diversity among MPs – progress that has been both symbolically and substantively important in recent elections.
The result is imagery that fails to capture the evolution of British politics, instead reinforcing an outdated stereotype of authority: the white, male politician in a suit. As with corporate leadership, this matters because AI-generated visuals help shape cultural narratives about who leads, who decides, and who represents. When those narratives lag behind reality, they risk perpetuating the very inequities society has worked to redress.
By portraying politics as more homogenous than it truly is, AI not only misrepresents today’s Parliament – it risks distorting how the public “sees” democracy itself.
What AI imagery tells us about representation in the UK
Across almost every profession we analysed – from doctors and nurses to CEOs and designers – a clear pattern emerged. AI doesn’t just reflect the world we live in; it reimagines it through the lens of bias.
When prompted to create images of people in everyday UK roles, generative AI repeatedly defaulted to narrow, often outdated stereotypes. The outputs skewed disproportionately white, male, and middle-aged, even in professions where women or ethnically diverse groups make up a significant share of the real workforce.
In roles where women form the majority, the distortions ran in both directions: in teaching (76% female) and care work (81%), AI overstated male representation, portraying near gender balance where it does not exist, while in nursing (89% female) it went the other way, erasing the 11% of nurses who are men. And in the heavily male-dominated field of data science (78% male), AI pushed the imbalance further still, depicting an all-male set.
This pattern suggests that AI models draw heavily on entrenched cultural stereotypes – the kind circulated and discussed online – rather than on actual workforce data, visualising professions through a social rather than a statistical lens.
Ethnicity: Diversity erased or simplified
Ethnic representation was equally distorted. In healthcare, for example, over a third of nurses in England and around half of doctors in the UK are from Black, Asian, or minority ethnic backgrounds, yet AI largely whitewashed those roles. When diversity was present, it often appeared tokenistic – limited to one or two figures with darker skin tones rather than authentic depictions of Britain’s multi-ethnic workforce.
Even in creative fields such as graphic design, where diversity levels are lower but improving, AI generated entirely white imagery, revealing how visual bias can erase nuance across all sectors.
The White Male Archetype
In positions of authority – CEOs, MPs, and police officers – AI’s portrayal was even more uniform. The archetype of the white male leader dominated, despite real-world shifts toward greater diversity. While only 6% of FTSE 350 CEOs are women, AI rendered that imbalance absolute, producing five white men and no women at all. Similarly, in Parliament, where 40.5% of MPs are now women, AI failed to show this progress – reproducing an image of leadership that could look more like the 1980s than 2025.
The bigger picture: When bias translates through design
Across sectors, the findings reveal a deeper truth: AI imagery doesn’t just mirror bias – it codifies it. Because generative models are trained on vast datasets of historical and cultural imagery, they reproduce patterns that reflect long-standing social hierarchies. Professions associated with care or empathy become more feminine and whitewashed; those tied to power, intellect, or authority become more masculine and monocultural.
If not handled carefully, this has serious implications for the creative industries. These systems are increasingly used by marketeers, designers, advertising executives and journalists – the very people who shape and circulate visual narratives. If left unchecked, they risk hardwiring old inequalities into the next generation of content and culture.
A call for creative responsibility
For studios and designers working in graphic design and beyond, the challenge – and opportunity – of generative AI lies in awareness and intervention. By recognising where bias appears, we can begin to curate, correct, and consciously diversify the imagery these tools produce. That means interrogating prompts, refining datasets, and actively designing for inclusion.
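As one hedged illustration of what interrogating prompts can look like, the sketch below builds the demographic split into the prompt itself, using the workforce figures cited in this article. The wording and helper are hypothetical – our own, and not a feature of any particular platform:

```python
# Hypothetical sketch: deriving an explicitly representative prompt from
# workforce data, rather than leaving demographics to the model's defaults.

def representative_prompt(occupation: str, female_share: float,
                          bame_share: float, sample_size: int = 5) -> str:
    """Spell out the split a representative five-image set should show."""
    women = round(female_share * sample_size)
    bame = round(bame_share * sample_size)
    return (
        f"Generate {sample_size} lifelike images of UK {occupation}s in 2025. "
        f"Across the set, depict a {women}:{sample_size - women} "
        f"female-to-male split, and include {bame} of the {sample_size} people "
        f"from Black, Asian or minority ethnic backgrounds."
    )

# Nursing, using the NMC and England figures cited above (89% female, 34% BAME).
print(representative_prompt("nurse", female_share=0.89, bame_share=0.34))
```

Applied to nursing, this rule would have asked for a 4:1 female-to-male split – restoring the male nurses that the unguided model erased entirely.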
Ultimately, this study shows that AI’s “view” of Britain is still catching up to reality. But with deliberate, ethical creativity, we can help ensure that the next wave of generative and graphic design doesn’t just imagine what the world looks like – it helps reimagine who gets seen.
Methodology
We analysed representation across 11 UK professions using an LLM platform, prompting it to generate “five lifelike representations of a person working in [occupation] in 2025.”
The first five images returned for each profession were selected – mirroring the way most users interact with AI-generated visuals. Each image was then reviewed and coded according to visible gender and ethnicity, allowing for a comparative analysis against the latest available UK workforce demographic data by occupation.
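For transparency about that comparison step, here is a hedged sketch in Python. The observed counts and female workforce shares are the figures quoted throughout this article, while the structure and the simple nearest-integer rule are our own illustration, not the platform’s output format:

```python
# Sketch of the coding comparison: women observed in each five-image set
# versus the count implied by the cited female workforce share.

SAMPLE = 5

# profession: (women observed in 5 images, real female share)
professions = {
    "Doctors":           (2, 0.5004),
    "Graphic designers": (2, 0.60),
    "Teachers":          (2, 0.76),
    "Nurses":            (5, 0.89),
    "Surgeons":          (2, 0.33),
    "Writers":           (3, 0.60),
    "Care workers":      (3, 0.81),
    "Police officers":   (1, 0.35),
    "Data scientists":   (0, 0.22),
    "CEOs (FTSE 350)":   (0, 0.06),
    "MPs":               (1, 0.405),
}

for name, (observed, share) in professions.items():
    expected = round(share * SAMPLE)  # nearest-integer expectation
    print(f"{name:18} observed {observed}/5, "
          f"expected {expected}/5, gap {observed - expected:+d}")
```

On this simple measure, surgeons, writers and (by virtue of rounding a 6% share) FTSE 350 CEOs land on a zero gap; every other profession misses the split its own workforce data implies, with teachers missing it by two.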
Table – AI vs Reality: Key Bias by Profession
| Profession | Key Bias |
|---|---|
| Doctors | Underrepresents women and BAME doctors relative to GMC data. |
| Graphic Designers | Reinforces “white and male” creative stereotypes. |
| Teachers | Significantly underrepresents women despite being a heavily female workforce. |
| Nurses | Presents only white women, erasing real multicultural workforce. |
| Surgeons | Gender split close to the 67:33 reality; slightly understates ethnic diversity. |
| Writers | Near-exact gender match: 3:2 female split mirrors the 60:40 reality. |
| Care Workers | Overrepresents men compared to reality; ethnicity shown but exaggerated. |
| Police Officers | Defaults to white male authority figure; no ethnic diversity shown. |
| Data Scientists | Completely excludes women and ethnic diversity. |
| CEOs | Over-amplifies gender and ethnicity bias. |
| Members of Parliament | Presents Parliament as overwhelmingly white and male; erases real progress in gender and ethnic representation. |
