AI is on the brink of a significant shift, but not in the way you might think. This isn't just about the latest tech breakthroughs; it’s about who’s leading the charge.
The gender gap in AI isn’t merely a statistic; it’s a call to action. Women are underrepresented across AI inputs, outputs and leadership, and that’s hindering progress. But change is here. And it's about time.
Incorporating women into AI ensures we don’t repeat history. It ensures women are in the room when AI is being created and tested, so the systems and solutions we build don’t have gaps in data and perspective. This presence is crucial to ensure that the tools that increasingly shape how we learn, work and self-identify are not biased from the outset. Otherwise, we risk coding inequality into the next generation of opportunities and solutions.
According to the World Economic Forum, women make up as little as 28% of the STEM workforce in some regions, and even fewer enter AI-specific fields. If the next generation relies on biased AI models for guidance, we perpetuate this problem.
Recognizing the urgency of these issues, Real Chemistry recently united over 70 life sciences executives at a dynamic Women in AI event. This forum sparked insightful discussions and strategic ideas to bridge the gender gap in AI.
Here’s what we discussed.
Why You Should Care: The Bias in AI Systems
The underrepresentation of women in AI extends far beyond being a gender issue; it affects everyone. AI systems reflect the biases present in their data and the teams that create them.
Nearly 44% of AI systems across a range of industries exhibit gender bias.1 This bias not only skews outputs but also perpetuates the marginalization and underrepresentation of women. However, gender bias is only one part of the equation. Social biases, such as racial bias, are also prevalent and must be addressed simultaneously to ensure AI systems are equitable for all.
Hidden Algorithms, Visible Bias: The Gender Gap in AI Outputs
- A study by Kotek et al. (2023) found that modern large language models (LLMs) are 3–6 times more likely to assign occupations based on gender stereotypes, amplifying societal stereotypes more than actual job statistics do.2
- An MIT thesis, “When ChatGPT Becomes a Hiring Manager” (May 2024), revealed that while LLMs were more likely to hire female candidates, they tended to recommend lower salaries for them.3
- Research from TrustNLP 2025 (Ding et al.) shows ChatGPT exhibits significant gender bias across multiple languages, not just English.4
These examples of bias highlight why closing the gender gap in AI is not just a women's issue but a societal imperative.
How did we get here?
Bias in AI systems primarily originates from skewed training data and is further intensified by reinforcement learning from human feedback.2 Studies, including those by UN Women, emphasize that AI technologies often reflect societal stereotypes, impacting crucial areas such as hiring, healthcare, voice assistants and language translation.5
This mirroring of existing biases in AI systems can perpetuate and even exacerbate inequalities across various applications, highlighting the need for more equitable and representative data and training practices.
Where We Stand Today: A Critical Look at AI's Gender Imbalance
Women make up less than one-third of the AI workforce. The higher up you go, the steeper the drop. Women represent 29.6% of entry-level jobs, but that number falls to just 13.8% at the senior executive level—a sharp decline that shows how the gender gap widens as careers progress.6
And it’s not just about headcount; it’s about how women experience AI at work.
Women are 16 percentage points less likely than men to use AI tools, and they often feel less supported when they do.7 While 83% of men say their companies encourage them to use AI, only 61% of women feel the same.1
Training is another sticking point. In the general workforce, only 49% of women users say their company invests in generative AI training.1
Industry leaders are starting to take notice, and their perspectives echo the urgency of the problem:
- Deloitte: Highlights a closing adoption gap but notes a persistent trust issue
- Time Magazine: Calls for more female voices in AI to prevent career stagnation
- Harvard Business School: Warns of the consequences if women continue to avoid AI
- interface: Identifies the talent pool gap as a critical issue
Closing these gaps isn't just about fairness; it’s the key to creating AI that’s innovative, trusted and built to serve everyone. The question now is how we get there.
The Path Forward
Achieving gender parity in AI isn't just a goal—it's a movement gaining strong momentum. Empowering women to lead in AI is essential for driving the industry forward. To dismantle these barriers and biases, we must adopt targeted strategies that pave the way for a more equitable AI future.
Strategies for Change
Creating an AI landscape that respects and reflects the diversity of our world requires more than good intentions; it takes deliberate action. By putting the right strategies in place, we can reduce bias, build trust and ensure AI works for everyone.
Start with the model
Bias often creeps in through the data. To fix it, researchers use techniques like:
- Adjusting word embeddings to remove gender stereotypes; for example, stopping models from linking “homemaker” with women and “programmer” with men
- Counterfactual data augmentation, in which gendered words (“he” and “she”) are swapped in the training data, cutting bias by nearly 20% in some models
- Removing biased training points, a method developed by MIT to help models treat underrepresented groups more fairly without hurting accuracy
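To make the second technique concrete, here is a minimal sketch of counterfactual data augmentation. The word list and function names are illustrative assumptions; a word-level swap is a simplification, and real pipelines also handle names, coreference and grammatical ambiguity (for example, “her” can be possessive or objective). The roughly 20% reduction cited above comes from the published research, not from this toy example.

```python
import re

# Swap table for counterfactual augmentation (deliberately small).
# Note the ambiguity: "her" is mapped to "his" here, which is only
# correct for the possessive sense; production systems disambiguate.
SWAP_PAIRS = {
    "he": "she", "she": "he",
    "his": "her", "him": "her", "her": "his",
    "man": "woman", "woman": "man",
}

def counterfactual(text: str) -> str:
    """Return a copy of `text` with gendered words swapped, keeping case."""
    def swap(match):
        word = match.group(0)
        repl = SWAP_PAIRS[word.lower()]
        return repl.capitalize() if word[0].isupper() else repl
    pattern = r"\b(" + "|".join(SWAP_PAIRS) + r")\b"
    return re.sub(pattern, swap, text, flags=re.IGNORECASE)

def augment(corpus):
    """Double the corpus: each sentence plus its gender-swapped twin."""
    return corpus + [counterfactual(s) for s in corpus]
```

Training on the augmented corpus exposes the model to both versions of every gendered sentence, so it has less reason to associate an occupation or trait with one gender.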
Double-check the outputs
Even with better data, bias can slip through. Post-processing tools help catch those issues before they reach users. For example, teams can use filters and threshold adjustments, techniques that review results after the model runs and rebalance them if one group is over- or underrepresented.
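As a minimal sketch of the threshold-adjustment idea, the snippet below picks a per-group score cutoff so that each group is selected at the same target rate. The function names and data shapes are assumptions for illustration, not a standard API; real fairness toolkits balance this kind of rebalancing against accuracy and other criteria.

```python
def group_thresholds(scores, groups, target_rate):
    """Choose a score threshold per group so roughly `target_rate`
    of each group's candidates pass. A post-processing sketch:
    it runs on model outputs, after training is finished."""
    thresholds = {}
    for g in set(groups):
        g_scores = sorted(s for s, grp in zip(scores, groups) if grp == g)
        # Keep the top `target_rate` fraction of this group's candidates.
        keep = max(1, round(len(g_scores) * target_rate))
        thresholds[g] = g_scores[len(g_scores) - keep]
    return thresholds

def decide(score, group, thresholds):
    """Final decision: pass if the score clears the group's threshold."""
    return score >= thresholds[group]
```

If one group's raw scores skew lower (because of biased training data), its threshold comes out lower too, equalizing selection rates across groups without retraining the model.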
Build diverse teams
Bias isn't just about the code; it’s about who writes it. Companies like IBM (AI Fairness 360) stress building development teams with different perspectives and documenting how algorithms work so they can be reviewed and improved.
Audit early, audit often
Bias isn’t a one-time fix. Regular checks keep systems accountable:
- Tools like GenderBench help teams measure 19 types of gender-related bias across 14 test “probes.” These probes flag issues like stereotypical reasoning, underrepresentation and discriminatory responses in hiring or medical scenarios.
- New York City Local Law 144 now requires independent annual audits of hiring algorithms to catch discrimination based on race or gender.
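In the spirit of the audit probes above, even a toy metric can quantify how often a model's completions follow stereotypical occupation-pronoun pairings. The data format and stereotype map below are invented for illustration and are not GenderBench's actual interface; an unbiased model should score near chance, while a biased one scores close to 1.0.

```python
def stereotype_rate(completions, stereotypes):
    """Fraction of (occupation, pronoun) completions that follow the
    stereotypical pairing in `stereotypes`, e.g. {"nurse": "she"}.
    A toy audit probe; real audits use many probes and scenarios."""
    hits = sum(1 for occupation, pronoun in completions
               if stereotypes.get(occupation) == pronoun)
    return hits / len(completions)
```

Run routinely (say, on every model update), even a simple counter like this turns “the model seems biased” into a number a team can track and be held accountable for.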
Fairness isn’t just good ethics; it’s good business. Teams that invest in these steps end up with AI that’s not only more equitable but also more resilient, innovative and trusted.
Critical Thinking and Upskilling
Even with better models, filters and audits, there’s one thing technology can’t replace: human judgment. As AI becomes part of everyday life, we can’t let people, especially younger generations, lose the ability to question what they’re seeing.
Upskilling isn’t about making everyone an AI engineer; it’s about teaching teams how to:
- Question outputs: Encourage users to ask, “Does this make sense?” or “Does this match other trusted sources?” before sharing or acting on AI-generated information.
- Use fact-checking habits: Cross-reference AI outputs with reputable sources, news outlets or peer-reviewed research before treating them as truth.
- Understand prompting: Train users to craft clear, specific prompts that improve AI accuracy, and to rerun or refine if the results feel off.
- Leverage human experience: Remind users that expertise, context and common sense are still the best tools for spotting when AI is wrong or misleading.
By building these habits into training programs, schools and workplaces, we can make sure AI augments human intelligence instead of replacing it. It keeps critical thinking alive and ensures that automation doesn’t come at the cost of discernment.
Join Real Chemistry in Leading the Charge
At Real Chemistry, we're not just part of the conversation; we're leading it. With a female-forward AI culture where women make up 53% of top users and hold significant leadership roles, we're setting the standard for inclusivity and innovation.
Join us in forging a future where AI reflects the world it serves.
Sources:
1. Deloitte. Tech, Media & Telecom. Available from: https://www.deloitte.com/us/en/insights/industry/technology/technology-media-and-telecom-predictions/2025/women-and-generative-ai.html (last accessed Aug 2025)
2. Kotek H, et al. Proceedings of The ACM Collective Intelligence Conference. 2023;12–14. Available from: https://dl.acm.org/doi/10.1145/3582269.3615599 (last accessed Aug 2025)
3. Gerszberg NR. Quantifying Gender Bias in Large Language Models: When ChatGPT Becomes a Hiring Manager. Thesis, Massachusetts Institute of Technology. 2024. Available from: https://dspace.mit.edu/handle/1721.1/156812 (last accessed Aug 2025)
4. Ding YT, et al. Proceedings of TrustNLP 2025. Association for Computational Linguistics. 2025:552–579. Available from: https://aclanthology.org/2025.trustnlp-main.36/ (last accessed Aug 2025)
5. UN Women. Interview. Available from: https://www.unwomen.org/en/news-stories/interview/2025/02/how-ai-reinforces-gender-bias-and-what-we-can-do-about-it (last accessed Aug 2025)
6. Pal S, et al. Interface. 2024:1–42. Available from: https://www.interface-eu.org/publications/ai-gender-gap (last accessed Aug 2025)
7. Humlum A, Vestergaard E. Proc Natl Acad Sci U S A. 2025;122:e2414972121.