The AI transformation is reshaping how we work and live, and no one knows the full extent to which it will change our lives. AI won’t just change how we approach problem-solving—it may well change how we think. Meanwhile, leadership gender parity remains years away, with women holding less than 10% of CEO roles in the S&P 100 and the FTSE 100.
To better understand how the leadership gender diversity gap is playing out within this dynamic space, Russell Reynolds Associates took a deep dive into 39 AI application and platform organizations, analyzing the gender diversity of their senior leadership teams (see below for complete methodology). Additionally, we analyzed data from our Global Leadership Monitor, which captures the perspective of leaders across industries, including how they are interpreting AI's potential impacts on their organizations.
Our analysis found that women leaders’ representation is similarly poor within these AI-focused organizations (Figures 1 and 2): women hold only 30% of overall leadership roles and 10% of the CEO and top tech roles at these organizations.
Figure 1: Women leaders’ representation within AI platform and application organizations
Source: RRA Proprietary Analysis, select AI platform & application organization leadership teams, 2024 (n=39 companies, 426 executives); data on executive teams was collected via company websites and LinkedIn and was categorized and analyzed in Q4 of 2024.
The majority of women leaders in these organizations are clustered in HR and legal, holding 86% and 55% of these roles, respectively (Figure 2). These functional roles rarely hold the same level of influence as traditional “CEO feeder” roles, nor do they typically lead to the top job.
Figure 2: Gender diversity in AI platform/application organization executive roles
Source: RRA Proprietary Analysis, select AI platform & application organization leadership teams, 2024 (n=39 companies, 426 executives); data on executive teams was collected via company websites and LinkedIn and was categorized and analyzed in Q4 of 2024. Only roles with sample sizes >20 were included. Given the range of organizations analyzed, SLT size and composition varied significantly.
AI’s proliferation has created a complex, rapidly changing vendor landscape that is already having massive implications for the workforce. Given the pace of change, many leaders are making decisions about this technology without fully understanding it. Leaders are even less optimistic about their organization’s direction on AI: our Monitor found that only 27% agree that their organization has provided the right level of guidance to harness generative AI ethically and safely, and only 24% agree that they have the right processes in place to protect against AI misuse. As a result, organizations are relying on AI vendors to design useful, ethical technologies with appropriate safety controls and protocols for bias mitigation and detection.
The problems AI organizations face with bias in their tools are well documented. There are numerous instances of AI producing results that perpetuate stereotypes and bias across a wide range of use cases, including healthcare, applicant tracking systems, online advertising, image generation, and predictive policing. While AI may appear neutral, it is made by humans, which means it internalizes and iterates on all the same biases we have, including gender bias.
But what we are talking about goes beyond distasteful or comically incorrect outputs. AI tools and their forebears are applied to decisions with real-world consequences for individuals and communities—decisions on job applications, mortgage and loan applications, or healthcare. As such, the leaders of AI organizations wield enormous influence. With that comes great responsibility.
Large language models (LLMs) are trained on vast existing datasets designed to represent natural language. But historically, data collection has centered men’s experiences and overlooked women’s. This male centricity is rarely deliberate—it’s typically a function of cost, convenience, or a simple lack of consideration. But whether intentional or not, biased product design (stemming from incomplete or inaccurate datasets) has resulted in serious disadvantages for women and their wellbeing, impacting:
Medicine & Health Outcomes: In 2020, only 1% of healthcare research and innovation was invested in female-specific conditions beyond oncology. This lack of data negatively impacts women’s health outcomes by failing to address the female body specifically, resulting in women spending 25% more of their lives in debilitating health than men.

Vehicular Safety: Many safety standards and policies are designed on the assumption that women are just smaller versions of men—a concept called the “Henry Higgins Effect.” For example, car crash dummies have historically been built to mimic men’s bodies. While men are more likely to crash, women involved in collisions are 47% more likely to be seriously hurt, and 17% more likely to die.

Equipment Design: Most personal protective equipment (PPE) is based on the sizes and characteristics of male populations from Europe and the US. Using a “standard” US male face shape for dust, hazard, and eye masks means PPE doesn’t fit most women (as well as many Black and minority ethnic men). A 2017 TUC report found that only 5% of women in emergency services said their PPE never hampered their work, with body armor, stab vests, hi-vis vests, and jackets all highlighted as unsuitable.
For more on the adverse effects of gender bias in big data collection on women, see Caroline Criado Perez’s Invisible Women: Data Bias in a World Designed for Men (New York, NY: Abrams Press, 2019).
AI can help us guard against repeating these issues – but only if designed with the appropriate guardrails, and with input from everyone it aims to reflect and protect. While improved gender balance in leadership teams is by no means a silver bullet, an organization’s leadership helps shape those safeguards, answer big questions around where their company should focus, and outline what problems they’re aiming to solve with this technology.
This lack of representation doesn’t take away from the impactful and groundbreaking work that current women leaders are doing in this space—for example: Fei-Fei Li (Stanford University), Mira Murati (former CTO, OpenAI), Daniela Amodei (President, Anthropic), Melanie Perkins (CEO, Canva), and Lila Ibrahim (COO, DeepMind), to name just a few.
While every senior leadership role is important to a company’s healthy functioning, if AI is to truly meet the future needs of society, more balanced gender representation in its technical design and governance will be critical to stopping the perpetuation of unintended and embedded gender biases.
But currently, women leaders are severely underrepresented in technology leadership roles in AI companies, holding a mere 22% of the product, engineering, and science roles observed (Figures 2 and 3). And we observed only four women CEOs and four women CTOs across these 39 organizations.
Figure 3: Women leaders’ representation in select key roles at AI organizations
Source: RRA Proprietary Analysis, select AI platform & application organization leadership teams, 2024 (n=39 companies, 426 executives); data on executive teams was collected via company websites and LinkedIn and was categorized and analyzed in Q4 of 2024.
While the small sample size makes broad conclusions difficult, our findings suggest that when women helm AI companies and/or their tech teams, they’re more likely to achieve gender-balanced leadership. Of the four women-led AI organizations we examined, two have achieved gender parity and three are more gender diverse than the average AI leadership team (i.e., their senior leadership teams are more than 30% women).
While it’s important to understand gender representation across AI’s overall executive population, it’s perhaps even more crucial to understand how it varies between organizations. With that, we also analyzed the leadership teams of these 39 AI platform and application organizations.
As of Q4 2024, four of these organizations’ senior leadership teams consist entirely of men, and seven have only one woman executive on board (Figure 4). Nineteen of these C-suites are less than 25% women, and 30 are less than one-third women.
Only two of the 39 organizations analyzed have senior leadership teams that are more than 50% women. As noted above, both are led by women.
Figure 4: AI platform/application leadership team gender diversity snapshot
Source: RRA Proprietary Analysis, select AI platform & application organization leadership teams, 2024 (n=39 companies, 411 executives – noting execs listed on organization’s website ONLY); data on executive teams was collected via company websites and LinkedIn and was categorized and analyzed in Q4 of 2024.
According to RRA’s Global Leadership Monitor, which gathers perspectives from leaders across industries, women leaders at every level (next generation, C-suite, CEO, and board director) are more likely to be concerned about AI’s most pressing issues in the workplace, including the perpetuation of misinformation and the introduction of bias into talent processes (Figure 5).
Figure 5: Women leaders across levels are more likely to express concern about AI’s societal impacts
Source: RRA H1 2024 Global Leadership Monitor, n=1,432 board, C-level, CEO, and next gen leaders, 2024.
Source: RRA H1 2024 Global Leadership Monitor, n=124 C-suite women, 315 C-suite men
It stands to reason that women leaders—who have likely had to contend with more gender bias throughout their careers than men—would be more attuned to AI’s bias risks, and more likely to favor stronger regulation to ensure its safe use.
We’ve already seen many generative AI product launches miss the mark with women and underrepresented minority groups, resulting in embarrassing rollbacks for their organizations. Given women leaders’ poor representation in AI organizations—particularly in roles that influence product design—one has to wonder whether these missteps could have been avoided had more people inclined to ask tough questions about bias and misinformation mitigation been in the room during these tools’ creation. Of course, this is not simply a question of representation, but also one of capability and organizational culture.
While bias may be an inescapable fact of life, it need not be an unavoidable aspect of new technologies. In fact, when designed thoughtfully, these technologies can help us solve these issues. There are many steps AI organizations can take toward this outcome, but ensuring that their leaders represent everyone for whom they’re developing this technology is key.
No company can outperform its leadership. And research has long shown that gender-balanced leadership teams produce better outcomes. Findings from our RRA Artemis movement indicate that the holistic way in which women leaders approach problem-solving and leadership is even more important in the increasingly volatile landscape of work. For technology organizations aiming to tap into the power of AI for all via their leadership teams, consider the following.
We analyzed 39 AI application (vertical applications, creative consumer applications, and general productivity enterprise applications) and platform (cloud data platforms, foundational model developers, AI developer tools, AI software solutions) organizations. Most of these organizations are based in the United States, but organizations based in the United Kingdom, Europe, and China are also represented.
Data was collected in September–October 2024 and analyzed in November 2024. Given the industry’s high volatility (e.g., three senior leaders left OpenAI during the data collection process), leaders in this space change roles faster than the industry average. We make every attempt to ensure data accuracy at the time of analysis, but cannot account for every role change that occurs after data collection.
We did not include hardware organizations that are manufacturing the chips that power AI. While these organizations are pivotal to the development of AI, they are less involved in the design of AI outputs, meaning the gender diversity of these senior leadership teams is less relevant for this discussion.
However, addressing the gender gap within these organizations is crucial to overall gender diversity within the tech space. And with external research suggesting that chip companies like Nvidia and Intel also face a gender gap amidst the AI boom, this warrants further study.
We did not analyze ethnic diversity as part of this study, due to GDPR and other privacy laws that limit data collection of this type. However, ethnic diversity is a crucial piece of the parity puzzle and AI’s product design. This important topic also warrants further study.
Gender data comes from Boardex, LinkedIn, and company websites. Due to the sensitivity and complexity of this data, no data is reported on individuals and all data is analyzed and reported in the aggregate. If not explicitly self-identified, the gender of members included in this analysis has been inferred either from the pronouns used on their LinkedIn profiles or on the basis of first name. Members whose gender could not be inferred as either men or women were excluded from this analysis.
Our data sources (Boardex, LinkedIn, and company websites) provide information on each executive's job title and responsibilities. However, that information varies widely, reflecting both differences in specific job titles for common functional roles and differences in organizational structures across industries.
We designed a role categorization process whereby each executive was tagged to a specific role. In some instances, only one executive per company could be tagged to a role, while in other instances multiple individuals could be tagged to the same role category. See the table below for more details.
CEO: Multiple allowed to account for co-CEOs
CFO, CHRO, CMO, COO, GC: Only one person per company listed (if role exists).
CIO/CTO: Multiple allowed in specific cases where there is a clear separation between Corporate IT (CIO) and Technology roles.
Strategy: Multiple allowed. Category includes Corporate Development. R&D and Innovation roles categorized in Product/Engineering.
Commercial: Multiple allowed. Category includes merchandising and customer roles (unless they have a very clear product/engineering orientation).
Other Functional Leadership: Any functional roles not caught in the above buckets. These are often sub-function roles (e.g., Treasury, which is part of Finance) or roles that are less common at this level (e.g., Comms, Corporate Affairs).
Operations/Supply Chain/Logistics: Multiple allowed. Category covers supply chain, logistics, and operations roles (e.g., in banks and retail organizations).
Product/Engineering/Science: This category covers any roles that clearly relate to the development/creation of the product itself. Quality roles belong here, as do innovation and R&D roles. Analysis includes heads of Generative AI at large tech organizations leading product development who are not listed as part of an organization’s ELT.
P&L Leaders: Individuals that run business units, regions, or lines of business.
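A tagging pass like the one described above could be implemented roughly as follows. This is a minimal sketch under stated assumptions: the keyword lists, category names beyond those in the table, and the single-incumbent fallback are illustrative, not RRA's actual rules.

```python
# Hypothetical sketch of the role-categorization pass described above.
# Keyword lists are illustrative assumptions; real titles need richer matching.

SINGLE_INCUMBENT = {"CFO", "CHRO", "CMO", "COO", "GC"}  # one per company, if present

TITLE_KEYWORDS = {
    "CEO": ["chief executive", "co-ceo"],
    "CFO": ["chief financial"],
    "CHRO": ["chief human resources", "chief people"],
    # Per the table above, R&D and innovation roles fall under Product/Engineering.
    "Product/Engineering/Science": ["product", "engineering", "r&d", "innovation", "quality", "science"],
    "Strategy": ["strategy", "corporate development"],
}

def categorize(title: str) -> str:
    """Map a raw job title to a role category via keyword matching."""
    t = title.lower()
    for role, keywords in TITLE_KEYWORDS.items():
        if any(k in t for k in keywords):
            return role
    return "Other Functional Leadership"  # catch-all bucket from the table

def tag_company(executives: list[dict]) -> list[dict]:
    """Tag each executive, allowing only one incumbent per single-incumbent role."""
    seen: set[str] = set()
    tagged = []
    for ex in executives:
        role = categorize(ex["title"])
        if role in SINGLE_INCUMBENT and role in seen:
            role = "Other Functional Leadership"  # demote duplicate incumbents
        seen.add(role)
        tagged.append({**ex, "role": role})
    return tagged
```

In practice a categorization like this would be followed by manual review, since executive titles (especially in AI organizations) rarely match clean keyword patterns.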
Additionally, we analyzed data from RRA’s H1 2024 Global Leadership Monitor, an online survey of executives and non-executives that gathers leaders’ perspectives on the impact of external trends on organizational health and their leadership implications (first launched in 2021). Russell Reynolds Associates surveyed its global network of executives via an online/mobile survey from March 4 to April 1, 2024. Previous Global Leadership Monitor surveys were deployed in February/March 2021, March 2022, October 2022, March 2023, and September/October 2023.
Leah Christianson and Tom Handcock of RRA’s Center for Leadership Insight conducted the research and authored this report.
The authors would like to thank the following contributors from Russell Reynolds Associates’ Technology and Board & CEO Advisory practices for their time and valuable perspectives:
Fawad Bajwa leads Russell Reynolds Associates’ AI, Analytics & Data Practice globally. He is based in Toronto and New York.
David Finke is a senior member of the firm’s Technology and Board & CEO Advisory practices. He is based in Palo Alto.
Margot McShane co-leads Russell Reynolds Associates’ Board & CEO Advisory practice in the Americas, and is the co-founder of RRA Artemis. She is based in San Francisco.
Hetty Pye is a senior member of Russell Reynolds Associates’ Board & CEO Advisory practice, and is the co-founder of RRA Artemis. She is based in London.
Tuck Rickards is a senior member and former leader of Russell Reynolds Associates’ Technology practice. He is based in San Francisco.