Reprogramming Equality: Decoding the Algorithm of the EU’s AI Act
Written by Letizia Rovere, Emma Cavalieri, Marta Kolodziej and Bianca Emanueli
1. Executive Summary
AI technology is making inroads into nearly all spheres of life: beyond private use of generative language tools, large companies such as Google, Amazon, and Meta are increasingly integrating AI into their operations, from logistics and decision-making to advertising and recruitment (Kelly, 2025). Against this backdrop, as AI-driven solutions spread across sectors, experts point to a widening range of risks connected to the expansion of this technology. A growing number of researchers highlight how AI-driven tools can reproduce and amplify gender inequalities, particularly in core areas of private and public life. In this context, concerns over gender bias, bodily image autonomy, and privacy protection have been at the forefront of the discussion (Caballar, 2024).
This policy brief offers an analysis anchored in gender equality and human rights perspectives and centres on one of the high-risk categories set out in the European Union’s Regulation 2024/1689: employment. It examines the link between AI-driven hiring, professionalism, and women’s access to new spaces and male-dominated fields. Curbing this bias requires understanding the multidimensional nature of women’s quest to exercise leadership, so that women are able to navigate it.
2. Introduction
Since its introduction in late 2022, generative Artificial Intelligence (AI) has profoundly reshaped the relationship between humans and technology. From its early days, one of the most widely used AI-driven text generators, ChatGPT, has received massive attention around the world: by January 2023, it was already producing the equivalent of humankind’s entire printed output every 14 days (Thompson, 2023). Two years later, OpenAI launched a free-of-charge image generation tool integrated into ChatGPT, which was used to generate 700 million pictures in the first three weeks after its introduction (Wiggers, 2025). In a recent podcast, Brazilian Professor Mariana Valente, of the University of Saint Gallen, discussed AI-facilitated gender violence and its reform from EU and Brazilian perspectives (Café da Manhã [RFM], 2026). As Grok, an AI-powered content generation platform integrated into X, reaches international headlines with cases of AI deepfakes, Prof. Valente reflects on how AI contributes to a specific form of gendered discrimination that expropriates victims of their image and agency to levels unprecedented in history, enlarging both the scale of possible discrimination and the spectrum of violence.
Gender bias, broadly defined, is discrimination against people whose biological sex, sexual orientation, and/or gender identity differ from those of cisgender heterosexual men (Cuesta, 2025). In relation to AI, such bias manifests itself in a wide array of examples. For instance, semantic associations in the form of word embeddings default professional role names to their stereotypical gendered forms (O’Connor and Liu, 2023). Similarly, women are underrepresented in AI-generated images of employees performing traditionally masculine jobs (Gorska and Jemielniak, 2023).
This policy brief focuses on how the EU’s AI Act (Regulation 2024/1689) handles gender perspectives in its risk-based approach, especially in high-risk systems in recruitment. In particular, this piece examines the extent to which the law in place addresses the gender dimension and the resulting impacts on women across the EU.
3. Policy Description
A recent World Bank report (2024) analysed how discrimination is enshrined in law and highlighted how, even when gender lenses are superimposed on legislation, the interplay between culture, religion, social norms, community pressure, and codified rules continues to perpetuate gendered dynamics (World Bank, 2024). The findings of the report are instrumental to understanding the limits of relying solely on legal change to address gender bias in AI recruitment, particularly its failure to consider the cultural dimensions that lead societies to justify and perpetuate discrimination.
AI systems have been increasingly used in employment decision-making, prompting the European Union to intervene through Regulation 2024/1689, hereafter referred to as the AI Act. This European regulatory framework takes a risk-based approach, with Annex III, Section 4 classifying employment and work management as high-risk applications. For this reason, the use of AI within the workplace requires further investigation, as these systems often affect access to work, career progression, and economic security. As noted in the European Commission Impact Assessment on AI (2021), there is a need to establish harmonized rules over the placement on the market and use of products employing AI technology within the Union. It is important to highlight that these systems shape labour market access as a whole and create significant structural consequences, as errors and biases are rarely limited to individuals but rather systematically affect certain groups of workers.
4. Policy problem
The information gap surrounding gender bias in AI is significant and directly affects European women in the employment context. Research shows that AI-generated images tend to disproportionately depict men in “traditionally masculine professions”, while women tend to be overrepresented in care-oriented roles, reinforcing occupational segregation and stereotypical associations in professional contexts and having a direct effect on automated recruitment and screening systems (Gorska & Jemielniak, 2023). Such bias is substantial and translates into direct employment discrimination.
As Cuesta (2025) emphasizes, gender bias in AI can be traced back to different sources, affecting AI systems at various levels. First, bias can be data-related, stemming from incomplete datasets that underrepresent women in the training of LLMs. Another source of bias lies in the functioning of algorithms, which are in turn affected by confirmation bias, coverage bias (i.e., what is accessible), ranking bias (i.e., based on the popularity of certain pieces of information and a reliance on visual formats rather than textual data), and presentation bias (i.e., how information is displayed). Lastly, the presence of bias in AI systems can be attributed in part to the underrepresentation of women in STEM disciplines, such as programming and coding.
While the AI Act recognises employment as a high-risk category, it does not explicitly consider gender as a separate risk category within its risk management framework. For the purposes of the AI Act, high-risk systems used in the context of employment fall into two categories: systems used in recruitment and selection (with filtering and selection functions) and systems affecting decision-making in the workplace (such as promotions, termination, and evaluation of performance). Given the inherent nature of these processes, the use of AI that is not properly scrutinised through a gendered lens can place a further burden on women in the employment context. As a result, gender-based discrimination in AI-driven employment systems remains a foreseeable, yet insufficiently specified, risk under the current regulatory framework.
5. Policy research
5.1 Empirical Evidence of Gender Bias in Employment AI
Empirical research demonstrates that gender bias within AI systems is pronounced in employment-related technologies such as hiring algorithms, CV-screening tools, and automated evaluation systems. One of the most prominent real-world examples emerged when a recruitment algorithm used by Amazon, one of the most influential companies worldwide, was found to systematically downgrade CVs containing indicators associated with women, such as participation in female-dominated professional networks (Dastin, 2018). This case illustrates the potential harm of algorithmic hiring systems trained on historical employment data, which are highly likely to reproduce existing labour market inequalities.
Academic research further confirms the existence of such biases and identifies them as systemic rather than merely isolated incidents. Raghavan et al. (2020) demonstrate that algorithmic tools often perpetuate and embed structural gender bias, even when there are clear attempts by the developers to adjust the systems; this can be explained by the reliance on historical datasets reflecting male-dominated occupational patterns. Similarly, research highlights that word embeddings tend to directly associate technical and leadership roles with male identities while linking female identities to supportive or domestic positions, thereby reinforcing occupational segregation and biases that influence the automated screening process (O’Connor and Liu, 2023). These representation biases are reinforced in gendered linguistic environments, where AI models default to masculine professional terminology for high-status occupations (Wellner, 2020).
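The embedding effect described above can be illustrated with a minimal sketch. The vectors below are invented three-dimensional toys, not real learned embeddings, and the helper name `gender_lean` is hypothetical; the point is only to show how cosine similarity can surface gendered associations between profession words and gendered pronouns, which is the kind of measurement the cited studies rely on.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-dimensional vectors, invented for illustration only; real
# embeddings have hundreds of dimensions learned from large corpora.
embeddings = {
    "he":       [0.9, 0.1, 0.0],
    "she":      [0.1, 0.9, 0.0],
    "engineer": [0.8, 0.2, 0.3],
    "nurse":    [0.2, 0.8, 0.3],
}

def gender_lean(word):
    """Positive values lean toward 'he'; negative values toward 'she'."""
    return (cosine(embeddings[word], embeddings["he"])
            - cosine(embeddings[word], embeddings["she"]))

for job in ("engineer", "nurse"):
    print(job, round(gender_lean(job), 3))
```

In this toy setup “engineer” leans toward the male pronoun and “nurse” toward the female one, mirroring the stereotypical associations the literature documents in real embedding spaces.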
Intersectional evidence further highlights disparities within automated assessment technologies. Facial recognition and evaluation systems have shown higher error rates in the analysis of women of colour, raising concerns over automated performance monitoring and identity verification tools used in recruitment and workplace surveillance (Buolamwini and Gebru, 2018). Collectively, these findings show that gender bias in employment is not an accidental error, but rather a measurable and persistent outcome, capable of shaping hiring results and influencing career trajectories.
5.2 AI Act Regulatory Strengths and Limits
The AI Act represents the first attempt of the European Union to regulate artificial intelligence systems through a risk-based governance framework. Annex III, Section 4, classifies the AI systems used in employment as high-risk, reflecting their potential to affect fundamental rights and labour market access. As a result, providers and deployers have to comply with strict obligations, focusing on risk management procedures, data governance requirements, human oversight mechanisms, and monitoring duties.
These provisions establish vital safeguards: the regulation requires training data to be representative, and Article 27 requires deployers to conduct compulsory Fundamental Rights Impact Assessments (FRIAs) prior to the implementation of high-risk systems. For this reason, the AI Act is academically recognised as a turning point in the operationalisation of fundamental rights within digital structures (Veale and Borgesius, 2021).
However, despite its ambitious attempts, the AI Act does not explicitly recognise gender discrimination as a distinct risk category. While the requirement for representative datasets implicitly addresses gender bias, the Regulation does not mandate gender-disaggregated testing, specialised bias audits, or systematic monitoring of gendered employment outcomes. As noted by Cuesta (2025), the absence of these targeted gender safeguards risks allowing AI systems, such as algorithmic recruitment and workforce management tools, to reproduce and perpetuate existing labour inequalities.
Additionally, control over compliance represents a critical issue within the framework established by the AI Act. The deployment of systems classified as high-risk gives rise to a right to explanation when a decision affects an individual’s legal position or fundamental rights, including the right to non-discrimination. Under Article 86, the deployer is required to provide clear and meaningful explanations of the role AI systems played in the decision-making procedure. This provision implies that the final decision is not exclusively attributable to the AI system itself but remains subject to human oversight.
Research has shown, however, that demonstrating the existence of biases in AI systems can prove to be challenging. This reflects the fact that automated discrimination, whether it involves AI or not, appears to be more abstract and intangible, making it difficult to even suspect that a decision was affected by automated biases (Grozdanovski, 2025). Moreover, due to their specific nature, AI systems undergo a constant evolution and training processes, and, as a result, the occurrence and frequency of discriminatory outcomes in the decision-making process may vary over time (De Stefano and Wouters, 2022).
In addition, an individual requesting an explanation under Article 86 may encounter obstacles. The applicant would be required to prove that the treatment they were subject to was unfair compared to others in a similar situation, which would in turn require at least a general knowledge of how the AI system operated in that instance. Such requirements run counter to the purpose of the provision, as they demand that the applicant already hold evidence that would likely be contained in the explanation given as a result of the claim. While the right to an explanation is an important tool to challenge discriminatory practices within high-risk AI systems, its effectiveness can ultimately be hindered.
The AI Act relies mostly on ex ante technical compliance mechanisms, thus failing to create a comprehensive system for enforcement and specific remedies in the area of the protection of fundamental rights. The lack of such a system, along with the lack of a specific gender lens in tackling discrimination and bias in high-risk systems, leads to uncertainty with regard to how gender bias in the recruitment process and in work-related decision-making can be addressed and challenged.
Although the AI Act provides for a strong governance framework, gender-bias discrimination remains a foreseeable concern that is not sufficiently specified. This gap highlights the need to promptly consider how measures of implementation could integrate gender equality into existing compliance mechanisms.
6. Policy recommendations
Legislation depends directly on governmental action, whereas culture is shaped by the input of every citizen. Changing social norms and traditions requires grappling with both realities and understanding that women’s access to gender-unbiased employment is a multidimensional quest: at times collective and at times individual, yet always embedded in sociopolitical and economic landscapes. It is essential to confront internalized biases and the extent to which trust, comfort, and likability are projected onto candidates. This can be done through a twofold strategy that approaches legislation through a new lens, prioritizing transparency, broader access to information, and increased monitoring.
- Measure 1: To address the regulatory gaps identified above, the implementation phase of the AI Act should incorporate targeted gender-sensitive compliance mechanisms within employment systems classified as high-risk; the legislative effort should be complemented by cultural education alongside legal application. Practically, EU regulators should require mandatory gender-disaggregated testing for all high-risk systems used in recruitment, evaluation, and workplace monitoring. Such testing would measure error rates, selection outcomes, and predictions across gender groups, thereby ensuring that the requirements of the AI Act result in effective and measurable equality outcomes. Running unconscious-bias seminars for staff could form part of the strategy.
- Measure 2: Access to information and effective control over compliance are two of the most critical elements of the AI Act. For this reason, enhancing transparency in the use of high-risk systems is crucial for mitigating negative impacts and ensuring fairness in decisions with potentially discriminatory outcomes, particularly in recruitment processes and work-related decisions. From a policy point of view, this can be achieved by establishing a duty for employers to disclose in advance their intention to use AI systems and to define the tasks these systems will perform.
- Key actors: The European Union Artificial Intelligence Office (AI Office), established to support the enforcement and implementation of the AI Act, can play a central role in shaping the gender dimension of its application. The AI Office is responsible for drafting guidelines and supportive tools (e.g., protocols and best practices) that should include specific considerations related to gender equality and discrimination. The AI Office also assists the European Commission in drafting decisions and delegated acts, which serve as an opportunity to incorporate a gender lens into the implementation of the Act. Additionally, the AI Office cooperates with other Union bodies, providing sectoral expertise. Member States nonetheless remain central to implementing the Act; in particular, national competent authorities (Art. 70) will play a crucial role in mitigating the adverse impacts of AI systems.
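The gender-disaggregated testing proposed in Measure 1 can be sketched in a few lines. The outcome records below are hypothetical, and the audit simply computes per-gender selection rates plus a disparate-impact ratio (the lowest group rate divided by the highest), used here as an illustrative benchmark rather than an AI Act requirement.

```python
from collections import Counter

# Hypothetical screening outcomes as (gender, selected?) pairs.
# In a real audit these would come from the deployed system's logs.
outcomes = [
    ("F", True), ("F", False), ("F", False), ("F", False),
    ("M", True), ("M", True), ("M", False), ("M", False),
]

def selection_rates(records):
    """Fraction of applicants selected, computed per gender group."""
    totals, selected = Counter(), Counter()
    for gender, picked in records:
        totals[gender] += 1
        if picked:
            selected[gender] += 1
    return {g: selected[g] / totals[g] for g in totals}

rates = selection_rates(outcomes)
# Disparate-impact ratio: lowest group rate over highest group rate.
# Values well below 1.0 signal a gap worth investigating.
ratio = min(rates.values()) / max(rates.values())
print(rates, round(ratio, 2))
```

A regulator or deployer could run the same comparison on error rates or score distributions; the point of the sketch is that disaggregating by gender makes an otherwise invisible gap directly measurable.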
Bibliography
Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research, 81, 1–15. Retrieved from https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf.
Caballar, R. D. (2024, September 3). 10 AI Dangers and Risks and How to Manage Them. IBM. Retrieved from https://www.ibm.com/think/insights/10-ai-dangers-and-risks-and-how-to-manage-them.
Café da Manhã [RFM]. (2026, January 29). Grok and AI deepfakes (M. Valente, Guest) [Audio podcast episode]. RFM.
Cuesta, J. (2025, October 28). Gender Bias in Artificial Intelligence. Journal of Economic Policy Reform, 1–25. Retrieved from https://doi.org/10.1080/17487870.2025.2579039.
Dastin, J. (2018, October 11). Amazon Scraps Secret AI Recruiting Tool That Showed Bias against Women. Reuters. Retrieved from https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/.
De Stefano, V., & Wouters, M. (2022). AI and Digital Tools in Workplace Management and Evaluation: An Assessment of the EU’s Legal Framework (EPRS Study PE 728.516). European Parliamentary Research Service. Retrieved from https://www.europarl.europa.eu/thinktank/en/document/EPRS_STU(2022)729516
European Commission. (2021, April 21). Impact Assessment of the Regulation on Artificial Intelligence (COM SWD (2021) 84 final). Shaping Europe’s Digital Future. Retrieved from https://digital-strategy.ec.europa.eu/en/library/impact-assessment-regulation-artificial-intelligence
Gorska, A. M., & Jemielniak, D. (2023). The Invisible Women: Uncovering Gender Bias in AI-Generated Images of Professionals. Feminist Media Studies, 23(8), 4370–4375. Retrieved from https://doi.org/10.1080/14680777.2023.2263659.
Grozdanovski, L. (2025, May 29). Non-Discrimination Law, the GDPR, the AI Act and the – Now Withdrawn – AI Liability Directive Proposal Offering Gateways to Pre-Trial Knowledge of Algorithmic Discrimination. AI and Ethics, 5, 5039-5062. Retrieved from https://doi.org/10.1007/s43681-025-00754-0.
Kelly, P. (2025, February 19). How Governments Are Using AI: 8 Real-World Case Studies. GovNet. Retrieved from https://blog.govnet.co.uk/technology/ai-in-government-case-studies.
Lütz, F. (2024). The AI Act, Gender Equality and Non-Discrimination: What Role for the AI Office? ERA- Forum, 25(1), 79-95. Retrieved from https://doi.org/10.1007/s12027-024-00785-w.
O’Connor, S., & Liu, H. (2023). Gender Bias Perpetuation and Mitigation in AI Technologies: Challenges and Opportunities. AI & Society, 39(4), 2045–2057. Retrieved from https://doi.org/10.1007/s00146-023-01675-4.
Raghavan, M., Barocas, S., Kleinberg, J., & Levy, K. (2020). Mitigating Bias in Algorithmic Hiring: Evaluating Claims and Practices. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* ’20), 469–481. New York: ACM. Retrieved from https://creatingfutureus.org/wp-content/uploads/2021/10/RaghavanEtAl-2020-MitigatingBiasHiring.pdf.
Thompson, A. (2023, December 1). GPT-3.5 + ChatGPT: An Illustrated Overview. LifeArchitect.ai. Retrieved from https://lifearchitect.ai/chatgpt/.
Veale, M., & Borgesius, F.Z. (2021). Demystifying the Draft EU Artificial Intelligence Act. Computer Law Review International, 22(4), 97–112. Retrieved from https://discovery.ucl.ac.uk/id/eprint/10131593/.
Wellner, G. P. (2020). When AI Is Gender-Biased. Humanamente, 13(36). Retrieved from https://www.humanamente.eu/index.php/HM/article/view/307.
Wiggers, K. (2025, April 3). ChatGPT Users Have Generated over 700M Images since Last Week, OpenAI Says. TechCrunch. Retrieved from https://techcrunch.com/2025/04/03/chatgpt-users-have-generated-over-700m-images-since-last-week-openai-says/.
World Bank. (2024). Women, Business and the Law 2024. Retrieved from http://hdl.handle.net/10986/41040
Wright, S. (2025). Artificial Intelligence and Work: A Review of the European Policy Landscape. Journal of Industrial Relations, 67(5), 794–805. Retrieved from https://doi.org/10.1177/00221856251394780.
