At the end of last year, the European Parliament and European Council agreed on the final version of the EU AI Act. It is the world’s first comprehensive legal framework on the development and use of AI and aims to safeguard the rights and safety of European citizens.
The legislation has far-reaching implications, but HR is one field that should pay especially close attention to the rollout of the AI Act.
What is the EU AI Act?
While AI and generative AI have been in the spotlight for some years, including in HR, legislation has lagged behind. The EU AI Act aims to address this gap.
The AI Act takes a risk-based approach, and categorises AI systems into four risk groups based on their use cases. Depending on the risk level the AI system falls into, different requirements will apply:
- Minimal/no risk: these are AI applications that pose virtually no risk to human welfare and safety, and include uses that are well-embedded in modern life, such as video games or email spam filters. The AI Act allows for the free use of AI systems that fall under this category.
- Limited risk: AI systems at this level will face, first and foremost, transparency requirements, to make sure that individuals are aware that they are interacting with AI rather than a human, such as with chatbots or deepfakes. The AI Act will require that individuals have the option to interact with a human instead.
- High risk: These include applications that could negatively affect human rights and safety. Some high-risk use cases include AI in critical infrastructure like roads, heating and electricity, and also when used to make employment decisions.
- Unacceptable risk/Prohibited AI: At this level the applications will be strictly prohibited, as they are deemed to pose a clear threat to the safety or rights of individuals. Here we are talking about activities such as social scoring, predictive policing, untargeted scraping of the internet for facial images to build up databases, and systems used for emotion recognition at work.
The AI Act also lays down rules regarding the use of “foundation models” to ensure compliance with copyright laws, transparency around the content used to train these models, and the technical documentation concerning their use. Foundation models are systems trained on extensive datasets that serve as a starting point for developing machine learning models for a specific purpose, rather than creating new AI from scratch. OpenAI’s ChatGPT is one example of an application built on a foundation model: it uses the GPT (Generative Pre-trained Transformer) architecture, trained to generate human-like text and content.
"I look forward to seeing the finalised text and I applaud the EU for coming to a political agreement on a very important piece of legislation."
- Corinne Hedlund Nytén, Head of Legal at Jobylon
Who and what will be affected by the legislation?
The AI Act will include a definition of what an “AI system” is, so the first question will always be: does the system actually qualify as AI?
The legislation will apply to those who develop an AI system, or have one developed, with the aim of placing it on the market (providers), as well as to those deploying an AI system within the EU (deployers).
The AI Act will also have a broad geographical scope: it will apply even to companies based outside the EU, where member-state law applies by virtue of public international law, or where the output produced by the system is intended to be used in the EU. This means that no matter where the technology is developed, if it is used or marketed within the EU, it will need to comply with the AI Act. For this reason, many companies will need to keep an eye on the developments.
While the AI Act is expected to be fully effective in the EU in 2026, companies are advised to act early by reviewing their AI practices and assessing their risk level in line with the legislation.
How will the AI Act affect HR?
AI systems that affect employment decisions may fall into the “high-risk” category, since, according to the AI Act, such systems may “appreciably impact future career prospects and livelihoods” of people. For this reason, HR will need to make sure that their organizations comply with the requirements of the AI Act when considering developing or deploying AI.
Though the official finalised text of the law hasn’t yet been published, from what we have seen so far in the negotiations, the Act specifically mentions AI systems used for matching candidates, AI for biometric identification, and emotion recognition in the workplace. There may, of course, be aspects of recruiting that fall outside of “high-risk”, such as interacting with a chatbot during initial selection stages (which would be considered a “limited risk” application), but even then it will still be necessary to fulfill the transparency requirements.
High-risk activities are not banned, but they must meet certain criteria to ensure compliance. A list of examples of high-risk systems will appear in an annex to the AI Act; from what we know so far as it relates to employment, high-risk use cases include the placement of targeted job advertisements, the analysis and filtering of job applications, and the evaluation of candidates.
While the AI Act will likely have drastic impacts on how companies develop and incorporate AI technology, starting now will ensure that there is enough time to adapt systems to the new regulations. The purpose of the legislation is to keep AI development transparent and safe.
HR professionals will need to stay ahead of the developments and understand which parts of the recruitment process will be impacted. Compliance, transparency, and adaptability will be key in navigating this evolving regulatory landscape.
NOTE: The official text of the Act has not yet been released. As of writing, it is still a proposal and needs to be formally adopted by the European Parliament and Council to become law, after which the Act will be published.