EU AI Act: What is it and how will it affect recruitment?

Sarah Molaiepour | Future of HR | AI

In early 2024, the European Parliament and the Council of the EU agreed on the final version of the EU AI Act, which has since been published in the EU Official Journal. It is the world’s first comprehensive legal framework on the development and use of AI and aims to safeguard the rights and safety of European citizens.

The legislation has far-reaching implications, but the HR and recruitment industries in particular should pay close attention to the developments and rollout of the AI Act.


What is the EU AI Act?

While AI and generative AI have been in the spotlight for some years, including in HR, legislation has lagged behind. The EU AI Act aims to address this gap. 

In EU-law terms, it is a regulation, which means it becomes directly applicable law in all EU member states, just like the General Data Protection Regulation (GDPR). The AI Act takes a risk-based approach and categorizes AI systems into four risk groups based on their use cases.

Depending on the risk level the AI system falls into, different requirements will apply:

  • Minimal/no risk: These are AI applications that pose virtually no risk to human welfare and safety, and include use cases that are part of modern life, such as video games or email spam filters. AI systems that fall under this category can be freely used under the AI Act.

  • Limited risk: First and foremost, AI systems at this level will face transparency requirements. Individuals will need to be informed when they are interacting with AI rather than a human, such as with chatbots or deepfakes. The AI Act will require that people have the option to interact with a human instead.

  • High risk: These include applications that could negatively impact human rights and safety. Some high-risk use cases include AI in critical infrastructure such as roads, heating, and electricity, as well as AI used within recruitment to, for example, filter job applications or evaluate candidates.

  • Unacceptable risk/Prohibited AI: Applications at this level will be strictly prohibited, as they are deemed a threat to the safety or rights of individuals. These include systems that use intentionally manipulative or deceptive techniques to distort a person’s behavior in a way that could cause them to harm themselves or others.

The AI Act also includes rules on the use of “general-purpose AI models” to ensure compliance with copyright law, transparency around the content used to train these models, and technical documentation concerning their use.

General-purpose AI models are systems trained on extensive datasets, capable of performing a wide range of tasks, and able to be integrated into a variety of applications or systems. OpenAI’s GPT (Generative Pre-trained Transformer) models, which power ChatGPT, are an example, as they generate content and human-like text.



"I applaud the EU for coming to a political agreement on a very important piece of legislation."

- Corinne Hedlund Nytén, Head of Legal at Jobylon


Who and what will be affected by the legislation? 

The AI Act will include a definition of what an “AI system” is, so the first question will always be: does the system actually qualify as AI?

If the system qualifies as AI, the next question for an organization is what role it has in relation to that AI system. The AI Act introduces different roles with different obligations, somewhat similar to the “processor” and “controller” roles under Europe’s GDPR.

While all roles are classified as “operators”, there are five distinct categories of operators: 1) Providers, 2) Deployers, 3) Importers, 4) Distributors, and 5) Product manufacturers. For example, providers are developers of AI systems, as well as those that have an AI system developed with the intention of placing it on the market. Deployers use AI systems in their operations, while importers and distributors manage AI systems brought into the EU.

The AI Act will also have a broad geographical scope: it will apply to companies based outside the EU if member-state law applies by virtue of public international law, or if the output produced by the AI system is intended to be used in the EU. This means that regardless of where the technology is developed, if it is used or marketed within the EU, the AI Act will apply. As a result, many companies outside the EU will also need to stay informed about its developments.

How will the AI Act affect HR and recruitment? 

AI systems that affect employment decisions may fall into the “high-risk” category, since, according to the AI Act, they can “appreciably impact future career prospects and livelihoods” of people. For this reason, HR will need to make sure that their organizations comply with the requirements of the AI Act when developing or deploying AI.

The Act specifically mentions AI systems used for matching candidates, as well as AI for biometric identification and emotion recognition in the workplace. Of course, some aspects of recruiting may fall outside “high risk”, such as interacting with a chatbot during the initial selection stages, which would be considered a “limited risk” application; even then, the transparency requirements will still need to be fulfilled.

High-risk activities are not banned, but they must meet certain criteria to ensure compliance. An annex to the AI Act lists examples of high-risk systems; as it relates to employment, these include the placement of targeted job advertisements, the analysis and filtering of job applications, and the evaluation of candidates.

HR and TA teams will need to:

  • Implement technical and organizational measures required to ensure the system is used as intended and aligns with the information supplied by the Provider
  • Ensure human oversight
  • Fulfill transparency requirements
  • Put measures in place to monitor the AI system’s operations
  • Manage risks and report incidents
  • Be mindful of automated decision-making and the requirements surrounding that

Besides the EU AI Act, TA and HR teams need to consider:

  • Privacy: Ensure compliance with both the EU AI Act and GDPR. Understand where data goes and consider third-party and third-country data transfers. Always ensure the legal basis for data collection and follow the GDPR’s requirements surrounding automated decision making.

  • Intellectual property: Be aware of copyright issues related to AI-generated content. Determine ownership of the AI output.

  • Discrimination: Be vigilant about potential bias in AI systems and work to minimize it.

Preparing for the EU AI Act

While the AI Act doesn’t fully come into force until 2026, companies are advised to start their compliance work early by reviewing their AI practices and assessing their risk levels and roles under the legislation.

What TA and HR teams can do now:

  • Map out your use of AI: See where you already might be using AI and how you want to use the technology.
  • Raise awareness: Educate your team about the regulation and its impact on recruitment.
  • Assess your role: Assessing your company’s role for each use case will be pivotal for understanding your obligations.
  • Assess the risk-level: For each use of an AI-system, assessing the risk level in relation to the Act will also be pivotal for understanding your obligations.
  • Cater for privacy and bias: Be aware of privacy and discrimination risks when using AI for automated decisions.
  • Plan for human oversight: Read up on the requirements for human oversight and how you will govern it. 
  • Plan for transparency: Discuss how you will fulfill the transparency requirements surrounding your AI usage.

Top tip!

  • If you have an internal legal function, reach out to them for guidance and support in navigating the new regulations.

Conclusion

While the AI Act will likely have significant impacts on how companies develop and incorporate AI technology, starting now will ensure that there is enough time to adapt systems to the new regulations. It’s better to start early than to try to bend already-implemented systems and processes to fit the requirements of the law.

And remember, the purpose of the legislation is to make the use of AI transparent and safe, not to forbid it.

HR and TA professionals will need to stay ahead of developments and understand which parts of their HR and recruitment processes will be impacted. Compliance, transparency, and adaptability will be key in navigating this evolving regulatory landscape.

Sarah Molaiepour

Sarah is a content designer at Jobylon, crafting content to help HR professionals hire faster and better. She has previously worked in communications and branding at various tech companies and non-profit organisations. Originally from California, she now calls Stockholm home and enjoys making tiny animations, baking, and picking up random hobbies.
