If your organization uses AI for screening CVs, shortlisting candidates, managing performance reviews, or delivering training, you're already in scope for the EU AI Act.
That catches most HR teams off guard. A 2026 SHRM survey found that 39% of organizations have adopted AI in their HR functions - with recruiting as the most common use case at 27%. But the EU AI Act classifies AI used in employment decisions as high-risk, which triggers some of the strictest compliance obligations in the regulation.[4]
And this isn't a future concern. AI literacy requirements and bans on prohibited AI practices have been in effect since February 2, 2025. The bulk of the remaining rules - including high-risk AI system obligations and transparency requirements - take effect on August 2, 2026.
Yet a 2026 readiness report from Vision Compliance found that 78% of organizations haven't taken meaningful steps toward compliance.[5] For HR and L&D leaders, the gap between AI adoption and regulatory readiness is growing fast.
This guide breaks down what the EU AI Act requires, how it affects HR and L&D teams specifically, and the practical steps you can take to prepare your organization before enforcement begins.
Key takeaways
- The EU AI Act is the world's first comprehensive AI law. It regulates how AI systems are developed, sold, and used across the EU - and it applies to non-EU businesses too.
- AI literacy obligations and bans on prohibited AI practices are already in effect as of February 2025. The majority of the Act's remaining provisions take effect on August 2, 2026.
- AI used in hiring, performance management, and workforce decisions is classified as high-risk under the Act - meaning most HR teams using AI recruitment tools already fall under its scope.
- The Act creates specific obligations for four stakeholder roles: providers, deployers, importers, and distributors. Most organizations using AI tools in HR are "deployers" with direct compliance responsibilities.
- Fines for non-compliance reach up to €35 million or 7% of global annual turnover - whichever is higher.
- UK businesses aren't exempt. The Act applies to any organization whose AI systems or outputs affect people in the EU, regardless of where the company is based.
- 78% of organizations have not taken meaningful steps toward AI Act compliance. The window to prepare is closing.
What is the EU AI Act?
The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. It entered into force on August 1, 2024, and creates binding rules for how AI systems are developed, placed on the market, and used across the European Union.[1]
The Act defines an AI system as a machine-based system designed to operate with varying levels of autonomy, which infers from its inputs how to generate outputs such as predictions, recommendations, decisions, or content. That's a deliberately broad definition - it covers everything from a ChatGPT-powered chatbot to an AI recruitment screening tool to an automated compliance monitoring system.
Why was it created?
The regulation responds to specific risks that increasingly powerful AI systems create:
- Bias and discrimination - AI can perpetuate or amplify biases in hiring, lending, and access to services
- Lack of transparency - Many AI systems operate as "black boxes" where decision-making processes aren't visible
- Data privacy - AI frequently processes personal data that may be misused or shared without proper consent
- Safety and accountability - When AI systems make errors that cause harm, it's often unclear who's responsible
- Manipulation - AI can generate deepfakes, misleading content, and subliminal manipulation
The Act aims to balance these risks against the EU's interest in supporting AI innovation - particularly for SMEs and startups. It does this through a risk-based framework that applies proportionate regulation: higher risk means stricter requirements.
The implementation timeline
The Act doesn't land all at once. It's phased in over three years:
| Date | What takes effect |
|---|---|
| February 2, 2025 | Prohibited AI practices banned. AI literacy obligations begin (Article 4). |
| August 2, 2025 | Rules for general-purpose AI models apply. Governance structures and penalty frameworks must be in place. |
| August 2, 2026 | Majority of the Act becomes fully applicable. High-risk AI system obligations for Annex III systems take effect. Transparency rules (Article 50) apply. Each Member State must have at least one AI regulatory sandbox operational. |
| August 2, 2027 | Rules for high-risk AI systems embedded in regulated products (Article 6(1)) apply. Full scope enforcement for all remaining provisions. |
One important wrinkle: the European Commission's Digital Omnibus package (November 2025) proposes tying some high-risk compliance deadlines to the availability of harmonized standards and support tools. This could push certain obligations slightly later - but the core framework, prohibited practices, and AI literacy requirements remain unchanged.
The risk-based framework explained
The Act categorizes AI systems into four risk tiers. The level of regulation increases with the level of risk.
Unacceptable risk (prohibited)
Certain AI applications are banned outright because they pose unacceptable threats to people's rights and safety. These bans have been in effect since February 2, 2025. Prohibited systems include:
- Social scoring - AI that evaluates people based on social behavior leading to unfavorable treatment
- Emotion recognition in workplaces and schools - AI that infers emotions from biometric data, such as facial expressions or voice, in employment or educational settings
- Biometric categorization by sensitive characteristics - AI that classifies people by race, political views, sexual orientation, or other protected attributes
- Manipulative AI - Systems using subliminal techniques to distort behavior in harmful ways
- Real-time remote biometric identification - Live facial recognition in publicly accessible spaces for law enforcement purposes (with very limited exceptions)
If any of your tools use emotion recognition during video interviews or biometric categorization during screening, they may already be prohibited under the Act.
High risk
High-risk AI systems are permitted but subject to strict compliance obligations. These are systems that can significantly impact people's lives, safety, or fundamental rights. Areas classified as high-risk include:
- Employment and workforce management - AI used in hiring, promotions, terminations, task allocation, and performance monitoring
- Education and training - AI determining academic opportunities or assessing learners
- Critical infrastructure - AI managing essential services like energy or water
- Access to essential services - AI allocating public benefits, credit scoring, or insurance
- Law enforcement and migration - AI used in policing, judicial decisions, or border control
AI used in recruitment screening, CV ranking, interview analysis, workforce scheduling, performance evaluation, or promotion decisions falls squarely into the high-risk category. If your organization uses any AI-powered HR tech in these areas, the Act's most stringent obligations apply to you.
Limited risk
Limited-risk systems must meet transparency requirements but aren't subject to the full compliance framework that high-risk systems face. Examples include:
- Chatbots - Users must be informed they're interacting with AI rather than a human
- Generative AI - Content created by AI must be clearly labeled as AI-generated
- Emotion recognition (outside workplace/education) - Permitted with proper consent and disclosure
Minimal risk
Most AI systems fall into this category and face little to no additional regulation. Basic spam filters, AI-powered recommendation engines, and inventory management tools are typical examples. General-purpose AI models (like GPT or Claude) have separate transparency requirements that increase if they pose systemic risk, but otherwise face minimal regulation for standard use.
Who's responsible? Stakeholder roles under the Act
The Act defines four stakeholder roles, each with distinct obligations. Understanding which role your organization fills is essential for knowing what's required of you.
Providers
Organizations that develop and place AI systems on the market under their own brand. Providers carry the heaviest obligations: risk assessments, technical documentation, quality management, data governance, human oversight capabilities, and post-market monitoring. For high-risk systems, providers must register the system in the EU database and obtain CE marking through conformity assessments.
Deployers
Organizations that use AI systems in their professional activities. This is where most HR teams sit. If you've purchased an AI recruitment tool, an AI-powered performance management system, or any other AI tool for HR workflows, you're likely a deployer.
Deployer responsibilities include:
- Using AI systems according to provider instructions
- Implementing human oversight measures
- Monitoring system performance during operation
- Conducting data protection impact assessments when processing personal data
- Maintaining logs of system activity
- Reporting serious incidents to providers and authorities
- Ensuring transparency to end-users
For high-risk AI systems, deployers must also implement risk management systems, ensure input data is relevant and representative, maintain detailed records of system use, and enable human review of AI-generated outputs.
Importers and distributors
EU-based entities that bring non-EU AI systems into the market (importers) or make AI systems available without being providers or importers (distributors). Both must verify compliance documentation and CE marking, and must cooperate with authorities.
Many organizations hold multiple roles simultaneously. A company developing its own internal AI tools is both a provider and a deployer. A business importing AI systems from outside the EU for internal use is both an importer and a deployer. Understanding which roles apply to your organization determines your specific compliance obligations.
One obligation cuts across every role: Article 4 on AI literacy, in effect since February 2, 2025. Everyone involved in operating, using, or making decisions based on AI systems needs to understand how those systems work, their limitations, and when to exercise human judgment. This isn't a suggestion - it's a legal obligation that applies right now.
Compliance requirements for high-risk AI
If your organization uses AI in employment-related decisions - hiring, performance reviews, promotions, terminations, or task allocation - you're deploying high-risk AI. Here's what the Act requires.
Risk management
A continuous risk management system must operate across the AI system's entire lifecycle. This includes identifying and analyzing foreseeable risks, implementing mitigation measures, testing their effectiveness, and documenting all risk management activities.
Data governance
Training data and input data must be relevant, representative, and as free from bias as possible. For HR teams using AI recruitment tools, this means understanding what data the AI was trained on and whether it could produce discriminatory outcomes against protected groups.
Human oversight
High-risk AI systems must allow humans to interpret outputs, override or reverse decisions when necessary, and interrupt or shut down the system. In an HR context, this means no AI tool should make a final hiring or firing decision without meaningful human review.
Transparency and documentation
Organizations must maintain comprehensive technical documentation, keep logs of system activity for traceability, and report serious incidents to authorities. For deployers, this means your AI vendors must supply the documentation you'll rely on to demonstrate compliance - and you need to maintain records of how you use their systems.
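The Act doesn't prescribe a log format, so what these records look like is largely between you and your vendor. As a starting point, here's a minimal sketch in Python, assuming a JSON-lines audit file - the field names are illustrative, not a mandated format:

```python
import json
import datetime

def log_ai_decision(path, system, input_ref, output, human_reviewer, final_decision):
    """Append one AI-assisted decision to a JSON-lines audit log.
    All field names here are illustrative assumptions, not a prescribed format."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system": system,                  # which AI system produced the output
        "input_ref": input_ref,            # pointer to the input, not raw personal data
        "ai_output": output,               # what the system recommended
        "human_reviewer": human_reviewer,  # who reviewed the recommendation
        "final_decision": final_decision,  # what actually happened
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: recording one screening decision (hypothetical system and IDs)
log_ai_decision(
    "ai_decisions.jsonl",
    system="cv-screening-tool",
    input_ref="application:4521",
    output={"shortlisted": True, "score": 0.87},
    human_reviewer="j.smith",
    final_decision="shortlisted",
)
```

Keeping a reference to the input rather than the input itself helps keep the audit trail useful without duplicating personal data into yet another store.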
Bias detection and mitigation
Systems must include measures to detect potential biases in data and algorithms, test across different demographic groups, implement corrections when biases are found, and monitor for emerging biases during operation.
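The Act doesn't mandate a specific statistical test either. One common starting point is comparing selection rates across demographic groups using the "four-fifths" heuristic borrowed from US employment guidance. The sketch below is illustrative only: the group labels, threshold, and data shape are assumptions, and passing this check is not in itself evidence of AI Act compliance.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Compute the selection rate per demographic group.
    outcomes: iterable of (group, selected) tuples, e.g. ("group_a", True)."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def flag_adverse_impact(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the 'four-fifths' heuristic)."""
    best = max(rates.values())
    return {g: r for g, r in rates.items() if best > 0 and r < threshold * best}

# Dummy screening outcomes from a hypothetical AI shortlisting tool
outcomes = ([("group_a", True)] * 40 + [("group_a", False)] * 60
            + [("group_b", True)] * 20 + [("group_b", False)] * 80)

rates = selection_rates(outcomes)
print(rates)                       # {'group_a': 0.4, 'group_b': 0.2}
print(flag_adverse_impact(rates))  # {'group_b': 0.2} -> investigate further
```

A flagged group is a prompt for investigation, not a verdict - real bias analysis needs legal and statistical input beyond a single ratio.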
Conformity assessment
Before a high-risk AI system can be placed on the market, it must undergo a conformity assessment. Most providers can perform this internally, but some systems require third-party evaluation. Successful assessment results in CE marking - the stamp that indicates EU compliance.
Penalties and enforcement
The EU AI Act creates a two-tier governance structure: the AI Office at EU level and national competent authorities in each Member State.
| Violation type | Maximum fine |
|---|---|
| Violating the ban on prohibited AI practices | €35 million or 7% of global annual turnover, whichever is higher |
| Non-compliance with high-risk AI obligations | €15 million or 3% of global annual turnover, whichever is higher |
| Supplying incorrect or misleading information to authorities | €7.5 million or 1% of global annual turnover, whichever is higher |
SMEs may face reduced penalties: for each infringement, the fine is capped at whichever of the two amounts - the fixed sum or the percentage of turnover - is lower.
Beyond fines
Enforcement authorities can also order non-compliant AI systems to be withdrawn from the market, require corrective actions within specific timeframes, and restrict or prohibit the use of non-compliant systems. Organizations facing enforcement actions have the right to judicial remedy.
Voluntary compliance tools
The Act also encourages proactive compliance through voluntary codes of practice, harmonized standards (which create a presumption of conformity), and regulatory sandboxes - controlled environments where businesses can test AI under regulatory supervision. Each EU Member State must have at least one sandbox operational by August 2026.
What UK businesses need to know
Brexit doesn't exempt UK businesses from the EU AI Act. The Act has significant extraterritorial reach.
When the Act applies to UK companies
UK businesses are subject to the EU AI Act when they:
- Place AI systems on the EU market or put them into service in the EU
- Use AI outputs that affect people located in the EU
- Provide services to EU customers using AI systems
- Operate AI systems that affect EU residents
If your organization uses an AI recruitment tool to screen candidates located in the EU - even if your company is based in Manchester - you may be in scope.
The UK's own approach
The UK has taken a different path. Rather than a single comprehensive AI law, the UK relies on a principles-based framework set out in a 2023 White Paper, built around five principles: safety, security and robustness; transparency and explainability; fairness; accountability; and contestability and redress.[7]
These principles are currently non-statutory - they're guidance for existing regulators like the ICO, FCA, and Ofcom to interpret within their own sectors. A dedicated UK AI bill was expected in 2025 but didn't materialize. Current signals suggest a comprehensive bill may be introduced in the second half of 2026 at the earliest, but timing remains uncertain.[8]
Practical implications
For UK organizations operating in both markets, this creates a dual compliance challenge. The pragmatic approach: comply with the stricter EU requirements as your baseline, since meeting the EU AI Act's standards will generally satisfy the UK's softer, principles-based framework. This avoids running parallel compliance programs and positions your organization well for whatever UK legislation eventually emerges.
Key overlaps with existing UK law that organizations should already be managing:
- UK GDPR and Data Protection Act 2018 - Data minimization, impact assessments, and transparency requirements that overlap with AI Act obligations
- Equality Act 2010 - Protections against discrimination that apply whether a human or an AI makes the decision
- Employment Rights Act 2025 - New protections that may intersect with AI-driven workforce management
How to prepare your organization
78% of organizations haven't started meaningful preparation for the EU AI Act. Here's a practical seven-step plan to close the gap.
Your seven-step preparation plan
Step 1: Conduct an AI inventory
Map every AI system your organization uses. This includes the obvious tools (AI recruitment platforms, chatbots) and the less obvious ones (AI features embedded in your HRIS, AI-powered scheduling, or automated performance analytics). For each system, document what it does, what data it processes, and who it affects.
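There's no mandated inventory format, but capturing a consistent set of fields per system makes the classification step that follows much easier. A minimal sketch in Python, with illustrative field names rather than a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in an AI inventory (illustrative fields, not a mandated schema)."""
    name: str                  # e.g. "CV screening module in our ATS"
    vendor: str                # who provides the system, or "internal"
    purpose: str               # what the system actually does
    data_processed: list[str]  # categories of data it consumes
    affected_people: str       # whose lives the outputs touch
    used_in_employment_decisions: bool  # strong signal of high-risk status
    risk_tier: str = "unclassified"     # filled in during step 2

inventory = [
    AISystemRecord(
        name="CV screening module",
        vendor="ExampleVendor",  # hypothetical vendor name
        purpose="Ranks incoming applications against job requirements",
        data_processed=["CVs", "application form answers"],
        affected_people="Job applicants, including EU-based candidates",
        used_in_employment_decisions=True,
    ),
]
```

A spreadsheet with the same columns works just as well - the point is that every system gets the same questions asked of it.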
Step 2: Classify your AI systems by risk
Using the Act's framework, categorize each system as prohibited, high-risk, limited-risk, or minimal-risk. Pay particular attention to anything used in employment decisions - it's almost certainly high-risk.
Step 3: Identify your stakeholder role for each system
Are you a provider, deployer, importer, or distributor? For most HR teams purchasing third-party AI tools, you're a deployer. That still carries real obligations.
Step 4: Review vendor contracts
Check whether your AI vendors can provide the technical documentation, transparency information, and human oversight capabilities you need for compliance. If they can't, start those conversations now. Update contracts to clearly define compliance responsibilities.
Step 5: Implement human oversight processes
Ensure no high-risk AI system operates without meaningful human review. For AI recruitment tools, this means a human must review and approve (or override) AI-generated shortlists, scores, and recommendations before they affect candidates.
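One way to make that concrete is a hard gate in the workflow: AI output goes in, but nothing affects a candidate until a human records a decision. The sketch below illustrates the shape of such a gate - the function names and the structure of the AI output are assumptions for illustration, not any specific vendor's API.

```python
def review_shortlist(ai_shortlist, reviewer):
    """Require an explicit human decision before an AI-generated shortlist
    affects candidates. `ai_shortlist` is assumed to be a list of
    (candidate_id, score) pairs from a screening tool."""
    approved, overridden = [], []
    for candidate_id, score in ai_shortlist:
        # The reviewer sees the AI's output but makes the final call;
        # record both so the decision trail is auditable.
        decision = reviewer(candidate_id, score)  # "approve" or "reject"
        (approved if decision == "approve" else overridden).append(
            {"candidate": candidate_id, "ai_score": score, "decision": decision}
        )
    return approved, overridden

# Stand-in reviewer that sends low-confidence scores back for manual
# re-screening. In production the reviewer is a person, not a rule -
# otherwise the "oversight" is just another algorithm.
shortlist = [("c-101", 0.91), ("c-102", 0.55)]
approved, overridden = review_shortlist(
    shortlist, reviewer=lambda cid, s: "approve" if s > 0.8 else "reject"
)
print(approved)    # [{'candidate': 'c-101', ...}]
print(overridden)  # [{'candidate': 'c-102', ...}]
```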
Step 6: Train your workforce on AI literacy
This isn't optional - Article 4 requirements are already in force. Anyone who operates, uses, or acts on the output of an AI system must understand what it can and can't do, and when to substitute human judgment. Platforms like 5Mins deliver EU AI Act training in bite-sized lessons that fit around the working day - with 95%+ completion rates versus the sub-5% typical of traditional compliance training.
Step 7: Establish ongoing monitoring
Compliance isn't a one-time project. Implement systems to continuously monitor AI system performance, track regulatory developments, and update your compliance measures as guidance evolves. The EU AI Office and national authorities will continue publishing implementation guidance through 2026 and beyond.
Sources
1. EU AI Act (Regulation (EU) 2024/1689), Official Journal of the European Union, July 12, 2024
2. EU AI Act Implementation Timeline, artificialintelligenceact.eu
3. EU AI Act Service Desk Timeline, ai-act-service-desk.ec.europa.eu
4. The State of AI in HR 2026 Report, SHRM, surveyed December 2025
5. 2026 EU AI Act Readiness Report, Vision Compliance, April 2026
6. AI Act regulatory framework page, European Commission, digital-strategy.ec.europa.eu
7. UK Regulatory Outlook January 2026: Artificial Intelligence, Osborne Clarke, osborneclarke.com
8. Global AI Governance Law and Policy: United Kingdom, IAPP, iapp.org
9. The EU AI Act implementation timeline: understanding the next deadline for compliance, Kennedys Law, March 2026
10. EU and Luxembourg Update on the European Harmonised Rules on Artificial Intelligence, K&L Gates, January 2026
11. EU AI Act: Regulatory Readiness & Risk Management, Deloitte, deloitte.com
12. AI recruitment adoption data, PwC, cited via blog.taleva.io, 2025
13. The roadmap to the EU AI Act, Alexander Thamm, alexanderthamm.com
This article provides general guidance on the EU AI Act and should not be considered legal advice. The EU AI Act is subject to ongoing implementation, with guidelines and standards continuing to evolve. Always consult your organization's legal or compliance department for advice specific to your situation.
All content is researched and written by the 5Mins team.


