Synopsis: The EU’s AI Act aims to ensure safe and ethical AI use by classifying systems according to risk, banning the most harmful applications, and mandating transparency, including for generative AI used in employment decisions. Violators face heavy fines, underscoring the seriousness with which the EU treats AI regulation.
As Artificial Intelligence (AI) continues to shape the modern world, its regulation has become a pressing concern for governments and organizations alike. The European Union (EU) has risen to the challenge by introducing the Artificial Intelligence Act (AI-Act or Act), a comprehensive set of rules designed to regulate AI systems within the EU and ensure they are safe, ethical, and aligned with European values. The AI-Act, set to come into effect in phases beginning on August 1, 2024, aims to set a global benchmark for AI regulation, similar to how the EU’s General Data Protection Regulation (GDPR) has become a standard for data privacy worldwide.1 The Act will have profound implications for employers, human resource (HR) professionals, and organizations using or adopting AI technologies in recruitment, performance management, and workplace monitoring.
The primary objective of the AI-Act is to establish a regulatory framework that ensures AI systems are trustworthy, human-centric, and respectful of fundamental rights. It recognizes that AI has the potential to both improve lives and cause harm, depending on how it is implemented. To manage this risk, the Act adopts a risk-based approach, classifying AI systems according to their potential impact on society. These classifications are (1) minimal risk, (2) specific transparency risk (limited risk), (3) high risk, and (4) unacceptable risk.2
Several resources help determine the appropriate classification. For example, the categories are detailed in the annexes of the Act, particularly Annex III, which lists high-risk applications. The European Commission can also update the list through delegated acts to reflect technological advancements. The provider (the entity placing the AI system on the market or putting it into service) also plays a role in classification by self-assessing the risk level of its AI system. Further, each Member State is required to designate a national supervisory authority to oversee the implementation and enforcement of the AI Act. These authorities can intervene if there is doubt about the correct classification of an AI system. The Act places the strictest regulations on high-risk AI applications, which could have significant implications for human lives, privacy, and societal well-being. For high-risk AI systems, third-party conformity assessments may be conducted by notified bodies.3 These independent organizations evaluate whether the system meets the requirements outlined in the AI Act. In the event of a dispute or ambiguity regarding risk classification, the European Artificial Intelligence Board (“EAIB”) can provide guidance and resolution.
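For readers who think in code, the four-tier scheme described above can be sketched as a simple mapping. This is an illustrative simplification only: the system names and obligation summaries below are hypothetical shorthand, and real classification turns on the Act’s text, Annex III, and provider self-assessment, not on a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk classifications."""
    MINIMAL = 1
    LIMITED = 2        # "specific transparency risk"
    HIGH = 3
    UNACCEPTABLE = 4

# Hypothetical example systems for illustration; actual classification
# requires checking Annex III and the Act's definitions.
EXAMPLE_CLASSIFICATIONS = {
    "spam_filter": RiskTier.MINIMAL,
    "customer_chatbot": RiskTier.LIMITED,
    "cv_screening_tool": RiskTier.HIGH,              # employment use, Annex III
    "social_scoring_system": RiskTier.UNACCEPTABLE,  # Article 5 prohibition
}

def obligations(tier: RiskTier) -> str:
    """Rough summary of what each tier entails under the Act."""
    return {
        RiskTier.MINIMAL: "no new obligations",
        RiskTier.LIMITED: "transparency disclosures to users",
        RiskTier.HIGH: "conformity assessment, documentation, human oversight",
        RiskTier.UNACCEPTABLE: "prohibited from February 2025",
    }[tier]

print(obligations(EXAMPLE_CLASSIFICATIONS["cv_screening_tool"]))
```

The key design point the table captures: obligations scale with risk, and AI used in hiring sits in the high-risk tier, which is why HR tooling attracts the Act’s strictest requirements short of an outright ban.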
Unacceptable AI systems, such as those used for manipulative purposes, exploitative behaviors, or social scoring, will be outright banned starting February 2025. This includes AI technologies that could manipulate individuals’ behavior or score people based on their social interactions, a practice already under scrutiny in some countries.4 Such a move signals the EU’s commitment to ensuring that AI technologies do not undermine democratic values, equality, or fundamental rights. High-risk AI applications, such as facial recognition, biometric surveillance, and AI used in hiring processes, will be subject to strict requirements, ensuring transparency, accountability, and fairness in their deployment.
Interestingly, while generative AI tools like ChatGPT, Microsoft Copilot, and Google Gemini are not classified as high-risk AI, they will still face regulatory scrutiny. These systems will need to comply with transparency requirements, including clear disclosures that content has been generated by AI, and they must adhere to EU copyright laws. The EU is particularly concerned about the potential for generative AI to create misleading or harmful content, and the transparency mandate ensures users are aware of the AI’s involvement in content creation.
The penalties for non-compliance with the AI-Act are significant and underscore the EU’s commitment to enforcing its regulations. Companies that violate Article 5’s prohibitions on certain AI practices could face fines as high as EUR 35 million or 7% of their worldwide annual turnover, whichever is higher, as outlined in Chapter XII, Article 99.5 Lesser violations carry lower ceilings: up to EUR 15 million or 3% of turnover for breaching other obligations of the Act, and up to EUR 7.5 million or 1% for supplying incorrect, incomplete, or misleading information. Enforcement will primarily be the responsibility of individual EU Member States, which will have the authority to investigate AI systems, ensure compliance, and take corrective actions.
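The fine ceilings are worth working through, because each tier is the greater of a fixed amount and a percentage of worldwide annual turnover. A minimal arithmetic sketch (not legal advice; tier labels are our shorthand for the Article 99 categories):

```python
# Fine ceilings under Regulation (EU) 2024/1689, Article 99:
# each tier is (fixed amount in EUR, share of worldwide annual turnover),
# and the applicable ceiling is whichever is HIGHER.
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),    # Art. 99(3)
    "other_obligations": (15_000_000, 0.03),       # Art. 99(4)
    "incorrect_information": (7_500_000, 0.01),    # Art. 99(5)
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a given violation tier."""
    fixed, pct = FINE_TIERS[tier]
    return max(fixed, pct * annual_turnover_eur)

# For a company with EUR 1 billion turnover, 7% (EUR 70M) exceeds the
# EUR 35M fixed ceiling, so the percentage governs.
print(max_fine("prohibited_practices", 1_000_000_000))  # 70000000.0
```

Note the practical consequence: for large multinationals the percentage prong will almost always dominate, which is precisely what gives the Act its deterrent weight for global employers.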
The EU AI-Act will undoubtedly have a profound impact on how organizations operate, particularly in HR and employment practices. AI technologies used in hiring, employee monitoring, and performance management will be scrutinized to ensure they do not discriminate, manipulate, or infringe upon workers’ rights. Employers and HR professionals must adapt to these new regulations by carefully selecting AI tools that comply with the law and ensuring their AI systems are transparent, accountable, and respectful of workers’ fundamental rights.
In conclusion, the EU’s AI-Act represents a critical step toward establishing a global framework for AI regulation. By emphasizing transparency, accountability, and human-centric principles, the Act seeks to mitigate the risks associated with AI while promoting its potential benefits. As AI becomes an increasingly integral part of daily life, the EU’s efforts to regulate its use could set a precedent for other regions, helping shape a future where AI serves humanity ethically and responsibly.
Compliance with new regulations can often be challenging, especially when it comes to something as novel and quickly evolving as AI. U.S. employers with EU operations should look to specialized employment lawyers to help navigate the complexities of the new AI rules. These specialists can provide invaluable support by offering tailored legal advice, ensuring policies and practices align with regulatory requirements, conducting compliance audits, and mitigating risks through proactive strategies.
Endnotes
1 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024, available at: https://eur-lex.europa.eu/eli/reg/2024/1689/oj
2 Id.
3 Notified Bodies are independent organizations designated by Member States to perform conformity assessments for high-risk AI systems.
4 Social scoring is a practice commonly associated with China, which has implemented a Social Credit System (SCS). See (U.S.-China Economic and Security Review Commission, 2020). While not identical to China’s system, other countries, including the United States, Singapore, and Russia, engage in activities that resemble social scoring in more limited or indirect forms.
5 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024, Chapter XII Penalties: available at: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689