The EU AI Act Is Now in Force
The European Union's risk-based regulation for AI applications, the EU AI Act, entered into force on August 1, 2024. The regulation sets staggered compliance deadlines for different categories of AI developers and use cases, with most provisions applying by mid-2026. The strictest prohibitions, such as law enforcement's use of remote biometric identification in public spaces, take effect within six months, by early February 2025.
Most AI applications are classified as low-risk and fall largely outside the regulation. In contrast, high-risk applications, including those involving biometrics, facial recognition, and AI used in education and employment, must be registered in an EU database and meet stringent risk- and quality-management requirements. Technologies such as chatbots and deepfake generators fall into a "limited risk" tier and must meet transparency obligations so that users are not deceived. Developers of general-purpose AIs (GPAIs) generally face light transparency requirements, though the most capable models must also conduct risk assessments and implement mitigation measures. The details of GPAI compliance are still being finalized, with Codes of Practice expected by April 2025.
OpenAI, the creator of GPT and ChatGPT, has said it will work closely with the EU AI Office as the new framework is implemented, and has advised other AI developers to classify their systems and determine their compliance obligations. AI professionals should understand how the new rules affect their work and seek legal guidance where necessary.
Key Takeaways
- The EU's AI regulation entered into force on August 1, 2024, starting the clock on its compliance deadlines.
- Prohibitions on specific AI uses, such as biometric surveillance, will be enforced within six months.
- The majority of AI applications are deemed low-risk or exempt from regulation.
- High-risk AI systems, including biometric and education tools, must be registered in an EU database.
- Developers of general-purpose AIs face tiered transparency and risk-management requirements, with the most capable models subject to the strictest obligations.
Analysis
The EU's AI regulation will have significant implications for high-risk AI developers, particularly those working in biometrics and education, who must register their systems and adhere to strict compliance standards. OpenAI's stated collaboration with the EU AI Office reflects a proactive approach to compliance and could help shape industry norms. In the short term, AI firms will grapple with operational adjustments; in the long term, the regulatory framework could stabilize the market and bolster trust in AI technologies. AI-linked equities may see volatility as companies adapt to the new regulatory landscape.
Did You Know?
- Risk-Based Regulation for AI Applications: This refers to a regulatory framework in which AI applications are categorized by the level of risk they pose to individuals, society, or the environment. The EU's approach sorts AI into prohibited (unacceptable-risk), high-risk, limited-risk, and low-risk categories, each subject to different compliance requirements and oversight mechanisms (a simplified sketch follows this list).
- General-Purpose AIs (GPAIs): GPAIs are versatile AI systems designed to carry out a diverse array of tasks across various domains, exemplified by OpenAI's GPT and ChatGPT. The EU's regulation imposes varying levels of transparency and risk management requirements on GPAIs, depending on their capabilities and potential impacts.
- Transparency Requirements for AI: These requirements stipulate that AI systems be developed and deployed so that users and stakeholders know when they are interacting with an AI or viewing AI-generated content. For instance, chatbots and tools capable of producing deepfakes must disclose their AI nature to prevent user deception and support informed consent.
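To make the tiered structure concrete, here is a minimal Python sketch of how a developer might triage systems against the risk categories described above. The tier names mirror the Act's categories, but the keyword rules, obligation summaries, and example use cases are purely illustrative assumptions, not the Act's legal criteria; real classification requires reading the Act itself and, as noted above, likely legal counsel.

```python
# Illustrative sketch only: a toy mapping of AI use cases to the EU AI Act's
# risk tiers as summarized in this article. The matching rules and example
# inputs are simplified assumptions, not the legal test.

from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited (unacceptable risk)"
    HIGH = "high risk: EU database registration + risk/quality management"
    LIMITED = "limited risk: transparency obligations (disclose AI use)"
    LOW = "low risk: largely outside the regulation"


# Hypothetical keyword rules, ordered from most to least restrictive.
TIER_RULES = [
    (RiskTier.PROHIBITED, {"remote biometric identification by police"}),
    (RiskTier.HIGH, {"facial recognition", "education scoring", "hiring screening"}),
    (RiskTier.LIMITED, {"chatbot", "deepfake generator"}),
]


def classify(use_case: str) -> RiskTier:
    """Return the first matching tier; anything unmatched defaults to low risk."""
    for tier, keywords in TIER_RULES:
        if any(keyword in use_case for keyword in keywords):
            return tier
    return RiskTier.LOW


if __name__ == "__main__":
    examples = ["customer-support chatbot", "hiring screening tool", "spam filter"]
    for case in examples:
        print(f"{case!r} -> {classify(case).value}")
```

Running the sketch prints a tier and a one-line summary of its obligations for each example, which captures the key idea of risk-based regulation: the compliance burden scales with the assessed risk of the use case, not with the underlying technology.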