The Artificial Intelligence Act (AI Act)[1] is a European Union regulation concerning artificial intelligence (AI). It establishes a common regulatory and legal framework for AI within the European Union (EU).[2] It came into force on 1 August 2024,[3] with its provisions becoming applicable in stages over the following 6 to 36 months.[4]
| European Union regulation | |
|---|---|
| Text with EEA relevance | |
| Title | Artificial Intelligence Act[1] |
| Made by | European Parliament and Council |
| Journal reference | OJ L, 2024/1689, 12.7.2024 |
| History | |
| European Parliament vote | 13 March 2024 |
| Council vote | 21 May 2024 |
| Entry into force | 1 August 2024 |
| Preparative texts | |
| Commission proposal | 2021/206 |
| Other legislation | |
| Amends | Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 |
| Status | Current legislation |
It covers all types of AI across a broad range of sectors, with exceptions for AI systems used solely for military, national security, research and non-professional purposes.[5] As a piece of product regulation, it does not confer rights on individuals, but regulates the providers of AI systems and entities using AI in a professional context.[6]
The Act classifies non-exempt AI applications by their risk of causing harm. There are four levels – unacceptable, high, limited, minimal – plus an additional category for general-purpose AI.[7]
- Applications with unacceptable risks are banned.
- High-risk applications must comply with security, transparency and quality obligations, and undergo conformity assessments.
- Limited-risk applications only have transparency obligations.
- Minimal-risk applications are not regulated.
For general-purpose AI, transparency requirements are imposed, with reduced requirements for open source models, and additional evaluations for high-capability models.[8][9]
The Act also creates a European Artificial Intelligence Board to promote national cooperation and ensure compliance with the regulation.[10] Like the EU's General Data Protection Regulation, the Act can apply extraterritorially to providers from outside the EU if they have users within the EU.[6]
Proposed by the European Commission on 21 April 2021,[11] it passed the European Parliament on 13 March 2024,[12] and was unanimously approved by the EU Council on 21 May 2024.[13] The draft Act was revised to address the rise in popularity of generative artificial intelligence systems, such as ChatGPT, whose general-purpose capabilities did not fit the main framework.[14]
Provisions
Risk categories
There are different risk categories depending on the type of application, with a specific category dedicated to general-purpose generative AI; a purely illustrative sketch of this tiering follows the list:
- Unacceptable risk – AI applications in this category are banned, except for specific exemptions.[15] When no exemption applies, this includes AI applications that manipulate human behaviour, those that use real-time remote biometric identification (such as facial recognition) in public spaces, and those used for social scoring (ranking individuals based on their personal characteristics, socio-economic status, or behaviour).[9]
- High-risk – AI applications that are expected to pose significant threats to health, safety, or the fundamental rights of persons. Notably, AI systems used in health, education, recruitment, critical infrastructure management, law enforcement or justice. They are subject to quality, transparency, human oversight and safety obligations, and in some cases require a "Fundamental Rights Impact Assessment" before deployment.[16] They must be evaluated both before they are placed on the market and throughout their life cycle. The list of high-risk applications can be expanded over time, without the need to modify the AI Act itself.[6]
- General-purpose AI – Added in 2023, this category includes in particular foundation models like ChatGPT. These models are subject to transparency requirements; if the weights and model architecture are released under a free and open-source licence, only a training data summary and a copyright compliance policy are required. High-impact general-purpose AI models, including free and open-source ones, that could pose systemic risks (notably those trained using a computational capability exceeding 10²⁵ FLOPS)[17] must also undergo a thorough evaluation process.[9]
- Limited risk – AI systems in this category have transparency obligations, ensuring users are informed that they are interacting with an AI system and allowing them to make informed choices. This category includes, for example, AI applications that make it possible to generate or manipulate images, sound, or videos (like deepfakes).[9]
- Minimal risk – This category includes, for example, AI systems used for video games or spam filters. Most AI applications are expected to fall into this category.[18] These systems are not regulated, and Member States cannot impose additional regulations due to maximum harmonisation rules. Existing national laws regarding the design or use of such systems are overridden. However, a voluntary code of conduct is suggested.[19]
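As a purely illustrative, non-normative sketch of this tiering, the following Python snippet maps a simplified description of an AI system to one of the Act's risk tiers. The attribute names, the decision order, and the way the 10²⁵ FLOPS threshold is checked are simplifications introduced here for illustration, not definitions taken from the regulation.

```python
# Illustrative only: a simplified mapping of system traits to the Act's risk tiers.
from dataclasses import dataclass

SYSTEMIC_RISK_FLOPS = 1e25  # training-compute threshold cited in the Act


@dataclass
class AISystem:
    manipulates_behaviour: bool = False
    realtime_remote_biometric_id: bool = False
    social_scoring: bool = False
    high_risk_domain: bool = False           # e.g. health, education, recruitment
    generates_synthetic_media: bool = False  # e.g. deepfakes
    general_purpose: bool = False
    training_flops: float = 0.0


def risk_tier(s: AISystem) -> str:
    """Map a (simplified) system description to a risk tier."""
    if s.manipulates_behaviour or s.realtime_remote_biometric_id or s.social_scoring:
        return "unacceptable risk (banned, subject to exemptions)"
    if s.general_purpose:
        if s.training_flops > SYSTEMIC_RISK_FLOPS:
            return "general-purpose AI with systemic risk (additional evaluations)"
        return "general-purpose AI (transparency requirements)"
    if s.high_risk_domain:
        return "high risk (conformity assessment and life-cycle obligations)"
    if s.generates_synthetic_media:
        return "limited risk (transparency obligations)"
    return "minimal risk (not regulated)"


print(risk_tier(AISystem(general_purpose=True, training_flops=3e25)))
# -> general-purpose AI with systemic risk (additional evaluations)
```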
Exemptions
Articles 2.3 and 2.6 exempt AI systems used for military or national security purposes or pure scientific research and development from the AI Act.[15]
Article 5.2 bans algorithmic video surveillance only when it is conducted in real time. Exceptions permit real-time algorithmic video surveillance for certain policing aims, including "a real and present or real and foreseeable threat of terrorist attack".[15]
Recital 31 of the act states that it aims to prohibit "AI systems providing social scoring of natural persons by public or private actors", but allows for "lawful evaluation practices of natural persons that are carried out for a specific purpose in accordance with Union and national law."[20] La Quadrature du Net interprets this exemption as permitting sector-specific social scoring systems,[15] such as the suspicion score used by the French family payments agency Caisse d'allocations familiales.[21][15]
Governance
The AI Act establishes various new bodies in Article 64 and the following articles. These bodies are tasked with implementing and enforcing the Act. The approach combines EU-level coordination with national implementation, involving both public authorities and private sector participation.
The following new bodies will be established:[22][23]
- AI Office: attached to the European Commission, this authority will coordinate the implementation of the AI Act in all Member States and oversee the compliance of general-purpose AI providers.
- European Artificial Intelligence Board: composed of one representative from each Member State, the Board will advise and assist the Commission and Member States to facilitate the consistent and effective application of the AI Act. Its tasks include gathering and sharing technical and regulatory expertise, providing recommendations, written opinions, and other advice.
- Advisory Forum: established to advise and provide technical expertise to the Board and the Commission, this forum will represent a balanced selection of stakeholders, including industry, start-ups, small and medium-sized enterprises, civil society, and academia, ensuring that a broad spectrum of opinions is represented during the implementation and application process.
- Scientific Panel of Independent Experts: this panel will provide technical advice and input to the AI Office and national authorities, support the enforcement of rules for general-purpose AI models (notably by issuing qualified alerts about possible risks to the AI Office), and ensure that the rules and implementation of the AI Act reflect the latest scientific findings.
While the establishment of new bodies is planned at the EU level, Member States will have to designate "national competent authorities".[24] These authorities will be responsible for ensuring the application and implementation of the AI Act, and for conducting "market surveillance".[25] They will verify that AI systems comply with the regulations, notably by checking the proper performance of conformity assessments and by appointing third parties to carry out external conformity assessments.
Enforcement
The Act regulates entry to the EU internal market using the New Legislative Framework. It contains essential requirements that all AI systems must meet to access the EU market. These essential requirements are passed on to European Standardisation Organisations, which develop technical standards that further detail them.[26] These standards are developed by CEN/CENELEC JTC 21.[27]
The Act requires member states to designate their own notifying authorities. Conformity assessments are conducted to verify whether AI systems comply with the standards set out in the AI Act.[28] This assessment can be done in two ways: either through self-assessment, where the AI system provider checks conformity, or through third-party conformity assessment, where a notified body conducts the assessment.[19] Notified bodies also have the authority to carry out audits to ensure proper conformity assessments.[29]
Criticism has arisen regarding the fact that many high-risk AI systems do not require third-party conformity assessments.[30][31][32] Some commentators argue that independent third-party assessments are necessary for high-risk AI systems to ensure safety before deployment. Legal scholars have suggested that AI systems capable of generating deepfakes for political misinformation or creating non-consensual intimate imagery should be classified as high-risk and subjected to stricter regulation.[33]
Legislative procedure
In February 2020, the European Commission published "White Paper on Artificial Intelligence – A European approach to excellence and trust".[34] In October 2020, debates between EU leaders took place in the European Council. On 21 April 2021, the AI Act was officially proposed by the Commission.[11] On 6 December 2022, the Council of the European Union adopted its general approach, allowing negotiations to begin with the European Parliament. On 9 December 2023, after three days of "marathon" talks, the EU Council and Parliament concluded an agreement.[35][36]
The law was passed in the European Parliament on 13 March 2024, by a vote of 523 for, 46 against, and 49 abstaining.[37] It was approved by the EU Council on 21 May 2024.[13] It entered into force on 1 August 2024,[3] 20 days after being published in the Official Journal on 12 July 2024.[12][38] Its provisions become applicable gradually after entry into force, with a delay that depends on the type of application: 6 months for bans on "unacceptable risk" AI systems, 9 months for codes of practice, 12 months for general-purpose AI systems, 24 months for most other obligations, and 36 months for some obligations related to "high-risk" AI systems.[38][37]
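As a minimal, purely illustrative sketch, the following Python snippet computes indicative applicability dates by adding these month offsets to the entry-into-force date. The labels and the simple calendar arithmetic are assumptions made here for illustration; the Act itself governs the exact dates and transitional arrangements.

```python
# Illustrative only: indicative applicability dates from the entry-into-force date.
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)

# Month offsets as described in the text (simplified labels).
OFFSETS_MONTHS = {
    "bans on 'unacceptable risk' systems": 6,
    "codes of practice": 9,
    "general-purpose AI rules": 12,
    "most other obligations": 24,
    "certain 'high-risk' obligations": 36,
}


def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months (day of month kept)."""
    month_index = d.year * 12 + (d.month - 1) + months
    return date(month_index // 12, month_index % 12 + 1, d.day)


for item, months in OFFSETS_MONTHS.items():
    print(f"{item}: applicable from {add_months(ENTRY_INTO_FORCE, months)}")
```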
Reactions
Experts have argued that though the jurisdiction of the law is European, it could have far-ranging implications for international companies that plan to expand to Europe.[39] Anu Bradford at Columbia has argued that the law provides significant momentum to the world-wide movement to regulate AI technologies.[39]
Amnesty International criticized the AI Act for not completely banning real-time facial recognition, which it said could damage "human rights, civil space and rule of law" in the European Union. It also criticized the absence of a ban on exporting AI technologies that can harm human rights.[39]
Some tech watchdogs have argued that there were major loopholes in the law that would allow large tech monopolies to entrench their advantage in AI, or to lobby to weaken rules.[40][41] Some startups welcomed the clarification the act provides, while others argued the additional regulation would make European startups uncompetitive compared to American and Chinese startups.[41] La Quadrature du Net (LQDN) described the AI Act as "tailor-made for the tech industry, European police forces as well as other large bureaucracies eager to automate social control". LQDN described the role of self-regulation and exemptions in the act to render it "largely incapable of standing in the way of the social, political and environmental damage linked to the proliferation of AI".[15]
References
- ^ a b c Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)
- ^ "Proposal for a Regulation laying down harmonised rules on artificial intelligence: Shaping Europe's digital future". digital-strategy.ec.europa.eu. 21 April 2021. Archived from the original on 4 January 2023. Retrieved 6 October 2024.
- ^ a b "AI Act enters into force" (Press release). Brussels: European Commission. 1 August 2024. Retrieved 5 August 2024.
- ^ "Timeline of Developments". artificialintelligenceact.eu. Future of Life Institute. Retrieved 13 July 2024.
- ^ "Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world". Council of the EU. 9 December 2023. Archived from the original on 10 January 2024. Retrieved 6 January 2024.
- ^ a b c Mueller, Benjamin (4 May 2021). "The Artificial Intelligence Act: A Quick Explainer". Center for Data Innovation. Archived from the original on 14 October 2022. Retrieved 6 January 2024.
- ^ Lilkov, Dimitar (2021). "Regulating artificial intelligence in the EU: A risky game". European View. 20 (2): 166–174. doi:10.1177/17816858211059248.
- ^ Espinoza, Javier (9 December 2023). "EU agrees landmark rules on artificial intelligence". Financial Times. Archived from the original on 29 December 2023. Retrieved 6 January 2024.
- ^ a b c d "EU AI Act: first regulation on artificial intelligence". European Parliament News. Archived from the original on 10 January 2024. Retrieved 6 January 2024.
- ^ MacCarthy, Mark; Propp, Kenneth (4 May 2021). "Machines learn that Brussels writes the rules: The EU's new AI regulation". Brookings. Archived from the original on 27 October 2022. Retrieved 7 September 2021.
- ^ a b c Proposal for a Regulation laying down harmonised rules on artificial intelligence
- ^ a b "World's first major act to regulate AI passed by European lawmakers". CNBC. 14 March 2024. Archived from the original on 13 March 2024. Retrieved 13 March 2024.
- ^ a b Browne, Ryan (21 May 2024). "World's first major law for artificial intelligence gets final EU green light". CNBC. Archived from the original on 21 May 2024. Retrieved 22 May 2024.
- ^ Coulter, Martin (7 December 2023). "What is the EU AI Act and when will regulation come into effect?". Reuters. Archived from the original on 10 December 2023. Retrieved 11 January 2024.
- ^ a b c d e f With the AI Act adopted, the techno-solutionist gold-rush can continue, La Quadrature du Net, 22 May 2024, Wikidata Q126064181, archived from the original on 23 May 2024
- ^ Mantelero, Alessandro (2022), Beyond Data. Human Rights, Ethical and Social Impact Assessment in AI, Information Technology and Law Series, vol. 36, The Hague: Springer-T.M.C. Asser Press, doi:10.1007/978-94-6265-531-7, ISBN 978-94-6265-533-1
- ^ Bertuzzi, Luca (7 December 2023). "AI Act: EU policymakers nail down rules on AI models, butt heads on law enforcement". Euractiv. Archived from the original on 8 January 2024. Retrieved 6 January 2024.
- ^ Liboreiro, Jorge (21 April 2021). "'Higher risk, stricter rules': EU's new artificial intelligence rules". Euronews. Archived from the original on 6 January 2024. Retrieved 6 January 2024.
- ^ a b Veale, Michael; Borgesius, Frederik Zuiderveen (1 August 2021). "Demystifying the Draft EU Artificial Intelligence Act — Analysing the good, the bad, and the unclear elements of the proposed approach". Computer Law Review International. 22 (4): 97–112. arXiv:2107.03721. doi:10.9785/cri-2021-220402. ISSN 2194-4164. S2CID 235765823.
- ^ Artificial Intelligence Act:[1] Recital 31
- ^ Notation des allocataires : la CAF étend sa surveillance à l'analyse des revenus en temps réel [Scoring of benefit recipients: the CAF extends its surveillance to real-time income analysis] (in French), La Quadrature du Net, 13 March 2024, Wikidata Q126066451, archived from the original on 1 April 2024
- ^ Bertuzzi, Luca (21 November 2023). "EU lawmakers to discuss AI rulebook's revised governance structure". Euractiv. Archived from the original on 22 May 2024. Retrieved 18 April 2024.
- ^ Friedl, Paul; Gasiola, Gustavo Gil (7 February 2024). "Examining the EU's Artificial Intelligence Act". Verfassungsblog. doi:10.59704/789d6ad759d0a40b. Archived from the original on 22 May 2024. Retrieved 16 April 2024.
- ^ Proposal:[11] Article 3 – definitions. Excerpt: "'national competent authority' means the national supervisory authority, the notifying authority and the market surveillance authority."
- ^ "Artificial Intelligence – Questions and Answers". European Commission. 12 December 2023. Archived from the original on 6 April 2024. Retrieved 17 April 2024.
- ^ Tartaro, Alessio (2023). "Regulating by standards: current progress and main challenges in the standardisation of Artificial Intelligence in support of the AI Act". European Journal of Privacy Law and Technologies. 1 (1). Archived from the original on 3 December 2023. Retrieved 10 December 2023.
- ^ "With the AI Act, we need to mind the standards gap". CEPS. 25 April 2023. Retrieved 15 September 2024.
- ^ Commission Staff Working Document: Impact Assessment Accompanying the Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts
- ^ Casarosa, Federica (1 June 2022). "Cybersecurity certification of Artificial Intelligence: a missed opportunity to coordinate between the Artificial Intelligence Act and the Cybersecurity Act". International Cybersecurity Law Review. 3 (1): 115–130. doi:10.1365/s43439-021-00043-6. ISSN 2662-9739. S2CID 258697805.
- ^ Smuha, Nathalie A.; Ahmed-Rengers, Emma; Harkens, Adam; Li, Wenlong; MacLaren, James; Piselli, Riccardo; Yeung, Karen (5 August 2021). "How the EU Can Achieve Legally Trustworthy AI: A Response to the European Commission's Proposal for an Artificial Intelligence Act". SSRN 3899991.
- ^ Ebers, Martin; Hoch, Veronica R. S.; Rosenkranz, Frank; Ruschemeier, Hannah; Steinrötter, Björn (December 2021). "The European Commission's Proposal for an Artificial Intelligence Act—A Critical Assessment by Members of the Robotics and AI Law Society (RAILS)". J: Multidisciplinary Scientific Journal. 4 (4): 589–603. doi:10.3390/j4040043. ISSN 2571-8800.
- ^ Almada, Marco; Petit, Nicolas (27 October 2023). "The EU AI Act: Between Product Safety and Fundamental Rights". SSRN 4308072.
- ^ Romero-Moreno, Felipe (29 March 2024). "Generative AI and deepfakes: a human rights approach to tackling harmful content". International Review of Law, Computers & Technology. 39 (2): 1–30. doi:10.1080/13600869.2024.2324540. hdl:2299/20431. ISSN 1360-0869.
- ^ "White Paper on Artificial Intelligence – a European approach to excellence and trust". European Commission. 19 February 2020. Archived from the original on 5 January 2024. Retrieved 6 January 2024.
- ^ Procedure 2021/0106/COD
- ^ "Timeline – Artificial intelligence". European Council. 9 December 2023. Archived from the original on 6 January 2024. Retrieved 6 January 2024.
- ^ a b "Artificial Intelligence Act: MEPs adopt landmark law". European Parliament. 13 March 2024. Archived from the original on 15 March 2024. Retrieved 14 March 2024.
- ^ a b David, Emilia (14 December 2023). "The EU AI Act passed — now comes the waiting". The Verge. Archived from the original on 10 January 2024. Retrieved 6 January 2024.
- ^ a b c "Europe agreed on world-leading AI rules. How do they work and will they affect people everywhere?". AP News. 11 December 2023. Retrieved 31 May 2024.
- ^ "EU parliament greenlights landmark artificial intelligence regulations". Al Jazeera. Retrieved 31 May 2024.
- ^ a b "The EU passed the first AI law. Tech experts say it's 'bittersweet'". euronews. 16 March 2024. Retrieved 31 May 2024.