European Union policymakers have agreed on a broad new set of rules to govern artificial intelligence.
The agreement represents one of the world’s first comprehensive attempts to regulate a rapidly developing technology that carries far-reaching social and economic consequences.
The A.I. Act sets a new global benchmark for governments seeking to harness the potential benefits of artificial intelligence while guarding against risks such as job automation, online disinformation and threats to national security. The law must still clear several final steps before approval, but the political agreement means its broad outlines are set.
Germany, France and Italy had argued against directly regulating generative A.I. models, known as “foundation models,” instead favoring self-regulation by the companies behind them through government-issued codes of conduct.
They feared that excessive regulation would hamper Europe’s ability to compete with American and Chinese technology leaders. Germany and France are home to two of Europe’s most promising A.I. companies, DeepL and Mistral AI.
European policymakers focused on the riskiest uses of artificial intelligence by companies and governments, such as law enforcement and the operation of crucial services like water and energy. The largest general-purpose A.I. systems, like those powering the ChatGPT chatbot, would face new transparency requirements. Chatbots and software that creates manipulated images such as “deepfakes” would have to make clear that what people were seeing was generated by A.I., according to E.U. officials and earlier drafts of the law.
The use of facial recognition software by police and governments would be restricted outside of certain safety and national security exemptions. Companies that violate the rules could face fines of up to 7 percent of their global revenue.
“Europe has positioned itself as a pioneer, understanding the importance of its role as global standard setter,” said Thierry Breton, the European commissioner who helped negotiate the deal, in a statement.
Even as the law was hailed as a regulatory breakthrough, questions remained about its effectiveness. Many aspects of the policy were not expected to take effect for 12 to 24 months, a considerable span in A.I. development. And until the final minutes of negotiations, officials and countries fought over the law’s wording and over how to balance fostering innovation with guarding against potential harm.
The deal reached in Brussels came after three days of negotiations, including an initial 22-hour session that began Wednesday afternoon and ran into Thursday.
The final agreement was not immediately made public, as talks were expected to continue behind the scenes to complete technical details, which could delay final passage. Votes must still be held in Parliament and the European Council, which comprises representatives from the bloc’s 27 member countries.
Regulating A.I. gained urgency after last year’s release of ChatGPT, which became a worldwide sensation by demonstrating A.I.’s advancing abilities. In the United States, the Biden administration recently issued an executive order focused on the national security implications of artificial intelligence. Britain, Japan and other nations have taken a more hands-off approach, while China has imposed some restrictions on data use and recommendation algorithms.
Trillions of dollars in estimated value are at stake as A.I. is expected to reshape the global economy. “Technological dominance precedes economic and political dominance,” Jean-Noel Barrot, France’s digital minister, said last week.
Europe has been one of the regions furthest ahead in regulating A.I., having begun work on what would become the A.I. Act in 2018. In recent years, E.U. leaders have tried to bring a new level of oversight to the technology industry, akin to the regulation of health care or banking. The bloc has already enacted far-reaching laws on data privacy, competition and content moderation.
The first draft of the A.I. Act was released in 2021. But policymakers were forced to rewrite the law as technological breakthroughs emerged. The initial version made no mention of general-purpose A.I. models like those that power ChatGPT.
Policymakers agreed to a “risk-based approach” to regulating A.I., in which a defined set of applications faces the most oversight and restrictions. Companies that make A.I. tools posing the greatest potential harm to individuals and society, such as in hiring and education, would need to provide regulators with proof of risk assessments, breakdowns of what data was used to train their systems and assurances that the software did not cause harm, like perpetuating racial biases. Human oversight would also be required in creating and deploying the systems.
Some practices, such as the indiscriminate scraping of images from the internet to create a facial recognition database, would be banned outright.
The European Union debate was contentious, a sign of how A.I. has confounded lawmakers. E.U. officials were divided over how deeply to regulate the newer A.I. systems for fear of handicapping European start-ups trying to catch up to American companies like Google and OpenAI.
The law requires makers of the largest A.I. models to disclose information about how their systems work and to evaluate for “systemic risk,” Mr. Breton said.
The new rules will be closely watched globally. They will affect not only major A.I. developers like Google, Meta, Microsoft and OpenAI, but also other businesses expected to use the technology in areas such as education, health care and banking. Governments are also increasingly turning to A.I. in criminal justice and the allocation of public benefits.
How the rules will be enforced remains unclear. The A.I. Act will involve regulators across 27 nations and will require hiring new experts at a time when government budgets are tight. Legal challenges are likely as companies test the new standards in court. Previous E.U. legislation, including the landmark digital privacy law known as the General Data Protection Regulation, has been criticized for uneven enforcement.