“It was long and intense, but the effort was worth it.”
Those were the words of Brando Benifei, an Italian member of the European Parliament, after policymakers emerged from 38 hours of dialogue with a tentative agreement on the world’s first attempt to regulate the use of artificial intelligence (AI).
The AI Act aims to establish a global standard for AI usage, preventing systemic risks in the European market and safeguarding democracy and human rights—all without clipping the wings of innovation. The law will apply to any entity operating in the EU, irrespective of its location.
Proposed by the European Commission almost two years ago, the framework now employs a risk-based approach, categorizing AI applications into four groups based on the level of harm they pose. Fines for violating the act are substantial, reaching up to 7% of global turnover for the most serious violations.
“Unacceptable” risks that threaten safety, jobs and human rights would be prohibited. “High” risk AI, essential in private and public services such as credit scoring, would face stringent regulations, including a mandatory fundamental rights assessment for the insurance and banking sectors. For “lower” risk functions, such as chatbots, companies would have to disclose when customers are interacting with AI and label AI-generated content.
Reception has been mixed, with some European tech firms as well as French President Emmanuel Macron criticizing the framework as overly restrictive and potentially hampering innovation. However, the framework is not set in stone: the European Parliament and Council must still adopt the final text before it can become EU law. Companies are expected to feel its impact between late next year and 2026.

Whatever emerges is likely to have vast impact. Last month, Scott Zoldi, chief analytics officer at risk analyst FICO, spoke about the ongoing debate in the US over whether AI innovation and regulation can coexist. “We are looking very carefully at what happens in the EU,” Zoldi said.