The European Union (EU) has reached a unanimous agreement on the bloc's Artificial Intelligence Act, overcoming concerns that the rules would stifle innovation. The law, which still needs formal approval from the European Parliament, would ban certain applications of AI technology and impose strict limits on high-risk use cases.
It would also require advanced AI models to meet transparency and stress-testing obligations. The law makes the EU the first jurisdiction to establish binding rules for AI technology; other countries and international bodies have so far relied mostly on voluntary guidelines or codes of practice.
The breakthrough was initially celebrated as a pioneering step for Europe, but it met resistance from Germany, France, and Austria, which raised concerns about potential limitations on AI models and about the law's data protection provisions. Through a combination of diplomatic maneuvering and reassurances from the European Commission, those countries were eventually brought back on board. The Commission also announced the creation of the EU's Artificial Intelligence Office, which will be responsible for enforcing the AI Act.
The agreement includes plans to set up an expert group to advise and assist the Commission in implementing the law and avoiding overlaps with other EU regulations.
The AI Act still needs formal approval from the European Parliament, but most experts involved in its development are confident it will pass without major changes. Disgruntled lawmakers could propose amendments to slow its passage, though any changes would require fresh negotiations with the Council. Overall, the AI Act represents a significant step toward regulating AI technology and ensuring its responsible and ethical use within the bloc.