The European Union’s AI Act has attracted significant attention as a comprehensive regulatory framework for artificial intelligence. However, the practical implementation of these rules remains a challenge, with companies required to comply with specific guidelines outlined in the Code of Practice for general-purpose AI models. As the deadline approaches, tensions between regulators and major tech firms continue to rise, highlighting concerns over compliance and regulatory influence.
Earlier discussions around the AI Act focused on its broad scope and its ambition to set global standards for AI governance. Where past reports emphasized the legislative framework, the discourse has now shifted to the enforcement phase, in which companies must adapt to concrete regulatory requirements. The ongoing delay in finalizing the Code of Practice reflects industry pushback, in contrast to the initial optimism about regulatory clarity. Previous AI governance efforts lacked enforceable measures; the EU's approach now moves towards tangible compliance mechanisms.
Why Is the Code of Practice Crucial?
The Code of Practice is designed to translate the AI Act’s principles into actionable requirements for businesses. It outlines compliance expectations for AI developers, particularly those working on large-scale models. These rules are set to take effect in August, yet the delay in releasing the third draft suggests underlying disagreements between regulators and industry leaders. The third draft, originally due in February, is now expected to slip by another month, with industry pressure cited as a key factor.
How Are Tech Companies Responding?
Several major technology firms, including OpenAI, Google (NASDAQ:GOOGL), Meta (NASDAQ:META), Anthropic, and xAI, have expressed concerns about the Code of Practice. Key points of contention include the use of copyrighted material for AI training and requirements for independent third-party evaluations to assess risks. Some companies argue that the code imposes obligations beyond those in the original AI Act, and Meta has reportedly declined to sign the voluntary compliance agreement.
“Certain big technology companies are coming out saying they either will not sign this code of practice unless it is changed according to what they want,” said Risto Uuk, head of EU policy and research at the Future of Life Institute.
Google’s global affairs president, Kent Walker, also criticized the regulatory approach, calling it a restrictive measure that could hinder Europe’s competitiveness. These objections have raised questions about whether the EU will soften its stance to accommodate corporate interests.
The Future of Life Institute, known for its earlier calls to pause AI development, has continued its advocacy for stronger regulations. Despite warnings about AI risks, the rapid pace of AI advancements has not slowed. OpenAI recently dissolved its AI safety team following the resignation of key safety leaders, raising questions about the industry’s commitment to risk mitigation.
While regulatory frameworks are emerging in multiple regions, including South Korea and China, enforcement remains a challenge. The EU’s AI Act is widely regarded as the most comprehensive attempt at AI governance, yet industry opposition could still influence the final regulations. The coming months will show whether policymakers hold their ground or yield to corporate concerns, shaping the broader future of AI regulation.