The European Union’s recent surge of legislation to regulate artificial intelligence (AI) is a testament to the growing importance of AI in our daily lives and the global economy. Laws such as the Digital Markets Act, Digital Services Act, Data Act, AI Act, and others reflect the EU’s intent to protect consumers, promote competition, and ensure responsible AI development.
However, there is growing concern that, well-intentioned as these efforts are, Europe's heavy-handed approach could stifle innovation, weaken competitiveness, and drive some companies to rethink their operations in the region.
In this article, I argue that the EU’s regulatory environment, particularly in the realm of AI, must tread carefully between fostering a competitive market and overregulation. The EU risks losing its position in the global tech landscape if it continues to impose burdensome regulatory obstacles on AI development, particularly for small and medium-sized enterprises (SMEs).
Early Involvement of Competition Regulators – A Necessary Precaution?
One of the most striking aspects of the current regulatory push in the EU is the early involvement of competition regulators in the AI space. Historically, regulators have been slow to act on new technologies, often waiting until problems materialise before imposing stringent measures.
However, the early issuance of the Joint Statement on Competition in Generative AI Foundation Models and AI Products, signed by key regulatory bodies such as the European Commission, the UK’s Competition and Markets Authority (CMA), and the US Federal Trade Commission (FTC), signals a proactive approach to preventing market imbalances in AI development.
The statement highlights three core risks: the concentration of control over AI’s critical infrastructure (like chips, data, and computing power), the market dominance of large digital firms, and the potential for collusive behaviour among key players. These risks are real, particularly in a market as nascent and fast-moving as AI, where a handful of players could establish early monopolies that are difficult to dismantle later.
However, one must ask whether this early intervention by competition regulators could itself be counterproductive. By acting before AI technologies have fully matured, regulators may inadvertently limit innovation and deter investment in the sector. Large tech firms may feel overly scrutinised, while smaller innovators could be stifled by the fear of regulatory backlash before they’ve even had a chance to establish themselves.
The Impact of Overregulation on European Tech Competitiveness
The concern over the impact of regulation on Europe’s global tech competitiveness is nothing new, but in the context of AI, it has taken on renewed urgency. The Draghi Report on the future of European competitiveness, published in September 2024, emphasised how overregulation is a major obstacle for European companies, particularly SMEs, which already struggle to compete in the global technology market.
The report notes that the EU’s penchant for the precautionary principle has created a legislative environment where businesses must navigate a thicket of regulations designed to pre-empt potential risks.
While this might work in theory to protect consumers and markets, in practice it can have the opposite effect: stifling innovation, slowing the rollout of new technologies, and adding significant compliance costs, particularly for smaller players. Layer the AI Act on top of the 100-plus tech-focused laws and more than 270 regulators already active in Europe, and it becomes clear that regulatory barriers are more than an inconvenience; they are a serious competitive disadvantage.
The AI Act's requirement that general-purpose AI models exceeding a certain computational threshold meet additional regulatory obligations is a prime example. The goal is to keep the most powerful models under careful scrutiny, but the practical effect is to make it harder for smaller firms to scale up and compete with established giants that can more easily absorb the costs of compliance.
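To give a sense of the scale involved: the AI Act presumes systemic risk for general-purpose models trained with more than 10^25 floating-point operations (Article 51). A rough sketch of where that line falls, using the widely cited 6 × parameters × training-tokens rule of thumb for training compute; the model sizes below are illustrative assumptions, not real disclosures:

```python
# Back-of-the-envelope check against the AI Act's systemic-risk
# compute threshold of 10^25 FLOPs, using the common approximation
# training compute ~ 6 * parameters * training tokens.
# All model configurations here are hypothetical.

THRESHOLD_FLOPS = 1e25  # AI Act Art. 51 presumption of systemic risk

def training_flops(params: float, tokens: float) -> float:
    """Estimate training compute via the 6*N*D rule of thumb."""
    return 6 * params * tokens

models = {
    "7B params, 2T tokens":    training_flops(7e9, 2e12),
    "70B params, 15T tokens":  training_flops(70e9, 15e12),
    "400B params, 15T tokens": training_flops(400e9, 15e12),
}

for name, flops in models.items():
    status = "above" if flops > THRESHOLD_FLOPS else "below"
    print(f"{name}: {flops:.1e} FLOPs ({status} threshold)")
```

On this rough estimate, a mid-sized model sits well under the line while a frontier-scale training run crosses it, which illustrates why the obligation falls almost exclusively on the largest, best-resourced developers.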
AI, Innovation, and Regulatory Overreach
AI has been hailed as one of the most transformative technologies of our time, with applications in sectors ranging from healthcare to finance to logistics. Yet, as the EU moves swiftly to regulate its development and deployment, the risk of regulatory overreach is becoming more pronounced.
Some companies have already paused or delayed the deployment of AI tools in Europe, fearing non-compliance with the Digital Markets Act and other regulations. The issue is not just the scale of fines for non-compliance, but also the difficulty of navigating the complex and sometimes contradictory regulations that exist across different sectors and jurisdictions. In trying to regulate the inner workings of AI technologies, the EU risks creating an environment where companies would rather hold off on innovation than risk running afoul of these intricate laws.
A particular area of concern is the EU’s push for interoperability between messaging systems, as mandated by the Digital Markets Act. While the goal is to create a more open and competitive market, forcing companies to open their platforms to third-party applications raises significant privacy and security risks. Many of the so-called ‘gatekeepers’—major tech companies with dominant platforms—have argued that these requirements could expose their systems to hacking, data breaches, and other security vulnerabilities.
Regulators, however, have largely downplayed these concerns, treating them as attempts to shield anti-competitive practices behind security arguments. This back-and-forth between regulators and tech firms illustrates how difficult it is to balance openness with security in the AI age.
The Costs of Pausing Innovation
There is a real danger that Europe’s overregulation of AI could lead to it falling behind in the global race for AI supremacy. Already, some multinational firms are slowing down or halting the training of their AI models in Europe due to fears of non-compliance. This could leave European consumers and businesses without access to the latest AI-driven tools and technologies, while other regions, like the US and Asia, continue to forge ahead with fewer restrictions.
The Draghi Report pointed to the ‘over-concentration’ of tech regulation in Europe as a major reason why many European tech companies struggle to scale up and compete on the global stage. For a technology as game-changing as AI, the risk is even greater. As Europe imposes more restrictions on the development and deployment of AI, other regions could surge ahead, securing the talent, investment, and infrastructure needed to lead in this new technological frontier.
In my view, the EU’s desire to regulate AI is understandable, but the pace and intensity of the regulation must be carefully considered. If Europe wants to remain competitive in the global AI market, it cannot afford to burden its companies with excessive red tape. A more flexible, iterative approach to AI regulation—one that allows for innovation while still protecting consumers—would be far more effective than the heavy-handed approach we are seeing today.
Conclusion: Striking the Right Balance
The challenge of regulating AI is immense, and the EU is right to take the lead in addressing the potential risks posed by this transformative technology. However, Europe must strike the right balance between protecting consumers and promoting innovation. Overregulation risks stifling the very technology it seeks to manage, driving companies out of Europe and leaving consumers worse off in the long run.
Rather than imposing blanket restrictions and onerous compliance burdens, the EU should adopt a more flexible, risk-based approach to AI regulation. By focusing on areas where real harm can be mitigated—such as transparency, security, and bias—while allowing room for innovation and growth, Europe can remain at the forefront of AI development without losing its competitive edge.
If the EU fails to achieve this balance, it risks not only losing its leadership in AI but also alienating the very companies that are driving this technological revolution forward. The consequences of such a failure would be felt not just in Europe, but across the entire global economy.