Navigating the Interplay Between the Digital Services Act and AI Regulation: A New Era of Compliance

In 2024, the Digital Services Act (DSA) came into full force, ushering in a new regulatory environment for online platforms and intermediaries operating in the European Union. Alongside the DSA, the EU AI Act has introduced a suite of new rules that aim to ensure transparency, accountability, and safety in the deployment of artificial intelligence systems.

Businesses, especially those leveraging AI, are now navigating the complex intersection of these regulations, alongside the GDPR, which has been the cornerstone of EU privacy law since 2018.

The DSA’s broad applicability and its significant focus on content moderation, transparency, and user safety make it one of the most ambitious regulatory efforts the EU has undertaken. Coupled with the AI Act’s stringent guidelines for high-risk AI systems, companies must reassess how they balance innovation with compliance.

In this piece, I’ll argue that while these regulations create significant compliance burdens, they are also necessary to protect user rights in an increasingly data-driven world.

The DSA and AI: A Complex Compliance Landscape

The DSA’s primary objective is to ensure that online platforms foster a safer and more predictable digital environment. For Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), the DSA imposes stringent obligations, including requirements to mitigate the spread of illegal content, prevent manipulative interface designs such as “dark patterns,” enhance advertising transparency, and conduct annual systemic risk assessments. At the same time, the EU AI Act imposes rules on how AI systems can be used, particularly in areas where risks to users’ rights are high.

The convergence of these laws forces companies to adopt a dual approach to compliance. Platforms that use AI for content moderation or advertising must ensure that their AI systems meet the standards of both the DSA and the AI Act. For example, if an AI tool is used to monitor or moderate user-generated content, the platform must ensure not only that the tool functions transparently under the DSA but also that it adheres to the rigorous requirements for transparency, fairness, and accountability under the AI Act. This adds complexity, as businesses need to account for both sets of rules while continuing to innovate in their AI-driven tools.

AI as Both a Solution and a Risk in Content Moderation

One of the most significant challenges posed by the DSA for platforms using AI lies in content moderation. AI has the potential to streamline content moderation by automating the detection and removal of illegal or harmful content at scale. However, AI is not infallible. The use of generative AI in particular, which can create vast amounts of content quickly, raises the risk of harmful or illegal material slipping through. This has significant implications for compliance under the DSA, which holds platforms accountable for the illegal content that appears on their services.

AI tools used for content moderation must also comply with the AI Act’s requirements, including transparency and risk assessments. This means that platforms need to disclose when AI is being used for moderation, provide users with explanations of the AI system’s decisions, and assess the potential risks associated with its use. Given that AI moderation systems can be error-prone, the onus is on businesses to ensure that these tools are accurate and fair—requirements that can be difficult to meet at scale. For platforms using generative AI, the potential for generating harmful or illegal content compounds the complexity of maintaining compliance.
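To make this concrete, here is a minimal sketch, in Python, of how a platform might record each moderation decision so that the required disclosures can be produced on demand. The ModerationDecision structure and its field names are hypothetical illustrations of the kinds of information the DSA’s statement of reasons and the AI Act’s transparency duties point to, not an official schema or any particular platform’s implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ModerationDecision:
    """Hypothetical record of a single moderation decision.

    The fields mirror the kinds of disclosures the DSA's statement of
    reasons and the AI Act's transparency duties point to; they are an
    illustration, not an official schema.
    """
    content_id: str
    policy_violated: str              # e.g. "hate speech", "counterfeit goods"
    action_taken: str                 # e.g. "removal", "visibility restriction"
    automated_detection: bool         # was the content flagged by an AI system?
    automated_decision: bool          # was the action applied without human review?
    model_id: Optional[str] = None    # identifier of the AI system involved, if any
    model_confidence: Optional[float] = None
    human_reviewer_involved: bool = False
    user_facing_reason: str = ""      # plain-language explanation shown to the user
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def statement_of_reasons(decision: ModerationDecision) -> str:
    """Assemble a plain-language notice that discloses the role of automation."""
    automation = (
        "This decision was made by an automated system."
        if decision.automated_decision
        else "This content was flagged automatically and reviewed by a human moderator."
        if decision.automated_detection
        else "This decision was made by a human moderator."
    )
    return (
        f"Your content ({decision.content_id}) was subject to "
        f"{decision.action_taken} under our policy on {decision.policy_violated}. "
        f"{automation} {decision.user_facing_reason}"
    )
```

Capturing whether a decision was automated, which system made it, and whether a human reviewed it at the moment the decision is taken is far easier than reconstructing that information later for a regulator, an auditor, or a user appeal.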

The Challenges of Transparency and Accountability

The DSA’s focus on transparency is both a critical component and a significant challenge. Platforms must inform users about their content moderation policies, including how AI tools are used to moderate or remove content. Under the DSA, this means disclosing when AI tools are being used and providing information about their accuracy, error rates, and the role human reviewers play in the process.
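Building on the hypothetical decision records sketched above, the snippet below shows one way such indicators could be aggregated for a periodic transparency report. Treating decisions overturned on appeal or human review as the proxy for error is an assumption made purely for illustration; it is not the DSA’s prescribed methodology, and real reporting would need a more careful definition of accuracy.

```python
from typing import Iterable

def automation_indicators(decisions: Iterable[ModerationDecision],
                          overturned_ids: set[str]) -> dict:
    """Aggregate rough indicators about automated moderation decisions.

    `overturned_ids` is assumed to contain the content IDs of automated
    decisions later reversed on appeal or human review; counting a
    reversal as an error is a simplification made for illustration.
    """
    decisions = list(decisions)
    automated = [d for d in decisions if d.automated_decision]
    overturned = [d for d in automated if d.content_id in overturned_ids]
    return {
        "total_decisions": len(decisions),
        "automated_decisions": len(automated),
        "overturned_on_review": len(overturned),
        "indicative_error_rate": len(overturned) / len(automated) if automated else None,
        "human_review_share": (
            sum(d.human_reviewer_involved for d in decisions) / len(decisions)
            if decisions else None
        ),
    }
```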

The AI Act complements this by requiring platforms to provide detailed information on how AI systems make decisions. For high-risk AI systems, platforms must ensure that these systems operate in a way that is explainable and understandable to users. This creates a heavy burden for platforms that rely on AI systems that are often complex and opaque by nature, such as large language models or other machine learning algorithms.

Transparency is not just a regulatory obligation—it is essential for building trust with users. With AI increasingly embedded in online services, users need to understand how their data is being processed and how decisions that affect them are being made. The DSA and AI Act jointly address this, but meeting these transparency requirements can be resource-intensive and technically challenging for companies that operate across multiple jurisdictions and use a wide array of AI tools.

The Tension Between Innovation and Regulation

One of the most contentious aspects of these regulations is the perceived tension between fostering innovation and ensuring compliance. For many companies, AI is a critical driver of innovation, particularly in areas like content personalization, targeted advertising, and automated decision-making. The DSA and AI Act, however, place limits on how companies can use AI, particularly when it comes to processing user data or making decisions that affect individuals’ rights.

While some may argue that these regulations stifle innovation, I contend that they are necessary guardrails to ensure that the rapid deployment of AI does not come at the expense of users’ rights. Regulations like the DSA and AI Act are essential in an era where technology is advancing faster than our ability to fully understand its consequences. Without these regulatory frameworks, there is a risk that companies may prioritize profit and efficiency over fairness, transparency, and user protection.

In fact, by fostering transparency, fairness, and accountability, these regulations can help build user trust—an asset that is crucial for long-term success in a data-driven economy. Users are becoming increasingly aware of how their data is being used and are more likely to trust companies that are transparent and compliant with data protection regulations. Companies that prioritize ethical AI development and compliance with the DSA and AI Act will not only avoid hefty fines but also position themselves as leaders in responsible technology innovation.

Final Thoughts

The full enforcement of the Digital Services Act in 2024, alongside the EU AI Act, represents a watershed moment in how online platforms, intermediaries, and AI systems are regulated. While these laws introduce significant compliance challenges, they also offer a blueprint for ensuring that AI can be deployed responsibly and ethically in the digital space.

Businesses should view these regulations not as mere obstacles but as an opportunity to build trust, ensure fairness, and foster a more transparent digital ecosystem. The companies that succeed in this new regulatory landscape will be those that integrate compliance into their innovation processes and view transparency as a core business value. As AI continues to evolve and shape the digital world, only those businesses that prioritize both innovation and compliance will thrive.

In conclusion, the interplay between the DSA and AI Act marks a new chapter in the responsible deployment of technology. By embracing these regulations and ensuring that AI systems are transparent, fair, and accountable, businesses can safeguard user trust and maintain a competitive edge in a rapidly changing landscape.

Contact the author
Peter Borner
Executive Chairman and Chief Trust Officer

As Co-founder, Executive Chairman and Chief Trust Officer of The Data Privacy Group, Peter Borner leverages over 30 years of expertise to drive revenue for organisations by prioritising trust. Peter shapes tailored strategies to help businesses reap the rewards of increased customer loyalty, improved reputation, and, ultimately, higher revenue. His approach provides clients with ongoing peace of mind, solidifying their foundation in the realm of digital trust.

Specialises in: Privacy & Data Governance
