Navigating the UK Online Safety Act
Achieving a True Privacy-Safety Equilibrium is an Evolving Challenge
June 23, 2025
Organisations around the globe now face a pressing dual challenge: ensuring robust user safety while rigorously protecting privacy under increasingly strict digital regulations. The UK Online Safety Act (the “Act”), which is being rapidly implemented, exemplifies this shift. With several critical compliance deadlines looming, such as the July 2025 deadline for completing children’s risk assessments and implementing child protection duties, pressure is mounting for many organisations. Importantly, this is not only a concern for established platforms; organisations planning to launch user-to-user services or search engines must address UK Online Safety Act requirements from the outset of design and development.1
Passed on 26 October 2023, the Act mandates that “user-to-user services” — digital platforms hosting user-generated content, including social media, messaging apps and other online communication services — embed robust safety measures directly into their systems.2 The Act enforces safety by design to protect vulnerable users, particularly minors, while simultaneously mandating privacy by design to secure personal data in line with the UK General Data Protection Regulation (UK GDPR).
According to Ofcom’s published roadmap, providers have been required to adopt the mandated safety measures or effective alternatives since 17 March 2025.3 Ofcom has already begun enforcement activity, and non-compliant companies face fines of up to £18 million or 10% of global revenue, whichever is greater.
This regulatory evolution is not isolated to the UK; similar initiatives, such as the U.S. Kids Online Safety Act, Australia’s Online Safety Act and Ireland’s Online Safety and Media Regulation Act, illustrate global momentum toward enhancing digital safety.4 This article examines the core requirements of the UK Online Safety Act, explores the critical balance between safety by design and privacy by design and offers guidance on navigating the evolving regulatory landscape.
A Closer Look at the UK Online Safety Act
The Act establishes a comprehensive framework aimed at protecting users, particularly vulnerable groups such as children, across a wide range of digital services.
User-to-user services, search services and platforms that enable online interaction are the focus of the regulation. Social media sites, video-sharing platforms, online forums, consumer file storage services, dating apps and instant messaging services are all covered. Notably, any artificial intelligence-generated content shared among users is treated in the same manner as human-generated content, underlining the Act’s all-encompassing regulatory reach.5 Furthermore, the Act applies even to companies outside the UK if they have significant links to the country, such as a large UK user base, a targeted UK market or the capacity to be accessed by UK users where there is a material risk of harm.6
However, the law carves out specific exemptions for certain types of services, including:
- Email-only services: User-to-user services that allow only emails (apart from minimal identifying content).
- SMS/MMS-only services: Those limited solely to SMS or MMS messages.
- One-to-one live aural communications: Platforms providing only live voice communications between individuals.
- Limited functionality services: If a service restricts user interactions to posting comments or reviews, sharing these externally, or simply “liking,” “disliking,” or rating content, and only displays minimal identifying information.
Phased Implementation
Ofcom, the UK’s designated online safety regulator, has steered the implementation of the Act in a methodical, three-phase rollout.7 On 3 March 2025, Ofcom officially launched its enforcement programme to monitor compliance with illegal content risk assessment and record-keeping duties. Looking ahead, the next major deadline is 31 July 2025, by which time services likely to be accessed by children must complete their children’s risk assessments and implement appropriate safety measures. This follows the publication of Ofcom’s Protection of Children Codes and risk assessment guidance in April 2025.8 Some services will also be required to disclose these risk assessments to Ofcom by the July deadline.
Ofcom’s approach is built around the following four core objectives:
- Stronger safety governance: Enhancing risk management practices across digital services.
- Safety-centric platform design: Mandating that safety is embedded in the very architecture of digital platforms.
- Enhanced user controls: Empowering users with more choices and control over their online experiences.
- Increased transparency and accountability: Building trust through clear reporting and operational practices.
Ofcom’s Approach to Implementing the Online Safety Act. Source: Ofcom.9
This framework is particularly stringent for Category 1 services (user-to-user platforms with high reach and advanced features like content recommendation systems) and Category 2A services (search platforms with high user volumes), which face the most rigorous requirements.
Enforcement and Penalties
The stakes for non-compliance are high. Companies can be fined up to £18 million or 10% of their global revenue, whichever is greater. Senior managers risk criminal charges if they neglect to comply with Ofcom’s information requests or enforcement notices, especially those tied to child safety and the prevention of child sexual abuse and exploitation.10 In the most extreme cases, with judicial approval, Ofcom can compel payment providers, advertisers and internet service providers to cut ties with non-compliant sites, effectively blocking their revenue streams and access within the UK.
The Privacy Challenge
While the imperative for safety by design is clear, it introduces a parallel challenge: safeguarding privacy. Privacy by design ensures that while safety measures are in place, user data is collected and processed only as necessary, safeguarding fundamental rights under the UK GDPR. With increasing reliance on advanced technologies like facial recognition, age estimation and AI-driven content monitoring, the balance between effective moderation and user privacy becomes a delicate one.
Implementing Ofcom’s requirements introduces several challenges in maintaining this balance, including:
- Age verification vs. data minimization: The Act mandates stringent age verification mechanisms to prevent minors from accessing inappropriate content. While necessary for child protection, these processes often require the collection of sensitive personal data (e.g., date of birth, ID scans, facial recognition), which may conflict with data minimization principles under the UK GDPR.
- Content moderation vs. user confidentiality: To meet obligations under the UK Online Safety Act, platforms must proactively monitor, detect and remove harmful or illegal content. However, this can require scanning private communications or weakening end-to-end encryption, which compromises user confidentiality.
- Automated decision making vs. fairness and transparency: AI-powered moderation systems can efficiently detect and remove harmful content but may also generate false positives, resulting in legitimate content being wrongly flagged or removed. The lack of transparency in AI decision-making can further undermine user trust.
- Algorithmic profiling vs. privacy rights: The UK Online Safety Act requires platforms to assess how their algorithms influence users’ exposure to harmful content. However, extensive algorithmic profiling, such as behavioural tracking and content personalization, can infringe on users’ privacy rights.
- Consent and granular control: The evolving regulatory landscape calls for nuanced approaches to user consent. Organisations must explore mechanisms for granular consent, enabling users to have control over their data and ensuring age-appropriate content access without overreaching surveillance.
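The granular consent point above can be made concrete with a small sketch. The following Python fragment (all names such as `ConsentRecord` and `allows` are hypothetical, not drawn from any real platform or library) illustrates the idea of recording consent per processing purpose, with a default-deny check and a timestamp for auditability, rather than relying on a single blanket opt-in:

```python
# Hypothetical sketch of granular, per-purpose consent: each processing
# purpose is recorded and checked separately, rather than one blanket opt-in.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    user_id: str
    # purpose -> (granted?, when the choice was recorded), for auditability
    purposes: dict = field(default_factory=dict)

    def set(self, purpose: str, granted: bool) -> None:
        self.purposes[purpose] = (granted, datetime.now(timezone.utc))

    def allows(self, purpose: str) -> bool:
        """Default-deny: processing proceeds only with an explicit grant."""
        granted, _ = self.purposes.get(purpose, (False, None))
        return granted

consent = ConsentRecord(user_id="u-123")
consent.set("age_assurance", True)
consent.set("personalised_recommendations", False)

print(consent.allows("age_assurance"))            # True
print(consent.allows("behavioural_advertising"))  # False: never asked, so denied
```

The design choice worth noting is the default-deny lookup: a purpose that was never put to the user is treated the same as a refusal, which aligns with the Act's expectation of age-appropriate access without overreaching surveillance.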
Achieving a Privacy-Safety Equilibrium
Organisations must now navigate a labyrinth of digital regulations where ensuring robust user safety and safeguarding privacy are two sides of the same coin. To comply with the UK Online Safety Act while upholding individual rights, companies must integrate privacy by design and safety by design simultaneously. Guidance from the UK’s Information Commissioner’s Office (ICO) underscores the delicate interplay, offering a blueprint for balancing content moderation with rigorous data protection.11
Key Strategies Include:
- Proactive privacy and AI risk assessments: Conduct thorough privacy and AI risk assessments to understand how safety measures impact data protection.
- Lawful data processing: Establish a lawful basis before collecting or processing personal information.
- Data minimization: Implement strategies to reduce unnecessary data collection during activities such as age verification and content moderation, while ensuring that access controls limit sensitive information exposure only to those who need it.
- Transparent AI practices: Adopt transparent and fair AI decision-making processes, complete with clear policies on data retention, deletion timelines and user rights.
- Accuracy reviews: Regularly verify that the personal data used in content moderation is accurate and not misleading.
- Data deletion and anonymization: Securely delete or anonymize data when it is no longer needed.
- Privacy-preserving techniques: Utilize innovative methods, such as zero-knowledge proofs or cryptographic tokens, to confirm user attributes like age without compromising sensitive identity details.
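To make the last strategy more tangible, here is a minimal sketch of the cryptographic-token idea (not a true zero-knowledge proof, and every name here — `issue_age_token`, `verify_age_token`, the demo key — is an illustrative assumption): a trusted issuer checks the date of birth privately and signs only a boolean "over 18" claim, so the service verifying the token never handles the birth date or an ID document.

```python
# Illustrative sketch only: a trusted issuer attests "over 18" with a signed
# token, so the service never sees the user's date of birth or ID document.
import hmac
import hashlib
import json
from datetime import date

ISSUER_KEY = b"demo-secret-key"  # in practice: a managed signing key, e.g. in an HSM

def issue_age_token(date_of_birth: date, user_ref: str) -> dict:
    """Issuer checks the birth date privately and signs only a boolean claim."""
    today = date.today()
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    claim = {"user_ref": user_ref, "over_18": age >= 18}  # no DOB in the claim
    sig = hmac.new(ISSUER_KEY, json.dumps(claim, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_age_token(token: dict) -> bool:
    """The service verifies the signature; it learns only the over-18 flag."""
    expected = hmac.new(ISSUER_KEY, json.dumps(token["claim"], sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"]) and token["claim"]["over_18"]

token = issue_age_token(date(2000, 1, 15), user_ref="u-123")
print(verify_age_token(token))  # the service confirms age without ever seeing the birth date
```

A production scheme would use asymmetric signatures (so the service cannot mint tokens itself) or genuine zero-knowledge protocols, but the data-minimization principle is the same: the claim that crosses the boundary contains the attribute, not the identity evidence behind it.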
Embedding these privacy-centric principles into safety protocols is not merely a checklist. It is a dynamic, ongoing process that requires constant refinement and adaptation as threats evolve and technologies advance. The UK Online Safety Act signals a new era of digital regulation where user safety and privacy are not mutually exclusive but must work together to create trustworthy and secure digital environments.
Yet, several questions remain. How can organisations continuously adapt their privacy and safety frameworks as new technologies and risks emerge? What measures will ensure that safety protocols remain effective without compromising the privacy rights of users? How should organisations balance conflicting demands between strict regulatory compliance and the need for innovation? These questions underscore that achieving a true privacy-safety equilibrium is an evolving challenge that extends beyond a simple checklist.
How FTI Technology Can Support
Navigating these complex regulations requires a strategic approach. FTI Technology offers comprehensive services to guide organisations through compliance, including:
- Strategic advisory: Experts provide tailored guidance to navigate the complexities of the UK Online Safety Act, ensuring digital platforms meet all safety and privacy mandates.
- Risk assessments: Thorough evaluations identify potential safety and privacy risks, offering actionable insights to mitigate them effectively.
- Compliance and governance frameworks: FTI Technology assists in developing robust governance structures and transparent reporting mechanisms to maintain compliance and foster user trust.
- Privacy by design support: Support for product teams to integrate privacy by design principles into platform and service development, balancing user safety with data protection.
- Training and support: Customized training programs to equip teams with the knowledge and tools required to uphold both safety and privacy standards.
About the Author:
Saniya Dhanjani is a Senior Consultant within FTI Technology, based in London. She specializes in Information Governance, Privacy and Security and supports clients with developing programs, policies and procedures to support data privacy compliance. Prior to joining FTI Technology, Ms. Dhanjani was an Associate Privacy Specialist at a leading digital marketing agency and a researcher at a data ethics think-tank. She brings a proven track record of leading high-impact projects and synthesizing insights into actionable strategic recommendations, including advising executives on privacy, trust and safety initiatives, developing privacy-by-design strategies and managing complex privacy programs that align with global regulations such as the GDPR, the UK Online Safety Act, the EU AI Act, the EU Digital Markets Act, the EU Digital Services Act and more.
1: Department for Science, Innovation & Technology, “Online Safety Act: explainer,” Gov.uk (24 April 2025)
2: Ibid.
3: “Implementing the Online Safety Act: progress update,” Ofcom (17 October 2024)
4: “Kids Online Safety Act,” United States Congress (13 December 2023)
5: “Open letter to UK online service providers regarding Generative AI and chatbots,” Ofcom (8 November 2024)
6: Ibid.
7: Ibid.
8: “Statement: Protecting children from harms online,” Ofcom (24 April 2025)
9: Ibid.
10: “Implementation of the Online Safety Act,” UK Parliament (25 February 2025)
11: “A joint statement by Ofcom and the Information Commissioner’s Office on Collaboration on the Regulation of Online Services,” Information Commissioner’s Office (1 May 2024)