X will now check UK hate speech and terror reports within 24 hours under Ofcom's new rules.
- X promised Ofcom it will review UK illegal content reports within 24 hours on average
- 85% of reports must be handled within 48 hours through X’s UK reporting channel
- X will also work with outside experts to audit its reporting system’s transparency
Britain’s media regulator has forced a rare compliance climbdown from X, the platform formerly known as Twitter, after years of criticism over its hands-off approach to illegal content. Under a binding agreement with Ofcom, the UK’s communications watchdog, X now commits to reviewing and assessing reports of suspected terrorist propaganda and hate speech from British users within a single day on average. The company pledged that at least 85% of these reports will be processed within 48 hours via its dedicated UK reporting channel, an inbox for UK-specific violations that was neither consistently visible to nor trusted by users and NGOs before today. After multiple groups complained that reports they filed vanished without a trace, X also agreed to bring in external experts to check whether its systems actually receive and log complaints before they are closed or ignored.

The deal lands just months after Ofcom began fining X £1.5 million per day for failing to hand over requested data on illegal content in the UK, a standoff that finally pushed the company to negotiate in earnest. X did not confirm whether the new rules will apply to its global operations or remain UK-only, but the agreement explicitly covers “UK users” and content accessible within Britain, meaning the changes won’t automatically extend to posts visible only in the US or EU for now. The obligations kick in immediately, though Ofcom said it will monitor compliance through regular audits and could impose further penalties if X repeatedly misses its targets.

One unanswered question is what happens to X’s commitment to “free speech absolutism,” the company’s long-standing rhetoric that has often clashed with UK laws requiring platforms to remove clearly illegal material, such as terrorist recruitment videos or racist abuse, within hours. The new rules don’t change X’s policies on what is allowed; they only force the company to enforce its existing ban on illegal content faster when UK users flag it.
In practice, this means videos glorifying ISIS or posts calling for violence against immigrants could face takedowns within a day of being reported, rather than languishing for weeks as they sometimes did before.

Critics say the deal is overdue. The Anti-Defamation League and Tell MAMA, a UK hate-crime monitoring group, both told Ofcom last year that X’s reporting system was so opaque that they often couldn’t tell whether their complaints had even been logged, let alone reviewed. One senior Ofcom official, speaking on condition of anonymity, said the regulator was “surprised by how little transparency X offered users before today,” adding that the company’s new promises are a “minimum standard” rather than a gold standard.

X’s move also comes as the UK government prepares to expand Ofcom’s powers under the Online Safety Act, a sweeping law that will require platforms to remove illegal content faster or face fines of up to 10% of global revenue, enough to make even Silicon Valley’s most stubborn executives blink. The act is not yet in full force, but Ofcom has already started using its existing powers to pressure platforms like X into voluntary compliance ahead of the rules taking full effect next year.

For X, the timing is awkward. The company, now run by Elon Musk, has repeatedly clashed with regulators worldwide over content moderation, user verification, and data transparency. Musk’s team has argued that rapid takedowns risk “censoring” controversial but legal speech, while critics say X’s slowness to act fuels real-world harm, from radicalization to harassment campaigns. Today’s deal suggests that even a platform built on free-speech absolutism can’t ignore a regulator with real teeth, especially not one that can fine it millions of pounds per day.

What happens next depends partly on whether X’s new UK team can actually hit its 24-hour target.
Ofcom said it’s already testing the system internally, and early results show “encouraging signs,” though the regulator declined to share specific metrics. The bigger test will come when the first high-profile cases land in the new inbox: will X remove a video of a UK far-right activist calling for attacks on mosques within 24 hours, or will the company’s new promises prove just another PR move? For now, the deal buys X some goodwill with regulators—but it won’t erase years of distrust with NGOs, politicians, or the UK public. The real test starts tomorrow morning, when the first batch of UK reports hits the inbox.
What You Need to Know
- Source: The Register
- Published: May 15, 2026 at 10:52 UTC
- Category: Technology
- Topics: #theregister · #tech · #enterprise · #ofcom · #britain · #x-ofcom-illegal-content-uk
Read the Full Story
This is a curated summary. For the complete article, original data, quotes and full analysis:
All reporting rights belong to the respective author(s) at The Register. GlobalBR News summarizes publicly available content to help readers discover the most relevant global news.
Curated by GlobalBR News · May 15, 2026
🇧🇷 Summary in Portuguese
X, the platform formerly known as Twitter, has finally yielded to pressure and announced that it will begin reviewing reports of hate speech and terrorist content in the UK within 24 hours, following an agreement with Ofcom, the British regulator. The measure, which places the social network under strict oversight, comes after years of criticism over its slow and ineffective moderation of harmful content, especially content that proliferates in Brazil and other Portuguese-speaking countries.
In Brazil, where X faces similar challenges, such as the spread of fake news, hate speech, and disinformation during elections and political crises, the platform’s decision may signal change. The agreement with Ofcom sets an important precedent, since Brazil is also debating stricter regulation of social networks, inspired by international models. Brazilian experts see the measure as a test of the effectiveness of the company’s moderation policies, which have so far been considered insufficient given the volume of harmful content in Portuguese.
If the rollout succeeds, X may face pressure to extend these standards to other markets, including Brazil, where regulation of digital platforms is still under discussion in Congress.
🇪🇸 Summary in Spanish
X has committed to reviewing reports of hate speech and terrorism in the UK within 24 hours, following an agreement with the regulator Ofcom. The platform, formerly known as Twitter, has yielded to pressure after numerous user complaints about its slowness in moderating dangerous content.
The pact comes at a key moment, as the UK has been enforcing the Online Safety Act since this year, which requires social networks to act quickly against illegal content or content that puts minors at risk. For Spanish speakers, this measure could set a precedent in other markets where the platform has been criticized for inaction against disinformation and harassment. Experts note that while the agreement is a step forward, its effectiveness will depend on real implementation and on independent audits that guarantee transparency.