The Online Safety Act & Nostr

Understanding UK regulation and the case for censorship-resistant alternatives

What is the Online Safety Act?

The **Online Safety Act 2023** is landmark UK legislation that became law in October 2023, with its duties phased in over the following years. It represents one of the world's most comprehensive attempts to regulate online platforms.

Platform Liability

Companies must take proportionate steps to prevent users from encountering illegal content and to protect children from content that is harmful to them.

Duty of Care

Platforms owe a legal duty of care to their users, particularly children, covering both illegal content and content that is legal but deemed harmful to children.

Enforcement Powers

Ofcom, the UK communications regulator, can fine companies up to £18 million or 10% of qualifying worldwide revenue (whichever is greater), and can seek court orders to block access to non-compliant services in the UK.

Age Verification

Platforms must implement age verification or age estimation to prevent children from accessing adult content, and must apply stricter protections to child users.

The Act's Good Intentions

The Online Safety Act was created to address real problems:

🛡️ Child Protection

Preventing children from accessing harmful content and protecting them from online abuse

⚖️ Illegal Content

Removing genuinely illegal material like CSAM, terrorism content, and serious threats

❤️ Mental Health

Addressing concerns about social media's impact on young people's wellbeing

🔒 Online Abuse

Reducing harassment, hate speech, and targeted abuse on platforms

These are legitimate concerns that deserve serious attention. The question is whether the Act's approach is the right solution.

Why This Raises Concerns

1. Over-Censorship Incentives

When platforms face massive fines for failing to remove "harmful content," they have strong incentives to over-moderate - removing content that is perfectly legal but might conceivably expose them to liability.

Example: A platform might automatically remove political speech about sensitive topics, not because it's illegal, but to avoid any risk of being deemed insufficiently protective.

2. Vague "Harmful But Legal" Category

The Act requires platforms to protect children from content that is "harmful" even when it is legal, and the largest services must give adults tools to filter certain kinds of legal content. But who decides what counts as "harmful"? This subjective category creates uncertainty and gives platforms broad power to censor legal speech.

Example: Controversial political opinions, commentary touching on protected characteristics, or discussions of sensitive historical events might all be deemed "harmful" despite being legal.

3. Chilling Effect on Speech

Even if platforms don't actively censor you, knowing that content is being monitored and could lead to account penalties creates self-censorship. People avoid discussing legitimate but controversial topics.

Example: Academics researching sensitive topics, journalists investigating controversial stories, or citizens critiquing government policy might self-censor to avoid platform action.

4. Concentrated Private Power

The Act delegates massive speech-policing power to private companies (mostly American tech firms) with no democratic accountability and opaque decision-making processes.

Example: A handful of Silicon Valley companies effectively become arbiters of what British citizens can say online, with little transparency or recourse.

5. Mission Creep

Regulations often expand beyond their original scope. What starts as protecting children can evolve into broader content controls and surveillance that affect everyone.

Example: Age verification systems require collecting identity data from all users, creating privacy concerns and surveillance infrastructure that could be expanded.

How Nostr's Architecture Addresses This

Nostr isn't designed to circumvent regulation - it's structured fundamentally differently, which changes the entire dynamic:

No Central Platform to Regulate

The Online Safety Act regulates "platforms." Nostr isn't a platform - it's a protocol. There's no company to fine, no CEO to pressure, no central authority to compel action.

Impact: This doesn't mean "no rules" - it means rules are enforced through different mechanisms (relay choices, client filtering, user control) rather than centralized corporate moderation.
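
To make "protocol, not platform" concrete, here is a simplified sketch of the core NIP-01 message shapes in TypeScript (the filter fields are abbreviated). This is essentially the whole interface a relay speaks: anyone can implement either side of it, and there is no central API to be licensed, fined, or switched off.

```typescript
// Simplified NIP-01 shapes: signed events plus a handful of JSON messages.

interface NostrEvent {
  id: string;         // sha256 hash of the serialized event
  pubkey: string;     // author's public key (hex)
  created_at: number; // unix timestamp in seconds
  kind: number;       // e.g. 1 = short text note
  tags: string[][];
  content: string;
  sig: string;        // Schnorr signature over the id
}

// Client -> relay: publish an event, subscribe with a filter, close a subscription.
type ClientMessage =
  | ["EVENT", NostrEvent]
  | ["REQ", string, { kinds?: number[]; authors?: string[]; limit?: number }]
  | ["CLOSE", string];

// Relay -> client: matching events, acceptance results, end of stored events.
type RelayMessage =
  | ["EVENT", string, NostrEvent]
  | ["OK", string, boolean, string]
  | ["EOSE", string];
```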

Relays vs Platforms

Nostr relays are simple servers that store and distribute signed messages. They typically have no user accounts, no recommendation algorithms, and they don't collect user data beyond what's necessary to operate. They're fundamentally simpler than "platforms."

Impact: Individual relays can have policies (no spam, no illegal content, topic-specific), but if one relay rejects your content, you simply use another. No single point of control.
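
As a sketch of what "simply use another relay" looks like in practice, the snippet below broadcasts one already-signed event to several relays over plain WebSockets. The relay URLs are placeholders; each relay answers independently, so a rejection by one has no effect on the others.

```typescript
// Illustrative relay list - any relays the user trusts will do.
const relays = [
  "wss://relay-one.example.com",
  "wss://relay-two.example.com",
  "wss://relay-three.example.com",
];

// Publish one signed event to every relay in the list.
function publish(event: object): void {
  for (const url of relays) {
    const ws = new WebSocket(url);
    ws.onopen = () => {
      ws.send(JSON.stringify(["EVENT", event])); // NIP-01 publish message
    };
    ws.onmessage = (msg) => {
      // Each relay replies ["OK", <event id>, <accepted?>, <reason>].
      const [type, , accepted, reason] = JSON.parse(msg.data);
      if (type === "OK") {
        console.log(url, accepted ? "accepted" : `rejected: ${reason}`);
      }
      ws.close();
    };
  }
}
```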

User-Owned Identity

On traditional platforms, your identity (account) is owned by the platform and can be deleted. On Nostr, your identity is a cryptographic key you generate and control. No one can "deplatform" your identity.

Impact: Platform bans can't silence you. Controversial but legal speech is structurally protected because no central authority controls who can speak.
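
A minimal sketch of what "your identity is a key" means, assuming the @noble/curves and @noble/hashes packages (low-level primitives that much Nostr tooling builds on). The key is generated locally, and the resulting signed note can be verified by anyone without a platform's involvement.

```typescript
import { schnorr, secp256k1 } from "@noble/curves/secp256k1";
import { sha256 } from "@noble/hashes/sha256";
import { bytesToHex } from "@noble/hashes/utils";

// 1. Generate a private key locally; nobody else ever sees it.
const secretKey = secp256k1.utils.randomPrivateKey();

// 2. Derive the public key - the identity others follow and verify against.
const pubkey = bytesToHex(schnorr.getPublicKey(secretKey));

// 3. Sign a note per NIP-01: the id is the sha256 of the serialized event,
//    and the Schnorr signature over that id proves authorship.
const created_at = Math.floor(Date.now() / 1000);
const kind = 1; // short text note
const tags: string[][] = [];
const content = "Hello from a key only I control.";

const id = bytesToHex(
  sha256(JSON.stringify([0, pubkey, created_at, kind, tags, content]))
);
const sig = bytesToHex(schnorr.sign(id, secretKey));

const event = { id, pubkey, created_at, kind, tags, content, sig };
// Any relay or client can verify this event; none of them can revoke the key.
```

The same key works with any client and any relay; the only way to lose the identity is to lose the private key, which is why Nostr clients put so much emphasis on key backup.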

Transparent, User-Controlled Moderation

Instead of opaque platform algorithms deciding what you see, Nostr clients can implement filtering that users understand and control. You choose which relays to use, which clients to trust, which content to filter.

Impact: Moderation becomes transparent and user-controlled rather than hidden and corporate-controlled. You're empowered to protect yourself rather than relying on platform policies.
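
As an illustrative sketch (relay URLs and pubkeys are placeholders), filtering can happen right at the point of subscription: the user's client asks its chosen relays only for notes from authors the user follows, using an ordinary NIP-01 filter rather than an opaque recommendation algorithm.

```typescript
// The user's own choices: which relays to read from, and whose notes to request.
const myRelays = ["wss://relay-a.example.com", "wss://relay-b.example.com"];
const followedAuthors = ["<hex pubkey 1>", "<hex pubkey 2>"];

for (const url of myRelays) {
  const ws = new WebSocket(url);
  ws.onopen = () => {
    // NIP-01 REQ: only kind-1 text notes from explicitly followed authors.
    ws.send(
      JSON.stringify([
        "REQ",
        "follows-feed",
        { kinds: [1], authors: followedAuthors, limit: 50 },
      ])
    );
  };
  ws.onmessage = (msg) => {
    const [type, , event] = JSON.parse(msg.data);
    if (type === "EVENT") console.log(event.pubkey, event.content);
  };
}
```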

Finding the Balance

This isn't about ignoring the real problems the Online Safety Act tries to address. It's about whether the solution is proportionate and preserves free expression.

✅ What We Need

  • Protection of children from exploitation and abuse
  • Removal of genuinely illegal content (CSAM, credible threats)
  • Tools for users to control their own experience
  • Accountability for platforms that enable harm

⚠️ What We Risk

  • Censorship of legal but controversial speech
  • Self-censorship due to chilling effects
  • Concentration of speech control in private hands
  • Surveillance infrastructure affecting privacy

Nostr represents an alternative approach: instead of giving platforms broad power to police speech (with all the incentives for over-moderation that creates), it distributes power - to relay operators, client developers, and ultimately users themselves.

Important Clarifications

Nostr Doesn't Make You Above the Law

UK law still applies. Posting genuinely illegal content (threats, CSAM, incitement to violence) is still illegal whether on Twitter or Nostr. The difference is architectural, not legal.

Individual Relays Can Moderate

Relay operators can choose what content they're comfortable storing and distributing. They can block spam, illegal content, or anything against their policies. But users can choose different relays.
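
As a sketch of the principle, a relay's policy can be as simple as a function that decides whether to store an incoming event. The acceptEvent hook and its thresholds below are hypothetical, not any particular relay implementation's API.

```typescript
interface NostrEvent {
  pubkey: string;
  kind: number;
  content: string;
  tags: string[][];
}

// Hypothetical operator policy for one relay.
const blockedPubkeys = new Set<string>(["<hex pubkey the operator has banned>"]);
const maxContentLength = 64_000;

function acceptEvent(event: NostrEvent): { accept: boolean; reason?: string } {
  if (blockedPubkeys.has(event.pubkey)) {
    return { accept: false, reason: "blocked: violates this relay's policy" };
  }
  if (event.content.length > maxContentLength) {
    return { accept: false, reason: "invalid: content too large" };
  }
  // Everything else is stored. An author rejected here can publish the
  // same event, unchanged, to any other relay.
  return { accept: true };
}
```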

Clients Can Implement Filters

Client applications can offer content filtering, keyword blocking, user muting, and other tools. The difference is transparency and user control rather than hidden algorithms.
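
A minimal sketch of such a filter, with illustrative names rather than any specific client's API: the mute rules live on the user's device, are readable in plain code, and hide content only from that user's own view - nothing is deleted from relays.

```typescript
// User-defined mute rules, stored and editable locally.
const mutedPubkeys = new Set<string>(["<hex pubkey>"]);
const mutedWords = ["spamcoin"];

// Decide whether this client should render a given event.
function shouldDisplay(event: { pubkey: string; content: string }): boolean {
  if (mutedPubkeys.has(event.pubkey)) return false;
  const text = event.content.toLowerCase();
  return !mutedWords.some((word) => text.includes(word.toLowerCase()));
}
```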

This Isn't About Enabling Abuse

Censorship-resistance protects legitimate speech from arbitrary removal. It's not designed to protect illegal content or harassment - it's about ensuring legal speech can't be silenced by unaccountable platforms.

Interested in the Alternative?

Explore how Nostr's decentralized architecture offers a different approach to balancing safety and free expression.