Ecosystem News

Web of Trust on Nostr: How a Decentralised Network Handles Spam Without a Server

How Nostr uses social graph proximity (NIP-02 follows, NIP-51 mute lists, NIP-85 trusted assertions) to filter spam without central moderation. With trade-offs and current implementations.

By Nostr.co.uk
web of trust, spam, moderation, nip-85, amethyst, decentralisation

Every open network eventually runs into the same wall: anyone can post, so eventually somebody will post badly. Email solved this with centralised spam filters. Twitter solved it (sort of) with employees, machine-learning models, and a willingness to ban accounts. Nostr can do neither — there is no central server to filter, and there are no employees to enforce policy. So how does it stop a feed from drowning in noise?

The short answer is Web of Trust: the social graph itself becomes the filter. The longer answer is more interesting, because Web of Trust is now a layered system spanning three different NIPs, several reference implementations, and a set of real, unsolved trade-offs.

The spam problem on open networks

A working email address used to mean something. Then bulk-send tools made it trivial to send a million messages for the cost of one, and by the late 1990s inboxes were nearly unusable until centralised filters caught up. Twitter and Facebook fought the same battle from the other direction — closed accounts, captchas, server-side ML, and a paid-trust-and-safety team large enough to keep ahead of bot farms.

Nostr inherits the email-era problem but starts from a more exposed position. There is no signup gate. Anyone with a piece of code that can produce a valid signature can publish to a relay, and any other client subscribed to that relay will see the event. If the protocol is to scale, the spam answer cannot be “ban the spammer” because there is nobody with the authority to ban them and no centralised list of accounts to ban from.

Why server-side moderation cannot work on Nostr

A Nostr relay is dumb on purpose. It accepts events that match its policy (size limits, signature validity, perhaps a payment), stores them, and serves them back when clients ask. Most relays do not police content beyond that. Some specialised relays do — paid relays like Nostr.wine apply a Web of Trust filter at the relay layer, and certain communities run topic-curated relays — but the protocol does not require any of this.

That design choice is deliberate. The moment you require relays to moderate, you have rebuilt the centralised internet with extra steps: relays become liable, regulators apply pressure, and the censorship-resistance promise evaporates. Nostr’s answer is to push moderation to the edge: each client decides what to render. The relay is a transport. The client is the editor.

The challenge then becomes: how does a client know what to filter, when its only signal is a stream of cryptographically-signed events from strangers?

The social graph as filter

The starting point is NIP-02 — the follow list. When you follow someone on Nostr, your client publishes a signed event listing every public key you follow. That list is itself a Nostr event, public to anyone who asks, and it is portable — switch clients and your follow graph comes with you because it lives on relays, not in a single app.
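Concretely, a NIP-02 follow list is a kind-3 event whose `p` tags name the followed keys. The sketch below shows the shape; the pubkeys, relay URL, and petnames are placeholders, and a real client would compute the event `id` and `sig`.

```python
# Shape of a NIP-02 follow list: a kind-3 event with one "p" tag per
# followed account. All keys here are hypothetical placeholders.
follow_list_event = {
    "kind": 3,
    "pubkey": "a1b2...me",          # the author's own public key
    "created_at": 1700000000,
    "tags": [
        # ["p", <followed pubkey>, <relay hint>, <petname>]
        ["p", "91cf...alice", "wss://relay.example.com", "alice"],
        ["p", "14aa...bob",   "wss://relay.example.com", "bob"],
    ],
    "content": "",
    # "id" and "sig" would be computed and signed by the client
}

# Any client (or scorer) can recover the follow set from the tags:
followed = [t[1] for t in follow_list_event["tags"] if t[0] == "p"]
print(followed)
```

Because the event is public and signed, any other client can fetch it and rebuild your graph — which is exactly what makes the proximity scoring in the next section possible.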

This sounds like a basic feature, but it is also the cornerstone of trust scoring. Once everyone’s follow list is public, you can compute proximity. A user you follow directly is at distance 1. Someone they follow but you don’t is at distance 2. A complete stranger who appears in nobody’s follow graph is at infinity.

Web of Trust uses this graph as a filter:

  • Distance 1 (people you follow): always show
  • Distance 2 (followed by people you follow): probably show, perhaps with some signal
  • Distance 3+: filter aggressively, or show with a “from outside your network” warning
  • Disconnected (no path through the graph): treat as a stranger, default-hide

The maths is straightforward graph traversal. The art is in the tuning — at what distance do you cut off? Should certain nodes (popular accounts) carry more weight? Should you discount for staleness? Different clients answer this differently, which is itself a form of decentralisation: there is no single “official” trust score on Nostr, only the score that your chosen client computes for you.
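The traversal itself can be sketched as a breadth-first search over published follow lists. The graph, names, and cutoff below are toy assumptions, not any client's actual implementation:

```python
from collections import deque

def trust_distance(graph, me, target, max_depth=3):
    """Breadth-first search over follow lists: returns the hop count
    from `me` to `target`, or None if target is unreachable within
    max_depth. `graph` maps each pubkey to the set it follows."""
    if target == me:
        return 0
    seen = {me}
    frontier = deque([(me, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth >= max_depth:
            continue  # beyond the cutoff: stop expanding this branch
        for followed in graph.get(node, ()):
            if followed == target:
                return depth + 1
            if followed not in seen:
                seen.add(followed)
                frontier.append((followed, depth + 1))
    return None  # disconnected (or past the cutoff): default-hide

# Hypothetical toy graph
graph = {
    "me":    {"alice", "bob"},
    "alice": {"carol"},
    "carol": {"dave"},
}
print(trust_distance(graph, "me", "alice"))    # 1: always show
print(trust_distance(graph, "me", "carol"))    # 2: probably show
print(trust_distance(graph, "me", "mallory"))  # None: stranger
```

The tuning questions from above map directly onto this sketch: the cutoff is `max_depth`, and weighting popular nodes or discounting stale follows would replace the plain hop count with a weighted one.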

The follow graph is the cheapest, fastest, most legible signal in the system. It is also the easiest to game — anyone can buy follows on any social network — but on Nostr the cost of a fake account is non-zero (each requires a keypair and at least some activity to look real), and your own filter weights stay under your control.

Mute lists and content lists

NIP-51 extends the idea from “who do I follow” to “what do I never want to see again”. A NIP-51 list is a generic, signed event that names a set of public keys, event IDs, hashtags, or words — and a kind that says what the list is for.

The standard list types include:

  • Mute lists — public keys, words, or hashtags you never want to see
  • Pinned posts — events you want featured on your profile
  • Bookmarks — events you want to come back to
  • Communities — sets of curators you trust
  • Relay sets — groups of relays for different contexts

The clever part is that lists can be public or private. A public mute list is itself a piece of social signal — others can subscribe to it, treating you as a curator. A private mute list is encrypted to your own key and visible only to your clients. The same NIP supports both shapes.

This makes muting genuinely composable. A new user can subscribe to a trusted moderator’s public mute list and inherit several years of curation work in one click. They can layer their own private list on top. The curator can revoke entries, and the subscriber’s feed updates the next time their client syncs. None of this requires a central moderation team — it just requires public, signed lists and clients that know how to compose them.
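That composition step is simple enough to sketch. Below, a subscribed public mute list (kind 10000 per NIP-51, muted keys in plain `p` tags) is unioned with a private set that a real client would have decrypted from its own list's `content`; all keys are placeholders and the decryption is elided.

```python
# Sketch of composing mute lists per NIP-51. A public mute list keeps
# muted pubkeys in plain "p" tags; private entries live encrypted in
# `content` (decryption not shown). All keys are hypothetical.
curator_public_mutes = {
    "kind": 10000,
    "tags": [["p", "spammer1"], ["p", "spammer2"], ["word", "airdrop"]],
    "content": "",
}

my_private_mutes = {"spammer3"}  # decrypted from our own kind-10000 event

def muted_pubkeys(public_list, private_set):
    """Union of a subscribed public mute list and our private one."""
    public = {t[1] for t in public_list["tags"] if t[0] == "p"}
    return public | private_set

def visible(events, mutes):
    """Drop any event whose author is in the composed mute set."""
    return [e for e in events if e["pubkey"] not in mutes]

mutes = muted_pubkeys(curator_public_mutes, my_private_mutes)
feed = [{"pubkey": "alice"}, {"pubkey": "spammer1"}, {"pubkey": "spammer3"}]
print([e["pubkey"] for e in visible(feed, mutes)])  # ['alice']
```

When the curator revokes an entry, the next sync fetches a fresh public list and the union simply shrinks — no coordination with the subscriber required.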

NIP-85: Trusted Assertions

The newest piece is NIP-85: Trusted Assertions, still in draft status. It standardises a way to publish a signed claim about another user. Examples:

  • “I have verified that this person owns this domain.”
  • “This account is not a bot, in my opinion.”
  • “I trust this account’s posts on the topic of Bitcoin.”

Each assertion is a kind 30382 event signed by the asserter, addressed to a target, with a category and confidence. Clients that trust the asserter’s judgement can use the assertion to score the target. Stack enough assertions from people in your trust graph and you have a richer picture than the follow graph alone provides — someone might not be followed by anyone you know, but if three of your friends have explicitly vouched for them in the topic you are reading, that is meaningful.
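A minimal scoring sketch, following the shape described above (a kind 30382 event with a target, a category, and a confidence). Since NIP-85 is still in draft, the tag names and the averaging rule here are illustrative assumptions, not the spec:

```python
# Hypothetical kind-30382 assertions about one target, per the draft's
# shape: an asserter, a target ("d" tag), a category, a confidence.
assertions = [
    {"kind": 30382, "pubkey": "friend1",
     "tags": [["d", "target-pk"], ["category", "bitcoin"], ["confidence", "1.0"]]},
    {"kind": 30382, "pubkey": "friend2",
     "tags": [["d", "target-pk"], ["category", "bitcoin"], ["confidence", "0.5"]]},
    {"kind": 30382, "pubkey": "stranger",
     "tags": [["d", "target-pk"], ["category", "bitcoin"], ["confidence", "1.0"]]},
]

my_trusted = {"friend1", "friend2"}  # asserters inside my trust graph

def tag(event, name):
    """First value of the named tag on an event."""
    return next(t[1] for t in event["tags"] if t[0] == name)

def score(assertions, target, topic, trusted):
    """Average confidence over on-topic assertions about `target` from
    trusted asserters; assertions from outside the graph are ignored."""
    vals = [float(tag(a, "confidence")) for a in assertions
            if a["pubkey"] in trusted
            and tag(a, "d") == target
            and tag(a, "category") == topic]
    return sum(vals) / len(vals) if vals else 0.0

print(score(assertions, "target-pk", "bitcoin", my_trusted))  # 0.75
```

Note that the stranger's enthusiastic vouching contributes nothing: an assertion only carries the weight its asserter has in your graph, which is what keeps the scheme accountable.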

The reason for a standard, rather than every client inventing its own format, is interoperability. Once NIP-85 lands as final, an assertion you publish in Amethyst can be read by Coracle, Primal, or a future client that has not been written yet. The assertions ride on the same relay infrastructure as everything else. Trust becomes portable in the same way identity already is.

NIP-85 is also the cleanest answer to the “but how do I find people I should trust” cold-start problem. A new user can adopt a published trust set from a respected source — a journalist’s network, a developer community, a UK Nostr meetup organiser — and inherit a working filter on day one. They can prune and replace it as they go.

In practice today

Several clients already implement Web of Trust to varying degrees:

  • Amethyst computes a WoT score for every author it sees, displayed as a small badge on profiles. It pulls follow graphs in the background, runs the calculation locally, and weights interactions by score. The March 2026 v1.06.0 release added explicit NIP-85 support, so Amethyst can now both publish and consume trusted assertions. See the Amethyst client page for current capabilities.
  • GossipDB is a relay-side trust filter. It indexes follow graphs across many relays and exposes a service that clients can query for trust scores, offloading the calculation from low-power devices.
  • Primal uses algorithmic feeds that incorporate trust signals from your follow graph, surfacing replies and reposts from people in your network rather than from random strangers.
  • Coracle has had WoT-based moderation for several releases, allowing community curators to publish lists that members can subscribe to.
  • Paid relays (Nostr.wine, primal.net’s premium relay) apply WoT at the relay edge — events from accounts outside the operator’s trust set are quietly dropped before they ever reach a subscriber.

Together these implementations show the same architecture from several angles. The maths runs on the client when you have the cycles, on a relay-side service when you do not, and on the relay itself when the operator wants to take the load. The signed events that drive the system — follows, lists, assertions — are the same regardless of where they are computed.

Trade-offs and criticisms

Web of Trust is not free. The four real costs:

Cold start. A new user with no follows starts at distance ∞ from everyone. The honest answer is that the first week of a new Nostr account is genuinely harder than on a centralised network, because you have not yet built the graph that filters for you. Adoption of NIP-85 trust sets reduces this, but it does not vanish.

Filter bubbles. A graph-distance filter weighted toward people you already follow is, by construction, a popularity-and-proximity engine. Nostr’s answer is that this is your filter, not a centralised one, and you can tune or abandon it. The criticism still lands — a user-sovereign filter bubble is still a bubble.

Discovery cost. A new voice with no connection to your graph is invisible to you under aggressive WoT settings. Some clients show “from outside your network” sections to mitigate this; others surface trending content from across the wider graph as a sampling layer.

Sybil resilience. The system relies on the assumption that creating a believable account is more expensive than creating a believable spam account. That is true today. Whether it is true at scale, when there is more incentive to game the graph, is the open question. NIP-85’s reputation-by-vouching layer helps, because vouching is non-anonymous and accountable.

None of these makes Web of Trust the wrong answer. They make it a real answer rather than a theoretical one.

What this means for UK users

UK Nostr users have a particular reason to care about all of this. The Online Safety Act and the broader regulatory direction of travel point toward services being held responsible for content their algorithms surface. Nostr’s reply is that there is no service surfacing content — the user’s own client, with the user’s own filter, is making the editorial call.

That is not a complete legal argument; it is a structural one. But it does reframe what “moderation” means: rather than a platform deciding what you can see, it becomes a question of what trust set you have chosen to layer on top of the public protocol. For more on the UK-specific landscape, see our UK community hub and the how Nostr works walkthrough.

Web of Trust will keep evolving. Expect more clients to publish NIP-85 assertions, more relays to expose trust services, and the cold-start problem to keep getting smaller as published trust sets accumulate. For now, the system is real, working, and arguably the most interesting answer the open-protocol world has produced to a problem the closed platforms have never fully solved.