
Editorial standards.

WeSearch is an aggregator with a community layer, not a newsroom. But aggregation is editorial work — what we include, exclude, classify, and amplify is a choice. This is the standard those choices are made against.

WeSearch does not produce reported journalism. We aggregate other publishers' work, summarize stories with AI for searchability, and run a community comments layer. None of that is neutral. Choosing which 700 sources to include and which 5,000 to leave out is an editorial position. Choosing how to moderate the comment layer is an editorial position. Choosing what the daily editorial says is an editorial position. This page is the standard we hold those choices to.

The source list

Inclusion criteria. A source is eligible for the catalog when it consistently publishes original reporting (not pure aggregation), has an identifiable editorial line (someone is responsible for it), and offers a usable RSS or Atom feed. We accept opinion sources alongside reported sources, but we tag them differently in the source directory so readers can tell them apart.

Balance. Within each topic category, we try to ensure ideological and geographic spread. Adding a US-left analysis source nudges us to find a US-right analysis source for the same beat; adding a Western European source nudges us to find a non-Western counterpart. We don't claim balance is achieved at every moment, only that we are aiming at it. If you notice an imbalance, report it via /support; we will respond.

Removal criteria. A source is removed when it (a) goes dormant for 60+ days, (b) starts republishing wire copy without original reporting, (c) begins publishing AI-generated articles without disclosure, (d) is caught fabricating sources, or (e) repeatedly violates copyright in its own publishing in ways that would expose WeSearch by association. Removals are logged in a public changelog.

Story pages

Every story we ingest gets a stable summary page at /s/<slug>. The page includes the publisher's headline, byline, source name, publish time, a 3–5-sentence AI-generated TL;DR clearly labeled as AI-assisted, an AI-generated key-facts list also labeled, and a prominent link to the original article. We never reproduce the full article body. If a publisher requests we de-list a specific story, we will do so within one working day; the source link will be replaced with a "removed at publisher request" notice.

AI-generated summaries are not represented as the publisher's words. The TL;DR uses neutral framing, sticks to facts present in the source, and is regenerated if a reader flags it as inaccurate. We do not invent quotes. We do not extrapolate. The summary's job is to make the page searchable and orient the reader, not to replace the original.

The community layer

Anonymity. Reactions and comments are posted under generated handles tied to a hashed local API key. We can rate-limit or ban a key without ever knowing who's behind it. Anonymity is the default and the design intent.
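
One plausible way such a scheme could work is a keyed hash: only the digest is ever stored or compared, so a handle is stable and bannable without the platform ever holding the key itself. This is a sketch under assumed names (the pepper, word lists, and function are hypothetical, not the actual implementation):

```python
import hashlib
import hmac

# Illustrative word lists; the real generator presumably differs.
ADJECTIVES = ["quiet", "amber", "brisk", "plain"]
NOUNS = ["harbor", "lantern", "meadow", "signal"]

def handle_for(api_key: str, pepper: bytes = b"server-side-secret") -> str:
    """Map an API key to a stable generated handle.

    Only the HMAC digest would be stored server-side, so rate limits and
    bans attach to the digest, never to an identity. Same key in, same
    handle out; the key itself is never persisted.
    """
    digest = hmac.new(pepper, api_key.encode(), hashlib.sha256).digest()
    adjective = ADJECTIVES[digest[0] % len(ADJECTIVES)]
    noun = NOUNS[digest[1] % len(NOUNS)]
    suffix = int.from_bytes(digest[2:4], "big") % 10000
    return f"{adjective}-{noun}-{suffix:04d}"
```

The design choice this illustrates: a ban list of digests can block a key forever without the operator ever being able to reverse a digest back to a person.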

Comment moderation. Comments are public by default. We hide comments that (a) target individuals with threats or doxxing, (b) post the personal information of any non-public figure, (c) repeatedly post commercial spam or affiliate-link drops, (d) post content that constitutes incitement under applicable law, (e) post sexual content involving minors. We do not hide comments for being wrong, for being unpopular, for being rude in ordinary disagreement, or for taking positions outside the editorial mainstream.
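
The hide criteria read as a closed list: a comment is hidden only when a report matches one of the five reasons, and nothing else qualifies. A sketch of that logic (the names are ours, for illustration, not the platform's):

```python
from enum import Enum

class HideReason(Enum):
    # Hypothetical names for the five hide criteria listed above.
    THREATS_OR_DOXXING = "targets individuals with threats or doxxing"
    PRIVATE_INFO = "personal information of a non-public figure"
    COMMERCIAL_SPAM = "repeated commercial spam or affiliate links"
    INCITEMENT = "incitement under applicable law"
    CSAM = "sexual content involving minors"

VALID_REASONS = {r.name.lower() for r in HideReason}

def should_hide(reported_reasons: set[str]) -> bool:
    """Hide only when a report matches the closed list.

    Reports like "wrong", "unpopular", or "rude" never match, so they
    never hide a comment -- mirroring the policy that disagreement is
    not a moderation reason.
    """
    return any(reason.lower() in VALID_REASONS for reason in reported_reasons)
```

The point of the closed list is that moderation cannot drift: adding a new reason to hide comments would require changing the published policy, not just a moderator's judgment.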

Transparency. When a comment is hidden, the original poster sees a notice. When a key is banned, the key-owner sees a notice and can appeal via /support. Bans are time-bound by default; permanent bans are reserved for repeat offenses. The platform does not silently shadow-ban; if your comment isn't visible to others, it isn't visible to you either.

The daily editorial

The daily editorial at /daily is a 350–550-word AI-generated note that runs through the day's main stories in a single editorial voice. It is clearly labeled as AI-assisted commentary, not reported news. The voice aims for the literary, restrained, observational tone of an Atlantic or Economist daily briefing, not the engagement bait of a feed.

The daily editorial does not invent facts. It synthesizes themes from the day's curated stories, links back to the originals, and ends with a single sentence that situates today in a longer arc. Readers can flag drafts they consider inaccurate or unfair via /support; we replace flagged drafts within a working day.

The AI overseer pulse

The Pulse tab includes an AI-overseer commentary that runs every ~30 minutes, summarizing patterns across the live feed (e.g., "tech regulators in three jurisdictions issued statements on AI safety today"). This is also clearly labeled as AI-assisted commentary. Same standard: no invented facts, links back to sources, flaggable.

What we will not do

We will not reproduce full article bodies. We will not invent quotes or facts in AI-generated summaries or commentary. We will not silently shadow-ban. We will not sell story placement, source inclusion, or editorial direction. And we will not identify the people behind anonymous keys, because by design we never hold that information.

Conflict-of-interest disclosure

The operator of WeSearch has no equity stake in any of the publishers in the catalog and no advertising relationships with any of them. If that ever changes (e.g., we partner formally with a publisher for a co-published feature), we will disclose at the point of publication and on this page.

Corrections policy

If a story page contains a factual error in our AI-generated summary, send the URL and a description of the error to /support. We replace the summary within one working day. If the error is in the publisher's original reporting, we forward your note to the publisher and add a "correction noted" badge to our story page until the publisher responds.

Frequently asked

Does WeSearch produce its own reporting?

No. We aggregate other publishers' reporting and run a community comments layer. We publish AI-generated summaries on story pages and an AI-generated daily editorial, both clearly labeled as AI-assisted commentary.

How are comments moderated?

Comments are public by default. We hide comments that target individuals, dox non-public figures, post spam, or constitute incitement. We do not hide comments for being unpopular, wrong, or outside the mainstream.

How do I report a factual error?

Send the story URL and a description of the error to /support. AI-summary errors are corrected within one working day. Errors in the publisher's original are forwarded to the publisher, with a "correction noted" badge on our page until the publisher responds.

Do donors get editorial influence?

No. Donors don't get story placement, source inclusion, daily editorial direction, or any other editorial input. Donations keep the lights on.