WeSearch

News without algorithms.

Most news apps sort by predicted engagement. WeSearch sorts by publish time. The feed is identical for every reader. There is no personalization vector, no virality smoothing, no engagement-velocity model — just chronology and dedup.
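The whole ordering logic described above fits in a few lines. Here is a minimal sketch, assuming each story carries a publish timestamp and a canonical URL used for dedup; the names (`Story`, `build_feed`) are illustrative, not WeSearch's actual code:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Story:
    url: str               # canonical URL, used for dedup
    title: str
    published_at: datetime

def build_feed(stories: list[Story]) -> list[Story]:
    """Newest first; the first occurrence of a URL wins. Takes no reader input,
    so the output is identical for everyone at a given moment."""
    seen: set[str] = set()
    feed: list[Story] = []
    for s in sorted(stories, key=lambda s: s.published_at, reverse=True):
        if s.url not in seen:
            seen.add(s.url)
            feed.append(s)
    return feed
```

Note what the function does not take as an argument: a reader. That is the entire point.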

"Algorithm" has become a catch-all word for whatever the feed product is doing under the hood. In practice it usually means a ranking model trained on engagement: clicks, dwell, scroll, share, return. The model decides which headlines you see and in what order, and the publishers downstream of it bend their reporting toward whatever traits the model rewards. WeSearch is, deliberately, news without that.

This page is the precise version of the claim — exactly which algorithmic layers other news products use that WeSearch doesn't, and what the alternatives are.

Layers we don't run

Ranking model. No model scores headlines for you. No "for you" feed. No Reels-style algorithmic surface. The home feed is sorted by publish time, period.

Engagement-velocity boost. Many algorithmic feeds detect that a story is "going viral" within minutes and amplify it accordingly. We don't. A story trends with us when many distinct anonymous handles react to it, and that surfaces only in the explicitly-labeled trending row, not in the main feed.
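The distinction between velocity and breadth can be made concrete. A sketch of the distinct-handle count described above, with an invented illustrative threshold (`min_handles`) rather than any real WeSearch parameter:

```python
from collections import defaultdict

def trending(reactions: list[tuple[str, str]], min_handles: int = 50) -> list[str]:
    """reactions: (story_url, anonymous_handle) pairs.
    A story trends on how many DISTINCT handles reacted, not on reaction
    volume or how fast reactions arrived."""
    handles: dict[str, set[str]] = defaultdict(set)
    for url, handle in reactions:
        handles[url].add(handle)
    hot = [(len(hs), url) for url, hs in handles.items() if len(hs) >= min_handles]
    return [url for _, url in sorted(hot, reverse=True)]
```

Repeated reactions from one handle count once, so a single account hammering a story cannot make it trend.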

Personalization vector. No model that learns your preferences from prior taps and shows you more of the same thing. We don't have a per-reader profile to learn from.

Topic clustering. No semantic-similarity model that groups stories. Categories on WeSearch come from a static directory we maintain — each source is hand-classified into a topic — not from cluster output.
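"Static directory" means exactly what it sounds like: a hand-maintained lookup table, not a model. A sketch with hypothetical entries (the domains and topics below are made up for illustration):

```python
# Hypothetical excerpt of a hand-maintained source -> topic directory.
SOURCE_TOPICS: dict[str, str] = {
    "example-wire.com": "World",
    "example-tech.net": "Technology",
}

def topic_for(source_domain: str) -> str:
    # A plain dictionary lookup: no embeddings, no clustering,
    # no per-story classification model.
    return SOURCE_TOPICS.get(source_domain, "Uncategorized")
```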

Recommendation system. No "you might also like." No related-stories model. The "more from this source" block on a story page is exactly that — a list of recent stories from the same publisher, by publish time.

Engagement-prediction layer. No model that predicts whether you'll click a headline and uses that to reorder.

Why we don't run them

Each of those layers has the same structural property: it makes the feed reflect what the platform thinks will keep you engaged, which is not the same as what is actually happening. A platform that runs them at scale slowly bends what news is for its readers, toward headlines that test well in the model rather than headlines that are most informative.

What we do instead

What about AI on story pages?

We use AI for two narrow things: a 3–5 sentence TL;DR per story page, clearly labeled, and a daily editorial note at /daily, also labeled. Neither affects feed ordering. Neither personalizes. The TL;DR is generated once per story and is the same for every reader who lands on that page.
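"Generated once per story" is a cache-on-first-request pattern. A minimal sketch, assuming some summarizer function exists; the names here are illustrative, not WeSearch's implementation:

```python
# One cached summary per story; every reader sees the same text.
_tldr_cache: dict[str, str] = {}

def tldr_for(story_id: str, generate) -> str:
    """Run the (hypothetical) summarizer at most once per story.
    Subsequent readers get the cached text, so the TL;DR cannot
    vary per reader."""
    if story_id not in _tldr_cache:
        _tldr_cache[story_id] = generate(story_id)
    return _tldr_cache[story_id]
```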

What about search?

Search ranks by lexical relevance to your query, not by engagement. A search for "kashmir" returns stories that mention Kashmir, newest first. There is no learn-from-clicks signal injected.
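That behavior is simple enough to sketch: filter on a case-insensitive match, then sort by recency. This is an assumption-laden simplification (real lexical search typically tokenizes and scores), but it captures the two properties that matter here: no engagement signal, no reader signal.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Article:
    title: str
    body: str
    published_at: datetime

def search(query: str, articles: list[Article]) -> list[Article]:
    """Lexical match only, newest first. No click feedback,
    no per-reader reordering."""
    q = query.lower()
    hits = [a for a in articles
            if q in a.title.lower() or q in a.body.lower()]
    return sorted(hits, key=lambda a: a.published_at, reverse=True)
```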

The constraint, made plain

If we ever introduce an algorithmic layer that affects feed ordering or what reaches you in push notifications, we'll publish that fact prominently and explain what it does. Currently there is no such layer. The home feed today is the same chronological, deduplicated feed it was a year ago, and the same one it will be a year from now.

What this looks like in practice

If you load wesearch.press at 9am UTC and again at 5pm UTC, you'll see different stories, but the difference comes entirely from the passage of time, not from personalization. Two readers loading the site at the same moment see exactly the same headlines in the same order. Refresh the page; same feed. Open it on your laptop and your phone; same feed. Open it logged in (anonymously, via your local key) or freshly without any key; same feed. Reader invariance is the design constraint, and the absence of a per-reader profile is what makes it possible.

Why some readers find this jarring at first

Modern news products have trained us to expect that the feed adapts. The first few times you open a chronological feed, you see headlines you wouldn't have clicked on, headlines from publishers you don't usually read, and stories you'd have ignored if a model had filtered them out. That's the point — but it takes a few days of reading to recalibrate. Most readers report that after a week, the chronological feed feels less hectic and more informative than the algorithmic feeds they're used to. A few don't, and prefer to go back to a personalized product. Both are reasonable.

Bottom line

The feed is chronological, deduplicated, and identical for every reader. No model decides what you see, and if that ever changes we'll say so prominently.

Frequently asked

If I follow specific publishers, isn't that personalization?

Slightly, but the per-reader filter is purely subtractive — you're scoping the same chronological feed to a smaller set of sources. There's no model rearranging within the smaller set.
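"Subtractive" has a precise meaning: filtering can only remove items from the shared feed, never reorder or inject. A sketch, with illustrative names:

```python
def scoped_feed(feed: list[tuple[str, str]],
                followed: set[str]) -> list[tuple[str, str]]:
    """feed: (story_url, source) pairs, already in the shared chronological
    order. Following only drops stories from unfollowed sources; the
    relative order of what remains is untouched."""
    return [item for item in feed if item[1] in followed]
```

Because the filter is a pure subsequence operation, two readers who follow the same sources see literally identical feeds.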

Are TL;DR summaries different per reader?

No. The TL;DR is generated once per story and is identical for every reader.

What about A/B tests?

We don't run reader-segment A/B tests on feed ordering. We have run small UI A/B tests in the past on layout (button placement, color); none affect feed content.

Can I prove the feed is the same as my friend's feed?

Open WeSearch on two devices side by side at the same moment. Compare the order. They'll match exactly.