
r/LocalLLaMA on WeSearch

Recent social headlines from r/LocalLLaMA.

R/LOCALLLAMA

Follow-up: Qwen3.6-27B on 1× RTX 3090 — pushing to ~218K context + ~50–66 TPS, tool calls now stable (PN12 fix)

1h · 1 view
R/LOCALLLAMA

Open Models - April 2026 - One of the best months of all time for Local LLMs?

1h · 1 view
R/LOCALLLAMA

New Stealth Model : Owl Alpha

5h · 5 views
R/LOCALLLAMA

DeepSeek Vision/Multimodal 👀

1d · 14 views
R/LOCALLLAMA

No, nothing special, just a tiny local language model playing a game it itself wrote.

1d · 13 views
R/LOCALLLAMA

I stumbled on a Gemma 4 chat template bug for tools and fixed it

1d · 6 views
R/LOCALLLAMA

MiMo-V2.5-GGUF (preview available)

1d · 5 views
R/LOCALLLAMA

Hipfire dev update: full AMD arch validation incoming (RDNA 1 thru 4, plus Strix Halo and bc250)

1d · 5 views
R/LOCALLLAMA

Deepseek v4 pricing is genuinely silly, did the math and now i am questioning my entire stack

1d · 5 views
R/LOCALLLAMA

100M tokens for $2.65 (Deepseek V4 Pro)

1d · 5 views
R/LOCALLLAMA

Why isn’t LLM reasoning done in vector space instead of natural language?

1d · 7 views
R/LOCALLLAMA

llama.cpp's Preliminary SM120 Native NVFP4 MMQ Is Merged

1d · 6 views
R/LOCALLLAMA

great work, Gemma

1d · 6 views


Visit r/LocalLLaMA directly →