r/LocalLLaMA on WeSearch
Recent social headlines from r/LocalLLaMA.
R/LOCALLLAMA
Follow-up: Qwen3.6-27B on 1× RTX 3090 — pushing to ~218K context + ~50–66 TPS, tool calls now stable (PN12 fix)
R/LOCALLLAMA
Open Models - April 2026 - One of the best months of all time for Local LLMs?
R/LOCALLLAMA
New Stealth Model: Owl Alpha
R/LOCALLLAMA
DeepSeek Vision/Multimodal 👀
R/LOCALLLAMA
No, nothing special, just a tiny local language model playing a game it wrote itself.
R/LOCALLLAMA
I stumbled on a Gemma 4 chat template bug for tools and fixed it
R/LOCALLLAMA
MiMo-V2.5-GGUF (preview available)
R/LOCALLLAMA
Hipfire dev update: full AMD arch validation incoming (RDNA 1 thru 4, plus Strix Halo and bc250)
R/LOCALLLAMA
DeepSeek V4 pricing is genuinely silly. I did the math and now I'm questioning my entire stack
R/LOCALLLAMA
100M tokens for $2.65 (Deepseek V4 Pro)
R/LOCALLLAMA
Why isn’t LLM reasoning done in vector space instead of natural language?
R/LOCALLLAMA
llama.cpp's Preliminary SM120 Native NVFP4 MMQ Is Merged