7 stories tagged with #local-llms, in publish-time order across the WeSearch catalog. Tag pages update as new stories are ingested.
RSS feed for this tag → or search "Local LLMs"
Devstral 2: Run Mistral's Open Coding Agent Locally
Set up Devstral 2 or Devstral Small 2 locally with Ollama. 72.2% SWE-bench, 256K context, Apache 2.0 — the best open coding agent you can self-host…
Give a 9B model broken tools. By hour 20 it'll have the correct diagnosis
I replaced Claude Pro with a local 9B model for a week, and finally found out what I was paying $20 a month for
The gap was smaller than I expected…
My local agentic dev setup today
Open Models - April 2026 - One of the best months of all time for Local LLMs?
Benchmarking Local LLM/Harness Combinations
I’ve been running a small benchmark, harness-bench, that pairs local LLMs (served via llama.cpp’s llama-server) with agent harnesses (Aider, Claude Code, Ope...…
Running Local LLMs Offline on a Ten-Hour Flight
I flew from London to Google Cloud Next 2026 in Las Vegas. Ten hours with no in-flight wifi. I used the time to test how far a modern MacBook can carry engineering work on local LL…