
If you've been waiting to try local AI development, please try it


I have long snobbishly felt that local models were not 'up to my standards' for local development, or otherwise able to compete with GHCP, Claude Code, Cursor, etc. Boy, was I wrong. With the rapid tightening of usage constraints and the enshittification of plans that all the cloud providers are starting to enact, I finally downloaded Opencode and got it set up with llama-server + Qwen3.6-27B at a reasonable quant with 128K context (unsure if I could push this further, but it's plenty for the time being). Cur…
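For anyone wanting to try the same thing, here is a minimal sketch of the llama-server side. The post doesn't give the exact command, so the GGUF filename, GPU layer count, and port below are assumptions; the flags themselves are standard llama.cpp options, and the 128K context mentioned maps to `-c 131072`.

```sh
# Minimal sketch of a llama-server launch for this kind of setup.
# Assumptions (not from the post): the model filename/quant, the
# GPU layer count, and the port. Adjust for your hardware.
llama-server \
  -m ./Qwen-27B-Q4_K_M.gguf \
  -c 131072 \
  -ngl 99 \
  --host 127.0.0.1 \
  --port 8080
```

llama-server exposes an OpenAI-compatible API, so Opencode just needs to be pointed at the resulting endpoint (`http://127.0.0.1:8080/v1`) as a custom provider.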

Read full at LocalLlama →
