CUDA + ROCm simultaneously with -DGGML_BACKEND_DL=ON!
Apr 30, 2026 · 10:46 PM UTC
via r/LocalLLaMA · Read full at r/LocalLLaMA →
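The title refers to llama.cpp / ggml's dynamic backend loading: with the CMake option GGML_BACKEND_DL=ON, each GPU backend is built as a shared library that is discovered and loaded at runtime, so a CUDA (NVIDIA) backend and a ROCm/HIP (AMD) backend can coexist in one installation instead of being mutually exclusive at compile time. Below is a minimal build sketch; the two-pass build, the output paths, and the library names (libggml-cuda.so, libggml-hip.so) are assumptions about a typical Linux setup with both toolkits installed, not details taken from the linked post.

    # Pass 1: CUDA backend as a loadable module
    cmake -B build-cuda -DGGML_BACKEND_DL=ON -DGGML_CUDA=ON
    cmake --build build-cuda --config Release -j

    # Pass 2: ROCm/HIP backend in a separate build directory
    cmake -B build-hip -DGGML_BACKEND_DL=ON -DGGML_HIP=ON
    cmake --build build-hip --config Release -j

    # Place the HIP backend next to the CUDA-built binaries so the runtime
    # loader can find both (exact file names and paths may differ by version)
    cp build-hip/bin/libggml-hip.so build-cuda/bin/

At startup the llama.cpp binaries scan for ggml backend libraries next to the executable and register whichever ones load successfully, which is what allows NVIDIA and AMD GPUs to be driven from the same build.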