r/LocalLLaMA · 24gb vram to 48gb vram
May 2, 2026 · 4:48 AM UTC
Read full at r/LocalLLaMA →
More from r/LocalLLaMA:
- Mistral Medium 3.5 128b ggufs are fixed
- Unsloth solved bug in Mistral Medium 3.5 implementation
- A Dark-Money Campaign Is Paying Influencers to Frame Chinese AI as a Threat
- "LLM is created so engineer don't have to write a report", anyway found out ONLYOFFICE can connect to OpenAI compatible, using Qwen 3.6 to do elaboration.
- What kind of device is suitable for running local LLM?
- Been using Qwen-3.6-27B-q8_k_xl + VSCode + RTX 6000 Pro As Daily Driver