
VRAM.cpp: Running llama-fit-params directly in your browser


People on this subreddit are constantly asking whether their system can run a particular model. The "VRAM calculators" I've found either give very rough estimates or support only a small set of models. Both limitations come down to the same thing: it's complex to work out how much memory is used by the numerous attention variants on the market today. The result is a tool that works for a few people but doesn't answer the question: "Can my 16GB GP…
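To see why naive calculators go wrong, a back-of-envelope estimate has at least two parts: the quantized weights and the KV cache, and the KV-cache term depends heavily on the attention variant (full multi-head vs. grouped-query attention). The sketch below is a standard approximation, not the actual logic of llama-fit-params; all parameter values are illustrative.

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem=2):
    # K and V each store one vector per layer, per KV head, per position.
    # With grouped-query attention, n_kv_heads is much smaller than the
    # number of query heads, which is exactly what rough calculators miss.
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

def weights_bytes(n_params, bits_per_weight):
    # Quantized weight footprint, ignoring per-block scale overhead.
    return n_params * bits_per_weight / 8

# Illustrative: an 8B-parameter model at 4-bit quantization, 8k context,
# 32 layers, 8 KV heads (GQA), head dim 128, fp16 KV cache.
total = weights_bytes(8e9, 4) + kv_cache_bytes(32, 8, 128, 8192)
print(f"{total / 2**30:.1f} GiB")  # weights + KV cache, excluding compute buffers
```

Real usage also includes compute/scratch buffers and framework overhead, which is where per-model measurement (rather than a generic formula) pays off.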

Original article: Reddit
