WeSearch
TAG · #DEEPSEEK-V4

DeepSeek V4 coverage.

Every story in the WeSearch catalog tagged with #deepseek-v4, chronological, with view counts. Subscribe to the per-tag RSS feed to follow this topic in your reader of choice.

8 stories tagged with #deepseek-v4, in publish-time order across the WeSearch catalog. Tag pages update as new stories are ingested.

RSS feed for this tag → or search "DeepSeek V4"

REDDIT

Kimi K2.6 vs DeepSeek V4 Pro

2 views
LOCALLLAMA

No GGUFs for DeepSeek V4-Flash as yet?

Wondering why there aren't any "name brand" (like unsloth, bartowski) GGUFs as yet for DeepSeek V4 Flash?…

4 views
REDDIT

anyone actually tried deepseek v4 pro for coding?

so v4 pro dropped and barely anyone is talking about it. feels weird since when kimi k2.6 came out i seen post about it everywhere anyone here tried v4 pro for actual code work? ho…

5 views
SIMON WILLISON'S WEBLOG

DeepSeek V4 - almost on the frontier, a fraction of the price

Chinese AI lab DeepSeek's last model release was V3.2 (and V3.2 Speciale) last December. They just dropped the first of their hotly anticipated V4 series in the shape of two previ…

3 views
REDDIT

The exact KV cache usage of DeepSeek V4

Figure 1 of DSV4 paper seems to imply that DSV3.2 uses ~50GB at 1m context and DSV4 uses ~5GB: ***Numbers updated with the KV cache breakdown from vllm*** From my own calculations,…

4 views
REDDIT

llama.cpp DeepSeek v4 Flash experimental inference

Hi, here you can find experimental llama.cpp support for DeepSeek v4, and here there is the GGUF you can use to run the inference with "just" (lol) 128GB of RAM. The model, even qu…

5 views
REDDIT

Decreased Intelligence Density in DeepSeek V4 Pro

In the V3.2 paper, they mentioned: Second, token efficiency remains a challenge; DeepSeek-V3.2 typically requires longer generation trajectories (i.e., more tokens) to match the ou…

9 views
REDDIT

DeepSeek V4 Update


6 views