WeSearch

Estimating Black-Box LLM Parameter Counts via Factual Capacity

Tags: machine learning · large language models · parameter estimation · factual knowledge · model scaling
TL;DR (AI summary)

The paper introduces Incompressible Knowledge Probes (IKPs), a method for estimating the parameter counts of black-box large language models from their factual knowledge, which provides a lower bound on model size. Using a benchmark of 1,400 factual questions evaluated across 89 open-weight models, the authors find a strong correlation between measured knowledge capacity and parameter count, and apply the fit to estimate the sizes of proprietary models. The study also finds that factual knowledge scales log-linearly with parameter count and, unlike reasoning benchmarks, shows no evidence of saturation over time.
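The core estimation step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the model names, accuracies, and fitted numbers below are synthetic placeholders shaped like a log-linear trend, chosen only to show how a knowledge-vs-parameters fit could be inverted to bound a black-box model's size.

```python
import math

# Synthetic (params in billions, fraction of factual probes answered correctly)
# pairs for hypothetical open-weight models -- NOT the paper's data.
open_models = [(1, 0.20), (7, 0.38), (13, 0.44), (70, 0.60), (180, 0.69)]

xs = [math.log10(p) for p, _ in open_models]
ys = [acc for _, acc in open_models]

# Ordinary least squares for accuracy = a * log10(params) + b.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

# Invert the fit: a black-box model scoring 0.55 on the same probes implies
# roughly this many parameters (a lower bound, since factual storage is only
# part of what capacity is spent on).
estimated_params = 10 ** ((0.55 - b) / a)
print(f"slope={a:.3f}, intercept={b:.3f}, estimate ~ {estimated_params:.0f}B params")
```

Because the relationship is log-linear, errors in the measured accuracy translate into multiplicative (not additive) uncertainty in the parameter estimate, which is why the paper frames the result as a lower bound rather than a point estimate.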

Original article: arXiv.org
Opening excerpt (first ~120 words)

Computer Science > Machine Learning
arXiv:2604.24827 (cs) [Submitted on 27 Apr 2026]
Title: Incompressible Knowledge Probes: Estimating Black-Box LLM Parameter Counts via Factual Capacity
Authors: Bojie Li
Abstract: Closed-source frontier labs do not disclose parameter counts, and the standard alternative -- inference economics -- carries 2×+ uncertainty from hardware, batching, and serving-stack assumptions external to the model.

Excerpt limited to ~120 words for fair-use compliance. The full article is at arXiv.org.

