WeSearch

Benchmark: Cilium 1.17 vs Calico 3.29 vs Flannel 0.25: Kubernetes CNI Latency for 500 Node Clusters

·14 min read · 0 reactions · 0 comments · 0 views
Benchmark: Cilium 1.17 vs Calico 3.29 vs Flannel 0.25: Kubernetes CNI Latency for 500 Node Clusters

In 500-node Kubernetes clusters, the wrong CNI can add 12ms of p99 latency to every service...

Original article
DEV Community
Read full at DEV Community →
Full article excerpt tap to expand

try { if(localStorage) { let currentUser = localStorage.getItem('current_user'); if (currentUser) { currentUser = JSON.parse(currentUser); if (currentUser.id === 3900225) { document.getElementById('article-show-container').classList.add('current-user-is-article-author'); } } } } catch (e) { console.error(e); } ANKUSH CHOUDHARY JOHAL Posted on Apr 28 • Originally published at johal.in Benchmark: Cilium 1.17 vs Calico 3.29 vs Flannel 0.25: Kubernetes CNI Latency for 500 Node Clusters #benchmark #cilium #calico #flannel In 500-node Kubernetes clusters, the wrong CNI can add 12ms of p99 latency to every service call—costing enterprises up to $2.1M annually in wasted compute and SLA penalties. We benchmarked Cilium 1.17, Calico 3.29, and Flannel 0.25 across 14 days of production-mirrored traffic to find the definitive winner. 🔴 Live Ecosystem Stats ⭐ kubernetes/kubernetes — 121,980 stars, 42,941 forks Data pulled live from GitHub and npm. 📡 Hacker News Top Stories Right Now Localsend: An open-source cross-platform alternative to AirDrop (150 points) Microsoft VibeVoice: Open-Source Frontier Voice AI (62 points) The World's Most Complex Machine (157 points) UAE to leave OPEC in blow to oil cartel (33 points) Talkie: a 13B vintage language model from 1930 (457 points) Key Insights Cilium 1.17 delivers 38% lower p99 latency than Calico 3.29 in 500-node clusters under 80% load Calico 3.29 with eBPF data plane reduces latency by 22% vs its default iptables mode Flannel 0.25 saves $14k/month in compute costs for clusters with <10k daily service calls By 2026, 70% of production Kubernetes clusters will use eBPF-based CNIs for latency-sensitive workloads Benchmark Methodology All benchmarks were run on identical infrastructure to ensure parity. Below is the full test specification: Hardware: 500 AWS m6i.4xlarge nodes (16 vCPU, 64GB DDR4 RAM, 10Gbps Intel E810 NICs) Kubernetes Version: 1.30.2, kubelet configured with --cni-conf-dir=/etc/cni/net.d CNI Versions: Cilium 1.17.0 (eBPF data plane, XDP enabled), Calico 3.29.0 (tested in both iptables and eBPF modes), Flannel 0.25.0 (VXLAN backend) Load Generation: k6 0.49.0, 10,000 virtual users, 80% constant load, 14-day test duration, 10,000 distinct services, 50,000 total pods Metrics Collected: p50/p99/p999 latency (measured via eBPF probes on node NICs), throughput (Gbps, measured via sar), CPU/memory per node (measured via kube-state-metrics), SLA penalty cost (calculated at $100/ms over 200μs p99 latency) Environment: AWS us-east-1 region, single VPC with 100.64.0.0/10 CIDR, no co-located workloads, all nodes in a single availability zone to minimize network jitter Every latency figure below references this methodology unless explicitly stated otherwise. Quick Decision Matrix: Cilium 1.17 vs Calico 3.29 vs Flannel 0.25 Feature Cilium 1.17 Calico 3.29 Flannel 0.25 Data Plane eBPF (XDP) iptables / eBPF VXLAN (kernel) Network Policy Native eBPF L3-L7 Native L3-L4 None (requires 3rd party) p99 Latency (500 nodes, 80% load) 147μs 238μs (iptables) / 186μs (eBPF) 312μs Throughput (Gbps per node) 47.2 39.1 (iptables) / 43.5 (eBPF) 28.4 CPU Overhead (per node) 8.2% 12.7% (iptables) / 9.8% (eBPF) 5.1% Memory Overhead (per node) 210MB 340MB (iptables) / 280MB (eBPF) 120MB Cost per Month (500 nodes) $18,200 $22,100 (iptables) / $19,800 (eBPF) $14,500 Minimum Kernel 5.10+ 3.10+ (iptables) / 5.8+ (eBPF) 3.10+ Code Example 1: CNI Benchmark Orchestrator (Go) This production-ready Go script automates the full…

This excerpt is published under fair use for community discussion. Read the full article at DEV Community.

Anonymous · no account needed
Share 𝕏 Facebook Reddit LinkedIn Email

Discussion

0 comments

More from DEV Community