WeSearch

How I ran 6 LLMs in parallel without paying a cent in API fees (Electron + DOM Injection)

#ai #electron #dom-injection #llm #multi-agent-system
⚡ TL;DR · AI summary

The author created a desktop application called AI Council using Electron to run six large language models in parallel through their web interfaces, avoiding API costs by injecting prompts directly into the DOM and orchestrating responses via a fan-out/fan-in system. The app enables cross-validation of AI outputs by having multiple models review a primary draft simultaneously, then compiling feedback into a final response. Despite challenges with inconsistent UI behaviors and detecting response completion, the solution operates locally without cloud infrastructure and is available as open-source software.
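The fan-out/fan-in step the summary describes can be sketched roughly as follows. This is an assumed illustration, not the app's actual code: each model's web UI is stood in for by an async `ask(prompt)` function, where in the real Electron app each would be a browser view plus script injection doing the DOM work:

```javascript
// Hypothetical fan-out/fan-in orchestrator (assumed sketch, not AI Council's code).
// Each "model" wraps one chat tab; ask(prompt) resolves with that tab's answer.
async function fanOut(models, prompt) {
  // Fan-out: send the same prompt to every model concurrently.
  const results = await Promise.allSettled(
    models.map(async (m) => ({ name: m.name, answer: await m.ask(prompt) }))
  );
  // Fan-in: collect whatever came back; a hung or errored tab simply drops out.
  return results
    .filter((r) => r.status === "fulfilled")
    .map((r) => r.value);
}

// Usage with stub models standing in for the chat tabs:
const models = [
  { name: "chatgpt", ask: async (p) => `chatgpt says: ${p}` },
  { name: "claude",  ask: async (p) => `claude says: ${p}` },
  { name: "broken",  ask: async () => { throw new Error("tab hung"); } },
];

fanOut(models, "2+2?").then((answers) => {
  console.log(answers.length); // the two healthy models respond
});
```

`Promise.allSettled` (rather than `Promise.all`) is what makes one unresponsive tab non-fatal: the other five answers still arrive and can be compiled into the final response.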

Original article
Read the full post at DEV.to (Top) →
Opening excerpt (first ~120 words)

Minkyu · Posted on Apr 30 · #ai #chatgpt #claude #llm

Let’s be honest: trusting a single LLM with a complex problem is basically a coin toss right now. I got incredibly tired of my daily workflow: ask ChatGPT a question -> get a confident answer -> paste the same question into Claude to fact-check -> get a contradictory answer -> ask Perplexity to break the tie.
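The workflow above assumes you can tell when each chat has finished answering, which the summary flags as one of the hard parts. A minimal heuristic (assumed for illustration, not taken from the article) is to poll the response text and declare it done once it stops changing; `getText` here is any function returning the current response text, which in the real app would read the chat page's DOM:

```javascript
// Hypothetical "response finished" detector (assumed sketch, not AI Council's code).
// Polls getText() and resolves once the text is non-empty and unchanged for
// several consecutive checks, i.e. the model has stopped streaming.
async function waitForCompletion(getText, { intervalMs = 50, stableChecks = 3 } = {}) {
  let last = getText();
  let stable = 0;
  while (stable < stableChecks) {
    await new Promise((r) => setTimeout(r, intervalMs));
    const now = getText();
    if (now === last && now.length > 0) {
      stable += 1; // unchanged: one step closer to "done"
    } else {
      stable = 0;  // still streaming: reset and keep waiting
      last = now;
    }
  }
  return last;
}

// Usage: simulate a streaming reply that settles on its final text.
const chunks = ["Think", "Thinking...", "Done: 42"];
let i = 0;
const read = () => chunks[Math.min(i++, chunks.length - 1)];
waitForCompletion(read, { intervalMs: 5, stableChecks: 2 })
  .then((text) => console.log(text)); // logs the final settled text
```

An event-driven alternative would be a `MutationObserver` on the response node, though a settle-time poll like this is more tolerant of the per-site UI quirks the summary mentions.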

Excerpt limited to ~120 words for fair-use compliance. The full article is at DEV.to (Top).


