Show HN: llmconfig – config file and CLI for local LLMs

TL;DR (AI summary)

llmconfig is a tool that simplifies managing local large language models through a single YAML configuration file and a CLI. It supports multiple backends, including llama.cpp, stable-diffusion.cpp, and whisper.cpp, and offers OpenAI-compatible APIs with automatic hardware detection. The tool can be installed via script, via Go, or built from source, with comprehensive documentation available.
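The excerpt does not show the actual configuration schema, so the YAML below is only a hypothetical sketch of what a per-model entry might look like; every field name here (models, backend, source, port) is an assumption for illustration, not the tool's documented format.

    # Hypothetical sketch only -- field names are assumed, not taken
    # from llmconfig's documented schema.
    models:
      gemma:
        backend: llama.cpp    # one of the three supported backends
        source: <model URL or local path>
        port: 8080            # matches the address shown in the excerpt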

Opening excerpt (first ~120 words)

llmconfig (Local Large Model Config): manage local inference with llama.cpp, stable-diffusion.cpp, and whisper.cpp from a single YAML file and a single CLI.

    llmconfig up gemma    # or just: llmc up gemma
    ✓ gemma is ready at http://127.0.0.1:8080

Ships with a shorter llmc alias; every command works with either binary name.

Why llmconfig

- One YAML, three backends. Define a model once; llmconfig handles downloading, starting, stopping, restarting, and monitoring.
- Hardware-aware. Profiles for NVIDIA, Apple Silicon, AMD, Intel GPU, and CPU are auto-selected at runtime.
- OpenAI-compatible. Models run as drop-in replacements for the OpenAI API. The optional gateway command exposes every running model on a single port.
- No build chain.
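Because running models are drop-in replacements for the OpenAI API, a standard OpenAI-style request should work against the local server. A minimal sketch, assuming the usual /v1/chat/completions route and "gemma" as the model name (the excerpt confirms only the base URL http://127.0.0.1:8080):

    # Assumes the standard OpenAI chat-completions route and "gemma"
    # as the model name; only the base URL appears in the excerpt.
    curl http://127.0.0.1:8080/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"model": "gemma", "messages": [{"role": "user", "content": "Hello!"}]}'

Any OpenAI client library should work the same way by pointing its base URL at the local server instead of api.openai.com.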

Excerpt limited to ~120 words for fair-use compliance. The full article is at GitHub.
