Function Calling with Ollama: Make Your Local LLM Run Real Tools
The article explains how to implement function calling with Ollama, enabling local large language models to interact with external tools and APIs. It walks through a complete TypeScript example in which the LLM requests weather data through a function call instead of generating a fabricated response. This approach enables building production-grade AI agents that can plan and execute actions through real code.
- Ollama supports function calling natively for compatible models such as qwen2.5:7b and llama3.1:8b.
- Function calling lets an LLM request specific actions, such as retrieving weather data, without executing them itself.
- The process involves two API round trips: the first returns the model's function call request, the second sends the function's result back so the model can compose a natural-language response (see the sketch after this list).
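As a concrete sketch of that two-round-trip flow, the following TypeScript uses the official `ollama` npm client with one of the models named above. The `getWeather` stub, its canned result, and the exact prompt are illustrative assumptions, not code from the article.

```typescript
import ollama from 'ollama';

// Hypothetical tool implementation -- the article's version would call a real weather API.
async function getWeather(city: string): Promise<string> {
  return JSON.stringify({ city, tempC: 14, condition: 'overcast' }); // stubbed result
}

// JSON Schema description of the tool, advertised to the model via the `tools` option.
const tools = [
  {
    type: 'function',
    function: {
      name: 'getWeather',
      description: 'Get the current weather for a city',
      parameters: {
        type: 'object',
        properties: {
          city: { type: 'string', description: 'City name, e.g. Bogotá' },
        },
        required: ['city'],
      },
    },
  },
];

async function main() {
  const messages = [{ role: 'user', content: 'What is the weather in Bogotá right now?' }];

  // Round trip 1: the model decides whether to request a tool call.
  const first = await ollama.chat({ model: 'qwen2.5:7b', messages, tools });

  for (const call of first.message.tool_calls ?? []) {
    if (call.function.name === 'getWeather') {
      // ollama-js hands arguments back as a parsed object, not a JSON string.
      const result = await getWeather(call.function.arguments.city as string);
      messages.push(first.message); // keep the assistant's tool-call turn in history
      messages.push({ role: 'tool', content: result });
    }
  }

  // Round trip 2: the model turns the tool result into a natural-language answer.
  const second = await ollama.chat({ model: 'qwen2.5:7b', messages });
  console.log(second.message.content);
}

main();
```

Note that the code never lets the model execute anything: it only inspects the requested call, runs the matching local function itself, and feeds the result back as a `tool` message.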
Opening excerpt (first ~120 words)
Pavel Espitia · Posted on May 1

Most Ollama tutorials end at chat completion. The interesting stuff starts when the model can call your code. Function calling is the protocol that lets an LLM say "I want to call getWeather(city: 'Bogotá')" instead of trying to fake the answer from training data.
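The tool-call turn the excerpt describes arrives as a structured assistant message rather than prose. A rough sketch of its shape, with illustrative values (field names follow the ollama-js chat response):

```typescript
// Illustrative shape of the assistant turn when the model requests a tool call.
// content is empty: the model is asking for an action, not answering yet.
const assistantTurn = {
  role: 'assistant',
  content: '',
  tool_calls: [
    {
      function: {
        name: 'getWeather',
        arguments: { city: 'Bogotá' }, // parsed object, ready to pass to your function
      },
    },
  ],
};
```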
…
Excerpt limited to ~120 words for fair-use compliance. The full article is at DEV.to.