What is AI Tool Use (Function Calling)?
AI tool use, also called function calling, is the ability of a language model to invoke external tools, APIs, or functions during a conversation. Instead of only generating text responses, the model can decide to call a web browser, code interpreter, database, or any other tool -- then use the result to formulate a better response or complete a real-world task.
Tool use is what transforms a chatbot into an AI agent. Without it, an LLM can only produce text. With tool use, the same LLM can browse websites, execute code, send emails, query databases, and orchestrate complex workflows.
The mechanism works through structured outputs. When the model determines it needs external information or needs to take an action, it generates a JSON object specifying which tool to call and what parameters to pass. The agent framework intercepts this output, executes the tool call, and feeds the result back to the model for continued reasoning.
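As a concrete illustration, here is roughly what that structured output and dispatch step look like. The tool name, fields, and `get_weather` function are all hypothetical; real function-calling APIs use vendor-specific schemas, but the shape is similar.

```python
import json

# Hypothetical tool-call payload a model might emit (illustrative shape,
# not a specific vendor's schema): which tool to call, and with what args.
tool_call = {
    "name": "get_weather",
    "arguments": {"city": "Berlin", "unit": "celsius"},
}

# A stubbed tool -- a real implementation would query a weather API.
def get_weather(city: str, unit: str) -> str:
    return f"18 degrees {unit} and cloudy in {city}"

# The agent framework intercepts the call, dispatches it to the matching
# function, and serializes the result to feed back to the model.
registry = {"get_weather": get_weather}
result = registry[tool_call["name"]](**tool_call["arguments"])
print(json.dumps({"tool_result": result}))
```

The key point is the registry lookup: the model never executes anything itself. It only names a tool and supplies arguments; the framework owns execution.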
How Tool Use Works
Tool use follows a loop pattern that repeats until the task is complete:
- Tool declaration -- Each available tool is declared to the model with a name, a description, and a parameter schema
- Model reasoning -- The LLM analyzes the user's request and decides whether a tool is needed
- Tool call generation -- The model outputs a structured call with the tool name and arguments
- Execution -- The agent framework executes the tool in a sandboxed environment
- Result integration -- The tool's output is fed back to the model, which continues reasoning
This loop can repeat multiple times. An agent might browse a website, extract data, write a script to analyze it, run the script, and post results to Slack -- all within a single task.
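The loop above can be sketched in a few lines. Here a scripted stand-in plays the role of the model so the example is self-contained; `fake_model`, `tools`, and the message format are all assumptions for illustration, not a real agent framework's API.

```python
# Minimal sketch of the tool-use loop with a scripted model stand-in.
def fake_model(messages):
    # A real LLM reasons here; this stub calls a tool once, then answers.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "add", "args": {"a": 2, "b": 3}}   # tool call generation
    return {"answer": f"The sum is {messages[-1]['content']}"}

tools = {"add": lambda a, b: a + b}                        # tool declaration

def run_agent(user_request: str) -> str:
    messages = [{"role": "user", "content": user_request}]
    while True:
        step = fake_model(messages)                        # model reasoning
        if "tool" in step:
            result = tools[step["tool"]](**step["args"])   # execution
            messages.append({"role": "tool", "content": result})  # result integration
        else:
            return step["answer"]                          # task complete

print(run_agent("What is 2 + 3?"))  # -> The sum is 5
```

In a production agent the `while True` loop is bounded by step limits and timeouts, and execution happens in a sandbox rather than in-process.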
Why Tool Use Matters
Tool use is the foundational capability that makes AI agents possible. It bridges the gap between language understanding and real-world action. An LLM without tools is limited to what it learned during training. An LLM with tools can access current information, interact with live systems, and produce tangible outcomes.
For businesses, tool use means AI can automate workflows that previously required human operators or custom software. Competitive research, data processing, report generation, customer outreach -- all become tasks an agent can handle autonomously.
How KiwiClaw Uses Tool Use
KiwiClaw agents powered by OpenClaw come with built-in tools for web browsing, code execution, and file management. Additional tools can be added through the skills marketplace or via MCP servers. All tool execution runs inside isolated VMs on Fly Machines, ensuring security and tenant isolation.
Related Terms
- What is MCP (Model Context Protocol)?
- What is an AI Agent?
- What is AI Agent Sandboxing?
- What is an AI Agent Framework?
Frequently Asked Questions
What is AI tool use or function calling?
AI tool use (also called function calling) is the ability of a language model to invoke external tools, APIs, or functions during a conversation. Instead of only generating text, the model can decide to call a tool -- like a web browser, code interpreter, or database query -- and use the result to formulate a better response or complete a task.
What is the difference between tool use and plugins?
Tool use is the underlying mechanism -- the model's ability to output structured function calls. Plugins and skills are packaged collections of tools with descriptions, authentication, and configuration. In OpenClaw, skills are the user-facing unit; tool use is what happens under the hood when the agent invokes a skill's capabilities.
How does tool use work in KiwiClaw?
KiwiClaw agents use OpenClaw's skill system, which exposes tools to the LLM. When the agent determines it needs to browse a website, run code, or call an API, it generates a structured tool call. OpenClaw executes the call in a sandboxed environment and returns the result to the model for continued reasoning.