Prototype tool-calling agents in AI Playground

This article shows how to prototype a tool-calling AI agent with the AI Playground.

Use the AI Playground to quickly create a tool-calling agent and chat with it live to see how it behaves. Then, export the agent for deployment or further development in Python code.

To author agents using a code-first approach, see Author AI agents in code.

Requirements

Your workspace must have the following features enabled to prototype agents using AI Playground:

Prototype tool-calling agents in AI Playground

To prototype a tool-calling agent:

  1. From Playground, select a model with the Tools enabled label.

    Image: Select a tool-calling LLM

  2. Click Tools > + Add tool and select tools to give the agent. You can choose up to 20 tools. Tool options include:

    • Hosted Function: Select a Unity Catalog function for your agent to use.
    • Function Definition: Define a custom function for your agent to call.
    • Vector Search: Specify a vector search index for your agent to use as a tool to help respond to queries. If your agent uses a vector search index, its response will cite the sources used.

    For this guide, select the built-in Unity Catalog function, system.ai.python_exec. This function gives your agent the ability to run arbitrary Python code. To learn how to create agent tools, see AI agent tools.

    Image: Select a hosted function tool

    You can also select a vector search index, which the agent can query to help answer questions.

    Image: Select a vector search tool

  3. Chat with the agent to test the current combination of LLM, tools, and system prompt, and try variations. The LLM selects the appropriate tool to generate a response. (A code-level sketch of this tool-calling pattern appears after these steps.)

    Image: Prototype the LLM with hosted function tool

    When asking a question related to information in the vector search index, the LLM queries for the information it needs and cites any source documents used in its response.

    Image: Prototype the LLM with vector search tool
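
The same pattern can be reproduced outside the Playground UI. The sketch below sends a tool-calling request to a Databricks model serving endpoint through the OpenAI-compatible client. The workspace host, token, endpoint name, and the get_order_status function are illustrative placeholders rather than anything generated by Playground.

    # Minimal sketch of a tool-calling request against a Databricks
    # model serving endpoint, using the OpenAI-compatible client.
    from openai import OpenAI

    client = OpenAI(
        api_key="<DATABRICKS_TOKEN>",                      # placeholder personal access token
        base_url="https://<workspace-host>/serving-endpoints",
    )

    # A "Function Definition" tool: a custom function described by a JSON schema.
    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_order_status",                # hypothetical example function
                "description": "Look up the status of a customer order.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "order_id": {"type": "string", "description": "The order ID."}
                    },
                    "required": ["order_id"],
                },
            },
        }
    ]

    response = client.chat.completions.create(
        model="<tool-enabled-endpoint-name>",              # a model with the Tools enabled label
        messages=[{"role": "user", "content": "Where is order 1234?"}],
        tools=tools,
    )

    # If the model decides to call the tool, the call appears here; your code runs
    # the function and returns the result in a follow-up message.
    print(response.choices[0].message.tool_calls)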

Export and deploy AI Playground agents

After prototyping the AI agent in AI Playground, export it to Python notebooks to deploy it to a model serving endpoint.

  1. Click Export to generate the notebook that defines and deploys the AI agent.

    After you export the agent code, a folder containing a driver notebook is saved to your workspace. The driver notebook defines a tool-calling LangGraph ChatAgent, tests the agent locally, logs it using code-based logging, and registers and deploys the agent using Mosaic AI Agent Framework. (A sketch of this flow appears after these steps.)

  2. Address all the TODOs in the notebook.
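
As a rough sketch of the flow that the exported driver notebook automates, the example below logs the agent from code, registers it to Unity Catalog, and deploys it with Mosaic AI Agent Framework. The catalog, schema, model name, and agent.py path are placeholders, and the exported notebook's actual code may differ.

    # Rough sketch of the exported driver notebook's flow: code-based logging,
    # Unity Catalog registration, and deployment (names are placeholders).
    import mlflow
    from databricks import agents

    mlflow.set_registry_uri("databricks-uc")

    # Code-based logging: agent.py contains the exported ChatAgent definition.
    with mlflow.start_run():
        logged_agent = mlflow.pyfunc.log_model(
            artifact_path="agent",
            python_model="agent.py",
        )

    # Register the logged agent to Unity Catalog under a three-level name.
    UC_MODEL_NAME = "main.default.playground_agent"
    uc_model = mlflow.register_model(logged_agent.model_uri, UC_MODEL_NAME)

    # Deploy the registered agent to a model serving endpoint.
    agents.deploy(model_name=UC_MODEL_NAME, model_version=uc_model.version)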

note

The exported code might behave differently from your AI Playground session. Databricks recommends running the exported notebooks to iterate and debug further, evaluate agent quality, and then deploy the agent to share with others.

Develop agents in code

Use the exported notebooks to test and iterate programmatically, for example to add tools or adjust the agent's parameters.
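
For example, the following minimal sketch adds a Unity Catalog function tool and a vector search retriever tool to a LangGraph agent using the databricks-langchain package. The endpoint name and index name are placeholders, parameter names reflect the package at the time of writing, and the exported notebook's structure may differ.

    # Minimal sketch of adding tools to a LangGraph agent in code
    # (endpoint and index names are placeholders).
    from databricks_langchain import (
        ChatDatabricks,
        UCFunctionToolkit,
        VectorSearchRetrieverTool,
    )
    from langgraph.prebuilt import create_react_agent

    # Tool-enabled LLM served by a Databricks model serving endpoint.
    llm = ChatDatabricks(endpoint="<tool-enabled-endpoint-name>")

    # Expose Unity Catalog functions as tools; add more names to extend the agent.
    uc_tools = UCFunctionToolkit(function_names=["system.ai.python_exec"]).tools

    # Add a vector search index as a retriever tool for answering questions
    # from indexed documents.
    vs_tool = VectorSearchRetrieverTool(
        index_name="main.default.docs_index",
        tool_name="search_docs",
        tool_description="Search product documentation to answer user questions.",
    )

    agent = create_react_agent(llm, uc_tools + [vs_tool])

    # Quick local test before logging and deploying the agent.
    result = agent.invoke({"messages": [{"role": "user", "content": "What is 2 ** 10?"}]})
    print(result["messages"][-1].content)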

When you develop agents in code, they must meet specific requirements to be compatible with other Databricks agent features. To learn how to author agents using a code-first approach, see Author AI agents in code.

Next steps