Get started: Query LLMs and prototype AI agents with no-code

This 5-minute no-code tutorial introduces generative AI on Databricks. You will use the AI Playground to do the following:

  • Query large language models (LLMs) and compare results side-by-side
  • Prototype a tool-calling AI agent
  • Export your agent to code
  • Optional: Prototype a question-answering chatbot using retrieval-augmented generation (RAG)

Before you begin

Ensure your workspace can access the following:

Step 1: Query LLMs using AI Playground

Use the AI Playground to query LLMs in a chat interface.

  1. In your workspace, select Playground.
  2. Type a question such as "What is RAG?".

Add a new LLM to compare responses side-by-side:

  1. In the upper-right, select + to add a model for comparison.
  2. In the new pane, select a different model using the dropdown selector.
  3. Select the Sync checkboxes to synchronize the queries.
  4. Try a new prompt, such as "What is a compound AI system?" to see the two responses side-by-side.

Keep testing and comparing different LLMs to help you decide which one to use to build an AI agent.
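Under the hood, each Playground pane sends the same OpenAI-style chat payload to a model serving endpoint. The sketch below shows a minimal version of that payload; the model names are placeholders, not endpoints guaranteed to exist in your workspace:

```python
def build_chat_request(model: str, prompt: str) -> dict:
    # OpenAI-style chat-completions payload, the format that Databricks
    # model serving endpoints accept.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Side-by-side comparison is just the same prompt sent to two models.
requests = [
    build_chat_request(m, "What is a compound AI system?")
    for m in ("model-a", "model-b")  # placeholder model names
]
```

Selecting the Sync checkboxes in the UI corresponds to reusing one prompt across every payload, as the list comprehension does here.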

Step 2: Prototype a tool-calling AI agent

Tools let LLMs do more than generate text: they can query external data, run code, and take other actions. AI Playground gives you a no-code option to prototype tool-calling agents:

  1. From Playground, choose a model labeled Tools enabled.

    Select a tool-calling LLM

  2. Select Tools > + Add tool and select the built-in Unity Catalog function, system.ai.python_exec.

    This function lets your agent run arbitrary Python code.

    Select a hosted function tool

  3. Ask a question that involves generating or running Python code. You can try different variations on your prompt phrasing. If you add multiple tools, the LLM selects the appropriate tool to generate a response.

    Prototype the LLM with hosted function tool
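Conceptually, tool calling is a dispatch loop: the LLM names a tool and its arguments, the agent runs the tool, and the result goes back into the conversation. The self-contained sketch below uses a local stand-in for `system.ai.python_exec` (the real function runs server-side in Unity Catalog, not as a bare `exec` on your machine):

```python
import contextlib
import io

def python_exec(code: str) -> str:
    # Local stand-in for system.ai.python_exec: run a Python snippet
    # and return whatever it prints.
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})
    return buf.getvalue().strip()

# A tool registry like the one the agent maintains. With multiple
# entries, the LLM picks which tool fits the user's question.
TOOLS = {"python_exec": python_exec}

def dispatch(tool_name: str, **kwargs) -> str:
    # The LLM's tool-call message names a tool and its arguments;
    # the agent executes it and feeds the result back to the model.
    return TOOLS[tool_name](**kwargs)

result = dispatch("python_exec", code="print(2 ** 10)")
```

In Playground, the model produces the `tool_name` and arguments itself; the dispatch step is what the platform handles for you.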

Step 3: Export your agent to code

After testing your agent in AI Playground, select Export to save it as a Python notebook.

The Python notebook contains code that defines the agent and deploys it to a model serving endpoint.
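The exported notebook's exact contents depend on your agent, but structurally it wires the chosen model endpoint and tools into a single callable before deploying it. The class and names below are invented for illustration, not the actual exported code:

```python
class ToolCallingAgent:
    """Hypothetical, simplified shape of an exported agent."""

    def __init__(self, endpoint: str, tools: dict):
        self.endpoint = endpoint  # serving endpoint the agent queries
        self.tools = tools        # tool name -> callable

    def predict(self, messages: list[dict]) -> dict:
        # The real exported code calls the LLM endpoint here; this stub
        # just echoes the configuration so the overall shape is visible.
        return {"endpoint": self.endpoint, "tools": sorted(self.tools)}

agent = ToolCallingAgent(
    "example-endpoint",                # placeholder endpoint name
    {"python_exec": lambda code: ""},  # placeholder tool
)
reply = agent.predict([{"role": "user", "content": "hi"}])
```

The deployment half of the notebook registers this object with a model serving endpoint, so your agent is callable from outside the workspace.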

Optional: Prototype a RAG question-answering bot

If you have a vector search index set up in your workspace, you can prototype a question-answering bot. This type of agent retrieves documents from a vector search index and uses them to ground its answers.

  1. Click Tools > + Add tool. Then, select your Vector Search index.

    Select a vector search tool

  2. Ask a question related to your documents. The agent uses the vector index to look up relevant information and cites any documents used in its answer.

    Prototype the LLM with vector search tool

To set up a vector search index, see Create a vector search index.
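Conceptually, the retrieval step embeds the question, ranks indexed documents by similarity, and passes the top matches to the LLM as context. The toy sketch below uses bag-of-words vectors in place of a real embedding model and Vector Search index:

```python
import math
from collections import Counter

# Toy corpus standing in for documents in a vector search index.
DOCS = [
    "Vector search indexes documents for similarity lookup.",
    "The AI Playground lets you compare large language models.",
    "Retrieval-augmented generation grounds answers in retrieved text.",
]

def embed(text: str) -> Counter:
    # Bag-of-words "embedding"; a real index uses a learned embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(question: str, k: int = 1) -> list[str]:
    # Rank documents by similarity to the question; the top-k results
    # become the context the LLM cites in its answer.
    q = embed(question)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

top = retrieve("how does retrieval-augmented generation answer questions?")[0]
```

A production index replaces the bag-of-words scoring with learned embeddings and approximate nearest-neighbor search, but the retrieve-then-answer flow is the same.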

Next steps