Get started: Query LLMs and prototype AI agents with no-code
This 5-minute no-code tutorial introduces generative AI on Databricks. You will use the AI Playground to do the following:
- Query large language models (LLMs) and compare results side-by-side
- Prototype a tool-calling AI agent
- Export your agent to code
- Optional: Prototype a question-answer chatbot using retrieval-augmented generation (RAG)
Before you begin
Ensure your workspace can access the following:
- Foundation models. See Model serving feature availability.
- Unity Catalog. See Set up and manage Unity Catalog.
- Mosaic AI Agent Framework. See Features with limited regional availability.
Step 1: Query LLMs using AI Playground
Use the AI Playground to query LLMs in a chat interface.
- In your workspace, select Playground.
- Type a question such as "What is RAG?".
Add a new LLM to compare responses side-by-side:
- In the upper-right, select + to add a model for comparison.
- In the new pane, select a different model using the dropdown selector.
- Select the Sync checkboxes to synchronize the queries.
- Try a new prompt, such as "What is a compound AI system?" to see the two responses side-by-side.
Keep testing and comparing different LLMs to help you decide which one is best suited for building an AI agent.
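If you want to run the same side-by-side comparison outside the Playground, you can also query foundation model serving endpoints programmatically. The following is a minimal sketch, assuming the OpenAI-compatible Python client, a personal access token in `DATABRICKS_TOKEN`, and placeholder endpoint names; substitute your workspace URL and the models available to you.

```python
# Minimal sketch: send the same prompt to two foundation model endpoints.
# Endpoint names are placeholders -- use the models listed in your workspace.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DATABRICKS_TOKEN"],
    base_url="https://<your-workspace-url>/serving-endpoints",
)

prompt = "What is a compound AI system?"
for model in ["databricks-meta-llama-3-3-70b-instruct", "databricks-claude-3-7-sonnet"]:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {model} ---")
    print(response.choices[0].message.content)
```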
Step 2: Prototype a tool-calling AI agent
Tools allow LLMs to do more than generate language. Tools can query external data, run code, and take other actions. AI Playground gives you a no-code option to prototype tool-calling agents:
- From Playground, choose a model labeled Tools enabled.
- Select Tools > + Add tool, then select the built-in Unity Catalog function system.ai.python_exec. This function lets your agent run arbitrary Python code.
- Ask a question that involves generating or running Python code. Try different variations of your prompt phrasing. If you add multiple tools, the LLM selects the appropriate tool to generate a response.
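To see roughly what tool calling looks like in code, the sketch below sends a single request to a tools-enabled endpoint with one tool definition. The tool schema, endpoint name, and tool name are illustrative placeholders, not the Playground's internal representation of system.ai.python_exec.

```python
# Sketch of a single tool-calling round trip against a tools-enabled endpoint.
# The tool definition below is illustrative; in the Playground, the built-in
# system.ai.python_exec function plays this role.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DATABRICKS_TOKEN"],
    base_url="https://<your-workspace-url>/serving-endpoints",
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "python_exec",
            "description": "Run arbitrary Python code and return stdout.",
            "parameters": {
                "type": "object",
                "properties": {
                    "code": {"type": "string", "description": "Python code to run"}
                },
                "required": ["code"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="databricks-meta-llama-3-3-70b-instruct",  # placeholder tools-enabled endpoint
    messages=[{"role": "user", "content": "What is the 10th Fibonacci number? Compute it."}],
    tools=tools,
)

# If the model decides a tool is needed, it returns a tool call instead of text.
tool_calls = response.choices[0].message.tool_calls
if tool_calls:
    print(tool_calls[0].function.name, tool_calls[0].function.arguments)
else:
    print(response.choices[0].message.content)
```

In a full agent loop, you would execute the returned tool call and send the result back to the model so it can produce a final answer.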
Step 3: Export your agent to code
After testing your agent in AI Playground, select Export to save the agent as a Python notebook.
The Python notebook contains code that defines the agent and deploys it to a model serving endpoint.
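The generated notebook varies by agent, but the overall flow it automates looks roughly like this hedged sketch: log and register the agent in Unity Catalog with MLflow, then deploy the registered version with the databricks-agents package. The model name, artifact path, and run ID below are placeholders, not the exported code itself.

```python
# Rough sketch of the register-and-deploy flow; not the generated notebook code.
import mlflow
from databricks import agents

mlflow.set_registry_uri("databricks-uc")  # register into Unity Catalog

# Assume the agent was already logged to an MLflow run under the "agent" artifact path.
uc_model_name = "main.default.my_playground_agent"  # placeholder catalog.schema.model
registered = mlflow.register_model(
    model_uri="runs:/<run_id>/agent",
    name=uc_model_name,
)

# Deploy the registered version to a model serving endpoint.
agents.deploy(uc_model_name, registered.version)
```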
Optional: Prototype a RAG question-answering bot
If you have a vector search index set up in your workspace, you can prototype a question-answering bot. This type of agent retrieves documents from the vector search index and uses them to ground its answers.
- Select Tools > + Add tool, then select your vector search index.
- Ask a question related to your documents. The agent uses the vector search index to look up relevant information and cites any documents used in its answer.
To set up a vector search index, see Create a vector search index.
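If you want to see what the retrieval step does programmatically, the sketch below runs a similarity search against a vector search index using the databricks-vectorsearch client. The endpoint name, index name, and column names are placeholders for the index you created in your workspace.

```python
# Sketch of the retrieval step a RAG agent performs behind the scenes.
# Requires the databricks-vectorsearch package; names are placeholders.
from databricks.vector_search.client import VectorSearchClient

vsc = VectorSearchClient()
index = vsc.get_index(
    endpoint_name="my_vector_search_endpoint",  # placeholder
    index_name="main.default.my_docs_index",    # placeholder
)

results = index.similarity_search(
    query_text="What is a compound AI system?",
    columns=["doc_uri", "chunk_text"],          # placeholder column names
    num_results=3,
)
print(results)
```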
Next steps
- Use Agent Framework to develop advanced agents programmatically. See Author AI agents in code.
- Learn how to build a RAG application. See RAG guide.