
Use Agent Bricks: Information Extraction

Beta

This feature is in Beta. Workspace admins can control access to this feature from the Previews page. See Manage Databricks previews.

This page describes how to create a generative AI agent for information extraction using Agent Bricks: Information Extraction.

Agent Bricks provides a simple approach to build domain-specific, high-quality AI agent systems for common AI use cases.

What is Agent Bricks: Information Extraction?

Agent Bricks supports information extraction and simplifies the process of transforming a large volume of unlabeled text documents into a structured table with extracted information for each document.

Examples of information extraction include:

  • Extracting prices and lease information from contracts.
  • Organizing data from customer notes.
  • Getting important details from news articles.

Agent Bricks: Information Extraction leverages automated evaluation capabilities, including MLflow and Agent Evaluation, to enable rapid assessment of the cost-quality tradeoff for your specific extraction task. This assessment allows you to make informed decisions about the balance between accuracy and resource investment.

Agent Bricks uses default storage to store temporary data transformations, model checkpoints, and internal metadata that power each agent. On agent deletion, all data associated with the agent is removed from default storage.

Requirements

Create an information extraction agent

Go to Agents in the left navigation pane of your workspace. From the Information Extraction tile, click Build.

Step 1: Configure your agent

Configure your agent:

  1. In the Name field, enter a name for your agent.

  2. Select the type of data you want to provide. You can choose either Unlabeled dataset or Labeled dataset.

  3. Select the dataset to provide.

    If you select Unlabeled dataset:

    1. In the Dataset location field, select the folder or table you want to use from your Unity Catalog volume. If you select a folder, the folder must contain documents in a supported document format.

      The following is an example volume:

      /Volumes/main/info-extraction/bbc_articles/

    2. If you're providing a table, select the column containing your text data from the dropdown. The table column must contain data in a supported data format.

      If you want to use PDFs, convert them to a Unity Catalog table first. See Use PDFs in Agent Bricks.

    3. Agent Bricks automatically infers and generates a sample JSON output containing data extracted from your dataset in the Sample JSON output field. You can accept the sample output, edit it, or replace it with an example of your desired JSON output. The agent returns extracted information using this format.

  4. Verify that the Sample JSON output field matches your desired response format. Edit as needed.

    For example, the following sample JSON output might be used to extract information from a set of news articles:

    JSON
    {
      "title": "Economy Slides to Recession",
      "category": "Politics",
      "paragraphs": [
        {
          "summary": "GDP fell by 0.1% in the last three months of 2004.",
          "word_count": 38
        },
        {
          "summary": "Consumer spending had been depressed by one-off factors such as the unseasonably mild winter.",
          "word_count": 42
        }
      ],
      "tags": ["Recession", "Economy", "Consumer Spending"],
      "estimate_time_to_read_min": 1,
      "published_date": "2005-01-15",
      "needs_review": false
    }
  5. Under Model choice, select the best model for your information extraction agent:

    • Optimize for Scale (default): Choose this option if you're processing large volumes of data or prefer a cost-effective agent. This model is designed for high throughput and faster turnaround time and is suitable for most information extraction tasks.
    • Optimize for Complexity: Choose this option if you need complex reasoning and prioritize accuracy over speed and cost. This model offers higher reasoning capabilities for longer documents (such as financial filings) and can handle more complex extractions (such as extracting 40+ schema fields).
  6. Click Create agent.
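The sample JSON output defined in Step 1 acts as the contract for every agent response. The sketch below (field names taken from the news-article example above; the validation helper is our own illustration, not part of Agent Bricks) shows one way to check that a parsed response matches that shape:

```python
import json

# Sketch: validate a parsed agent response against the shape of the sample JSON
# output from Step 1. The helper below is illustrative, not an Agent Bricks API.
SAMPLE_OUTPUT = {
    "title": "Economy Slides to Recession",
    "category": "Politics",
    "paragraphs": [
        {"summary": "GDP fell by 0.1% in the last three months of 2004.",
         "word_count": 38},
    ],
    "tags": ["Recession", "Economy", "Consumer Spending"],
    "estimate_time_to_read_min": 1,
    "published_date": "2005-01-15",
    "needs_review": False,
}

def matches_shape(response: dict, sample: dict) -> bool:
    """True if the response has the same top-level fields, with matching types."""
    if set(response) != set(sample):
        return False
    return all(isinstance(response[key], type(sample[key])) for key in sample)

# A real response would come back from the agent as a JSON string.
raw = json.dumps(SAMPLE_OUTPUT)
print(matches_shape(json.loads(raw), SAMPLE_OUTPUT))  # True
```

A check like this is useful in downstream pipelines to catch rows where the extraction drifted from the configured schema.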

Supported document formats

The following file types are supported for your source documents if you provide a Unity Catalog volume.

Code files:

  • .c
  • .cc
  • .cpp
  • .cs
  • .css
  • .cxx
  • .go
  • .h
  • .hpp
  • .htm
  • .html
  • .java
  • .js
  • .json
  • .jsonl
  • .jsx
  • .lua
  • .md
  • .php
  • .pl
  • .py
  • .rb
  • .sh
  • .swift
  • .tex
  • .ts
  • .tsx

Document files:

  • .md
  • .rst
  • .tex
  • .txt
  • .xml
  • .xsd
  • .xsl

Log files:

  • .diff
  • .err
  • .log
  • .out
  • .patch
Supported data formats

Agent Bricks: Information Extraction supports the following data types and schemas for your source documents if you provide a Unity Catalog table. Agent Bricks can also extract these data types from each document.

  • str
  • int
  • float
  • boolean
  • enum (suited for classification tasks where the agent should output only from a set of predefined categories)
  • object (custom nested fields)
  • array
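As a concrete illustration, a sample JSON output that exercises each of these types might look like the following. All field names here are invented for the example, not required by Agent Bricks:

```python
# Invented invoice-extraction schema exercising each supported type.
sample_output = {
    "vendor_name": "Acme Corp",              # str
    "invoice_number": 1042,                  # int
    "total_amount": 199.99,                  # float
    "is_paid": True,                         # boolean
    "category": "utilities",                 # enum: pick one predefined category
    "billing_address": {                     # object: custom nested fields
        "city": "Berlin",
        "postal_code": "10115",
    },
    "line_items": ["electricity", "water"],  # array
}

# An enum field behaves like a classification over a fixed set of labels.
ALLOWED_CATEGORIES = {"utilities", "rent", "supplies"}
print(sample_output["category"] in ALLOWED_CATEGORIES)  # True
```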

Step 2: Improve your agent

In the Build tab, review sample outputs to help you refine your schema definition and add instructions for better results.

  1. On the left, review sample responses and provide feedback to tune your agent. These samples are based on your current agent configuration.

    1. Click on a row to review the full input and response.
    2. At the bottom, next to Is this response correct?, provide feedback by selecting either Yes or Fix it. For Fix it feedback, provide additional details on how the agent should change its response, and then click Save.
    3. After you've finished reviewing all responses, click Yes, update agent. Or, you can click Save feedback and update after reviewing at least three responses.
  2. On the right, under Output fields, refine the descriptions for your extraction schema fields. These descriptions are what the agent relies on to understand what you want to extract. Use the sample responses on the left to help you refine the schema definition.

    1. For each field, review and edit the description as needed.
    2. To edit the field name and type, click Edit field.
    3. To add a new field, click Add new field. Enter the name, type, and description, and click Confirm.
    4. To remove a field, click Remove field.
    5. Click Save and update to update your agent configuration.
  3. (Optional) On the right, under Instructions, enter any global instructions for your agent. These instructions apply to all extracted elements. Click Save and update to apply the instructions.

  4. New sample responses are generated on the left side. Review these updated responses and continue to refine your agent configuration until the responses are satisfactory.

Step 3: Use your agent

You can use your agent in workflows across Databricks. By default, Agent Bricks endpoints scale to zero after three days of inactivity, so you're only billed for the uptime.

To start using your agent, click Use. You can choose to use your agent in several ways:

  • Extract data for all documents: Click Start extraction to open the SQL editor and use ai_query to send requests to your new information extraction agent.
  • Create ETL pipeline: Click Create pipeline to deploy a pipeline that runs at scheduled intervals to use your agent on new data. See Lakeflow Spark Declarative Pipelines for more information about pipelines.
  • Test your agent: Click Open in Playground to try out your Agent in a test environment to see how it works. See Chat with LLMs and prototype generative AI apps using AI Playground to learn more about AI Playground.
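The query that Start extraction opens in the SQL editor has roughly the following shape, assembled here in Python for illustration. The endpoint, table, and column names below are placeholders; the query generated for your agent is the authoritative version:

```python
# Placeholders; the SQL that "Start extraction" generates for your agent is
# the authoritative version.
endpoint_name = "my-info-extraction-agent"
source_table = "main.info_extraction.articles"
text_column = "body"

# ai_query sends each row's text to the agent endpoint and returns the
# extracted JSON alongside the source column.
extraction_sql = f"""
SELECT
  {text_column},
  ai_query('{endpoint_name}', {text_column}) AS extracted
FROM {source_table}
"""

print(extraction_sql.strip())
```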

(Optional) Step 4: Evaluate your agent

To ensure you've built a high-quality agent, run an evaluation and review the resulting quality report.

  1. Switch to the Quality tab.

  2. Click Run evaluation.

  3. On the New Evaluation pane that slides out, configure the evaluation:

    1. Select the evaluation run name. You can choose to use a generated name or to provide a custom name.
    2. Select the evaluation dataset. You can choose to use the same source dataset used to build your agent or provide a custom evaluation dataset using labeled or unlabeled data.
  4. Click Start evaluation.

  5. After your evaluation run completes, review the quality report:

    • A Summary view is shown by default. Review the overall quality, cost, throughput, and summary report of the evaluation metrics. Click the info icon next to a schema field to see how that field is evaluated.

      Summary view of the evaluation report.

    • Switch to the Detailed view for additional details. This view shows each request and the evaluation score for each metric. Click into a request to see additional details, such as the input, output, assessments, traces, and linked prompts. You can also edit the request's assessments and provide additional feedback.

      Detailed view of the evaluation report.

Query the agent endpoint

There are multiple ways to query the created agent endpoint. Use the code examples provided in AI Playground as a starting point.

  1. On the Configure tab, click Open in playground.
  2. From Playground, click Get code.
  3. Choose how you want to use the endpoint:
    • Select Apply on data to create a SQL query that applies the agent to a specific table column.
    • Select Curl API for a code example to query the endpoint using curl.
    • Select Python API for a code example to interact with the endpoint using Python.
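For example, a minimal Python call might look like the sketch below. It assumes the agent is exposed as a standard Databricks model serving endpoint with a chat-style request body; the workspace URL, endpoint name, token, and exact request schema shown by Get code are the authoritative values:

```python
import json
import urllib.request

# All three values below are placeholders; copy the real ones from "Get code".
workspace_url = "https://<your-workspace>.cloud.databricks.com"
endpoint_name = "my-info-extraction-agent"
token = "<personal-access-token>"

# A chat-style body is assumed here; "Get code" shows the exact request schema.
body = json.dumps({
    "messages": [
        {"role": "user", "content": "Economy Slides to Recession. GDP fell ..."}
    ]
}).encode()

request = urllib.request.Request(
    url=f"{workspace_url}/serving-endpoints/{endpoint_name}/invocations",
    data=body,
    headers={
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# urllib.request.urlopen(request) would send it; omitted so the sketch runs offline.
print(request.get_method(), request.full_url)
```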

Manage permissions

By default, only Agent Bricks authors and workspace admins have permissions on the agent. To allow other users to edit or query your agent, you must explicitly grant them permission.

To manage permissions on your agent:

  1. Open your agent in Agent Bricks.
  2. At the top, click the kebab menu.
  3. Click Manage permissions.
  4. In the Permission Settings window, select the user, group, or service principal.
  5. Select the permission to grant:
    • Can Manage: Allows managing the agent, including setting permissions, editing the agent configuration, and improving its quality.
    • Can Query: Allows querying the agent endpoint in AI Playground and through the API. Users with only this permission cannot view or edit the agent in Agent Bricks.
  6. Click Add.
  7. Click Save.
note

For agent endpoints created before September 16, 2025, you can grant Can Query permissions to the endpoint from the Serving endpoints page.

Use PDFs in Agent Bricks

PDFs are not yet supported natively in Agent Bricks: Information Extraction and Custom LLM. However, you can use the Agent Bricks UI workflow to convert a folder of PDF files into markdown, then use the resulting Unity Catalog table as input when building your agent. This workflow uses ai_parse_document for the conversion. Follow these steps:

  1. Click Agents in the left navigation pane to open Agent Bricks in Databricks.

  2. In the Information Extraction or Custom LLM use cases, click Use PDFs.

  3. In the side panel that opens, enter the following fields to create a new workflow to convert your PDFs:

    1. Select folder with PDFs or images: Select the Unity Catalog folder containing the PDFs you want to use.
    2. Select destination table: Select the destination schema for the converted markdown table and, optionally, adjust the table name in the field below.
    3. Select active SQL warehouse: Select the SQL warehouse to run the workflow.

    Configure workflow to use PDFs in Agent Bricks.

  4. Click Start import.

  5. You will be redirected to the All workflows tab, which lists all of your PDF workflows. Use this tab to monitor the status of your jobs.

    Review workflow status to use PDFs in Agent Bricks.

    If your workflow fails, click on the job name to open it and view error messages to help you debug.

  6. When your workflow has completed successfully, click on the job name to open the table in Catalog Explorer to explore and understand the columns.

  7. Use the Unity Catalog table as input data in Agent Bricks when configuring your agent.

Limitations

  • Information Extraction agents have a maximum context length of 128k tokens.
  • Workspaces that have Enhanced Security and Compliance enabled are not supported.
  • Union schema types are not supported.
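Given the 128k-token limit, it can help to flag oversized documents before extraction. The heuristic below (roughly 4 characters per token for English prose) is a rough rule of thumb of ours for triage only, not the agent's actual tokenizer:

```python
CONTEXT_LIMIT_TOKENS = 128_000

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English prose. This is an
    # assumption for triage only, not the tokenizer Agent Bricks actually uses.
    return max(1, len(text) // 4)

doc = "quarterly revenue grew " * 2000  # ~46,000 characters
print(estimate_tokens(doc) <= CONTEXT_LIMIT_TOKENS)  # True
```

Documents flagged by a check like this are candidates for splitting or summarizing before being sent to the agent.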