
Use Agent Bricks: Information Extraction

Beta

This feature is in Beta.

This page describes how to create a generative AI agent for information extraction using Agent Bricks: Information Extraction.

Agent Bricks provides a simple approach to build and optimize domain-specific, high-quality AI agent systems for common AI use cases.

What is Agent Bricks: Information Extraction?

Agent Bricks supports information extraction and simplifies the process of transforming a large volume of unlabeled text documents into a structured table with extracted information for each document.

Examples of information extraction include:

  • Extracting prices and lease information from contracts.
  • Organizing data from customer notes.
  • Getting important details from news articles.

Agent Bricks: Information Extraction leverages automated evaluation capabilities, including MLflow and Agent Evaluation, to enable rapid assessment of the cost-quality tradeoff for your specific extraction task. This assessment allows you to make informed decisions about the balance between accuracy and resource investment.

Requirements

  • A workspace that includes the following:
    • A workspace in one of the supported regions: us-east-1 or us-west-2.
    • The ability to use the ai_query SQL function.
  • Files that you want to extract data from. The files must be in a Unity Catalog volume or table.
    • If you want to use PDFs, convert them to a Unity Catalog table first. See Use PDFs in Agent Bricks.
    • To build your agent, you need at least 1 unlabeled document in your Unity Catalog volume or 1 row in your table.
    • To optimize your agent (see (Optional) Optimize your agent), you must have at least 75 unlabeled documents in your Unity Catalog volume or at least 75 rows in your table.
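To confirm that your dataset meets these thresholds, you can count the files in your volume or the rows in your table from a notebook. The following is a minimal sketch; the volume path and table name are placeholders for your own data.

Python

    # Placeholders: replace with your own Unity Catalog volume path and table name.
    volume_path = "/Volumes/main/info-extraction/bbc_articles/"
    table_name = "main.info_extraction.articles"

    # Count files in the volume (at least 1 to build, at least 75 to optimize).
    file_count = len(dbutils.fs.ls(volume_path))
    print(f"Files in volume: {file_count}")

    # Count rows in the table (at least 1 to build, at least 75 to optimize).
    row_count = spark.table(table_name).count()
    print(f"Rows in table: {row_count}")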

Create an information extraction agent

Go to Agents in the left navigation pane of your workspace. From the Information Extraction tile, click Build.

Step 1: Configure your agent

Configure your agent:

  1. In the Name field, enter a name for your agent.

  2. Select the type of data you want to provide. You can choose either Unlabeled dataset or Labeled dataset.

  3. Select the dataset to provide.

    If you select Unlabeled dataset:

    1. In the Dataset location field, select the folder or table you want to use from your Unity Catalog volume. If you select a folder, the folder must contain documents in a supported document format.
    2. If you're providing a table, select the column containing your text data from the dropdown. The table column must contain data in a supported data format.

    If you want to use PDFs, convert them to a Unity Catalog table first. See Use PDFs in Agent Bricks.

    The following is an example volume:

    /Volumes/main/info-extraction/bbc_articles/

  4. If you provided an unlabeled dataset, Agent Bricks automatically infers and generates a sample JSON output containing data extracted from your dataset in the Sample JSON output field. You can accept the sample output, edit it, or replace it with an example of your desired JSON output. The agent returns extracted information using this format.

    If you provided a labeled dataset, the Sample JSON output field shows the first row of data from the labeled response column. Verify this JSON output matches the expected format.

    For example, the following sample JSON output might be used to extract information from a set of news articles:

    JSON
    {
      "title": "Economy Slides to Recession",
      "category": "Politics",
      "paragraphs": [
        {
          "summary": "GDP fell by 0.1% in the last three months of 2004.",
          "word_count": 38
        },
        {
          "summary": "Consumer spending had been depressed by one-off factors such as the unseasonably mild winter.",
          "word_count": 42
        }
      ],
      "tags": ["Recession", "Economy", "Consumer Spending"],
      "estimate_time_to_read_min": 1,
      "published_date": "2005-01-15",
      "needs_review": false
    }
  5. Click Create agent.

Supported document formats

The following are the supported document file types for your source documents if you provide a Unity Catalog volume, grouped by category.

  • Code files: .c, .cc, .cpp, .cs, .css, .cxx, .go, .h, .hpp, .htm, .html, .java, .js, .json, .jsonl, .jsx, .lua, .md, .php, .pl, .py, .rb, .sh, .swift, .tex, .ts, .tsx
  • Document files: .md, .rst, .tex, .txt, .xml, .xsd, .xsl
  • Log files: .diff, .err, .log, .out, .patch

Supported data formats

Agent Bricks: Information Extraction supports the following data types and schemas for your source documents if you provide a Unity Catalog table. Agent Bricks can also extract these data types from each document.

  • str
  • int
  • float
  • boolean
  • enum (used for classification tasks where the agent should only select from predefined categories)
  • object (custom nested fields)
  • array

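To make these types concrete, the following sketch builds a hypothetical sample JSON output that uses each of them. The field names and values are illustrative only and are not tied to any particular dataset.

Python

    import json

    # Hypothetical sample JSON output covering each supported type.
    sample_output = {
        "vendor_name": "Acme Corp",            # str
        "contract_length_months": 24,          # int
        "monthly_price": 1499.99,              # float
        "auto_renews": True,                   # boolean
        "contract_type": "lease",              # enum: one of a fixed set such as "lease", "purchase", "service"
        "billing_address": {                   # object: custom nested fields
            "city": "Seattle",
            "country": "US",
        },
        "signatories": ["Alice", "Bob"],       # array
    }

    print(json.dumps(sample_output, indent=2))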

Step 2: Improve your agent

On the Improve quality tab, review sample outputs to help you refine your schema definition and add instructions for better results.

  1. On the left side, under Improve quality, review sample outputs. These are sample inputs and responses based on your current agent configuration. Use the arrows to navigate between responses or click View all.

  2. On the Guidelines tab of the Agent Configuration pane on the right side, refine the descriptions for your schema fields. These descriptions are what the agent relies on to understand what you want to extract.


  3. Review recommendations to improve your agent. These appear in a colored box.

  4. Use these suggestions to help you edit field descriptions for better results.

  5. Click Done to dismiss the recommendation.

  6. Edit the other field descriptions as needed. Use the sample outputs on the left to help you refine the schema definition.

  7. You can also add, edit, or remove fields.

  8. (Optional) On the Agent Configuration pane, switch to the Instructions tab, and enter any global instructions for your agent. These instructions will apply to all extracted elements.

  9. Click Save and update to update your agent.

  10. New sample responses are generated on the left side. Review these updated responses and continue to refine your agent configuration until the responses are satisfactory.

Step 3: Evaluate your agent

To ensure you've built a high-quality agent, run an evaluation and review the resulting quality report.

  1. On the left side, switch to the Quality report tab.

  2. Click Run evaluation.

  3. On the New Evaluation pane that slides out, configure the evaluation:


  4. Select the evaluation run name. You can choose to use a generated name or to provide a custom name.

  5. Select whether to run the evaluation on the baseline agent or an optimized agent.

  6. Select the evaluation dataset. You can choose to use the same source dataset used to build your agent or provide a custom evaluation dataset using labeled or unlabeled data.

  7. Click Start evaluation.

  8. After your evaluation run completes, review the quality report with evaluation scores.


  9. Click on a request to view more details.

  10. On the left, review the Summary, Details and timeline, and Linked prompts tabs.

  11. On the right, review the assessments. Click the icon next to an assessment to edit the score and provide feedback. You can also scroll to the bottom to add a new assessment.

If you're happy with the results, proceed to Step 4: Use your agent. If not, see (Optional) Optimize your agent.

Step 4: Use your agent

You can use your agent in workflows across Databricks. By default, Agent Bricks endpoints scale to zero after 3 days of inactivity, so you'll only be billed for the uptime.

To start using your agent, click Use. You can choose to use your agent in several ways:

  • Extract data for all documents: Click Start extraction to open the SQL editor and use ai_query to send requests to your new information extraction agent. A rough sketch of such a query appears after this list.
  • Create ETL pipeline: Click Create pipeline to deploy a pipeline that runs at scheduled intervals to use your agent on new data. See Lakeflow Declarative Pipelines for more information about pipelines.
  • Test your agent: Click Open in Playground to try out your agent in a test environment to see how it works. See Chat with LLMs and prototype generative AI apps using AI Playground to learn more about AI Playground.
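The Start extraction flow generates a query tailored to your agent, so prefer that query. As a rough sketch of what applying the agent to a table with ai_query can look like when run from a notebook, see the following; the endpoint name, table, and column names are placeholders.

Python

    # Rough sketch: apply the information extraction agent to every row of a
    # source table with ai_query. Endpoint, table, and column names are placeholders.
    extracted = spark.sql("""
        SELECT
          doc_text,
          ai_query('my-info-extraction-endpoint', doc_text) AS extracted_json
        FROM main.info_extraction.articles
    """)

    # Persist the results to a new Unity Catalog table (hypothetical name).
    extracted.write.mode("overwrite").saveAsTable("main.info_extraction.articles_extracted")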

(Optional) Optimize your agent

When you use Databricks to optimize your agent, Databricks compares multiple optimization strategies to build and recommend an optimized agent. These strategies include Foundation Model Fine-tuning, which uses Databricks Geos.

To optimize your agent:

  1. Click Optimize at the top. You can also navigate to the Optimizations tab and click Start Optimization. Optimization requires at least 75 files.
  2. Click Start Optimization to confirm. Optimization can take several hours. Making changes to your agent is blocked when optimization is in progress.
  3. After your optimized agent is ready, you can run an evaluation with it from the Quality report tab, and then compare results with the baseline agent. See Step 3: Evaluate your agent.
  4. If the optimized agent meets your needs, start using it. See Step 4: Use your agent.

Query the agent endpoint

There are multiple ways to query the created information extraction agent endpoint. Use the code examples provided in AI Playground as a starting point.

  1. On the Configure tab, click Open in playground.
  2. From Playground, click Get code.
  3. Choose how you want to use the endpoint:
    • Select Apply on data to create a SQL query that applies the agent to a specific table column.
    • Select Curl API for a code example to query the endpoint using curl.
    • Select Python API for a code example to interact with the endpoint using Python.
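For the Python API option, the snippet that Get code produces is the authoritative starting point. As a minimal sketch of what querying the endpoint with the Databricks SDK can look like, see the following; the endpoint name is a placeholder, and the chat-style request shape is an assumption.

Python

    from databricks.sdk import WorkspaceClient
    from databricks.sdk.service.serving import ChatMessage, ChatMessageRole

    ENDPOINT_NAME = "my-info-extraction-endpoint"  # placeholder: use your agent's endpoint name

    w = WorkspaceClient()  # reads credentials from the environment or a configuration profile

    # Assumption: the endpoint accepts chat-style requests, with the document text
    # sent as the user message. Prefer the snippet generated by Get code.
    response = w.serving_endpoints.query(
        name=ENDPOINT_NAME,
        messages=[ChatMessage(role=ChatMessageRole.USER, content="<document text to extract from>")],
    )

    print(response.choices[0].message.content)  # the extracted JSON as a string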

Use PDFs in Agent Bricks

PDFs are not yet supported natively in Agent Bricks: Information Extraction and Custom LLM. However, you can use the Agent Bricks UI workflow to convert a folder of PDF files into markdown, then use the resulting Unity Catalog table as input when building your agent. This workflow uses ai_parse_document for the conversion; a rough sketch of the equivalent query appears after these steps. Follow these steps:

  1. Click Agents in the left navigation pane to open Agent Bricks in Databricks.

  2. In the Information Extraction or Custom LLM use cases, click Use PDFs.

  3. In the side panel that opens, complete the following fields to create a workflow that converts your PDFs:

    1. Select folder with PDFs or images: Select the Unity Catalog folder containing the PDFs you want to use.
    2. Select destination table: Select the destination schema for the converted markdown table and, optionally, adjust the table name in the field below.
    3. Select active SQL warehouse: Select the SQL warehouse to run the workflow.


  4. Click Start import.

  5. You will be redirected to the All workflows tab, which lists all of your PDF workflows. Use this tab to monitor the status of your jobs.


    If your workflow fails, click on the job name to open it and view error messages to help you debug.

  6. When your workflow has completed successfully, click on the job name to open the table in Catalog Explorer to explore and understand the columns.

  7. Use the Unity Catalog table as input data in Agent Bricks when configuring your agent.
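The UI workflow above is the recommended path. For reference, the conversion it performs is roughly equivalent to calling ai_parse_document over the files in your volume, as in the following sketch; the volume path and destination table name are placeholders, and the table produced by the UI workflow may use a different schema.

Python

    # Rough sketch of the ai_parse_document conversion. The volume path and
    # destination table name are placeholders.
    spark.sql("""
        CREATE OR REPLACE TABLE main.info_extraction.pdf_articles_parsed AS
        SELECT
          path,
          ai_parse_document(content) AS parsed
        FROM READ_FILES('/Volumes/main/info-extraction/pdf_articles/', format => 'binaryFile')
    """)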

Limitations

  • Databricks requires at least 75 documents to optimize your agent. For better optimization results, at least 1,000 documents are recommended. Adding more documents increases the knowledge base the agent can learn from, which improves agent quality and extraction accuracy.
  • Information Extraction agents have a maximum context length of 128k tokens.
  • Workspaces that have Enhanced Security and Compliance enabled are not supported.
  • Optimization may fail in workspaces that have serverless egress control network policies with restricted access mode.
  • Union schema types are not supported.