Collect feedback on πŸ—‚οΈ Request Logs from expert users

Preview

This feature is in Private Preview. To try it, reach out to your Databricks contact.

Looking for a different RAG Studio doc? Go to the RAG documentation index

This tutorial walks you through collecting feedback on πŸ—‚οΈ Request Logs from your 🧠 Expert Users. This step is typically done after you receive negative feedback from your πŸ‘€ End Users and need expert input to understand what went wrong and what the correct response should have been.

Data flow


Step 1: Create the πŸ“‹ Review Set & instructions

  1. Run the following SQL to create a Unity Catalog table named <catalog>.<schema>.<review_table_name>. The table can live in any Unity Catalog schema, but we suggest using the schema you configured for the RAG Application.

    Note

    You can modify the SQL code to only select a subset of logs. If you do this, make sure you keep the original schema of the request column.

    CREATE TABLE <catalog>.<schema>.<review_table_name> AS (
      SELECT * FROM <request_log_table> WHERE app_version_id = <model_uri> LIMIT 10
    )
    

    Note

    The schema is intentionally the same between the request logs and the review set.

    Warning

    To review the assessments, you will need the request_id values from <catalog>.<schema>.<review_table_name>. The generated request_ids are random UUIDs; collisions are extremely unlikely, but there is a very low probability that two UUIDs are identical.

  2. Open the file src/review/instructions.md and modify the instructions as needed.

    # Instructions for reviewers
    
    Please review these chats.  For each conversation, read the question asked, assess the bot's response for accuracy, and respond to the feedback prompts accordingly.
    

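Because the generated request_ids are random UUIDs, duplicates are extremely unlikely but not impossible. If you want to confirm your sampled review set contains no colliding IDs before sharing it, a minimal check looks like the following sketch (the IDs shown are hypothetical, not from a real table):

```python
from collections import Counter

def find_duplicate_ids(request_ids):
    """Return any request_id that appears more than once."""
    return [rid for rid, n in Counter(request_ids).items() if n > 1]

# Hypothetical request_ids exported from the review table
sampled_ids = [
    "0b9f6c1a-3d2e-4f5b-8a7c-1d2e3f4a5b6c",
    "7c8d9e0f-1a2b-4c3d-9e8f-0a1b2c3d4e5f",
]
assert find_duplicate_ids(sampled_ids) == []  # no collisions in this sample
```

If the check ever returns a non-empty list, regenerate the review set so each row has a distinct request_id.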
Step 2: Deploy the πŸ“‹ Review Set to the πŸ’¬ Review UI

  1. Run the following command.

     ./rag start-review -e dev -v 1 --review-request-table <catalog>.<schema>.<review_table_name>
    
  2. The URL for the πŸ’¬ Review UI is printed to the console.

    ...truncated for clarity...
    
    Your Review UI is now available. Open the Review UI here: <review_url>
    
  3. Add permissions to the deployed version so your 🧠 Expert Users can access the above URL.

    • Give each Databricks user you wish to grant access read permissions to:

      • the MLflow Experiment

      • the Model Serving endpoint

      • the Unity Catalog Model

    Tip

    🚧 Roadmap 🚧 Support for accessing the πŸ’¬ Review UI through any corporate SSO, with no requirement for a Databricks account.

  4. Share the URL with your 🧠 Expert Users.

  5. The πŸ‘ Assessments from your users will appear in the πŸ‘ Assessment & Evaluation Results Log for the Environment that you deployed to. You can query for just these assessments with:

    SELECT a.*
    FROM <assessment_log> a
    LEFT SEMI JOIN <catalog>.<schema>.<review_table_name> r
      ON a.request.request_id = r.request.request_id
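The LEFT SEMI JOIN above keeps only the assessment-log rows whose request_id appears in the review table, without adding any columns from the review table. As a plain-Python sketch of that semantics (the rows and ratings below are illustrative, not the real log schema):

```python
def left_semi_join(left_rows, right_keys, key):
    """Keep rows from the left side whose key appears on the right;
    no columns from the right side are added (semi-join semantics)."""
    keys = set(right_keys)
    return [row for row in left_rows if row[key] in keys]

# Hypothetical assessment-log rows and review-set request_ids
assessments = [
    {"request_id": "r1", "rating": "positive"},
    {"request_id": "r2", "rating": "negative"},
    {"request_id": "r3", "rating": "positive"},
]
review_ids = ["r1", "r3"]

matched = left_semi_join(assessments, review_ids, "request_id")
# matched contains only the rows for r1 and r3
```

This is why the SQL query returns assessment rows only, filtered to requests that were part of the review set.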