Lakebase Data API
This feature is in Public Preview in the following regions: us-east-1, us-west-2, eu-west-1.
Lakebase Autoscaling is the new version of Lakebase with autoscaling compute, scale-to-zero, branching, and instant restore. For feature comparison with Lakebase Provisioned, see choosing between versions.
The Lakebase Data API is a PostgREST-compatible RESTful interface that allows you to interact directly with your Lakebase Postgres database using standard HTTP methods. It offers API endpoints derived from your database schema, allowing for secure CRUD (Create, Read, Update, Delete) operations on your data without the need for custom backend development.
Overview
The Data API automatically generates RESTful endpoints based on your database schema. Each table in your database becomes accessible through HTTP requests, enabling you to:
- Query data using HTTP GET requests with flexible filtering, sorting, and pagination
- Insert records using HTTP POST requests
- Update records using HTTP PATCH or PUT requests
- Delete records using HTTP DELETE requests
- Execute functions as RPCs using HTTP POST requests
This approach eliminates the need to write and maintain custom API code, allowing you to focus on your application logic and database schema.
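For example, a clients table in the public schema becomes available at the /public/clients path of your API URL. A minimal sketch using curl, assuming the DBX_OAUTH_TOKEN and REST_ENDPOINT environment variables described later in Set up the Data API:
# Read rows from the clients table (equivalent to a SELECT on public.clients)
curl -H "Authorization: Bearer $DBX_OAUTH_TOKEN" \
"$REST_ENDPOINT/public/clients"
# Create a row in the same table
curl -X POST \
-H "Authorization: Bearer $DBX_OAUTH_TOKEN" \
-H "Content-Type: application/json" \
-d '{"name": "Example Co", "email": "hello@example.co"}' \
"$REST_ENDPOINT/public/clients"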
PostgREST compatibility
The Lakebase Data API is compatible with the PostgREST specification. You can:
- Use existing PostgREST client libraries and tools
- Follow PostgREST conventions for filtering, ordering, and pagination
- Adapt documentation and examples from the PostgREST community
The Lakebase Data API is Databricks' implementation designed to be compatible with the PostgREST specification. Because the Data API is an independent implementation, some PostgREST features that aren't applicable to the Lakebase environment aren't included. For details on feature compatibility, see Feature compatibility reference.
For comprehensive details on API features, query parameters, and capabilities, see the PostgREST API reference.
Use cases
The Lakebase Data API is ideal for:
- Web applications: Build frontends that directly interact with your database through HTTP requests
- Microservices: Create lightweight services that access database resources via REST APIs
- Serverless architectures: Integrate with serverless functions and edge computing platforms
- Mobile applications: Provide mobile apps with direct database access through a RESTful interface
- Third-party integrations: Enable external systems to read and write data securely
Set up the Data API
This section guides you through setting up the Data API, from creating required roles to making your first API request.
Prerequisites
The Data API requires a Lakebase Postgres Autoscaling database project. If you don't have one, see Get started with database projects.
If you need sample tables for testing the Data API, create them before enabling the Data API. See Sample schema for a complete example schema.
Enable the Data API
The Data API makes all database access through a single Postgres role named authenticator, which requires no permissions except to log in. When you enable the Data API through the Lakebase App, this role and the necessary infrastructure are created automatically.
To enable the Data API:
- Navigate to the Data API page in your project.
- Click Enable Data API.

This automatically performs all the setup steps, including creating the authenticator role, configuring the pgrst schema, and exposing the public schema through the API.
If you need to expose additional schemas (beyond public), you can modify the exposed schemas in the Advanced Data API settings.
After enabling the Data API
After you enable the Data API, the Lakebase App displays the Data API page with two tabs: API and Settings.

The API tab provides:
- API URL: The REST endpoint URL to use in your application code and API requests. The URL displayed doesn't include the schema, so you must append the schema name (for example, /public) to the URL when making API requests.
- Refresh schema cache: A button to refresh the API's schema cache after you make changes to your database schema. See Refresh schema cache.
- Protect your data: Options to enable Postgres row-level security (RLS) for your tables. See Enable row-level security.
The Settings tab provides options to configure API behavior, such as exposed schemas, maximum rows, CORS settings, and more. See Advanced Data API settings.
Sample schema (optional)
The examples in this documentation use the following schema. You can create your own tables or use this sample schema for testing. Run these SQL statements using the Lakebase SQL Editor or any SQL client:
-- Create clients table
CREATE TABLE clients (
id SERIAL PRIMARY KEY,
name TEXT NOT NULL,
email TEXT UNIQUE NOT NULL,
company TEXT,
phone TEXT,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
-- Create projects table with foreign key to clients
CREATE TABLE projects (
id SERIAL PRIMARY KEY,
name TEXT NOT NULL,
description TEXT,
client_id INTEGER NOT NULL REFERENCES clients(id) ON DELETE CASCADE,
status TEXT DEFAULT 'active',
start_date DATE,
end_date DATE,
budget DECIMAL(10,2),
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
-- Create tasks table with foreign key to projects
CREATE TABLE tasks (
id SERIAL PRIMARY KEY,
title TEXT NOT NULL,
description TEXT,
project_id INTEGER NOT NULL REFERENCES projects(id) ON DELETE CASCADE,
status TEXT DEFAULT 'pending',
priority TEXT DEFAULT 'medium',
assigned_to TEXT,
due_date DATE,
estimated_hours DECIMAL(5,2),
actual_hours DECIMAL(5,2),
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
-- Insert sample data
INSERT INTO clients (name, email, company, phone) VALUES
('Acme Corp', 'contact@acme.com', 'Acme Corporation', '+1-555-0101'),
('TechStart Inc', 'hello@techstart.com', 'TechStart Inc', '+1-555-0102'),
('Global Solutions', 'info@globalsolutions.com', 'Global Solutions Ltd', '+1-555-0103');
INSERT INTO projects (name, description, client_id, status, start_date, end_date, budget) VALUES
('Website Redesign', 'Complete overhaul of company website with modern design', 1, 'active', '2024-01-15', '2024-06-30', 25000.00),
('Mobile App Development', 'iOS and Android app for customer management', 1, 'planning', '2024-07-01', '2024-12-31', 50000.00),
('Database Migration', 'Migrate legacy system to cloud database', 2, 'active', '2024-02-01', '2024-05-31', 15000.00),
('API Integration', 'Integrate third-party services with existing platform', 3, 'completed', '2023-11-01', '2024-01-31', 20000.00);
INSERT INTO tasks (title, description, project_id, status, priority, assigned_to, due_date, estimated_hours, actual_hours) VALUES
('Design Homepage', 'Create wireframes and mockups for homepage', 1, 'in_progress', 'high', 'Sarah Johnson', '2024-03-15', 16.00, 8.00),
('Setup Development Environment', 'Configure local development setup', 1, 'completed', 'medium', 'Mike Chen', '2024-02-01', 4.00, 3.50),
('Database Schema Design', 'Design new database structure', 3, 'completed', 'high', 'Alex Rodriguez', '2024-02-15', 20.00, 18.00),
('API Authentication', 'Implement OAuth2 authentication flow', 4, 'completed', 'high', 'Lisa Wang', '2024-01-15', 12.00, 10.50),
('User Testing', 'Conduct usability testing with target users', 1, 'pending', 'medium', 'Sarah Johnson', '2024-04-01', 8.00, NULL),
('Performance Optimization', 'Optimize database queries and caching', 3, 'in_progress', 'medium', 'Alex Rodriguez', '2024-04-30', 24.00, 12.00);
Configure user permissions
You must authenticate all Data API requests using Databricks OAuth bearer tokens, which are sent via the Authorization header. The Data API restricts access to authenticated Databricks identities, with Postgres governing the underlying permissions.
The authenticator role assumes the identity of the requesting user when processing API requests. For this to work, each Databricks identity that accesses the Data API must have a corresponding Postgres role in your database. If you need to add users to your Databricks account first, see Add users to your account.
Add Postgres roles
Use the databricks_auth extension to create Postgres roles that correspond to Databricks identities:
Create the extension:
CREATE EXTENSION IF NOT EXISTS databricks_auth;
Add a Postgres role:
SELECT databricks_create_role('user@databricks.com', 'USER');
For detailed instructions, see Create an OAuth role for a Databricks identity using SQL.
Don't use your database owner account (the Databricks identity who created the Lakebase project) to access the Data API. The authenticator role requires the ability to assume your role, and that permission can't be granted for accounts with elevated privileges.
If you attempt to grant the database owner role to authenticator, you receive this error:
ERROR: permission denied to grant role "db_owner_user@databricks.com"
DETAIL: Only roles with the ADMIN option on role "db_owner_user@databricks.com" may grant this role.
Grant permissions to users
Now that you've created corresponding Postgres roles for your Databricks identities, you need to grant permissions to those Postgres roles. These permissions control which database objects (schemas, tables, sequences, functions) each user can interact with via API requests.
Grant permissions using standard SQL GRANT statements. This example uses the public schema; if you're exposing a different schema, replace public with your schema name:
-- Allow authenticator to assume the identity of the user
GRANT "user@databricks.com" TO authenticator;
-- Allow user@databricks.com to access everything in public schema
GRANT USAGE ON SCHEMA public TO "user@databricks.com";
GRANT SELECT, UPDATE, INSERT, DELETE ON ALL TABLES IN SCHEMA public TO "user@databricks.com";
GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO "user@databricks.com";
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA public TO "user@databricks.com";
This example grants full access to the public schema for the user@databricks.com identity. Replace this with the actual Databricks identity and adjust permissions based on your requirements.
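Note that GRANT ... ON ALL TABLES applies only to tables that exist when you run it. If the schema will gain tables later, you can also set default privileges so those tables are covered automatically. A sketch using standard PostgreSQL, to be run as the role that will create the new tables:
-- Covers tables and sequences created later in the public schema
ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT SELECT, UPDATE, INSERT, DELETE ON TABLES TO "user@databricks.com";
ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT USAGE ON SEQUENCES TO "user@databricks.com";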
Implement row-level security: The permissions above grant table-level access, but most API use cases require row-level restrictions. For example, in multi-tenant applications, users should only see their own data or their organization's data. Use PostgreSQL row-level security (RLS) policies to enforce fine-grained access control at the database level. See Implement row-level security.
Authentication
To access the Data API, you must provide a Databricks OAuth token in the Authorization header of your HTTP request. The authenticated Databricks identity must have a corresponding Postgres role (created in the previous steps) that defines its database permissions.
Get an OAuth token
Connect to your workspace as the Databricks identity for whom you created a Postgres role in the previous steps and obtain an OAuth token. See Authentication for instructions.
Make a request
With your OAuth token and API URL (available from the API tab in the Lakebase App), you can make API requests using curl or any HTTP client. Remember to append the schema name (for example, /public) to the API URL. The following examples assume you've exported the DBX_OAUTH_TOKEN and REST_ENDPOINT environment variables.
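For example (both values below are placeholders; copy the actual API URL from the API tab):
export DBX_OAUTH_TOKEN="<your-oauth-token>"
export REST_ENDPOINT="<your-api-url>"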
Here's an example call with the expected output (using the sample clients/projects/tasks schema):
curl -H "Authorization: Bearer $DBX_OAUTH_TOKEN" \
"$REST_ENDPOINT/public/clients?select=id,name,projects(id,name)&id=gte.2"
Example response:
[
{ "id": 2, "name": "TechStart Inc", "projects": [{ "id": 3, "name": "Database Migration" }] },
{ "id": 3, "name": "Global Solutions", "projects": [{ "id": 4, "name": "API Integration" }] }
]
For more examples and detailed information about API operations, see the API reference section. For comprehensive details on query parameters and API capabilities, see the PostgREST API reference. For Lakebase-specific compatibility information, see PostgREST compatibility.
Before using the API extensively, configure Row-level security to protect your data.
Manage the Data API
After enabling the Data API, you can manage schema changes and security settings through the Lakebase App.
Refresh schema cache
When you make changes to your database schema (adding tables, columns, or other schema objects), you need to refresh the schema cache. This makes your changes immediately available through the Data API.
To refresh the schema cache:
- Navigate to Data API in the App Backend section of your project.
- Click Refresh schema cache.
The Data API now reflects your latest schema changes.
Enable row-level security
The Lakebase App provides a quick way to enable row-level security (RLS) for tables in your database. When tables exist in your schema, the API tab displays a Protect your data section that shows:
- Tables with RLS enabled
- Tables with RLS disabled (with warnings)
- An Enable RLS button to enable RLS for all tables
Enabling RLS through the Lakebase App turns on row-level security for your tables. When RLS is enabled, all rows become inaccessible to users by default, except for table owners, roles with the BYPASSRLS attribute, and superusers (superusers aren't supported on Lakebase). You must create RLS policies to grant access to specific rows based on your security requirements. See Row-level security for information about creating policies.
To enable RLS for your tables:
- Navigate to Data API in the App Backend section of your project.
- In the Protect your data section, review the tables that don't have RLS enabled.
- Click Enable RLS to enable row-level security for all tables.
You can also enable RLS for individual tables using SQL. See Row-level security for details.
Advanced Data API settings
The Advanced settings section on the API tab in the Lakebase App controls the security, performance, and behavior of your Data API endpoint.
Exposed schemas
Default: public
Defines which PostgreSQL schemas are exposed as REST API endpoints. By default, only the public schema is accessible. If you use other schemas (for example, api, v1), select them from the drop-down list to add them.
Permissions apply: Adding a schema here exposes the endpoints, but the database role used by the API must still have USAGE privileges on the schema and SELECT privileges on the tables.
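For example, if you expose a hypothetical schema named api, grants along these lines are still required before its tables become reachable:
-- api is a hypothetical schema name; replace it and the identity with your own
GRANT USAGE ON SCHEMA api TO "user@databricks.com";
GRANT SELECT ON ALL TABLES IN SCHEMA api TO "user@databricks.com";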
Maximum rows
Default: Empty
When set, enforces a hard limit on the number of rows returned in a single API response. This prevents accidental performance degradation and unexpected egress costs from large queries. Clients should use pagination to retrieve data within this threshold.
CORS allowed origins
Default: Empty (Allows all origins)
Controls which web domains can fetch data from your API using a browser.
- Empty: Allows * (any domain). Useful for development.
- Production: List your specific domains (for example, https://myapp.com) to prevent unauthorized websites from querying your API.
OpenAPI specification
Default: Disabled
Controls whether an auto-generated OpenAPI 3 schema is available at /openapi.json. This schema describes your tables, columns, and REST endpoints. When enabled, you can use it to:
- Generate API documentation (Swagger UI, Redoc)
- Build typed client libraries (TypeScript, Python, Go)
- Import your API into Postman
- Integrate with API gateways and other OpenAPI-based tools
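For example, you might fetch the generated spec with curl. A sketch, assuming the spec is served from the API root and requires the same bearer token as other requests:
curl -H "Authorization: Bearer $DBX_OAUTH_TOKEN" \
"$REST_ENDPOINT/openapi.json"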
Server timing headers
Default: Disabled
When enabled, the Data API includes Server-Timing headers in each response. These headers show how long different parts of the request took to process (for example, database execution time and internal processing time). You can use this information to debug slow queries, measure performance, and troubleshoot latency issues in your application.
After making changes to any advanced settings, click Save to apply them.
Row-level security
Row-level security (RLS) policies provide fine-grained access control by restricting which rows users can access in a table.
How RLS works with the Data API: When a user makes an API request, the authenticator role assumes that user's identity. Any RLS policies defined for that user's role are automatically enforced by PostgreSQL, filtering the data they can access. This happens at the database level, so even if application code tries to query all rows, the database only returns rows the user is permitted to see. This provides defense-in-depth security without requiring filtering logic in your application code.
Why RLS is critical for APIs: Unlike direct database connections where you control the connection context, HTTP APIs expose your database to multiple users through a single endpoint. Table-level permissions alone mean that if a user can access the clients table, they can access all client records unless you implement filtering. RLS policies ensure each user automatically sees only their authorized data.
RLS is essential for:
- Multi-tenant applications: Isolate data between different customers or organizations
- User-owned data: Ensure users only access their own records
- Team-based access: Limit visibility to team members or specific groups
- Compliance requirements: Enforce data access restrictions at the database level
Enable RLS
You can enable RLS through the Lakebase App or using SQL statements. For instructions on using the Lakebase App, see Enable row-level security.
If you have tables without RLS enabled, the API tab in the Lakebase App displays a warning that authenticated users can view all rows in those tables. The Data API interacts directly with your Postgres schema, and because the API is accessible over the internet, it's crucial to enforce security at the database level using PostgreSQL row-level security.
To enable RLS using SQL, run the following command:
ALTER TABLE clients ENABLE ROW LEVEL SECURITY;
Create RLS policies
After enabling RLS on a table, you must create policies that define access rules. Without policies, users cannot access any rows (all rows are hidden by default).
How policies work: When RLS is enabled on a table, users can only see rows that match at least one policy. All other rows are filtered out. Table owners, roles with the BYPASSRLS attribute, and superusers can bypass the row security system (though superusers aren't supported on Lakebase).
In Lakebase, current_user returns the authenticated user's email address (for example, user@databricks.com). Use this in your RLS policies to identify which user is making the request.
Basic policy syntax:
CREATE POLICY policy_name ON table_name
[TO role_name]
USING (condition);
- policy_name: A descriptive name for the policy
- table_name: The table to apply the policy to
- TO role_name: Optional. Specifies the role for this policy. Omit this clause to apply the policy to all roles.
- USING (condition): The condition that determines which rows are visible
RLS tutorial
The following tutorial uses the sample schema from this documentation (clients, projects, tasks tables) to show how to implement row-level security.
Scenario: You have multiple users who should only see their assigned clients and related projects. Restrict access so that:
- alice@databricks.com can only view clients with IDs 1 and 2
- bob@databricks.com can only view clients with IDs 2 and 3
Step 1: Enable RLS on the clients table
ALTER TABLE clients ENABLE ROW LEVEL SECURITY;
Step 2: Create a policy for Alice
CREATE POLICY alice_clients ON clients
TO "alice@databricks.com"
USING (id IN (1, 2));
Step 3: Create a policy for Bob
CREATE POLICY bob_clients ON clients
TO "bob@databricks.com"
USING (id IN (2, 3));
Step 4: Test the policies
When Alice makes an API request:
# Alice's token in the Authorization header
curl -H "Authorization: Bearer $ALICE_TOKEN" \
"$REST_ENDPOINT/public/clients?select=id,name"
Response (Alice only sees clients 1 and 2):
[
{ "id": 1, "name": "Acme Corp" },
{ "id": 2, "name": "TechStart Inc" }
]
When Bob makes an API request:
# Bob's token in the Authorization header
curl -H "Authorization: Bearer $BOB_TOKEN" \
"$REST_ENDPOINT/public/clients?select=id,name"
Response (Bob only sees clients 2 and 3):
[
{ "id": 2, "name": "TechStart Inc" },
{ "id": 3, "name": "Global Solutions" }
]
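To verify which policies are in place on a table, you can query PostgreSQL's standard pg_policies view:
SELECT schemaname, tablename, policyname, roles, qual
FROM pg_policies
WHERE tablename = 'clients';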
Common RLS patterns
These patterns cover typical security requirements for the Data API:
User ownership - Restricts rows to the authenticated user:
CREATE POLICY user_owned_data ON tasks
USING (assigned_to = current_user);
Tenant isolation - Restricts rows to the user's organization:
CREATE POLICY tenant_data ON clients
USING (tenant_id = (
SELECT tenant_id
FROM user_tenants
WHERE user_email = current_user
));
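This pattern assumes a tenant_id column on clients and a user_tenants mapping table, neither of which is part of the sample schema. A hypothetical shape for the mapping table:
-- Hypothetical table assumed by the tenant isolation policy above
CREATE TABLE user_tenants (
user_email TEXT PRIMARY KEY,
tenant_id INTEGER NOT NULL
);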
Team membership - Restricts rows to the user's teams:
CREATE POLICY team_projects ON projects
USING (client_id IN (
SELECT client_id
FROM team_clients
WHERE team_id IN (
SELECT team_id
FROM user_teams
WHERE user_email = current_user
)
));
Role-based access - Restricts rows based on role membership:
CREATE POLICY manager_access ON tasks
USING (
status = 'pending' OR
pg_has_role(current_user, 'managers', 'member')
);
Read-only for specific roles - Different policies for different operations:
-- Allow all users to read their assigned tasks
CREATE POLICY read_assigned_tasks ON tasks
FOR SELECT
USING (assigned_to = current_user);
-- Only managers can update tasks
CREATE POLICY update_tasks ON tasks
FOR UPDATE
TO "managers"
USING (true);
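Policies for writes validate new or modified rows with WITH CHECK instead of USING. For example, using standard PostgreSQL syntax, to let users insert only tasks assigned to themselves:
-- Users can insert tasks only when assigning them to themselves
CREATE POLICY insert_own_tasks ON tasks
FOR INSERT
WITH CHECK (assigned_to = current_user);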
Additional resources
For comprehensive information about implementing RLS, including policy types, security best practices, and advanced patterns, see the PostgreSQL Row Security Policies documentation.
For more information about permissions, see Manage permissions.
API reference
This section assumes you've completed the setup steps, configured permissions, and implemented row-level security. The following sections provide reference information for using the Data API, including common operations, advanced features, security considerations, and compatibility details.
Basic operations
Query records
Retrieve records from a table using HTTP GET:
curl -H "Authorization: Bearer $DBX_OAUTH_TOKEN" \
"$REST_ENDPOINT/public/clients"
Example response:
[
{ "id": 1, "name": "Acme Corp", "email": "contact@acme.com", "company": "Acme Corporation", "phone": "+1-555-0101" },
{
"id": 2,
"name": "TechStart Inc",
"email": "hello@techstart.com",
"company": "TechStart Inc",
"phone": "+1-555-0102"
}
]
Filter results
Use query parameters to filter results. This example retrieves clients with id greater than or equal to 2:
curl -H "Authorization: Bearer $DBX_OAUTH_TOKEN" \
"$REST_ENDPOINT/public/clients?id=gte.2"
Example response:
[
{ "id": 2, "name": "TechStart Inc", "email": "hello@techstart.com" },
{ "id": 3, "name": "Global Solutions", "email": "info@globalsolutions.com" }
]
Select specific columns and join tables
Use the select parameter to retrieve specific columns and join related tables:
curl -H "Authorization: Bearer $DBX_OAUTH_TOKEN" \
"$REST_ENDPOINT/public/clients?select=id,name,projects(id,name)&id=gte.2"
Example response:
[
{ "id": 2, "name": "TechStart Inc", "projects": [{ "id": 3, "name": "Database Migration" }] },
{ "id": 3, "name": "Global Solutions", "projects": [{ "id": 4, "name": "API Integration" }] }
]
Insert records
Create new records using HTTP POST:
curl -X POST \
-H "Authorization: Bearer $DBX_OAUTH_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"name": "New Client",
"email": "newclient@example.com",
"company": "New Company Inc",
"phone": "+1-555-0104"
}' \
"$REST_ENDPOINT/public/clients"
Update records
Update existing records using HTTP PATCH:
curl -X PATCH \
-H "Authorization: Bearer $DBX_OAUTH_TOKEN" \
-H "Content-Type: application/json" \
-d '{"phone": "+1-555-0199"}' \
"$REST_ENDPOINT/public/clients?id=eq.1"
Delete records
Delete records using HTTP DELETE:
curl -X DELETE \
-H "Authorization: Bearer $DBX_OAUTH_TOKEN" \
"$REST_ENDPOINT/public/clients?id=eq.5"
Advanced features
Pagination
Control the number of records returned using the limit and offset parameters:
curl -H "Authorization: Bearer $DBX_OAUTH_TOKEN" \
"$REST_ENDPOINT/public/tasks?limit=10&offset=0"
Sorting
Sort results using the order parameter:
curl -H "Authorization: Bearer $DBX_OAUTH_TOKEN" \
"$REST_ENDPOINT/public/tasks?order=due_date.desc"
Complex filtering
Combine multiple filter conditions:
curl -H "Authorization: Bearer $DBX_OAUTH_TOKEN" \
"$REST_ENDPOINT/public/tasks?status=eq.in_progress&priority=eq.high"
Common filter operators:
- eq - equals
- gte - greater than or equal
- lte - less than or equal
- neq - not equal
- like - pattern matching
- in - matches any value in list
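For example, combining in and like (in PostgREST URL syntax, * is the like wildcard):
curl -H "Authorization: Bearer $DBX_OAUTH_TOKEN" \
"$REST_ENDPOINT/public/tasks?status=in.(pending,in_progress)&title=like.*Design*"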
For more information about supported query parameters and API features, see the PostgREST API reference. For Lakebase-specific compatibility information, see PostgREST compatibility.
Feature compatibility reference
This section lists PostgREST features that have different behavior or are not supported in the Lakebase Data API.
Authentication and authorization
| Feature | Status | Details |
|---|---|---|
| JWT configuration | Not applicable | The Lakebase Data API uses Databricks OAuth tokens instead of JWT authentication. JWT-specific configuration options (custom secrets, RS256 keys, audience validation) are not available. |
Resource embedding
| Feature | Status | Details |
|---|---|---|
| Computed relationships | Not supported | Custom relationships defined through database functions that return table rows are not supported. |
| Inner join embedding (!inner) | Not supported | The !inner modifier, which filters parent rows by their embedded resources, is not supported. |
Response formats
| Feature | Status | Details |
|---|---|---|
| Custom media type handlers | Not supported | Custom output formats through PostgreSQL aggregates (binary formats, XML, protocol buffers) are not supported. |
| Stripped nulls | Not supported | The Prefer: nulls=stripped preference, which removes null values from JSON responses, is not supported. |
| PostGIS GeoJSON | Partially supported | PostGIS geometry columns can be queried, but automatic GeoJSON formatting via the application/geo+json media type is not supported. |
Pagination and counting
| Feature | Status | Details |
|---|---|---|
| Planned count | Not supported | The Prefer: count=planned preference is not supported. |
| Estimated count | Not supported | The Prefer: count=estimated preference is not supported. |
Request preferences
| Feature | Status | Details |
|---|---|---|
| Timezone preference | Partially supported | Timezone handling exists, but the Prefer: timezone= header for per-request time zones is not supported. |
| Transaction control | Not supported | Transaction control via Prefer: tx=commit and Prefer: tx=rollback is not supported. |
| Preference handling modes | Not supported | The Prefer: handling=strict and Prefer: handling=lenient modes are not supported. |
Observability
The Lakebase Data API implements its own observability features. The following PostgREST observability features are not supported:
| Feature | Status | Details |
|---|---|---|
| Query plan exposure | Not supported | The application/vnd.pgrst.plan media type for viewing query execution plans is not supported. |
| Server-Timing header | Not supported | PostgREST's Server-Timing header is not supported. Lakebase provides its own Server timing headers setting instead; see Advanced Data API settings. |
| Trace header propagation | Not supported | X-Request-Id and custom trace header propagation for distributed tracing is not supported. Lakebase implements its own observability features. |
Advanced configuration
| Feature | Status | Details |
|---|---|---|
| Application settings (GUCs) | Not supported | Passing custom configuration values to database functions via PostgreSQL GUCs is not supported. |
| Pre-request function | Not supported | The db-pre-request setting, which runs a function before each request, is not supported. |
For more information about PostgREST features, see the PostgREST documentation.
Security considerations
The Data API enforces your database's security model at multiple levels:
- Authentication: All requests require valid OAuth token authentication
- Role-based access: Database-level permissions control which tables and operations users can access
- Row-level security: RLS policies enforce fine-grained access control, restricting which specific rows users can see or modify
- User context: The API assumes the authenticated user's identity, ensuring database permissions and policies apply correctly
Recommended security practices
For production deployments:
- Implement row-level security: Use RLS policies to restrict data access at the row level. This is especially important for multi-tenant applications and user-owned data. See Row-level security.
- Grant minimal permissions: Only grant the permissions users need (SELECT, INSERT, UPDATE, DELETE) on specific tables rather than granting broad access.
- Use separate roles per application: Create dedicated roles for different applications or services rather than sharing a single role.
- Audit access regularly: Review granted permissions and RLS policies periodically to ensure they match your security requirements.
For information about managing roles and permissions, see Manage permissions.