Getting Started

The integrations API is a REST API that allows you to integrate your remote data platform with Immuta so that Immuta can manage and enforce access controls on your data.

To configure an integration using the API, you must have the APPLICATION_ADMIN Immuta permission.

Configure your integration

Use the POST /integrations endpoint to configure the integration so that Immuta can enforce access controls on tables registered as Immuta data sources. See the sections below for sample requests and configuration details for each supported integration.
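
Every example below shares the same request shape; only the type value and config payload differ. As a convenience, here is a minimal sketch of that shared pattern with the Immuta URL and API key factored into environment variables (IMMUTA_URL and IMMUTA_API_KEY are illustrative names, not anything the API requires):

# Store the tenant URL and API key once, then reuse them in each request.
export IMMUTA_URL='https://www.organization.immuta.com'
export IMMUTA_API_KEY='846e9e43c86a4ct1be14290d95127d13f'

curl -X 'POST' \
    "$IMMUTA_URL/integrations" \
    -H 'accept: application/json' \
    -H 'Content-Type: application/json' \
    -H "Authorization: $IMMUTA_API_KEY" \
    -d '{
    "type": "<integration type>",
    "autoBootstrap": false,
    "config": { }
    }'

Replace the placeholder type and empty config with one of the payloads shown in the sections below.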

Amazon S3 example

Private preview: This integration is available to select accounts. Contact your Immuta representative for details.

curl -X 'POST' \
    'https://www.organization.immuta.com/integrations' \
    -H 'accept: application/json' \
    -H 'Content-Type: application/json' \
    -H 'Authorization: 846e9e43c86a4ct1be14290d95127d13f' \
    -d '{
    "type": "Native S3",
    "autoBootstrap": false,
    "config": {
        "name": "S3 integration",
        "awsAccountId": "123456789",
        "awsRegion": "us-east-1",
        "awsLocationRole": "arn:aws:iam::123456789:role/access-grants-instance-role",
        "awsLocationPath": "s3://",
        "authenticationType": "accessKey",
        "awsAccessKeyId": "123456789",
        "awsSecretAccessKey": "123456789"
    }
    }'
  1. Set up an Access Grants instance.

  2. Copy the request example.

  3. Replace the values in the request with your Immuta URL and API key or bearer token.

  4. Change the config values to your own, where

    • name is the name of the integration; it must be unique across all Amazon S3 integrations configured in Immuta.

    • awsAccountId is the ID of your AWS account.

    • awsRegion is the account's AWS region (such as us-east-1).

    • awsLocationRole is the AWS IAM role ARN assigned to the base access grants location. This is the role the AWS Access Grants service assumes to vend credentials to the grantee.

    • awsLocationPath is the base S3 location prefix that Immuta will use for this connection when registering S3 data sources. This path must be unique across all S3 integrations configured in Immuta.

    • awsAccessKeyId is the AWS access key ID of the AWS account configuring the integration.

    • awsSecretAccessKey is the AWS secret access key of the AWS account configuring the integration.

For more configuration examples, see the Configure an Amazon S3 integration guide. For information about the configuration payload, see the Integration payload reference guide.

Azure Synapse Analytics example

curl -X 'POST' \
    'https://www.organization.immuta.com/integrations' \
    -H 'accept: application/json' \
    -H 'Content-Type: application/json' \
    -H 'Authorization: 846e9e43c86a4ct1be14290d95127d13f' \
    -d '{
    "type": "Azure Synapse Analytics",
    "autoBootstrap": true,
    "config": {
        "host": "organization.azure.com",
        "schema": "immuta",
        "database": "sample_database",
        "username": "taylor@synapse.com",
        "password": "abc1234",
        "authenticationType": "userPassword"
    }
    }'

Replace the values in the request with your Immuta URL and API key or bearer token, and change the config values to your own, where

  • host is the URL of your Azure Synapse Analytics account.

  • schema is the name of the Immuta-managed schema where all your secure views will be created and stored.

  • database is the name of an existing database where the Immuta system user will store all Immuta-generated schemas and views.

  • username and password are the username and password of the system account that can act on Azure Synapse Analytics objects and configure the integration.

The example sets autoBootstrap to true, which grants Immuta one-time access to credentials to configure the resources in your Azure Synapse Analytics environment for you. If you set autoBootstrap to false, you must run the bootstrap script in your Azure Synapse Analytics environment yourself before making the request.

For more configuration examples, see the Configure an Azure Synapse Analytics integration guide. For information about the configuration payload, see the Integration payload reference guide.

Databricks Unity Catalog example

curl -X 'POST' \
    'https://www.organization.immuta.com/integrations' \
    -H 'accept: application/json' \
    -H 'Content-Type: application/json' \
    -H 'Authorization: 846e9e43c86a4ct1be14290d95127d13f' \
    -d '{
    "type": "Databricks",
    "autoBootstrap": true,
    "config": {
        "workspaceUrl": "www.example-workspace.cloud.databricks.com",
        "httpPath": "sql/protocolv1/o/0/0000-00000-abc123",
        "authenticationType": "token",
        "token": "REDACTED",
        "catalog": "immuta"
    }
    }'

Replace the values in the request with your Immuta URL and API key or bearer token, and change the config values to your own, where

  • workspaceUrl is your Databricks workspace URL.

  • httpPath is the HTTP path of your Databricks cluster or SQL warehouse.

  • token is the Databricks personal access token for the Immuta service principal.

  • catalog is the name of the Databricks catalog Immuta will create to store internal entitlements and other Immuta-specific user data. This catalog is readable only by the Immuta service principal, and access to it should not be granted to other users. The catalog name may contain only letters, numbers, and underscores and cannot start with a number.

For more configuration examples, see the Configure a Databricks Unity Catalog integration guide. For information about the configuration payload, see the Integration payload reference guide.

Google BigQuery example

Private preview: This integration is available to select accounts. Contact your Immuta representative for details.

curl -X 'POST' \
    'https://www.organization.immuta.com/integrations' \
    -H 'accept: application/json' \
    -H 'Content-Type: application/json' \
    -H 'Authorization: 846e9e43c86a4ct1be14290d95127d13f' \
    -d '{
    "type": "Google BigQuery",
    "autoBootstrap": false,
    "config": {
        "role": "immuta",
        "datasetSuffix": "_secureView",
        "dataset": "immuta",
        "location": "us-east1",
        "credential": "{\"type\":\"service_account\",\"project_id\":\"innate-conquest-123456\",\"private_key_id\":\"9163c12345690924f5dd218ff39\",\"private_key\":\"-----BEGIN PRIVATE KEY-----\nXXXXXXXro0s\n/yQlPQijowkccmrmWJyr93kdLnwJzBvLHCto/+W\ncvF2ygX9oM/dyUK//z//4nptMp+Ck//Yw3D4rIBwGu4DWiR1qRnf\nDoGyXfThPTQ==\n-----END PRIVATE KEY-----\n\",\"client_email\":\"service-account-id@innate-conquest-123456.iam.gserviceaccount.com\",\"client_id\":\"1166290***432952487857\",\"auth_uri\":\"https://accounts.google.com/o/oauth2/auth\",\"token_uri\":\"https://oauth2.googleapis.com/token\",\"auth_provider_x509_cert_url\":\"https://www.googleapis.com/oauth2/v1/certs\",\"client_x509_cert_url\":\"https://www.googleapis.com/robot/v1/metadata/x509/service-accound-id%40innate-conquest-123456.iam.gserviceaccount.com\",\"universe_domain\":\"googleapis.com\"}"
    }
    }'
  1. Create a Google Cloud service account and role by either using the Google Cloud console or the provided Immuta script.

  2. Copy the request example. The example uses JSON format, but the request also accepts YAML.

  3. Replace the Immuta URL and API key with your own.

  4. Change the config values to your own, where

    • role is the Google Cloud role used to connect to Google BigQuery.

    • datasetSuffix is the suffix appended to the name of each dataset created to store secure views. This string must start with an underscore.

    • dataset is the name of the BigQuery dataset to provision inside of the project for Immuta metadata storage.

    • location is the dataset's location, which can be any valid GCP location (such as us-east1).

    • credential is the Google BigQuery service account JSON keyfile credential content. See the Google documentation for guidance on generating and downloading this keyfile. A sketch of embedding the keyfile into the request with jq follows this list.

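Because credential must contain the entire keyfile escaped into a single JSON string, editing it by hand is error-prone. Here is a minimal sketch of building the same request with jq instead; keyfile.json is an assumed local path to the downloaded keyfile, and jq 1.6 or later is required for the --rawfile option:

# Build the request body with jq so the keyfile content is escaped into
# the "credential" string field automatically.
curl -X 'POST' \
    'https://www.organization.immuta.com/integrations' \
    -H 'accept: application/json' \
    -H 'Content-Type: application/json' \
    -H 'Authorization: 846e9e43c86a4ct1be14290d95127d13f' \
    -d "$(jq -n --rawfile cred keyfile.json '{
        type: "Google BigQuery",
        autoBootstrap: false,
        config: {
            role: "immuta",
            datasetSuffix: "_secureView",
            dataset: "immuta",
            location: "us-east1",
            credential: $cred
        }
    }')"
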
For more configuration examples, see the Configure a Google BigQuery integration guide. For information about the configuration payload, see the Integration payload reference guide.

Redshift example

curl -X 'POST' \
    'https://www.organization.immuta.com/integrations' \
    -H 'accept: application/json' \
    -H 'Content-Type: application/json' \
    -H 'Authorization: 846e9e43c86a4ct1be14290d95127d13f' \
    -d '{
    "type": "Redshift",
    "autoBootstrap": true,
    "config": {
        "host": "organization.aws.amazon.com",
        "database": "immuta",
        "initialDatabase": "REDSHIFT_SAMPLE_DATA",
        "authenticationType": "userPassword",
        "username": "taylor@redshift.com",
        "password": "abc1234"
    }
    }'

Replace the values in the request with your Immuta URL and API key or bearer token, and change the config values to your own, where

  • host is the URL of your Redshift account.

  • database is the name of a new empty database that the Immuta system user will manage and store metadata in.

  • initialDatabase is the name of an existing Redshift database that Immuta connects to initially in order to create the Immuta-managed database.

  • authenticationType is the type of authentication to use when connecting to Redshift.

  • username and password are the credentials for the system account that can act on Redshift objects and configure the integration.

The example sets autoBootstrap to true, which grants Immuta one-time access to credentials to configure the resources in your Redshift environment for you. If you set autoBootstrap to false, you must run the bootstrap script in your Redshift environment yourself before making the request.

For more configuration examples, see the Configure a Redshift integration guide. For information about the configuration payload, see the Integration payload reference guide.

Snowflake example

curl -X 'POST' \
    'https://www.organization.immuta.com/integrations' \
    -H 'accept: application/json' \
    -H 'Content-Type: application/json' \
    -H 'Authorization: 846e9e43c86a4ct1be14290d95127d13f' \
    -d '{
    "type": "Snowflake",
    "autoBootstrap": true,
    "config": {
        "host": "organization.us-east-1.snowflakecomputing.com",
        "warehouse": "SAMPLE_WAREHOUSE",
        "database": "SNOWFLAKE_SAMPLE_DATA",
        "authenticationType": "userPassword",
        "username": "taylor@snowflake.com",
        "password": "abc1234",
        "role": "ACCOUNTADMIN"
    }
    }'

Replace the values in the request with your Immuta URL and API key or bearer token, and change the config values to your own, where

  • host is the URL of your Snowflake account.

  • warehouse is the default pool of Snowflake compute resources the Immuta system user will use to run queries and perform other Snowflake operations.

  • database is the name of a new empty database that the Immuta system user will manage and store metadata in.

  • authenticationType is the type of authentication to use when connecting to Snowflake.

  • username and password are the credentials of a Snowflake account attached to a role with the privileges required to configure the integration. These credentials are not stored; Immuta uses them only to configure the integration.

  • role is a Snowflake role that has been granted the privileges required to configure the integration.

The example sets autoBootstrap to true, which grants Immuta one-time access to credentials to configure the resources in your Snowflake environment for you. If you set autoBootstrap to false, you must run the bootstrap script in your Snowflake environment yourself before making the request.

For more configuration examples, see the Configure a Snowflake integration guide. For information about the configuration payload, see the Integration payload reference guide.

Starburst (Trino) example

  1. Replace the values in the request with your Immuta URL and API key or bearer token.

    curl -X 'POST' \
        'https://www.organization.immuta.com/integrations' \
        -H 'accept: application/json' \
        -H 'Content-Type: application/json' \
        -H 'Authorization: 846e9e43c86a4ct1be14290d95127d13f' \
        -d '{
        "type": "Trino"
        }'
  2. Navigate to the Immuta App Settings page and click the Integrations tab.

  3. Click your enabled Starburst (Trino) integration and copy the configuration snippet displayed.

  4. Follow the steps in the Configure the Immuta system access control plugin in Starburst or Trino section to add the configuration to the appropriate immuta-access-control.properties file and finish configuring your cluster. A sketch of such a properties file follows these steps.
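
For orientation, here is a sketch of what that configuration snippet might look like. The property names below are illustrative assumptions, not confirmed values; the snippet displayed on the App Settings page is the authoritative source for your cluster:

# Illustrative immuta-access-control.properties content. These keys are
# assumptions for illustration; use the snippet copied from the App
# Settings page instead.
access-control.name=immuta
immuta.endpoint=https://www.organization.immuta.com
immuta.apikey=<api key from the configuration snippet>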

Protect your data

Map usernames and create policies before you register your metadata to ensure that policies are enforced on tables and views immediately.

  1. Map usernames to Immuta to ensure Immuta properly enforces policies and audits user queries.

  2. Build global policies in Immuta to enforce access controls:

    • Create policies via the API

    • Create subscription policies in the UI

    • Create data policies in the UI

Register metadata

Register your metadata using the API or Immuta UI:

  • API how-to guides

    • Amazon S3

    • Azure Synapse Analytics

    • Databricks

    • Redshift

    • Snowflake

    • Starburst (Trino)

  • UI how-to guide: Create a data source

Additional resources

How-to guides

See the following how-to guides for configuration examples and steps for creating, managing, or disabling your integration:

  • Amazon S3 integration API reference

  • Azure Synapse Analytics integration API reference

  • Databricks Unity Catalog integration API reference

  • Google BigQuery integration API reference

  • Redshift integration API reference

  • Snowflake integration API reference

  • Starburst (Trino) integration API reference

Reference guides

See the following reference guides for information about the integrations API endpoints, payloads, and responses:

  • Integrations API endpoints

  • Integrations API payloads

  • Integrations API response schema

  • HTTP status codes and error messages
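
As a quick sanity check after configuring any of the integrations above, you can list the integrations Immuta has registered. This is a minimal sketch, assuming the GET /integrations endpoint covered in the endpoints reference above:

curl -X 'GET' \
    'https://www.organization.immuta.com/integrations' \
    -H 'accept: application/json' \
    -H 'Authorization: 846e9e43c86a4ct1be14290d95127d13f'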