The integrations resource allows you to create, configure, and manage your integrations. How Immuta manages and administers policies in your data platform varies by integration.
To configure or manage an integration, users must have the APPLICATION_ADMIN Immuta permission.
GET /integrations
Gets all of the integration configurations.
Response
The response returns the configuration for all integrations. See the response schema reference for details about the response schema. An unsuccessful request returns the status code and an error message. See the HTTP status codes and error messages for a list of statuses, error messages, and troubleshooting guidance.
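For example, a request along these lines returns every configured integration. This is a sketch: the base URL is a placeholder for your Immuta tenant, and it assumes you authenticate with an API key passed in the Authorization header; adjust both for your deployment.

```bash
# Sketch: list all integration configurations.
# Replace the URL and API key with values for your Immuta tenant.
curl -X GET \
  -H "Authorization: <your-api-key>" \
  "https://<your-immuta-url>/integrations"
```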
POST /integrations
Creates an integration configuration that allows Immuta to manage access policies on data registered in Immuta.
Amazon S3 example
When you connect Immuta to your AWS account, the awsLocationPath is the base S3 location prefix that Immuta will use for this connection when registering S3 data sources.
This request configures the integration using the AWS access key authentication method.
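A request in this shape would create the configuration. It is a sketch with the same URL and authentication assumptions as the earlier example: type, autoBootstrap, and awsLocationPath come from this page, but the remaining connection and AWS access key fields are omitted, so consult the payload description for your integration for the exact field names.

```bash
# Sketch: create an Amazon S3 integration. Remaining connection and AWS
# access key fields are omitted -- see the payload description for your integration.
curl -X POST \
  -H "Authorization: <your-api-key>" \
  -H "Content-Type: application/json" \
  "https://<your-immuta-url>/integrations" \
  -d '{
    "type": "Native S3",
    "autoBootstrap": false,
    "config": {
      "awsLocationPath": "s3://<your-bucket>/<prefix>"
    }
  }'
```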
Azure Synapse Analytics example
When you connect Immuta to your Azure Synapse Analytics account, the schema you specify is where all the policy-enforced views will be created and managed by Immuta.
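A minimal sketch of the request, with the same URL and authentication assumptions as above. It assumes the schema is supplied under config as schema; the host, database, and credential fields are omitted, so see the payload description for your integration for the full set of keys.

```bash
# Sketch: create an Azure Synapse Analytics integration. Connection and
# credential fields are omitted -- see the payload description for your integration.
curl -X POST \
  -H "Authorization: <your-api-key>" \
  -H "Content-Type: application/json" \
  "https://<your-immuta-url>/integrations" \
  -d '{
    "type": "Azure Synapse Analytics",
    "autoBootstrap": true,
    "config": {
      "schema": "IMMUTA"
    }
  }'
```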
Databricks Unity Catalog example
This request creates a Databricks Unity Catalog integration configuration that allows Immuta to administer Unity Catalog policies on data registered in Immuta.
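A minimal sketch of the request, with the same URL and authentication assumptions as above. The workspace connection and credential fields under config are omitted here, so see the payload description for your integration for the exact keys.

```bash
# Sketch: create a Databricks Unity Catalog integration. Workspace connection
# and credential fields are omitted -- see the payload description for your integration.
curl -X POST \
  -H "Authorization: <your-api-key>" \
  -H "Content-Type: application/json" \
  "https://<your-immuta-url>/integrations" \
  -d '{
    "type": "Databricks",
    "autoBootstrap": false,
    "config": {}
  }'
```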
Google BigQuery example
When you connect Immuta to your Google BigQuery account, the dataset you specify is where all the policy-enforced views will be created and managed by Immuta.
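A minimal sketch of the request, with the same URL and authentication assumptions as above. It assumes the dataset is supplied under config as dataset; autoBootstrap must be false for Google BigQuery, and the service account and project fields are omitted, so check the payload description for your integration for the exact keys.

```bash
# Sketch: create a Google BigQuery integration. Service account and project
# fields are omitted -- see the payload description for your integration.
curl -X POST \
  -H "Authorization: <your-api-key>" \
  -H "Content-Type: application/json" \
  "https://<your-immuta-url>/integrations" \
  -d '{
    "type": "Google BigQuery",
    "autoBootstrap": false,
    "config": {
      "dataset": "<policy_views_dataset>"
    }
  }'
```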
Redshift example
When you connect Immuta to your Redshift account, the Immuta system user will use the database you specify to manage and store metadata. The initial database (REDSHIFT_SAMPLE_DATA, in the request below) is an existing Redshift database that Immuta connects to in order to create the Immuta-managed database (immuta, in the request below).
This request specifies userPassword as the authentication type for the Immuta system user. The username and password provided are credentials for a system account that can manage the database.
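A sketch of the request described above, with the same URL and authentication assumptions as the earlier examples. The authenticationType, username, and password fields follow this page; the host and database key names are assumptions, so confirm them in the payload description for your integration.

```bash
# Sketch: create a Redshift integration with userPassword authentication.
# Host and database key names are assumptions; confirm them in the
# payload description for your integration.
curl -X POST \
  -H "Authorization: <your-api-key>" \
  -H "Content-Type: application/json" \
  "https://<your-immuta-url>/integrations" \
  -d '{
    "type": "Redshift",
    "autoBootstrap": true,
    "config": {
      "host": "<your-cluster>.redshift.amazonaws.com",
      "database": "immuta",
      "initialDatabase": "REDSHIFT_SAMPLE_DATA",
      "authenticationType": "userPassword",
      "username": "<system-account-username>",
      "password": "<system-account-password>"
    }
  }'
```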
Snowflake example
When you connect Immuta to your Snowflake account, the warehouse you specify is the default pool of compute resources the Immuta system user will use to run queries and perform other Snowflake operations.
This request specifies the userPassword authentication type. The username and password provided are the credentials of a Snowflake account attached to a role with the privileges Immuta requires. These credentials are not stored; they are used by Immuta to configure the integration.
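A sketch of the request, with the same URL and authentication assumptions as the earlier examples. The config fields mirror the GET /integrations/{id} response example later on this page; substitute your own values.

```bash
# Sketch: create a Snowflake integration with userPassword authentication.
# Field names mirror the GET response example below; substitute your own values.
curl -X POST \
  -H "Authorization: <your-api-key>" \
  -H "Content-Type: application/json" \
  "https://<your-immuta-url>/integrations" \
  -d '{
    "type": "Snowflake",
    "autoBootstrap": true,
    "config": {
      "host": "organization.us-east-1.snowflakecomputing.com",
      "warehouse": "SAMPLE_WAREHOUSE",
      "database": "SNOWFLAKE_SAMPLE_DATA",
      "port": 443,
      "authenticationType": "userPassword",
      "username": "<system-account-username>",
      "password": "<system-account-password>",
      "role": "ACCOUNTADMIN"
    }
  }'
```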
Starburst (Trino) example
When you configure the Starburst (Trino) integration, Immuta generates an API key and configuration snippet on the Immuta app settings page that you will use to configure your Starburst cluster.
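A minimal sketch of the request, with the same URL and authentication assumptions as above. It assumes no additional config fields are needed at creation, since the API key and configuration snippet are generated for you; confirm against the payload description for your integration.

```bash
# Sketch: create a Starburst (Trino) integration. Assumes no additional
# config fields are required at creation; confirm in the payload description.
curl -X POST \
  -H "Authorization: <your-api-key>" \
  -H "Content-Type: application/json" \
  "https://<your-immuta-url>/integrations" \
  -d '{
    "type": "Trino"
  }'
```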
The request accepts a JSON or YAML payload with the parameters outlined below.
| Parameter | Description | Required or optional | Default values | Accepted values |
|---|---|---|---|---|
| type (string) | The type of integration to configure. | Required | - | Azure Synapse Analytics, Databricks, Google BigQuery, Native S3, Redshift, Snowflake, Trino |
| autoBootstrap (boolean) | When true, Immuta will automatically configure the integration in your Azure Synapse Analytics, Databricks Unity Catalog, Redshift, or Snowflake environment for you. When false, you must set up your environment manually before configuring the integration with the API. This parameter must be set to false in the Amazon S3 and Google BigQuery configurations. See the specific how-to guide for configuring your integration for details: Azure Synapse Analytics, Databricks Unity Catalog, Redshift, Snowflake. | Required for all integrations except Starburst (Trino) | - | - |
Query parameter

| Parameter | Description | Required or optional |
|---|---|---|
| dryRun (boolean) | When true, the integration configuration will not actually be created, and the response returns the validation tests statuses. See the example request below. | Optional |
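For example, appending dryRun=true runs the validation tests without creating the integration. This is a sketch with the same URL and authentication assumptions as the earlier examples; integration-payload.json stands in for any of the payloads shown above.

```bash
# Sketch: validate a payload without creating the integration.
curl -X POST \
  -H "Authorization: <your-api-key>" \
  -H "Content-Type: application/json" \
  "https://<your-immuta-url>/integrations?dryRun=true" \
  -d @integration-payload.json
```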
Response
The response returns the status of the integration configuration connection. See the response schema reference for details about the response schema.
A successful response includes the validation tests statuses.
{
  "id": "123456789",
  "status": "creating",
  "validationResults": {
    "status": "passed",
    "validationTests": [
      { "name": "Initial Validation: Basic Connection Test", "status": "passed" },
      { "name": "Initial Validation: Default Warehouse Access Test", "status": "passed", "result": [] },
      { "name": "Initial Validation: Validate access to Privileged Role", "status": "passed", "result": [] },
      { "name": "Validate Automatic: Database Does Not Exist", "status": "passed" },
      { "name": "Validate Automatic: Impersonation Role Does Not Exist", "status": "skipped" },
      { "name": "Validate Automatic Bootstrap User Grants", "status": "passed" }
    ]
  }
}
An unsuccessful request returns the status code and an error message. See the HTTP status codes and error messages for a list of statuses, error messages, and troubleshooting guidance.
{
  "statusCode": 409,
  "error": "Conflict",
  "message": "Snowflake integration already exists on host organization.us-east-1.snowflakecomputing.com (id = 123456789)"
}
DELETE /integrations/{id}
Deletes the integration configuration you specify in the request.
Path parameter

| Parameter | Description | Required or optional |
|---|---|---|
| id | The unique identifier of the integration configuration. | Required |
Query parameter
| Parameter | Description | Required or optional |
|---|---|---|
| dryRun (boolean) | When true, the integration configuration will not actually be deleted, and the response returns the validation tests statuses. | Optional |
| forceDisable (boolean) | When true, the integration will be deleted in Immuta. Users must manually remove all Immuta objects in the remote data platform. | Optional |
Body parameters
For Amazon S3 integrations, Databricks Unity Catalog integrations, Google BigQuery integrations, Starburst (Trino) integrations, or integration configurations with autoBootstrap set to false, no payload is required to delete the integration.
For the integrations below, the request accepts a JSON or YAML payload when autoBootstrap is set to true. See the payload description for your integration for parameters and details:

- Azure Synapse Analytics
- Redshift
- Snowflake
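For example, a request like the following validates the deletion of integration 123456789 without removing it. This is a sketch with the same URL and authentication assumptions as the earlier examples; drop dryRun=true to perform the deletion, add forceDisable=true to delete only in Immuta, and include a credential payload when your integration requires one as described above.

```bash
# Sketch: dry-run the deletion of integration 123456789.
curl -X DELETE \
  -H "Authorization: <your-api-key>" \
  "https://<your-immuta-url>/integrations/123456789?dryRun=true"
```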
Response
The response returns the status of the integration configuration that has been deleted. See the response schema reference for details about the response schema. An unsuccessful request returns the status code and an error message. See the HTTP status codes and error messages for a list of statuses, error messages, and troubleshooting guidance.
{
  "id": "123456789",
  "status": "deleting",
  "validationResults": {
    "status": "passed",
    "validationTests": [
      { "name": "Initial Validation: Basic Connection Test", "status": "passed" },
      { "name": "Initial Validation: Default Warehouse Access Test", "status": "passed", "result": [] },
      { "name": "Initial Validation: Validate access to Privileged Role", "status": "passed", "result": [] },
      { "name": "Validate Automatic: Database Does Not Exist", "status": "passed" },
      { "name": "Validate Automatic: Impersonation Role Does Not Exist", "status": "skipped" },
      { "name": "Validate Automatic Bootstrap User Grants", "status": "passed" }
    ]
  }
}
GET /integrations/{id}
Gets the integration configuration you specify in the request.
Path parameter

| Parameter | Description | Required or optional |
|---|---|---|
| id | The unique identifier of the integration configuration. | Required |
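For example, a request like the following fetches a single configuration. This is a sketch with the same URL and authentication assumptions as the earlier examples; substitute your own integration ID.

```bash
# Sketch: fetch the configuration for integration 123456789.
curl -X GET \
  -H "Authorization: <your-api-key>" \
  "https://<your-immuta-url>/integrations/123456789"
```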
Response
The response returns an integration configuration. See the response schema reference for details about the response schema. An unsuccessful request returns the status code and an error message. See the HTTP status codes and error messages for a list of statuses, error messages, and troubleshooting guidance.
{
  "id": "123456789",
  "status": "enabled",
  "validationResults": {
    "status": "passed",
    "validationTests": [
      { "name": "Initial Validation: Basic Connection Test", "status": "passed" },
      { "name": "Initial Validation: Default Warehouse Access Test", "result": [], "status": "passed" },
      { "name": "Initial Validation: Table Grants Role Prefix is Unique", "status": "passed" },
      { "name": "Initial Validation: Validate access to Privileged Role", "result": [], "status": "passed" },
      { "name": "Validate Automatic: Database Does Not Exist", "status": "passed" },
      { "name": "Validate Automatic: Impersonation Role Does Not Exist", "status": "skipped" },
      { "name": "Validate Automatic Bootstrap User Grants", "status": "passed" }
    ]
  },
  "type": "Snowflake",
  "autoBootstrap": true,
  "config": {
    "host": "organization.us-east-1.snowflakecomputing.com",
    "warehouse": "SAMPLE_WAREHOUSE",
    "database": "SNOWFLAKE_SAMPLE_DATA",
    "port": 443,
    "audit": { "enabled": false },
    "workspaces": { "enabled": false },
    "impersonation": { "enabled": false },
    "lineage": { "enabled": false },
    "authenticationType": "userPassword",
    "username": "<REDACTED>",
    "password": "<REDACTED>",
    "role": "ACCOUNTADMIN"
  }
}
PUT /integrations/{id}
Updates the integration configuration you specify in the request.
The request accepts a JSON or YAML payload with the parameters outlined below.
| Parameter | Description | Required or optional | Default values | Accepted values |
|---|---|---|---|---|
| type (string) | The type of integration to configure. | Required | - | Azure Synapse Analytics, Databricks, Google BigQuery, Redshift, Snowflake |
| autoBootstrap (boolean) | When true, Immuta will automatically configure the integration in your Azure Synapse Analytics, Databricks Unity Catalog, Redshift, or Snowflake environment for you. When false, you must set up your environment manually before configuring the integration with the API. This parameter must be set to false in the Google BigQuery configuration. See the specific how-to guide for configuring other integrations: Azure Synapse Analytics, Databricks Unity Catalog, Redshift, Snowflake. | Required | - | - |

Query parameter

| Parameter | Description | Required or optional |
|---|---|---|
| dryRun (boolean) | When true, the integration configuration will not actually be updated, and the response returns the validation tests statuses. | Optional |
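For example, a request along these lines updates a Snowflake integration's configuration. This is a sketch with the same URL and authentication assumptions as the earlier examples; the config fields mirror the GET response example above, and you should supply the full configuration you want the integration to have.

```bash
# Sketch: update Snowflake integration 123456789, switching warehouses.
# Field names mirror the GET response example above; substitute your own values.
curl -X PUT \
  -H "Authorization: <your-api-key>" \
  -H "Content-Type: application/json" \
  "https://<your-immuta-url>/integrations/123456789" \
  -d '{
    "type": "Snowflake",
    "autoBootstrap": true,
    "config": {
      "host": "organization.us-east-1.snowflakecomputing.com",
      "warehouse": "ANOTHER_WAREHOUSE",
      "database": "SNOWFLAKE_SAMPLE_DATA",
      "port": 443,
      "authenticationType": "userPassword",
      "username": "<system-account-username>",
      "password": "<system-account-password>",
      "role": "ACCOUNTADMIN"
    }
  }'
```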
Response
The response returns the status of the integration configuration connection. See the response schema reference for details about the response schema.
A successful response includes the validation tests statuses.
{
  "id": "123456789",
  "status": "editing",
  "validationResults": {
    "status": "passed",
    "validationTests": [
      { "name": "Initial Validation: Basic Connection Test", "status": "passed" },
      { "name": "Initial Validation: Default Warehouse Access Test", "status": "passed", "result": [] },
      { "name": "Initial Validation: Validate access to Privileged Role", "status": "passed", "result": [] },
      { "name": "Validate Automatic: Database Does Not Exist", "status": "passed" },
      { "name": "Validate Automatic: Impersonation Role Does Not Exist", "status": "skipped" },
      { "name": "Validate Automatic Bootstrap User Grants", "status": "passed" }
    ]
  }
}
An unsuccessful request returns the status code and an error message. See the HTTP status codes and error messages for a list of statuses, error messages, and troubleshooting guidance.
{
  "statusCode": 409,
  "error": "Conflict",
  "message": "Unable to edit integration with ID 123456789 in current state editing."
}