Getting Started
The integrations API is a REST API that allows you to integrate your remote data platform with Immuta so that Immuta can manage and enforce access controls on your data.
To configure an integration using the API, you must have the APPLICATION_ADMIN Immuta permission.
There are two methods for making an authenticated request to the integrations API. Select a tab below for instructions.
Generate your API key on the API Keys tab on your profile page and save the API key somewhere secure.
You will pass this API key in the authorization header when you make a request, as illustrated in the example below:
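For example, a request that lists your configured integrations and passes the API key in the authorization header might look like the following sketch; the URL and key shown are placeholders.

```bash
# Replace <your-immuta-url> and <your-api-key> with your own values.
curl -X GET \
  --header 'Content-Type: application/json' \
  --header 'Authorization: <your-api-key>' \
  https://<your-immuta-url>/integrations
```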
Use the POST /integrations endpoint to configure the integration so that Immuta can enforce access controls on tables registered as Immuta data sources. See the section below for your platform for a sample request and details about configuring your integration.
Private preview: The Amazon S3 integration is available to select accounts. Reach out to your Immuta representative for details.
Copy the request example.
Replace the values in the request with your Immuta URL and API key or bearer token.
Change the config values to your own, where
name is the name for the integration that is unique across all Amazon S3 integrations configured in Immuta.
awsAccountId is the ID of your AWS account.
awsRegion is the account's AWS region (such as us-east-1).
awsLocationRole is the AWS IAM role ARN assigned to the base access grants location. This is the role the AWS Access Grants service assumes to vend credentials to the grantee.
awsLocationPath is the base S3 location prefix that Immuta will use for this connection when registering S3 data sources. This path must be unique across all S3 integrations configured in Immuta.
awsAccessKeyId is the AWS access key ID of the AWS account configuring the integration.
awsSecretAccessKey is the AWS secret access key of the AWS account configuring the integration.
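Putting these values together, a request might look like the following sketch. The type value and the payload nesting are assumptions for illustration; treat the request example you copied as the authoritative payload.

```bash
# Sketch only: the "type" value and placeholder values are illustrative.
curl -X POST \
  --header 'Content-Type: application/json' \
  --header 'Authorization: <your-api-key>' \
  --data '{
    "type": "Native S3",
    "config": {
      "name": "s3-integration",
      "awsAccountId": "123456789012",
      "awsRegion": "us-east-1",
      "awsLocationRole": "arn:aws:iam::123456789012:role/immuta-access-grants-role",
      "awsLocationPath": "s3://<your-bucket>/<prefix>",
      "awsAccessKeyId": "<your-access-key-id>",
      "awsSecretAccessKey": "<your-secret-access-key>"
    }
  }' \
  https://<your-immuta-url>/integrations
```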
Replace the values in the request with your Immuta URL and API key or bearer token, and change the config values to your own, where
host is the URL of your Azure Synapse Analytics account.
schema is the name of the Immuta-managed schema where all your secure views will be created and stored.
database is the name of an existing database where the Immuta system user will store all Immuta-generated schemas and views.
username and password are the username and password of the system account that can act on Azure Synapse Analytics objects and configure the integration.
The example sets autoBootstrap to true, which grants Immuta one-time access to credentials to configure the resources in your Azure Synapse Analytics environment for you. If you set autoBootstrap to false, you must manually run the bootstrap script in your Azure Synapse Analytics environment before making the request.
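For illustration, a request with autoBootstrap set to true might look like the sketch below. The type value and payload nesting are assumptions, so rely on the request example in the Configure an Azure Synapse Analytics integration guide for the exact shape.

```bash
# Sketch only: the "type" value and placeholder values are illustrative.
curl -X POST \
  --header 'Content-Type: application/json' \
  --header 'Authorization: <your-api-key>' \
  --data '{
    "type": "Azure Synapse Analytics",
    "config": {
      "host": "<your-workspace>.sql.azuresynapse.net",
      "schema": "IMMUTA",
      "database": "<existing-database>",
      "username": "<system-account-username>",
      "password": "<system-account-password>",
      "autoBootstrap": true
    }
  }' \
  https://<your-immuta-url>/integrations
```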
For more configuration examples, see the Configure an Azure Synapse Analytics integration guide. For information about the configuration payload, see the Integration payload reference guide.
Replace the values in the request with your Immuta URL and API key or bearer token, and change the config values to your own, where
workspaceUrl is your Databricks workspace URL.
httpPath is the HTTP path of your Databricks cluster or SQL warehouse.
token is the Databricks personal access token. This is the access token for the Immuta system account user.
catalog is the name of the Databricks catalog Immuta will create to store internal entitlements and other user data specific to Immuta. This catalog will be readable only by the Immuta service principal, and access to it should not be granted to other users. The catalog name may only contain letters, numbers, and underscores and cannot start with a number.
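As an illustration only, these values might be assembled into a request like the following sketch; the type value and payload nesting are assumptions, so use the request example in the Configure a Databricks Unity Catalog integration guide as the source of truth.

```bash
# Sketch only: the "type" value and placeholder values are illustrative.
curl -X POST \
  --header 'Content-Type: application/json' \
  --header 'Authorization: <your-api-key>' \
  --data '{
    "type": "Databricks",
    "config": {
      "workspaceUrl": "<your-workspace>.cloud.databricks.com",
      "httpPath": "/sql/1.0/warehouses/<warehouse-id>",
      "token": "<immuta-system-account-token>",
      "catalog": "immuta_system"
    }
  }' \
  https://<your-immuta-url>/integrations
```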
For more configuration examples, see the Configure a Databricks Unity Catalog integration guide. For information about the configuration payload, see the Integration payload reference guide.
Private preview: This integration is available to select accounts. Reach out to your Immuta representative for details.
Create a Google Cloud service account and role by either using the Google Cloud console or the provided Immuta script.
Copy the request example. The example uses JSON format, but the request also accepts YAML.
Replace the Immuta URL and API key with your own.
Change the config values to your own, where
role is the Google Cloud role used to connect to Google BigQuery.
datasetSuffix is the suffix appended to the name of each dataset created to store secure views. This string must start with an underscore.
dataset is the name of the BigQuery dataset to provision inside of the project for Immuta metadata storage.
location is the dataset's location, which can be any valid GCP location (such as us-east1).
credential is the Google BigQuery service account JSON keyfile credential content. See the Google documentation for guidance on generating and downloading this keyfile.
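A request built from these values might look like the sketch below; the type value and payload nesting are assumptions, so treat the request example you copied as the authoritative payload.

```bash
# Sketch only: the "type" value and placeholder values are illustrative.
# The credential value is the full contents of the service account JSON keyfile,
# escaped as a string.
curl -X POST \
  --header 'Content-Type: application/json' \
  --header 'Authorization: <your-api-key>' \
  --data '{
    "type": "Google BigQuery",
    "config": {
      "role": "<your-google-cloud-role>",
      "datasetSuffix": "_secure",
      "dataset": "immuta_metadata",
      "location": "us-east1",
      "credential": "<service-account-keyfile-json>"
    }
  }' \
  https://<your-immuta-url>/integrations
```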
For more configuration examples, see the Configure a Google BigQuery integration guide. For information about the configuration payload, see the Integration payload reference guide.
Replace the values in the request with your Immuta URL and API key or bearer token, and change the config values to your own, where
host is the URL of your Redshift account.
database is the name of a new empty database that the Immuta system user will manage and store metadata in.
initialDatabase is the name of an existing database in Redshift that Immuta initially connects to in order to create the Immuta-managed database.
authenticationType is the type of authentication to use when connecting to Redshift.
username and password are the credentials for the system account that can act on Redshift objects and configure the integration.
The example sets autoBootstrap to true, which grants Immuta one-time access to credentials to configure the resources in your Redshift environment for you. If you set autoBootstrap to false, you must manually run the bootstrap script in your Redshift environment before making the request.
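For illustration, a request with autoBootstrap set to true might look like the following sketch; the type value, the authenticationType value, and the payload nesting are assumptions, so rely on the request example in the Configure a Redshift integration guide for the exact shape.

```bash
# Sketch only: the "type" and "authenticationType" values and placeholders are illustrative.
curl -X POST \
  --header 'Content-Type: application/json' \
  --header 'Authorization: <your-api-key>' \
  --data '{
    "type": "Redshift",
    "config": {
      "host": "<your-cluster>.<region>.redshift.amazonaws.com",
      "database": "immuta",
      "initialDatabase": "<existing-database>",
      "authenticationType": "userPassword",
      "username": "<system-account-username>",
      "password": "<system-account-password>",
      "autoBootstrap": true
    }
  }' \
  https://<your-immuta-url>/integrations
```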
For more configuration examples, see the Configure a Redshift integration guide. For information about the configuration payload, see the Integration payload reference guide.
Replace the values in the request with your Immuta URL and API key or bearer token, and change the config values to your own, where
host is the URL of your Snowflake account.
warehouse is the default pool of Snowflake compute resources the Immuta system user will use to run queries and perform other Snowflake operations.
database is the name of a new empty database that the Immuta system user will manage and store metadata in.
authenticationType is the type of authentication to use when connecting to Snowflake.
username and password are the credentials for the system account that can act on Snowflake objects and configure the integration.
The example sets autoBootstrap to true, which grants Immuta one-time access to credentials to configure the resources in your Snowflake environment for you. If you set autoBootstrap to false, you must manually run the bootstrap script in your Snowflake environment before making the request.
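As an illustration, these values might be assembled into a request like the sketch below; the type value, the authenticationType value, and the payload nesting are assumptions, so use the request example in the Configure a Snowflake integration guide as the source of truth.

```bash
# Sketch only: the "type" and "authenticationType" values and placeholders are illustrative.
curl -X POST \
  --header 'Content-Type: application/json' \
  --header 'Authorization: <your-api-key>' \
  --data '{
    "type": "Snowflake",
    "config": {
      "host": "<your-organization>.snowflakecomputing.com",
      "warehouse": "<default-warehouse>",
      "database": "IMMUTA",
      "authenticationType": "userPassword",
      "username": "<system-account-username>",
      "password": "<system-account-password>",
      "autoBootstrap": true
    }
  }' \
  https://<your-immuta-url>/integrations
```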
For more configuration examples, see the Configure a Snowflake integration guide. For information about the configuration payload, see the Integration payload reference guide.
Replace the values in the request with your Immuta URL and API key or bearer token.
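The Starburst (Trino) request body is minimal compared to the other integrations; the sketch below assumes the type value is Trino, so confirm the payload against the request example in the configuration guide before sending it.

```bash
# Sketch only: the "type" value is an assumption.
curl -X POST \
  --header 'Content-Type: application/json' \
  --header 'Authorization: <your-api-key>' \
  --data '{ "type": "Trino" }' \
  https://<your-immuta-url>/integrations
```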
Navigate to the Immuta App Settings page and click the Integrations tab.
Click your enabled Starburst (Trino) integration and copy the configuration snippet displayed.
Map usernames and create policies before you register your metadata to ensure that policies are enforced on tables and views immediately.
Map usernames to Immuta to ensure Immuta properly enforces policies and audits user queries.
Build global policies in Immuta to enforce access controls.
Register your metadata using the API or Immuta UI:
API how-to guides
UI how-to guide: Create a data source
See the following how-to guides for configuration examples and steps for creating, managing, or disabling your integration:
See the following reference guides for information about the integrations API endpoints, payloads, and responses: