The integrations API returns HTTP status codes, error codes, and messages in JSON format.
The table below provides the HTTP code, the error code, an example message, and troubleshooting guidance for each error.

Code | Text | Description |
---|---|---|
404 | Not found | Example: "No Integration found with ID 5." The request failed because the integration with the given ID was not found. Use the GET /integrations endpoint to list all integration configurations to find the correct ID. |

The table below provides the HTTP code, the error code, an example message, and troubleshooting guidance for each error.

Code | Text | Description |
---|---|---|
400 | Bad request | Example: "Credentials are not required for disable unless the integration was configured automatically. If you need to update your integration credentials, use PUT to update the integration before disabling." The request failed because the payload provided authentication credentials for a manually bootstrapped integration. Remove the authentication credentials from the payload. |
400 | Bad request | Example: "Integrations that are automatically configured require privileged credentials to disable. Please provide them in your payload." The request failed because the integration was created with autoBootstrap set to true, and privileged credentials were not provided in the request payload to delete the integration. Provide the credentials you used to configure your Azure Synapse Analytics, Redshift, or Snowflake integration. |
400 | Bad request | Example: "Credentials are not required to disable a Databricks Unity Catalog integration. If you need to update your integration credentials, use PUT to update the integration before disabling." The request failed because the payload provided authentication credentials for a Databricks Unity Catalog integration. Remove the authentication credentials from the payload. |
404 | Not found | Example: "No integration found with ID 5." The request failed because the integration with the given ID was not found. Use the GET /integrations endpoint to list all integration configurations to find the correct ID. |
409 | Conflict | Example: "Unable to edit integration with ID 10 in current state editing." The request failed because the integration is currently being modified or deleted. Use the GET /integrations/{id}/status endpoint to determine when the integration has finished updating. Then, delete the integration. |
422 | Unprocessable entity | Example: "Unable to delete integration with ID 7, validation failed." The request failed because a validation test failed. See the validation results object documentation for a list of validation test messages and errors to address the issue. |

The table below provides the HTTP code, the error code, an example message, and troubleshooting guidance for each error.

Code | Text | Description |
---|---|---|
400 | Bad request | Example: "Use PUT /integrations/1 endpoint to update connection information for Snowflake integration on host test-account.snowflakecomputing.com (id = 1) that previously failed to create." The request failed because the integration previously failed to create. The message you receive includes the ID and host of the integration that failed. Use the PUT /integrations/{id} endpoint to update the connection information for that integration to create it. |
409 | Conflict | Example: "Snowflake integration already exists on test-account.snowflakecomputing.com (id = 1)." The request failed because an integration already exists on the host. Use the integration ID provided in the error message to delete or modify the existing integration. Ensure that the name and config parameters in the new configuration do not conflict with your existing integration. |
422 | Unprocessable entity | Example: "Validation of prerequisite setup failed. Unable to create integration." The request failed because a validation test failed. See the validation results object documentation for a list of validation test messages and errors to address the issue. |
422 | Unprocessable entity | Example: "Processing Error: Error trying to get the current metastore info." The request failed because Immuta could not find the Databricks metastore information. |

The table below provides the HTTP code, the error code, an example message, and troubleshooting guidance for each error.

Code | Text | Description |
---|---|---|
400 | Bad request | Example: "Unable to edit integration due to changes of non-editable attribute(s)." The request failed because an attribute was changed that cannot be edited. The error message includes a list of the attributes that the request attempted to change. |
404 | Not found | Example: "No integration found with ID 5." The request failed because the integration with the given ID was not found. Use the GET /integrations endpoint to list all integration configurations to find the correct ID. |
409 | Conflict | Example: "Unable to edit integration with ID 10 in current state editing." The request failed because the integration is currently being modified or deleted. Use the GET /integrations/{id}/status endpoint to determine when the integration has finished updating. Then, modify the integration. If the integration has been deleted, use the POST /integrations endpoint to re-create the integration. |
422 | Unprocessable entity | Example: "Unable to edit integration with ID 7, validation failed." The request failed because a validation test failed. See the validation results object documentation for a list of validation test messages and errors to address the issue. |
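As a sketch of how a client might act on these responses, the helper below maps HTTP status codes to the troubleshooting guidance above. The exact shape of the error body (`statusCode` and `message` fields) is an assumption for illustration, not a documented schema:

```python
import json

# Hypothetical error body; field names are an assumption for illustration.
SAMPLE_ERROR = '{"statusCode": 409, "message": "Unable to edit integration with ID 10 in current state editing."}'

# Troubleshooting guidance keyed by HTTP status code, summarizing the tables above.
GUIDANCE = {
    400: "Check the request payload; an attribute or credential does not match the integration's configuration.",
    404: "Use GET /integrations to list all configurations and find the correct ID.",
    409: "The integration is being modified or already exists; poll GET /integrations/{id}/status before retrying.",
    422: "A validation test failed; inspect the validationResults object for the failing test.",
}

def troubleshoot(error_body: str) -> str:
    """Return the error message plus guidance for an integrations API error response."""
    err = json.loads(error_body)
    advice = GUIDANCE.get(err["statusCode"], "Unrecognized status code.")
    return f"{err['message']} -> {advice}"

print(troubleshoot(SAMPLE_ERROR))
```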
The table below outlines the response schema for all integration configurations.
The status property in the response schema shows the status of the integration. The table below provides definitions for each status and the state of the integration configuration.
The validationResults object provides details about the status of each test Immuta runs to validate the integration configuration.
The tables below provide the errors and messages for validation tests that fail when configuring or updating each integration.

Amazon S3

Test name | Description | Message |
---|---|---|
There is no existing integration matching this configuration | Verifies that the integration configuration does not match an existing one. | - |
The provided integration name is unique across Immuta S3 integrations | Verifies that the name of the integration does not match an existing S3 integration name. | "The Immuta service account does not end with expected value." |
The provided access grants location role is a valid ARN format | Verifies that the access grants location role is in the correct format. | "The specified access grants location role is not a valid ARN format." |
The provided AWS credentials allow fetching the caller's identity via the AWS STS API | Verifies that the integration can use the AWS STS API to get the caller's identity using the provided credentials. | "Found user ID [], ARN [] using STS." |
An AWS Access Grants instance is configured in the provided AWS account and region | Verifies that an Access Grants instance has been created in the specified AWS account and region. | "AccessDenied: Unable to retrieve the access grants instance for account, region: AWS responded with code [403], name [AccessDenied] and message [Access Denied] for request ID []." |
The provided S3 path exists and Immuta can list prefixes | Verifies that Immuta can access and list prefixes for the provided S3 path. | "Immuta does not have access to the requested path [s3://]. Without access, Immuta will be unable to assist with S3 path discovery during data source creation." |
An AWS Access Grants location does not yet exist for the provided path | Verifies that an Access Grants location has not already been registered for the specified S3 path. | "AccessDenied: Unable to list S3 access grants locations for account [], location scope []: AWS responded with code [403], name [AccessDenied] and message [Access Denied] for request ID []." |

Azure Synapse Analytics

Test name | Description | Message |
---|---|---|
Initial validation: connect | Verifies that Immuta can connect to Azure Synapse Analytics. | "Unable to connect to host." |
Initial validation: delimiters test | Verifies that the delimiters are unique. | "Hash delimiter and array delimiter must not have the same value." |
Validate automatic: impersonation role does not exist | Verifies that the user impersonation role specified in the request payload does not already exist. | "Impersonation role already exists. If this role can be safely dropped please do so and try again. Alternatively, specify a different role name." |
Validate Immuta system user can manage database | Verifies that the specified user can manage the database. | "User does not have permission to manage database." |

Databricks Unity Catalog

Test name | Description | Message |
---|---|---|
Basic connection test | Verifies that Immuta can connect to Databricks Unity Catalog. | "Could not connect to host, please confirm you are using valid connection parameters." |
Manual catalog setup | Verifies that the catalog and tables used by Immuta are present and have the correct permissions. This test is run when autoBootstrap is false in the Databricks Unity Catalog integration configuration. | "Encountered an error looking up catalog metadata for catalog." |
Metastore validation | Verifies that the Unity Catalog metastore is assigned to the specified workspace. | "No metastore is assigned to workspace." |

Google BigQuery

Test name | Description | Message |
---|---|---|
Basic validation: connection can be made to BigQuery | Verifies that Immuta can connect to Google BigQuery. | "Could not connect to the remote BigQuery connection." |
Basic validation: Immuta service account postfix | Verifies that the service account ends with the expected value of @<projectId>.iam.gserviceaccount.com. | "The Immuta service account does not end with expected value." |
Basic validation: non-matching service account in key file | Verifies that the service account matches the one provided in the keyfile. | "The service account does not match the service account in the provided key file." |
Basic validation: verify service account not being used for data source connection credentials | Verifies that credentials that have been used to create Google BigQuery data sources are not the same credentials used to configure the Google BigQuery integration. | "Native BigQuery doesn't support the reuse of service accounts for integrations that are currently being used for data sources." |
Validate manual: [dataset - create] | Verifies that the custom role assigned to the service account has the permissions to create the dataset. | Message includes a permission warning. |
Validate manual: [dataset - delete] | Verifies that the custom role assigned to the service account has the permissions to delete the Immuta-managed dataset. | Message includes a permission warning. |
Initialize validation: [dataset - exists] | Verifies that this dataset does not already exist. | "An existing Immuta instance exists. Delete this dataset to continue." |
Validate manual: [table - create] | Verifies that the custom role assigned to the service account has the permissions to create Immuta-managed tables. | Message includes a permission warning. |
Validate manual: [table - delete] | Verifies that the custom role assigned to the service account has the permissions to delete Immuta-managed tables. | Message includes a permission warning. |
Validate manual: [table - get] | Verifies that the custom role assigned to the service account has the permissions to get Immuta-managed tables. | Message includes a permission warning. |
Validate manual: [table - insert] | Verifies that the custom role assigned to the service account has the permissions to insert rows in Immuta-managed tables. | Message includes a permission warning. |
Validate manual: [table - update] | Verifies that the custom role assigned to the service account has the permissions to update Immuta-managed tables. | Message includes a permission warning. |

Redshift

Test name | Description | Message |
---|---|---|
Initial validation: basic connection test | Verifies that Immuta can connect to Redshift. | "Unable to connect to host." |
Validate automatic: database does not exist | Verifies that the database specified in the request payload does not already exist. | "The database already exists. If this database can be safely dropped, please do so and try again. Alternatively, specify a different database name." |
Validate automatic: impersonation role does not exist | Verifies that the user impersonation role specified in the request payload does not already exist. | "Impersonation role already exists. If this role can be safely dropped please do so and try again. Alternatively, specify a different role name." |

Snowflake

Test name | Description | Message |
---|---|---|
Initial validation: basic connection test | Verifies that Immuta can connect to the Snowflake database. | "Unable to connect to host." |
Initial validation: default warehouse access test | Verifies that the default warehouse exists and that the Immuta system account user has permissions to act on the default warehouse specified. | "Unable to access default warehouse. If this was a manual installation, ensure that the user has been granted usage on the specified warehouse." |
Initial validation: table grants role prefix is unique | Verifies that the prefix for Snowflake table grants does not already exist. If this prefix already exists, navigate to the Integration Settings section on the Immuta app settings page to disable Snowflake table grants, re-enable it, and then update the role prefix. | "The Snowflake table grants role prefix IMMUTA is used by another Immuta instance connected to the same Snowflake host. Please update the table grants role prefix for this Immuta instance and try again." |
Initial validation: validate access to privileged role | Verifies that the privileged role exists and that it has been assigned to the Immuta system account user. | "User does not have access to the privileged role." |
Validate automatic bootstrap user grants | Verifies the credentials of the user executing the Immuta bootstrap script in Snowflake. | - |
Validate automatic: database does not exist | Verifies that the database specified in the request payload does not already exist. | "The database already exists. If this database can be safely dropped, please do so and try again. Alternatively, specify a different database name." |
Validate automatic: impersonation role does not exist | Verifies that the user impersonation role specified in the request payload does not already exist. | "Impersonation role already exists. If this role can be safely dropped please do so and try again. Alternatively, specify a different role name." |
The reference guides in this section define the integrations API endpoints, request and body parameters, and response schema.
The integrations API endpoints allow you to create, update, get, and delete integrations and generate scripts to run in your data platform to manually set up or remove Immuta-managed resources.
Consult this guide for endpoint descriptions and examples.
The integrations API request payloads accept JSON or YAML format, and each integration has parameters and objects specific to the data platform.
Consult this guide for parameter value types, requirements, definitions, and accepted values.
The response returns the status of the integration configuration in JSON format.
Consult this guide for response schema definitions and integration state definitions.
The integrations API uses standard HTTP status codes. Status codes specific to the integrations API are described in this reference guide.
Consult this guide for a list of status codes, integration states, common error messages, and troubleshooting guidance.
The response schema properties are outlined in the table below.

Property | Description |
---|---|
id number | The unique identifier of the integration. |
status string | The status of the integration. Statuses include createError, creating, deleteError, deleting, editError, editing, enabled, migrateError, and migrating. See the statuses table below for descriptions. |
validationResults object | The results of the validation tests. See the object description for details. |
config object | The integration configuration. See the integration configuration payload for Amazon S3, Azure Synapse Analytics, Databricks Unity Catalog, Google BigQuery, Redshift, or Snowflake for details. |

The validationResults object attributes are outlined in the table below.

Attribute | Description | Possible values |
---|---|---|
status string | Whether the connection validation passed. | passed, failed, warning, skipped |
validationTests array[] | This array includes the validation tests run on the integration connection. | - |
validationTests.name string | The name of the validation test. | See the section corresponding to your integration type for a list of test names and messages. |
validationTests.status string | The status of the validation test. | passed, failed, warning, skipped |
validationTests.message string | When a test fails, the message provides context and guidance for addressing the failure. | See the section corresponding to your integration type for a list of test names and messages. |
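Using the schema above, a client can pull the failing tests out of a response's validationResults object. The sample object below is illustrative only, not a real response:

```python
# Illustrative validationResults object following the schema above.
validation_results = {
    "status": "failed",
    "validationTests": [
        {"name": "Initial validation: basic connection test", "status": "passed", "message": None},
        {"name": "Initial validation: default warehouse access test", "status": "failed",
         "message": "Unable to access default warehouse. If this was a manual installation, "
                    "ensure that the user has been granted usage on the specified warehouse."},
    ],
}

def failing_tests(results: dict) -> list[tuple[str, str]]:
    """Return (name, message) for each validation test that did not pass.

    Tests with status 'warning' or 'skipped' are surfaced too, since their
    messages may still need attention; filter further if you only want failures.
    """
    return [
        (t["name"], t.get("message") or "")
        for t in results.get("validationTests", [])
        if t["status"] != "passed"
    ]

for name, message in failing_tests(validation_results):
    print(f"{name}: {message}")
```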
The table below provides definitions for each status and the state of configured data platform integrations. The status of the integration appears on the integrations tab of the Immuta application settings page and in the .
If any errors occur with the integration configuration, a banner will appear in the Immuta UI with guidance for remediating the error.
Status | Description | State |
---|---|---|
createError | Error occurred during creation of the integration. |
creating | Integration is in the process of being created and set up. |
deleted | Integration is deleted. | Not in use |
deleteError | Error occurred while deleting the integration. The integration has been rolled back to the previous state. |
deleting | Integration is in the process of being disabled or deleted. |
disabled | Integration was force disabled and no cleanup was performed on the native platform. | Not in use |
editError | Error occurred while editing the integration. The integration has been rolled back to the previous state. |
editing | The integration is in the process of being edited. |
enabled | The integration is enabled and active. |
migrateError | Error occurred while performing a migration of the integration. The integration has been rolled back to the previous state. |
migrating | Migration is being performed on the integration. An example of a migration is a stored procedure update. |
recurringValidationError | Validation has failed during the periodic check and the integration may be misconfigured. |
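One way to consume these statuses when polling the GET /integrations/{id}/status endpoint is to group them by the action a client should take next. The grouping below is a reading of the table above, not an official client behavior:

```python
# Integration statuses from the table above, grouped by what a client should do next.
TRANSITIONAL = {"creating", "deleting", "editing", "migrating"}
ERROR_STATES = {"createError", "deleteError", "editError", "migrateError",
                "recurringValidationError"}
TERMINAL = {"enabled", "deleted", "disabled"}

def next_action(status: str) -> str:
    """Suggest how to react to a status reported by GET /integrations/{id}/status."""
    if status in TRANSITIONAL:
        return "wait"          # poll again; the operation is still in progress
    if status in ERROR_STATES:
        return "investigate"   # the integration rolled back or may be misconfigured
    if status in TERMINAL:
        return "done"          # no further action required
    return "unknown"

print(next_action("editing"))  # prints "wait"
```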
The integrations resource allows you to create, configure, and manage your integration. How Immuta manages and administers policies in your data platform varies by integration.
To configure or manage an integration, users must have the APPLICATION_ADMIN Immuta permission.
Method | Endpoint | Description |
---|---|---|
Gets all integration configurations.
The response returns the configuration for all integrations. See the response schema reference for details about the response schema. An unsuccessful request returns the status code and an error message. See the HTTP status codes and error messages for a list of statuses, error messages, and troubleshooting guidance.
Creates an integration configuration that allows Immuta to manage access policies on data registered in Immuta.
When you connect Immuta to your AWS account, the awsLocationPath is the base S3 location prefix that Immuta will use for this connection when registering S3 data sources.
This request configures the integration using the AWS access key authentication method.
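A minimal sketch of such a payload follows. Only awsLocationPath, autoBootstrap, and the access key authentication method appear in the surrounding text; the other field names and values are assumptions for illustration, so check the S3 configuration payload reference for the authoritative parameters:

```python
import json

# Sketch of a POST /integrations body for an S3 integration using AWS access key
# authentication. Field names other than awsLocationPath are assumptions.
payload = {
    "type": "S3",                      # hypothetical discriminator value
    "autoBootstrap": False,
    "config": {
        "awsLocationPath": "s3://analytics-bucket/curated",  # base prefix for S3 data sources
        "authenticationType": "accessKey",                   # assumed name for this auth method
        "awsAccessKeyId": "AKIA-EXAMPLE",                    # placeholder credential
        "awsSecretAccessKey": "EXAMPLE-SECRET",              # placeholder credential
    },
}

print(json.dumps(payload, indent=2))
```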
When you connect Immuta to your Azure Synapse Analytics account, the schema you specify is where all the policy-enforced views will be created and managed by Immuta.
This request creates a Databricks Unity Catalog integration configuration that allows Immuta to administer Unity Catalog policies on data registered in Immuta.
When you connect Immuta to your Google BigQuery account, the dataset you specify is where all the policy-enforced views will be created and managed by Immuta.
When you connect Immuta to your Redshift account, the Immuta system user will use the database you specify to manage and store metadata. The initial database (REDSHIFT_SAMPLE_DATA, in the request below) is an existing Redshift database that Immuta connects to in order to create the Immuta-managed database (immuta, in the request below).
This request specifies userPassword as the authentication type for the Immuta system user. The username and password provided are credentials for a system account that can manage the database.
When you connect Immuta to your Snowflake account, the warehouse you specify is the default pool of compute resources the Immuta system user will use to run queries and perform other Snowflake operations.
This request specifies the userPassword authentication type. The username and password provided are credentials of a Snowflake account attached to a role with these privileges. These credentials are not stored; they are used by Immuta to configure the integration.
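A minimal sketch of a Snowflake configuration payload follows. The userPassword authentication type and the roles of the warehouse and credentials come from the text above; the exact config field names are assumptions, so consult the Snowflake configuration payload reference:

```python
import json

# Sketch of a POST /integrations body for Snowflake with userPassword
# authentication. Config field names are assumptions for illustration.
payload = {
    "type": "Snowflake",
    "autoBootstrap": True,
    "config": {
        "host": "test-account.snowflakecomputing.com",
        "warehouse": "IMMUTA_WH",        # default compute pool for the Immuta system user
        "database": "IMMUTA",            # Immuta-managed database
        "authenticationType": "userPassword",
        "username": "SYSTEM_ACCOUNT",    # used to configure the integration, not stored
        "password": "<secret>",          # placeholder credential
    },
}

print(json.dumps(payload, indent=2))
```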
When you configure the Starburst (Trino) integration, Immuta generates an API key and configuration snippet on the Immuta app settings page that you will use to configure your Starburst cluster.
The request accepts a JSON or YAML payload with the parameters outlined below.
The response returns the status of the integration configuration connection. See the response schema reference for details about the response schema.
A successful response includes the validation tests statuses.
An unsuccessful request returns the status code and an error message. See the HTTP status codes and error messages for a list of statuses, error messages, and troubleshooting guidance.
Deletes the integration configuration you specify in the request.
For Amazon S3 integrations, Databricks Unity Catalog integrations, Google BigQuery integrations, Starburst (Trino) integrations, or integration configurations with autoBootstrap set to false, no payload is required to delete the integration.
For the integrations below, the request accepts a JSON or YAML payload when autoBootstrap is set to true. See the payload description for your integration for parameters and details:
The response returns the status of the integration configuration that has been deleted. See the response schema reference for details about the response schema. An unsuccessful request returns the status code and an error message. See the HTTP status codes and error messages for a list of statuses, error messages, and troubleshooting guidance.
Gets the integration configuration you specify in the request.
The response returns an integration configuration. See the response schema reference for details about the response schema. An unsuccessful request returns the status code and an error message. See the HTTP status codes and error messages for a list of statuses, error messages, and troubleshooting guidance.
Updates an existing integration configuration.
This request changes the name of the integration.
This request enables user impersonation for the Azure Synapse Analytics integration.
This request updates the access token.
This request updates the private key for the Google BigQuery integration.
This request enables user impersonation for the Redshift integration.
This request enables auditing queries run in Snowflake.
The request accepts a JSON or YAML payload with the parameters outlined below.
The response returns the status of the integration configuration connection. See the response schema reference for details about the response schema.
A successful response includes the validation tests statuses.
An unsuccessful request returns the status code and an error message. See the HTTP status codes and error messages for a list of statuses, error messages, and troubleshooting guidance.
Regenerates an Immuta API key for the configured integration.
This request regenerates an Immuta API key for the configured Starburst (Trino) integration. Once you make this request, your old Immuta API key will be deleted and will no longer be valid. See the Configure a Starburst (Trino) integration page for instructions on updating your Starburst (Trino) integration to use the new API key.
The response returns the new Immuta API key. An unsuccessful request returns the status code and an error message. See the HTTP status codes and error messages page for a list of statuses, error messages, and troubleshooting guidance.
Gets the status of the integration specified in the request.
The response returns the status of the specified integration. An unsuccessful request returns the HTTP status code and an error message. See the HTTP status codes and error messages for a list of statuses, error messages, and troubleshooting guidance.
Creates a script to remove Immuta-managed resources from your platform. This endpoint is for Azure Synapse Analytics, Redshift, and Snowflake integrations that were not successfully created and, therefore, do not have an integration ID.
For Azure Synapse Analytics integrations, you must also make a request to the /integrations/scripts/post-cleanup endpoint to create another script that will finish removing Immuta-managed resources from the platform.
The request accepts a JSON or YAML payload with the parameters outlined below.
The response returns the script that you will run in your Azure Synapse Analytics, Redshift, or Snowflake environment.
Once you have run the script:

- use the DELETE /integrations/{id} endpoint to delete your Redshift or Snowflake integration in Immuta.
- use the /integrations/scripts/post-cleanup endpoint to create another script that will finish removing Immuta-managed resources from your Azure Synapse Analytics platform.
Creates a script for you to run manually to set up objects and resources for Immuta to manage and enforce access controls on your data. This endpoint is available for Azure Synapse Analytics, Databricks Unity Catalog, Redshift, and Snowflake integrations.
The request accepts a JSON or YAML payload with the parameters outlined below.
The response returns the script that you will run in your Azure Synapse Analytics, Databricks Unity Catalog, Redshift, or Snowflake environment.
Creates a script to remove Immuta-managed resources from your platform. This endpoint is for Azure Synapse Analytics, Redshift, and Snowflake integrations that were successfully created.
The response returns the script that you will run in your Azure Synapse Analytics, Redshift, or Snowflake environment.
Once you have run the script, use the DELETE /integrations/{id} endpoint to delete your integration in Immuta.
Creates a script for you to run manually to edit objects and resources managed by Immuta in your platform. This endpoint is available for Azure Synapse Analytics, Databricks Unity Catalog, Redshift, and Snowflake integrations.
The request accepts a JSON or YAML payload with the parameters outlined below.
The response returns the script that you will run in your Azure Synapse Analytics, Databricks Unity Catalog, Redshift, or Snowflake environment. Once you have run the script, use the PUT /integrations/{id} endpoint to finish editing your integration.
Creates the first script for you to run manually to set up objects and resources for Immuta to manage and enforce access controls on your data in Azure Synapse Analytics or Redshift integrations.
The request accepts a JSON or YAML payload with the parameters outlined below.
The response returns the script that you will run in your Azure Synapse Analytics or Redshift environment.
Once you have run this script, use the /integrations/scripts/create endpoint to generate a script to finish creating the Immuta-managed resources in your platform.
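The two-step setup can be sketched as a small driver that requests both scripts in order. The post callable stands in for an HTTP client supplied by the caller, and the create-first path name is hypothetical, since this chunk does not give the first endpoint's exact path:

```python
# Sketch of the two-step manual setup flow for Azure Synapse Analytics and
# Redshift. `post` is a stand-in for an HTTP client call; the first path name
# is hypothetical.
def generate_setup_scripts(post, payload: dict) -> list[str]:
    """Return the setup scripts in order: the first script, then the finishing one."""
    first = post("/integrations/scripts/create-first", payload)   # hypothetical path
    # Run `first` in your platform before requesting the second script.
    second = post("/integrations/scripts/create", payload)
    return [first, second]

# Stubbed client used to demonstrate the call ordering.
calls = []
def fake_post(path, payload):
    calls.append(path)
    return f"-- script from {path}"

scripts = generate_setup_scripts(fake_post, {"type": "Redshift"})
print(calls)
```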
Creates a second script to remove the final Immuta-managed resources from your Azure Synapse Analytics platform. This endpoint is for Azure Synapse Analytics integrations that were not successfully created and, therefore, do not have an integration ID.
Before making a request like the one below, you must make a request to the /integrations/scripts/cleanup endpoint to create the first script that will remove the initial Immuta-managed resources from the platform.
The request accepts a JSON or YAML payload with the parameters outlined below.
The response returns the script that you will run in your Azure Synapse Analytics environment.
Once you have run the script, use the DELETE /integrations/{id} endpoint to delete your integration in Immuta by following the Delete Azure Synapse Analytics integration instructions.
See the following how-to guides for configuration examples and steps for creating, managing, or deleting your integration:
The parameters for configuring an integration in Immuta are outlined in the table below.
Parameter | Description | Required or optional | Default values | Accepted values |
---|---|---|---|---|
The config object configures the S3 integration. The table below outlines its child parameters.
Parameter | Description | Required or optional | Default values | Accepted values |
---|---|---|---|---|
The table below outlines the parameters for creating an S3 data source.
The table below outlines the response schema for successful requests.
The config object configures the Azure Synapse Analytics integration. The table below outlines its child parameters.
The impersonation object enables and defines roles for user impersonation for Azure Synapse Analytics. The table below outlines its child parameters.
The credentials you used when configuring your integration are required in the payload when autoBootstrap was set to true when setting up your integration. For integration configurations with autoBootstrap set to false, no payload is required when deleting the integration.
The metadataDelimiters object specifies the delimiters that Immuta uses to store profile data in Azure Synapse Analytics. The table below outlines its child parameters.
The config object configures the Databricks Unity Catalog integration. The table below outlines its child parameters.
The additionalWorkspaceConnections array allows you to configure additional workspace connections for your Databricks Unity Catalog integration. The table below outlines its child attributes.
The audit object enables Databricks Unity Catalog query audit. The table below outlines its child parameter.
The groupPattern object excludes the listed group from having data policies applied in the Databricks Unity Catalog integration. This account-level group should be used for privileged users and service accounts that require an unmasked view of data. The table below outlines its child parameters.
The proxyOptions object represents your proxy server configuration in Databricks Unity Catalog. The table below outlines the object's child attributes.
The config object configures the Google BigQuery integration. The table below outlines its child parameters.
The config object configures the Redshift integration. The table below outlines its child parameters.
The authentication type and credentials you used when configuring your integration are required in the payload when autoBootstrap was set to true when setting up your integration. For integration configurations with autoBootstrap set to false, no payload is required when deleting the integration.
The impersonation object enables and defines roles for user impersonation for Redshift. The table below outlines its child parameters.
The okta object represents your Okta configuration. This object is required if you set okta as your authentication type in the Redshift integration configuration. The table below outlines its child parameters.
The config object configures the Snowflake integration. The table below outlines its child parameters.
The audit object enables Snowflake query audit. The table below outlines its child parameter.
The authentication type and credentials you used when configuring your integration are required in the payload if autoBootstrap was set to true during setup. For integrations configured with autoBootstrap set to false, no payload is required when deleting the integration.
The impersonation object enables and defines roles for user impersonation for Snowflake. The table below outlines its child parameters.
The lineage object enables Snowflake native lineage ingestion. When this setting is enabled, Immuta automatically applies tags added to a Snowflake table to its descendant data source columns in Immuta so you can build policies using those tags to restrict access to sensitive data. The table below outlines its child parameters.
The oAuthClientConfig object represents your OAuth configuration in Snowflake. This object is required if you set oAuthClientCredentials as your authentication type in the Snowflake integration configuration, and you must set autoBootstrap to false. The table below outlines the object's child parameters.
The userRolePattern object excludes roles and users from authorization checks in the Snowflake integration. The table below outlines its child parameter.
The workspaces object represents an Immuta project workspace configured for Snowflake. The table below outlines its child parameters.
Parameter | Description | Required or optional | Default values | Accepted values |
---|---|---|---|---|
Parameter | Description | Required or optional |
---|---|---|
Parameter | Description | Required or optional |
---|---|---|
Parameter | Description | Required or optional |
---|---|---|
Parameter | Description | Required or optional |
---|---|---|
Parameter | Description | Required or optional | Default values | Accepted values |
---|---|---|---|---|
Parameter | Description | Required or optional |
---|---|---|
Parameter | Description | Required or optional |
---|---|---|
Parameter | Description | Required or optional | Default values | Accepted values |
---|---|---|---|---|
Parameter | Description | Required or optional | Default values | Accepted values |
---|---|---|---|---|
Parameter | Description | Required or optional | Default values | Accepted values |
---|---|---|---|---|
Parameter | Description | Required or optional | Default values | Accepted values |
---|---|---|---|---|
Parameter | Description | Required or optional | Default values | Accepted values |
---|---|---|---|---|
Parameter | Description | Required or optional | Default values | Accepted values |
---|---|---|---|---|
Property | Description |
---|---|
Parameter | Description | Required or optional | Default values | Accepted values |
---|---|---|---|---|
Parameter | Description | Default values | Accepted values |
---|---|---|---|
Parameter | Description | Required or optional | Accepted values |
---|---|---|---|
Parameter | Description | Default values | Accepted values |
---|---|---|---|
Parameter | Description | Required or optional | Default values | Accepted values |
---|---|---|---|---|
Attribute | Description | Required or optional | Default values | Accepted values |
---|---|---|---|---|
Parameter | Description | Default values | Accepted values |
---|---|---|---|
Parameter | Description | Default values | Accepted values |
---|---|---|---|
Parameter | Description | Required or optional | Default values | Accepted values |
---|---|---|---|---|
The oAuthClientConfig object represents your OAuth configuration in Databricks Unity Catalog. This object is required if you set oAuthM2M as your authentication type in the Databricks Unity Catalog integration configuration. The table below outlines the object's child parameters.
Parameter | Description | Required or optional | Default values | Accepted values |
---|---|---|---|---|
Parameter | Description | Required or optional | Default values | Accepted values |
---|---|---|---|---|
Parameter | Description | Required or optional | Default values | Accepted values |
---|---|---|---|---|
Parameter | Description | Required or optional | Accepted values |
---|---|---|---|
Parameter | Description | Default values | Accepted values |
---|---|---|---|
Parameter | Description | Required or optional | Default values | Accepted values |
---|---|---|---|---|
Parameter | Description | Required or optional | Default values | Accepted values |
---|---|---|---|---|
Parameter | Description | Default values | Accepted values |
---|---|---|---|
Parameter | Description | Required or optional | Accepted values |
---|---|---|---|
Parameter | Description | Default values | Accepted values |
---|---|---|---|
Parameter | Description | Required or optional | Default values | Accepted values |
---|---|---|---|---|
Parameter | Description | Required or optional | Default values | Accepted values |
---|---|---|---|---|
Parameter | Description | Default values | Accepted values |
---|---|---|---|
Parameter | Description | Default values | Accepted values |
---|---|---|---|
Method | Description |
---|---|
GET | Gets all integration configurations |
POST | Creates an integration |
DELETE | Deletes a configured integration |
GET | Gets an integration configuration |
PUT | Updates a configured integration |
POST | Regenerates an Immuta API key for the configured integration |
GET | Gets the status of the specified integration |
POST | Creates a script to remove Immuta-managed resources from your platform for integrations that were not successfully created |
POST | Creates a script to set up Immuta-managed resources in your platform |
POST | Creates a script to remove Immuta-managed resources from your platform for integrations that were successfully configured |
POST | Creates a script to edit existing Immuta-managed resources in your platform |
POST | Creates the first script to set up Immuta-managed resources in your Azure Synapse Analytics or Redshift platform |
POST | Creates the second script to remove Immuta-managed resources from your Azure Synapse Analytics integration if it was not successfully created |
type string
The type of integration to configure.
Required
-
Azure Synapse Analytics
Databricks
Google BigQuery
Native S3
Redshift
Snowflake
Trino
autoBootstrap boolean
When true
, Immuta will automatically configure the integration in your Azure Synapse Analytics, Databricks Unity Catalog, Redshift, or Snowflake environment for you. When false
, you must set up your environment manually before configuring the integration with the API. This parameter must be set to false
in the Amazon S3 and Google BigQuery configurations. See the specific how-to guide for configuring your integration for details: Azure Synapse Analytics, Databricks Unity Catalog, Redshift, Snowflake.
Required for all integrations except Starburst (Trino)
-
true
or false
config object
This object specifies the integration settings. See the config object description for your integration for details: Amazon S3, Azure Synapse Analytics, Databricks Unity Catalog, Google BigQuery, Redshift, or Snowflake.
Required for all integrations except Starburst (Trino)
-
-
dryRun boolean
When true
, the integration configuration will not actually be created, and the response returns the validation tests statuses.
Optional
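Putting the parameters above together, a create request body might look like the following sketch for a Snowflake integration with autoBootstrap. The config keys follow the Snowflake config table later in this reference; the host, credential values, and the `userPassword` authentication type string are placeholders/assumptions.

```python
import json

# Example request body for creating a Snowflake integration.
create_payload = {
    "type": "Snowflake",
    "autoBootstrap": True,  # Immuta configures the Snowflake environment
    "config": {
        "host": "acme.snowflakecomputing.com",   # placeholder account URL
        "warehouse": "IMMUTA_WH",                # placeholder warehouse
        "database": "IMMUTA_DB",                 # placeholder database
        "authenticationType": "userPassword",    # assumed value; check accepted values
        "username": "SYSTEM_ACCOUNT",
        "password": "<secret>",
    },
    # With dryRun set, the configuration is validated but not created.
    "dryRun": True,
}

body = json.dumps(create_payload)
```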
id number
The unique identifier of the integration configuration.
Required
dryRun boolean
When true
, the integration configuration will not actually be deleted, and the response returns the validation tests statuses.
Optional
forceDisable boolean
When true
, the integration will be deleted in Immuta. Users must manually remove all Immuta objects in the remote data platform.
Optional
id number
The unique identifier of the integration configuration.
Required
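A delete request body following the parameters above might look like this sketch. The credential block applies only to integrations created with autoBootstrap set to true (per the authentication tables in this reference); the authentication type string and credential values are placeholders.

```python
import json

# Example request body for deleting an integration configuration.
# forceDisable removes the integration in Immuta only; you must then
# clean up Immuta objects in the remote data platform manually.
delete_payload = {
    "dryRun": False,
    "forceDisable": True,
}

# Integrations created with autoBootstrap true also require the
# credentials used at setup; these values are placeholders.
delete_with_credentials = {
    **delete_payload,
    "authenticationType": "userPassword",  # assumed value
    "username": "SYSTEM_ACCOUNT",
    "password": "<secret>",
}

body = json.dumps(delete_with_credentials)
```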
type string
The type of integration to configure.
Required
-
Azure Synapse Analytics
Databricks
Google BigQuery
Redshift
Snowflake
autoBootstrap boolean
When true
, Immuta will automatically configure the integration in your Azure Synapse Analytics, Databricks Unity Catalog, Redshift, or Snowflake environment for you. When false
, you must set up your environment manually before configuring the integration with the API. This parameter must be set to false
in the Google BigQuery configuration. See the specific how-to guide for configuring other integrations: Azure Synapse Analytics, Databricks Unity Catalog, Redshift, Snowflake.
Required
-
true
or false
config object
This object specifies the integration settings. See the config object description for your integration for details: Azure Synapse Analytics, Databricks Unity Catalog, Google BigQuery, Redshift, or Snowflake.
Required
-
-
dryRun boolean
When true
, the integration configuration will not actually be updated, and the response returns the validation tests statuses.
Optional
id number
The unique identifier of the integration configuration.
Required
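An update request body mirrors the create payload. The sketch below swaps the default warehouse on a Snowflake integration; keep in mind that some config settings cannot be changed once an integration is configured, and all values shown are placeholders.

```python
import json

# Example request body for updating a configured Snowflake integration.
update_payload = {
    "type": "Snowflake",
    "autoBootstrap": True,
    "config": {
        "host": "acme.snowflakecomputing.com",
        "warehouse": "REPORTING_WH",          # e.g. switch the default warehouse
        "database": "IMMUTA_DB",
        "authenticationType": "userPassword",  # assumed value
        "username": "SYSTEM_ACCOUNT",
        "password": "<secret>",
    },
    "dryRun": True,  # validate the update without applying it
}

body = json.dumps(update_payload)
```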
type string
The type of integration to clean up.
Required
-
Azure Synapse Analytics
Redshift
Snowflake
autoBootstrap boolean
Set to false
to specify that you will run the script in your environment yourself to clean up the integration resources. See the Azure Synapse Analytics, Redshift, or Snowflake manual setup section for details.
Required
-
false
config object
This object specifies the integration settings. See the config object description for your integration for details: Azure Synapse Analytics, Redshift, or Snowflake.
Required
-
-
type string
The type of integration to configure.
Required
-
Azure Synapse Analytics
Databricks
Redshift
Snowflake
autoBootstrap boolean
Set to false
to specify that you will run the script in your environment yourself to configure the integration. You must run the Immuta script before creating the integration. See the Azure Synapse Analytics, Databricks Unity Catalog, Redshift, or Snowflake manual setup guides for details.
Required
-
false
config object
This object specifies the integration settings. See the config object description for your integration for details: Azure Synapse Analytics, Databricks Unity Catalog, Redshift, or Snowflake.
Required
-
-
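A script-generation request body follows the same shape, with autoBootstrap forced to false. The sketch below is for a manually bootstrapped Databricks Unity Catalog integration; the workspace URL, HTTP path, token, and the `token` authentication type string are placeholders/assumptions.

```python
import json

# Example request body for generating a setup script for a manually
# bootstrapped Databricks Unity Catalog integration. You run the
# returned script yourself before creating the integration.
script_payload = {
    "type": "Databricks",
    "autoBootstrap": False,  # required: you run the script manually
    "config": {
        "workspaceUrl": "my-workspace.cloud.databricks.com",       # placeholder
        "httpPath": "sql/protocolv1/o/0/0000-000000-abcdefg",      # placeholder
        "authenticationType": "token",  # assumed value; check accepted values
        "token": "<personal-access-token>",
    },
}

body = json.dumps(script_payload)
```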
type string
The type of integration to configure.
Required
-
Azure Synapse Analytics
Databricks
Redshift
Snowflake
autoBootstrap boolean
Set to false
to specify that you will run the script in your environment yourself to configure the integration. You must run the Immuta script before creating the integration. See the Azure Synapse Analytics, Databricks Unity Catalog, Redshift, or Snowflake manual setup guides for details.
Required
-
false
config object
This object specifies the integration settings. Some settings cannot be changed once an integration is configured. See the config object description for your integration for details: Azure Synapse Analytics, Databricks Unity Catalog, Redshift, or Snowflake.
Required
-
-
type string
The type of integration to configure.
Required
-
Azure Synapse Analytics
Redshift
autoBootstrap boolean
Set to false
to specify that you will run the script in your environment yourself to configure the integration. You must run the Immuta script before creating the integration. See the Azure Synapse Analytics or Redshift manual setup guides for details.
Required
-
false
config object
This object specifies the integration settings. See the config object description of the Azure Synapse Analytics or Redshift integration configuration for details.
Required
-
-
type string
The type of integration to clean up.
Required
-
Azure Synapse Analytics
autoBootstrap boolean
Set to false
to specify that you will run the script in your environment yourself to clean up the integration resources. See the Azure Synapse Analytics manual setup section for details.
Required
-
false
config object
This object specifies the integration settings. See the config object description of Azure Synapse Analytics for details.
Required
-
-
type | The type of integration. | Required | - |  |
integrationId | The unique identifier of the S3 integration. | Required | - | - |
dataSources.dataSourceName | The name of the S3 data source you want to create. | Required | - | - |
dataSources.prefix | The S3 prefix that creates a data source for the prefix, bucket, or object provided in the path. | Required | - | - |
dataSourceId | The unique identifier of the data source. |
prefix | The S3 path of the prefix, bucket, or object used to create the data source. |
dataSourceName | The name of the data source. |
enabled | When `true`, user impersonation is enabled. | `false` | `true` or `false` |
role | The name of the user impersonation role. |  | - |
authenticationType | The type of authentication used when originally configuring the Azure Synapse Analytics integration. | Required |  |
username | The username of the system account that configured the integration. | Required if autoBootstrap was true | - |
password | The password of the system account that configured the integration. | Required if autoBootstrap was true | - |
hashDelimiter | A delimiter used to separate key-value pairs. | `\|` | - |
hashKeyDelimiter | A delimiter used to separate a key from its value. |  | - |
arrayDelimiter | A delimiter used to separate array elements. |  | - |
enabled | This setting enables or disables Databricks Unity Catalog query audit. | `false` | `true` or `false` |
deny | The name of a group in Databricks that will be excluded from having data policies applied. This account-level group should be used for privileged users and service accounts that require an unmasked view of data. |  | - |
host | The hostname of the proxy server. | Required | - | Valid URL hostnames |
port | The port to use when connecting to your proxy server. | Optional |  |  |
username | The username to use with the proxy server. | Optional |  | - |
password | The password to use with the proxy server. | Optional |  | - |
enabled | When `true`, user impersonation is enabled. | `false` | `true` or `false` |
role | The name of the user impersonation role. |  | - |
username | The username of the system account that can act on Redshift objects and configure the integration. | Required | - | - |
password | The password of the system account that can act on Redshift objects and configure the integration. | Required | - | - |
appId | The Okta application ID. | Required | - | - |
idpHost | The Okta identity provider host URL. | Required | - | - |
role | The Okta role. | Required | - | - |
enabled | This setting enables or disables Snowflake query audit. | `false` | `true` or `false` |
enabled | When `true`, user impersonation is enabled. | `false` | `true` or `false` |
role | The name of the user impersonation role. |  | - |
enabled | When `true`, Snowflake lineage ingestion is enabled. | Optional | `false` | `true` or `false` |
lineageConfig | Configures what tables Immuta will ingest lineage history for, the number of rows to ingest per batch, and what tags to propagate. Child parameters include tableFilter, tagFilterRegex, and ingestBatchSize. | Required if enabled is | - | - |
lineageConfig.tableFilter | This child parameter of lineageConfig determines which tables Immuta will ingest lineage for. Use a regular expression that excludes | Optional |
| Regular expression that excludes |
lineageConfig.tagFilterRegex | This child parameter of lineageConfig determines which tags to propagate using lineage. Use a regular expression that excludes | Optional |
| Regular expression that excludes |
lineageConfig.ingestBatchSize | This child parameter of lineageConfig configures the number of rows Immuta ingests per batch when streaming Access History data from your Snowflake instance. | Optional |
| Minimum value of |
exclude | This array is a list of roles and users to exclude from authorization checks. |  | - |
enabled | This setting enables or disables Snowflake project workspaces. If you use Snowflake secure data sharing with Immuta, set this property to true. | `false` | `true` or `false` |
warehouses | This array is a list of warehouses workspace users have usage privileges on. |  | - |
type | The type of integration to configure. | Required | - |  |
autoBootstrap |  | Required for all integrations except Starburst (Trino) | - |  |
config |  | Required for all integrations except Starburst (Trino) | - | - |
name | A name for the integration that is unique across all Amazon S3 integrations configured in Immuta. | Required | - | - |
awsAccountId | The ID of your AWS account. | Required | - | - |
awsRegion | The AWS region to use. | Required | - | Any valid AWS region (us-east-1, for example) |
awsLocationRole | The AWS IAM role ARN assigned to the base access grants location. This is the role the AWS Access Grants service assumes to vend credentials to the grantee. When a grantee accesses S3 data, the AWS Access Grants service attaches session policies and assumes this role in order to vend scoped down credentials to the grantee. This role needs full access to all paths under the S3 location prefix. | Required | - | - |
awsLocationPath | The base S3 location prefix that Immuta will use for this connection when registering S3 data sources. This path must be unique across all S3 integrations configured in Immuta. | Required | - | - |
awsRoleToAssume | The optional AWS IAM role ARN Immuta assumes when interacting with AWS. | Optional |  | - |
authenticationType | The method used to authenticate with AWS when configuring the S3 integration. | Required | - |  |
awsAccessKeyId | The AWS access key ID for the AWS account configuring the integration. | Required when authenticationType is | - | - |
awsSecretAccessKey | The AWS secret access key for the AWS account configuring the integration. | Required when authenticationType is | - | - |
port | The port to use when connecting to your S3 Access Grants instance. | Optional |  |  |
host | The URL of your Azure Synapse Analytics account. | Required | - | Valid URL hostnames. |
database | Name of an existing database where the Immuta system user will store all Immuta-generated schemas and views. | Required | - | - |
schema | Name of the Immuta-managed schema where all your secure views will be created and stored. | Required | - | - |
authenticationType | The method used to authenticate with Azure Synapse Analytics when configuring the integration. | Required | - |  |
username | The username of the system account that can act on Azure Synapse Analytics objects and configure the integration. | Required | - | - |
password | The password of the system account that can act on Azure Synapse Analytics objects and configure the integration. | Required | - | - |
Optional | - |
port | The port to use when connecting to your Azure Synapse Analytics account host. | Optional |  |  |
Optional | - |
connectArgs | The connection string arguments to pass to the ODBC driver when connecting as the Immuta system user. | Optional | - | - |
port | The port to use when connecting to your Databricks account host. | Optional |  |  |
workspaceUrl | Databricks workspace URL. For example, | Required | - | - |
httpPath | The HTTP path of your Databricks cluster or SQL warehouse. | Required | - | - |
authenticationType | The type of authentication to use when connecting to Databricks. | Required | - |  |
token | The Databricks personal access token. This is the access token for the Immuta service principal. | Required if authenticationType is | - | - |
Optional |
| - |
Required if you selected | - | - |
Optional | - |
workspaceIds | This array can be used to scope query audit to only ingest activity for specified workspaces. | Optional |  |  |
catalog | The name of the Databricks catalog Immuta will create to store internal entitlements and other user data specific to Immuta. This catalog will only be readable for the Immuta service principal and should not be granted to other users. The catalog name may only contain letters, numbers, and underscores and cannot start with a number. | Optional |  | - |
Optional |
| - |
Optional |
| - |
workspaceUrl | Databricks workspace URL. For example, | Required | - | - |
httpPath | The HTTP path of the compute for the workspace. | Required | - | - |
authenticationType | The type of authentication to use when connecting to Databricks. The additional workspace credentials will be used when processing objects in bound catalogs that are not accessible via the default workspace. | Required | - |
|
token | The Databricks personal access token. This is the access token for the Immuta service principal. The additional workspace credentials will be used when processing objects in bound catalogs that are not accessible via the default workspace. | Required if authenticationType is | - | - |
Required if you selected | - | - |
catalogs | The name of the catalog to use for this additional workspace connection. The catalog name may only contain letters, numbers, and underscores and cannot start with a number. Users may configure one additional workspace connection per catalog. Users may still bind a catalog to more than one workspace in Databricks, as long as there is only one additional workspace connection in Immuta, as Immuta requires a single connection from which to control the catalog. | Required |  | - |
clientId | The client identifier of the Immuta service principal you configured. This is the client ID displayed in Databricks when creating the client secret for the service principal. | Required | - | - |
authorityUrl | Authority URL of your identity provider. | Required |  | - |
scope | The scope limits the operations and roles allowed in Databricks by the access token. | Optional |  | - |
clientSecret | Required | - | - |
role | Google Cloud role used to connect to Google BigQuery. | Required | - | - |
datasetSuffix | Suffix to postfix to the name of each dataset created to store secure views. This string must start with an underscore. | Required | - | - |
dataset | Name of the BigQuery dataset to provision inside of the project for Immuta metadata storage. | Optional |  | - |
location | The dataset's location. After a dataset is created, the location can't be changed. | Required | - | Any valid GCP location ( |
credential | Required | - | - |
port | The port to use when connecting to your BigQuery account host. | Optional |  |  |
host | The URL of your Redshift account. | Required | - | Valid URL hostnames |
database | Name of a new empty database that the Immuta system user will manage and store metadata in. | Required | - | - |
initialDatabase | Name of the existing database in Redshift that Immuta initially connects to and creates the Immuta-managed database. | Required if autoBootstrap is | - | - |
authenticationType | The type of authentication to use when connecting to Redshift. | Required | - |  |
username | The username of the system account that can act on Redshift objects and configure the integration. | Required if you selected | - | - |
password | The password of the system account that can act on Redshift objects and configure the integration. | Required if you selected | - | - |
Required if you selected | - | - |
databaseUser | The Redshift database username. | Required if you selected | - | - |
accessKeyId | The Redshift access key ID. | Required if you selected | - | - |
secretKey | The Redshift secret key. | Required if you selected | - | - |
sessionToken | The Redshift session token. | Optional if you selected | - | - |
port | The port to use when connecting to your Redshift account host. | Optional |  |  |
Optional | - |
connectArgs | The connection string arguments to pass to the ODBC driver when connecting as the Immuta system user. | Optional | - | - |
authenticationType | The type of authentication used when originally configuring the Redshift integration. | Required if autoBootstrap was true |  |
username | The username of the system account that configured the integration. | Required if you selected | - |
password | The password of the system account that configured the integration. | Required if you selected | - |
databaseUser | The Redshift database username. | Required if you selected | - |
accessKeyId | The Redshift access key ID. | Required if you selected | - |
secretKey | The Redshift secret key. | Required if you selected | - |
sessionToken | The Redshift session token. | Optional if you selected | - |
Required if you selected | - |
host | The URL of your Snowflake account. | Required | - | Valid URL hostnames |
warehouse | The default pool of compute resources the Immuta system user will use to run queries and perform other Snowflake operations. | Required | - | - |
database | Name of a new empty database that the Immuta system user will manage and store metadata in. | Required | - | - |
authenticationType | The type of authentication to use when connecting to Snowflake. | Required | - |  |
username | The username of a Snowflake account that can act on Snowflake objects and configure the integration. | Required if you selected | - | - |
password | The password of a Snowflake account that can act on Snowflake objects and configure the integration. | Required if you selected | - | - |
privateKey | The private key. Replace new lines in the private key with a backslash before the new line character: "\n". If you are using another means of configuration, such as a Python script, the "\n" should not be added. | Required if you selected | - | - |
Required if you selected | - | - |
role | The privileged Snowflake role used by the Immuta system account when configuring the Snowflake integration. | Required when autoBootstrap is | - | - |
port | The port to use when connecting to your Snowflake account host. | Optional |  |  |
Optional | - |
Optional | - |
Optional |
| - |
Optional | - |
connectArgs | The connection string arguments to pass to the Node.js driver when connecting as the Immuta system user. | Optional | - | - |
privilegedConnectArgs | The connection string arguments to pass to the Node.js driver when connecting as the privileged user. | Optional when autoBootstrap is | - | - |
Optional | - | - |
authenticationType | The type of authentication used when originally configuring the integration. | Required if autoBootstrap was true |  |
username | The username of the system account that configured the integration. | Required for the Azure Synapse Analytics integration or if you selected | - |
password | The password of the system account that configured the integration. | Required for the Azure Synapse Analytics integration or if you selected | - |
privateKey | The private key. Replace new lines in the private key with a backslash before the new line character: "\n". If you are using another means of configuration, such as a Python script, the "\n" should not be added. | Required if you selected | - |
Required if you selected | - |
role | The privileged Snowflake role used by the Immuta system account when configuring the Snowflake integration. | Required when autoBootstrap is | - |
provider | The identity provider for OAuth, such as Okta. | Required | - | - |
clientId | The client identifier of your registered application. | Required | - | - |
authorityUrl | Authority URL of your identity provider. | Required | - | - |
useCertificate | Specifies whether or not to use a certificate and private key for authenticating with OAuth. | Required | - | `true` or `false` |
publicCertificateThumbprint | Your certificate thumbprint. | Required if useCertificate is | - | - |
oauthPrivateKey | The private key content. | Required if useCertificate is | - | - |
clientSecret | Client secret of the application. | Required if useCertificate is | - | - |
resource | An optional resource to pass to the token provider. | Optional | - | - |
scope | The scope limits the operations and roles allowed in Snowflake by the access token. | Optional |  | - |
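Following the oAuthClientConfig parameters above, the object might be assembled as in this sketch. With useCertificate set to true you would supply publicCertificateThumbprint and oauthPrivateKey instead of clientSecret; every value below, including the example scope string, is a placeholder.

```python
import json

# Sketch of a Snowflake oAuthClientConfig object (placeholder values).
oauth_client_config = {
    "provider": "okta",                         # identity provider for OAuth
    "clientId": "0oa1a2b3c4",                   # registered application's client ID
    "authorityUrl": "https://example.okta.com", # identity provider authority URL
    "useCertificate": False,                    # use clientSecret rather than a key pair
    "clientSecret": "<secret>",
    "resource": None,                           # optional resource for the token provider
    "scope": "session:role:IMMUTA_ROLE",        # assumed example scope string
}

body = json.dumps(oauth_client_config)
```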
When true, Immuta will automatically configure the integration in your Azure Synapse Analytics, Databricks Unity Catalog, Redshift, or Snowflake environment for you. When false, you must set up your environment manually before configuring the integration with the API. This parameter must be set to false in the Amazon S3 and Google BigQuery configurations. See the specific how-to guide for configuring your integration for details: Azure Synapse Analytics, Databricks Unity Catalog, Redshift, Snowflake.
This object specifies the integration settings. See the config object description for your integration for details: Amazon S3, Azure Synapse Analytics, Databricks Unity Catalog, Google BigQuery, Redshift, or Snowflake.
object
This object is a set of delimiters that Immuta uses to store profile data in Azure Synapse Analytics. See the metadataDelimiters table for parameters.
See the metadataDelimiters table for default values.
object
Enables user impersonation. See the impersonation table for parameters.
Disabled by default. See the impersonation table for parameters.
object
This object allows you to configure your integration to use a proxy server. See the proxyOptions table for child attributes.
object
This object represents your OAuth configuration. To use this authentication method, authenticationType must be oAuthM2M. See the oAuthClientConfig table for parameters.
object
This object enables Databricks Unity Catalog query audit. See the audit table for parameters.
Disabled by default. See the audit table for parameters.
object
This object allows you to exclude groups in Databricks from authorization checks. See the groupPattern table for parameters.
array[object]
This object allows you to configure additional workspace connections for your integration. See the additionalWorkspaceConnections table for child attributes.
object
This object represents your OAuth configuration. To use this authentication method, authenticationType must be oAuthM2M. See the oAuthClientConfig table for parameters. The additional workspace credentials will be used when processing objects in bound catalogs that are not accessible via the default workspace.
The scope limits the operations and roles allowed in Databricks by the access token. See the Databricks documentation for details about scopes.
The Google BigQuery service account JSON keyfile credential content. See the Google BigQuery documentation for guidance on generating and downloading this keyfile.
object
This object represents your Okta configuration. See the okta table for parameters.
object
Enables user impersonation. See the impersonation table for parameters.
Disabled by default. See the impersonation table for parameters.
object
This object represents your Okta configuration. See the okta table for parameters.
object
This object represents your OAuth configuration. To use this authentication method, autoBootstrap must be false. See the oAuthClientConfig table for parameters.
object
This object enables Snowflake query audit. See the audit table for the parameter.
Disabled by default. See the audit table for the parameter.
object
Enables user impersonation. See the impersonation table for parameters.
Disabled by default. See the impersonation table for parameters.
object
This object excludes roles and users from authorization checks. See the userRolePattern table for parameters.
object
This object represents an Immuta project workspace configured for Snowflake. See the workspaces table for parameters.
Disabled by default. See the workspaces table for parameters.
object
Enables Snowflake lineage ingestion so that Immuta can apply tags added to Snowflake tables to their descendant data source columns. See the lineage table for parameters.
object
This object represents your OAuth configuration. See the oAuthClientConfig table for parameters.
The scope limits the operations and roles allowed in Snowflake by the access token. See the Snowflake documentation for details about scopes.