Immuta’s integration with Unity Catalog allows you to manage multiple Databricks workspaces through Unity Catalog while protecting your data with Immuta policies. Instead of manually creating UDFs or granting access to each table in Databricks, you can author your policies in Immuta and have Immuta manage and enforce Unity Catalog access-control policies on your data in Databricks clusters or SQL warehouses.
Use the /integrations endpoint to configure, get, update, and delete your Databricks Unity Catalog integration.
Several different accounts are used to set up and maintain the Databricks Unity Catalog integration. The permissions required for each are outlined below.
Immuta account (required): This user configures the integration on the app settings page in Immuta. To access the app settings page, this user needs the following permission:
APPLICATION_ADMIN Immuta permission
Databricks service principal (required): This service principal is used continuously by Immuta to orchestrate Unity Catalog policies and maintain state between Immuta and Databricks. In the automatic setup option, Immuta also uses this service principal to create the Immuta-managed catalog. This service principal needs the following Databricks privileges:
CREATE CATALOG privilege on the Unity Catalog metastore. This is only required if you have Immuta automatically configure the integration in Databricks for you. If a separate user will run the Immuta script in Databricks to manually configure the integration, that Databricks user account needs this privilege instead.
OWNER permission on the Immuta catalog you configure.
OWNER privilege on one of the securables below so that Immuta can administer Unity Catalog row-level and column-level security controls:
OWNER on catalogs with schemas and tables registered as Immuta data sources. This permission could also be applied by granting OWNER on a catalog to a Databricks group that includes the Immuta service principal to allow for multiple owners.
OWNER on schemas with tables registered as Immuta data sources.
OWNER on all tables registered as Immuta data sources, if the OWNER permission cannot be applied at the catalog or schema level. In this case, each table registered as an Immuta data source must individually have the OWNER permission granted to the Immuta service principal.
USE CATALOG and USE SCHEMA on parent catalogs and schemas of tables registered as Immuta data sources so that the Immuta service principal can SELECT and MODIFY securables within the parent catalog and schema.
SELECT and MODIFY on all tables registered as Immuta data sources so that the Immuta service principal can grant and revoke access to tables and apply Unity Catalog row- and column-level security controls.
For native query audit (optional), the service principal also needs:
USE CATALOG on the system catalog
USE SCHEMA on the system.access schema
SELECT on the following system tables: system.access.audit, system.access.table_lineage, and system.access.column_lineage
Databricks account (recommended): This user account can manually configure the integration in Databricks to create the Immuta-managed catalog. To do so, this account requires the following Databricks privileges:
CREATE CATALOG on the Unity Catalog metastore
ACCOUNT ADMIN on the Unity Catalog metastore for native query audit (optional)
Access token authentication: If using this method, generate a personal access token for the service principal that Immuta will use to manage policies in Unity Catalog. This service principal must have the privileges listed above for the metastore associated with the Databricks workspace.
OAuth machine-to-machine (M2M) authentication: If using this method, follow Databricks documentation to create a client secret for the Immuta service principal. This service principal must have the privileges listed above for the metastore associated with the Databricks workspace.
In Databricks, create a service principal with the privileges listed above.
Opt to enable native query audit for Unity Catalog:
If you will configure the integration using the manual setup option, the Immuta script you will generate includes the SQL statements for granting required privileges to the service principal, so you can skip this step and continue to the manual setup section. Otherwise, manually grant the Immuta service principal access to the Databricks Unity Catalog system tables. For Databricks Unity Catalog audit to work, the service principal must have the following access at minimum:
USE CATALOG on the system catalog
USE SCHEMA on the system.access schema
SELECT on the following system tables: system.access.audit, system.access.table_lineage, and system.access.column_lineage
You have two options for configuring your Databricks Unity Catalog integration. Select the method you prefer below to navigate to configuration instructions:
Automatic setup: Immuta creates the catalogs, schemas, tables, and functions using the service principal you created.
Manual setup: Run the Immuta script in Databricks yourself to create the catalog. You can also modify the script to customize your storage location for tables, schemas, or catalogs. The user running the script must have the Databricks privileges listed above.
Copy the request example, and replace the values with your own as directed to configure the integration settings. The examples provided use JSON format, but the request also accepts YAML.
See the config object description for parameter definitions, value types, and additional configuration options.
Replace the Immuta URL and API key with your own.
Change the config values to your own, where
workspaceUrl is your Databricks workspace URL.
httpPath is the HTTP path of your Databricks cluster or SQL warehouse.
token is the Databricks personal access token. This is the access token for the Immuta service principal.
catalog is the name of the Databricks catalog Immuta will create to store internal entitlements and other user data specific to Immuta. This catalog will only be readable for the Immuta service principal and should not be granted to other users. The catalog name may only contain letters, numbers, and underscores and cannot start with a number.
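For orientation, a request along these lines could be sent with curl. This is a minimal sketch, not the canonical request example: the Authorization header usage, the "Databricks" type string, and the base path are assumptions to verify against the payload reference, and every value shown is a placeholder.

```bash
# Hypothetical example: configure the Databricks Unity Catalog integration
# (automatic setup, access token authentication).
# Assumptions: API key passed in the Authorization header, type string "Databricks",
# base path /integrations directly under the Immuta URL.
curl -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: <your-api-key>" \
  https://<your-immuta-url>/integrations \
  -d '{
    "type": "Databricks",
    "autoBootstrap": true,
    "config": {
      "workspaceUrl": "https://<workspace>.cloud.databricks.com",
      "httpPath": "<cluster-or-warehouse-http-path>",
      "token": "<service-principal-access-token>",
      "catalog": "immuta"
    }
  }'
```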
Replace the Immuta URL and API key with your own.
Change the config values to your own, where
workspaceUrl is your Databricks workspace URL.
httpPath is the HTTP path of your Databricks cluster or SQL warehouse.
oAuthClientConfig specifies your client ID, client secret, and authority URL. See the object description for details about child parameters.
catalog is the name of the Databricks catalog Immuta will create to store internal entitlements and other user data specific to Immuta. This catalog will only be readable for the Immuta service principal and should not be granted to other users. The catalog name may only contain letters, numbers, and underscores and cannot start with a number.
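As a rough sketch, the OAuth machine-to-machine variant replaces the token field with an oAuthClientConfig object along these lines. The child field names shown here (clientId, clientSecret, authorityUrl) are illustrative guesses, so confirm the exact names in the object description.

```json
"config": {
  "workspaceUrl": "https://<workspace>.cloud.databricks.com",
  "httpPath": "<cluster-or-warehouse-http-path>",
  "oAuthClientConfig": {
    "clientId": "<service-principal-client-id>",
    "clientSecret": "<service-principal-client-secret>",
    "authorityUrl": "<oauth-authority-url>"
  },
  "catalog": "immuta"
}
```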
Replace the Immuta URL and API key with your own.
Change the config values to your own, where
workspaceUrl is your Databricks workspace URL.
httpPath is the HTTP path of your Databricks cluster or SQL warehouse.
oAuthClientConfig specifies your client ID, client secret, and authority URL. See the object description for details about child parameters.
catalog is the name of the Databricks catalog Immuta will create to store internal entitlements and other user data specific to Immuta. This catalog will only be readable for the Immuta service principal and should not be granted to other users. The catalog name may only contain letters, numbers, and underscores and cannot start with a number.
additionalWorkspaceConnections.workspaceURL: The Databricks workspace URL.
additionalWorkspaceConnections.HTTPpath: The HTTP path of the compute for the workspace.
additionalWorkspaceConnections.authenticationType: Specifies the authentication type to use to access the workspace. The additional workspace credentials will be used when processing objects in bound catalogs that are not accessible via the default workspace. See the additionalWorkspaceConnections section for details about additional authentication types and required and child attributes.
additionalWorkspaceConnections.catalogs: The catalogs to use for the additional workspace connection.
If the integration tries to process an object that is in a bound catalog and none of the specified additional workspaces have access to that catalog, the operation will fail and an error will be reported.
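To visualize the shape described above, an additional workspace connection entry might look roughly like the following. The key casing mirrors the parameter names listed here and should be checked against the additionalWorkspaceConnections reference; all values are placeholders.

```json
"additionalWorkspaceConnections": [
  {
    "workspaceURL": "https://<other-workspace>.cloud.databricks.com",
    "HTTPpath": "<compute-http-path-for-that-workspace>",
    "authenticationType": "<supported-authentication-type>",
    "catalogs": ["<bound-catalog-name>"]
  }
]
```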
The response returns the status of the Databricks Unity Catalog integration configuration connection. See the response schema reference for details about the response schema.
A successful response includes the validation tests statuses.
An unsuccessful request returns the status code and an error message. See the HTTP status codes and error messages for a list of statuses, error messages, and troubleshooting guidance.
To manually configure the integration, complete the following steps:
Copy the request example, and replace the values with your own as directed to configure the integration settings. The examples provided use JSON format, but the request also accepts YAML.
See the config object description for parameter definitions, value types, and additional configuration options.
Replace the Immuta URL and API key with your own.
Change the config values to your own, where
workspaceUrl is your Databricks workspace URL.
httpPath is the HTTP path of your Databricks cluster or SQL warehouse.
token is the Databricks personal access token. This is the access token for the Immuta service principal.
catalog is the name of the Databricks catalog Immuta will create to store internal entitlements and other user data specific to Immuta. This catalog will only be readable for the Immuta service principal and should not be granted to other users. The catalog name may only contain letters, numbers, and underscores and cannot start with a number.
Run the script returned in the response in your Databricks environment.
Replace the Immuta URL and API key with your own.
Change the config values to your own, where
workspaceUrl is your Databricks workspace URL.
httpPath is the HTTP path of your Databricks cluster or SQL warehouse.
oAuthClientConfig specifies your client ID, client secret, and authority URL. See the object description for details about child parameters.
catalog is the name of the Databricks catalog Immuta will create to store internal entitlements and other user data specific to Immuta. This catalog will only be readable for the Immuta service principal and should not be granted to other users. The catalog name may only contain letters, numbers, and underscores and cannot start with a number.
Run the script returned in the response in your Databricks environment.
Replace the Immuta URL and API key with your own.
Change the config values to your own, where
workspaceUrl is your Databricks workspace URL.
httpPath is the HTTP path of your Databricks cluster or SQL warehouse.
oAuthClientConfig specifies your client ID, client secret, and authority URL. See the object description for details about child parameters.
catalog is the name of the Databricks catalog Immuta will create to store internal entitlements and other user data specific to Immuta. This catalog will only be readable for the Immuta service principal and should not be granted to other users. The catalog name may only contain letters, numbers, and underscores and cannot start with a number.
additionalWorkspaceConnections.workspaceURL: The Databricks workspace URL.
additionalWorkspaceConnections.HTTPpath: The HTTP path of the compute for the workspace.
additionalWorkspaceConnections.authenticationType: Specifies the authentication type to use to access the workspace. The additional workspace credentials will be used when processing objects in bound catalogs that are not accessible via the default workspace. See the additionalWorkspaceConnections section for details about additional authentication types and required and child attributes.
additionalWorkspaceConnections.catalogs: The catalogs to use for the additional workspace connection.
Run the script returned in the response in your Databricks environment.
If the integration tries to process an object that is in a bound catalog and none of the specified additional workspaces have access to that catalog, the operation will fail and an error will be reported.
Response
The response returns the script for you to run in your environment.
Copy the request example, and replace the values with your own as directed to configure the integration settings. The examples provided use JSON format, but the request also accepts YAML. The payload you provide must match the payload sent when generating the script.
See the config object description for parameter definitions, value types, and additional configuration options.
Replace the Immuta URL and API key with your own.
Pass the same payload you sent when generating the script, where
workspaceUrl is your Databricks workspace URL.
httpPath is the HTTP path of your Databricks cluster or SQL warehouse.
token is the Databricks personal access token. This is the access token for the Immuta service principal.
catalog is the name of the Databricks catalog Immuta will create to store internal entitlements and other user data specific to Immuta. This catalog will only be readable for the Immuta service principal and should not be granted to other users. The catalog name may only contain letters, numbers, and underscores and cannot start with a number.
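A minimal sketch of this completion request, assuming the manual flow is indicated by setting autoBootstrap to false and that the same config fields from the script-generation payload are resent; verify the exact fields against the request example.

```bash
# Hypothetical example: complete the manual Databricks Unity Catalog setup after
# running the generated script. The payload must match the one used to generate
# the script; type string and header usage are assumptions.
curl -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: <your-api-key>" \
  https://<your-immuta-url>/integrations \
  -d '{
    "type": "Databricks",
    "autoBootstrap": false,
    "config": {
      "workspaceUrl": "https://<workspace>.cloud.databricks.com",
      "httpPath": "<cluster-or-warehouse-http-path>",
      "token": "<service-principal-access-token>",
      "catalog": "immuta"
    }
  }'
```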
Replace the Immuta URL and API key with your own.
Pass the same payload you sent when generating the script, where
workspaceUrl is your Databricks workspace URL.
httpPath is the HTTP path of your Databricks cluster or SQL warehouse.
oAuthClientConfig specifies your client ID, client secret, and authority URL. See the object description for details about child parameters.
catalog is the name of the Databricks catalog Immuta will create to store internal entitlements and other user data specific to Immuta. This catalog will only be readable for the Immuta service principal and should not be granted to other users. The catalog name may only contain letters, numbers, and underscores and cannot start with a number.
Replace the Immuta URL and API key with your own.
Change the config values to your own, where
workspaceUrl is your Databricks workspace URL.
httpPath is the HTTP path of your Databricks cluster or SQL warehouse.
oAuthClientConfig specifies your client ID, client secret, and authority URL. See the object description for details about child parameters. The additional workspace credentials will be used when processing objects in bound catalogs that are not accessible via the default workspace.
catalog is the name of the Databricks catalog Immuta will create to store internal entitlements and other user data specific to Immuta. This catalog will only be readable for the Immuta service principal and should not be granted to other users. The catalog name may only contain letters, numbers, and underscores and cannot start with a number.
additionalWorkspaceConnections.workspaceURL: The Databricks workspace URL.
additionalWorkspaceConnections.HTTPpath: The HTTP path of the compute for the workspace.
additionalWorkspaceConnections.authenticationType: Specifies the authentication type to use to access the workspace. The additional workspace credentials will be used when processing objects in bound catalogs that are not accessible via the default workspace. See the additionalWorkspaceConnections section for details about additional authentication types and required and child attributes.
additionalWorkspaceConnections.catalogs: The catalogs to use for the additional workspace connection.
If the integration tries to process an object that is in a bound catalog and none of the specified additional workspaces have access to that catalog, the operation will fail and an error will be reported.
The response returns the status of the Databricks Unity Catalog integration configuration connection. See the response schema reference for details about the response schema.
A successful response includes the validation tests statuses.
An unsuccessful request returns the status code and an error message. See the HTTP status codes and error messages for a list of statuses, error messages, and troubleshooting guidance.
Copy the request example.
Replace the Immuta URL and API key with your own.
Replace the {id} request parameter with the unique identifier of the integration you want to get. Alternatively, you can get a list of all integrations and their IDs with the GET /integrations endpoint.
The response returns a Databricks Unity Catalog integration configuration. See the response schema reference for details about the response schema. An unsuccessful request returns the status code and an error message. See the HTTP status codes and error messages for a list of statuses, error messages, and troubleshooting guidance.
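For reference, the two lookups might look like this with curl; the Authorization header usage and base path are assumptions, and {id} is the integration's unique identifier.

```bash
# Hypothetical example: get a single integration by ID.
curl -X GET -H "Authorization: <your-api-key>" https://<your-immuta-url>/integrations/{id}

# Hypothetical example: list all integrations and their IDs.
curl -X GET -H "Authorization: <your-api-key>" https://<your-immuta-url>/integrations
```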
Copy the request example.
Replace the Immuta URL and API key with your own.
The response returns the configuration for all integrations. See the response schema reference for details about the response schema. An unsuccessful request returns the status code and an error message. See the HTTP status codes and error messages for a list of statuses, error messages, and troubleshooting guidance.
You have two options for updating your integration. Follow the steps that match your initial configuration of autoBootstrap:
automatic update (autoBootstrap is true)
manual update (autoBootstrap is false)
Copy the request example, and replace the values with your own as directed to configure the integration settings. The examples provided use JSON format, but the request also accepts YAML.
See the config object description for parameter definitions, value types, and additional configuration options.
This example updates the access token.
Replace the Immuta URL and API key with your own.
Replace the {id} request parameter with the unique identifier of the integration you want to update.
Change the config values to your own, where
workspaceUrl is your Databricks workspace URL.
httpPath is the HTTP path of your Databricks cluster or SQL warehouse.
token is the Databricks personal access token. This is the access token for the Immuta service principal.
catalog is the name of the Databricks catalog Immuta will create to store internal entitlements and other user data specific to Immuta. This catalog will only be readable for the Immuta service principal and should not be granted to other users. The catalog name may only contain letters, numbers, and underscores and cannot start with a number.
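A hedged sketch of such an update request follows. The HTTP method (PUT here) and the header usage are assumptions to confirm against the endpoint reference; all values are placeholders.

```bash
# Hypothetical example: update the access token on an existing Databricks Unity
# Catalog integration. Method, type string, and header usage are assumptions.
curl -X PUT \
  -H "Content-Type: application/json" \
  -H "Authorization: <your-api-key>" \
  https://<your-immuta-url>/integrations/{id} \
  -d '{
    "type": "Databricks",
    "autoBootstrap": true,
    "config": {
      "workspaceUrl": "https://<workspace>.cloud.databricks.com",
      "httpPath": "<cluster-or-warehouse-http-path>",
      "token": "<new-service-principal-access-token>",
      "catalog": "immuta"
    }
  }'
```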
This example adds additional workspace connections to an existing configuration.
Replace the Immuta URL and API key with your own.
Replace the {id} request parameter with the unique identifier of the integration you want to update.
Change the config values to your own, where
workspaceUrl is your Databricks workspace URL.
httpPath is the HTTP path of your Databricks cluster or SQL warehouse.
oAuthClientConfig specifies your client ID, client secret, and authority URL. See the object description for details about child parameters.
catalog is the name of the Databricks catalog Immuta will create to store internal entitlements and other user data specific to Immuta. This catalog will only be readable for the Immuta service principal and should not be granted to other users. The catalog name may only contain letters, numbers, and underscores and cannot start with a number.
additionalWorkspaceConnections.workspaceURL: The Databricks workspace URL.
additionalWorkspaceConnections.HTTPpath: The HTTP path of the compute for the workspace.
additionalWorkspaceConnections.authenticationType: Specifies the authentication type to use to access the workspace. The additional workspace credentials will be used when processing objects in bound catalogs that are not accessible via the default workspace. Note: The credentials themselves can be omitted from the payload if they are not being updated.
additionalWorkspaceConnections.catalogs: The catalogs to use for the additional workspace connection.
The response returns the status of the Databricks Unity Catalog integration configuration connection. See the response schema reference for details about the response schema.
A successful response includes the validation tests statuses.
An unsuccessful request returns the status code and an error message. See the HTTP status codes and error messages for a list of statuses, error messages, and troubleshooting guidance.
To manually update the integration, complete the following steps:
Copy the request example, and replace the values with your own as directed to generate the script. The example provided uses JSON format, but the request also accepts YAML.
See the config object description for parameter definitions, value types, and additional configuration options.
This example updates the access token.
Replace the Immuta URL and API key with your own.
Replace the {id} request parameter with the unique identifier of the integration you want to update.
Change the config values to your own, where
workspaceUrl is your Databricks workspace URL.
httpPath is the HTTP path of your Databricks cluster or SQL warehouse.
token is the Databricks personal access token. This is the access token for the Immuta service principal.
catalog is the name of the Databricks catalog Immuta will create to store internal entitlements and other user data specific to Immuta. This catalog will only be readable for the Immuta service principal and should not be granted to other users. The catalog name may only contain letters, numbers, and underscores and cannot start with a number.
Run the script returned in the response in your Databricks environment.
This example adds additional workspace connections to an existing configuration.
Replace the Immuta URL and API key with your own.
Replace the {id} request parameter with the unique identifier of the integration you want to update.
Change the config values to your own, where
workspaceUrl is your Databricks workspace URL.
httpPath is the HTTP path of your Databricks cluster or SQL warehouse.
oAuthClientConfig specifies your client ID, client secret, and authority URL. See the object description for details about child parameters.
catalog is the name of the Databricks catalog Immuta will create to store internal entitlements and other user data specific to Immuta. This catalog will only be readable for the Immuta service principal and should not be granted to other users. The catalog name may only contain letters, numbers, and underscores and cannot start with a number.
additionalWorkspaceConnections.workspaceURL: The Databricks workspace URL.
additionalWorkspaceConnections.HTTPpath: The HTTP path of the compute for the workspace.
additionalWorkspaceConnections.authenticationType: Specifies the authentication type to use to access the workspace. The additional workspace credentials will be used when processing objects in bound catalogs that are not accessible via the default workspace. Note: The credentials themselves can be omitted from the payload if they are not being updated.
additionalWorkspaceConnections.catalogs: The catalogs to use for the additional workspace connection.
Run the script returned in the response in your Databricks environment.
If the integration tries to process an object that is in a bound catalog and none of the specified additional workspaces have access to that catalog, the operation will fail and an error will be reported.
Response
The response returns the script for you to run in your Databricks environment.
Copy the request example, and replace the values with your own as directed to update the integration settings. The example provided uses JSON format, but the request also accepts YAML. The payload you provide must match the payload sent when generating the script.
See the config object description for parameter definitions, value types, and additional configuration options.
This example updates the access token.
Replace the Immuta URL and API key with your own.
Replace the {id} request parameter with the unique identifier of the integration you want to update.
Pass the same payload you sent when updating the script, where
workspaceUrl is your Databricks workspace URL.
httpPath is the HTTP path of your Databricks cluster or SQL warehouse.
token is the Databricks personal access token. This is the access token for the Immuta service principal.
catalog is the name of the Databricks catalog Immuta will create to store internal entitlements and other user data specific to Immuta. This catalog will only be readable for the Immuta service principal and should not be granted to other users. The catalog name may only contain letters, numbers, and underscores and cannot start with a number.
This example adds additional workspace connections to an existing configuration.
Replace the Immuta URL and API key with your own.
Replace the {id} request parameter with the unique identifier of the integration you want to update.
Change the config values to your own, where
workspaceUrl is your Databricks workspace URL.
httpPath is the HTTP path of your Databricks cluster or SQL warehouse.
oAuthClientConfig specifies your client ID, client secret, and authority URL. See the object description for details about child parameters.
catalog is the name of the Databricks catalog Immuta will create to store internal entitlements and other user data specific to Immuta. This catalog will only be readable for the Immuta service principal and should not be granted to other users. The catalog name may only contain letters, numbers, and underscores and cannot start with a number.
additionalWorkspaceConnections.workspaceURL: The Databricks workspace URL.
additionalWorkspaceConnections.HTTPpath: The HTTP path of the compute for the workspace.
additionalWorkspaceConnections.authenticationType: Specifies the authentication type to use to access the workspace. The additional workspace credentials will be used when processing objects in bound catalogs that are not accessible via the default workspace. Note: The credentials themselves can be omitted from the payload if they are not being updated.
additionalWorkspaceConnections.catalogs: The catalogs to use for the additional workspace connection.
If the integration tries to process an object that is in a bound catalog and none of the specified additional workspaces have access to that catalog, the operation will fail and an error will be reported.
The response returns the status of the Databricks Unity Catalog integration configuration connection. See the response schema reference for details about the response schema.
A successful response includes the validation tests statuses.
An unsuccessful request returns the status code and an error message. See the HTTP status codes and error messages for a list of statuses, error messages, and troubleshooting guidance.
Copy the request example.
Replace the Immuta URL and API key with your own.
Replace the {id} request parameter with the unique identifier of the integration you want to delete.
The response returns the status of the Databricks Unity Catalog integration configuration that has been deleted. See the response schema reference for details about the response schema. An unsuccessful request returns the status code and an error message. See the HTTP status codes and error messages for a list of statuses, error messages, and troubleshooting guidance.
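As an illustration, the delete call could look like this; header usage and base path are assumptions, and {id} is the integration's unique identifier.

```bash
# Hypothetical example: delete an integration by ID.
curl -X DELETE -H "Authorization: <your-api-key>" https://<your-immuta-url>/integrations/{id}
```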
Private preview: The Amazon S3 integration is available to select accounts. Reach out to your Immuta representative for details.
The Amazon S3 resource allows you to create, configure, and manage your S3 integration. In this integration, Immuta provides coarse-grained access controls for data in S3 by performing permission grants using the Access Grants API so that users don't have to manage individual IAM policies themselves.
Use the /integrations endpoint to configure, get, update, and delete your Amazon S3 integration and to register S3 data sources.
S3 integration enabled in Immuta; contact your Immuta representative to enable this integration
Write policies private preview enabled for your account; contact your Immuta representative to get this feature enabled
No location is registered in your AWS Access Grants instance before configuring the integration in Immuta
APPLICATION_ADMIN Immuta permission to configure the integration
CREATE_S3_DATASOURCE Immuta permission to register S3 prefixes
The AWS account credentials or optional AWS IAM role you provide Immuta to configure the integration must have the permissions to perform the following actions to create locations and issue grants:
accessgrantslocation resource:
s3:CreateAccessGrant
s3:DeleteAccessGrantsLocation
s3:GetAccessGrantsLocation
s3:UpdateAccessGrantsLocation
accessgrantsinstance resource:
s3:CreateAccessGrantsInstance
s3:CreateAccessGrantsLocation
s3:DeleteAccessGrantsInstance
s3:GetAccessGrantsInstance
s3:GetAccessGrantsInstanceForPrefix
s3:GetAccessGrantsInstanceResourcePolicy
s3:ListAccessGrants
s3:ListAccessGrantsLocations
accessgrant resource:
s3:DeleteAccessGrant
s3:GetAccessGrant
bucket resource: s3:ListBucket
role resource:
iam:GetRole
iam:PassRole
all resources: s3:ListAccessGrantsInstances
Follow AWS documentation to create an Access Grants instance using the S3 console, AWS CLI, AWS SDKs, or the REST API. AWS supports one Access Grants instance per region per AWS account.
Follow the instructions at the top of the "Register a location" page in AWS documentation to create an AWS IAM role and edit the trust policy to give the S3 Access Grants service principal access to this role in the resource policy file. You will add this role to your integration configuration in Immuta so that Immuta can register this role with your Access Grants location. The policy should include at least the following permissions, but might need additional permissions depending on other local setup factors. An example trust policy is provided below.
sts:AssumeRole
sts:SetSourceIdentity
Follow the instructions at the top of the "Register a location" page in AWS documentation to create an IAM policy with the following permissions, and attach the policy to the IAM role you created. An example policy is provided below.
s3:GetObject
s3:GetObjectVersion
s3:GetObjectAcl
s3:GetObjectVersionAcl
s3:ListMultipartUploadParts
s3:PutObject
s3:PutObjectAcl
s3:PutObjectVersionAcl
s3:DeleteObject
s3:DeleteObjectVersion
s3:AbortMultipartUpload
s3:ListBucket
s3:ListAllMyBuckets
If you use server-side encryption with AWS Key Management Service (AWS KMS) keys to encrypt your data, the following permissions are required for the IAM role in the policy. If you do not use this feature, do not include these permissions in your IAM policy:
kms:Decrypt
kms:GenerateDataKey
Opt to create an AWS IAM role that Immuta can use to create Access Grants locations and issue grants. This role must have the S3 permissions listed in the permissions section. An example policy is provided below.
This request configures the integration using the AWS access key authentication method.
Copy the request example. The example uses JSON format, but the request also accepts YAML.
Replace the Immuta URL and API key with your own.
Change the config values to your own, where
name is the name for the integration that is unique across all Amazon S3 integrations configured in Immuta.
awsAccountId is the ID of your AWS account.
awsRegion is the account's AWS region (such as us-east-1).
awsLocationRole is the AWS IAM role ARN assigned to the base access grants location. This is the role the AWS Access Grants service assumes to vend credentials to the grantee.
awsLocationPath is the base S3 location prefix that Immuta will use for this connection when registering S3 data sources. This path must be unique across all S3 integrations configured in Immuta.
awsAccessKeyId is the AWS access key ID of the AWS account configuring the integration.
awsSecretAccessKey is the AWS secret access key of the AWS account configuring the integration.
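A sketch of what this request might look like, assuming the integration type string is "Native S3", that the header usage shown is accepted, and that authenticationType takes the accessKey value described later in this section; all values are placeholders to replace with your own.

```bash
# Hypothetical example: configure the Amazon S3 integration with AWS access key
# authentication. Type string and header usage are assumptions.
curl -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: <your-api-key>" \
  https://<your-immuta-url>/integrations \
  -d '{
    "type": "Native S3",
    "config": {
      "name": "<unique-s3-integration-name>",
      "awsAccountId": "<aws-account-id>",
      "awsRegion": "us-east-1",
      "awsLocationRole": "arn:aws:iam::<aws-account-id>:role/<access-grants-location-role>",
      "awsLocationPath": "s3://<bucket>/<base-prefix>",
      "authenticationType": "accessKey",
      "awsAccessKeyId": "<aws-access-key-id>",
      "awsSecretAccessKey": "<aws-secret-access-key>"
    }
  }'
```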
This request configures the integration using the automatic authentication method, which searches and obtains credentials using the AWS SDK's default credential provider chain. This method requires a configured IAM role for a service account and is only supported if you're using a self-managed deployment of Immuta. Work with your Immuta representative to customize your deployment and set up an IAM role for a service account that can give Immuta the credentials to set up the integration.
Copy the request example. The example uses JSON format, but the request also accepts YAML.
Replace the Immuta URL and API key with your own.
Change the config values to your own, where
name is the name for the integration that is unique across all Amazon S3 integrations configured in Immuta.
awsAccountId is the ID of your AWS account.
awsRegion is the account's AWS region (such as us-east-1).
awsLocationRole is the AWS IAM role ARN assigned to the base access grants location. This is the role the AWS Access Grants service assumes to vend credentials to the grantee.
awsLocationPath is the base S3 location that Immuta will use for this connection when registering S3 data sources. This path must be unique across all S3 integrations configured in Immuta.
See the config object description for parameter definitions, value types, and additional configuration options.
The response returns the status of the S3 integration configuration connection. See the response schema reference for details about the response schema.
A successful response includes the validation tests statuses.
An unsuccessful request returns the status code and an error message. See the HTTP status codes and error messages for a list of statuses, error messages, and troubleshooting guidance.
Copy the request example.
Replace the Immuta URL and API key with your own.
Replace the {id} request parameter with the unique identifier of the integration you want to get. Alternatively, you can get a list of all integrations and their IDs with the GET /integrations endpoint.
The response returns an S3 integration configuration. See the response schema reference for details about the response schema. An unsuccessful response returns the status code and an error message. See the HTTP status codes and error messages for a list of statuses, error messages, and troubleshooting guidance.
Copy the request example.
Replace the Immuta URL and API key with your own.
The response returns the configuration for all integrations. See the response schema reference for details about the response schema. An unsuccessful request returns the status code and an error message. See the HTTP status codes and error messages for a list of statuses, error messages, and troubleshooting guidance.
Copy the request example, and replace the values with your own as directed to update the integration settings. The example provided uses JSON format, but the request also accepts YAML.
See the config object description for parameter definitions, value types, and additional configuration options.
Replace the Immuta URL and API key with your own.
Replace the {id} request parameter with the unique identifier of the integration you want to update.
Change the config values from above to your own. The editable values listed below are the only parameters that can change from the integration's existing configuration. The required parameters listed below must match the integration's existing configuration.
editable values:
name is the name for the integration that is unique across all Amazon S3 integrations configured in Immuta.
authenticationType is the method used to authenticate with AWS when configuring the S3 integration (accepted values are auto or accessKey).
awsAccessKeyId is the AWS access key ID for the AWS account editing the integration.
awsSecretAccessKey is the AWS secret access key for the AWS account editing the integration.
required values from existing configuration:
awsRoleToAssume is the optional AWS IAM role ARN Immuta assumes when interacting with AWS.
awsAccountId is the ID of your AWS account.
awsRegion is the account's AWS region (such as us-east-1).
awsLocationRole is the AWS IAM role ARN assigned to the base access grants location. This is the role the AWS Access Grants service assumes to vend credentials to the grantee.
awsLocationPath is the base S3 location that Immuta will use for this connection when registering S3 data sources. This path must be unique across all S3 integrations configured in Immuta.
The response returns the status of the Amazon S3 integration configuration connection. See the response schema reference for details about the response schema.
A successful response includes the validation tests statuses.
An unsuccessful request returns the status code and an error message. See the HTTP status codes and error messages for a list of statuses, error messages, and troubleshooting guidance.
Copy the request example.
Replace the Immuta URL and API key with your own.
Replace the {id} request parameter with the unique identifier of the integration you want to delete.
The response returns the status of the S3 integration configuration that has been deleted. See the response schema reference for details about the response schema. An unsuccessful request returns the status code and an error message. See the HTTP status codes and error messages for a list of statuses, error messages, and troubleshooting guidance.
Copy the request example. The example uses JSON format, but the request also accepts YAML.
Replace the Immuta URL and API key with your own.
Change the integrationsID to the ID of your S3 integration. This ID can be retrieved with the GET /integrations request.
Change the dataSources values to your own, where
dataSourceName is the name of your data source.
prefix creates a data source for the prefix, bucket, or object provided in the path. If the data source prefix ends in a wildcard (*), it protects all items starting with that prefix. If the data source prefix ends without a wildcard, it protects a single object.
See the S3 data source payload description for parameter definitions and value types.
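To visualize the payload shape described above, a dataSources entry might look roughly like the following. Where the integration ID goes (path or body) and the exact endpoint are defined by the request example and payload description, so treat this fragment as illustrative only.

```json
"dataSources": [
  {
    "dataSourceName": "<your-data-source-name>",
    "prefix": "s3://<bucket>/<prefix>/*"
  }
]
```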
The response returns the ID, name, and prefix of the data source. See the response schema reference for details about the response schema. An unsuccessful response returns the status code and an error message. See the HTTP status codes and error messages for a list of statuses, error messages, and troubleshooting guidance.
The Starburst (Trino) resource allows you to create and manage your Starburst (Trino) integration. In this integration, Immuta policies are translated into Starburst rules and permissions and applied directly to tables within users’ existing catalogs.
Use the /integrations endpoint to configure, get, update, and delete your Starburst (Trino) integration.
APPLICATION_ADMIN Immuta permission
A valid Starburst Enterprise license
To configure the Starburst (Trino) integration, complete the following steps:
Copy the request example. The example provided uses JSON format, but the request also accepts YAML.
Replace the Immuta URL and API key with your own.
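A minimal sketch of the request, assuming the integration type string is "Trino" and that no additional config values are required for this step; confirm both against the request example.

```bash
# Hypothetical example: enable the Starburst (Trino) integration.
# The type string and header usage are assumptions.
curl -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: <your-api-key>" \
  https://<your-immuta-url>/integrations \
  -d '{ "type": "Trino" }'
```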
The response returns the status of the Starburst (Trino) integration configuration connection. See the response schema reference for details about the response schema.
A successful response includes the validation tests statuses.
An unsuccessful request returns the status code and an error message. See the HTTP status codes and error messages for a list of statuses, error messages, and troubleshooting guidance.
Navigate to the Immuta App Settings page and click the Integrations tab.
Click your enabled Starburst (Trino) integration and copy the configuration snippet displayed.
Copy the request example.
Replace the Immuta URL and API key with your own.
Replace the {id} request parameter with the unique identifier of the integration you want to get. Alternatively, you can get a list of all integrations and their IDs with the GET /integrations endpoint.
The response returns a Starburst (Trino) integration configuration and the Immuta API key used to configure the Starburst cluster. See the response schema reference for details about the response schema. An unsuccessful request returns the status code and an error message. See the HTTP status codes and error messages for a list of statuses, error messages, and troubleshooting guidance.
Copy the request example.
Replace the Immuta URL and API key with your own.
The response returns the configuration for all integrations. See the response schema reference for details about the response schema. An unsuccessful request returns the status code and an error message. See the HTTP status codes and error messages for a list of statuses, error messages, and troubleshooting guidance.
Cluster restart required
To update your API key in Starburst (Trino), you must shut down your cluster, generate and update the API key, and then restart your cluster. If you do not shut down your cluster, generating a new API key using the endpoint below will cause downtime for your deployment.
Copy the request example below and replace these values:
Replace the Immuta URL and API key with your own.
Replace the {id} request parameter with the unique identifier of the Starburst (Trino) integration you want to regenerate the Immuta API key for.
Once you make this request, your old Immuta API key will be deleted and will no longer be valid.
The response includes your new Immuta API key. Replace the old immuta.apikey value in the Immuta access control properties file with this new key.
The response includes your new Immuta API key.
Copy the request example.
Replace the Immuta URL and API key with your own.
Replace the {id} request parameter with the unique identifier of the integration you want to delete.
The response returns the status of the Starburst (Trino) integration configuration that has been deleted. See the response schema reference for details about the response schema. An unsuccessful request returns the status code and an error message. See the HTTP status codes and error messages for a list of statuses, error messages, and troubleshooting guidance.
In the Redshift integration, Immuta generates policy-enforced views in your configured Redshift schema for tables registered as Immuta data sources.
Use the /integrations endpoint to configure, get, update, and delete your Redshift integration.
APPLICATION_ADMIN Immuta permission
A Redshift cluster with an RA3 node is required for the multi-database integration. You must use a Redshift RA3 instance type because Immuta requires cross-database views, which are only supported by Redshift RA3 instance types. For other instance types, you may configure a single-database integration using one of the following options:
Configure the integration with an existing database that contains the external tables. In the steps below, specify an existing database in Redshift as the database in which Immuta will add the Immuta-managed schemas and views instead of creating a new database.
Create a new database as specified in the steps below, and then re-create all of your external tables in that database.
For automated installations, the credentials provided must belong to a Superuser or to an account with the ability to create databases and users and modify grants.
The account used to configure or edit the integration must have the following Redshift permissions:
CREATE DATABASE
CREATE USER
REVOKE ALL PRIVILEGES ON DATABASE
GRANT TEMP ON DATABASE
MANAGE GRANTS ON ACCOUNT
You have two options for configuring your Redshift integration:
These privileges will be used to create and configure a new Immuta-managed database within the specified Redshift instance. The credentials are not stored or saved by Immuta, and Immuta doesn’t retain access to them after initial setup is complete.
You can create a new account for Immuta to use that has these privileges, or you can grant temporary use of a pre-existing account. By default, the pre-existing account with appropriate privileges is a Superuser. If you create a new account, it can be deleted after initial setup is complete.
Copy the request example from one of the sections below, and replace the values with your own as directed to configure the integration settings. The examples provided use JSON format, but the request also accepts YAML.
This request specifies userPassword as the authentication type for the Immuta system user. The username and password provided are credentials for a system account that can manage the database.
Change the config values to your own, where
host is the URL of your Redshift account.
database is the name of a new empty database that the Immuta system user will manage and store metadata in.
initialDatabase is the name of an existing database in Redshift that Immuta initially connects to and creates the Immuta-managed database.
username and password are the credentials for the system account that can act on Redshift objects and configure the integration.
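A sketch of this request with placeholder values; the "Redshift" type string and header usage are assumptions to check against the payload reference, while the userPassword authentication type matches the description above.

```bash
# Hypothetical example: configure the Redshift integration (automatic setup,
# userPassword authentication). Type string and header usage are assumptions.
curl -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: <your-api-key>" \
  https://<your-immuta-url>/integrations \
  -d '{
    "type": "Redshift",
    "autoBootstrap": true,
    "config": {
      "host": "<cluster-name>.<id>.<region>.redshift.amazonaws.com",
      "database": "immuta",
      "initialDatabase": "<existing-database>",
      "authenticationType": "userPassword",
      "username": "<system-account-username>",
      "password": "<system-account-password>"
    }
  }'
```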
This request uses Okta as the authentication type for the Immuta system user and enables impersonation to allow Immuta administrators to grant users permission to query Redshift data as other Immuta users.
Change the config values to your own, where
host is the URL of your Redshift account.
database is the name of a new empty database that the Immuta system user will manage and store metadata in.
initialDatabase is the name of an existing database in Redshift that Immuta initially connects to and creates the Immuta-managed database.
A successful response includes the validation tests statuses.
To manually configure the integration, complete the following steps:
Copy the request example from one of the tabs below, and replace the values with your own as directed to generate the first script. The examples provided use JSON format, but the request also accepts YAML.
This request specifies userPassword as the authentication type for the Immuta system user. The username and password provided are credentials for a system account that can manage the database.
Change the config values to your own, where
host is the URL of your Redshift account.
database is the name of a new empty database that the Immuta system user will manage and store metadata in.
initialDatabase is the name of an existing database in Redshift that Immuta initially connects to and creates the Immuta-managed database.
username and password are the credentials for the system account that can act on Redshift objects and configure the integration.
Run the script returned in the response in the Redshift initialDatabase specified in the payload.
This request uses Okta as the authentication type for the Immuta system user and enables impersonation to allow Immuta administrators to grant users permission to query Redshift data as other Immuta users.
Change the config values to your own, where
host is the URL of your Redshift account.
database is the name of a new empty database that the Immuta system user will manage and store metadata in.
initialDatabase is the name of an existing database in Redshift that Immuta initially connects to and creates the Immuta-managed database.
Run the script returned in the response in the Redshift initialDatabase specified in the payload.
Response
The response returns the script for you to run in the Redshift initialDatabase.
This request specifies userPassword as the authentication type for the Immuta system user. The username and password provided are credentials for a system account that can manage the database.
host is the URL of your Redshift account.
database is the name of a new empty database that the Immuta system user will manage and store metadata in.
initialDatabase is the name of an existing database in Redshift that Immuta initially connects to and creates the Immuta-managed database.
username and password are the credentials for the system account that can act on Redshift objects and configure the integration.
Run the script returned in the response in the database created by the first script.
This request uses Okta as the authentication type for the Immuta system user and enables impersonation to allow Immuta administrators to grant users permission to query Redshift data as other Immuta users.
host is the URL of your Redshift account.
database is the name of a new empty database that the Immuta system user will manage and store metadata in.
initialDatabase is the name of an existing database in Redshift that Immuta initially connects to and creates the Immuta-managed database.
Run the script returned in the response in the database created by the first script.
Response
The response returns the script for you to run in the database created by the first script.
This request specifies userPassword as the authentication type for the Immuta system user. The username and password provided are credentials for a system account that can manage the database.
host is the URL of your Redshift account.
database is the name of a new empty database that the Immuta system user will manage and store metadata in.
initialDatabase is the name of an existing database in Redshift that Immuta initially connects to and creates the Immuta-managed database.
username and password are the credentials for the system account that can act on Redshift objects and configure the integration.
This request uses Okta as the authentication type for the Immuta system user and enables impersonation to allow Immuta administrators to grant users permission to query Redshift data as other Immuta users.
host is the URL of your Redshift account.
database is the name of a new empty database that the Immuta system user will manage and store metadata in.
initialDatabase is the name of an existing database in Redshift that Immuta initially connects to and creates the Immuta-managed database.
A successful response includes the validation tests statuses.
Copy the request example.
Copy the request example.
You have two options for updating your integration. Follow the steps that match your initial configuration of autoBootstrap:
Copy the request example, and replace the values with your own as directed to update the integration settings. The example provided uses JSON format, but the request also accepts YAML.
Replace the {id} request parameter with the unique identifier of the integration you want to update.
Change the config values to your own, where
host is the URL of your Redshift account.
database is the name of a new empty database that the Immuta system user will manage and store metadata in.
initialDatabase is the name of an existing database in Redshift that Immuta initially connects to and creates the Immuta-managed database.
A successful response includes the validation tests statuses.
To manually update the integration, complete the following steps:
Copy the request example, and replace the values with your own as directed to generate the script. The example provided uses JSON format, but the request also accepts YAML.
Replace the {id} request parameter with the unique identifier of the integration you want to update.
Change the config values to your own, where
host is the URL of your Redshift account.
database is the name of a new empty database that the Immuta system user will manage and store metadata in.
initialDatabase is the name of an existing database in Redshift that Immuta initially connects to and creates the Immuta-managed database.
Run the script returned in the response in your Redshift environment.
Response
The response returns the script for you to run in your Redshift environment.
Replace the {id} request parameter with the unique identifier of the integration you want to update.
host is the URL of your Redshift account.
database is the name of a new empty database that the Immuta system user will manage and store metadata in.
initialDatabase is the name of an existing database in Redshift that Immuta initially connects to and creates the Immuta-managed database.
A successful response includes the validation tests statuses.
Copy the request example.
Replace the {id} request parameter with the unique identifier of the integration you want to delete.
If you set autoBootstrap to true when enabling the integration, specify the authenticationType and the credentials you used to configure the integration in the payload, as illustrated in the example.
Private preview: This integration is available to select accounts. Reach out to your Immuta representative for details.
The Google BigQuery resource allows you to create, configure, and manage your Google BigQuery integration. In this integration, Immuta generates policy-enforced views in your configured Google BigQuery dataset for tables registered as Immuta data sources.
Use the /integrations endpoint to configure, get, update, and delete your Google BigQuery integration.
APPLICATION_ADMIN Immuta permission
Google BigQuery integration enabled in Immuta (work with your Immuta representative to enable this integration)
To execute the Immuta script from your command line, you must be authenticated to the gcloud CLI utility as a user with all of the following roles:
roles/iam.roleAdmin
roles/iam.serviceAccountAdmin
roles/serviceusage.serviceUsageAdmin
Copy the request example. The example uses JSON format, but the request also accepts YAML.
Change the config values to your own, where
role is the Google Cloud role used to connect to Google BigQuery.
datasetSuffix is the suffix that will be appended to the name of each dataset created to store secure views. This string must start with an underscore.
dataset is the name of the BigQuery dataset to provision inside of the project for Immuta metadata storage.
location is the dataset's location, which can be any valid GCP location (such as us-east1).
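For orientation, the config object for this request might look roughly like the following. Authentication fields (such as the service account credentials) are omitted here and every value is a placeholder, so rely on the config object description for the full set of parameters.

```json
"config": {
  "role": "<google-cloud-role-for-bigquery>",
  "datasetSuffix": "_immuta_secure",
  "dataset": "<immuta-metadata-dataset>",
  "location": "us-east1"
}
```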
A successful response includes the validation tests statuses.
Copy the request example.
Copy the request example.
Copy the request example, which updates the private key. The example uses JSON format, but the request also accepts YAML.
Replace the {id} request parameter with the unique identifier of the integration you want to update.
Change the config values to your own, where
role is the Google Cloud role used to connect to Google BigQuery.
datasetSuffix is the suffix that will be appended to the name of each dataset created to store secure views. This string must start with an underscore.
dataset is the name of the BigQuery dataset to provision inside of the project for Immuta metadata storage.
location is the dataset's location, which can be any valid GCP location (such as us-east1).
A successful response includes the validation tests statuses.
Copy the request example.
Replace the {id} request parameter with the unique identifier of the integration you want to delete.
In the Snowflake integration, Immuta manages access to Snowflake tables by administering Snowflake row access policies and column masking policies on those tables, allowing users to query tables directly in Snowflake while dynamic policies are enforced.
Use the /integrations endpoint to configure, get, update, and delete your Snowflake integration.
APPLICATION_ADMIN Immuta permission
Snowflake Enterprise account
The role used to configure, edit, or remove the integration must have the following Snowflake privileges:
CREATE DATABASE ON ACCOUNT WITH GRANT OPTION
CREATE ROLE ON ACCOUNT WITH GRANT OPTION
CREATE USER ON ACCOUNT WITH GRANT OPTION
MANAGE GRANTS ON ACCOUNT WITH GRANT OPTION
APPLY MASKING POLICY ON ACCOUNT WITH GRANT OPTION
APPLY ROW ACCESS POLICY ON ACCOUNT WITH GRANT OPTION
You have two options for configuring your Snowflake integration:
These permissions will be used to create and configure a new Immuta-managed database within the specified Snowflake instance. The credentials are not stored or saved by Immuta, and Immuta doesn’t retain access to them after initial setup is complete.
You can create a new account for Immuta to use that has these permissions, or you can grant temporary use of a pre-existing account. By default, the pre-existing account with appropriate permissions is ACCOUNTADMIN. If you create a new account, it can be deleted after initial setup is complete.
Select the section below that matches your authentication method.
Copy the request example and replace the values with your own as directed to configure the integration settings. The examples provided use JSON format, but the request also accepts YAML.
Change the config values to your own, where
host is the URL of your Snowflake account.
warehouse is the default pool of Snowflake compute resources the Immuta system user will use to run queries and perform other Snowflake operations.
database is the name of a new empty database that the Immuta system user will manage and store metadata in.
Change the config values to your own, where
host is the URL of your Snowflake account.
warehouse is the default pool of Snowflake compute resources the Immuta system user will use to run queries and perform other Snowflake operations.
database is the name of a new empty database that the Immuta system user will manage and store metadata in.
username is the system account user that can assume the role to manage the database and administer Snowflake masking and row access policies.
privateKey is your private key. If you are using curl, replace new lines in the private key with a backslash before the new line character: "\n". If you are using another means of configuration, such as a Python script, the "\n" should not be added.
connectArgs is used to set PRIV_KEY_FILE_PWD if the private key is encrypted.
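For the key pair variant, the same request shape applies, with privateKey and connectArgs in place of a password. When sending the payload with curl, newlines in the PEM key are replaced with literal \n sequences, as described above. The exact connectArgs format and any field names not listed above are assumptions; all values are placeholders.

```bash
# Illustrative sketch; note the literal \n sequences in privateKey when using curl.
curl -X POST "https://<your-immuta-host>/integrations" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <your-api-token>" \
  -d '{
    "type": "Snowflake",
    "autoBootstrap": true,
    "config": {
      "host": "<account>.snowflakecomputing.com",
      "warehouse": "<default-warehouse>",
      "database": "<new-empty-immuta-database>",
      "username": "<system-account-user>",
      "privateKey": "-----BEGIN PRIVATE KEY-----\n<key-contents>\n-----END PRIVATE KEY-----\n",
      "connectArgs": "<PRIV_KEY_FILE_PWD-setting-if-key-is-encrypted>"
    }
  }'
```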
A successful response includes the validation tests statuses.
Best practices
The account you create for Immuta should only be used for the integration and should not be used as the credentials for creating data sources in Immuta; doing so will cause issues. Instead, create a separate, dedicated READ-ONLY account for creating and registering data sources within Immuta.
To manually configure the integration, complete the following steps:
Select the tab below that matches your authentication method.
Copy the request example and replace the values with your own as directed to generate the script. The examples provided use JSON format, but the request also accepts YAML.
Change the config values to your own, where
host is the URL of your Snowflake account.
warehouse is the default pool of Snowflake compute resources the Immuta system user will use to run queries and perform other Snowflake operations.
database is the name of a new empty database that the Immuta system user will manage and store metadata in.
username and password are the credentials for the system account that can assume the role to manage the database and administer Snowflake masking and row access policies.
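The body for this script-generation request might look like the sketch below; the script-generation endpoint path is not shown in this guide, and the type label and any field names beyond those listed above are assumptions.

```json
{
  "type": "Snowflake",
  "autoBootstrap": false,
  "config": {
    "host": "<account>.snowflakecomputing.com",
    "warehouse": "<default-warehouse>",
    "database": "<new-empty-immuta-database>",
    "username": "<system-account-user>",
    "password": "<system-account-password>"
  }
}
```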
Run the script returned in the response in your Snowflake environment.
Change the config values to your own, where
host is the URL of your Snowflake account.
warehouse is the default pool of Snowflake compute resources the Immuta system user will use to run queries and perform other Snowflake operations.
database is the name of a new empty database that the Immuta system user will manage and store metadata in.
username is the system account user that can assume the role to manage the database and administer Snowflake masking and row access policies.
privateKey is your private key. If you are using curl, replace new lines in the private key with a backslash before the new line character: "\n". If you are using another means of configuration, such as a Python script, the "\n" should not be added.
connectArgs is used to set PRIV_KEY_FILE_PWD if the private key is encrypted.
Run the script returned in the response in your Snowflake environment.
In this example, Snowflake External OAuth is used to authenticate the system account user, ensuring secure communication between Immuta and Snowflake. To use this authentication method, autoBootstrap must be false.
Change the config values to your own, where
host is the URL of your Snowflake account.
warehouse is the default pool of Snowflake compute resources the Immuta system user will use to run queries and perform other Snowflake operations.
database is the name of a new empty database that the Immuta system user will manage and store metadata in.
workspaces.enabled specifies whether Immuta project workspaces are enabled for Snowflake.
workspaces.warehouses is a list of warehouses that workspace users have usage privileges on.
username is the system account user that can act on Snowflake objects and configure the integration.
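An External OAuth payload might be shaped like the following sketch. The oAuthClientConfig child fields mirror those described later in this guide (provider, client ID, client secret, authority URL, and encoded keys), but their exact names here are assumptions, as are the type label and endpoint path; all values are placeholders.

```json
{
  "type": "Snowflake",
  "autoBootstrap": false,
  "config": {
    "host": "<account>.snowflakecomputing.com",
    "warehouse": "<default-warehouse>",
    "database": "<new-empty-immuta-database>",
    "username": "<system-account-user>",
    "workspaces": {
      "enabled": true,
      "warehouses": ["<workspace-warehouse>"]
    },
    "oAuthClientConfig": {
      "provider": "<oauth-provider>",
      "clientId": "<client-id>",
      "clientSecret": "<client-secret>",
      "authorityUrl": "<authority-url>",
      "publicKey": "<encoded-public-key>",
      "privateKey": "<encoded-private-key>"
    }
  }
}
```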
Run the script returned in the response in your Snowflake environment.
Response
The response returns the script for you to run in your environment.
Select the tab below that matches your authentication method.
host is the URL of your Snowflake account.
warehouse is the default pool of Snowflake compute resources the Immuta system user will use to run queries and perform other Snowflake operations.
database is the name of a new empty database that the Immuta system user will manage and store metadata in.
username and password are the credentials for the system account that can assume the role to manage the database and administer Snowflake masking and row access policies.
host is the URL of your Snowflake account.
warehouse is the default pool of Snowflake compute resources the Immuta system user will use to run queries and perform other Snowflake operations.
database is the name of a new empty database that the Immuta system user will manage and store metadata in.
username is the system account user that can assume the role to manage the database and administer Snowflake masking and row access policies.
privateKey is your private key. If you are using curl, replace new lines in the private key with a backslash before the new line character: "\n". If you are using another means of configuration, such as a Python script, the "\n" should not be added.
connectArgs is used to set PRIV_KEY_FILE_PWD if the private key is encrypted.
In this example, Snowflake External OAuth is used to authenticate the system account user, ensuring secure communication between Immuta and Snowflake. To use this authentication method, autoBootstrap must be false.
host is the URL of your Snowflake account.
warehouse is the default pool of Snowflake compute resources the Immuta system user will use to run queries and perform other Snowflake operations.
database is the name of a new empty database that the Immuta system user will manage and store metadata in.
A successful response includes the validation tests statuses.
Copy the request example.
Copy the request example.
You have two options for updating your integration. Follow the steps that match your initial configuration of autoBootstrap:
Select the section below that matches your authentication method.
Copy the request example and replace the values with your own as directed to update the integration settings. The examples provided use JSON format, but the request also accepts YAML.
This request updates the configuration to enable query audit in Snowflake.
Replace the {id} request parameter with the unique identifier of the integration you want to update.
Change the config values to your own, where
host is the URL of your Snowflake account.
warehouse is the default pool of Snowflake compute resources the Immuta system user will use to run queries and perform other Snowflake operations.
database is the name of a new empty database that the Immuta system user will manage and store metadata in.
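For example, enabling query audit on an existing integration might be done with an update of this shape. The audit block's child field name, the HTTP method, the authorization header, and the authentication fields are assumptions; all values are placeholders.

```bash
# Illustrative sketch; the audit child field name and other details are assumptions.
curl -X PUT "https://<your-immuta-host>/integrations/<id>" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <your-api-token>" \
  -d '{
    "type": "Snowflake",
    "autoBootstrap": true,
    "config": {
      "host": "<account>.snowflakecomputing.com",
      "warehouse": "<default-warehouse>",
      "database": "<immuta-database>",
      "username": "<one-time-setup-username>",
      "password": "<one-time-setup-password>",
      "audit": { "enabled": true }
    }
  }'
```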
This request updates the configuration to enable query audit in Snowflake.
Replace the {id} request parameter with the unique identifier of the integration you want to update.
Change the config values to your own, where
host is the URL of your Snowflake account.
warehouse is the default pool of Snowflake compute resources the Immuta system user will use to run queries and perform other Snowflake operations.
database is the name of a new empty database that the Immuta system user will manage and store metadata in.
username is the system account user that can assume the role to manage the database and administer Snowflake masking and row access policies.
privateKey is your private key. If you are using curl, replace new lines in the private key with a backslash before the new line character: "\n". If you are using another means of configuration, such as a Python script, the "\n" should not be added.
connectArgs is used to set PRIV_KEY_FILE_PWD if the private key is encrypted.
A successful response includes the validation tests statuses.
To manually update the integration, complete the following steps:
Select the tab below that matches your authentication method.
Copy the request example and replace the values with your own as directed to generate the script. The examples provided use JSON format, but the request also accepts YAML.
This request updates the configuration to enable query audit in Snowflake.
Replace the {id} request parameter with the unique identifier of the integration you want to update.
Change the config values to your own, where
host is the URL of your Snowflake account.
warehouse is the default pool of Snowflake compute resources the Immuta system user will use to run queries and perform other Snowflake operations.
database is the name of a new empty database that the Immuta system user will manage and store metadata in.
username and password are the credentials for the system account that can assume the role to manage the database and administer Snowflake masking and row access policies.
Run the script returned in the response in your Snowflake environment.
This request updates the configuration to enable query audit in Snowflake.
Replace the {id} request parameter with the unique identifier of the integration you want to update.
Change the config values to your own, where
host is the URL of your Snowflake account.
warehouse is the default pool of Snowflake compute resources the Immuta system user will use to run queries and perform other Snowflake operations.
database is the name of a new empty database that the Immuta system user will manage and store metadata in.
username is the system account user that can assume the role to manage the database and administer Snowflake masking and row access policies.
privateKey is your private key. If you are using curl, replace new lines in the private key with a backslash before the new line character: "\n". If you are using another means of configuration, such as a Python script, the "\n" should not be added.
connectArgs is used to set PRIV_KEY_FILE_PWD if the private key is encrypted.
Run the script returned in the response in your Snowflake environment.
This request updates the configuration to disable Snowflake workspaces for the integration.
Replace the {id} request parameter with the unique identifier of the integration you want to update.
Change the config values to your own, where
host is the URL of your Snowflake account.
warehouse is the default pool of Snowflake compute resources the Immuta system user will use to run queries and perform other Snowflake operations.
database is the name of a new empty database that the Immuta system user will manage and store metadata in.
username is the system account user that can act on Snowflake objects and configure the integration.
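The body for that script-generation request might resemble the following sketch; the workspaces child field name, the type label, and the endpoint path are assumptions, and all values are placeholders.

```json
{
  "type": "Snowflake",
  "autoBootstrap": false,
  "config": {
    "host": "<account>.snowflakecomputing.com",
    "warehouse": "<default-warehouse>",
    "database": "<immuta-database>",
    "username": "<system-account-user>",
    "workspaces": { "enabled": false }
  }
}
```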
Run the script returned in the response in your Snowflake environment.
Response
The response returns the script for you to run in your environment.
Select the section below that matches your authentication method.
This request updates the configuration to enable query audit in Snowflake.
Replace the {id} request parameter with the unique identifier of the integration you want to update.
host is the URL of your Snowflake account.
warehouse is the default pool of Snowflake compute resources the Immuta system user will use to run queries and perform other Snowflake operations.
database is the name of a new empty database that the Immuta system user will manage and store metadata in.
username and password are the credentials for the system account that can assume the role to manage the database and administer Snowflake masking and row access policies.
This request updates the configuration to enable query audit in Snowflake.
Replace the {id} request parameter with the unique identifier of the integration you want to update.
host is the URL of your Snowflake account.
warehouse is the default pool of Snowflake compute resources the Immuta system user will use to run queries and perform other Snowflake operations.
database is the name of a new empty database that the Immuta system user will manage and store metadata in.
username is the system account user that can assume the role to manage the database and administer Snowflake masking and row access policies.
privateKey is your private key. If you are using curl, replace new lines in the private key with a backslash before the new line character: "\n". If you are using another means of configuration, such as a Python script, the "\n" should not be added.
connectArgs is used to set PRIV_KEY_FILE_PWD if the private key is encrypted.
This request updates the configuration to disable Snowflake workspaces and enable Snowflake query audit for the integration.
Replace the {id} request parameter with the unique identifier of the integration you want to update.
host is the URL of your Snowflake account.
warehouse is the default pool of Snowflake compute resources the Immuta system user will use to run queries and perform other Snowflake operations.
database is the name of a new empty database that the Immuta system user will manage and store metadata in.
A successful response includes the validation tests statuses.
Copy the request example.
Replace the {id} request parameter with the unique identifier of the integration you want to delete.
If you set autoBootstrap to false when enabling the integration:
Make the request above without including a payload to remove the integration from Immuta.
Run the generated script in Snowflake.
The how-to guides in this section illustrate how to integrate your remote data platform with Immuta so you can manage and enforce access controls on your data.
Private preview: The Amazon S3 integration is available to select accounts. Reach out to your Immuta representative for details.
Immuta's Amazon S3 integration allows users to apply subscription policies to data in S3 to restrict what prefixes, buckets, or objects users can access. To enforce access controls on this data, Immuta creates S3 grants that are administered by S3 Access Grants, an AWS feature that defines access permissions to data in S3.
Follow this guide to configure and manage your Amazon S3 integration. Example requests and responses are provided throughout the guide so that you can quickly copy and update them with your own settings.
In this integration, Immuta generates policy-enforced views in a schema in your configured Azure Synapse Analytics Dedicated SQL pool for tables registered as Immuta data sources.
Follow this guide to configure and manage your Azure Synapse Analytics integration. Example requests and responses are provided throughout the guide so that you can quickly copy and update them with your own settings.
Immuta’s integration with Unity Catalog allows you to manage multiple Databricks workspaces through Unity Catalog while protecting your data with Immuta policies. Instead of manually creating UDFs or granting access to each table in Databricks, you can author your policies in Immuta and have Immuta manage and enforce Unity Catalog access-control policies on your data in Databricks clusters or SQL warehouses.
Follow this guide to configure and manage your Databricks Unity Catalog integration. Example requests and responses are provided throughout the guide so that you can quickly copy and update them with your own settings.
Private preview: This integration is available to select accounts. Reach out to your Immuta representative for details.
In this integration, Immuta generates policy-enforced views in your configured Google BigQuery dataset for tables registered as Immuta data sources.
Follow this guide to configure and manage your Google BigQuery integration. Example requests and responses are provided throughout the guide so that you can quickly copy and update them with your own settings.
Immuta generates policy-enforced views in your configured Redshift schema for tables registered as Immuta data sources.
Follow this guide to configure and manage your Redshift integration. Example requests and responses are provided throughout the guide so that you can quickly copy and update them with your own settings.
Immuta manages access to Snowflake tables by administering Snowflake row access policies and column masking policies on those tables, allowing users to query tables directly in Snowflake while dynamic policies are enforced.
Follow this guide to configure and manage your Snowflake integration. Example requests and responses are provided throughout the guide so that you can quickly copy and update them with your own settings.
Immuta policies are translated into Starburst rules and permissions and applied directly to tables within users’ existing catalogs.
Follow this guide to configure and manage your Starburst (Trino) integration. Example requests and responses are provided throughout the guide so that you can quickly copy and update them with your own settings.
Automatic setup: Grant Immuta one-time use of credentials to automatically configure your Redshift environment and the integration. When performing an automated installation, Immuta requires temporary, one-time use of credentials with the Redshift permissions listed in the .
Manual setup: Run the Immuta script in your Redshift environment yourself to configure your Redshift environment and the integration. The specified role used to run the bootstrap needs to have the Redshift permissions listed in the .
See the for parameter definitions, value types, and additional configuration options.
Replace the Immuta URL and with your own.
Replace the Immuta URL and with your own.
impersonation specifies whether user impersonation is enabled. See the for child parameters.
okta specifies your username, password, appId, idpHost, and role. See the for details about child parameters.
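Based on the child fields named above, the okta portion of a Redshift config might be shaped like the following fragment. The field names inside the object are inferred from that list and should be treated as assumptions; all values are placeholders.

```json
{
  "okta": {
    "username": "<okta-username>",
    "password": "<okta-password>",
    "appId": "<okta-app-id>",
    "idpHost": "<okta-idp-host>",
    "role": "<role>"
  }
}
```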
The response returns the status of the Redshift integration configuration connection. See the for details about the response schema.
An unsuccessful request returns the status code and an error message. See the for a list of statuses, error messages, and troubleshooting guidance.
See the for parameter definitions, value types, and additional configuration options.
Replace the Immuta URL and with your own.
impersonation specifies whether user impersonation is enabled. See the for child parameters.
Replace the Immuta URL and with your own.
impersonation specifies whether user impersonation is enabled. See the for child parameters.
okta specifies your username, password, appId, idpHost, and role. See the for details about child parameters.
Copy the request example from one of the tabs below, and replace the values with your own as directed to configure the integration settings. The examples provided use JSON format, but the request also accepts YAML. The payload you provide must match the payload sent when .
See the for parameter definitions, value types, and additional configuration options.
Replace the Immuta URL and with your own.
Pass the same payload you sent when , where
impersonation specifies whether user impersonation is enabled. See the for child parameters.
Replace the Immuta URL and with your own.
Pass the same payload you sent when , where
impersonation specifies whether user impersonation is enabled. See the for child parameters.
okta specifies your username, password, appId, idpHost, and role. See the for details about child parameters.
Copy the request example from one of the tabs below, and replace the values with your own as directed to configure the integration settings. The examples provided use JSON format, but the request also accepts YAML. The payload you provide must match the payload sent when .
See the for parameter definitions, value types, and additional configuration options.
Replace the Immuta URL and with your own.
Pass the same payload you sent when , where
impersonation specifies whether user impersonation is enabled. See the for child parameters.
Replace the Immuta URL and with your own.
Pass the same payload you sent when , where
impersonation specifies whether user impersonation is enabled. See the for child parameters.
okta specifies your username, password, appId, idpHost, and role. See the for details about child parameters.
The response returns the status of the Redshift integration configuration connection. See the for details about the response schema.
An unsuccessful request returns the status code and an error message. See the for a list of statuses, error messages, and troubleshooting guidance.
Replace the Immuta URL and with your own.
Replace the {id} request parameter with the unique identifier of the integration you want to get. Alternatively, you can get a list of all integrations and their IDs with the .
The response returns a Redshift integration configuration. See the for details about the response schema. An unsuccessful request returns the status code and an error message. See the for a list of statuses, error messages, and troubleshooting guidance.
Replace the Immuta URL and with your own.
The response returns the configuration for all integrations. See the for details about the response schema. An unsuccessful request returns the status code and an error message. See the for a list of statuses, error messages, and troubleshooting guidance.
(autoBootstrap is true)
(autoBootstrap is false)
See the for parameter definitions, value types, and additional configuration options.
Replace the Immuta URL and with your own.
okta specifies your username, password, appId, idpHost, and role. See the for details about child parameters.
The response returns the status of the Redshift integration configuration connection. See the for details about the response schema.
An unsuccessful request returns the status code and an error message. See the for a list of statuses, error messages, and troubleshooting guidance.
See the for parameter definitions, value types, and additional configuration options.
Replace the Immuta URL and with your own.
impersonation specifies whether user impersonation is enabled. See the for child parameters.
okta specifies your username, password, appId, idpHost, and role. See the for details about child parameters.
Copy the request example, and replace the values with your own as directed to update the integration settings. The example provided uses JSON format, but the request also accepts YAML. The payload you provide must match the payload sent when .
See the for parameter definitions, value types, and additional configuration options.
Replace the Immuta URL and with your own.
Pass the same payload you sent when , where
impersonation specifies whether user impersonation is enabled. See the for child parameters.
okta specifies your username, password, appId, idpHost, and role. See the for details about child parameters.
The response returns the status of the Redshift integration configuration connection. See the for details about the response schema.
An unsuccessful request returns the status code and an error message. See the for a list of statuses, error messages, and troubleshooting guidance.
Replace the Immuta URL and with your own.
If you set autoBootstrap to false when enabling the integration, use the script endpoint (for integrations that were not successfully created) or the endpoint (for integrations that were successfully created) to remove Immuta-managed resources from your environment. Then, make the request above without including a payload to remove the integration from Immuta.
The response returns the status of the Redshift integration configuration that has been deleted. See the for details about the response schema. An unsuccessful request returns the status code and an error message. See the for a list of statuses, error messages, and troubleshooting guidance.
by either using the Google Cloud console or the provided Immuta script.
Replace the Immuta URL and with your own.
credential is the Google BigQuery service account JSON keyfile credential content. See the for guidance on generating and downloading this keyfile.
See the for parameter definitions, value types, and additional configuration options.
The response returns the status of the Google BigQuery integration configuration connection. See the for details about the response schema.
An unsuccessful request returns the status code and an error message. See the for a list of statuses, error messages, and troubleshooting guidance.
Replace the Immuta URL and with your own.
Replace the {id} request parameter with the unique identifier of the integration you want to get. Alternatively, you can get a list of all integrations and their IDs with the .
The response returns a Google BigQuery integration configuration. See the for details about the response schema. An unsuccessful response returns the status code and an error message. See the for a list of statuses, error messages, and troubleshooting guidance.
Replace the Immuta URL and with your own.
The response returns the configuration for all integrations. See the for details about the response schema. An unsuccessful request returns the status code and an error message. See the for a list of statuses, error messages, and troubleshooting guidance.
Replace the Immuta URL and with your own.
credential is the Google BigQuery service account JSON keyfile credential content. See the for guidance on generating and downloading this keyfile.
See the for parameter definitions, value types, and additional configuration options.
The response returns the status of the Google BigQuery integration configuration connection. See the for details about the response schema.
An unsuccessful request returns the status code and an error message. See the for a list of statuses, error messages, and troubleshooting guidance.
Replace the Immuta URL and with your own.
The response returns the status of the Google BigQuery integration configuration that has been deleted. See the for details about the response schema. An unsuccessful request returns the status code and an error message. See the for a list of statuses, error messages, and troubleshooting guidance.
Automatic setup: Grant Immuta one-time use of credentials to automatically configure your Snowflake environment and the integration. When performing an automated installation, Immuta requires temporary, one-time use of credentials with the Snowflake privileges listed in the .
Manual setup: Run the Immuta script in your Snowflake environment yourself to configure your Snowflake environment and the integration. The specified role used to run the bootstrap needs to have the Snowflake privileges listed in the .
See the for parameter definitions, value types, and additional configuration options.
Replace the Immuta URL and with your own.
username and password are credentials of a . These credentials are not stored; they are used by Immuta to configure the integration.
role is a Snowflake role that has been granted the .
Replace the Immuta URL and with your own.
The response returns the status of the Snowflake integration configuration connection. See the for details about the response schema.
An unsuccessful request returns the status code and an error message. See the for a list of statuses, error messages, and troubleshooting guidance.
See the for parameter definitions, value types, and additional configuration options.
Replace the Immuta URL and with your own.
audit specifies whether query audit is enabled for Snowflake. See the for child parameters.
workspaces represents an Immuta project workspace configured for Snowflake. See the for child parameters.
impersonation specifies whether user impersonation is enabled. See the for child parameters.
Replace the Immuta URL and with your own.
audit specifies whether query audit is enabled for Snowflake. See the for child parameters.
workspaces represents an Immuta project workspace configured for Snowflake. See the for child parameters.
impersonation specifies whether user impersonation is enabled. See the for child parameters.
Replace the Immuta URL and with your own.
audit specifies whether query audit is enabled for Snowflake. See the for child parameters.
impersonation specifies whether user impersonation is enabled. See the for child parameters.
oAuthClientConfig specifies your provider, client ID, client secret, authority URL, and your encoded public and private keys. See the for details about child parameters.
Copy the request example and replace the values with your own as directed to configure the integration settings. The examples provided use JSON format, but the request also accepts YAML. The parameters and values you provide in this payload must match those you provided when .
See the for parameter definitions, value types, and additional configuration options.
Replace the Immuta URL and with your own.
Pass the same payload you sent when , where
audit specifies whether query audit is enabled for Snowflake. See the for child parameters.
workspaces represents an Immuta project workspace configured for Snowflake. See the for child parameters.
impersonation specifies whether user impersonation is enabled. See the for child parameters.
Replace the Immuta URL and with your own.
Pass the same payload you sent when , where
audit specifies whether query audit is enabled for Snowflake. See the for child parameters.
workspaces represents an Immuta project workspace configured for Snowflake. See the for child parameters.
impersonation specifies whether user impersonation is enabled. See the for child parameters.
Replace the Immuta URL and with your own.
Pass the same payload you sent when , where
impersonation specifies whether user impersonation is enabled. See the for child parameters.
workspaces specifies whether Immuta project workspaces are enabled for Snowflake. See the for details about child parameters.
oAuthClientConfig specifies your provider, client ID, client secret, authority URL, and your encoded public and private keys. See the for details about child parameters.
The response returns the status of the Snowflake integration configuration connection. See the for details about the response schema.
An unsuccessful request returns the status code and an error message. See the for a list of statuses, error messages, and troubleshooting guidance.
Replace the Immuta URL and with your own.
Replace the {id} request parameter with the unique identifier of the integration you want to get. Alternatively, you can get a list of all integrations and their IDs with the .
The response returns the Snowflake integration configuration. See the for details about the response schema. An unsuccessful request returns the status code and an error message. See the for a list of statuses, error messages, and troubleshooting guidance.
Replace the Immuta URL and with your own.
The response returns the configuration for all integrations. See the for details about the response schema. An unsuccessful request returns the status code and an error message. See the for a list of statuses, error messages, and troubleshooting guidance.
(autoBootstrap is true)
(autoBootstrap is false)
See the for parameter definitions, value types, and additional configuration options.
Replace the Immuta URL and with your own.
audit specifies whether query audit is enabled for Snowflake. See the for child parameters.
username and password are credentials of a . These credentials are not stored; they are used by Immuta to enable or disable configuration settings.
role is a Snowflake role that has been granted the .
Replace the Immuta URL and with your own.
audit specifies whether query audit is enabled for Snowflake. See the for child parameters.
The response returns the status of the Snowflake integration configuration. See the for details about the response schema.
An unsuccessful request returns the status code and an error message. See the for a list of statuses, error messages, and troubleshooting guidance.
See the for parameter definitions, value types, and additional configuration options.
Replace the Immuta URL and with your own.
audit specifies whether query audit is enabled for Snowflake. See the for child parameters.
workspaces represents an Immuta project workspace configured for Snowflake. See the for child parameters.
impersonation specifies whether user impersonation is enabled. See the for child parameters.
Replace the Immuta URL and with your own.
audit specifies whether query audit is enabled for Snowflake. See the for child parameters.
workspaces represents an Immuta project workspace configured for Snowflake. See the for child parameters.
impersonation specifies whether user impersonation is enabled. See the for child parameters.
Replace the Immuta URL and with your own.
audit specifies whether query audit is enabled for Snowflake. See the for child parameters.
workspaces represents an Immuta project workspace configured for Snowflake. See the for child parameters.
impersonation specifies whether user impersonation is enabled. See the for child parameters.
oAuthClientConfig specifies your provider, client ID, client secret, authority URL, and your encoded public and private keys. See the for details about child parameters.
Copy the request example and replace the values with your own as directed to update the integration settings. The examples provided use JSON format, but the request also accepts YAML. The payload you provide must match the one you provided when .
See the for parameter definitions, value types, and additional configuration options.
Replace the Immuta URL and with your own.
Pass the same payload you sent when , where
audit specifies whether query audit is enabled for Snowflake. See the for child parameters.
workspaces represents an Immuta project workspace configured for Snowflake. See the for child parameters.
impersonation specifies whether user impersonation is enabled. See the for child parameters.
Replace the Immuta URL and with your own.
Pass the same payload you sent when , where
audit specifies whether query audit is enabled for Snowflake. See the for child parameters.
workspaces represents an Immuta project workspace configured for Snowflake. See the for child parameters.
impersonation specifies whether user impersonation is enabled. See the for child parameters.
Replace the Immuta URL and with your own.
Pass the same payload you sent when , where
impersonation specifies whether user impersonation is enabled. See the for child parameters.
audit specifies whether query audit is enabled for Snowflake. See the for child parameters.
workspaces specifies whether Immuta project workspaces are enabled for Snowflake. See the for details about child parameters.
oAuthClientConfig specifies your provider, client ID, client secret, authority URL, and your encoded public and private keys. See the for details about child parameters.
The response returns the status of the Snowflake integration configuration. See the for details about the response schema.
An unsuccessful request returns the status code and an error message. See the for a list of statuses, error messages, and troubleshooting guidance.
Replace the Immuta URL and with your own.
If you set autoBootstrap to true when enabling the integration, specify the and the credentials you used to configure the integration in the payload, as illustrated in the example. See the for details.
Use the script endpoint (for integrations that were not successfully created) or the endpoint (for integrations that were successfully created) to generate a script that will remove Immuta-managed resources and policies from your environment.
The response returns the status of the Snowflake integration configuration that has been deleted. See the for details about the response schema. An unsuccessful request returns the status code and an error message. See the for a list of statuses, error messages, and troubleshooting guidance.
The Azure Synapse Analytics resource allows you to create, configure, and manage your Azure Synapse Analytics integration. In this integration, Immuta generates policy-enforced views in a schema in your configured Azure Synapse Analytics Dedicated SQL pool for tables registered as Immuta data sources.
Use the /integrations
endpoint to
APPLICATION_ADMIN
Immuta permission
A running Dedicated SQL pool
The account used to configure or edit the integration must have the Azure Synapse Analytics permission to manage GRANTS.
You have two options for configuring your Azure Synapse Analytics integration:
Automatic setup: Grant Immuta one-time use of credentials to automatically configure your Azure Synapse Analytics environment and the integration. When performing an automated installation, Immuta requires temporary, one-time use of credentials with the permission to manage GRANTS.
Manual setup: Run the Immuta script in your Azure Synapse Analytics environment yourself to configure your environment and the integration. The specified role used to run the bootstrap needs to have the permission to manage GRANTS.
Copy the request example from one of the tabs below, and replace the values with your own as directed to configure the integration settings. The examples provided use JSON format, but the request also accepts YAML.
Change the config values to your own, where
host is the URL of your Azure Synapse Analytics account.
schema is the name of the Immuta-managed schema where all your secure views will be created and stored.
database is the name of an existing database where the Immuta system user will store all Immuta-generated schemas and views.
username and password are the username and password of the system account that can act on Azure Synapse Analytics objects and configure the integration.
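Putting those values together, the configuration request might look like the following sketch. The HTTP method, authorization header, integration type label, and any field names not listed above are assumptions; all values are placeholders.

```bash
# Illustrative sketch only; method, headers, and the type label are assumptions.
curl -X POST "https://<your-immuta-host>/integrations" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <your-api-token>" \
  -d '{
    "type": "Azure Synapse Analytics",
    "autoBootstrap": true,
    "config": {
      "host": "<workspace>.sql.azuresynapse.net",
      "schema": "<immuta-managed-schema>",
      "database": "<existing-dedicated-sql-pool-database>",
      "username": "<system-account-username>",
      "password": "<system-account-password>"
    }
  }'
```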
This request enables impersonation to allow Immuta administrators to grant users permission to query Azure Synapse Analytics data as other Immuta users.
Change the config values to your own, where
host is the URL of your Azure Synapse Analytics account.
schema is the name of the Immuta-managed schema where all your secure views will be created and stored.
database is the name of an existing database where the Immuta system user will store all Immuta-generated schemas and views.
username and password are the username and password of the system account that can act on Azure Synapse Analytics objects and configure the integration.
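The impersonation variant might simply add an impersonation block to the same config; the child field name shown below is an assumption, and all values are placeholders.

```json
{
  "type": "Azure Synapse Analytics",
  "autoBootstrap": true,
  "config": {
    "host": "<workspace>.sql.azuresynapse.net",
    "schema": "<immuta-managed-schema>",
    "database": "<existing-dedicated-sql-pool-database>",
    "username": "<system-account-username>",
    "password": "<system-account-password>",
    "impersonation": { "enabled": true }
  }
}
```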
A successful response includes the validation tests statuses.
To manually configure the integration, complete the following steps:
Copy the request example from one of the tabs below, and replace the values with your own as directed to generate the script. The examples provided use JSON format, but the request also accepts YAML.
Change the config values to your own, where
host is the URL of your Azure Synapse Analytics account.
schema is the name of the Immuta-managed schema where all your secure views will be created and stored.
database is the name of an existing database where the Immuta system user will store all Immuta-generated schemas and views.
username and password are the username and password of the system account that can act on Azure Synapse Analytics objects and configure the integration.
Run the script returned in the response in your Azure Synapse Analytics environment.
This request enables impersonation to allow Immuta administrators to grant users permission to query Azure Synapse Analytics data as other Immuta users.
Change the config values to your own, where
host is the URL of your Azure Synapse Analytics account.
schema is the name of the Immuta-managed schema where all your secure views will be created and stored.
database is the name of an existing database where the Immuta system user will store all Immuta-generated schemas and views.
username and password are the username and password of the system account that can act on Azure Synapse Analytics objects and configure the integration.
Run the script returned in the response in your Azure Synapse Analytics environment.
Response
The response returns the script for you to run in your environment.
host is the URL of your Azure Synapse Analytics account.
schema is the name of the Immuta-managed schema where all your secure views will be created and stored.
database is the name of an existing database where the Immuta system user will store all Immuta-generated schemas and views.
username and password are the username and password of the system account that can act on Azure Synapse Analytics objects and configure the integration.
Run the script returned in the response in your Azure Synapse Analytics environment.
This request enables impersonation to allow Immuta administrators to grant users permission to query Azure Synapse Analytics data as other Immuta users.
host is the URL of your Azure Synapse Analytics account.
schema is the name of the Immuta-managed schema where all your secure views will be created and stored.
database is the name of an existing database where the Immuta system user will store all Immuta-generated schemas and views.
username and password are the username and password of the system account that can act on Azure Synapse Analytics objects and configure the integration.
Run the script returned in the response in your Azure Synapse Analytics environment.
Response
The response returns the script for you to run in your environment.
host is the URL of your Azure Synapse Analytics account.
schema is the name of the Immuta-managed schema where all your secure views will be created and stored.
database is the name of an existing database where the Immuta system user will store all Immuta-generated schemas and views.
username and password are the username and password of the system account that can act on Azure Synapse Analytics objects and configure the integration.
This request enables impersonation to allow Immuta administrators to grant users permission to query Azure Synapse Analytics data as other Immuta users.
host is the URL of your Azure Synapse Analytics account.
schema is the name of the Immuta-managed schema where all your secure views will be created and stored.
database is the name of an existing database where the Immuta system user will store all Immuta-generated schemas and views.
username and password are the username and password of the system account that can act on Azure Synapse Analytics objects and configure the integration.
A successful response includes the validation tests statuses.
Copy the request example.
Copy the request example.
You have two options for updating your integration. Follow the steps that match your initial configuration of autoBootstrap:
Copy the request example, and replace the values with your own as directed to update the integration settings. The example provided uses JSON format, but the request also accepts YAML.
Replace the {id} request parameter with the unique identifier of the integration you want to update.
Change the config values to your own, where
host is the URL of your Azure Synapse Analytics account.
schema is the name of the Immuta-managed schema where all your secure views will be created and stored.
database is the name of an existing database where the Immuta system user will store all Immuta-generated schemas and views.
username and password are the username and password of the system account that can act on Azure Synapse Analytics objects and configure the integration.
A successful response includes the validation tests statuses.
To manually update the integration, complete the following steps:
Copy the request example, and replace the values with your own as directed to generate the script. The example provided uses JSON format, but the request also accepts YAML.
Replace the {id} request parameter with the unique identifier of the integration you want to update.
Change the config values to your own, where
host is the URL of your Azure Synapse Analytics account.
schema is the name of the Immuta-managed schema where all your secure views will be created and stored.
database is the name of an existing database where the Immuta system user will store all Immuta-generated schemas and views.
username and password are the username and password of the system account that can act on Azure Synapse Analytics objects and configure the integration.
Run the script returned in the response in your Azure Synapse Analytics environment.
Response
The response returns the script for you to run in your environment.
Replace the {id} request parameter with the unique identifier of the integration you want to update.
host is the URL of your Azure Synapse Analytics account.
schema is the name of the Immuta-managed schema where all your secure views will be created and stored.
database is the name of an existing database where the Immuta system user will store all Immuta-generated schemas and views.
username and password are the username and password of the system account that can act on Azure Synapse Analytics objects and configure the integration.
A successful response includes the validation tests statuses.
Copy the request example.
Replace the {id} request parameter with the unique identifier of the integration you want to delete.
If you set autoBootstrap to true when enabling the integration, include the credentials you used to configure the integration in the payload, as illustrated in the example.
If you set autoBootstrap to false when enabling the integration, make the request above without including a payload to remove the integration from Immuta.
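For the autoBootstrap-true case, a sketch of the delete request with the original credentials included might look like the following. The payload shape, HTTP method, and authorization header are assumptions; all values are placeholders.

```bash
# Illustrative sketch; the payload shape for supplying credentials is an assumption.
curl -X DELETE "https://<your-immuta-host>/integrations/<id>" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <your-api-token>" \
  -d '{
    "config": {
      "username": "<username-used-at-setup>",
      "password": "<password-used-at-setup>"
    }
  }'
```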
See the for parameter definitions, value types, and additional configuration options.
Replace the Immuta URL and with your own.
Replace the Immuta URL and with your own.
impersonation specifies whether user impersonation is enabled. See the for child parameters.
metadataDelimeters are a set of delimiters that Immuta uses to store profile data in Azure Synapse Analytics. See the for child parameters.
The response returns the status of the Azure Synapse Analytics integration configuration connection. See the for details about the response schema.
An unsuccessful request returns the status code and an error message. See the for a list of statuses, error messages, and troubleshooting guidance.
See the for parameter definitions, value types, and additional configuration options.
Replace the Immuta URL and with your own.
impersonation specifies whether user impersonation is enabled. See the for child parameters.
metadataDelimeters are a set of delimiters that Immuta uses to store profile data in Azure Synapse Analytics. See the for child parameters.
Replace the Immuta URL and with your own.
impersonation specifies whether user impersonation is enabled. See the for child parameters.
metadataDelimeters are a set of delimiters that Immuta uses to store profile data in Azure Synapse Analytics. See the for child parameters.
Copy the request example from one of the tabs below, and replace the values with your own as directed to configure the integration settings. The examples provided use JSON format, but the request also accepts YAML. The payload you provide must match the payload sent when .
See the for parameter definitions, value types, and additional configuration options.
Replace the Immuta URL and with your own.
Pass the same payload you sent when , where
impersonation specifies whether user impersonation is enabled. See the for child parameters.
metadataDelimeters are a set of delimiters that Immuta uses to store profile data in Azure Synapse Analytics. See the for child parameters.
Replace the Immuta URL and with your own.
Pass the same payload you sent when , where
impersonation specifies whether user impersonation is enabled. See the for child parameters.
metadataDelimeters are a set of delimiters that Immuta uses to store profile data in Azure Synapse Analytics. See the for child parameters.
Copy the request example from one of the tabs below, and replace the values with your own as directed to configure the integration settings. The examples provided use JSON format, but the request also accepts YAML. The payload you provide must match the payload sent when .
See the for parameter definitions, value types, and additional configuration options.
Replace the Immuta URL and with your own.
Pass the same payload you sent when , where
impersonation specifies whether user impersonation is enabled. See the for child parameters.
metadataDelimeters are a set of delimiters that Immuta uses to store profile data in Azure Synapse Analytics. See the for child parameters.
Replace the Immuta URL and with your own.
Pass the same payload you sent when , where
impersonation specifies whether user impersonation is enabled. See the for child parameters.
metadataDelimeters are a set of delimiters that Immuta uses to store profile data in Azure Synapse Analytics. See the for child parameters.
The response returns the status of the Azure Synapse Analytics integration configuration connection. See the for details about the response schema.
An unsuccessful request returns the status code and an error message. See the for a list of statuses, error messages, and troubleshooting guidance.
Replace the Immuta URL and with your own.
Replace the {id} request parameter with the unique identifier of the integration you want to get. Alternatively, you can get a list of all integrations and their IDs with the .
The response returns an Azure Synapse Analytics integration configuration. See the for details about the response schema. An unsuccessful request returns the status code and an error message. See the for a list of statuses, error messages, and troubleshooting guidance.
Replace the Immuta URL and with your own.
The response returns the configuration for all integrations. See the for details about the response schema. An unsuccessful request returns the status code and an error message. See the for a list of statuses, error messages, and troubleshooting guidance.
(autoBootstrap is true)
(autoBootstrap is false)
See the for parameter definitions, value types, and additional configuration options.
Replace the Immuta URL and with your own.
metadataDelimeters are a set of delimiters that Immuta uses to store profile data in Azure Synapse Analytics. See the for child parameters.
The response returns the status of the Azure Synapse Analytics integration configuration connection. See the for details about the response schema.
An unsuccessful request returns the status code and an error message. See the for a list of statuses, error messages, and troubleshooting guidance.
See the for parameter definitions, value types, and additional configuration options.
Replace the Immuta URL and with your own.
impersonation specifies whether user impersonation is enabled. See the for child parameters.
metadataDelimeters are a set of delimiters that Immuta uses to store profile data in Azure Synapse Analytics. See the for child parameters.
Copy the request example, and replace the values with your own as directed to update the integration settings. The example provided uses JSON format, but the request also accepts YAML. The payload you provide must match the payload sent when .
See the for parameter definitions, value types, and additional configuration options.
Replace the Immuta URL and with your own.
Pass the same payload you sent when , where
impersonation specifies whether user impersonation is enabled. See the for child parameters.
metadataDelimeters are a set of delimiters that Immuta uses to store profile data in Azure Synapse Analytics. See the for child parameters.
The response returns the status of the Azure Synapse Analytics integration configuration connection. See the for details about the response schema.
An unsuccessful request returns the status code and an error message. See the for a list of statuses, error messages, and troubleshooting guidance.
Replace the Immuta URL and with your own.
use the script endpoint (for integrations that were not successfully created) or the endpoint (for integrations that were successfully created) to remove Immuta-managed resources from your environment,
use the script endpoint to finish removing Immuta-managed resources from your environment,
The response returns the status of the Azure Synapse Analytics integration configuration that has been deleted. See the for details about the response schema. An unsuccessful request returns the status code and an error message. See the for a list of statuses, error messages, and troubleshooting guidance.