Audience: Data Owners
Content Summary: This section of API documentation is specific to connecting your data to Immuta and managing Data Dictionaries.
Data sources can also be created and managed using the V2 API.
Connect a host reference guide: Create a single connection to Snowflake or Databricks Unity Catalog to register the integration and data sources.
Azure Synapse Analytics API reference guide: Create an Azure Synapse Analytics data source.
Databricks API reference guide: Create a Databricks data source.
Redshift API reference guide: Create a Redshift data source.
Snowflake API reference guide: Create a Snowflake data source.
Starburst (Trino) API reference guide: Create a Starburst (Trino) data source.
Data dictionary API reference guide: Manage the data dictionary.
Data API reference guide
This page details the /data v1 API, which allows users to connect a host to Immuta with a single set of credentials rather than configuring an integration and registering data sources separately.
Required Immuta permission: APPLICATION_ADMIN
You can connect the following technologies to Immuta using supported authentication methods:
Snowflake host
Databricks Unity Catalog host
To connect a host, you must follow this process:
Use the /integrations/scripts/create endpoint to receive a script.
Run the script in your native host, either Snowflake or Databricks Unity Catalog.
Use the /data/connection endpoint to finish creating the connection between your host and Immuta.
This page details how to use the /data v1 API to connect a Snowflake host to Immuta using username and password authentication. This connection works with a single set of credentials rather than configuring an integration and registering data sources separately. To manage your host, see the Manage a host reference guide.
To complete this guide, you must be a user with the following:
Immuta permissions:
APPLICATION_ADMIN
CREATE_DATA_SOURCE
Snowflake permissions:
CREATE DATABASE ON ACCOUNT WITH GRANT OPTION
CREATE ROLE ON ACCOUNT WITH GRANT OPTION
CREATE USER ON ACCOUNT WITH GRANT OPTION
MANAGE GRANTS ON ACCOUNT WITH GRANT OPTION
APPLY MASKING POLICY ON ACCOUNT WITH GRANT OPTION
APPLY ROW ACCESS POLICY ON ACCOUNT WITH GRANT OPTION
Complete the following steps to connect a Snowflake host:
Use the /integrations/scripts/create endpoint to receive a script.
Run the script in Snowflake.
Use the /data/connection endpoint to finish creating the connection between your host and Immuta.
POST /integrations/scripts/create
Copy the request and update the <placeholder_values> with your connection details. Then submit the request.
Find descriptions of the editable attributes in the table below and of the full payload in the Integration configuration payload reference guide. All values should be included; attributes you should not edit are noted.
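The sketch below shows what this request might look like for username and password authentication. The URL, API key header, and authenticationType value are assumptions for illustration; confirm the exact payload shape against the Integration configuration payload reference guide.

```bash
# Hedged sketch: request the setup script for a Snowflake host using
# username and password authentication. All <placeholder_values> are
# illustrative; the authenticationType value is an assumption.
curl -X POST 'https://<your-immuta-url>/integrations/scripts/create' \
  -H 'Authorization: <your-api-key>' \
  -H 'Content-Type: application/json' \
  -d '{
    "type": "Snowflake",
    "autoBootstrap": false,
    "config": {
      "authenticationType": "userPassword",
      "host": "<organization>.snowflakecomputing.com",
      "username": "<immuta_system_account_name>",
      "password": "<immuta_system_account_password>",
      "warehouse": "<warehouse_name>",
      "database": "<new_empty_database_name>"
    }
  }'
```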
Step one will return a script. Copy the script and run it in your Snowflake environment as a user with the permissions listed in the requirements section.
The script will create an Immuta system user that will authenticate using the username and password you specified in step one. This new system user will have the permissions listed on the Snowflake integration reference guide. Additionally, the script will create the database you specified in step one.
POST /data/connection
Copy the request and update the <placeholder_values> with your connection details. Note that the connection details here should match the ones used in step one. Then submit the request.
Find descriptions of the editable attributes in the table below and of the full payload in the Snowflake object table. All values should be included; attributes you should not edit are noted.
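A hedged sketch of this request follows; the technology and authenticationType values are assumptions, and the connection details must match step one:

```bash
# Hedged sketch: finish creating the connection between the Snowflake
# host and Immuta. Values must match those used in step one.
curl -X POST 'https://<your-immuta-url>/data/connection' \
  -H 'Authorization: <your-api-key>' \
  -H 'Content-Type: application/json' \
  -d '{
    "connectionKey": "<unique_connection_name>",
    "connection": {
      "technology": "Snowflake",
      "hostname": "<organization>.snowflakecomputing.com",
      "port": 443,
      "warehouse": "<warehouse_name>",
      "role": "<privileged_role>",
      "authenticationType": "userPassword",
      "username": "<immuta_system_account_name>",
      "password": "<immuta_system_account_password>"
    },
    "nativeIntegration": {
      "type": "Snowflake",
      "autoBootstrap": false,
      "config": {
        "authenticationType": "userPassword",
        "username": "<immuta_system_account_name>",
        "password": "<immuta_system_account_password>",
        "host": "<organization>.snowflakecomputing.com",
        "port": 443,
        "warehouse": "<warehouse_name>",
        "database": "<new_empty_database_name>"
      }
    }
  }'
```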
Test run
Opt to test and validate the create connection payload using a dry run:
POST /data/connection/test
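For example, a dry run might look like the following sketch, reusing the body you would send to /data/connection (saved here in a hypothetical connection-payload.json):

```bash
# Dry run: validates the payload without creating the connection.
curl -X POST 'https://<your-immuta-url>/data/connection/test' \
  -H 'Authorization: <your-api-key>' \
  -H 'Content-Type: application/json' \
  -d @connection-payload.json
```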
This page details how to use the /data v1 API to connect a Snowflake host to Immuta using key pair authentication. This connection works with a single set of credentials rather than configuring an integration and registering data sources separately. To manage your host, see the Manage a host reference guide.
To complete this guide, you must be a user with the following:
Immuta permissions:
APPLICATION_ADMIN
CREATE_DATA_SOURCE
Snowflake permissions:
CREATE DATABASE ON ACCOUNT WITH GRANT OPTION
CREATE ROLE ON ACCOUNT WITH GRANT OPTION
CREATE USER ON ACCOUNT WITH GRANT OPTION
MANAGE GRANTS ON ACCOUNT WITH GRANT OPTION
APPLY MASKING POLICY ON ACCOUNT WITH GRANT OPTION
APPLY ROW ACCESS POLICY ON ACCOUNT WITH GRANT OPTION
Complete the following steps to connect a Snowflake host:
Use the /integrations/scripts/create endpoint to receive a script.
Run the script in Snowflake.
Use the /data/connection endpoint to finish creating the connection between your host and Immuta.
POST /integrations/scripts/create
Copy the request and update the <placeholder_values> with your connection details. Then submit the request.
Find descriptions of the editable attributes in the table below and of the full payload in the Integration configuration payload reference guide. All values should be included; attributes you should not edit are noted.
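A hedged sketch for key pair authentication follows; the authenticationType value is an assumption, and the private key newlines are escaped as "\n" per the attribute description:

```bash
# Hedged sketch: request the setup script using key pair
# authentication. Placeholder values are illustrative.
curl -X POST 'https://<your-immuta-url>/integrations/scripts/create' \
  -H 'Authorization: <your-api-key>' \
  -H 'Content-Type: application/json' \
  -d '{
    "type": "Snowflake",
    "autoBootstrap": false,
    "config": {
      "authenticationType": "keyPair",
      "username": "<immuta_system_account_name>",
      "privateKey": "-----BEGIN PRIVATE KEY-----\n<key_contents>\n-----END PRIVATE KEY-----",
      "host": "<organization>.snowflakecomputing.com",
      "warehouse": "<warehouse_name>",
      "database": "<new_empty_database_name>"
    }
  }'
```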
Step one will return a script. Copy the script and run it in your Snowflake environment as a user with the permissions listed in the requirements section.
The script will allow an Immuta system user to authenticate using the username and private key you specified in step one. This system user will have the permissions listed on the Snowflake integration reference guide. Additionally, the script will create the database you specified in step one.
POST /data/connection
Copy the request and update the <placeholder_values> with your connection details. Note that the connection details here should match the ones used in step one. Then submit the request.
Find descriptions of the editable attributes in the table below and of the full payload in the Snowflake object table. All values should be included; attributes you should not edit are noted.
Test run
Opt to test and validate the create connection payload using a dry run:
POST /data/connection/test
This page details how to use the /data v1 API to connect a Snowflake host to Immuta using Snowflake OAuth with a client secret. This connection works with a single set of credentials rather than configuring an integration and registering data sources separately. To manage your host, see the Manage a host reference guide.
To complete this guide, you must be a user with the following:
Immuta permissions:
APPLICATION_ADMIN
CREATE_DATA_SOURCE
Snowflake permissions:
CREATE DATABASE ON ACCOUNT WITH GRANT OPTION
CREATE ROLE ON ACCOUNT WITH GRANT OPTION
CREATE USER ON ACCOUNT WITH GRANT OPTION
MANAGE GRANTS ON ACCOUNT WITH GRANT OPTION
APPLY MASKING POLICY ON ACCOUNT WITH GRANT OPTION
APPLY ROW ACCESS POLICY ON ACCOUNT WITH GRANT OPTION
Complete the following steps to connect a Snowflake host:
Use the /integrations/scripts/create endpoint to receive a script.
Run the script in Snowflake.
Use the /data/connection endpoint to finish creating the connection between your host and Immuta.
POST /integrations/scripts/create
Copy the request and update the <placeholder_values> with your connection details. Then submit the request.
Find descriptions of the editable attributes in the table below and of the full payload in the Integration configuration payload reference guide. All values should be included; attributes you should not edit are noted.
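A hedged sketch for Snowflake OAuth with a client secret follows; the authenticationType value and the useCertificate flag placement are assumptions, so confirm them against the payload reference:

```bash
# Hedged sketch: request the setup script using Snowflake OAuth with
# a client secret. Values are illustrative assumptions.
curl -X POST 'https://<your-immuta-url>/integrations/scripts/create' \
  -H 'Authorization: <your-api-key>' \
  -H 'Content-Type: application/json' \
  -d '{
    "type": "Snowflake",
    "autoBootstrap": false,
    "config": {
      "authenticationType": "oAuthClientCredentials",
      "host": "<organization>.snowflakecomputing.com",
      "warehouse": "<warehouse_name>",
      "database": "<new_empty_database_name>",
      "oAuthClientConfig": {
        "provider": "okta",
        "clientId": "<client_id>",
        "authorityUrl": "https://<identity-provider-url>",
        "useCertificate": false,
        "clientSecret": "<client_secret>"
      }
    }
  }'
```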
Step one will return a script. Copy the script and run it in your Snowflake environment as a user with the permissions listed in the requirements section.
The script will allow an Immuta system user to authenticate using the Snowflake OAuth and client secret you specified in step one. This system user will have the permissions listed on the Snowflake integration reference guide. Additionally, the script will create the database you specified in step one.
POST /data/connection
Copy the request and update the <placeholder_values> with your connection details. Note that the connection details here should match the ones used in step one. Then submit the request.
Find descriptions of the editable attributes in the table below and of the full payload in the Snowflake object table. All values should be included; attributes you should not edit are noted.
Test run
Opt to test and validate the create connection payload using a dry run:
POST /data/connection/test
This page details how to use the /data v1 API to connect a Snowflake host to Immuta using Snowflake OAuth and certificate authentication. This connection works with a single set of credentials rather than configuring an integration and registering data sources separately. To manage your host, see the Manage a host reference guide.
To complete this guide, you must be a user with the following:
Immuta permissions:
APPLICATION_ADMIN
CREATE_DATA_SOURCE
Snowflake permissions:
CREATE DATABASE ON ACCOUNT WITH GRANT OPTION
CREATE ROLE ON ACCOUNT WITH GRANT OPTION
CREATE USER ON ACCOUNT WITH GRANT OPTION
MANAGE GRANTS ON ACCOUNT WITH GRANT OPTION
APPLY MASKING POLICY ON ACCOUNT WITH GRANT OPTION
APPLY ROW ACCESS POLICY ON ACCOUNT WITH GRANT OPTION
Complete the following steps to connect a Snowflake host:
Use the /integrations/scripts/create endpoint to receive a script.
Run the script in Snowflake.
Use the /data/connection endpoint to finish creating the connection between your host and Immuta.
POST /integrations/scripts/create
Copy the request and update the <placeholder_values> with your connection details. Then submit the request.
Find descriptions of the editable attributes in the table below and of the full payload in the Integration configuration payload reference guide. All values should be included; attributes you should not edit are noted.
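A hedged sketch for Snowflake OAuth with certificate authentication follows; the authenticationType value and useCertificate placement are assumptions:

```bash
# Hedged sketch: request the setup script using Snowflake OAuth and
# certificate authentication. Values are illustrative assumptions.
curl -X POST 'https://<your-immuta-url>/integrations/scripts/create' \
  -H 'Authorization: <your-api-key>' \
  -H 'Content-Type: application/json' \
  -d '{
    "type": "Snowflake",
    "autoBootstrap": false,
    "config": {
      "authenticationType": "oAuthClientCredentials",
      "host": "<organization>.snowflakecomputing.com",
      "warehouse": "<warehouse_name>",
      "database": "<new_empty_database_name>",
      "oAuthClientConfig": {
        "provider": "okta",
        "clientId": "<client_id>",
        "authorityUrl": "https://<identity-provider-url>",
        "useCertificate": true,
        "publicCertificateThumbprint": "<certificate_thumbprint>",
        "oauthPrivateKey": "-----BEGIN PRIVATE KEY-----\n<key_contents>\n-----END PRIVATE KEY-----"
      }
    }
  }'
```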
Step one will return a script. Copy the script and run it in your Snowflake environment as a user with the permissions listed in the requirements section.
The script will allow an Immuta system user to authenticate using the Snowflake OAuth and certificate you specified in step one. This system user will have the permissions listed on the Snowflake integration reference guide. Additionally, the script will create the database you specified in step one.
POST /data/connection
Copy the request and update the <placeholder_values> with your connection details. Note that the connection details here should match the ones used in step one. Then submit the request.
Find descriptions of the editable attributes in the table below and of the full payload in the Snowflake object table. All values should be included; attributes you should not edit are noted.
Test run
Opt to test and validate the create connection payload using a dry run:
POST /data/connection/test
Snowflake data source API reference guide
The snowflake endpoint allows you to connect and manage Snowflake data sources in Immuta.
Additional fields may be included in some responses you receive; however, these attributes are for internal purposes and are therefore undocumented.
Snowflake imported databases
Immuta does not support Snowflake tables from imported databases. Instead, create a view of the table and register that view as a data source.
POST /snowflake/handler
Save the provided connection information as a data source.
This request creates a Snowflake data source.
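A hedged sketch of such a request follows, using only the attributes documented in the payload table below; the blobHandlerType value and the name fields are illustrative assumptions, and the full request also carries the handler connection details, which are omitted here:

```bash
# Hedged sketch: create a Snowflake data source. Values are
# illustrative; see the payload table for all documented attributes.
curl -X POST 'https://<your-immuta-url>/snowflake/handler' \
  -H 'Authorization: <your-api-key>' \
  -H 'Content-Type: application/json' \
  -d '{
    "private": false,
    "blobHandler": [{ "scheme": "https", "url": "" }],
    "blobHandlerType": "Snowflake",
    "recordFormat": "json",
    "type": "queryable",
    "name": "Customer Data",
    "sqlTableName": "customer_data",
    "organization": "<your_organization>"
  }'
```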
GET /snowflake/handler/{handlerId}
Get the handler metadata associated with the provided handler ID.
This request returns metadata for the handler with the ID 30.
PUT /snowflake/handler/{handlerId}
Update the data source metadata associated with the provided handler ID. This endpoint does not perform partial updates, but the dictionary may be omitted; in that case, the current dictionary is used.
This request updates the metadata for the data source with the handler ID 30.
Payload example
The payload below updates the eventTime to MOST_RECENT_ORDER.
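Because partial updates are not supported, the request sends the full payload. A hedged sketch, assuming the updated payload is saved in example-payload.json:

```bash
# example-payload.json holds the full data source payload with
# eventTime set to MOST_RECENT_ORDER.
curl -X PUT 'https://<your-immuta-url>/snowflake/handler/30' \
  -H 'Authorization: <your-api-key>' \
  -H 'Content-Type: application/json' \
  -d @example-payload.json
```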
PUT /snowflake/bulk
Update the data source metadata associated with the provided connection string.
This request updates the metadata for all data sources with the connection string specified in example-payload.json.
Payload example
The payload below updates the database to ANALYST_DEMO for the provided connection string.
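A hedged sketch of the bulk request follows, pairing the shared connection string with updated handler metadata; attribute names come from the bulk payload table below, and the placeholder values are illustrative:

```bash
# Hedged sketch: bulk-update all data sources sharing a connection
# string, changing the database to ANALYST_DEMO.
curl -X PUT 'https://<your-immuta-url>/snowflake/bulk' \
  -H 'Authorization: <your-api-key>' \
  -H 'Content-Type: application/json' \
  -d '{
    "connectionString": "<shared_connection_string>",
    "handler": {
      "ssl": true,
      "port": 443,
      "database": "ANALYST_DEMO",
      "hostname": "<organization>.snowflakecomputing.com",
      "username": "<immuta_system_account_name>",
      "password": "<immuta_system_account_password>"
    }
  }'
```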
PUT /snowflake/handler/{handlerId}/triggerHighCardinalityJob
Recalculate the high cardinality column for the specified data source.
The response returns a string of characters that identifies the high cardinality job run.
This request re-runs the job that calculates the high cardinality column for the data source with the handler ID 30.
PUT /snowflake/handler/{handlerId}/refreshNativeViewJob
Refresh the native view of a data source.
The response returns a string of characters that identifies the refresh view job run.
This request refreshes the view for the data source with the handler ID 7.
Starburst (Trino) data source API reference guide
The trino endpoint allows you to connect and manage Starburst (Trino) data sources in Immuta.
Additional fields may be included in some responses you receive; however, these attributes are for internal purposes and are therefore undocumented.
POST /trino/handler
Save the provided connection information as a data source.
This request creates a Trino data source.
GET /trino/handler/{handlerId}
Get the handler metadata associated with the provided handler ID.
This request returns metadata for the handler with the ID 1.
PUT /trino/handler/{handlerId}
Update the data source metadata associated with the provided handler ID. This endpoint does not perform partial updates, but the dictionary may be omitted; in that case, the current dictionary is used.
This request updates the data source name to Marketing Data for the data source with the handler ID 1.
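A hedged sketch of the rename follows; since partial updates are not supported, the hypothetical example-payload.json must carry the full payload with only the name changed:

```bash
# Rename the data source behind handler 1 to "Marketing Data".
curl -X PUT 'https://<your-immuta-url>/trino/handler/1' \
  -H 'Authorization: <your-api-key>' \
  -H 'Content-Type: application/json' \
  -d @example-payload.json
```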
PUT /trino/bulk
Update the data source metadata associated with the provided connection string.
This request updates the metadata for all data sources with the connection string specified in example-payload.json.
The payload below adds a certificate (certificate.json) to the data sources with the provided connection.
PUT /trino/handler/{handlerId}/triggerHighCardinalityJob
Recalculate the high cardinality column for the specified data source.
The response returns a string of characters that identifies the high cardinality job run.
This request re-runs the job that calculates the high cardinality column for the data source with the handler ID 30.
PUT /trino/handler/{handlerId}/refreshNativeViewJob
Refresh the native view of a data source.
The response returns a string of characters that identifies the refresh view job run.
This request refreshes the view for the data source with the handler ID 7.
To complete this guide, you must be a user with the following:
Immuta permissions:
APPLICATION_ADMIN
CREATE_DATA_SOURCE
Databricks authorizations:
Account or workspace admin
CREATE CATALOG privilege on the Unity Catalog metastore to create an Immuta-owned catalog and tables
Unity Catalog enabled on your Databricks cluster or SQL warehouse. All SQL warehouses have Unity Catalog enabled if your workspace is attached to a Unity Catalog metastore. Immuta recommends linking a SQL warehouse to your Immuta tenant rather than a cluster for both performance and availability reasons.
To connect a Databricks host, you must do the following:
Create a service principal in Databricks Unity Catalog with the Databricks permissions outlined below for Immuta to use to manage policies in Unity Catalog.
Enable Databricks Unity Catalog in Immuta.
Set up Unity Catalog system tables for native query audit.
Use the /integrations/scripts/create endpoint to receive a script.
Run the script in Databricks Unity Catalog.
Use the /data/connection endpoint to finish creating the connection between your host and Immuta.
The Immuta service principal requires the following Databricks privileges to connect to Databricks to create the integration catalog, configure the necessary procedures and functions, and maintain state between Databricks and Immuta:
OWNER permission on the Immuta catalog you configure.
OWNER permission on catalogs with schemas and tables registered as Immuta data sources so that Immuta can administer Unity Catalog row-level and column-level security controls. This permission can be applied by granting OWNER on a catalog to a Databricks group that includes the Immuta service principal, allowing for multiple owners. If the OWNER permission cannot be applied at the catalog or schema level, each table registered as an Immuta data source must individually have the OWNER permission granted to the Immuta service principal.
USE CATALOG and USE SCHEMA on parent catalogs and schemas of tables registered as Immuta data sources so that the Immuta service principal can interact with those tables.
SELECT and MODIFY on all tables registered as Immuta data sources so that the Immuta service principal can grant and revoke access to tables and apply Unity Catalog row- and column-level security controls.
USE CATALOG on the system catalog for native query audit.
USE SCHEMA on the system.access schema for native query audit.
SELECT on the following system tables for native query audit:
system.access.audit
system.access.table_lineage
system.access.column_lineage
Enable Databricks Unity Catalog on the Immuta app settings page:
Click the App Settings icon in the left sidebar.
Scroll to the Global Integrations Settings section and check the Enable Databricks Unity Catalog support in Immuta checkbox.
Enable native query audit by granting the Immuta service principal the following access in Unity Catalog:
USE CATALOG on the system catalog
USE SCHEMA on the system.access schema
SELECT on the following system tables:
system.access.audit
system.access.table_lineage
system.access.column_lineage
POST /integrations/scripts/create
Copy the request and update the <placeholder_values> with your connection details. Then submit the request.
The script will use the service principal, which authenticates using the personal access token (PAT) you specified in step four. Additionally, the script will create the catalog you specified in step four.
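A hedged sketch of this request follows; the authenticationType value and HTTP path format are assumptions, and the attribute names come from the Databricks payload table below:

```bash
# Hedged sketch: request the setup script for a Databricks Unity
# Catalog host using a service principal with a PAT.
curl -X POST 'https://<your-immuta-url>/integrations/scripts/create' \
  -H 'Authorization: <your-api-key>' \
  -H 'Content-Type: application/json' \
  -d '{
    "type": "Databricks",
    "autoBootstrap": false,
    "config": {
      "authenticationType": "token",
      "workspaceUrl": "<your-workspace>.cloud.databricks.com",
      "httpPath": "/sql/1.0/warehouses/<warehouse_id>",
      "token": "<service_principal_pat>",
      "catalog": "<immuta_catalog_name>"
    }
  }'
```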
POST /data/connection
Copy the request and update the <placeholder_values> with your connection details. Note that the connection details here should match the ones used in step four. Then submit the request.
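A hedged sketch follows, mirroring the step-four values; the technology and authenticationType values are assumptions:

```bash
# Hedged sketch: finish creating the Databricks Unity Catalog
# connection. Values must match those used in step four.
curl -X POST 'https://<your-immuta-url>/data/connection' \
  -H 'Authorization: <your-api-key>' \
  -H 'Content-Type: application/json' \
  -d '{
    "connectionKey": "<unique_connection_name>",
    "connection": {
      "technology": "Databricks",
      "hostname": "<your-workspace>.cloud.databricks.com",
      "port": 443,
      "httpPath": "/sql/1.0/warehouses/<warehouse_id>",
      "authenticationType": "token",
      "token": "<service_principal_pat>"
    },
    "nativeIntegration": {
      "type": "Databricks",
      "autoBootstrap": false,
      "config": {
        "authenticationType": "token",
        "token": "<service_principal_pat>",
        "host": "<your-workspace>.cloud.databricks.com",
        "port": 443,
        "catalog": "<immuta_catalog_name>"
      }
    }
  }'
```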
Test run
Opt to test and validate the create connection payload using a dry run:
POST /data/connection/test
Azure Synapse Analytics API reference guide
This page describes the asa (Azure Synapse Analytics data sources) endpoint.
Additional fields may be included in some responses you receive; however, these attributes are for internal purposes and are therefore undocumented.
POST /asa/handler
Save the provided connection information as a data source.
The following request saves the provided connection information as a data source.
GET /asa/handler/{handlerId}
Get the handler metadata associated with the provided handler ID.
The following request returns the handler metadata associated with the provided handler ID.
PUT /asa/handler/{handlerId}
Update the handler metadata associated with the provided handler ID. This endpoint does not perform partial updates, but the dictionary may be omitted; in that case, the current dictionary is used.
The following request updates the handler metadata (saved in example_payload.json) associated with the provided handler ID.
Request payload example
PUT /asa/bulk
Update the data source metadata associated with the provided connection string.
The following request updates the handler metadata for the handler ID specified in example_payload.json.
Request payload example
PUT /asa/handler/{handlerId}/triggerHighCardinalityJob
Recalculate the high cardinality column for the provided handler ID.
The response returns a string of characters that identifies the high cardinality job run.
The following request recalculates the high cardinality column for the provided handler ID.
PUT /asa/handler/{handlerId}/refreshNativeViewJob
Refresh the native view of a data source.
The response returns a string of characters that identifies the refresh view job run.
This request refreshes the view for the data source with the handler ID 7.
Databricks data source API reference guide
The databricks endpoint allows you to connect and manage Databricks data sources in Immuta.
Additional fields may be included in some responses you receive; however, these attributes are for internal purposes and are therefore undocumented.
Databricks Spark integration
When exposing a table or view from an Immuta-enabled Databricks cluster, be sure that at least one of these traits is true:
The user exposing the tables has READ_METADATA and SELECT permissions on the target views/tables (specifically if Table ACLs are enabled).
The user exposing the tables is listed in the immuta.spark.acl.whitelist configuration on the target cluster.
The user exposing the tables is a Databricks workspace administrator.
Databricks Unity Catalog integration
When exposing a table from Databricks Unity Catalog, be sure the credentials used to register the data sources have the Databricks privileges listed below:
The following privileges on the parent catalogs and schemas of those tables:
SELECT
USE CATALOG
USE SCHEMA
USE SCHEMA on system.information_schema
POST /databricks/handler
Save the provided connection information as a data source.
This request creates two Databricks data sources.
GET /databricks/handler/{handlerId}
Get the handler metadata associated with the provided handler ID.
This request returns metadata for the handler with the ID 48.
PUT /databricks/handler/{handlerId}
Update the data source metadata associated with the provided handler ID. This endpoint does not perform partial updates, but the dictionary may be omitted; in that case, the current dictionary is used.
This request updates the metadata for the data source with the handler ID 48.
Payload example
The payload below updates the dataSourceName to Cities.
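A hedged sketch, assuming the full updated payload (with dataSourceName set to Cities) is saved in example-payload.json:

```bash
# Partial updates are not supported, so send the full payload.
curl -X PUT 'https://<your-immuta-url>/databricks/handler/48' \
  -H 'Authorization: <your-api-key>' \
  -H 'Content-Type: application/json' \
  -d @example-payload.json
```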
PUT /databricks/bulk
Update the data source metadata associated with the provided connection string.
This request updates the metadata for all data sources with the connection string specified in example-payload.json.
Payload example
The payload below adds a certificate (certificate.json) to connect to the data sources with the provided connection.
PUT /databricks/handler/{handlerId}/triggerHighCardinalityJob
Recalculate the high cardinality column for the specified data source.
The response returns a string of characters that identifies the high cardinality job run.
This request re-runs the job that calculates the high cardinality column for the data source with the handler ID 47.
Redshift data source API reference guide
The redshift endpoint allows you to connect and manage Redshift data sources in Immuta.
Additional fields may be included in some responses you receive; however, these attributes are for internal purposes and are therefore undocumented.
POST /redshift/handler
Save the provided connection information as a data source.
This request creates two Redshift data sources, which are specified in example-payload.json.
GET /redshift/handler/{handlerId}
Get the handler metadata associated with the provided handler ID.
This request returns metadata for the handler with the ID 41.
PUT /redshift/handler/{handlerId}
Update the data source metadata associated with the provided handler ID. This endpoint does not perform partial updates, but the dictionary may be omitted; in that case, the current dictionary is used.
This request updates the metadata for the data source with the handler ID 41.
Payload example
The payload below removes the paragraph_count column from the data source.
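A hedged sketch, assuming the full payload (with the paragraph_count column removed from the columns array) is saved in example-payload.json:

```bash
# Partial updates are not supported, so send the full payload.
curl -X PUT 'https://<your-immuta-url>/redshift/handler/41' \
  -H 'Authorization: <your-api-key>' \
  -H 'Content-Type: application/json' \
  -d @example-payload.json
```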
PUT /redshift/bulk
Update the data source metadata associated with the provided connection string.
This request updates the metadata for all data sources with the connection string specified in example-payload.json.
Payload example
The payload below adds a certificate (certificate.json) to connect to the data sources with the provided connection string.
PUT /redshift/handler/{handlerId}/triggerHighCardinalityJob
Recalculate the high cardinality column for the specified data source.
The response returns a string of characters that identifies the high cardinality job run.
This request re-runs the job that calculates the high cardinality column for the data source with the handler ID 41.
PUT /redshift/handler/{handlerId}/refreshNativeViewJob
Refresh the native view of a data source.
The response returns a string of characters that identifies the refresh view job run.
This request refreshes the view for the data source with the handler ID 7.
Data dictionary API reference guide
The data dictionary API allows you to manage the data dictionary for your data sources.
Additional fields may be included in some responses you receive; however, these attributes are for internal purposes and are therefore undocumented.
POST /dictionary/{dataSourceId}
Create the dictionary for the specified data source.
The following request creates a data dictionary (saved in example-payload.json) for the data source with ID 1.
Payload example
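A hedged sketch of a dictionary payload, assuming the body is the array of column objects described in the payload table (column names and type strings are illustrative):

```bash
# Create a dictionary for data source 1 from an inline column array.
curl -X POST 'https://<your-immuta-url>/dictionary/1' \
  -H 'Authorization: <your-api-key>' \
  -H 'Content-Type: application/json' \
  -d '[
    { "name": "customer_id", "dataType": "integer", "remoteType": "NUMBER(38,0)" },
    { "name": "email", "dataType": "text", "remoteType": "VARCHAR(255)" }
  ]'
```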
Other status codes returned include:
PUT /dictionary/{dataSourceId}
Update the dictionary for the specified data source.
The request below updates the data dictionary for the data source with the ID 1.
Payload example
Other status codes returned include:
GET /dictionary/{dataSourceId}
Get the dictionary for the specified data source.
The request below gets the data dictionary for the data source with the ID 1.
GET /dictionary/columns
Search across all dictionary columns.
The following request searches for columns in all dictionaries that contain the text address in their name, with a limit of 10 results.
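For example, a sketch of that search, using the documented searchText and limit query parameters:

```bash
# Search dictionary columns whose names contain "address".
curl -X GET 'https://<your-immuta-url>/dictionary/columns?searchText=address&limit=10' \
  -H 'Authorization: <your-api-key>'
```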
DELETE /dictionary/{dataSourceId}
Delete the data dictionary for the specified data source.
The request below deletes the data dictionary for the data source with ID 1.
This endpoint returns {} on success.
Other status codes returned include:
This page details how to use the /data v1 API to connect a Databricks Unity Catalog host to Immuta using a service principal with a personal access token (PAT). This connection works with a single set of credentials rather than configuring an integration and registering data sources separately. To manage your host, see the Manage a host reference guide.
To complete this guide, you must have the following:
Unity Catalog enabled on your metastore and attached to a Databricks workspace.
A Databricks service principal with the Databricks permissions outlined below, set up with personal access token (PAT) authentication.
For Databricks Unity Catalog audit to work, Immuta must have, at minimum, the access listed in the native query audit requirements above.
Find descriptions of the editable attributes in the table below and of the full payload in the Integration configuration payload reference guide. All values should be included; attributes you should not edit are noted.
Step one will return a script. Copy the script and run it in your Databricks Unity Catalog environment as a user with the permissions listed in the requirements section.
Find descriptions of the editable attributes in the table below. All values should be included; attributes you should not edit are noted.
Set all table-level ownership on your Unity Catalog data sources to an individual user or service principal instead of a Databricks group before proceeding. Otherwise, Immuta cannot apply data policies to the table in Unity Catalog.
Script request attributes (username and password authentication):

Attribute | Description | Required |
---|---|---|
config.host string | The URL of your Snowflake account. | Yes |
config.username string | The new username of the system account that can act on Snowflake objects and configure the host. The system account will be created by the script in step two. | Yes |
config.password string | The password of the system account that can act on Snowflake objects and configure the host. The system account will be created by the script in step two. | Yes |
config.warehouse string | The default pool of compute resources the Immuta system user will use to run queries and perform other Snowflake operations. | Yes |
config.database string | Name of a new empty database that the Immuta system user will manage and store metadata in. | Yes |

/data/connection request attributes:

Attribute | Description | Required |
---|---|---|
connectionKey string | A unique name for the host connection. | Yes |
connection object | Configuration attributes that should match the values used when getting the script from the integration endpoint. | Yes |
connection.hostname string | The URL of your Snowflake account. This is the same as host. | Yes |
connection.port integer | The port to use when connecting to your Snowflake account host. Defaults to 443. | Yes |
connection.warehouse string | The default pool of compute resources the Immuta system user will use to run queries and perform other Snowflake operations. | Yes |
connection.role string | The privileged Snowflake role used by the Immuta system account when configuring the Snowflake host. At minimum, it must be able to see the data that Immuta will govern. | Yes |
connection.username string | The username of the system account that can act on Snowflake objects and configure the host. | Yes |
connection.password string | The password of the system account that can act on Snowflake objects and configure the host. | Yes |
nativeIntegration object | Configuration attributes that should match the values used when getting the script from the integration endpoint. | Yes |
nativeIntegration.config.username string | Same as connection.username. | Yes |
nativeIntegration.config.password string | Same as connection.password. | Yes |
nativeIntegration.config.host string | Same as connection.hostname. | Yes |
nativeIntegration.config.port integer | Same as connection.port. | Yes |
nativeIntegration.config.warehouse string | Same as connection.warehouse. | Yes |
nativeIntegration.config.database string | Name of a new empty database that the Immuta system user will manage and store metadata in. | Yes |

Response attributes:

Attribute | Description |
---|---|
objectPath string | The list of names that uniquely identify the path to a data object in the remote platform's hierarchy. The first element will be the associated connectionKey. |
bulkId string | A bulk ID that can be used to search for the status of background jobs triggered by this request. |
Script request attributes (key pair authentication):

Attribute | Description | Required |
---|---|---|
config.username string | The username of the system account that can act on Snowflake objects and configure the host. | Yes |
config.privateKey string | The private key. Replace new lines in the private key with a backslash before the new line character: "\n". If you are using another means of configuration, such as a Python script, the "\n" should not be added. | Yes |
config.host string | The URL of your Snowflake account. | Yes |
config.warehouse string | The default pool of compute resources the Immuta system user will use to run queries and perform other Snowflake operations. | Yes |
config.database string | Name of a new empty database that the Immuta system user will manage and store metadata in. | Yes |

/data/connection request attributes:

Attribute | Description | Required |
---|---|---|
connectionKey string | A unique name for the host connection. | Yes |
connection object | Configuration attributes that should match the values used when getting the script from the integration endpoint. | Yes |
connection.hostname string | The URL of your Snowflake account. This is the same as host. | Yes |
connection.port integer | The port to use when connecting to your Snowflake account host. Defaults to 443. | Yes |
connection.warehouse string | The default pool of compute resources the Immuta system user will use to run queries and perform other Snowflake operations. | Yes |
connection.role string | The privileged Snowflake role used by the Immuta system account when configuring the Snowflake host. At minimum, it must be able to see the data that Immuta will govern. | Yes |
connection.username string | The username of the system account that can act on Snowflake objects and configure the host. | Yes |
connection.privateKeyPassword string | The Snowflake private key password. Required if the private key is encrypted. | No |
connection.privateKey.userFilename string | The name of your private key file on your machine. | Yes |
connection.privateKey.content string | The private key. Replace new lines in the private key with a backslash before the new line character: "\n". If you are using another means of configuration, such as a Python script, the "\n" should not be added. This is the same as config.privateKey. | Yes |
nativeIntegration object | Configuration attributes that should match the values used when getting the script from the integration endpoint. | Yes |
nativeIntegration.config.username string | Same as connection.username. | Yes |
nativeIntegration.config.privateKeyPassword string | Same as connection.privateKeyPassword. | No |
nativeIntegration.config.privateKey.keyName string | Same as connection.keyName. | Yes |
nativeIntegration.config.privateKey.userFilename string | Same as connection.userFilename. | Yes |
nativeIntegration.config.privateKey.content string | Same as connection.content. | Yes |
nativeIntegration.config.host string | Same as connection.hostname. | Yes |
nativeIntegration.config.port integer | Same as connection.port. | Yes |
nativeIntegration.config.warehouse string | Same as connection.warehouse. | Yes |
nativeIntegration.config.database string | Name of a new empty database that the Immuta system user will manage and store metadata in. | Yes |

Response attributes:

Attribute | Description |
---|---|
objectPath string | The list of names that uniquely identify the path to a data object in the remote platform's hierarchy. The first element should be the associated connectionKey. |
bulkId string | A bulk ID that can be used to search for the status of background jobs triggered by this request. |
Script request attributes (Snowflake OAuth with a client secret):

Attribute | Description | Required |
---|---|---|
config.host string | The URL of your Snowflake account. | Yes |
config.warehouse string | The default pool of compute resources the Immuta system user will use to run queries and perform other Snowflake operations. | Yes |
config.database string | Name of a new empty database that the Immuta system user will manage and store metadata in. | Yes |
config.oAuthClientConfig.provider string | The identity provider for OAuth, such as Okta. | Yes |
config.oAuthClientConfig.clientId string | The client identifier of your registered application. | Yes |
config.oAuthClientConfig.authorityUrl string | Authority URL of your identity provider. | Yes |
config.oAuthClientConfig.clientSecret string | Client secret of the application. | Yes |

/data/connection request attributes:

Attribute | Description | Required |
---|---|---|
connectionKey string | A unique name for the host connection. | Yes |
connection object | Configuration attributes that should match the values used when getting the script from the integration endpoint. | Yes |
connection.hostname string | The URL of your Snowflake account. This is the same as host. | Yes |
connection.port integer | The port to use when connecting to your Snowflake account host. Defaults to 443. | Yes |
connection.warehouse string | The default pool of compute resources the Immuta system user will use to run queries and perform other Snowflake operations. | Yes |
connection.role string | The privileged Snowflake role used by the Immuta system account when configuring the Snowflake host. At minimum, it must be able to see the data that Immuta will govern. | Yes |
connection.oAuthClientConfig.clientId string | The client identifier of your registered application. | Yes |
connection.oAuthClientConfig.authorityUrl string | Authority URL of your identity provider. | Yes |
connection.oAuthClientConfig.clientSecret string | Client secret of the application. | Yes |
connection.oAuthClientConfig.resource string | An optional resource to pass to the token provider. | No |
nativeIntegration object | Configuration attributes that should match the values used when getting the script from the integration endpoint. | Yes |
nativeIntegration.config.oAuthClientConfig.clientId string | Same as connection.oAuthClientConfig.clientId. | Yes |
nativeIntegration.config.oAuthClientConfig.authorityUrl string | Same as connection.oAuthClientConfig.authorityUrl. | Yes |
nativeIntegration.config.oAuthClientConfig.resource string | Same as connection.oAuthClientConfig.resource. | No |
nativeIntegration.config.oAuthClientConfig.clientSecret string | Same as connection.oAuthClientConfig.clientSecret. | Yes |
nativeIntegration.config.host string | Same as connection.hostname. | Yes |
nativeIntegration.config.port integer | Same as connection.port. | Yes |
nativeIntegration.config.warehouse string | Same as connection.warehouse. | Yes |
nativeIntegration.config.database string | Name of a new empty database that the Immuta system user will manage and store metadata in. | Yes |

Response attributes:

Attribute | Description |
---|---|
objectPath string | The list of names that uniquely identify the path to a data object in the remote platform's hierarchy. The first element should be the associated connectionKey. |
bulkId string | A bulk ID that can be used to search for the status of background jobs triggered by this request. |
Script request attributes (Snowflake OAuth and certificate authentication):

Attribute | Description | Required |
---|---|---|
config.host string | The URL of your Snowflake account. | Yes |
config.warehouse string | The default pool of compute resources the Immuta system user will use to run queries and perform other Snowflake operations. | Yes |
config.database string | Name of a new empty database that the Immuta system user will manage and store metadata in. | Yes |
config.oAuthClientConfig.provider string | The identity provider for OAuth, such as Okta. | Yes |
config.oAuthClientConfig.clientId string | The client identifier of your registered application. | Yes |
config.oAuthClientConfig.authorityUrl string | Authority URL of your identity provider. | Yes |
config.oAuthClientConfig.publicCertificateThumbprint string | Your certificate thumbprint. | Yes |
config.oAuthClientConfig.oauthPrivateKey string | The private key. Replace new lines in the private key with a backslash before the new line character: "\n". If you are using another means of configuration, such as a Python script, the "\n" should not be added. | Yes |

/data/connection request attributes:

Attribute | Description | Required |
---|---|---|
connectionKey string | A unique name for the host connection. | Yes |
connection object | Configuration attributes that should match the values used when getting the script from the integration endpoint. | Yes |
connection.hostname string | The URL of your Snowflake account. This is the same as host. | Yes |
connection.port integer | The port to use when connecting to your Snowflake account host. Defaults to 443. | Yes |
connection.warehouse string | The default pool of compute resources the Immuta system user will use to run queries and perform other Snowflake operations. | Yes |
connection.role string | The privileged Snowflake role used by the Immuta system account when configuring the Snowflake host. At minimum, it must be able to see the data that Immuta will govern. | Yes |
connection.oAuthClientConfig.clientId string | The client identifier of your registered application. | Yes |
connection.oAuthClientConfig.authorityUrl string | Authority URL of your identity provider. | Yes |
connection.oAuthClientConfig.publicCertificateThumbprint string | Your certificate thumbprint. | Yes |
connection.oAuthClientConfig.resource string | An optional resource to pass to the token provider. | No |
connection.oAuthClientConfig.oauthPrivateKey.userFilename string | The name of your private key file on your machine. | Yes |
connection.oAuthClientConfig.oauthPrivateKey.content string | The private key. Replace new lines in the private key with a backslash before the new line character: "\n". If you are using another means of configuration, such as a Python script, the "\n" should not be added. This is the same as config.oauthPrivateKey in the script request. | Yes |
nativeIntegration object | Configuration attributes that should match the values used when getting the script from the integration endpoint. | Yes |
nativeIntegration.config.oAuthClientConfig.clientId string | Same as connection.oAuthClientConfig.clientId. | Yes |
nativeIntegration.config.oAuthClientConfig.authorityUrl string | Same as connection.oAuthClientConfig.authorityUrl. | Yes |
nativeIntegration.config.oAuthClientConfig.publicCertificateThumbprint string | Same as connection.oAuthClientConfig.publicCertificateThumbprint. | Yes |
nativeIntegration.config.oAuthClientConfig.resource string | Same as connection.oAuthClientConfig.resource. | No |
nativeIntegration.config.oAuthClientConfig.oauthPrivateKey.userFilename string | Same as connection.oAuthClientConfig.oauthPrivateKey.userFilename. | Yes |
nativeIntegration.config.oAuthClientConfig.oauthPrivateKey.content string | Same as connection.oAuthClientConfig.oauthPrivateKey.content. | Yes |
nativeIntegration.config.host string | Same as connection.hostname. | Yes |
nativeIntegration.config.port integer | Same as connection.port. | Yes |
nativeIntegration.config.warehouse string | Same as connection.warehouse. | Yes |
nativeIntegration.config.database string | Name of a new empty database that the Immuta system user will manage and store metadata in. | Yes |

Response attributes:

Attribute | Description |
---|---|
objectPath string | The list of names that uniquely identify the path to a data object in the remote platform's hierarchy. The first element should be the associated connectionKey. |
bulkId string | A bulk ID that can be used to search for the status of background jobs triggered by this request. |
POST /snowflake/handler payload attributes:

Attribute | Description | Required |
---|---|---|
private boolean | When false, the data source will be publicly available in the Immuta UI. | Yes |
blobHandler array[object] | The parameters for this array include scheme ("https") and url (an empty string). | Yes |
blobHandlerType string | Describes the type of underlying blob handler that will be used with this data source (e.g., MS SQL). | Yes |
recordFormat string | The data format of blobs in the data source, such as json, xml, html, or jpeg. | Yes |
type string | The type of data source: ingested (metadata will exist in Immuta) or queryable (metadata is dynamically queried). | Yes |
name string | The name of the data source. It must be unique within the Immuta tenant. | Yes |
sqlTableName string | A string that represents this data source's table in Immuta. | Yes |
organization string | The organization that owns the data source. | Yes |
category string | The category of the data source. | No |
description string | The description of the data source. | No |
hasExamples boolean | When true, the data source contains examples. | No |

POST /snowflake/handler response attributes:

Attribute | Description |
---|---|
id integer | The handler ID. |
dataSourceId integer | The ID of the data source. |
warnings string | This message describes issues with the created data source, such as the data source being unhealthy. |
connectionString string | The connection string used to connect the data source to Immuta. |

GET /snowflake/handler/{handlerId} parameters:

Attribute | Description | Required |
---|---|---|
handlerId integer | The ID of the handler. | Yes |
skipCache boolean | When true, will skip the handler cache when retrieving metadata. | No |

GET /snowflake/handler/{handlerId} response attributes:

Attribute | Description |
---|---|
body array[object] | Metadata about the data source, including the data source ID, schema, database, and connection string. |
Method | Path | Purpose |
---|---|---|
PUT | /snowflake/handler/{handlerId} | Update the data source metadata associated with the provided handler ID. This endpoint does not perform partial updates, but the dictionary may be omitted; in that case, the current dictionary is used. |
PUT | /snowflake/bulk | Update the data source metadata associated with the provided connection string. |
PUT | /snowflake/handler/{handlerId}/triggerHighCardinalityJob | Recalculate the high cardinality column for the specified data source. |
PUT | /snowflake/handler/{handlerId}/refreshNativeViewJob | Refresh the native view of a data source. |
PUT /snowflake/handler/{handlerId} parameters:

Attribute | Description | Required |
---|---|---|
handlerId integer | The ID of the handler. | Yes |
skipCache boolean | When true, will skip the handler cache when retrieving metadata. | No |
handler metadata | Includes metadata about the handler, such as ssl, port, database, hostname, username, and password. | Yes |
connectionString string | The connection string used to connect to the data source. | Yes |

PUT /snowflake/handler/{handlerId} response attributes:

Attribute | Description |
---|---|
id integer | The ID of the handler. |
ca string | The certificate authority. |
columns array[object] | This is a Data Dictionary object, which provides metadata about the columns in the data source, including the name and data type of the column. |

PUT /snowflake/bulk parameters:

Attribute | Description | Required |
---|---|---|
handler metadata | Includes metadata about the handler, such as ssl, port, database, hostname, username, and password. | Yes |
connectionString string | The connection string used to connect to the data sources. | Yes |

PUT /snowflake/bulk response attributes:

Attribute | Description |
---|---|
bulkId string | The ID of the bulk data source update. |
connectionString string | The connection string shared by the data sources bulk updated. |
jobsCreated integer | The number of jobs that ran to update the data sources; this number corresponds to the number of data sources updated. |

PUT /snowflake/handler/{handlerId}/triggerHighCardinalityJob parameters:

Attribute | Description | Required |
---|---|---|
handlerId integer | The ID of the handler. | Yes |

PUT /snowflake/handler/{handlerId}/refreshNativeViewJob parameters:

Attribute | Description | Required |
---|---|---|
handlerId integer | The ID of the handler. | Yes |
POST /trino/handler payload attributes:

Attribute | Description | Required |
---|---|---|
private boolean | When false, the data source will be publicly available in the Immuta UI. | Yes |
blobHandler array[object] | The parameters for this array include scheme ("https") and url (an empty string). | Yes |
blobHandlerType string | Describes the type of underlying blob handler that will be used with this data source (e.g., MS SQL). | Yes |
recordFormat string | The data format of blobs in the data source, such as json, xml, html, or jpeg. | Yes |
type string | The type of data source: ingested (metadata will exist in Immuta) or queryable (metadata is dynamically queried). | Yes |
name string | The name of the data source. It must be unique within the Immuta tenant. | Yes |
sqlTableName string | A string that represents this data source's table in Immuta. | Yes |
organization string | The organization that owns the data source. | Yes |
category string | The category of the data source. | No |
description string | The description of the data source. | No |
hasExamples boolean | When true, the data source contains examples. | No |

POST /trino/handler response attributes:

Attribute | Description |
---|---|
id integer | The handler ID. |
dataSourceId integer | The ID of the data source. |
warnings string | This message describes issues with the created data source, such as the data source being unhealthy. |
connectionString string | The connection string used to connect the data source to Immuta. |

GET /trino/handler/{handlerId} parameters:

Attribute | Description | Required |
---|---|---|
handlerId integer | The ID of the handler. | Yes |
skipCache boolean | When true, will skip the handler cache when retrieving metadata. | No |

GET /trino/handler/{handlerId} response attributes:

Attribute | Description |
---|---|
body array[object] | Metadata about the data source, including the data source ID, schema, database, and connection string. |
Method | Path | Purpose |
---|---|---|
PUT | /trino/handler/{handlerId} | Update the data source metadata associated with the provided handler ID. This endpoint does not perform partial updates, but the dictionary may be omitted; in that case, the current dictionary is used. |
PUT | /trino/bulk | Update the data source metadata associated with the provided connection string. |
PUT | /trino/handler/{handlerId}/triggerHighCardinalityJob | Recalculate the high cardinality column for the specified data source. |
PUT | /trino/handler/{handlerId}/refreshNativeViewJob | Refresh the native view of a data source. |
PUT /trino/handler/{handlerId} parameters:

Attribute | Description | Required |
---|---|---|
handlerId integer | The ID of the handler. | Yes |
skipCache boolean | When true, will skip the handler cache when retrieving metadata. | No |
handler metadata | Includes metadata about the handler, such as ssl, port, database, hostname, username, and password. | Yes |
connectionString string | The connection string used to connect to the data source. | Yes |

PUT /trino/handler/{handlerId} response attributes:

Attribute | Description |
---|---|
id integer | The ID of the handler. |
ca string | The certificate authority. |
columns array[object] | This is a Data Dictionary object, which provides metadata about the columns in the data source, including the name and data type of the column. |

PUT /trino/bulk parameters:

Attribute | Description | Required |
---|---|---|
handler metadata | Includes metadata about the handler, such as ssl, port, database, hostname, username, and password. | Yes |
connectionString string | The connection string used to connect to the data sources. | Yes |

PUT /trino/bulk response attributes:

Attribute | Description |
---|---|
bulkId string | The ID of the bulk data source update. |
connectionString string | The connection string shared by the data sources bulk updated. |
jobsCreated integer | The number of jobs that ran to update the data sources; this number corresponds to the number of data sources updated. |

PUT /trino/handler/{handlerId}/triggerHighCardinalityJob parameters:

Attribute | Description | Required |
---|---|---|
handlerId integer | The ID of the handler. | Yes |

PUT /trino/handler/{handlerId}/refreshNativeViewJob parameters:

Attribute | Description | Required |
---|---|---|
handlerId integer | The ID of the handler. | Yes |
Databricks Unity Catalog host request attributes:

Attribute | Description | Required |
---|---|---|
config.workspaceUrl | Your Databricks workspace URL. | Yes |
config.httpPath | The HTTP path of your Databricks cluster or SQL warehouse. | Yes |
config.token | The Databricks personal access token for the service principal created in step one for Immuta. | Yes |
config.catalog | The name of the Databricks catalog Immuta will create to store internal entitlements and other user data specific to Immuta. This catalog will only be readable for the Immuta service principal and should not be granted to other users. The catalog name may only contain letters, numbers, and underscores and cannot start with a number. | Yes |
connectionKey | A unique name for the host connection. | Yes |
connection | Configuration attributes that should match the values used when getting the script from the integration endpoint. | Yes |
connection.hostname | Your Databricks workspace URL. This is the same as config.workspaceUrl. | Yes |
connection.port | The port to use when connecting to your Databricks account host. Defaults to 443. | Yes |
connection.httpPath | The HTTP path of your Databricks cluster or SQL warehouse. | Yes |
connection.token | The Databricks personal access token for the service principal created in step one for Immuta. | Yes |
nativeIntegration | Configuration attributes that should match the values used when getting the script from the integration endpoint. | Yes |
nativeIntegration.config.token | Same as connection.token. | Yes |
nativeIntegration.config.host | Same as connection.hostname. | Yes |
nativeIntegration.config.port | Same as connection.port. | Yes |
nativeIntegration.config.catalog | The name of the Databricks catalog created with the script from step four. | Yes |

Response attributes:

Attribute | Description |
---|---|
objectPath | The list of names that uniquely identify the path to a data object in the remote platform's hierarchy. The first element should be the associated connectionKey. |
bulkId | A bulk ID that can be used to search for the status of background jobs triggered by this request. |
POST /asa/handler payload attributes:

Attribute | Description | Required |
---|---|---|
private | When false, the data source will be publicly available in the Immuta UI. | Yes |
blobHandler | The parameters for this array include scheme ("https") and url (an empty string). | Yes |
blobHandlerType | Describes the type of underlying blob handler that will be used with this data source. | Yes |
recordFormat | The data format of blobs in the data source, such as json, xml, html, or jpeg. | Yes |
type | The type of data source: ingested (metadata will exist in Immuta) or queryable (metadata is dynamically queried). | Yes |
name | The name of the data source. It must be unique within the Immuta tenant. | Yes |
sqlTableName | A string that represents this data source's table in Immuta. | Yes |
organization | The organization that owns the data source. | Yes |
category | The category of the data source. | No |
description | The description of the data source. | No |
hasExamples | When true, the data source contains examples. | No |

POST /asa/handler response attributes:

Attribute | Description |
---|---|
id | The handler ID. |
dataSourceId | The ID of the data source. |
warnings | This message describes issues with the created data source, such as the data source being unhealthy. |
connectionString | The connection string used to connect the data source to Immuta. |

GET /asa/handler/{handlerId} parameters:

Attribute | Description | Required |
---|---|---|
handlerId | The ID of the handler. | Yes |
skipCache | When true, will skip the handler cache when retrieving metadata. | No |

GET /asa/handler/{handlerId} response attributes:

Attribute | Description |
---|---|
dataSourceId | The ID of the data source. |
value | |

PUT /asa/handler/{handlerId} parameters:

Attribute | Description | Required |
---|---|---|
handlerId | The ID of the handler. | Yes |
skipCache | When true, will skip the handler cache when retrieving metadata. | No |

PUT /asa/handler/{handlerId} response attributes:

Attribute | Description |
---|---|
dataSourceId | The ID of the data source. |
body | |

PUT /asa/bulk parameters:

Attribute | Description | Required |
---|---|---|
body | | Yes |

PUT /asa/bulk response attributes:

Attribute | Description |
---|---|
bulkId | The ID of the bulk data source update. |
connectionString | The connection string shared by the data sources bulk updated. |
jobsCreated | The number of jobs that ran to update the data sources; this number corresponds to the number of data sources updated. |

PUT /asa/handler/{handlerId}/triggerHighCardinalityJob parameters:

Attribute | Description | Required |
---|---|---|
handlerId | The ID of the handler. | Yes |

PUT /asa/handler/{handlerId}/refreshNativeViewJob parameters:

Attribute | Description | Required |
---|---|---|
handlerId | The ID of the handler. | Yes |
POST /databricks/handler payload attributes:

Attribute | Description | Required |
---|---|---|
private | When false, the data source will be publicly available in the Immuta UI. | Yes |
blobHandler | The parameters for this array include scheme ("https") and url (an empty string). | Yes |
blobHandlerType | Describes the type of underlying blob handler that will be used with this data source. | Yes |
recordFormat | The data format of blobs in the data source, such as json, xml, html, or jpeg. | Yes |
type | The type of data source: ingested (metadata will exist in Immuta) or queryable (metadata is dynamically queried). | Yes |
name | The name of the data source. It must be unique within the Immuta tenant. | Yes |
sqlTableName | A string that represents this data source's table in Immuta. | Yes |
organization | The organization that owns the data source. | Yes |
category | The category of the data source. | No |
description | The description of the data source. | No |
hasExamples | When true, the data source contains examples. | No |

POST /databricks/handler response attributes:

Attribute | Description |
---|---|
id | The handler ID. |
dataSourceId | The ID of the data source. |
warnings | This message describes issues with the created data source, such as the data source being unhealthy. |
connectionString | The connection string used to connect the data source to Immuta. |

GET /databricks/handler/{handlerId} parameters:

Attribute | Description | Required |
---|---|---|
handlerId | The ID of the handler. | Yes |
skipCache | When true, will skip the handler cache when retrieving metadata. | No |

GET /databricks/handler/{handlerId} response attributes:

Attribute | Description |
---|---|
body | Metadata about the data source, including the data source ID, schema, database, and connection string. |

PUT /databricks/handler/{handlerId} parameters:

Attribute | Description | Required |
---|---|---|
handlerId | The ID of the handler. | Yes |
skipCache | When true, will skip the handler cache when retrieving metadata. | No |
handler | Includes metadata about the handler, such as ssl, port, database, hostname, username, and password. | Yes |
connectionString | The connection string used to connect to the data source. | Yes |

PUT /databricks/handler/{handlerId} response attributes:

Attribute | Description |
---|---|
id | The ID of the handler. |
ca | The certificate authority. |
columns | This is a Data Dictionary object, which provides metadata about the columns in the data source, including the name and data type of the column. |

PUT /databricks/bulk parameters:

Attribute | Description | Required |
---|---|---|
handler | Includes metadata about the handler, such as ssl, port, database, hostname, username, and password. | Yes |
connectionString | The connection string used to connect to the data sources. | Yes |

PUT /databricks/bulk response attributes:

Attribute | Description |
---|---|
bulkId | The ID of the bulk data source update. |
connectionString | The connection string shared by the data sources bulk updated. |
jobsCreated | The number of jobs that ran to update the data sources; this number corresponds to the number of data sources updated. |

PUT /databricks/handler/{handlerId}/triggerHighCardinalityJob parameters:

Attribute | Description | Required |
---|---|---|
handlerId | The ID of the handler. | Yes |
POST /redshift/handler payload attributes:

Attribute | Description | Required |
---|---|---|
private | When false, the data source will be publicly available in the Immuta UI. | Yes |
blobHandler | The parameters for this array include scheme ("https") and url (an empty string). | Yes |
blobHandlerType | Describes the type of underlying blob handler that will be used with this data source. | Yes |
recordFormat | The data format of blobs in the data source, such as json, xml, html, or jpeg. | Yes |
type | The type of data source: ingested (metadata will exist in Immuta) or queryable (metadata is dynamically queried). | Yes |
name | The name of the data source. It must be unique within the Immuta tenant. | Yes |
sqlTableName | A string that represents this data source's table in Immuta. | Yes |
organization | The organization that owns the data source. | Yes |
category | The category of the data source. | No |
description | The description of the data source. | No |
hasExamples | When true, the data source contains examples. | No |

POST /redshift/handler response attributes:

Attribute | Description |
---|---|
id | The handler ID. |
dataSourceId | The ID of the data source. |
warnings | This message describes issues with the created data source, such as the data source being unhealthy. |
connectionString | The connection string used to connect the data source to Immuta. |

GET /redshift/handler/{handlerId} parameters:

Attribute | Description | Required |
---|---|---|
handlerId | The ID of the handler. | Yes |
skipCache | When true, will skip the handler cache when retrieving metadata. | No |

GET /redshift/handler/{handlerId} response attributes:

Attribute | Description |
---|---|
body | Metadata about the data source, including the data source ID, schema, database, and connection string. |

PUT /redshift/handler/{handlerId} parameters:

Attribute | Description | Required |
---|---|---|
handlerId | The ID of the handler. | Yes |
skipCache | When true, will skip the handler cache when retrieving metadata. | No |
handler | Includes metadata about the handler, such as ssl, port, database, hostname, username, and password. | Yes |
connectionString | The connection string used to connect to the data source. | Yes |

PUT /redshift/handler/{handlerId} response attributes:

Attribute | Description |
---|---|
id | The ID of the handler. |
ca | The certificate authority. |
columns | This is a Data Dictionary object, which provides metadata about the columns in the data source, including the name and data type of the column. |

PUT /redshift/bulk parameters:

Attribute | Description | Required |
---|---|---|
handler | Includes metadata about the handler, such as ssl, port, database, hostname, username, and password. | Yes |
connectionString | The connection string used to connect to the data sources. | Yes |

PUT /redshift/bulk response attributes:

Attribute | Description |
---|---|
bulkId | The ID of the bulk data source update. |
connectionString | The connection string shared by the data sources bulk updated. |
jobsCreated | The number of jobs that ran to update the data sources; this number corresponds to the number of data sources updated. |

PUT /redshift/handler/{handlerId}/triggerHighCardinalityJob parameters:

Attribute | Description | Required |
---|---|---|
handlerId | The ID of the handler. | Yes |

PUT /redshift/handler/{handlerId}/refreshNativeViewJob parameters:

Attribute | Description | Required |
---|---|---|
handlerId | The ID of the handler. | Yes |
POST /dictionary/{dataSourceId} parameters:

Attribute | Description | Required |
---|---|---|
dataSourceId | The ID of the data source. | Yes |
body | The array of column objects that makes up the dictionary. | Yes |
name | The name of the column. | Yes |
dataType | The data type of the column. | Yes |
remoteType | The data type of the column in the remote platform. | Yes |

POST /dictionary/{dataSourceId} response attributes:

Attribute | Description |
---|---|
createdAt | |
dataSource | |
id | |
metadata | |
types | |

Status Code | Message |
---|---|
400 | Bad request: (detailed reason). |
401 | A valid Authorization token must be provided. |
403 | User must have one of the following roles to delete dictionary: owner, expert. |
404 | Data source not found. |

PUT /dictionary/{dataSourceId} parameters:

Attribute | Description | Required |
---|---|---|
dataSourceId | The ID of the data source. | Yes |
body | The array of column objects that makes up the dictionary. | Yes |
name | The name of the column. | Yes |
dataType | The data type of the column. | Yes |
remoteType | The data type of the column in the remote platform. | Yes |

PUT /dictionary/{dataSourceId} response attributes:

Attribute | Description |
---|---|
createdAt | |
dataSource | |
id | |
metadata | |
types | |

Status Code | Message |
---|---|
400 | Bad request: (detailed reason). |
401 | A valid Authorization token must be provided. |
403 | User must have one of the following roles to delete dictionary: owner, expert. |
404 | Data source not found. |

GET /dictionary/{dataSourceId} parameters:

Attribute | Description | Required |
---|---|---|
dataSourceId | The ID of the data source. | Yes |

GET /dictionary/{dataSourceId} response attributes:

Attribute | Description |
---|---|
createdAt | |
dataSource | |
id | |
metadata | |
types | |

GET /dictionary/columns parameters:

Attribute | Description | Required |
---|---|---|
searchText | Text to match against column names. | No |
limit | The maximum number of results to return. | No |

GET /dictionary/columns response attributes:

Attribute | Description |
---|---|
columnName | |
dataSourceId | The ID of the data source. |

DELETE /dictionary/{dataSourceId} status codes:

Status Code | Message |
---|---|
401 | A valid Authorization token must be provided. |
403 | User must have one of the following roles to delete dictionary: owner, expert. |
404 | Data source not found. |
| Attribute | Description |
|---|---|
| connectionKey | A unique name for the host connection. |
| connection.technology | The technology backing the new host. For this payload, this is Snowflake. |
| connection.hostname | The URL of your Snowflake account. This is the same as nativeIntegration.config.host. |
| connection.port | The port to use when connecting to your Snowflake account host. Defaults to 443. |
| connection.warehouse | The default pool of compute resources the Immuta system user will use to run queries and perform other Snowflake operations. |
| connection.role | The privileged Snowflake role used by the Immuta system account when configuring the Snowflake host. It must be able to see the data that Immuta will govern. |
| connection.authenticationType | The authentication type to connect to the host. Make sure this auth type is the same one used when requesting the script. |
| connection.username | The username of the system account that can act on Snowflake objects and configure the host. Required if using username and password authentication. |
| connection.password | The password of the system account that can act on Snowflake objects and configure the host. Required if using username and password authentication. |
| connection.privateKeyPassword | The Snowflake private key password. Required if using keyPair authentication. |
| connection.privateKey.keyName | The Immuta-given name of your private key. Required if using keyPair authentication. |
| connection.privateKey.userFilename | The name of the private key file on your machine. Required if using keyPair authentication. |
| connection.privateKey.content | The private key. Replace new lines in the private key with a backslash before the new line character: "\n". If you are using another means of configuration, such as a Python script, the "\n" should not be added. Required if using keyPair authentication. In the integration payload, this is the config.privateKey attribute. |
| connection.oAuthClientConfig.useCertificate | Specifies whether or not to use a certificate and private key for authenticating with OAuth. Required if using oAuthClientCredentials authentication. |
| connection.oAuthClientConfig.clientId | The client identifier of your registered application. Required if using oAuthClientCredentials authentication. |
| connection.oAuthClientConfig.authorityUrl | Authority URL of your identity provider. Required if using oAuthClientCredentials authentication. |
| connection.oAuthClientConfig.scope | The scope limits the operations and roles allowed in Snowflake by the access token. Required if using oAuthClientCredentials authentication. |
| connection.oAuthClientConfig.resource | An optional resource to pass to the token provider. |
| connection.oAuthClientConfig.publicCertificateThumbprint | Your certificate thumbprint. Required if using oAuthClientCredentials authentication and useCertificate is true. |
| connection.oAuthClientConfig.oauthPrivateKey.keyName | The Immuta-given name of your private key. Required if using oAuthClientCredentials authentication and useCertificate is true. |
| connection.oAuthClientConfig.oauthPrivateKey.userFilename | The name of your private key file on your machine. Required if using oAuthClientCredentials authentication and useCertificate is true. |
| connection.oAuthClientConfig.oauthPrivateKey.content | The private key. Replace new lines in the private key with a backslash before the new line character: "\n". If you are using another means of configuration, such as a Python script, the "\n" should not be added. Required if using oAuthClientCredentials authentication and useCertificate is true. In the integration payload, this is the config.oauthPrivateKey attribute. |
| connection.oAuthClientConfig.clientSecret | Client secret of the application. Required if using oAuthClientCredentials authentication and useCertificate is false. |
| settings.isActive | Controls whether the connection is active. |
| options.forceRecursiveCrawl | When true, forces a recursive crawl of the host to register its objects. |
| nativeIntegration | Configuration attributes that should match the values used when getting the script from the integration endpoint. |
| nativeIntegration.type | The type of technology. For this payload, this is Snowflake. |
| nativeIntegration.autoBootstrap | Whether Immuta should bootstrap the integration automatically. Because you run the setup script yourself in this flow, this must be false. |
| nativeIntegration.config.authenticationType | The authentication type to connect to the host. Make sure this auth type is the same one used when requesting the script. |
| nativeIntegration.config.username | The username of the system account that can act on Snowflake objects and configure the host. Required if using username and password authentication. |
| nativeIntegration.config.password | The password of the system account that can act on Snowflake objects and configure the host. Required if using username and password authentication. |
| nativeIntegration.config.privateKeyPassword | The Snowflake private key password. Required if using keyPair authentication. |
| nativeIntegration.config.keyName | The Immuta-given name of your private key. Required if using keyPair authentication. |
| nativeIntegration.config.userFilename | The name of the private key file on your machine. Required if using keyPair authentication. |
| nativeIntegration.config.content | The private key. Replace new lines in the private key with a backslash before the new line character: "\n". If you are using another means of configuration, such as a Python script, the "\n" should not be added. Required if using keyPair authentication. |
| nativeIntegration.config.oAuthClientConfig.useCertificate | Specifies whether or not to use a certificate and private key for authenticating with OAuth. Required if using oAuthClientCredentials authentication. |
| nativeIntegration.config.oAuthClientConfig.clientId | The client identifier of your registered application. Required if using oAuthClientCredentials authentication. |
| nativeIntegration.config.oAuthClientConfig.authorityUrl | Authority URL of your identity provider. Required if using oAuthClientCredentials authentication. |
| nativeIntegration.config.oAuthClientConfig.scope | The scope limits the operations and roles allowed in Snowflake by the access token. Required if using oAuthClientCredentials authentication. |
| nativeIntegration.config.oAuthClientConfig.resource | An optional resource to pass to the token provider. |
| nativeIntegration.config.oAuthClientConfig.oauthPrivateKey.keyName | The Immuta-given name of your private key. Required if using oAuthClientCredentials authentication and useCertificate is true. |
| nativeIntegration.config.oAuthClientConfig.oauthPrivateKey.userFilename | The name of your private key file on your machine. Required if using oAuthClientCredentials authentication and useCertificate is true. |
| nativeIntegration.config.oAuthClientConfig.oauthPrivateKey.content | The private key. Replace new lines in the private key with a backslash before the new line character: "\n". If you are using another means of configuration, such as a Python script, the "\n" should not be added. Required if using oAuthClientCredentials authentication and useCertificate is true. |
| nativeIntegration.config.oAuthClientConfig.clientSecret | Client secret of the application. Required if using oAuthClientCredentials authentication and useCertificate is false. |
| nativeIntegration.config.host | The URL of your Snowflake account. This is the same as connection.hostname. |
| nativeIntegration.config.port | The port to use when connecting to your Snowflake account host. Defaults to 443. |
| nativeIntegration.config.warehouse | The default pool of compute resources the Immuta system user will use to run queries and perform other Snowflake operations. |
| nativeIntegration.config.database | Name of a new, empty database that the Immuta system user will manage and store metadata in. |
| nativeIntegration.config.impersonation | Enables user impersonation. User impersonation is not currently supported with this connection, so this must be disabled. |
| nativeIntegration.config.audit | This object enables Snowflake query audit. |
| nativeIntegration.config.workspaces | This object represents an Immuta project workspace configured for Snowflake. Project workspaces are not currently supported with this connection, so this must be disabled. |
| nativeIntegration.config.lineage | Enables Snowflake lineage ingestion so that Immuta can apply tags added to Snowflake tables to their descendant data source columns. Lineage is not currently supported with this connection, so this must be disabled. |
| nativeIntegration.config.userRolePattern | This object excludes roles and users from authorization checks. Excluded roles and users are not currently supported with this connection, so this must be left empty. |
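Putting the Snowflake attributes together, a username-and-password request might look like the following sketch. The nesting mirrors the attribute paths in the table; "userPassword" is an assumed value for authenticationType (use the same value you sent when requesting the script), and every other value is a placeholder for your own connection details.

```bash
# Sketch of a Snowflake connection request using username and password.
# "userPassword" is an assumption for authenticationType; match the value
# used when requesting the script. All other values are placeholders.
curl -X POST \
  -H "Authorization: <your_token>" \
  -H "Content-Type: application/json" \
  https://<your_immuta_url>/data/connection \
  -d '{
    "connectionKey": "<your_connection_key>",
    "connection": {
      "technology": "Snowflake",
      "hostname": "<account>.snowflakecomputing.com",
      "port": 443,
      "warehouse": "<warehouse>",
      "role": "<privileged_role>",
      "authenticationType": "userPassword",
      "username": "<system_account_username>",
      "password": "<system_account_password>"
    },
    "settings": { "isActive": true },
    "options": { "forceRecursiveCrawl": true },
    "nativeIntegration": {
      "type": "Snowflake",
      "autoBootstrap": false,
      "config": {
        "authenticationType": "userPassword",
        "username": "<system_account_username>",
        "password": "<system_account_password>",
        "host": "<account>.snowflakecomputing.com",
        "port": 443,
        "warehouse": "<warehouse>",
        "database": "<new_empty_database>"
      }
    }
  }'
```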
| Attribute | Description |
|---|---|
| connectionKey | A unique name for the host connection. |
| connection.technology | The technology backing the new host. For this payload, this is Databricks. |
| connection.hostname | Your Databricks workspace URL. This is the same as nativeIntegration.config.host. |
| connection.port | The port to use when connecting to your Databricks account host. Defaults to 443. |
| connection.httpPath | The HTTP path of your Databricks cluster or SQL warehouse. |
| connection.authenticationType | The authentication type to connect to the host. Make sure this auth type is the same one used when requesting the script. |
| connection.token | The Databricks personal access token for the service principal created for Immuta. |
| settings.isActive | Controls whether the connection is active. |
| options.forceRecursiveCrawl | When true, forces a recursive crawl of the host to register its objects. |
| nativeIntegration | Configuration attributes that should match the values used when getting the script from the integration endpoint. |
| nativeIntegration.type | The type of technology. For this payload, this is Databricks. |
| nativeIntegration.autoBootstrap | Whether Immuta should bootstrap the integration automatically. Because you run the setup script yourself in this flow, this must be false. |
| nativeIntegration.unityCatalog | Whether the integration targets Unity Catalog. For a Databricks Unity Catalog host, this must be true. |
| nativeIntegration.config.authenticationType | The authentication type to connect to the host. Make sure this auth type is the same one used when requesting the script. |
| nativeIntegration.config.token | The Databricks personal access token for the service principal created for Immuta. |
| nativeIntegration.config.host | Your Databricks workspace URL. This is the same as connection.hostname. |
| nativeIntegration.config.port | The port to use when connecting to your Databricks account host. Defaults to 443. |
| nativeIntegration.config.httpPath | The HTTP path of your Databricks cluster or SQL warehouse. |
| nativeIntegration.config.catalog | The name of the Databricks catalog Immuta will create to store internal entitlements and other user data specific to Immuta. This catalog will only be readable for the Immuta service principal and should not be granted to other users. The catalog name may only contain letters, numbers, and underscores and cannot start with a number. |
| nativeIntegration.config.audit | This object enables Databricks Unity Catalog query audit. |
| nativeIntegration.config.enableNativeQueryParsing | Enables native query parsing. |
| nativeIntegration.config.jobConfig.workspaceDirectoryPath | The file path of the workspace directory. |
| nativeIntegration.config.jobConfig.jobClusterId | The ID of the job cluster. |
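A Databricks Unity Catalog request follows the same shape. In this sketch, "token" is an assumed value for authenticationType (match the value used when requesting the script), the catalog name is illustrative, and the remaining values are placeholders.

```bash
# Sketch of a Databricks Unity Catalog connection request using a personal
# access token. "token" is an assumption for authenticationType; the catalog
# name and all other values are placeholders to replace with your own details.
curl -X POST \
  -H "Authorization: <your_token>" \
  -H "Content-Type: application/json" \
  https://<your_immuta_url>/data/connection \
  -d '{
    "connectionKey": "<your_connection_key>",
    "connection": {
      "technology": "Databricks",
      "hostname": "<workspace>.cloud.databricks.com",
      "port": 443,
      "httpPath": "<http_path>",
      "authenticationType": "token",
      "token": "<service_principal_pat>"
    },
    "settings": { "isActive": true },
    "options": { "forceRecursiveCrawl": true },
    "nativeIntegration": {
      "type": "Databricks",
      "autoBootstrap": false,
      "unityCatalog": true,
      "config": {
        "authenticationType": "token",
        "token": "<service_principal_pat>",
        "host": "<workspace>.cloud.databricks.com",
        "port": 443,
        "httpPath": "<http_path>",
        "catalog": "<immuta_catalog_name>"
      }
    }
  }'
```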
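Because the privateKey content attributes above must hold the key as a single JSON string, each real newline in the PEM file has to become the two characters "\n". The following one-liner is a hypothetical way to produce that form, assuming your key is in a file named rsa_key.p8; skip this escaping entirely if a tool such as a Python script reads the file for you.

```bash
# Print the key as one line, with each newline replaced by the literal
# two-character sequence \n, suitable for pasting into a JSON string.
# rsa_key.p8 is a hypothetical filename for your private key file.
awk 'NF {printf "%s\\n", $0}' rsa_key.p8
```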
Note: The dictionary update endpoint does not perform partial updates, but it allows the dictionary to be omitted; in that case, it uses the current dictionary.