Audience: Data Owners
Content Summary: This section of API documentation is specific to connecting your data to Immuta and managing Data Dictionaries.
Data sources can also be created and managed using the V2 API.
- Azure blob storage API reference guide: Create an Azure blob storage data source.
- Azure Synapse Analytics API reference guide: Create an Azure Synapse Analytics data source.
- Databricks API reference guide: Create a Databricks data source.
- Redshift API reference guide: Create a Redshift data source.
- Snowflake API reference guide: Create a Snowflake data source.
- Data dictionary API reference guide: Manage the data dictionary.
**Deprecation notice**: Support for this database has been deprecated.
The adls-gen2 endpoint allows you to connect and manage Azure Data Lake Storage Gen2 data sources in Immuta.
Additional fields may be included in some responses you receive; however, these attributes are for internal purposes and are therefore undocumented.
**Duplicate data sources**: To prevent two data sources from referencing the same table, users cannot create duplicate data sources. If you attempt to create a duplicate data source using the API, you will receive a warning stating "duplicate tables are specified in the payload."
POST
/adls-gen2/handler
Save the provided connection information as a data source.
This request creates two Azure Data Lake Storage Gen2 data sources.
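As an illustration only, a request of this shape could be sent from Python; the hostname, token, and payload values below are placeholders, and a real payload also needs the ADLS Gen2 connection details expected by this handler.

```python
import requests

IMMUTA_URL = "https://your-immuta.example.com"  # placeholder Immuta hostname
HEADERS = {
    "Authorization": "Bearer <your-api-token>",  # placeholder; use your deployment's auth scheme
    "Content-Type": "application/json",
}

# Illustrative payload: values are placeholders, and a real request also needs
# the ADLS Gen2 connection details (account, container, credentials).
payload = {
    "private": True,
    "blobHandlerType": "Azure Data Lake Storage Gen2",  # illustrative value
    "recordFormat": "json",
    "type": "queryable",
    "name": "ADLS Customer Data",
    "sqlTableName": "adls_customer_data",
    "organization": "My Organization",
}

resp = requests.post(f"{IMMUTA_URL}/adls-gen2/handler", json=payload, headers=HEADERS)
resp.raise_for_status()
print(resp.json())  # includes id, dataSourceId, warnings, and connectionString
```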
GET
/adls-gen2/handler/{handlerId}
Get the handler metadata associated with the provided handler ID.
This request returns metadata for the handler with the ID 17.
PUT
/adls-gen2/handler/{handlerId}
Update the data source metadata associated with the provided handler ID. This endpoint does not perform partial updates, but will allow the dictionary to be omitted. In this case, it uses the current dictionary.
This request updates the metadata for the data source with the handler ID 17. The payload below updates the dataSourceName to Customer Details.
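A minimal sketch of issuing this update from Python, assuming a local file (here called example-payload.json) holds the complete data source metadata, since this endpoint does not perform partial updates; the hostname and token are placeholders.

```python
import json
import requests

IMMUTA_URL = "https://your-immuta.example.com"  # placeholder Immuta hostname
HEADERS = {
    "Authorization": "Bearer <your-api-token>",  # placeholder credentials
    "Content-Type": "application/json",
}

# Assumed file holding the complete metadata for handler 17, with
# dataSourceName set to "Customer Details".
with open("example-payload.json") as f:
    payload = json.load(f)

resp = requests.put(f"{IMMUTA_URL}/adls-gen2/handler/17", json=payload, headers=HEADERS)
resp.raise_for_status()
print(resp.json())
```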
The azureblob endpoint allows you to connect and manage Azure Blob Storage data sources in Immuta.
Additional fields may be included in some responses you receive; however, these attributes are for internal purposes and are therefore undocumented.
POST
/azureblob/handler
Save the provided connection for an Azure Blob Storage data source.
The following request saves the provided connection information (in example-payload.json) as a data source.
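A sketch of sending that request from Python; the hostname and token are placeholders, and example-payload.json is assumed to contain the connection information described above.

```python
import json
import requests

IMMUTA_URL = "https://your-immuta.example.com"  # placeholder Immuta hostname
HEADERS = {
    "Authorization": "Bearer <your-api-token>",  # placeholder credentials
    "Content-Type": "application/json",
}

# example-payload.json is assumed to hold the connection information for the
# Azure Blob Storage container being registered.
with open("example-payload.json") as f:
    payload = json.load(f)

resp = requests.post(f"{IMMUTA_URL}/azureblob/handler", json=payload, headers=HEADERS)
resp.raise_for_status()
print(resp.json())
```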
GET
/azureblob/handler/{handlerId}
Return the handler metadata associated with the provided handler ID.
The following request returns the handler metadata associated with the provided handler ID.
PUT
/azureblob/handler/{handlerId}
Update the provided information for an Azure Blob Storage data source.
The following request with the payload below updates the metadata for the data source with the handler ID 18.
Payload example
PUT
/azureblob/bulk
Update the data source metadata associated with the provided connection string.
The following request updates the autoIngest value to true for data sources with the connection string specified in the payload below.
Payload example
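As a rough sketch only (the payload shape shown is illustrative, not a confirmed schema), such a bulk update could be issued like this:

```python
import requests

IMMUTA_URL = "https://your-immuta.example.com"  # placeholder Immuta hostname
HEADERS = {
    "Authorization": "Bearer <your-api-token>",  # placeholder credentials
    "Content-Type": "application/json",
}

# Illustrative payload shape: the connection string identifies which data
# sources to update, and autoIngest is the value being changed to true.
payload = {
    "connectionString": "myaccount.blob.core.windows.net/mycontainer",  # placeholder
    "handler": {"autoIngest": True},
}

resp = requests.put(f"{IMMUTA_URL}/azureblob/bulk", json=payload, headers=HEADERS)
resp.raise_for_status()
print(resp.json())  # bulk endpoints typically report the jobs created
```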
PUT
/azureblob/handler/{handlerId}/crawl
Re-crawls the data source and updates the metadata.
The response returns a string of characters that identify the job run.
The following request re-crawls the data source.
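A minimal sketch of triggering the crawl from Python; the hostname, token, and handler ID are placeholders.

```python
import requests

IMMUTA_URL = "https://your-immuta.example.com"  # placeholder Immuta hostname
HEADERS = {"Authorization": "Bearer <your-api-token>"}  # placeholder credentials

# Handler ID 18 matches the handler used in the update example above.
resp = requests.put(f"{IMMUTA_URL}/azureblob/handler/18/crawl", headers=HEADERS)
resp.raise_for_status()
print(resp.text)  # a string identifying the crawl job run
```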
This page describes the asa (Azure Synapse Analytics data sources) endpoint.
Additional fields may be included in some responses you receive; however, these attributes are for internal purposes and are therefore undocumented.
**Duplicate data sources**: To prevent two data sources from referencing the same table, users cannot create duplicate data sources. If you attempt to create a duplicate data source using the API, you will receive a warning stating "duplicate tables are specified in the payload."
POST
/asa/handler
Save the provided connection information as a data source.
The following request saves the provided connection information as a data source.
GET
/asa/handler/{handlerId}
Get the handler metadata associated with the provided handler ID.
The following request returns the handler metadata associated with the provided handler ID.
PUT
/asa/handler/{handlerId}
Updates the handler metadata associated with the provided handler ID. This endpoint does not perform partial updates, but will allow the dictionary to be omitted. In this case it uses the current dictionary.
The following request updates the handler metadata (saved in example_payload.json) associated with the provided handler ID.
Request payload example
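A sketch of sending that update from Python, with example_payload.json assumed to hold the full handler metadata (partial updates are not supported); the hostname, token, and handler ID are placeholders.

```python
import json
import requests

IMMUTA_URL = "https://your-immuta.example.com"  # placeholder Immuta hostname
HEADERS = {
    "Authorization": "Bearer <your-api-token>",  # placeholder credentials
    "Content-Type": "application/json",
}

# example_payload.json is assumed to hold the full handler metadata; the
# handler ID (7 here) is a placeholder.
with open("example_payload.json") as f:
    payload = json.load(f)

resp = requests.put(f"{IMMUTA_URL}/asa/handler/7", json=payload, headers=HEADERS)
resp.raise_for_status()
print(resp.json())
```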
PUT
/asa/bulk
Updates the data source metadata associated with the provided connection string.
The following request updates the data source metadata for the connection string specified in example_payload.json.
Request payload example
PUT
/asa/handler/{handlerId}/triggerHighCardinalityJob
Recalculates the high cardinality column for the provided handler ID.
The response returns a string of characters that identify the high cardinality job run.
The following request recalculates the high cardinality column for the provided handler ID.
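A minimal sketch of triggering the job from Python; the hostname, token, and handler ID are placeholders.

```python
import requests

IMMUTA_URL = "https://your-immuta.example.com"  # placeholder Immuta hostname
HEADERS = {"Authorization": "Bearer <your-api-token>"}  # placeholder credentials

# The handler ID (7 here) is a placeholder for the handler being recalculated.
resp = requests.put(
    f"{IMMUTA_URL}/asa/handler/7/triggerHighCardinalityJob", headers=HEADERS
)
resp.raise_for_status()
print(resp.text)  # a string identifying the high cardinality job run
```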
PUT
/asa/handler/{handlerId}/refreshNativeViewJob
Refresh the native view of a data source.
The response returns a string of characters that identifies the refresh view job run.
This request refreshes the view for the data source with the handler ID 7.
The databricks endpoint allows you to connect and manage Databricks data sources in Immuta.
Additional fields may be included in some responses you receive; however, these attributes are for internal purposes and are therefore undocumented.
Databricks Spark integration
When exposing a table or view from an Immuta-enabled Databricks cluster, be sure that at least one of these traits is true:
- The user exposing the tables has READ_METADATA and SELECT permissions on the target views/tables (specifically if Table ACLs are enabled).
- The user exposing the tables is listed in the immuta.spark.acl.whitelist configuration on the target cluster.
- The user exposing the tables is a Databricks workspace administrator.
Databricks Unity Catalog integration
When exposing a table from Databricks Unity Catalog, be sure the credentials used to register the data sources have the Databricks privileges listed below.
The following privileges on the parent catalogs and schemas of those tables:
- SELECT
- USE CATALOG
- USE SCHEMA

Additionally, USE SCHEMA on system.information_schema.
Azure Databricks Unity Catalog limitation
Set all table-level ownership on your Unity Catalog data sources to an individual user or service principal instead of a Databricks group before proceeding. Otherwise, Immuta cannot apply data policies to the table in Unity Catalog.
**Duplicate data sources**: To prevent two data sources from referencing the same table, users cannot create duplicate data sources. If you attempt to create a duplicate data source using the API, you will receive a warning stating "duplicate tables are specified in the payload."
POST
/databricks/handler
Save the provided connection information as a data source.
This request creates two Databricks data sources.
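For illustration, the request could be issued from Python as below; the hostname and token are placeholders, and databricks-payload.json is a hypothetical file holding the Databricks connection details and the tables to register.

```python
import json
import requests

IMMUTA_URL = "https://your-immuta.example.com"  # placeholder Immuta hostname
HEADERS = {
    "Authorization": "Bearer <your-api-token>",  # placeholder credentials
    "Content-Type": "application/json",
}

# databricks-payload.json is a hypothetical file holding the connection
# details and the two tables to register as data sources.
with open("databricks-payload.json") as f:
    payload = json.load(f)

resp = requests.post(f"{IMMUTA_URL}/databricks/handler", json=payload, headers=HEADERS)
resp.raise_for_status()
print(resp.json())  # includes id, dataSourceId, and connectionString
```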
GET
/databricks/handler/{handlerId}
Get the handler metadata associated with the provided handler ID.
This request returns metadata for the handler with the ID 48.
PUT
/databricks/handler/{handlerId}
Update the data source metadata associated with the provided handler ID. This endpoint does not perform partial updates, but will allow the dictionary to be omitted. In this case, it uses the current dictionary.
This request updates the metadata for the data source with the handler ID 48.
Payload example
The payload below updates the dataSourceName to Cities.
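A minimal sketch of the update from Python, assuming example-payload.json contains the complete metadata for handler 48 with dataSourceName set to Cities; the hostname and token are placeholders.

```python
import json
import requests

IMMUTA_URL = "https://your-immuta.example.com"  # placeholder Immuta hostname
HEADERS = {
    "Authorization": "Bearer <your-api-token>",  # placeholder credentials
    "Content-Type": "application/json",
}

# Assumed file with the complete metadata for handler 48 (partial updates are
# not supported), with dataSourceName set to "Cities".
with open("example-payload.json") as f:
    payload = json.load(f)

resp = requests.put(f"{IMMUTA_URL}/databricks/handler/48", json=payload, headers=HEADERS)
resp.raise_for_status()
print(resp.json())
```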
PUT
/databricks/bulk
Update the data source metadata associated with the provided connection string.
This request updates the metadata for all data sources with the connection string specified in example-payload.json.
Payload example
The payload below adds a certificate (certificate.json) to connect to the data sources with the provided connection.
PUT
/databricks/handler/{handlerId}/triggerHighCardinalityJob
Recalculate the high cardinality column for the specified data source.
The response returns a string of characters that identify the high cardinality job run.
This request re-runs the job that calculates the high cardinality column for the data source with the handler ID 47.
This page describes the presto (Presto data sources) endpoint.
Additional fields may be included in some responses you receive; however, these attributes are for internal purposes and are therefore undocumented.
**Duplicate data sources**: To prevent two data sources from referencing the same table, users cannot create duplicate data sources. If you attempt to create a duplicate data source using the API, you will receive a warning stating "duplicate tables are specified in the payload."
POST
/presto/handler
Save the provided connection information as a data source.
This request creates a Presto data source.
GET
/presto/handler/{handlerId}
Get the handler metadata associated with the provided handler ID.
The following request returns the handler metadata associated with the provided handler ID.
PUT
/presto/handler/{handlerId}
Updates the handler metadata associated with the provided handler ID. This endpoint does not perform partial updates, but will allow the dictionary to be omitted. In this case it uses the current dictionary.
This request updates the data source name to Marketing Data for the data source with the handler ID 67.
Request payload example
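A sketch of the update from Python, assuming a local payload file (here called example-payload.json) holds the complete data source metadata with the name changed to Marketing Data; the hostname and token are placeholders.

```python
import json
import requests

IMMUTA_URL = "https://your-immuta.example.com"  # placeholder Immuta hostname
HEADERS = {
    "Authorization": "Bearer <your-api-token>",  # placeholder credentials
    "Content-Type": "application/json",
}

# Assumed file with the complete metadata for handler 67, with the data source
# name set to "Marketing Data".
with open("example-payload.json") as f:
    payload = json.load(f)

resp = requests.put(f"{IMMUTA_URL}/presto/handler/67", json=payload, headers=HEADERS)
resp.raise_for_status()
print(resp.json())
```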
PUT
/presto/bulk
Updates the data source metadata associated with the provided connection string.
This request updates the metadata for all data sources with the connection string specified in example-payload.json. The payload below adds a certificate (certificate.json) to the data sources with the provided connection.
PUT
/presto/handler/{handlerId}/triggerHighCardinalityJob
Recalculates the high cardinality column for the provided handler ID.
The response returns a string of characters that identify the high cardinality job run.
The following request recalculates the high cardinality column for the provided handler ID.
The tables below describe the request parameters, payload attributes, and response attributes for the handler endpoints above.

Create data source payload attributes

Attribute | Type | Description | Required |
---|---|---|---|
private | boolean | When false, the data source will be publicly available in the Immuta UI. | Yes |
blobHandler | array[object] | A list of full URLs providing the locations of all blob store handlers to use with this data source. | Yes |
blobHandlerType | string | Describes the type of underlying blob handler that will be used with this data source (e.g., MS SQL). | Yes |
recordFormat | string | The data format of blobs in the data source, such as json, xml, html, or jpeg. | Yes |
type | string | The type of data source: ingested (metadata will exist in Immuta) or queryable (metadata is dynamically queried). | Yes |
name | string | The name of the data source. It must be unique within the Immuta instance. | Yes |
sqlTableName | string | A string that represents this data source's table in the Query Engine. | Yes |
organization | string | The organization that owns the data source. | Yes |
category | string | The category of the data source. | No |
description | string | The description of the data source. | No |
hasExamples | boolean | When true, the data source contains examples. | No |

Create data source response attributes

Attribute | Type | Description |
---|---|---|
id | integer | The handler ID. |
dataSourceId | integer | The ID of the data source. |
warnings | string | This message describes issues with the created data source, such as the data source being unhealthy. |
connectionString | string | The connection string used to connect the data source to Immuta. |

Get handler metadata parameters

Attribute | Type | Description | Required |
---|---|---|---|
handlerId | integer | The ID of the handler. | Yes |
skipCache | boolean | When true, will skip the handler cache when retrieving metadata. | No |

Get handler metadata response attributes

Attribute | Type | Description |
---|---|---|
body | array[object] | Metadata about the data source, including the data source ID, schema, database, and connection string. |

Update handler parameters

Attribute | Type | Description | Required |
---|---|---|---|
handlerId | integer | The ID of the handler. | Yes |
skipCache | boolean | When true, will skip the handler cache when retrieving metadata. | No |

Update handler payload attributes

Attribute | Type | Description | Required |
---|---|---|---|
handler | metadata | Includes metadata about the handler, such as ssl, port, database, hostname, username, and password. | Yes |
connectionString | string | The connection string used to connect to the data source. | Yes |

Update handler response attributes

Attribute | Type | Description |
---|---|---|
id | integer | The ID of the handler. |
ca | string | The certificate authority. |
columns | array[object] | This is a Data Dictionary object, which provides metadata about the columns in the data source, including the name and data type of the column. |
The data dictionary API allows you to manage the data dictionary for your data sources.
Additional fields may be included in some responses you receive; however, these attributes are for internal purposes and are therefore undocumented.
Method | Path | Purpose |
---|---|---|
POST | /dictionary/{dataSourceId} | Create the dictionary for the specified data source. |
PUT | /dictionary/{dataSourceId} | Update the dictionary for the specified data source. |
GET | /dictionary/{dataSourceId} | Get the dictionary for the specified data source. |
GET | /dictionary/columns | Search across all dictionary columns. |
DELETE | /dictionary/{dataSourceId} | Delete the data dictionary for the specified data source. |
POST
/dictionary/{dataSourceId}
Create the dictionary for the specified data source.
The following request creates a data dictionary (saved in example-payload.json) for the data source with ID 1.
Payload example
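For illustration, a minimal dictionary creation request might look like the following; the column names and types are placeholders, and the exact payload structure should be checked against the attribute table for this endpoint.

```python
import requests

IMMUTA_URL = "https://your-immuta.example.com"  # placeholder Immuta hostname
HEADERS = {
    "Authorization": "Bearer <your-api-token>",  # placeholder credentials
    "Content-Type": "application/json",
}

# Illustrative column metadata; names, data types, and remote types are
# placeholders.
payload = {
    "metadata": [
        {"name": "customer_id", "dataType": "integer", "remoteType": "bigint"},
        {"name": "address", "dataType": "text", "remoteType": "varchar"},
    ]
}

resp = requests.post(f"{IMMUTA_URL}/dictionary/1", json=payload, headers=HEADERS)
resp.raise_for_status()
print(resp.json())  # includes the dictionary id, metadata, and types
```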
Other status codes returned include:
PUT
/dictionary/{dataSourceId}
Update the dictionary for the specified data source.
The request below updates the data dictionary for the data source with the ID 1.
Payload example
Other status codes returned include:
GET
/dictionary/{dataSourceId}
Get the dictionary for the specified data source.
The request below gets the data dictionary for the data source with the ID 1.
GET
/dictionary/columns
Search across all dictionary columns.
The following request searches for columns in all dictionaries that contain the text address in their name, with a limit of 10 results.
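A sketch of that column search from Python; the hostname and token are placeholders, and the response handling assumes a JSON array of column entries.

```python
import requests

IMMUTA_URL = "https://your-immuta.example.com"  # placeholder Immuta hostname
HEADERS = {"Authorization": "Bearer <your-api-token>"}  # placeholder credentials

# searchText is matched with a wildcard prefix and suffix; limit caps results.
params = {"searchText": "address", "limit": 10}

resp = requests.get(f"{IMMUTA_URL}/dictionary/columns", params=params, headers=HEADERS)
resp.raise_for_status()
for column in resp.json():  # assumes a JSON array of matching columns
    print(column.get("columnName"), column.get("dataSourceId"))
```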
DELETE
/dictionary/{dataSourceId}
Delete the data dictionary for the specified data source.
The request below deletes the data dictionary for the data source with ID 1. This endpoint returns {} on success.
Other status codes returned include:
The snowflake endpoint allows you to connect and manage Snowflake data sources in Immuta.
Additional fields may be included in some responses you receive; however, these attributes are for internal purposes and are therefore undocumented.
Snowflake imported databases
Immuta does not support Snowflake tables from imported databases. Instead, create a view of the table and register that view as a data source.
**Duplicate data sources**: To prevent two data sources from referencing the same table, users cannot create duplicate data sources. If you attempt to create a duplicate data source using the API, you will receive a warning stating "duplicate tables are specified in the payload."
POST
/snowflake/handler
Save the provided connection information as a data source.
This request creates a Snowflake data source.
GET
/snowflake/handler/{handlerId}
Get the handler metadata associated with the provided handler ID.
This request returns metadata for the handler with the ID 30.
PUT
/snowflake/handler/{handlerId}
Update the data source metadata associated with the provided handler ID. This endpoint does not perform partial updates, but will allow the dictionary to be omitted. In this case, it uses the current dictionary.
This request updates the metadata for the data source with the handler ID 30.
Payload example
The payload below updates the eventTime to MOST_RECENT_ORDER.
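A minimal sketch of the update from Python, assuming example-payload.json holds the full metadata for handler 30 with eventTime set to MOST_RECENT_ORDER; the hostname and token are placeholders.

```python
import json
import requests

IMMUTA_URL = "https://your-immuta.example.com"  # placeholder Immuta hostname
HEADERS = {
    "Authorization": "Bearer <your-api-token>",  # placeholder credentials
    "Content-Type": "application/json",
}

# Assumed file with the complete metadata for handler 30 (partial updates are
# not supported), with eventTime set to MOST_RECENT_ORDER.
with open("example-payload.json") as f:
    payload = json.load(f)

resp = requests.put(f"{IMMUTA_URL}/snowflake/handler/30", json=payload, headers=HEADERS)
resp.raise_for_status()
print(resp.json())
```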
PUT
/snowflake/bulk
Update the data source metadata associated with the provided connection string.
This request updates the metadata for all data sources with the connection string specified in example-payload.json.
Payload example
The payload below updates the database to ANALYST_DEMO for the provided connection string.
PUT
/snowflake/handler/{handlerId}/triggerHighCardinalityJob
Recalculate the high cardinality column for the specified data source.
The response returns a string of characters that identify the high cardinality job run.
This request re-runs the job that calculates the high cardinality column for the data source with the handler ID 30.
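A minimal sketch of triggering the recalculation from Python; the hostname and token are placeholders.

```python
import requests

IMMUTA_URL = "https://your-immuta.example.com"  # placeholder Immuta hostname
HEADERS = {"Authorization": "Bearer <your-api-token>"}  # placeholder credentials

resp = requests.put(
    f"{IMMUTA_URL}/snowflake/handler/30/triggerHighCardinalityJob", headers=HEADERS
)
resp.raise_for_status()
print(resp.text)  # a string identifying the high cardinality job run
```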
PUT
/snowflake/handler/{handlerId}/refreshNativeViewJob
Refresh the native view of a data source.
The response returns a string of characters that identifies the refresh view job run.
This request refreshes the view for the data source with the handler ID 7.
The trino endpoint allows you to connect and manage Trino data sources in Immuta.
Additional fields may be included in some responses you receive; however, these attributes are for internal purposes and are therefore undocumented.
**Duplicate data sources**: To prevent two data sources from referencing the same table, users cannot create duplicate data sources. If you attempt to create a duplicate data source using the API, you will receive a warning stating "duplicate tables are specified in the payload."
POST
/trino/handler
Save the provided connection information as a data source.
This request creates a Trino data source.
GET
/trino/handler/{handlerId}
Get the handler metadata associated with the provided handler ID.
This request returns metadata for the handler with the ID 1.
PUT
/trino/handler/{handlerId}
Update the data source metadata associated with the provided handler ID. This endpoint does not perform partial updates, but will allow the dictionary to be omitted. In this case, it uses the current dictionary.
This request updates the data source name to Marketing Data for the data source with the handler ID 1.
PUT
/trino/bulk
Update the data source metadata associated with the provided connection string.
This request updates the metadata for all data sources with the connection string specified in example-payload.json. The payload below adds a certificate (certificate.json) to the data sources with the provided connection.
PUT
/trino/handler/{handlerId}/triggerHighCardinalityJob
Recalculate the high cardinality column for the specified data source.
The response returns a string of characters that identify the high cardinality job run.
This request re-runs the job that calculates the high cardinality column for the data source with the handler ID 30.
PUT
/trino/handler/{handlerId}/refreshNativeViewJob
Refresh the native view of a data source.
The response returns a string of characters that identifies the refresh view job run.
This request refreshes the view for the data source with the handler ID 7.
The redshift endpoint allows you to connect and manage Redshift data sources in Immuta.
Additional fields may be included in some responses you receive; however, these attributes are for internal purposes and are therefore undocumented.
**Duplicate data sources**: To prevent two data sources from referencing the same table, users cannot create duplicate data sources. If you attempt to create a duplicate data source using the API, you will receive a warning stating "duplicate tables are specified in the payload."
POST
/redshift/handler
Save the provided connection information as a data source.
This request creates two Redshift data sources, which are specified in example-payload.json.
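A sketch of sending that request from Python; the hostname and token are placeholders, and example-payload.json is assumed to contain the two data source definitions.

```python
import json
import requests

IMMUTA_URL = "https://your-immuta.example.com"  # placeholder Immuta hostname
HEADERS = {
    "Authorization": "Bearer <your-api-token>",  # placeholder credentials
    "Content-Type": "application/json",
}

# example-payload.json is assumed to hold the Redshift connection details and
# the tables to register as data sources.
with open("example-payload.json") as f:
    payload = json.load(f)

resp = requests.post(f"{IMMUTA_URL}/redshift/handler", json=payload, headers=HEADERS)
resp.raise_for_status()
print(resp.json())
```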
GET
/redshift/handler/{handlerId}
Get the handler metadata associated with the provided handler ID.
This request returns metadata for the handler with the ID 41.
PUT
/redshift/handler/{handlerId}
Update the data source metadata associated with the provided handler ID. This endpoint does not perform partial updates, but will allow the dictionary to be omitted. In this case, it uses the current dictionary.
This request updates the metadata for the data source with the handler ID 41.
Payload example
The payload below removes the paragraph_count column from the data source.
PUT
/redshift/bulk
Update the data source metadata associated with the provided connection string.
This request updates the metadata for all data sources with the connection string specified in example-payload.json.
Payload example
The payload below adds a certificate (certificate.json) to connect to the data sources with the provided connection string.
PUT
/redshift/handler/{handlerId}/triggerHighCardinalityJob
Recalculate the high cardinality column for the specified data source.
The response returns a string of characters that identify the high cardinality job run.
This request re-runs the job that calculates the high cardinality column for the data source with the handler ID 41.
PUT
/redshift/handler/{handlerId}/refreshNativeViewJob
Refresh the native view of a data source.
The response returns a string of characters that identifies the refresh view job run.
This request refreshes the view for the data source with the handler ID 7.
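A minimal sketch of refreshing the view from Python; the hostname and token are placeholders.

```python
import requests

IMMUTA_URL = "https://your-immuta.example.com"  # placeholder Immuta hostname
HEADERS = {"Authorization": "Bearer <your-api-token>"}  # placeholder credentials

resp = requests.put(
    f"{IMMUTA_URL}/redshift/handler/7/refreshNativeViewJob", headers=HEADERS
)
resp.raise_for_status()
print(resp.text)  # a string identifying the refresh view job run
```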
POST /dictionary/{dataSourceId} and PUT /dictionary/{dataSourceId} payload attributes

Attribute | Type | Description | Required |
---|---|---|---|
dataSourceId | integer | The ID of the data source that will contain the data dictionary. | Yes |
body | array[object] | Data dictionary metadata, including column names, data types, description, and tags. | Yes |
metadata | array[string] | Metadata for each column in the dictionary. | Yes |
metadata.name | string | The name of the column. | Yes |
metadata.dataType | string | The type of data in the column. | Yes |
metadata.remoteType | string | The type of data in the remote column. | Yes |

POST /dictionary/{dataSourceId} and PUT /dictionary/{dataSourceId} response attributes

Attribute | Type | Description |
---|---|---|
createdAt | timestamp | When the object was created. |
dataSource | integer | The ID of the data source the dictionary is associated with. |
id | integer | The ID of the dictionary object. |
metadata | array[string] | Metadata for the individual fields in the dictionary, including name, dataType, and remoteType. |
types | array[string] | A list of all data types the dictionary contains, such as text, integer, json, or timestamp with time zone. |

Status codes

Status Code | Message |
---|---|
400 | Bad request: (detailed reason). |
401 | A valid Authorization token must be provided. |
403 | User must have one of the following roles to delete dictionary: owner,expert. |
404 | Data source not found. |
GET /dictionary/{dataSourceId} parameters

Attribute | Type | Description | Required |
---|---|---|---|
dataSourceId | integer | The ID of the data source that contains the data dictionary. | Yes |

GET /dictionary/{dataSourceId} response attributes

Attribute | Type | Description |
---|---|---|
createdAt | timestamp | When the object was created. |
dataSource | integer | The ID of the data source the dictionary is associated with. |
id | integer | The ID of the dictionary object. |
metadata | array[string] | Metadata for the individual fields in the dictionary, including name, dataType, and remoteType. |
types | array[string] | A list of all data types the dictionary contains, such as text, integer, json, or timestamp with time zone. |

GET /dictionary/columns parameters

Attribute | Type | Description | Required |
---|---|---|---|
searchText | string | A string used to filter returned columns. The query is executed with a wildcard prefix and suffix. | No |
limit | integer | The maximum number of search results that will be returned. Default is 10. | No |

GET /dictionary/columns response attributes

Attribute | Type | Description |
---|---|---|
columnName | string | The name of the column. |
dataSourceId | integer | The ID of the data source. |

Status codes

Status Code | Message |
---|---|
401 | A valid Authorization token must be provided. |
403 | User must have one of the following roles to delete dictionary: owner,expert. |
404 | Data source not found. |
POST /snowflake/handler payload attributes

Attribute | Type | Description | Required |
---|---|---|---|
private | boolean | When false, the data source will be publicly available in the Immuta UI. | Yes |
blobHandler | array[object] | The parameters for this array include scheme ("https") and url (an empty string). | Yes |
blobHandlerType | string | Describes the type of underlying blob handler that will be used with this data source (e.g., MS SQL). | Yes |
recordFormat | string | The data format of blobs in the data source, such as json, xml, html, or jpeg. | Yes |
type | string | The type of data source: ingested (metadata will exist in Immuta) or queryable (metadata is dynamically queried). | Yes |
name | string | The name of the data source. It must be unique within the Immuta tenant. | Yes |
sqlTableName | string | A string that represents this data source's table in Immuta. | Yes |
organization | string | The organization that owns the data source. | Yes |
category | string | The category of the data source. | No |
description | string | The description of the data source. | No |
hasExamples | boolean | When true, the data source contains examples. | No |

POST /snowflake/handler response attributes

Attribute | Type | Description |
---|---|---|
id | integer | The handler ID. |
dataSourceId | integer | The ID of the data source. |
warnings | string | This message describes issues with the created data source, such as the data source being unhealthy. |
connectionString | string | The connection string used to connect the data source to Immuta. |

GET /snowflake/handler/{handlerId} parameters

Attribute | Type | Description | Required |
---|---|---|---|
handlerId | integer | The ID of the handler. | Yes |
skipCache | boolean | When true, will skip the handler cache when retrieving metadata. | No |

GET /snowflake/handler/{handlerId} response attributes

Attribute | Type | Description |
---|---|---|
body | array[object] | Metadata about the data source, including the data source ID, schema, database, and connection string. |

PUT /snowflake/handler/{handlerId} parameters

Attribute | Type | Description | Required |
---|---|---|---|
handlerId | integer | The ID of the handler. | Yes |
skipCache | boolean | When true, will skip the handler cache when retrieving metadata. | No |

PUT /snowflake/handler/{handlerId} payload attributes

Attribute | Type | Description | Required |
---|---|---|---|
handler | metadata | Includes metadata about the handler, such as ssl, port, database, hostname, username, and password. | Yes |
connectionString | string | The connection string used to connect to the data source. | Yes |

PUT /snowflake/handler/{handlerId} response attributes

Attribute | Type | Description |
---|---|---|
id | integer | The ID of the handler. |
ca | string | The certificate authority. |
columns | array[object] | This is a Data Dictionary object, which provides metadata about the columns in the data source, including the name and data type of the column. |

PUT /snowflake/bulk payload attributes

Attribute | Type | Description | Required |
---|---|---|---|
handler | metadata | Includes metadata about the handler, such as ssl, port, database, hostname, username, and password. | Yes |
connectionString | string | The connection string used to connect to the data sources. | Yes |

PUT /snowflake/bulk response attributes

Attribute | Type | Description |
---|---|---|
bulkId | string | The ID of the bulk data source update. |
connectionString | string | The connection string shared by the data sources bulk updated. |
jobsCreated | integer | The number of jobs that ran to update the data sources; this number corresponds to the number of data sources updated. |

PUT /snowflake/handler/{handlerId}/triggerHighCardinalityJob parameters

Attribute | Type | Description | Required |
---|---|---|---|
handlerId | integer | The ID of the handler. | Yes |

PUT /snowflake/handler/{handlerId}/refreshNativeViewJob parameters

Attribute | Type | Description | Required |
---|---|---|---|
handlerId | integer | The ID of the handler. | Yes |
POST /trino/handler payload attributes

Attribute | Type | Description | Required |
---|---|---|---|
private | boolean | When false, the data source will be publicly available in the Immuta UI. | Yes |
blobHandler | array[object] | The parameters for this array include scheme ("https") and url (an empty string). | Yes |
blobHandlerType | string | Describes the type of underlying blob handler that will be used with this data source (e.g., MS SQL). | Yes |
recordFormat | string | The data format of blobs in the data source, such as json, xml, html, or jpeg. | Yes |
type | string | The type of data source: ingested (metadata will exist in Immuta) or queryable (metadata is dynamically queried). | Yes |
name | string | The name of the data source. It must be unique within the Immuta tenant. | Yes |
sqlTableName | string | A string that represents this data source's table in Immuta. | Yes |
organization | string | The organization that owns the data source. | Yes |
category | string | The category of the data source. | No |
description | string | The description of the data source. | No |
hasExamples | boolean | When true, the data source contains examples. | No |

POST /trino/handler response attributes

Attribute | Type | Description |
---|---|---|
id | integer | The handler ID. |
dataSourceId | integer | The ID of the data source. |
warnings | string | This message describes issues with the created data source, such as the data source being unhealthy. |
connectionString | string | The connection string used to connect the data source to Immuta. |

GET /trino/handler/{handlerId} parameters

Attribute | Type | Description | Required |
---|---|---|---|
handlerId | integer | The ID of the handler. | Yes |
skipCache | boolean | When true, will skip the handler cache when retrieving metadata. | No |

GET /trino/handler/{handlerId} response attributes

Attribute | Type | Description |
---|---|---|
body | array[object] | Metadata about the data source, including the data source ID, schema, database, and connection string. |

PUT /trino/handler/{handlerId} parameters

Attribute | Type | Description | Required |
---|---|---|---|
handlerId | integer | The ID of the handler. | Yes |
skipCache | boolean | When true, will skip the handler cache when retrieving metadata. | No |

PUT /trino/handler/{handlerId} payload attributes

Attribute | Type | Description | Required |
---|---|---|---|
handler | metadata | Includes metadata about the handler, such as ssl, port, database, hostname, username, and password. | Yes |
connectionString | string | The connection string used to connect to the data source. | Yes |

PUT /trino/handler/{handlerId} response attributes

Attribute | Type | Description |
---|---|---|
id | integer | The ID of the handler. |
ca | string | The certificate authority. |
columns | array[object] | This is a Data Dictionary object, which provides metadata about the columns in the data source, including the name and data type of the column. |

PUT /trino/bulk payload attributes

Attribute | Type | Description | Required |
---|---|---|---|
handler | metadata | Includes metadata about the handler, such as ssl, port, database, hostname, username, and password. | Yes |
connectionString | string | The connection string used to connect to the data sources. | Yes |

PUT /trino/bulk response attributes

Attribute | Type | Description |
---|---|---|
bulkId | string | The ID of the bulk data source update. |
connectionString | string | The connection string shared by the data sources bulk updated. |
jobsCreated | integer | The number of jobs that ran to update the data sources; this number corresponds to the number of data sources updated. |

PUT /trino/handler/{handlerId}/triggerHighCardinalityJob parameters

Attribute | Type | Description | Required |
---|---|---|---|
handlerId | integer | The ID of the handler. | Yes |

PUT /trino/handler/{handlerId}/refreshNativeViewJob parameters

Attribute | Type | Description | Required |
---|---|---|---|
handlerId | integer | The ID of the handler. | Yes |
POST /redshift/handler payload attributes

Attribute | Type | Description | Required |
---|---|---|---|
private | boolean | When false, the data source will be publicly available in the Immuta UI. | Yes |
blobHandler | array[object] | A list of full URLs providing the locations of all blob store handlers to use with this data source. | Yes |
blobHandlerType | string | Describes the type of underlying blob handler that will be used with this data source (e.g., MS SQL). | Yes |
recordFormat | string | The data format of blobs in the data source, such as json, xml, html, or jpeg. | Yes |
type | string | The type of data source: ingested (metadata will exist in Immuta) or queryable (metadata is dynamically queried). | Yes |
name | string | The name of the data source. It must be unique within the Immuta tenant. | Yes |
sqlTableName | string | A string that represents this data source's table in Immuta. | Yes |
organization | string | The organization that owns the data source. | Yes |
category | string | The category of the data source. | No |
description | string | The description of the data source. | No |
hasExamples | boolean | When true, the data source contains examples. | No |

POST /redshift/handler response attributes

Attribute | Type | Description |
---|---|---|
id | integer | The handler ID. |
dataSourceId | integer | The ID of the data source. |
warnings | string | This message describes issues with the created data source, such as the data source being unhealthy. |
connectionString | string | The connection string used to connect the data source to Immuta. |

GET /redshift/handler/{handlerId} parameters

Attribute | Type | Description | Required |
---|---|---|---|
handlerId | integer | The ID of the handler. | Yes |
skipCache | boolean | When true, will skip the handler cache when retrieving metadata. | No |

GET /redshift/handler/{handlerId} response attributes

Attribute | Type | Description |
---|---|---|
body | array[object] | Metadata about the data source, including the data source ID, schema, database, and connection string. |

PUT /redshift/handler/{handlerId} parameters

Attribute | Type | Description | Required |
---|---|---|---|
handlerId | integer | The ID of the handler. | Yes |
skipCache | boolean | When true, will skip the handler cache when retrieving metadata. | No |

PUT /redshift/handler/{handlerId} payload attributes

Attribute | Type | Description | Required |
---|---|---|---|
handler | metadata | Includes metadata about the handler, such as ssl, port, database, hostname, username, and password. | Yes |
connectionString | string | The connection string used to connect to the data source. | Yes |

PUT /redshift/handler/{handlerId} response attributes

Attribute | Type | Description |
---|---|---|
id | integer | The ID of the handler. |
ca | string | The certificate authority. |
columns | array[object] | This is a Data Dictionary object, which provides metadata about the columns in the data source, including the name and data type of the column. |

PUT /redshift/bulk payload attributes

Attribute | Type | Description | Required |
---|---|---|---|
handler | metadata | Includes metadata about the handler, such as ssl, port, database, hostname, username, and password. | Yes |
connectionString | string | The connection string used to connect to the data sources. | Yes |

PUT /redshift/bulk response attributes

Attribute | Type | Description |
---|---|---|
bulkId | string | The ID of the bulk data source update. |
connectionString | string | The connection string shared by the data sources bulk updated. |
jobsCreated | integer | The number of jobs that ran to update the data sources; this number corresponds to the number of data sources updated. |

PUT /redshift/handler/{handlerId}/triggerHighCardinalityJob parameters

Attribute | Type | Description | Required |
---|---|---|---|
handlerId | integer | The ID of the handler. | Yes |

PUT /redshift/handler/{handlerId}/refreshNativeViewJob parameters

Attribute | Type | Description | Required |
---|---|---|---|
handlerId | integer | The ID of the handler. | Yes |