Payload Attribute Details
Audience: Data Engineers
Content Summary: This page contains details and examples of payload attributes for creating data sources.
connectionKey
The connectionKey is a unique identifier for the collection of data sources being created. If an existing connectionKey is used with new connection information, it will delete the old data sources and create new ones from the new information in the payload.
connection
handler (string): The handler for the connection. Supported handlers include Databricks, Google BigQuery, Presto, Redshift, Snowflake, and Trino.
ssl (boolean): Set to true to enable SSL communication with the remote database.
database (string): The database name.
schema (string): The schema in the remote database.
userFiles (array): An array of objects; each object must have keyName (corresponds to an ODBC connection string option), content (base64-encoded content), and userFilename (the name of the file, for display purposes in the app).
connectionStringOptions (string): Additional ODBC connection string options to be used when connecting to the remote database.
hostname (string): The hostname of the remote database instance.
port (number): The port of the remote database instance.
authenticationMethod (string): The method used to authenticate to the remote database. Supported values vary by handler; see Special Cases below.
username (string): The username used to connect to the remote database.
password (string): The password used to connect to the remote database.
Special Cases
Athena: Also requires region and queryResultLocationBucket. queryResultLocationDirectory is optional. authenticationMethod can be none, accessKey (default: username = access key, password = secret key), or instanceRole.
BigQuery: Does not require hostname or password. Requires sid, which is the GCP project ID, and userFiles with the keyName of KeyFilePath and the base64-encoded keyfile.json.
Databricks: Also requires httpPath. No username is required.
Hadoop: authenticationMethod can be none, userPassword, hdInsight, kerberos, or kerberosHdInsight.
Trino: authenticationMethod can be No Authentication, LDAP Authentication, or Kerberos Authentication.
Snowflake: Also requires warehouse. authenticationMethod can be userPassword or PRIV_KEY_FILE. If using PRIV_KEY_FILE, do not specify a password; userFiles is required with the keyName of PRIV_KEY_FILE and the base64-encoded Snowflake key.
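For illustration only, a connectionKey and connection block for a Snowflake handler might look like the sketch below; the hostname, database, warehouse, and credential values are placeholders, not values from this page.

```json
{
  "connectionKey": "snowflake-analytics",
  "connection": {
    "handler": "Snowflake",
    "hostname": "example.snowflakecomputing.com",
    "port": 443,
    "ssl": true,
    "database": "ANALYTICS",
    "schema": "TPC",
    "warehouse": "COMPUTE_WH",
    "authenticationMethod": "userPassword",
    "username": "immuta_system_account",
    "password": "<password>"
  }
}
```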
nameTemplate
dataSourceFormat (string): Format to be used to name the data sources created in this group.
schemaFormat (string): Format to be used to name the Immuta schema created in this group.
tableFormat (string): Format to be used to name the Immuta table created in this group.
schemaProjectNameFormat (string): Format to be used to name the Immuta schema project created in this group.
Available templates include <tablename>, <schema>, and <database>. All names in Immuta should be lowercase.
For example, consider a table TPC.CUSTOMER that is given the following nameTemplate:
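The original example payload is not reproduced on this page; a template along these lines (the values are assumed, for illustration) would give the result described below.

```json
{
  "nameTemplate": {
    "dataSourceFormat": "<schema>.<tablename>",
    "tableFormat": "<tablename>",
    "schemaFormat": "<schema>",
    "schemaProjectNameFormat": "<schema>"
  }
}
```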
This nameTemplate will produce a data source named tpc.customer in a schema project named tpc.
options
staleDataTolerance (integer): The length in seconds that data for these sources can be cached.
expiration (date): Date that the data source should be purged from Immuta. Defaults to no expiration.
disableSensitiveDataDiscovery
domainCollectionId
hardDelete (boolean): If true, when the table backing the data source is no longer available, the data source in Immuta is deleted. If false, the data source will be disabled. Default: false.
tableTags (array): An array of tags (strings) to place at the data source level on every data source.
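As a sketch, an options block combining these attributes might look like the following; the specific values are illustrative and are not defaults stated on this page.

```json
{
  "options": {
    "staleDataTolerance": 86400,
    "hardDelete": true,
    "tableTags": ["finance", "quarterly-report"]
  }
}
```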
owners
type (group or user): The type of owner that is being added.
name (string): The name of the group or the user (the username they log in with).
iam (string, optional): The ID of the identity manager system the user or group comes from. If excluded, any user/group that matches will be added as an owner.
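A minimal owners array might look like the sketch below; the user, group, and IAM names are invented for illustration.

```json
{
  "owners": [
    { "type": "user", "name": "jane.doe@example.com", "iam": "okta" },
    { "type": "group", "name": "data-engineering" }
  ]
}
```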
sources
Best practice: Use Subscription Policies to Control Access
If you are not tagging individual columns, omit sources to create data sources for all tables in the schema or database, and then use Subscription Policies to control access to the tables instead of excluding them from Immuta.
This attribute configures which sources are created. If sources is not provided, all sources from the given connection will be created. There are three types of sources that can be specified:
Recommended: Specify All Tables
If you specify any sources (either tables or queries), but you still want to create data sources for the rest of the tables in the schema or database, you can specify all as a source:
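The exact payload is not shown on this page, but based on the description above it would look roughly like this sketch:

```json
{
  "sources": [
    { "all": true }
  ]
}
```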
Best practice: Use schema monitoring
Excluding sources or specifying all: true will turn on automatic schema monitoring in Immuta. As tables are added or removed, Immuta will look for those changes on a schedule (by default, once a day) and either disable or delete data sources for removed tables or create data sources for new tables. New tables will be tagged New so that you can build a policy to restrict access to new tables until they are evaluated by data owners. Data owners will be notified of new tables, and all subscribers will be notified if data sources are disabled or deleted.
Specify a Query
Immuta recommends creating a view in your native database instead of using this option, but if that is not possible, you can create data sources based on SQL statements:
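As a rough sketch (the SQL statement and the surrounding structure are assumptions, not an example from this page), a query-based source entry could look like:

```json
{
  "sources": [
    {
      "query": "SELECT customer_id, region FROM tpc.customer WHERE active = true"
    }
  ]
}
```

Because a query has no table name of its own, you will likely also supply the naming option described under Additional Options below.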
Specify a Table
If you want to select specific tables to be created as data sources, or if you want to tag individual data sources or columns within a data source, you need to leverage this parameter:
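A hedged sketch of a table entry with table- and column-level tags; the table, schema, and tag names are invented for illustration.

```json
{
  "sources": [
    {
      "table": "customer",
      "schema": "tpc",
      "tags": {
        "table": ["Finance"],
        "columns": [
          { "columnName": "ssn", "tags": ["PII"] }
        ]
      }
    }
  ]
}
```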
Additional Options
When specifying a table or query, there are other options that can be specified:
columnDescriptions: Add descriptions to specific columns (see Column Descriptions below).
description: A short description for the data source.
documentation: Markdown-supported documentation for the data source.
naming
owners: Specify owners for an individual data source. The payload is the same as owners at the root level.
tags: Add tags to the data source or its columns (see Tags below).
Columns
If any columns are specified, those are the only columns that will be available in the data source.
If no columns are specified, Immuta will look for new or removed columns on a schedule (by default, once a day) and add or remove columns from the data sources automatically as needed.
New columns will be tagged New, so you can build a policy to automatically mask new columns until they are approved. Data owners will be notified when columns are added or removed.
columns is an array of objects, one for each column:
name: The column name.
dataType: The data type.
nullable: Whether or not the column can contain null values.
remoteType: The actual data type in the remote database.
primaryKey: Whether this is the primary key of the remote table.
description: A description of the column.
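A sketch of a columns array; the column names, types, and descriptions are illustrative rather than taken from this page.

```json
{
  "columns": [
    {
      "name": "customer_id",
      "dataType": "integer",
      "nullable": false,
      "remoteType": "NUMBER(38,0)",
      "primaryKey": true,
      "description": "Unique identifier for the customer."
    },
    {
      "name": "region",
      "dataType": "text",
      "nullable": true,
      "remoteType": "VARCHAR(64)",
      "primaryKey": false
    }
  ]
}
```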
Column Descriptions
You can add descriptions to columns without having to specify all the columns in the data source. columnDescriptions is an array of objects with the following schema:
columnName (string): The column name.
description (string): The description of the column.
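Following that schema, a columnDescriptions array might look like this; the column names and description text are made up.

```json
{
  "columnDescriptions": [
    { "columnName": "ssn", "description": "Social security number." },
    { "columnName": "region", "description": "Sales region code." }
  ]
}
```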
Tags
You can add tags to columns or data sources. tags is an object with the following schema:
table (array): An array of tags (strings) to add to this table.
columns (array): An array of objects, each specifying columnName (string) and tags (an array of tags).
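Following that schema, a tags object might look like the sketch below; the tag names are invented for illustration.

```json
{
  "tags": {
    "table": ["Finance", "Quarterly Report"],
    "columns": [
      { "columnName": "ssn", "tags": ["PII", "Sensitive"] }
    ]
  }
}
```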