Reduces complexity: The data source API has been simplified: in most instances only connection information is required, and a single endpoint serves all database technologies.
Maintains less state: Whether updating or creating, the same endpoint is used and the same data is passed. No IDs are required, so there is no additional state to track.
Requires fewer steps: Only an API key is required; no additional authentication step is needed before using the API.
Integrates with Git: Define data sources and policies in files that can be tracked in Git and easily pushed to Immuta. Both JSON and YAML are supported for more flexibility. (For example, use YAML to add comments in files.)
Before using the Immuta API, users need to authenticate with an API key. To generate an API key, complete the following steps in the Immuta UI.
1. Click your initial in the top right corner of the screen and select Profile.
2. Go to the API Keys tab and then click Generate Key.
3. Complete the required fields in the modal and click Create.
Pass the key that is provided in the Authorization header:
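For example, with curl (the hostname, key, and payload file below are placeholders):

```bash
# Every V2 endpoint accepts the same Authorization header.
curl -X POST "https://<your-immuta-host>/api/v2/data" \
     -H "Authorization: <your-api-key>" \
     -H "Content-Type: application/json" \
     -d @datasource.json
```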
All of the API endpoints described below take either JSON or YAML, and the endpoint and payload are the same for both creating and updating data sources, policies, projects, and purposes.
The V2 API is built to easily enable an “as-code” approach to managing your data sources, so each time you POST data to this endpoint, you are expected to provide complete details of what you want in Immuta. The two examples below illustrate this design:
If you POST once explicitly defining a single table under sources, and then POST a second time with a different table, this will result in a single data source in Immuta pointing to the second table, and the first data source will be deleted or disabled (depending on the value specified for hardDelete).
If you POST once with two tableTags specified (e.g., Tag.A and Tag.B) and do a follow-up POST with tableTags: [Tag.C], only Tag.C will exist on all of the tables specified; tags Tag.A and Tag.B will be removed from all the data sources. Note: If you are frequently using the v2 API to update data tags, consider using the custom REST catalog integration instead.
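A sketch of the second scenario in YAML (the connectionKey is a placeholder, and the required connection block is elided for brevity):

```yaml
# First POST: every table under this connectionKey gets Tag.A and Tag.B.
connectionKey: my-snowflake
options:
  tableTags: [Tag.A, Tag.B]
---
# Second POST: a full desired state, not a diff. Tag.A and Tag.B are
# removed, and only Tag.C remains on all of the tables.
connectionKey: my-snowflake
options:
  tableTags: [Tag.C]
```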
Through this endpoint, you can create or update all data sources for a given schema or database.
POST /api/v2/data
dryRun (boolean): If true, no updates will actually be made. Default: false.
wait (number): The number of seconds to wait for data sources to be created before returning. Anything less than 0 will wait indefinitely. Default: 0.
connectionKey (string): A key/name to uniquely identify this collection of data sources.
connection (object): Connection information.
nameTemplate (object): Supply a template to override naming conventions. Immuta will use the system default if not supplied.
options (object): Override options for these sources. If not provided, system defaults will all be used.
owners (object): Specify owners for all data sources created. If an empty array is provided, all data owners (other than the calling user) will be removed from the data source. If the element is completely missing from the payload, data owners will not be modified, which allows an external process (or the UI) to control data owners.
sources (object): Configure which sources are created. If not provided, all sources from the given connection will be created.
Note: See Create Data Source Payload Attribute Details for more details about these attributes.
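A minimal sketch of a payload for this endpoint, assuming a Snowflake connection; all values are placeholders, and the full connection and sources schemas are covered in the attribute details below:

```yaml
connectionKey: analytics-snowflake
wait: -1                       # block until the data sources are created
connection:
  handler: Snowflake
  hostname: example.snowflakecomputing.com
  warehouse: IMMUTA_WH
  database: ANALYTICS
  authenticationMethod: userPassword
  username: immuta_system
  password: <redacted>
options:
  hardDelete: true
```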
POST /api/v2/policy
Requirements:
Immuta permission GOVERNANCE
dryRun (boolean): If true, no updates will actually be made. Default: false.
reCertify (boolean): If true (and if the certification has changed), someone will need to re-certify this policy on all impacted data sources. Default: false.
policyKey (string): A key/name to uniquely identify this policy.
name (string): The name of the policy.
type (subscription or data): The type of policy.
actions (object): The actual rules for this policy (see examples).
ownerRestrictions (object[], optional): Object identifying the entities to which this global policy should be restricted.
circumstances (object, optional): When this policy should be applied.
circumstanceOperator (all or any, optional): Specify whether "all" of the circumstances must be met for the policy to be applied, or just "any" of them.
staged (boolean, optional): Whether or not this global policy is in a staged status. Default: false.
certification (object, optional): Certification information for the global policy.
Note: See Policy Request Payload Examples for payload details.
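A rough sketch of the scalar fields above in YAML; the shape of actions varies by policy type and is not documented here, so it is left as a placeholder:

```yaml
policyKey: restrict-new-tables   # placeholder key
name: Restrict New Tables
type: subscription
staged: true                     # create in a staged status for review
actions:
  # The rule definitions go here; their shape depends on the policy type.
  # See the Policy Request Payload Examples for complete, working rules.
```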
POST /api/v2/project
dryRun (boolean): If true, no updates will actually be made. Default: false.
deleteDataSourcesOnWorkspaceDelete (boolean): If true, will delete all data and the data sources associated with a project workspace when the workspace is deleted. Default: false.
projectKey (string): A key/name to uniquely identify this project.
name (string): The name of the project.
description (string, optional): A short description for the project.
documentation (object, optional): Markdown-supported documentation for this project.
allowedMaskedJoins (boolean, optional): If true, will allow joining on masked columns between data sources in this project. Only certain policies allow masked joins. Default: false.
purposes (string[], optional): The list of purposes to add to this project.
datasources (string[], optional): The list of data sources to add to this project.
subscriptionPolicy (object, optional): The policy governing which users can subscribe to this project. Default: manual subscription policy.
workspace (object, optional): If this is a workspace project, this is the workspace configuration. The project will automatically be equalized.
equalization (boolean, optional): If true, will normalize all users to the same entitlements so that everyone sees the same data. Default: false.
tags (string[], optional): Tags to add to the project.
Note: See Project Request Payload Examples for payload details.
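A sketch of a project payload using the fields above; the names are placeholders and assume the purpose and data source already exist in Immuta:

```yaml
projectKey: fraud-analytics
name: Fraud Analytics
description: Shared workspace for the fraud team
purposes:
  - Fraud Detection
datasources:
  - tpc.customer
equalization: true
tags: [finance]
```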
POST /api/v2/purpose
dryRun (boolean): If true, no updates will actually be made. Default: false.
reAcknowledgeRequired (boolean): If true, will require all users of any projects using this purpose to re-acknowledge any updated acknowledgement statements. Default: false.
name (string): The name of the purpose.
description (string, optional): A short description for the purpose.
acknowledgement (string, optional): The acknowledgement that users must agree to when joining a project with this purpose. If not provided, the system default will be used.
kAnonNoiseReduction (string, optional): The level of reduction allowed when doing policy adjustments on data sources in projects with this purpose.
Note: See Purposes Request Payload Examples for payload details.
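A sketch of a purpose payload (the acknowledgement text is illustrative):

```yaml
name: Fraud Detection
description: Permits use of customer data for fraud investigations
acknowledgement: >
  I acknowledge that I will use this data solely to investigate
  suspected fraud.
```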
Register all tables in a schema by enabling schema monitoring. Schema monitoring negates the need to re-call the V2 /data endpoint when you have new tables, because it automatically recognizes and registers them.
To frequently update data tags on a data source, use the custom REST catalog integration instead of the V2 /data endpoint.
Use the Data engineering with limited policy downtime guide. Rather than relying on re-calling the V2 /data endpoint after a dbt run to update your data sources, follow the dbt and transform workflow and use schema monitoring to recognize changes to your data sources and reapply policies.
connectionKey
The connectionKey is a unique identifier for the collection of data sources being created. If an existing connectionKey is used with new connection information, the old data sources will be deleted and new ones will be created from the new information in the payload.
connection
handler (required): Snowflake
ssl (boolean, optional): Set to true to enable SSL communication with the remote database.
database (string, required): The database name.
schema (string, optional): The schema in the remote database.
hostname (string, required): The hostname of the remote database instance.
port (number, optional): The port of the remote database instance.
warehouse (string, required): The default pool of compute resources Immuta will use to run queries and other Snowflake operations.
connectionStringOptions (string, optional): Additional connection string options to be used when connecting to the remote database.
authenticationMethod (string, required): The type of authentication method to use. Options include userPassword, keyPair, and oAuthClientCredentials.
username (string, required if using userPassword or keyPair): The username used to connect to the remote database.
password (string, required if using userPassword): The password used to connect to the remote database.
useCertificate (boolean, required if using oAuthClientCredentials): Set to true when using client certificate credentials to request an access token. Otherwise, set to false to use a client secret.
userFiles (object, required if using keyPair or oAuthClientCredentials with useCertificate set to true): Details about the files required for the request. Each file object contains the following fields:
  keyName (string): The connection name of the key file. Must be PRIV_KEY_FILE if using keyPair, or oauth client certificate if using oAuthClientCredentials.
  content (string): The content of the file, base-64 encoded.
  userFilename (string): The name of the file, for display purposes in the UI.
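A sketch of a Snowflake connection block using keyPair authentication; all values are placeholders, and userFiles is shown as a list of file objects matching the field descriptions above:

```yaml
connection:
  handler: Snowflake
  hostname: example.snowflakecomputing.com
  warehouse: IMMUTA_WH
  database: ANALYTICS
  authenticationMethod: keyPair
  username: immuta_system
  userFiles:
    - keyName: PRIV_KEY_FILE        # must be PRIV_KEY_FILE for keyPair
      content: LS0tLS1CRUdJTi4uLg== # base-64 encoded private key (truncated)
      userFilename: rsa_key.p8
```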
handler (required): Databricks
ssl (boolean, optional): Set to true to enable SSL communication with the remote database.
database (string, optional): The database name.
hostname (string, required): The hostname of the remote database instance.
port (number, optional): The port of the remote database instance.
connectionStringOptions (string, optional): Additional connection string options to be used when connecting to the remote database.
authenticationMethod (string, required): The type of authentication method to use. Options include oAuthM2M and token.
token (string, required if using token authentication): The Databricks personal access token for the service principal created for Immuta.
useCertificate (boolean, required if using oAuthM2M): Set to true when using client certificate credentials to request an access token. Otherwise, set to false to use a client secret.
clientId (string, required if using oAuthM2M): The client identifier of the Immuta service principal you configured. This is the client ID displayed in Databricks when creating the client secret for the service principal.
audience (string, required if using oAuthM2M): The audience for the OAuth Client Credential token request.
clientSecret (string, required if using oAuthM2M and useCertificate is set to false): An application password an app can use in place of a certificate to identify itself.
certificateThumbprint (string, required if using oAuthM2M and useCertificate is set to true): The certificate thumbprint to use to generate the JWT for the OAuth Client Credential request.
scope (string, optional): The scope limits the operations and roles allowed in Databricks by the access token. See the OAuth 2.0 documentation for details about scopes.
httpPath (string, required): The HTTP path of your Databricks cluster or SQL warehouse.
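A sketch of a Databricks connection block using token authentication (all values are placeholders):

```yaml
connection:
  handler: Databricks
  hostname: adb-1234567890123456.7.azuredatabricks.net
  port: 443
  httpPath: /sql/1.0/warehouses/abc123def456
  authenticationMethod: token
  token: <service-principal-personal-access-token>
  ssl: true
```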
handler (required): Redshift
ssl (boolean, optional): Set to true to enable SSL communication with the remote database.
database (string, optional): The database name.
schema (string, required): The schema in the remote database.
connectionStringOptions (string, optional): Additional connection string options to be used when connecting to the remote database.
hostname (string, required): The hostname of the remote database instance.
port (number, optional): The port of the remote database instance.
authenticationMethod (string, required): The type of authentication method to use. Options include userPassword and okta.
username (string, required): The username used to connect to the remote database.
password (string, required): The password used to connect to the remote database.
idpHost (string, required if using okta): The Okta identity provider host URL.
appID (string, required if using okta): The Okta application ID.
role (string, required if using okta): The Okta role.
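A sketch of a Redshift connection block using okta authentication (all values are placeholders):

```yaml
connection:
  handler: Redshift
  hostname: examplecluster.abc123xyz.us-east-1.redshift.amazonaws.com
  schema: public
  authenticationMethod: okta
  username: jane.doe@example.com
  password: <redacted>
  idpHost: https://example.okta.com
  appID: 0oa1abcd2efGHIJkl345
  role: analyst
```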
handler: Google BigQuery, Presto, or Trino
ssl (boolean): Set to true to enable SSL communication with the remote database.
database (string): The database name.
schema (string): The schema in the remote database.
userFiles (array): Array of objects; each object must have keyName (corresponds to a connection string option), content (base-64 encoded content), and userFilename (the name of the file, for display purposes in the app).
connectionStringOptions (string): Additional connection string options to be used when connecting to the remote database.
hostname (string): The hostname of the remote database instance.
port (number): The port of the remote database instance.
authenticationMethod (string): The type of authentication method to use. Options include userPassword, keyPair, oAuthClientCredentials, token, oAuthM2M, keyFile, auto, and okta.
username (string): The username used to connect to the remote database.
password (string): The password used to connect to the remote database.
sid (string): For Google BigQuery, the BigQuery project used to build the connection string.
BigQuery: Does not require hostname and password. Requires sid, which is the GCP project ID, and userFiles with the keyName of KeyFilePath and the base64-encoded keyfile.json.
Trino: authenticationMethod can be No Authentication, LDAP Authentication, or Kerberos Authentication.
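A sketch of a BigQuery connection block following the note above; the project ID and encoded key are placeholders, and the authenticationMethod value is an assumption based on the options listed:

```yaml
connection:
  handler: Google BigQuery
  sid: my-gcp-project-id          # the GCP project ID
  authenticationMethod: keyFile   # assumption; see the options above
  userFiles:
    - keyName: KeyFilePath
      content: eyJ0eXBlIjogInNlcnZpY2VfYWNjb3VudCJ9  # base64 keyfile.json (truncated)
      userFilename: keyfile.json
```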
nameTemplate
dataSourceFormat (string): Format to be used to name the data sources created in this group.
schemaFormat (string): Format to be used to name the Immuta schema created in this group.
tableFormat (string): Format to be used to name the Immuta table created in this group.
schemaProjectNameFormat (string): Format to be used to name the Immuta schema project created in this group.
Available templates include <tablename>, <schema>, and <database>. In all cases, the name in Immuta will be lowercase.
For example, consider a table TPC.CUSTOMER that is given the following nameTemplate (the template values here are illustrative):
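```yaml
nameTemplate:
  dataSourceFormat: <schema>.<tablename>   # data source name: tpc.customer
  tableFormat: <tablename>                 # Immuta table name: customer
  schemaFormat: <schema>                   # Immuta schema name: tpc
  schemaProjectNameFormat: <schema>        # schema project name: tpc
```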
This nameTemplate will produce a data source named tpc.customer in a schema project named tpc.
options
staleDataTolerance (integer): The length in seconds that data for these sources can be cached.
disableSensitiveDataDiscovery (boolean): If true, Immuta will not perform sensitive data discovery. Default: false.
domainCollectionId (string): The ID of the domain to assign the data sources to. Use the GET /collection endpoint to retrieve domains and domain IDs.
hardDelete (boolean): If true, when the table backing the data source is no longer available, the data source in Immuta is deleted. If false, the data source will be disabled. Default: false.
tableTags (array): An array of tags (strings) to place at the data source level on every data source.
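A sketch of an options block (the values are illustrative):

```yaml
options:
  staleDataTolerance: 86400          # cache data for up to one day
  disableSensitiveDataDiscovery: false
  hardDelete: true                   # delete (rather than disable) removed sources
  tableTags: [Analytics, Snowflake]
```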
owners
type (group or user): The type of owner that is being added.
name (string): The name of the group or the user (the username they log in with).
iam (string, optional): The ID of the identity manager system the user or group comes from. If excluded, any user/group that matches will be added as an owner.
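A sketch of an owners array (the names and IAM ID are placeholders):

```yaml
owners:
  - type: user
    name: jane.doe@example.com   # the username they log in with
    iam: okta                    # optional; omit to match any identity manager
  - type: group
    name: data-engineering
```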
sources
Best practice: Use Subscription Policies to Control Access
If you are not tagging individual columns, omit sources to create data sources for all tables in the schema or database, and then use Subscription Policies to control access to the tables instead of excluding them from Immuta.
This attribute configures which sources are created. If sources is not provided, all sources from the given connection will be created.
There are three types of sources that can be specified: a table, a query, or all.
Specify All
If you specify any sources (either tables or queries), but you still want to create data sources for the rest of the tables in the schema or database, you can specify all as a source:
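A sketch (the table's field names are illustrative; see Specify a Table below):

```yaml
sources:
  - schema: TPC        # one explicitly defined table (illustrative shape)
    table: CUSTOMER
  - all: true          # plus data sources for every other table found
```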
Best practice: Use schema monitoring
Excluding sources or specifying all: true will turn on automatic schema monitoring in Immuta. As tables are added or removed, Immuta will look for those changes on a schedule (by default, once a day) and either disable or delete data sources for removed tables or create data sources for new tables. New tables will be tagged New so that you can build a policy to restrict access to new tables until they are evaluated by data owners. Data owners will be notified of new tables, and all subscribers will be notified if data sources are disabled or deleted.
Specify a Query
Immuta recommends creating a view in your native database instead of using this option, but if that is not possible, you can create data sources based on SQL statements:
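A sketch of a query-based source; the field names here (query and the naming key) are assumptions for illustration:

```yaml
sources:
  - query: SELECT customer_id, state FROM TPC.CUSTOMER WHERE active = true
    naming:                               # required for query-based sources
      dataSourceFormat: customer_active   # illustrative key and value
```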
Specify a Table
If you want to select specific tables to be created as data sources, or if you want to tag individual data sources or columns within a data source, you need to leverage this parameter:
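A sketch of a table source with tags; the schema and table field names are illustrative, and the tags shape follows the tags section below:

```yaml
sources:
  - schema: TPC
    table: CUSTOMER
    description: Customer master table
    tags:
      table: [Marketing]
      columns:
        - columnName: ssn
          tags: [PII]
```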
When specifying a table or query, there are other options that can be specified:
columnDescriptions: Add descriptions to individual columns (see columnDescriptions below).
description: A short description for the data source.
documentation: Markdown-supported documentation for the data source.
naming: See the example above in Specify a Query. This is required for query-based sources, but is optional for table-based sources and can be used to override the nameTemplate provided for the whole database/schema.
owners: Specify owners for an individual data source. The payload is the same as owners at the root level.
tags: Add tags to the data source or its columns (see tags below).
columns: Control which columns are available in the data source (see columns below).
columns
If any columns are specified, those are the only columns that will be available in the data source. If no columns are specified, Immuta will look for new or removed columns on a schedule (by default, once a day) and add or remove columns from the data sources automatically as needed. New columns will be tagged New, so you can build a policy to automatically mask new columns until they are approved. Data owners will be notified when columns are added or removed.
columns is an array of objects with the following fields for each column:
name: The column name.
dataType: The data type.
nullable: Whether or not the column contains null.
remoteType: The actual data type in the remote database.
primaryKey: Specify whether this is the primary key of the remote table.
description: Describe the column.
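A sketch of a columns array (the values, including the dataType strings, are illustrative):

```yaml
columns:
  - name: customer_id
    dataType: integer        # illustrative type string
    nullable: false
    primaryKey: true
    description: Unique customer identifier
  - name: state
    dataType: text
    nullable: true
```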
columnDescriptions
You can add descriptions to columns without having to specify all the columns in the data source. columnDescriptions is an array of objects with the following schema:
columnName (string): The column name.
description (string): The description of the column.
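For example (the column names and text are illustrative):

```yaml
columnDescriptions:
  - columnName: ssn
    description: Social Security number
  - columnName: state
    description: Two-letter state code
```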
tags
You can add tags to columns or data sources. tags is an object with the following schema:
table (array): An array of tags (strings) to add to this table.
columns (array): An array of objects, each specifying columnName (a string) and tags (an array of tags).
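For example (the tag and column names are illustrative):

```yaml
tags:
  table: [Marketing, Customer]
  columns:
    - columnName: ssn
      tags: [PII]
```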
Your nativeSchemaFormat must contain _immuta to avoid schema name conflicts.
Sample data is processed during computation of k-anonymization policies
When a k-anonymization policy is applied to a data source, the columns targeted by the policy are queried under a fingerprinting process that generates rules enforcing k-anonymity. The results of this query, which may contain data that is subject to regulatory constraints such as GDPR or HIPAA, are stored in Immuta's metadata database.
The location of the metadata database depends on your deployment:
Self-managed Immuta deployment: The metadata database is located on the server where your external metadata database is deployed.
SaaS Immuta deployment: The metadata database is located in the AWS global segment in which you have chosen to deploy Immuta.
To ensure this process does not violate your organization's data localization regulations, you need to first activate this masking policy type before you can use it in your Immuta tenant. To enable k-anonymization for your account, see the k-anonymization section on the app settings how-to guide.
Sample data is processed during computation of randomized response policies
When a randomized response policy is applied to a data source, the columns targeted by the policy are queried under a fingerprinting process. To enforce the policy, Immuta generates and stores predicates and a list of allowed replacement values that may contain data that is subject to regulatory constraints (such as GDPR or HIPAA) in Immuta's metadata database.
The location of the metadata database depends on your deployment:
Self-managed Immuta deployment: The metadata database is located on the server where your external metadata database is deployed.
SaaS Immuta deployment: The metadata database is located in the AWS global segment in which you have chosen to deploy Immuta.
To ensure this process does not violate your organization's data localization regulations, you need to first activate this masking policy type before you can use it in your Immuta tenant. To enable randomized response for your account, see the randomized response section on the app settings how-to guide.