# Immuta V2 API

## Policy as code benefits
- **Reduces complexity:** The data source API has been simplified to require only the connection information in most instances, with one endpoint for all database technologies.
- **Maintains less state:** Whether updating or creating, the same endpoint is used and the same data is passed. No IDs are required, so no additional state needs to be tracked.
- **Requires fewer steps:** Only an API key is required; no additional authentication step is needed before using the API.
- **Integrates with Git:** Define data sources and policies in files that can be tracked in Git and easily pushed to Immuta. Both JSON and YAML are supported for more flexibility. (For example, use YAML to add comments in files.)
## Authentication
Before using the Immuta API, users need to authenticate with an API key. To generate an API key, complete the following steps in the Immuta UI.
1. Click your initial in the top right corner of the screen and select **Profile**.
2. Go to the **API Keys** tab and then click **Generate Key**.
3. Complete the required fields in the modal and click **Create**.
Pass the generated key in the Authorization header:
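For example, with curl — the hostname is a placeholder for your Immuta instance, and `<your-api-key>` stands in for the generated key:

```bash
# Placeholders must be replaced with your own values before running.
curl -H "Authorization: <your-api-key>" \
  "https://<your-immuta-hostname>/api/v2/data"
```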
## Endpoints and details
All of the API endpoints described below take either JSON or YAML, and the endpoint and payload are the same for both creating and updating data sources, policies, projects, and purposes.
### Create a data source
The V2 API is built to easily enable an “as-code” approach to managing your data sources, so each time you POST data to this endpoint, you are expected to provide complete details of what you want in Immuta. The two examples below illustrate this design:
- If you POST once explicitly defining a single table under `sources`, and then POST a second time with a different table, the result is a single data source in Immuta pointing to the second table; the first data source will be deleted or disabled (depending on the value specified for `hardDelete`).
- If you POST once with two `tableTags` specified (e.g., `Tag.A` and `Tag.B`) and do a follow-up POST with `tableTags: [Tag.C]`, only `Tag.C` will exist on all of the tables specified; `Tag.A` and `Tag.B` will be removed from all of the data sources.

Note: If you are frequently using the V2 API to update data tags, consider using the custom REST catalog integration instead.
Through this endpoint, you can create or update all data sources for a given schema or database.
```
POST /api/v2/data
```

#### Query parameters

| Parameter | Description |
| --- | --- |
| `dryRun` | |
| `wait` | |
#### Payload

| Attribute | Description |
| --- | --- |
| `connectionKey` | |
Note: See Create Data Source Payload Attribute Details for more information about these attributes.
#### Request payload examples
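As a minimal sketch, the endpoint can be called with a JSON payload via curl. The hostname, API key, and `connectionKey` value below are hypothetical, and a complete payload would include the remaining attributes described in Create Data Source Payload Attribute Details:

```bash
# Placeholders must be replaced with your own values before running.
# dryRun is assumed here to preview the request without applying changes.
curl -X POST "https://<your-immuta-hostname>/api/v2/data?dryRun=true" \
  -H "Authorization: <your-api-key>" \
  -H "Content-Type: application/json" \
  -d '{"connectionKey": "analytics-warehouse"}'
```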
### Create a policy

```
POST /api/v2/policy
```

#### Query parameters

| Parameter | Description |
| --- | --- |
| `dryRun` | |
| `reCertify` | |
#### Payload

| Attribute | Description |
| --- | --- |
| `policyKey` | |
| `name` | |
| `type` | |
| `actions` | |
| `ownerRestrictions` (optional) | |
| `circumstances` (optional) | |
| `circumstanceOperator` (optional) | |
| `staged` (optional) | |
| `certification` (optional) | |
Note: See Policy Request Payload Examples for payload details.
#### Request payload examples
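A hedged YAML sketch of a policy payload using the attribute names from the table above; all values (and the shape of `actions`) are illustrative assumptions, so consult the Policy Request Payload Examples for the real schema:

```yaml
policyKey: mask-pii    # unique key; reuse it on later POSTs to update the policy
name: Mask PII
type: data             # assumed policy type value
staged: false          # optional; whether the policy is created inactive
actions:               # action structure varies by policy type (hypothetical)
  - type: masking
```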
### Create a project

```
POST /api/v2/project
```

#### Query parameters

| Parameter | Description |
| --- | --- |
| `dryRun` | |
| `deleteDataSourcesOnWorkspaceDelete` | |
#### Payload

| Attribute | Description |
| --- | --- |
| `projectKey` | |
| `name` | |
| `description` (optional) | |
| `documentation` (optional) | |
| `allowedMaskedJoins` (optional) | |
| `purposes` (optional) | |
| `datasources` (optional) | |
| `subscriptionPolicy` (optional) | |
| `workspace` (optional) | |
| `equalization` (optional) | |
| `tags` (optional) | |
Note: See Project Request Payload Examples for payload details.
#### Request payload examples
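A hedged YAML sketch of a project payload using the attribute names above; the values and list shapes are illustrative assumptions, so consult the Project Request Payload Examples for the real schema:

```yaml
projectKey: fraud-analytics   # unique key; reuse it on later POSTs to update
name: Fraud Analytics
description: Data sources and purposes for the fraud team   # optional
purposes:                     # optional; purpose names are hypothetical
  - Fraud Detection
datasources:                  # optional; data source names are hypothetical
  - Claims Data
```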
### Create a purpose

```
POST /api/v2/purpose
```

#### Query parameters

| Parameter | Description |
| --- | --- |
| `dryRun` | |
| `reAcknowledgeRequired` | |
#### Payload

| Attribute | Description |
| --- | --- |
| `name` | |
| `description` (optional) | |
| `acknowledgement` (optional) | |
| `kAnonNoiseReduction` (optional) | |
Note: See Purposes Request Payload Examples for payload details.
#### Request payload examples
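A hedged YAML sketch of a purpose payload using the attribute names above; the values are illustrative, so consult the Purposes Request Payload Examples for the real schema:

```yaml
name: Fraud Detection
description: Access for detecting fraudulent claims          # optional
acknowledgement: I will only use this data to detect fraud.  # optional
```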
## Best practices
- Register all tables in a schema by enabling schema monitoring. Schema monitoring removes the need to re-call the `/data` endpoint when you have new tables, because it automatically recognizes and registers them.
- To frequently update data tags on a data source, use the custom REST catalog integration instead of the `/data` endpoint.
- Use the Data engineering with limited policy downtime guide. Rather than relying on re-calling the `/data` endpoint after a dbt run to update your data sources, follow the dbt and transform workflow and use schema monitoring to recognize changes to your data sources and reapply policies.