
API Changes

Deprecated endpoints

The following endpoints have been deprecated with connections. Use the recommended endpoint instead.

Action | Deprecated endpoint | Use this with connections instead

Impacted endpoints

If you have any automated actions that use the following APIs, make the required changes after the upgrade so they continue to work as expected.

Each entry below lists the action, the impacted endpoint(s), and the required change.

Create a single data source

  • POST /{technology}/handler

  • POST /api/v2/data

Step 1: Ensure your system user has been granted access to the relevant object in the data platform.

Step 2: Wait until the next object sync or manually trigger a metadata crawl using POST /data/crawl/{objectPath*}.

Step 3: If the parent schema has activateNewChildren: false, call PUT /data/settings/{objectPath*} with settings: isActive: true.

Bulk create data sources

  • POST /{technology}/handler

  • POST /api/v2/data

Step 1: Ensure your system user has been granted access to the relevant object in the data platform.

Step 2: Wait until the next object sync or manually trigger a metadata crawl using POST /data/crawl/{objectPath*}.

Step 3: If the parent schema has activateNewChildren: false, call PUT /data/settings/{objectPath*} with settings: isActive: true.

Edit a data source connection

POST /api/v2/data

No substitute. Data sources no longer have their own separate connection details but are tied to the parent connection.

Bulk edit data source connections

  • PUT /{technology}/bulk

  • POST /api/v2/data

  • PUT /{technology}/handler/{handlerId}

No substitute. Data sources no longer have their own separate connection details but are tied to the parent connection.

Run schema detection (object sync)

PUT /dataSource/detectRemoteChanges

POST /data/crawl/{objectPath*}

Delete a data source

DELETE /dataSource/{dataSourceId}

DELETE /data/object/{objectPath*}

Bulk delete data sources

  • PUT /dataSource/bulk/{delete}

  • DELETE /api/v2/data/{connectionKey}

  • DELETE /{technology}/handler/{handlerId}

  • DELETE /dataSource/{dataSourceId}

DELETE /data/object/{objectPath*}

Enable a single data source

PUT /dataSource/{dataSourceId}

PUT /data/settings/{objectPath*} with settings: isActive: true

Bulk enable data sources

PUT /dataSource/bulk/{restore}

PUT /data/settings/{objectPath*} with settings: isActive: true

Disable a single data source

PUT /dataSource/{dataSourceId}

PUT /data/settings/{objectPath*} with settings: isActive: false

Bulk disable data sources

PUT /dataSource/bulk/{disable}

PUT /data/settings/{objectPath*} with settings: isActive: false

Edit a data source name

PUT /dataSource/{dataSourceId}

No substitute. Data source names are automatically generated based on information from your data platform.

Edit a display name

POST /api/v2/data/{connectionKey}

No substitute. Data sources no longer have their own separate connection details but are tied to the parent connection.

Override a host name

PUT /dataSource/{dataSourceId}/overrideHost

No substitute. Data sources no longer have their own separate connection details but are tied to the parent connection.

Create an integration/connection

POST /integrations

POST /data/connection

Update an integration/connection

PUT /integrations/{integrationId}

PUT /data/connection/{connectionKey}

Delete an integration/connection

DELETE /integrations/{integrationId}

DELETE /data/object/{connectionKey}

Delete and update a data dictionary

  • DELETE /dictionary/{dataSourceId}

  • POST /dictionary/{dataSourceId}

  • PUT /dictionary/{dataSourceId}

No substitute. Data source dictionaries are automatically generated based on information from your data platform.

Update a data source owner

  • PUT /dataSource/{dataSourceId}/access/{id}

  • DELETE /dataSource/{dataSourceId}/unsubscribe

PUT /data/settings/{objectPath*} with settings: dataOwners

Respond to a data source owner request

  • POST /subscription/deny

  • POST /subscription/deny/bulk

PUT /data/settings/{objectPath*} with settings: dataOwners

Search for a data source

  • GET /dataSource/name/{dataSourceName}

  • GET /dataSource

  • Data source names will change with the upgrade. Update {dataSourceName} in the request with the new data source name.

  • Data source names will change with the upgrade. Update the searchText in the payload with the new data source name.

Search schema names

GET /schemas

This endpoint will not search the schemas of connection data sources. Instead, use GET /data/object/{objectPath}.
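For scripted migrations, the substitutions above can be sketched as small request builders. The endpoint paths come from this table; the base URL, function names, and payload shape are illustrative assumptions, not Immuta-defined helpers.

```python
from urllib.parse import quote

BASE_URL = "https://immuta.example.com"  # assumption: your Immuta instance


def crawl_request(object_path: str):
    """Build the request that triggers a metadata crawl (object sync)."""
    # objectPath* accepts a slash-delimited path; slashes are kept unencoded here.
    return "POST", f"{BASE_URL}/data/crawl/{quote(object_path, safe='/')}"


def settings_request(object_path: str, is_active: bool):
    """Build the request that enables or disables an object via its settings."""
    return (
        "PUT",
        f"{BASE_URL}/data/settings/{quote(object_path, safe='/')}",
        {"settings": {"isActive": is_active}},
    )


# Replacement flow for the legacy "create a data source" endpoints:
# 1. grant the system user access in the data platform,
# 2. trigger a crawl, 3. activate the object if activateNewChildren is false.
method, url = crawl_request("MY_DATABASE/MY_SCHEMA/MY_TABLE")
_, _, body = settings_request("MY_DATABASE/MY_SCHEMA/MY_TABLE", True)
```

Pairing the crawl call with the settings call reproduces the three-step replacement for single and bulk data source creation.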

Upgrading to Connections

Connections allow you to register your data objects in a technology through a single connection, making data registration more scalable for your organization. Instead of registering schema and databases individually, you can register them all at once and allow Immuta to monitor your data platform for changes so that data sources are added and removed automatically to reflect the state of data on your data platform.

This document is meant to guide you to connections from a configured integration. If you are a new user without any current integrations, see the Connections reference guide instead.

Exceptions

Do not upgrade to connections if you meet any of the criteria below:

  • You are using the Databricks Spark integration

  • You are using the workspace-catalog binding capability with Databricks Unity Catalog

  • You are using the V2 /data endpoint to register data sources and attach tags automatically

hashtag
Integrations


Integrations are now connections. Once the upgrade is complete, you will control most integration settings at the connection level via the Connections tab in Immuta.

Integrations (existing)
Connections (new)

Supported technology and authorization methods

Snowflake

  • Snowflake OAuth

  • Username and password

  • Key pair

Databricks

  • Personal Access Token

  • M2M OAuth

Trino

  • Username and password

  • OAuth 2.0

Unsupported technologies

The following technologies are not yet supported with connections:

  • Amazon S3


Additional connection string options

When registering data sources using the legacy method, there is a field for Additional Connection String Options that your Immuta representative may have instructed you to use. If you did enter any additional connection information there, check to ensure the information you included is supported with connections. Only the following Additional Connection String Options inputs are supported:

Supported features

The tables below outline Immuta features, their availability with integrations, and their availability with connections.

Feature
Integrations (existing)
Connections (new)

Data sources


There will be no policy downtime on your data sources while performing the upgrade.

Supported object types

See the integration's reference guide for the supported object types for each technology:

Data source names

Data source names will change when migrating from integrations to connections. The new data source names will contain the connection, database, schema, and object name. For example, on Snowflake this will typically mean that my_table will become My Connection.MY_DATABASE.MY_SCHEMA.MY_TABLE.
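As a rough sketch of the renaming, assuming a dot delimiter and platform-supplied casing (both inferred from the example above, not from a published spec):

```python
def new_data_source_name(connection: str, database: str, schema: str, table: str) -> str:
    """Join the hierarchy into the connection-era data source name."""
    # Snowflake stores unquoted identifiers upper-case, which is why
    # my_table surfaces as MY_TABLE after the upgrade.
    return ".".join([connection, database, schema, table])


name = new_data_source_name("My Connection", "MY_DATABASE", "MY_SCHEMA", "MY_TABLE")
# "My Connection.MY_DATABASE.MY_SCHEMA.MY_TABLE"
```

A helper like this can keep scripts that match on data source names in sync with the new naming scheme.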

Having multiple objects with the same name within the same schema is currently unsupported and will lead to object uniqueness violations in Immuta. In this scenario, you can work around it as follows:

  • Ensure every object within the same schema has a unique name, or

  • Remove the visibility of one of the objects from the Immuta system account. This will ensure only one of the objects is seen by the system account and ingested in Immuta.

Hierarchy

With connections, your data sources are ingested and presented to reflect the infrastructure hierarchy of your connected data platform. For example, this is what the new hierarchy will look like for a Snowflake connection:

Integrations (existing)
Connections (new)

Tags


Connections will not change any tags currently applied on your data sources.

Tag ingestion

When supported, use tag ingestion to automatically apply tags from your data platform onto your Immuta data sources.

If you want all data objects from connections to have data tags ingested from the data provider into Immuta, ensure the credentials provided on the Immuta app settings page for the external catalog feature can access all the data objects. Any data objects the credentials cannot access will not be tagged in Immuta. In practice, we recommend using the same credentials for the connection and tag ingestion.

Consideration


If you previously ingested data sources using the V2 /data endpoint, this limitation applies to you.

The V2 /data endpoint allows users to register data sources and attach a tag automatically when the data sources are registered in Immuta.

The V2 /data endpoint is not supported with a connection, and there is no substitution for this behavior at this time. If you require default tags for newly onboarded data sources, reach out to your Immuta support professional before upgrading.

Users and permissions

With integrations

Permission
Action
Object

With connections

Permission
Action
Object

Schema monitoring


Schema monitoring is renamed to object sync with connections, as it can also monitor for changes at the database and connection levels.

During object sync, Immuta crawls your connection to ingest metadata for every database, schema, and table that the Snowflake role or Databricks account credentials you provided during configuration have access to. Upon completion of the upgrade, the tables' states depend on your previous schema monitoring settings:

  • If you had schema monitoring enabled on a schema: All tables from that schema will be registered in Immuta as enabled data sources.

  • If you had schema monitoring disabled on a schema: All tables from that schema (that were not already registered in Immuta) will be registered as disabled data objects. They are visible from the Data Objects tab in Immuta, but are not listed as data sources until they are enabled.

After the initial upgrade, object sync runs on your connection every 24 hours (at 1:00 AM UTC) to keep your tables in Immuta in sync. Additionally, users can manually run object sync via the UI or API.
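The post-upgrade table states described above reduce to a simple rule; a sketch follows (the state strings are descriptive labels, not API values):

```python
def post_upgrade_state(schema_monitoring_was_enabled: bool,
                       already_registered: bool) -> str:
    """State of a table after the initial object sync, per prior settings."""
    if schema_monitoring_was_enabled:
        # Monitored schemas come in fully enabled.
        return "enabled data source"
    # Tables already registered in Immuta keep their existing registration;
    # everything newly discovered comes in disabled.
    return "existing data source" if already_registered else "disabled data object"
```

For example, a previously unmonitored, unregistered table surfaces only on the Data Objects tab until it is enabled.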

Schema projects

With integrations, many settings and the connection details for data sources were controlled in the schema project, including schema monitoring. With connections this functionality is no longer needed; you now control connection details in one central place.


Schema project owners

With integrations, schema project owners can become schema monitoring owners, control connection settings, and manage subscription policies on the schema project.

These schema project owners will not be represented in connections, and if you want them to have similar abilities, you must make them Data Owner on the schema.

Additional settings

Object sync provides additional controls compared to schema monitoring:

  • Object status: Connections, databases, schemas, and tables can be marked enabled or disabled; for tables, enabled means they appear as data sources. These statuses are inherited by all lower objects by default, but the inheritance can be overridden. For example, if you disable a database, all schemas and tables within it inherit the disabled status. However, if you want one of those tables to be a data source, you can manually enable it.

  • Enable new data objects: This setting controls what state new objects are registered as in Immuta when found by object sync.

Comparison

Integrations (existing)
Connections (new)

Performance

Connections use a new architectural pattern, resulting in improved performance when monitoring for metadata changes in your data platform, particularly with large numbers of data sources. The following scenarios are regularly tested in an isolated environment to provide a benchmark. These numbers can vary based on a number of factors, such as (but not limited to) the number and type of policies applied, overall API and user activity in the system, and connection latency to your data platform.

Databricks Unity Catalog

Data sources with integrations required users to manually create the schema monitoring job in Databricks. This job is fully automated on data sources with connections, and this step is no longer necessary.

APIs

Consolidating integration setup and data source registration into a single connection significantly simplifies programmatic interaction with the Immuta APIs. Actions that used to be managed through multiple different endpoints can now be achieved through a single, standardized endpoint. As a result, multiple API endpoints are blocked once a user has upgraded their connection.

All blocked APIs return an error indicating "400 Bad Request - [...]. Use the /data endpoint." This error indicates that you will need to update your processes that call the Immuta APIs to use the new /data endpoint instead. For details, see the API changes page.
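If your automation calls legacy endpoints, a defensive check like the following can surface the migration hint; the error-message match and class name are assumptions based on the message quoted above:

```python
class BlockedEndpointError(RuntimeError):
    """Raised when a legacy endpoint has been blocked by the connection upgrade."""


def check_legacy_response(status_code: int, body_text: str) -> None:
    # Blocked legacy APIs answer 400 and point callers at the /data endpoint.
    if status_code == 400 and "/data endpoint" in body_text:
        raise BlockedEndpointError(
            "Endpoint blocked after the connection upgrade; "
            "migrate this call to the /data endpoint."
        )


try:
    check_legacy_response(400, "400 Bad Request - deprecated. Use the /data endpoint.")
except BlockedEndpointError as exc:
    print(f"caught: {exc}")
```

Wrapping legacy calls this way turns a generic 400 into an actionable migration error during the transition.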

The following are also not supported with connections:

  • Azure Synapse Analytics

  • Databricks Spark

  • Google BigQuery

  • Snowflake data sources with the private key file password set using Additional Connection String Options

  • Trino data sources with proxy set using Additional Connection String Options

  • Trino data sources with SSL/TLS enabled and certificate validation disabled using Additional Connection String Options

Project workspaces: Not supported with integrations; not supported with connections.

User impersonation: Not supported with integrations; not supported with connections.

Snowflake lineage: Supported with integrations; supported with connections.

Query audit: Supported with integrations; supported with connections.

Tag ingestion: Supported with integrations; supported with connections.

Query audit: Supported with integrations; supported with connections.

Tag ingestion: Not supported with integrations; not supported with connections.

User impersonation: Not supported with integrations; supported with connections.

  • Enable: New data objects found by object sync will automatically be enabled, and tables will be registered as data sources.

  • Disable: This is the default. New data objects found by object sync will be disabled.

  • Can you adjust the default schedule? No with schema monitoring; no with object sync.

New tags applied automatically: With schema monitoring, new tags are applied automatically for a data source being created, a column being added, or a column type being updated on an existing data source. With object sync, new tags are applied automatically for a column being added or a column type being updated on an existing data source.

Integrations (existing): Integrations are set up from the Immuta app settings page or via the API. These integrations establish a relationship between Immuta and your data platform for policy orchestration. Tables are then registered as data sources through an additional step with separate credentials. Schemas and databases are not reflected in the UI.

Connections (new): Integrations and data sources are set up together with a single connection per account between Immuta and your data platform. Based on the privileges granted to the Immuta system user, metadata from databases, schemas, and tables is automatically pulled into Immuta and continuously monitored for any changes.

Query audit: Supported with integrations; supported with connections.

Tag ingestion: Supported with integrations; supported with connections.

Connection tags: Not supported with integrations; supported with connections.

Workspace-catalog binding

Integrations (existing): Integration → Data source. Databases and schemas are not represented.

Connections (new): Connection → Database → Schema → Data source (once enabled, becomes available for policy enforcement).

With integrations:

  • APPLICATION_ADMIN: Configure integration (object: integration)

  • CREATE_DATA_SOURCE: Register tables (object: data source)

  • Data owner: Manage data sources (object: data source)

With connections:

  • APPLICATION_ADMIN: Register the connection (objects: connection, database, schema, data source)

  • GOVERNANCE or APPLICATION_ADMIN: Manage all connections (objects: connection, database, schema, data source)

  • Data owner: Manage data objects (objects: connection, database, schema, data source)

Name: Schema monitoring and column detection (existing); object sync (new).

Where to turn on: Schema monitoring is enabled (optionally) when configuring a data source; object sync is enabled by default.

Where to update the feature: Schema monitoring can be enabled or disabled from the schema project; object sync cannot be disabled.

Default schedule: Every 24 hours for schema monitoring; every 24 hours (at 1:00 AM UTC) for object sync.

  • Scenario 1: Running object sync on a schema with 10,000 data sources with 50 columns each takes 172.2 seconds on average.

  • Scenario 2: Running object sync on a schema with 1,000 data sources with 10 columns each takes 9.38 seconds on average.

  • Scenario 3: Running object sync on a schema with 1 data source with 50 columns takes 0.512 seconds on average.



Project workspaces: Not supported with integrations; not supported with connections.

User impersonation: Not supported with integrations; not supported with connections.

Multi-cluster support: Not supported with integrations; supported with connections.


    FAQ

What are connections?

    Connections allow you to register your data objects in a technology through a single connection, making data registration more scalable for your organization. Instead of registering schema and databases individually, you can register them all at once and allow Immuta to monitor your data platform for changes so that data sources are added and removed automatically to reflect the state of data on your data platform.

What will change with connections?

    There are three high-level changes:

    • Automatic table registration: All unregistered tables that the configured credentials have access to will be registered into Immuta in a disabled state. All tables and schemas under this connection with schema monitoring on will continue to be monitored with object sync.

    • Simplified table names: All data source names will now reflect the connection and hierarchy. If your tables were not already named this way, the names will be changed.

    • Fewer API endpoints: When this upgrade begins, a select number of data and integration API endpoints will be blocked for this connection and its tables. See the documentation, linked below, for a complete list of the impacted endpoints.

For a more in-depth look at the differences, see the Upgrading to a connection guide and the Before you begin page.

How will connections affect my existing integrations?

    Your integrations will continue to work throughout the upgrade process with zero downtime.

    Post upgrade, some configuration options will now be part of the connection menu: credentials, enabling, and disabling. The Snowflake and Databricks Unity Catalog integrations will continue to be visible in the Integrations tab on the Immuta app settings page, but Trino integrations will only exist in connections.

How will connections affect my existing data sources?

    All pre-existing data sources will continue to exist. If you have used a custom naming template, you will see names getting updated as the connection uses the information from your data platform to generate data source names.

How will connections affect my policies?

    Connections do not impact any policies or user access in your data platform.

How will connections affect my users?

    Connections will not affect your registered users or their access in your data platform.

    However, Immuta administrators will see notable differences in the UI with a new Connections tab now being displayed.

Do I need to change my scripts running against the Immuta APIs if I want to use connections?

Most likely, since there are a number of API changes related to data sources and integrations. See the API changes guide for details about each affected API endpoint and its substitute.

Are the permissions required for the system user different with connections?

No, the Immuta system user still requires the same privileges in your data platform. See the Upgrading to a connection guide for more details.

What is going to happen with the integrations?

    We recommend upgrading to connections as soon as possible due to their many benefits.

Legacy onboarding patterns will no longer be supported by the following dates and technologies:

  • September 2025: Databricks Unity Catalog and Snowflake

  • September 2026: Trino

Is my environment the right choice for the connections upgrade?

Connections support upgrading from legacy Snowflake, Databricks Unity Catalog, or Trino integrations. See the Upgrading to a connection guide for more details, and reach out to your Immuta support professional if you are interested in the upgrade.

Can I run object sync on data sources not registered with a connection?

No. Object sync is only for data sources registered through connections. Continue to use schema monitoring for any existing data sources that are not upgraded.


    Before You Begin

Connections are an improvement over the existing process for onboarding your data sources and managing the integration. However, there are some differences between the two processes that you should note and understand before you start the upgrade.

    1. API changes: See the API changes page for a complete breakdown of the APIs that will not work once you begin the upgrade. These changes will mostly affect users with automated API calls around schema monitoring and data source registration.

    2. Automated data source names: Previously, you could name data sources manually. However, data sources from connections are automatically named using the information (database, schema, table) and casing from your data platform. For example, on Snowflake this will typically mean that my_table will become My Connection.MY_DATABASE.MY_SCHEMA.MY_TABLE.

      If you are leveraging Immuta APIs, you may need to adjust code to allow for the new data source names.

3. Schema projects phased out: With integrations, many settings and the connection info for data sources were controlled in the schema project. With connections this functionality is no longer needed; you now control connection details in one central place.

    4. New hierarchy display: With integrations, tables were brought in as data sources and presented as a flat list on the data source list page. With connections, databases and schemas are displayed as objects too.

    5. Change from schema monitoring to object sync: Object metadata synchronization between Immuta and your data platform is no longer optional but always required:

      1. If schema monitoring is off before the upgrade: Once the connection is registered, everything the system user can see will be pulled into Immuta and, if it didn't already exist in Immuta, it will be a disabled object. These disabled objects exist so you can see them, but policy is not protecting the objects, and they will not appear as data sources.

  2. If schema monitoring is on before the upgrade: Once the connection is registered, everything the system user can see will be pulled into Immuta. If it already existed in Immuta, it will be an enabled object and continue to appear as a data source.

6. Enabling a connection will enable all databases, schemas, and tables in the hierarchy: If the connection is disabled after completing your upgrade to connections, only enable the connection if you want to enable all databases, schemas, and tables within it.

      Enabling a table that is ordinarily disabled will elevate it to a data source. Immuta will then apply data and subscription policies on that data source.