The following endpoints are deprecated with connections; use the recommended substitute instead.
If you have automation that calls any of these APIs, make the required changes after the upgrade so it continues to work as expected.
Create a single data source
Deprecated: POST /{technology}/handler; POST /api/v2/data
Substitute:
Step 1: Ensure your system user has been granted access to the relevant object in the data platform.
Step 2: Wait until the next object sync or manually trigger a metadata crawl using POST /data/crawl/{objectPath*}.
Step 3: If the parent schema has activateNewChildren: false, call PUT /data/settings/{objectPath*} with settings: isActive: true.
Bulk create data sources
Deprecated: POST /{technology}/handler; POST /api/v2/data
Substitute:
Step 1: Ensure your system user has been granted access to the relevant objects in the data platform.
Step 2: Wait until the next object sync or manually trigger a metadata crawl using POST /data/crawl/{objectPath*}.
Step 3: If the parent schema has activateNewChildren: false, call PUT /data/settings/{objectPath*} with settings: isActive: true.
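As a sketch, the three-step substitute above can be scripted. The base URL and object path below are hypothetical placeholders, and the requests are only constructed, not sent, so the shape of each call is easy to inspect.

```python
# Sketch of the new /data workflow described above. The host, token handling,
# and object path are hypothetical placeholders.
def build_crawl_request(base_url: str, object_path: str) -> dict:
    """Step 2: manually trigger a metadata crawl for one object path."""
    return {
        "method": "POST",
        "url": f"{base_url}/data/crawl/{object_path}",
    }

def build_activate_request(base_url: str, object_path: str) -> dict:
    """Step 3: activate the object when the parent schema has activateNewChildren: false."""
    return {
        "method": "PUT",
        "url": f"{base_url}/data/settings/{object_path}",
        "json": {"settings": {"isActive": True}},
    }

base = "https://example.immuta.com"  # hypothetical Immuta host
path = "MY_DATABASE/MY_SCHEMA/MY_TABLE"  # hypothetical object path
crawl = build_crawl_request(base, path)
activate = build_activate_request(base, path)
print(crawl["url"])
print(activate["json"])
```

These dicts can then be handed to any HTTP client along with your API credentials.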
Edit a data source connection
Deprecated: POST /api/v2/data
Substitute: None. Data sources no longer have their own separate connection details; they are tied to the parent connection.
Bulk edit data source connections
Deprecated: PUT /{technology}/bulk; POST /api/v2/data; PUT /{technology}/handler/{handlerId}
Substitute: None. Data sources no longer have their own separate connection details; they are tied to the parent connection.
Run schema detection (object sync)
Deprecated: PUT /dataSource/detectRemoteChanges
Delete a data source
Deprecated: DELETE /dataSource/{dataSourceId}
Bulk delete data sources
Deprecated: PUT /dataSource/bulk/{delete}; DELETE /api/v2/data/{connectionKey}; DELETE /{technology}/handler/{handlerId}; DELETE /dataSource/{dataSourceId}
Enable a single data source
Deprecated: PUT /dataSource/{dataSourceId}
Substitute: PUT /data/settings/{objectPath*} with settings: isActive: true
Bulk enable data sources
Deprecated: PUT /dataSource/bulk/{restore}
Substitute: PUT /data/settings/{objectPath*} with settings: isActive: true
Disable a single data source
Deprecated: PUT /dataSource/{dataSourceId}
Substitute: PUT /data/settings/{objectPath*} with settings: isActive: false
Bulk disable data sources
Deprecated: PUT /dataSource/bulk/{disable}
Substitute: PUT /data/settings/{objectPath*} with settings: isActive: false
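The four enable and disable substitutes above differ only in the object path and the isActive flag, so a single helper can cover all of them. The host and paths are hypothetical, and the request is constructed rather than sent.

```python
# One helper for the enable/disable substitutes above; bulk operations become
# a matter of pointing the same call at a higher-level object path, since
# status is inherited downward. Host and paths are hypothetical.
def is_active_settings_request(base_url: str, object_path: str, is_active: bool) -> dict:
    return {
        "method": "PUT",
        "url": f"{base_url}/data/settings/{object_path}",
        "json": {"settings": {"isActive": is_active}},
    }

base = "https://example.immuta.com"  # hypothetical Immuta host
enable_one = is_active_settings_request(base, "MY_DATABASE/MY_SCHEMA/MY_TABLE", True)
disable_schema = is_active_settings_request(base, "MY_DATABASE/MY_SCHEMA", False)
```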
Edit a data source name
Deprecated: PUT /dataSource/{dataSourceId}
Substitute: None. Data source names are automatically generated based on information from your data platform.
Edit a display name
Deprecated: POST /api/v2/data/{connectionKey}
Substitute: None. Data sources no longer have their own separate connection details; they are tied to the parent connection.
Override a host name
Deprecated: PUT /dataSource/{dataSourceId}/overrideHost
Substitute: None. Data sources no longer have their own separate connection details; they are tied to the parent connection.
Create an integration/connection
Deprecated: POST /integrations
Update an integration/connection
Deprecated: PUT /integrations/{integrationId}
Delete an integration/connection
Deprecated: DELETE /integrations/{integrationId}
Delete and update a data dictionary
Deprecated: DELETE /dictionary/{dataSourceId}; POST /dictionary/{dataSourceId}; PUT /dictionary/{dataSourceId}
Substitute: None. Data source dictionaries are automatically generated based on information from your data platform.
Update a data source owner
Deprecated: PUT /dataSource/{dataSourceId}/access/{id}; DELETE /dataSource/{dataSourceId}/unsubscribe
Substitute: PUT /data/settings/{objectPath*} with settings: dataOwners
Respond to a data source owner request
Deprecated: POST /subscription/deny; POST /subscription/deny/bulk
Substitute: PUT /data/settings/{objectPath*} with settings: dataOwners
Search for a data source
Data source names will change with the upgrade. Update {dataSourceName} in the request, or the searchText in the payload, with the new data source name.
Search schema names
This endpoint will not search the schemas of connection data sources. Instead, use GET /data/object/{objectPath}.
Connections allow you to register your data objects in a technology through a single connection, making data registration more scalable for your organization. Instead of registering schemas and databases individually, you can register them all at once and allow Immuta to monitor your data platform for changes so that data sources are added and removed automatically to reflect the state of data on your data platform.
This document is meant to guide you to connections from a configured integration. If you are a new user without any current integrations, see the Connections reference guide instead.
Exceptions
Do not upgrade to connections if you meet any of the criteria below:
You are using the Databricks Spark integration
You are using the capability with Databricks Unity Catalog
You are using
Integrations are now connections. Once the upgrade is complete, you will control most integration settings at the connection level via the Connections tab in Immuta.
Snowflake OAuth
Username and password
Key pair
Personal Access Token
M2M OAuth
Username and password
OAuth 2.0
Unsupported technologies
The following technologies are not yet supported with connections:
Amazon S3
Additional connection string options
When registering data sources using the legacy method, there is a field for Additional Connection String Options that your Immuta representative may have instructed you to use. If you did enter any additional connection information there, check to ensure the information you included is supported with connections. Only the following Additional Connection String Options inputs are supported:
The tables below outline Immuta features, their availability with integrations, and their availability with connections.
There will be no policy downtime on your data sources while performing the upgrade.
See the integration's reference guide for the supported object types for each technology:
Data source names will change when migrating from integrations to connections. The new data source names will contain the connection, database, schema, and object name. For example, on Snowflake this will typically mean that my_table will become My Connection.MY_DATABASE.MY_SCHEMA.MY_TABLE.
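The renaming rule above can be sketched as a small helper. The connection name, the separator, and the Snowflake-style upper-casing are taken from the example; the helper itself is purely illustrative.

```python
# Illustrative sketch of the new naming scheme: connection, database, schema,
# and object name joined with dots. Upper-casing mirrors the Snowflake example
# above; other platforms keep whatever casing the platform reports.
def new_data_source_name(connection: str, database: str, schema: str, table: str) -> str:
    return ".".join([connection, database.upper(), schema.upper(), table.upper()])

name = new_data_source_name("My Connection", "my_database", "my_schema", "my_table")
print(name)  # My Connection.MY_DATABASE.MY_SCHEMA.MY_TABLE
```

A helper like this is useful when updating scripts that still reference the old flat data source names.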
Having multiple objects with the same name within the same schema is currently unsupported and will lead to object uniqueness violations in Immuta. In this scenario, you can work around it as follows:
Ensure every object within the same schema has a unique name, or
Remove the visibility of one of the objects from the Immuta system account. This will ensure only one of the objects is seen by the system account and ingested in Immuta.
With connections, your data sources are ingested and presented to reflect the infrastructure hierarchy of your connected data platform. For example, this is what the new hierarchy will look like for a Snowflake connection:
Connections will not change any tags currently applied on your data sources.
When , use tag ingestion to automatically apply tags from your data platform onto your Immuta data sources.
If you want all data objects from connections to have data tags ingested from the data provider into Immuta, ensure the credentials provided on the Immuta app settings page for the external catalog feature can access all the data objects. Any data objects the credentials cannot access will not be tagged in Immuta. In practice, it is recommended to use the same credentials for the connection and for tag ingestion.
If you previously ingested data sources using the V2 /data endpoint, this limitation applies to you.
The V2 /data endpoint allows users to register data sources and attach a tag automatically when the data sources are registered in Immuta.
The V2 /data endpoint is not supported with a connection, and there is no substitution for this behavior at this time. If you require default tags for newly onboarded data sources, reach out to your Immuta support professional before upgrading.
Schema monitoring is renamed to object sync with connections, as it can also monitor for changes at the database and connection levels.
During object sync, Immuta crawls your connection to ingest metadata for every database, schema, and table that the Snowflake role or Databricks account credentials you provided during configuration have access to. Upon completion of the upgrade, the tables' states depend on your previous schema monitoring settings:
If you had schema monitoring enabled on a schema: All tables from that schema will be registered in Immuta as enabled data sources.
If you had schema monitoring disabled on a schema: All tables from that schema (that were not already registered in Immuta) will be registered as disabled data objects. They are visible from the Data Objects tab in Immuta, but are not listed as data sources until they are enabled.
After the initial upgrade, object sync runs on your connection every 24 hours (at 1:00 AM UTC) to keep your tables in Immuta in sync. Additionally, users can manually trigger object sync via the UI or API.
With integrations, many settings and the connection details for data sources were controlled in the schema project, including schema monitoring. This functionality is no longer needed with connections; you now control connection details in one central place.
Schema project owners
With integrations, schema project owners can become schema monitoring owners, control connection settings, and manage subscription policies on the schema project.
These schema project owners will not be represented in connections, and if you want them to have similar abilities, .
Object sync provides additional controls compared to schema monitoring:
Object status: Connections, databases, schemas, and tables can be marked enabled or disabled; for tables, being enabled makes them appear as data sources. These statuses are inherited by all lower objects by default, but the inheritance can be overridden. For example, if you disable a database, all schemas and tables within that database inherit the disabled status. However, if you want one of those tables to be a data source, you can manually enable it.
Enable new data objects: This setting controls the state in which new objects are registered in Immuta when found by object sync.
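The inheritance rule described above can be sketched as a lookup that walks from an object up its hierarchy and takes the nearest explicit setting. The paths and statuses below are hypothetical.

```python
# Sketch of status inheritance: an object's effective status is its own
# explicit setting if one exists, otherwise the nearest ancestor's.
# Paths and statuses are hypothetical examples.
explicit = {
    "conn": "enabled",
    "conn/db1": "disabled",                # disables everything under db1...
    "conn/db1/schema1/tableA": "enabled",  # ...except this manual override
}

def effective_status(path: str) -> str:
    parts = path.split("/")
    # Walk from the object up toward the connection, taking the first explicit setting.
    for i in range(len(parts), 0, -1):
        candidate = "/".join(parts[:i])
        if candidate in explicit:
            return explicit[candidate]
    return "disabled"

print(effective_status("conn/db1/schema1/tableB"))  # disabled (inherited from db1)
print(effective_status("conn/db1/schema1/tableA"))  # enabled (manual override)
print(effective_status("conn/db2/schema2/tableC"))  # enabled (inherited from conn)
```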
Connections use a new architectural pattern resulting in improved performance when monitoring for metadata changes in your data platform, particularly with large numbers of data sources. The following scenarios are regularly tested in an isolated environment to provide a benchmark. These numbers can vary based on factors such as (but not limited to) the number and type of policies applied, overall API and user activity in the system, and connection latency to your data platform.
Data sources with integrations required users to . However, this job has been fully automated on data sources with connections, and this step is no longer necessary.
Consolidating integration setup and data source registration into a single connection significantly simplifies programmatic interaction with the Immuta APIs. Actions that used to be managed through multiple different endpoints can now be achieved through a single, standardized endpoint. As a result, multiple API endpoints are blocked once a user has upgraded their connection.
All blocked APIs will return an error indicating "400 Bad Request - [...]. Use the /data endpoint." This error indicates that you need to update any processes calling the Immuta APIs to use the new /data endpoint instead. For details, see the page.
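Automated clients can detect this specific error and flag legacy calls that need migrating. This is a hedged sketch; the response dict below stands in for a real HTTP response object.

```python
# Detect the blocked-endpoint error described above so automation can flag
# legacy calls for migration to /data. The response dict is a stand-in for a
# real HTTP response from a hypothetical client.
def is_blocked_legacy_endpoint(status_code: int, body: str) -> bool:
    return status_code == 400 and "Use the /data endpoint" in body

response = {
    "status": 400,
    "body": "400 Bad Request - endpoint deprecated. Use the /data endpoint.",
}
if is_blocked_legacy_endpoint(response["status"], response["body"]):
    print("Legacy endpoint blocked: migrate this call to /data.")
```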
Azure Synapse Analytics
Databricks Spark
Google BigQuery
Snowflake data sources with the private key file password set using Additional Connection String Options.
Trino data sources with proxy set using Additional Connection String Options
Trino data sources with SSL/TLS enabled and certificate validation disabled using Additional Connection String Options
Project workspaces: Not supported (integrations); Not supported (connections)
User impersonation: Not supported (integrations); Not supported (connections)
Snowflake lineage: Supported (integrations); Supported (connections)
Query audit: Supported (integrations); Supported (connections)
Tag ingestion: Supported (integrations); Supported (connections)
Query audit: Supported (integrations); Supported (connections)
Tag ingestion: Not supported (integrations); Not supported (connections)
User impersonation: Not supported (integrations); Supported (connections)
Disable: This is the default. New data objects found by object sync will be disabled.
Can you adjust the default schedule? No (schema monitoring); No (object sync)
New tags applied automatically: for a data source being created, a column being added, or a column type being updated on an existing data source (schema monitoring); for a column being added or a column type being updated on an existing data source (object sync)
Integrations are set up from the Immuta app settings page or via the API. These integrations establish a relationship between Immuta and your data platform for policy orchestration.
Then tables are registered as data sources through an additional step with separate credentials. Schemas and databases are not reflected in the UI.
Integrations and data sources are set up together with a single connection per account between Immuta and your data platform.
Based on the privileges granted to the Immuta system user, metadata from databases, schemas, and tables is automatically pulled into Immuta and continuously monitored for any changes.
Query audit: Supported (integrations); Supported (connections)
Tag ingestion: Supported (integrations); Supported (connections)
Workspace-catalog binding: Not supported (integrations); Supported (connections)
How objects map from integrations to connections:
(not represented with integrations) → Database
(not represented with integrations) → Schema
Data source → Data source (once enabled, becomes available for policy enforcement)
With integrations:
APPLICATION_ADMIN: configure the integration (integration level)
CREATE_DATA_SOURCE: register tables (data source level)
Data owner: manage data sources (data source level)
With connections:
APPLICATION_ADMIN: register the connection (connection, database, schema, and data source levels)
GOVERNANCE or APPLICATION_ADMIN: manage all connections (connection, database, schema, and data source levels)
Data owner: manage data objects (connection, database, schema, and data source levels)
Schema monitoring and column detection vs. object sync:
Where to turn on: enable (optionally) when configuring a data source (schema monitoring); enabled by default (object sync)
Where to update the feature: enable or disable from the schema project (schema monitoring); object sync cannot be disabled
Default schedule: every 24 hours (schema monitoring); every 24 hours at 1:00 AM UTC (object sync)
Scenario 1: Running object sync on a schema with 10,000 data sources with 50 columns each: 172.2 seconds on average
Scenario 2: Running object sync on a schema with 1,000 data sources with 10 columns each: 9.38 seconds on average
Scenario 3: Running object sync on a schema with 1 data source with 50 columns: 0.512 seconds on average
Project workspaces: Not supported (integrations); Not supported (connections)
User impersonation: Not supported (integrations); Not supported (connections)
Multi-cluster support: Not supported (integrations); Supported (connections)
There are three high-level changes:
Automatic table registration: All unregistered tables that the configured credentials have access to will be registered into Immuta in a disabled state. All tables and schemas under this connection with schema monitoring on will continue to be monitored with object sync.
Simplified table names: All data source names will now reflect the connection and hierarchy. If your tables were not already named this way, the names will be changed.
Fewer API endpoints: When this upgrade begins, a select number of data and integration API endpoints will be blocked for this connection and its tables. See the documentation, linked below, for a complete list of the impacted endpoints.
For a more in-depth look at the differences, see the and .
Your integrations will continue to work throughout the upgrade process with zero downtime.
After the upgrade, some configuration options (credentials, enabling, and disabling) move to the connection menu. The Snowflake and Databricks Unity Catalog integrations will continue to be visible in the Integrations tab on the Immuta app settings page, but Trino integrations will only exist in connections.
All pre-existing data sources will continue to exist. If you have used a custom naming template, you will see names getting updated as the connection uses the information from your data platform to generate data source names.
Connections do not impact any policies or user access in your data platform.
Connections will not affect your registered users or their access in your data platform.
However, Immuta administrators will see notable differences in the UI with a new Connections tab now being displayed.
Most likely, since there are a number of API changes in regard to data sources and integrations. See the for details about each affected API endpoint and the substitute.
No, the Immuta system user still requires the same privileges in your data platform. See the for more details.
We recommend upgrading to connections as soon as possible due to their many benefits.
Legacy onboarding patterns will no longer be supported by the following dates and technologies:
September 2025: Databricks Unity Catalog and Snowflake
September 2026: Trino
Connections improve not only how you onboard data sources but also how you manage the integration. However, there are some differences between the two processes to understand before you start the upgrade.
API changes: See the API changes page for a complete breakdown of the APIs that will not work once you begin the upgrade. These changes will mostly affect users with automated API calls around schema monitoring and data source registration.
Automated data source names: Previously, you could name data sources manually. However, data sources from connections are automatically named using the information (database, schema, table) and casing from your data platform. For example, on Snowflake this will typically mean that my_table will become My Connection.MY_DATABASE.MY_SCHEMA.MY_TABLE.
If you are leveraging Immuta APIs, you may need to adjust code to allow for the new data source names.
Schema projects phased out: With integrations, many settings and the connection info for data sources were controlled in the schema project. This functionality is no longer needed with connections; you now control connection details in one central place.
New hierarchy display: With integrations, tables were brought in as data sources and presented as a flat list on the data source list page. With connections, databases and schemas are displayed as objects too.
Change from schema monitoring to object sync: Object metadata synchronization between Immuta and your data platform is no longer optional but always required:
If schema monitoring is off before the upgrade: Once the connection is registered, everything the system user can see will be pulled into Immuta and, if it did not already exist in Immuta, it will be a disabled object. These disabled objects are visible, but policies do not protect them, and they do not appear as data sources.
If schema monitoring is on before the upgrade: Once the connection is registered, everything the system user can see will be pulled into Immuta. If it already existed in Immuta, it will be an enabled object and continue to appear as a data source.
Enabling a connection will enable all databases, schemas, and tables in the hierarchy: If the connection is disabled after completing your upgrade to connections, only enable it if you want to enable all databases, schemas, and tables within it.
Enabling a previously disabled table elevates it to a data source, and Immuta then applies data and subscription policies to that data source.