Copyright © 2014-2025 Immuta Inc. All rights reserved.


Immuta v2025.1 Release Notes



Immuta v2025.1.0

Immuta v2025.1.0 was released May 15, 2025.

New features and enhancements

User interface changes

  • Updated color scheme: The updated color scheme improves accessibility and contrast throughout the application.

  • Improved navigation menu: The reorganized navigation menu makes the application more intuitive and user-friendly. The menu now includes a Metadata section for managing tags; an Identities section for managing users, groups, and attributes; and an Insights section for managing reports, audit, and notifications.

  • Detect, Discover, and Governance sections have been removed from the navigation menu. These functionalities are now integrated throughout the application.

Domains

Dynamic data source assignment: This new feature lets you choose how data sources are assigned to your domains: manually, as has always been possible, or dynamically, based on their table tags.

Before, a user with the GOVERNANCE permission had to manually assign every data source to a domain. With this new feature, the governance user can instead choose a tag, and every data source with that tag is added to the domain. Dynamic assignment continuously updates the domain so that it contains all data sources with the tag, and only those.
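The continuous-update rule above amounts to re-evaluating a tag-based set membership. A purely illustrative Python sketch (this is not the Immuta API; all names are hypothetical):

```python
# Illustrative only: models dynamic domain assignment as set membership.
# `data_sources` maps a data source name to the tags on its table.
def domain_members(data_sources: dict[str, set[str]], domain_tag: str) -> set[str]:
    """Return exactly the data sources carrying domain_tag."""
    return {name for name, tags in data_sources.items() if domain_tag in tags}

data_sources = {
    "sales.orders": {"PII", "Finance"},
    "hr.employees": {"PII"},
    "ops.metrics": {"Telemetry"},
}

# The domain always contains only, and all, sources with the tag.
print(sorted(domain_members(data_sources, "PII")))
# ['hr.employees', 'sales.orders']

# Retagging a source changes membership on the next evaluation.
data_sources["ops.metrics"].add("PII")
print(sorted(domain_members(data_sources, "PII")))
# ['hr.employees', 'ops.metrics', 'sales.orders']
```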

Policy

  • New special functions for attribute- and tag-based access: These functions dynamically grant and revoke access by doing an exact-match comparison between a user's information (attributes or group memberships) and the tags applied to data sources or their columns. They can collapse multiple roles or rules into a single policy that assigns access based on users' attributes or group memberships, resulting in fewer policies to manage overall and a more streamlined approach to data access management, especially for the most complex use cases.

  • Data policy support for foreign tables in Databricks Unity Catalog integration: Users can apply subscription and data policies to foreign tables in Databricks Unity Catalog.

  • Support for masking complex columns as NULL in Databricks Unity Catalog integration: Users can mask the entire column of STRUCT, MAP, and ARRAY column types in Databricks Unity Catalog as NULL.

  • Changed default for merging global subscription policies: Based on customer insights, Immuta has changed the default behavior of how multiple global subscription policies that apply to a single data source are merged. Previously, the global default required users to meet all the conditions outlined in each policy to get access. Now, users only need to meet the conditions of one policy. This behavior can be configured on the app settings page.

  • Disable randomized response by default: Randomized response policies are available for the Snowflake integration. When a randomized response policy is applied to a data source, the columns targeted by the policy are queried under a fingerprinting process that contains the predicates used for the randomization. The results of this query, which may contain sensitive data, are stored in the Immuta internal database. Because this process may violate an organization's data localization regulations, you must reach out to your Immuta representative to enable this masking policy type for your account. If you have existing randomized response policies, those policies will not be affected by this change.

  • Removed ability to change the default subscription policy setting: The ability to configure the behavior of the default subscription policy on newly registered data sources has reached its end of life (EOL) date and has been removed from the product. As a result, by default Immuta will never apply a subscription policy to newly registered data sources unless an existing global policy applies to them. To apply an "Allow individually selected users" subscription policy to all data sources, create a global subscription policy with that condition that applies to all data sources.
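The change in how multiple global subscription policies merge can be sketched as a shift from requiring all conditions to requiring any one of them (illustrative Python, not Immuta's implementation; the attribute names are hypothetical):

```python
# Illustrative: each policy condition is a set of required user attributes.
def meets(user_attrs: set[str], required: set[str]) -> bool:
    return required <= user_attrs

policies = [{"Finance"}, {"Admin"}]  # two global policies on one data source
user = {"Finance"}

# Old default: the user must satisfy every merged policy.
old_access = all(meets(user, p) for p in policies)

# New default: satisfying any one policy grants access.
new_access = any(meets(user, p) for p in policies)

print(old_access, new_access)  # False True
```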

Data metadata

  • Microsoft Purview integration (private preview): In addition to Purview tags, the following Purview objects are pulled in and applied to registered data sources as column or data source tags:

    • System classifications

    • Custom classifications

    • Managed attributes

  • Collibra integration now supports the OAuth 2.0 authentication method: Immuta’s Collibra integration has added support for the OAuth 2.0 authentication method in addition to the supported username/password authentication method.

Integrations

  • Integration error notifications: All users now see banner notifications when an integration is experiencing an error. This update calls attention to critical integration errors that can have large impacts on end users, improving awareness and streamlining the process of pinpointing and resolving errors. Additionally, Immuta has simplified how integration statuses are reported on the app settings integrations page.

Snowflake integrations

Databricks integrations

  • Support for Databricks Unity Catalog volumes, models, and functions: Immuta supports READ and WRITE access controls for volumes in Databricks Unity Catalog and also supports governing which users can execute models and functions. This feature is currently in public preview for customers using Immuta connections and will be included in the connections upgrade.

  • Databricks integration support defaulted to Unity Catalog: Eliminated the manual step of updating a global account setting prior to configuring a Unity Catalog integration. For Databricks integrations, the default support now assumes a Unity Catalog integration.

    Customers using Databricks Spark must now update the default account setting before configuring their Databricks integrations.

  • Streamlined Databricks user management with improved handling of external IDs: Going forward, users' external Databricks IDs will be set to None if Immuta attempts to update those users' Databricks access and Databricks responds that the targeted principals do not exist. This can happen when a user is created in Immuta before that user is created in Databricks. Marking external Databricks IDs as None lets Immuta skip future attempts to update those users' access, streamlining the tasks Immuta must process and avoiding superfluous errors.

    Databricks external IDs can be updated as needed manually, either through the user profile or by setting this property to <NO IDENTITY> in the external IAM configuration.

Amazon integrations

  • Connections support for the AWS Lake Formation integration is available in private preview. Note that policy support for Lake Formation is not yet available.

Data classification

  • Identifiers in domains: Identifiers can now be scoped by domain to manage which identifiers run on which data sources. Additionally, you can delegate the management of identifiers to specific users by granting them the Manage Identifiers domain permission.

    Once generally available, this functionality will replace identification frameworks.

  • New built-in patterns: Two new built-in identifiers are available to all customers using identification:

    • SEC_STOCK_TICKER: This new pattern detects strings consistent with stock tickers recognized by the U.S. Securities and Exchange Commission (SEC).

    • FINANCIAL_INSTITUTIONS: This new pattern detects strings consistent with the official and alternate names of financial institutions from lists by the FDIC and OCC.

    Add these identifiers to your domains to start detecting and automatically tagging this data.

  • SDD global settings update: The global sensitive data discovery (SDD) enablement setting has been removed from the app settings page and is available by default. To run identification on your data sources, add them to a domain with identifiers.

  • Identification timeout: Immuta queries for identification now have a timeout of 15 minutes. This timeout ensures that there are no long-running queries that block your compute resources and helps to reduce the cost of running identification.

    The majority of identification runs complete within 15 minutes. If you expect identification to run longer than 15 minutes, reach out to your Immuta representative to configure a longer timeout window.

    Running identification on complex views with large amounts of data is more likely to result in timeouts. Immuta recommends running identification on the underlying base tables.

Behavior change

Disabled data source behavior for Azure Synapse Analytics, Databricks Unity Catalog, Google BigQuery, Redshift, and Snowflake integrations: Immuta will remove all policies on disabled data sources for these integrations.

  • Previous behavior: Disabling a data source triggered a lockdown policy, which revoked all users’ access until the data source was either deleted from Immuta or re-enabled.

  • New behavior: Disabling a data source will remove existing Immuta subscription and data policies and prevent Immuta from adding new policies until the data source is re-enabled. Immuta policies will be removed from currently disabled data sources. For view-based integrations (Azure Synapse Analytics, Google BigQuery, and Redshift), if a user disables an object in Immuta, the Immuta-created view will be deleted.

To enable this behavior for your tenant, add the following snippet to the Advanced Settings section on the app settings page:

featureFlags:
  noLockdownOnDisable: true

Bug fixes

  • Fix for Databricks audit workspace IDs: Previously, users filtering their audit by workspaces had to enter a 16-digit workspace ID. This restriction has been removed.

  • Fix for accurately representing disabled users’ subscription status for data sources and projects in governance reports: Addressed an issue where users with status disabled were misrepresented in governance reports as being subscribed to data sources or projects when in fact they weren’t. (Disabled users always have all their data source and project subscriptions revoked until they get re-enabled.) The following governance reports have been fixed:

    • Data source:

      • All data sources and the users and groups subscribed to them

      • What users and groups are subscribed to a particular data source

      • What users and groups have ever subscribed to a particular data source

    • Projects: What users and groups are part of a particular project

    • Purpose: What users are members of projects with a particular purpose

    • User:

      • All users and the data sources they are subscribed to

      • What data sources is a particular user subscribed to

      • What projects is a particular user currently a member of

Deprecations and breaking changes

Deprecated features

The following features are deprecated. They are still in the product but will be removed at their tentative EOL date.

Feature | Use this alternative feature | Deprecation notice | End of Life (EOL)
CREATE_FILTER permission | None | 2024.3 | 2025.2
Derived data sources (and CREATE_DATA_SOURCE_IN_PROJECT permission) | None | 2024.2 | 2025.2
Legacy onboarding for Snowflake and Databricks Unity Catalog integrations | Connections | 2025.1 | 2025.2

Removed features

The following features have been fully removed from the product.

Feature | Use this alternative feature instead | Deprecation notice | End of life (EOL)
@iam function | None | 2025.1 | 2025.1
Conditional tags | | 2024.3 | 2025.1
Data inventory dashboard | None | 2024.3 | 2025.1
Data Security Framework and compliance frameworks | | 2024.3 | 2025.1
Databricks Runtime 10.4 | | 2025.1 | 2025.1
Decrypt workflow | None | 2024.3 | 2025.1
External masking interface | None | 2023.1 | 2025.1
External policy handler | None | 2024.3 | 2025.1
Identification frameworks | Identifiers in domains | 2025.1 | 2025.1
K-anonymization policies | | 2025.1 | 2025.1
Legacy /audit API | | 2023.3 | 2025.1
Legacy audit self-managed container output | | 2024.1 | 2025.1
Legacy fingerprint service | None | 2024.3 | 2025.1
Legacy sensitive data discovery | | 2023.3 | 2025.1
Managing the default subscription policy | | 2024.2 | 2025.1
Policy adjustments | None | 2025.1 | 2025.1
Policy exemptions | | 2024.3 | 2025.1
Proxy SQL Server integration | None | 2025.1 | 2025.1
Quick create tab | None | 2024.3 | 2025.1
Redshift Okta authentication | | 2024.3 | 2025.1

Breaking changes

Identification and sensitive data discovery replaces orphaned legacy tags: Immuta will remove orphaned tags placed by the legacy sensitive data discovery mechanism.

  • Previously: The legacy sensitive data discovery mechanism was deprecated in September 2023. Tags placed by the legacy SDD mechanism are still visible on columns and can’t be removed - only disabled.

  • New behavior: If you run identification on a data source, tags placed by the legacy SDD mechanism will be removed unless identification places the same tag.
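The removal rule reduces to set difference: legacy SDD tags that the new identification run does not re-place are removed (illustrative sketch; the tag names are hypothetical):

```python
# Illustrative: tags on a column before and after an identification run.
legacy_sdd_tags = {"Discovered.PII.Email", "Discovered.PII.SSN"}
new_identification_tags = {"Discovered.PII.Email", "Discovered.Financial.IBAN"}

# Legacy tags the new run did not re-place are removed...
removed = legacy_sdd_tags - new_identification_tags
# ...while re-placed tags survive under the new mechanism.
kept = legacy_sdd_tags & new_identification_tags

print(sorted(removed))  # ['Discovered.PII.SSN']
print(sorted(kept))     # ['Discovered.PII.Email']
```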

v2025.1 migration notes

  • You must be on Immuta version 2024.2 or newer to migrate directly to 2025.1.

  • Immuta has a new download site for Immuta self-managed software distribution: ocir.immuta.com.

    • Immuta's legacy software registry, registry.immuta.com, has been decommissioned.

    • Releases starting with 2024.3.0 are only available from ocir.immuta.com.

    • ocir.immuta.com will require obtaining a new set of registry credentials. These can be viewed in your user profile at https://support.immuta.com.

  • You must migrate feature flags set using secure.extraEnvVars to global.featureFlags or you will see warning messages from Helm. (Deployment will not be impacted if not updated.)

    • detect feature flag now defaults to true; it no longer needs to be set.

    • AuditService feature flag now defaults to true; it no longer needs to be set.
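In Helm values terms, the flag migration might look like the following sketch (only the secure.extraEnvVars and global.featureFlags keys come from the notes above; the flag and environment variable names are hypothetical):

```yaml
# Before (deprecated): feature flags injected as environment variables.
# secure:
#   extraEnvVars:
#     - name: FEATURE_FLAG_EXAMPLE   # hypothetical variable name
#       value: "true"

# After: declare flags under global.featureFlags instead.
global:
  featureFlags:
    example: true   # hypothetical flag name
```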

  • 6 new Kubernetes deployments have been added to the Immuta Enterprise Helm chart:

    • 5 new pods for Temporal

    • 1 new pod for the Detect Temporal Worker

Microsoft Purview integration in private preview: There is a new standard connector for tag enrichment from the Microsoft Purview enterprise data catalog to Immuta. In addition to Purview tags, Purview system classifications, custom classifications, and managed attributes will be pulled in and applied to registered data sources as either column or data source tags in Immuta.

Collibra Edge auto-linking: Immuta's external catalog integration now supports auto-linking data sources with Collibra Edge. The auto-linking process matches the names of data assets with their corresponding data sources in Immuta.

Connections: Connections are Immuta's improved approach to efficient data object management.

As part of our commitment to delivering the best possible onboarding experience, by September 2025 Immuta will no longer support onboarding Snowflake and Databricks Unity Catalog data sources using the legacy method. Customers using Immuta with Snowflake and/or Databricks Unity Catalog can upgrade to connections as part of the 2025.1 LTS release. Immuta strongly recommends performing the upgrade now, as it will become mandatory in versions 2025.2 and newer. To initiate the upgrade, reach out to your Immuta representative.

Automated connection tags: All data sources registered from connections will now have an automated tag applied that represents the connection. These tags can be used like any other tags in Immuta to build policies, add data sources to domains, or generate reports. Immuta fully manages these tags; they can't be deleted or edited. The tag is formatted as follows: Immuta Connections . The Technology . Your Connection Name . Your Schema . Your Database

Authentication change to accommodate Snowflake moving away from password-only authentication: This release includes updates to the integration setup script to accommodate Snowflake beginning to transition away from password-only authentication for new accounts. When configuring an integration manually for a new Snowflake account, Immuta provides an updated manual setup script that permits password-only authentication by differentiating it as a legacy service with an additional parameter. Existing integrations will continue to function as-is.

Databricks Runtime 14.3 support in private preview: Immuta's Databricks Spark integration now supports Databricks Runtime 14.3. This compatibility enables users to upgrade their Databricks environments while preserving Immuta’s core data governance capabilities.

Databricks Unity Catalog additional workspace connections: This feature allows users to configure additional workspace connections within their Databricks integrations and bind these additional workspaces to specific catalogs. This enables customers to use Databricks’ workspace-catalog binding feature with their Immuta integration. Users can dictate which workspaces are authorized to access specific catalogs, allowing them to better control catalog access and isolate compute costs if desired.

Support multiple Redshift integrations with the same host on a single Immuta tenant: Immuta allows multiple Redshift integrations with the same host to exist on a single Immuta tenant. Users can create multiple Redshift integrations with the same host name, provided that each integration has a different port (which Immuta uses to differentiate each one). This support gives Redshift users the flexibility to use infrastructure setups with multiple Redshift clusters, instead of being limited to using a single cluster.

Improvements to sensitive data patterns used to find and tag data: These improved patterns have higher accuracy out of the box, reducing both overtagging and missed tags. The result is an easier experience and reduced time to value when generating actionable metadata.

Classification UI and Frameworks API is generally available: The frameworks API allows users to create rules to dynamically tag their data with sensitivity tags to drive dashboards and policies. These custom rules and frameworks can then be viewed in the UI and managed through the API.

SDD governance report shows whether tags are used in policy: All governance reports based on identification now have a report column showing whether the tag is used as part of a policy in Immuta.

Column name identifiers: Identification now works to tag data source columns based on column name regexes. Those tags can then be leveraged when building data or subscription policies to grant access to data sources and mask sensitive data. Classification is also supported to place sensitivity tags and classify data further.
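As a rough illustration of how column-name-based identification works in general, matching column names against regexes might look like the sketch below. The patterns and tag names here are hypothetical examples, not Immuta’s built-in identification rules:

```python
import re

# Hypothetical column-name patterns mapped to tags; illustrative only,
# not Immuta's built-in rules.
COLUMN_PATTERNS = {
    r"(?i)^(ssn|social_security(_number)?)$": "Sensitive.SSN",
    r"(?i)^(email|e_mail|email_address)$": "Sensitive.Email",
    r"(?i)^(phone|phone_number|mobile)$": "Sensitive.Phone",
}

def tags_for_column(column_name):
    """Return every tag whose pattern matches the full column name."""
    return [tag for pattern, tag in COLUMN_PATTERNS.items()
            if re.match(pattern, column_name)]

print(tags_for_column("Email_Address"))  # ['Sensitive.Email']
print(tags_for_column("customer_id"))    # []
```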

Connections for Snowflake and Databricks Unity Catalog. To upgrade existing Snowflake or Databricks Unity Catalog integrations, see the Upgrading to connections guide.

  • Create a global subscription policy to apply these tags

  • Use a supported masking policy

  • Export UAM events to S3 or ADLS

  • See the infrastructure recommendations for external container options

  • Create an "Allow individually selected users" subscription policy on all data sources

  • Specify exempted users directly in your policies using the principles of exception-based policy authoring

  • Use one of the alternative authentication methods
