Deployment Notes
Want timely alerts for Immuta deployments?
Subscribe to the Immuta Status page to receive deployment and downtime notifications.
March 2025
March 20
Behavior change
Identification and sensitive data discovery timeout: Starting on April 2, 2025, Immuta queries for identification and sensitive data discovery will have a timeout of 15 minutes. This timeout ensures that there are no long-running queries that block your compute resources and helps to reduce the cost of running identification.
The majority of identification runs complete within 15 minutes. If you expect identification to run longer than 15 minutes, reach out to your Immuta representative to configure a longer timeout window for your tenant.
Running identification on complex views with large amounts of data is more likely to result in timeouts. Immuta recommends running sensitive data discovery and identification on the underlying base tables.
Deprecation
Databricks Runtime 10.4: Support for this Databricks Runtime has been deprecated.
March 18
Data source to domains assignment: This new feature lets you choose how data sources are assigned to your domains: either manually, as has always been possible, or dynamically, which assigns data sources to a domain based on their table tags.
Previously, a user with the GOVERNANCE permission had to manually assign every data source to a domain. With this new feature, the governance user can instead choose a tag, and every data source with that tag will be added to the domain. Dynamic assignment continuously updates the domain so that it contains all data sources with the tag, and only those.
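Conceptually, dynamic assignment is a tag-membership check that re-evaluates as tags change. The Python sketch below is purely illustrative; the function and data structures are hypothetical and not part of Immuta's API.

```python
# Hypothetical illustration of dynamic, tag-based domain assignment.
# Names and structures are assumptions for this sketch, not Immuta's API.

def resolve_domain_members(domain_tag: str, data_sources: dict[str, set[str]]) -> set[str]:
    """Return the data sources whose tags include the domain's assignment tag.

    data_sources maps a data source name to the set of tags on its table.
    Re-running this after tag changes keeps the domain in sync, which is
    what dynamic assignment does continuously.
    """
    return {name for name, tags in data_sources.items() if domain_tag in tags}

# Example: only sources tagged "Finance" end up in the Finance domain.
sources = {
    "analytics.sales": {"Finance", "PII"},
    "analytics.web_logs": {"Marketing"},
    "hr.salaries": {"Finance", "HR"},
}
print(resolve_domain_members("Finance", sources))  # {'analytics.sales', 'hr.salaries'}
```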
March 14
Support multiple Redshift integrations with the same host on a single Immuta tenant: Immuta allows multiple Redshift integrations with the same host to exist on a single Immuta tenant. Users can create multiple Redshift integrations with the same host name, provided that each integration has a different port (which Immuta uses to differentiate each one). This support gives Redshift users the flexibility to use infrastructure setups with multiple Redshift clusters, instead of being limited to using a single cluster.
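As a rough illustration of this setup, the sketch below models two integrations sharing a host but differing by port; the field names are assumptions for the example, not Immuta configuration keys.

```python
# Hypothetical sketch: two Redshift integrations on the same host are
# distinguished by their port. Field names here are illustrative only.

integrations = [
    {"host": "example-cluster.redshift.amazonaws.com", "port": 5439, "database": "analytics"},
    {"host": "example-cluster.redshift.amazonaws.com", "port": 5440, "database": "reporting"},
]

# Uniqueness is keyed on (host, port), so the same host may appear more
# than once as long as each integration uses a different port.
keys = {(i["host"], i["port"]) for i in integrations}
assert len(keys) == len(integrations), "each integration needs a distinct host/port pair"
```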
March 13
Support for Databricks Unity Catalog volumes in public preview: Immuta supports READ and WRITE access controls for volumes in Databricks Unity Catalog. This feature is currently available in public preview for customers using Immuta connections and will be included in the connections upgrade.
March 12
SDD global settings update: The global sensitive data discovery (SDD) enablement setting has been removed from the app settings page and is available by default on all tenants. To run identification on your data sources, add them to an identification framework. If you want SDD to run automatically, add an identification framework to the Global SDD Template field on the app settings page.
Marketplace now allows user-provided IDs when creating data products over the API: The create data products endpoint now accepts an optional ID for the data product. This allows users to share IDs across systems (such as between Collibra and Immuta) and to source control data product definitions, pushing approved changes to the Immuta API.
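A hedged sketch of such a request is shown below; the endpoint path, header, and payload field names are placeholders assumed for illustration, so consult the Marketplace API reference for the actual contract.

```python
# Hypothetical sketch of supplying a user-provided ID when creating a
# data product over the API. Endpoint path, header, and field names are
# assumptions for illustration, not the documented contract.
import requests

payload = {
    "id": "collibra-asset-1234",   # caller-chosen ID shared with another system
    "name": "Customer 360",
    "description": "Curated customer data product",
}

response = requests.post(
    "https://your-tenant.example.com/marketplace/api/data-products",  # placeholder URL
    json=payload,
    headers={"Authorization": "Bearer <API_TOKEN>"},                  # placeholder token
    timeout=30,
)
response.raise_for_status()
print(response.json())
```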
March 11
Databricks Runtime 14.3 support: Immuta's Databricks Spark integration now supports Databricks Runtime 14.3. This compatibility enables users to upgrade their Databricks environments while preserving Immuta’s core data governance capabilities.
March 7
Display last 5 access determinations in Marketplace: When a data steward is making a determination on a request for access, Immuta shows the last 5 approvals and denials. This enhancement assists the data stewards in making an accurate decision, as they can see the prior decisions and the reasoning behind them.
Include account ID in Marketplace URLs: You can append the account identifier to the end of the Marketplace URL. Adding the account identifier can simplify URL redirects to the Marketplace because the user no longer needs to know the account ID to log in.
March 6
Bug fixes
UI tab loading issues: The permissions tab was not loading properly when customers were using connections. This was caused by a custom JavaScript object type coming from a third-party library. UI code has been changed to no longer use this custom object type, which has resolved the issue.
Snowflake object type mislabelling leading to policy lockdown: Some customers using connections for Snowflake saw Immuta send incorrect SQL statements when applying data policies in their environment, such as ALTER TABLE statements against Snowflake objects of type VIEW. Because these SQL statements will always fail to execute, Immuta revoked all users' existing access on the affected objects, since it could no longer guarantee successful application of masking and row-filter policies on those objects.
The issue of mislabelling Snowflake table types was caused by an incorrect query in Immuta’s backend code, leading to erroneous overrides of Snowflake table types in the Immuta internal metadata storage. The query has since been fixed and all affected objects have been updated to contain the correct Snowflake table type.
March 3
Immuta copilot: The Immuta copilot is a policy writing assistant that allows you to describe the data access you want to enforce in plain language; copilot then creates a draft Immuta subscription policy from that description for you to review.
This can be extremely helpful for policy authors who are not familiar with the full list of:
Attributes users possess
Groups users belong to
Tags placed on tables and columns
Logic that can be used in Immuta subscription policies
For more details, view the Immuta copilot demo video or private preview documentation.
February 2025
February 27
Data connections for Snowflake and Databricks Unity Catalog in public preview: Connections is Immuta's enhanced approach to efficient data object management, offering the following benefits:
Reduced complexity by having just one connection in Immuta for both metadata onboarding (pull) and policy application (push) with your data platform
Increased scalability by onboarding all your objects at once instead of in repetitive patterns (such as schema by schema)
Improved performance to manage and track metadata changes
Fully automated metadata change detection
Connections is enabled by default on all new tenants created after February 26, 2025, and is available upon request for tenants created before that date. Reach out to your Immuta support professional to enable it on your tenant. Once enabled, a banner will direct you to the upgrade manager, where you can initiate the process to upgrade any of your existing integrations.
February 25
Column name regex support for Google BigQuery: Sensitive data discovery can now tag data source columns based on column name regexes for Google BigQuery data sources. Those tags can then be leveraged when building data or subscription policies to grant access to data sources and mask sensitive data. Classification is also supported to place sensitivity tags and classify data further.
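For intuition, the sketch below shows how a column-name regex identifier conceptually maps column names to tags; the patterns and tag names are assumptions for the example, not Immuta's built-in definitions.

```python
# Illustrative only: how a column-name regex identifier conceptually maps
# column names to tags. The patterns and tag names are assumptions.
import re

identifiers = {
    r"(?i)^(email|e_mail|email_address)$": "Discovered.Electronic Mail Address",
    r"(?i).*ssn.*": "Discovered.Social Security Number",
}

def tag_columns(column_names: list[str]) -> dict[str, list[str]]:
    """Return the tags whose regex matches each column name."""
    result: dict[str, list[str]] = {}
    for column in column_names:
        matches = [tag for pattern, tag in identifiers.items() if re.match(pattern, column)]
        if matches:
            result[column] = matches
    return result

print(tag_columns(["email_address", "customer_ssn", "page_views"]))
```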
February 20
Column name regex support for Azure Synapse Analytics: Sensitive data discovery can now tag data source columns based on column name regexes for Azure Synapse Analytics data sources. Those tags can then be leveraged when building data or subscription policies to grant access to data sources and mask sensitive data. Classification is also supported to place sensitivity tags and classify data further.
February 18
Marketplace API support: All Marketplace functionality is now available to customers through the Marketplace API.
New built-in patterns: Two new built-in identifiers are available to all customers using sensitive data discovery:
SEC_STOCK_TICKER: This new pattern detects strings consistent with stock tickers recognized by the U.S. Securities and Exchange Commission (SEC).
FINANCIAL_INSTITUTIONS: This new pattern detects strings consistent with the official and alternate names of financial institutions from lists by the FDIC and OCC.
Add these identifiers to your frameworks to start detecting and automatically tagging this data.
February 4
@hasTagAsAttribute() and @hasTagAsGroup() functions for subscription policies in general availability: These functions provide a way to dynamically grant and revoke access to users by doing an exact-match comparison between their user information (attribute or group membership) and the tags applied to data sources or their columns.
Ultimately, these functions can combine the complexity of multiple roles or rules into a single policy that dynamically assigns access based on users’ attributes or group membership. This results in fewer policies to manage overall and a more streamlined approach to data access management, especially for the most complex use cases.
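The sketch below illustrates only the exact-match idea behind these functions; it is not Immuta's implementation, and the hypothetical helpers do not reflect the subscription policy syntax.

```python
# Conceptual sketch only: the exact-match comparison these functions perform.
# This is not Immuta's implementation or its subscription policy syntax.

def has_tag_as_attribute(user_attributes: dict[str, list[str]], object_tags: set[str], key: str) -> bool:
    """True if any value of the given user attribute exactly matches an object tag."""
    return any(value in object_tags for value in user_attributes.get(key, []))

def has_tag_as_group(user_groups: list[str], object_tags: set[str]) -> bool:
    """True if any of the user's group names exactly matches an object tag."""
    return any(group in object_tags for group in user_groups)

user_attrs = {"Department": ["Finance"]}
user_groups = ["Sales", "EU-Analysts"]
tags = {"Finance", "PII"}

print(has_tag_as_attribute(user_attrs, tags, "Department"))  # True: Finance matches a tag
print(has_tag_as_group(user_groups, tags))                   # False: no group matches a tag
```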
January 2025
January 30
Data policy support for foreign tables in Databricks Unity Catalog: Users can apply subscription and data policies to foreign tables in Databricks Unity Catalog.
January 28
Changing the default value for Default Subscription Merge Options (in app settings): Based on customer insights, Immuta has changed the default behavior of how multiple global subscription policies that apply to a single data source are merged.
Prior to this change, the global default had been that users must meet all the conditions outlined in each policy to get access. Now, the global default is that users must only meet the conditions of one policy to get access. This behavior can be configured on the app settings page.
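A minimal sketch of the behavior change, assuming each applicable policy evaluates to a simple boolean for a given user:

```python
# Illustrative only: how the default merge behavior for multiple global
# subscription policies on one data source changed.

policy_results = {           # hypothetical evaluation of each applicable policy for a user
    "policy_region": True,   # user meets this policy's conditions
    "policy_department": False,
}

previous_default = all(policy_results.values())  # user had to meet every policy
current_default = any(policy_results.values())   # user now needs to meet only one

print(previous_default, current_default)  # False True
```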
January 23
Support for masking complex columns as NULL in Databricks Unity Catalog: Users can mask the entire column of STRUCT, MAP, and ARRAY column types in Databricks Unity Catalog as NULL.
January 16
Streamlined Databricks user management with improved handling of external IDs: The default behavior going forward is that a user's external Databricks ID will be updated to None if Immuta attempts to update that user's Databricks access and Databricks returns a response indicating that the targeted principal(s) do not exist. This can be the case if a user is created in Immuta before that user is created in Databricks. Marking external Databricks IDs as None enables Immuta to skip future attempts to update those users' access, which streamlines the tasks that Immuta must process and avoids superfluous errors. Databricks external IDs can be updated manually as needed, either through the user profile or by setting this property to <NO IDENTITY> in the external IAM configuration.
Identifiers in domains: Identifiers can now be segregated by domain to manage which identifiers run on which data sources. Additionally, you can delegate the management of identifiers to specific users by granting them the Manage Identifiers domain permission. Once generally available, this functionality will replace identification frameworks.