The Immuta system account user for the Unity Catalog integration requires the OWNER permission on catalogs with schemas and tables registered as Immuta data sources. This permission allows Immuta to administer Unity Catalog row-level and column-level security controls. To allow for multiple owners, grant OWNER on the catalog to a Databricks group that includes the Immuta system account user. If the OWNER permission cannot be applied at the catalog or schema level, it must be granted individually on each table registered as an Immuta data source.
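In Unity Catalog, ownership is assigned with `ALTER ... OWNER TO`. A minimal sketch of the grants described above, assuming hypothetical catalog, table, and group names:

```sql
-- Catalog-level: transfer ownership to a group that includes the
-- Immuta system account user, allowing for multiple owners
ALTER CATALOG analytics OWNER TO `immuta_owners`;

-- Table-level fallback: if catalog- or schema-level ownership cannot
-- be applied, grant OWNER on each registered table individually
ALTER TABLE analytics.sales.orders OWNER TO `immuta_system_user`;
```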
Uploading a non-existent data source through the databricks/handler API endpoint resulted in a 500 error instead of a 404 error.
After a Redshift integration connection test was successful in the Immuta UI, users encountered an Internal server error when attempting to save the integration settings.
Snowflake lineage was not propagating tags properly to child data sources.
Fixes to address validation test failures when configuring a Redshift integration.
Minor enhancements and fixes that are not user-facing.
Performance improvements when disabling a Snowflake integration.
The Databricks Unity Catalog OAuth certificate field was broken when users attempted to add certificates on the integrations page.
If the token used to configure the Databricks Unity Catalog integration was expired or revoked, applying masking policies to data sources or syncing policies displayed as being successful in the Immuta UI even though the job failed.
Vulnerability: CVE-2023-44270
Snowflake user impersonation roles were being removed incorrectly.
Users can select a light or dark mode theme for the Immuta UI from the user profile menu.
Design improvements of the user profile page.
CVE-2023-45803
CVE-2023-43804
CVE-2023-46136
Minor enhancements and fixes that are not user-facing.
When users attempted to register data sources from two different Starburst (Trino) catalogs, they encountered a remote table validation error if the table and schema names were the same.
Update to the deprecation of the legacy audit UI and /audit API: the EOL was originally set for March 2024 but has been delayed based on customer feedback. Check future release notes for the updated EOL date.
The Databricks Unity Catalog integration supports rotating personal access tokens.
Pages in the UI have a branded Detect footer to signify that they belong to the Detect module.
Fixes related to Databricks Unity Catalog custom certificate authority configuration. This feature is currently in preview and only available to select accounts.
is generally available. This mode improves query performance in Immuta's Snowflake integration by decreasing the number of Immuta creates.
The Databricks Unity Catalog integration supports OAuth token passthrough as an authentication method for configuring the integration and registering data sources. This feature is currently in preview and only available to select accounts.
Fixes to address performance degradation in the Immuta UI.
Vulnerability: CVE-2023-45857
To create a framework to discover data sensitivity using the /framework API endpoint, users must now include the rule.name parameter. This does not affect any current behavior and only applies to newly created frameworks.
Users can configure their Databricks Unity Catalog integration to support their proxy server.
The Databricks Unity Catalog integration supports OAuth token passthrough. This feature is currently in preview and only available to select accounts.
The query editor page has been removed from the product. Users can no longer enable the query editor on the app settings page.
Creating a governance report on all data sources failed for instances with more than 10,000 data sources.
The Immuta CLI returned a 500 error when creating data sources if the payload had an empty string for the columnDescriptions.description parameter.
Schema monitoring did not create or delete views in Redshift Spectrum if data sources were registered through the Immuta V2 API /data endpoint.
If data sources had tags applied through Snowflake lineage and then an external catalog was updated with new tags, the lineage tags were dropped and the new tags were applied to the column.
The /detectRemoteChanges endpoint behaved inconsistently for Snowflake integrations.
Fixes to address a Snowflake table grants issue that caused data source background jobs to fail.
Users can now adjust the audit frequency for and query audit from the app settings page.
The new Load Audit Events button on the events page will load audit events from Snowflake or Databricks into Immuta outside of the scheduled ingestion.
The option to enable the dbt integration has been removed from the Immuta application for new instances.
Databricks Spark project workspaces failed to create for Databricks integrations using .
Minor write policy (private preview) fixes and enhancements.
Attempting to GRANT SELECT on a shared view in Snowflake failed with the following error: UDF IMMUTA_PROD.IMMUTA_SYSTEM.GET_ALLOW_LIST is not secure.
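For context, Snowflake only allows secure views and secure UDFs to be included in a share. The failing operation resembled the following (share, database, and view names are hypothetical):

```sql
-- Hypothetical names; sharing a policy-protected view that
-- references an Immuta-generated UDF
CREATE SHARE partner_share;
GRANT USAGE ON DATABASE analytics TO SHARE partner_share;
GRANT USAGE ON SCHEMA analytics.public TO SHARE partner_share;

-- This statement previously failed with:
-- UDF IMMUTA_PROD.IMMUTA_SYSTEM.GET_ALLOW_LIST is not secure
GRANT SELECT ON VIEW analytics.public.customer_v TO SHARE partner_share;
```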
The data source health check was not running on Snowflake data sources.
Vulnerability addressed: CVE-2023-45133
SDD is enabled by default in all new Immuta tenants.
After editing a Databricks Unity Catalog data source, the configuration could not be saved.
Users encountered this error when disabling Snowflake table grants: Error: Query timed out. The connection information may be incorrect. Please double check and try again.
SDD can now be .
Fixes to address Immuta UI performance issues.
Deprecated items remain in the product with minimal support until their end of life date.
Users could not add all schemas when registering Databricks data sources in the Unity Catalog integration.
Users could not query Starburst data sources registered using OAuth authentication and got the following 400 error: This data source was created using anonymous authentication. Users must now set an when using OAuth or asynchronous authentication to create Starburst data sources.
Schema monitoring was not properly creating new data sources in the Databricks Unity Catalog integration when new tables were detected.
The data source members tab did not display all subscribed users when a subscription policy that used advanced DSL rules with special subscription variables was enforced on the data source.
Vulnerability: CVE-2023-41419
Global subscription policies that used the @hasTagAsGroup or @hasTagAsAttribute variable were not granting and revoking users' access to tables properly. This fix addresses the issue for the Databricks Unity Catalog integration.
The data source details tab UI has been redesigned to consolidate data source connection information and remove the query editor button, the SQL connection snippets, and the copy schema button. This redesign aligns the format of this data source details page with the audit dashboards.
Global subscription policies that used the @hasTagAsGroup or @hasTagAsAttribute variable were not granting and revoking users' access to tables properly. This fix addresses the issue for Azure Synapse Analytics, Databricks Spark, Redshift, and Snowflake integrations.
Databricks Unity Catalog integration: Write your policies in Immuta and have them enforced automatically by Databricks across data in your Unity Catalog metastore.
Fixes to address slow or unresponsive Immuta tenants.
Data source health status warning messages were not properly displayed for views.
Fixes to the Redshift integration configuration to address the impact of a change in the Okta Redshift application, which now requires usernames to have the prefix IAM.
The user profile menu icon is now a user icon instead of the user's first initial.
When an automatic subscription policy using the @hasTagAsAttribute variable was applied to a Snowflake data source, users were not granted access to the table in Snowflake.
Users can override the default storage URI for Databricks Spark project workspaces, so they can create project workspaces against storage in a different location if they have an alternative hostname, DNS, or other requirements.
The schema evolution owner was unset when data sources were removed from a schema project.
Fixes to address Immuta UI performance issues.
Vulnerability: CVE-2023-41037
Immuta allows in the Snowflake and Databricks Unity Catalog integrations. This feature is currently in public preview and available to all accounts.
Syncing a Snowflake external catalog failed on data sources with more than 300 tagged columns.
The local subscription policy builder and project subscription policy builder now align with the format of the global subscription policy builder.
Fix to prevent enabling column detection on derived data sources, as column detection is unsupported for derived data sources.
Vulnerability addressed: CVE-2022-25883
Users can view via the Immuta API to track the number of licensed users.
Users were able to change a schema project owner's role, which could leave Immuta in a state where the schema project could not be deleted.
Fix to address a validate connection error with Snowflake External OAuth.
Vulnerability addressed: CVE-2023-37920
Data source and user activity views for Snowflake are now GA.
All new SaaS accounts will have on by default with the activity dashboards visible to users with the AUDIT or GOVERNANCE permission. Current customers are not affected by this change.
With SDD enabled, users now have access to the Discover tab, where they can view their identification frameworks and adjust the rules within a framework.
When users created an IAM on the app settings page and set immuta as the ID, users could not sign in to Immuta using their Immuta Account on the login screen.
Sensitive data discovery failed to run on data sources that were registered using Snowflake External OAuth.
Redshift validation tests incorrectly required the CREATE ON PUBLIC privilege for the Immuta system account.
If a user other than the data owner navigated to the policies page of a Snowflake or Redshift data source, the activity panel displayed that "undefined" created the data source.
Fix to re-sync automatic subscription policies after schema detection runs on Snowflake tables that use CREATE OR REPLACE.
Vulnerabilities addressed:
Previously, if users did not have a global framework set for sensitive data discovery, Immuta would run all built-in and custom identifiers by default and any new identifiers required no additional action to be run. Now, a global template must be set. A default template is set automatically with all current built-in and custom identifiers. However, any new identifiers you create must be manually added to the global template.
Immuta can pass a client secret to obtain token credentials in the .
External catalog health checks now include a timestamp so that users can easily determine when the catalog last attempted to sync with Immuta.
Fix to address column detection error on Snowflake data sources: TypeError: Cannot read properties of null.
Fix to address audit ingestion failures.
Performance improvements for identity managers with enabled.
Snowflake policies and grants were not properly synced when users performed CREATE OR REPLACE on a table.
If OAuth was used as the authentication method, users encountered an error when creating a data source with schema monitoring enabled or enabling schema monitoring for an existing data source.
Fix to mitigate audit ingestion failures.
Fix to address the impact of a recent Databricks change that caused a NoSuchFieldException error when querying data on Databricks clusters with Unity Catalog enabled.
If whitespaces trailed or prefixed a project name when creating a Google BigQuery data source, the view was not created in Google BigQuery.
The duration of a Databricks Unity Catalog query is available on the Events page.
Immuta governance reports include query records for Snowflake and Databricks Unity Catalog.
Fixes to address Snowflake audit record collection errors.
Vulnerability addressed: CVE-2023-37466
Unity Catalog query audit requires the public preview version of system tables in Unity Catalog to be enabled. Follow the Databricks documentation to .
The data sources overview and user activity dashboards can be used with Databricks Unity Catalog integrations.
Fix to address an issue that caused schema detection and audit record ingestion to fail in Snowflake when using Snowflake External OAuth for authentication.
Immuta data sources were inconsistently linked to the Snowflake external catalog when automatically ingesting Snowflake object tags.
Vulnerabilities addressed:
Members with timed access to a data source in Immuta could still query data in Snowflake after their access had been revoked in Immuta.
If a Snowflake integration was configured with a Snowflake catalog, users could not configure another external catalog because the test connection button remained disabled.
Removing users from a group in Okta did not remove them from that group in Immuta.
User access events from Databricks Unity Catalog are now captured in UAM and can be exported to S3.
User attributes that included a period (.) were not handled properly by Unity Catalog policies.
Fix to address issue that caused some Snowflake audit records to be missing.
: Data access activity from Unity Catalog is audited and can be viewed as Immuta audit logs in the UI or exported.
The example query on the data source overview page for Databricks data sources was missing the catalog, schema, and table name.
Fix to address loading time and error when switching between data source activity monitoring dashboard and other data source tabs.
Multiple data sources could appear to have the same name in the UI because of white space between characters.
Sensitive data discovery customization is now GA: is an Immuta feature that uses sensitive data patterns to determine what type of data your column represents. SDD customization allows for organizations to create and insert their own patterns into SDD which will be recognized and then tagged when found.
Snowflake integration manual installation: After editing a setting on the app settings page (such as the custom login message), the key pair for the Snowflake integration authentication method disappeared when the configuration was saved.
Fix to address an issue with the Databricks Spark integration with Unity Catalog Support that caused an error when creating external tables.
Vulnerability: CVE-2023-32681
Support for configuring data source expiration dates has been deprecated.
Support for the Snowflake integration without Snowflake governance features has been deprecated and will be removed in December 2023.
Support for the legacy Starburst integration has been deprecated. Use the instead.
Tags improvements: Tags now have a details page that provides valuable information about the tag itself and where it is applied within your data environment.
Fix to address the impact of a recent Databricks change that caused a NoSuchFieldException error when querying data in Unity Catalog.
Subscription policies with enhanced variables did not work when Snowflake table grants was enabled.
Vulnerability: CVE-2023-34104
: Monitor data in your Snowflake environment. This feature detects when new tables or columns are created or deleted and automatically registers (or disables) those tables in Immuta for you. Schema monitoring for Snowflake also improves performance of legacy schema monitoring and enhances it by detecting destructively recreated tables (from CREATE OR REPLACE statements), even if the table schema wasn’t changed.
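A destructively recreated table is one replaced in place with CREATE OR REPLACE, which drops and recreates the object in a single statement. A sketch of the pattern schema monitoring now detects, with hypothetical table names:

```sql
-- Drops and recreates the table in one statement; the object is new
-- even when the column definitions are unchanged, so grants and
-- policies on the old table no longer apply
CREATE OR REPLACE TABLE analytics.public.orders AS
SELECT * FROM analytics.staging.orders_raw;
```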
The data sources overview and user activity dashboards can be used with both Snowflake and Databricks integrations together.
The data source overview page shows an icon of the data access technology.
Create a row-level policy using a custom WHERE clause without Immuta validating your custom SQL. Previously, Immuta checked these custom SQL policies by running a query with the WHERE clause in the data platform. For organizations that do not grant Immuta SELECT access to their data platforms, this validation returned an error and locked down the tables. This validation check no longer exists.
With Snowflake table grants enabled, changing a user's attribute through a group updated the Snowflake profiles table to reflect the entitlement changes. However, if a subscription policy specifying that group had already been applied to a data source, the visibility of the table did not change in Snowflake for the user. Instead, users who should have been restricted access from the table could still see that the table existed in Snowflake (but they could not query it to access data). Conversely, users who should have been granted access to the table could not see it.
: SDD automatically discovers and tags your data based on the identifiers it matches but, unlike non-SDD, it does not persist or move any of your data.
Filter the data sources overview dashboard by data platform type (Databricks or Snowflake).
Fix to address the following OpenID Connect login error: type error: cb is not a function uncaught exception detected.
Users could not save their SAML configuration on the app settings page after enabling SAML single log out and received the following error: options.allowIdPInitiatedSLO is not allowed.
: Minimize security risks by enabling SAML single log out, which terminates abandoned sessions after a timeout event occurs or after a user logs out of Immuta, their identity provider, or another application.
Fix to address an issue that caused sensitive data discovery to run on data sources added by schema detection, even if sensitive data discovery was disabled.
: Migrate your data from the Databricks legacy Hive metastore to the Unity Catalog metastore while protecting data and maintaining your current processes in a single Immuta tenant.
The Redshift integration did not properly create views for tables that included column names with special characters. When users queried those views, they received column doesn't exist errors.
When configuring Snowflake object tag ingestion, the connection failed if the host provided was a Snowflake PrivateLink URL.
Vulnerability: CVE-2023-32314
Fix to address a race condition that prevented job clusters from starting properly on Databricks runtimes 9.1 and 10.4.
New tag side sheet: Tag experience has been improved with the addition of tag side sheets, which provide contextual information about tags and can be accessed wherever tags are applied.
An additional 20 UAM audit events are captured and can now be exported to S3. See the full list of supported events on the .
The audit Events page will now show multiple targets for queries that join tables.
Running an external catalog sync did not trigger policy updates when only table tags had changed. If users only added or removed table tags, global policy updates were not applied to data sources.
The data source activity monitoring charts for Snowflake showed the largest value for each data point on the chart rather than the sum of the values.
Data source and user activity monitoring for Snowflake are now public preview and can be used without . Immuta users with Snowflake data sources can use these features to view visualizations of the with no configuration.
Data source and user activity monitoring dashboards can now be filtered by Snowflake database or Snowflake schema.
Snowflake connection validation failed if users created a custom system account role name.
The data source overview and person overview queries charts were identical to the data overview queries chart, no matter what data source or person was selected.
A backend query was modified to improve the response time of the data source and user activity monitoring dashboards.
Deprecated items remain in the product with minimal support until their end of life date.
Support for the interpolated comparison WHERE clause function has been deprecated.
This deployment addresses a SAML login issue discovered in the original deployment on April 17. Consequently, the April 17 release notes entry has been replaced with the content below.
Snowflake integration using Snowflake governance features: Users can create , , , and policies that use masked columns as input.
The enhanced subscription policy variable @hasTagAsAttribute did not unsubscribe users with that attribute from the data source when a matching column tag was removed.
Snowflake table grants did not properly update user subscriptions to data sources if their group in Immuta was renamed and the group name was used in an automatic subscription policy.
Vulnerabilities:
The data source health check button has been removed from the data source health menu. Use instead.
Data source and user activity monitoring dashboards can now be filtered by Snowflake cluster, warehouse, and role.
Performance improvements of the data source monitoring for Snowflake overview dashboard.
Users could not include duplicate tags in a single row-level policy when using the policy builder.
When configuring an external REST catalog, testing the data source link timed out after three seconds, and users received a failed to retrieve data error.
Vulnerabilities:
Tag enhancements are generally available and update various components of the UI.
Snowflake integration: If a group's access was revoked from a data source in Immuta (manually or through a policy), table grants was not issuing revokes in Snowflake for members of the group that lost its subscription status, allowing them to still access that data. However, if low row access policies for Snowflake was disabled, all the rows in the data source were appropriately hidden.
Snowflake external catalog tags were not synced or pulled in to Immuta.
Users could not enable column detection if they had not made all columns visible in the data source during data source creation.
Data source and user activity monitoring dashboards will persist the date range selected for all dashboards in that user's session. Once logged out, the date range will return to default.
When using SCIM to sync an identity manager with Immuta, removing a user from a group in the identity manager did not remove the user from that group in the remote database in the following integrations:
Snowflake
Redshift
. Blocking use of these functions allows you to restrict users from changing projects within a session.
Left navigation UI enhancement. The left navigation includes two tiers and reorganizes several pages:
Data includes the data sources and projects pages.
Vulnerability: CVE-2022-23529
The number of months for of data source and user activity monitoring for Snowflake can be configured from the app settings page.
A single query for multiple data sources will result in a single Snowflake event and appear as one event on the Events page.
The custom date range for data source and user activity monitoring dashboards supports .
The Databricks Spark integration sometimes provided an incomplete list of databases in the Data Explorer UI or in Databricks clusters after running SHOW DATABASES.
Under rare circumstances, a global data policy using a tag failed to apply to some data sources.
User accounts created with IAM integrations using the SAML 2.0 protocol before SCIM was enabled were not updated by SCIM provisioning after SCIM was enabled.
Users can no longer register multiple data sources that reference the same underlying table in their remote data platform. Existing duplicate data sources that point to the same remote table will not be affected by this change; this feature removal only applies to data source creation.
Fix to address the impact of a recent Databricks Data Explorer change that issues the USE CATALOG hive_metastore command on Databricks Runtimes older than 11.x. The Databricks Spark integration now handles this command issued by Databricks Data Explorer.
The Default subscription policy option allows you to choose whether or not a subscription policy will automatically restrict access to tables when they are registered as Immuta data sources. By default, Immuta does not apply a subscription policy on data you register (unless an existing global policy applies to it) so that you can preserve policies applied by your underlying data platform on those tables, leaving existing access controls and workflows intact.
improves query performance in Immuta's Snowflake integration by decreasing the number of Immuta creates.
With data source and user activity monitoring for Snowflake enabled, the Audit tab on the navigation menu defaults to the Events page.
When applying a global subscription policy that uses the @hasTagAsGroup or @hasTagAsAttribute enhanced subscription policy variable (for example, "Allow users to subscribe when @hasTagAsAttribute('AllowedAccess', 'dataSource') on all data sources") to a data source, user access was restricted as expected; however, if the data source tag changed through the Immuta V2 API, access wasn't changed, which could potentially allow users to see data that they shouldn't. Additionally, access wasn't changed if the policy was removed.
Users could not save configuration changes if they enabled Snowflake table grants after creating the integration.
When applying a global subscription policy that uses the @hasTagAsGroup or @hasTagAsAttribute enhanced subscription policy variable (for example, "Allow users to subscribe when @hasTagAsAttribute('AllowedAccess', 'dataSource') on all data sources") to a data source, user access was restricted as expected; however, if the data source tag changed, access wasn't changed, which could potentially allow users to see data that they shouldn't. Additionally, access wasn't changed if the policy was removed.
Users were able to query system tables in the query editor by using some specific Postgres functions.
Users can no longer set schema to null when bulk updating data sources using the .
is generally available. Let Immuta manage privileges on your Snowflake tables instead of manually granting table access to users. With Snowflake table grants enabled, Snowflake Administrators don't have to manually grant table access to users; instead, Immuta manages privileges on Snowflake tables and views according to the subscription policies on the corresponding Immuta data sources.
: Immuta’s Starburst integration v2.0 allows you to access policy-protected data directly in your Starburst catalogs without rewriting queries or changing your workflows. Instead of generating policy-enforced views and adding them to an Immuta catalog that users have to query (like in the legacy Starburst integration), Immuta policies are translated into Starburst rules and permissions and applied directly to tables within users’ existing catalogs.
is released for private preview. Detect is a tool that monitors your data environment and provides analytic dashboards in the Immuta UI based on audit information of your data use.
Deprecated items remain in the product with minimal support until they are removed from the product.
External masking
Snowflake, Redshift, and Azure Synapse integrations:
If a was applied to a data source and a user updated a (create, update, delete) that also applied to that data source, the data policy was not applied to the data source. Consequently, a user querying that table could see values of masked columns in plaintext.
If an existing and an existing applied to the same data source, then modifications to that data source (or the creation of a new data source targeted by those policies), only the global subscription policy was applied to the data source. Consequently, a user querying that table could see values of masked columns in plaintext.
Editing a schema project to point to a database that already exists fails.
Fix to ensure the hasTagAs policy applied correctly. If users did not initially have the attribute specified when the policy was created, they were not granted access to the data source when they were later given the specified attribute.
Vulnerability: CVE-2023-43804
Query editor
September 2023
October 2023
Non-native sensitive data discovery (Use instead.)
September 2023
January 2024
Snowflake integration with low row access policy mode disabled (Follow this to enable low row access policy mode. You must also .)
September 2023
March 2024
CVE-2021-46708: Immuta no longer publishes the Swagger API, which removes the ability to exploit this vulnerability. Although the affected library is a downstream dependency of a package Immuta uses, the library that contains the vulnerability is not used by Immuta.
CVE-2023-37920
CVE-2023-38704
CVE-2022-25883
CVE-2023-36665
CVE-2023-29199
CVE-2023-0842
CVE-2023-29017
This issue could allow that user to retain access to data if they were removed from a group that was granted access by a policy.
If an Advanced DSL policy used the @columnsTagged function and the policy had multiple conditions, all users were restricted from seeing data.
Unity Catalog clusters: A breaking change in Databricks caused a wrong number of arguments error when users ran Unity Catalog queries.
When Databricks query plans for tables registered in Immuta were too large, Immuta could not process the audit record.
Vulnerabilities:
CVE-2023-24807
CVE-2023-28154
People includes the admin page.
Policies includes the subscription policies and data policies pages.
Support for Databricks Runtime 11.3 LTS.
When executing the Immuta Data Security Framework, the status of the classification job for individual data sources can now be found in the data source health dropdown. The options include the following:
Classification complete: Classification has run on the data source and applied the appropriate classification tags.
Classification pending: A framework has been created, activated, or updated and will run on the data source.
Classification is not applicable: The data source is not affected by classification.
With data source and user activity monitoring for Snowflake enabled, users without AUDIT permission were brought to an empty overview dashboard when logging in.
Classification for query sensitivity is now dynamic. For a query that joins tables, Immuta uses the same classification rules applied to tables and applies those rules to columns of the query. Immuta applies a new set of classification tags to the query columns and calculates sensitivity for the query event in the audit record. These query classification tags are not included on the tables' data dictionary.
Users could not save configuration changes if they edited an existing Snowflake integration.
Detect pages with over ten thousand (10,000) results returned an error. A notification now indicates that only ten thousand (10,000) of the results are available, with a recommendation to refine the page by filter or search.
Vulnerabilities:
CVE-2022-32149
CVE-2022-23491
Vulnerabilities:
CVE-2022-23529
CVE-2022-40899
Legacy audit UI and /audit API
September 2023
October 2024
Legacy Databricks SQL integration (Use the Unity Catalog integration instead.)
September 2023
March 2024
Discussions tab on projects and data sources
September 2023
March 2024
HIPAA Expert Determination
September 2023
March 2024
If users registered tables from the same schema as Immuta data sources, users could break data sources they didn't own if they deleted or changed the schema project connection.
The Databricks Unity Catalog integration configuration on the App Settings page asked for an "Instance Role ARN" instead of the "Instance Profile ARN."
Users were unable to add data sources from the Hive Metastore in the Databricks Spark integration with Unity Catalog.
Databricks Spark Integration with Unity Catalog Support: Enable Unity Catalog support on Immuta clusters to use the Metastore across your Databricks workspaces and enforce Immuta policies on your data. This integration provides a migration pathway for you to add your tables in Unity Catalog while using Immuta policies. Consequently, when additional Unity Catalog features are available, you will be ready to use them. Databricks SQL policies will continue to be enforced through a view-based method, and interactive cluster policies through the Immuta plugin method.
Databricks Runtime 11.2 support.
Write Fewer, Simpler ABAC Policies. Enhanced Subscription Policy Variables (Public Preview) empower users to write fewer, simpler ABAC (Users with Specific Groups/Attributes) policies. Previously, policy writers had to specify groups in separate policies to grant access. With Enhanced Subscription Policy Variables, Immuta's policy engine compares users' groups with data source or column tags in a single policy to determine if there is a match. Users who have a group that matches a tag on a data source or column will be subscribed to that data source.
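The group-to-tag matching that Enhanced Subscription Policy Variables perform can be illustrated with a small conceptual sketch. This is not Immuta's actual implementation; the function and data names are hypothetical, and it only shows the idea of a single policy matching users' groups against data source tags:

```python
# Conceptual sketch (names are hypothetical, not Immuta's API): a user is
# subscribed to a data source when one of their groups matches a tag on
# that data source or its columns.
def matching_sources(user_groups, data_sources):
    """data_sources maps a source name to the set of tags on it."""
    return [
        name
        for name, tags in data_sources.items()
        if any(group in tags for group in user_groups)
    ]

sources = {
    "hr.salaries": {"HR", "Sensitive"},
    "sales.orders": {"Sales"},
}
# A user in the HR group matches the HR tag on hr.salaries.
print(matching_sources({"HR", "Engineering"}, sources))  # ['hr.salaries']
```

A single policy of this shape replaces the many per-group policies that previously had to enumerate each group explicitly.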
Immuta supports registering data sources that exceed 1600 columns. However, sensitive data discovery and health checks will not run on those data sources.
The maximum length for the Snowflake role prefix when using Snowflake table grants is 50 characters.
Users cannot enable or disable impersonation when editing a previously configured integration.
Alternative owners of data sources were not included in the subscription audit records if the data source was created using the Immuta V2 API.
Snowflake Table Grants: If a user who was added to a Snowflake data source through a group Subscription Policy was removed from a data source, that user could see the columns (without any data) of the table when they queried that data in Snowflake.
When users edited a Snowflake integration configuration and changed the authentication type to Snowflake External OAuth, the configuration was still saved as Username and Password for the authentication type.
Vulnerability: CVE-2022-39299
Editing a schema project to point to a database that already exists fails.
The following UI elements and workflows have been removed. Reach out to your Immuta representative if you need one of these elements re-enabled.
Data source Metrics tab.
Data source Queries tab.
Creating data sources with a SQL statement.
Selecting specific columns to hide when creating a data source in the UI or V2 API.
Tag enhancements (public preview): The tag enhancements feature will improve user experience by updating various components of the UI.
Azure Synapse Analytics: If a user was granted access to about 1300 data sources, access to those tables was delayed.
Deleting an integration on the App Settings page and saving the configuration caused the Immuta UI to crash.
Editing a schema project to point to a database that already exists fails.
Collibra integration performance improvements.
Immuta's Collibra integration recognizes the implicit relationship between the Database View in Collibra and Immuta data source columns so that tags are properly applied to those columns in Immuta.
The Immuta V1 API /dataSource endpoint returns the remote table name so that users can get the schema and table name of a data source in one API call.
The data source Relationships tab only displayed up to 10 associated projects.
If creating the Immuta database failed in the Snowflake without Snowflake Governance Controls or Databricks SQL integration, the error returned was incorrect.
Removed historical schema monitoring metrics that contained database connection strings.
Subqueries that referenced a table that didn't exist never resolved.
Policies:
Disabling a Global conditional masking policy on a data source could sometimes disable all policies or none of the policies on the data source.
If users submitted a Global Policy payload to the API that was missing the subscriptionType from the actions, the Global Policies page broke when trying to display Subscription Policies.
Redshift:
Users were unable to query tables that had a policy with a Limit usage to purpose(s) <ANY PURPOSE> applied to them.
There were error-handling inconsistencies between the Immuta UI and the database logs.
Vulnerabilities:
CVE-2022-3517
CVE-2022-3602
Editing a schema project to point to a database that already exists fails.
Deleting a tag hierarchy deleted any tags with a like name. For example, deleting the tag department would also delete the tag department_marketing.
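The bug above is a classic plain-prefix-match pitfall. A hedged sketch of the corrected behavior (assuming, as the example suggests, that hierarchy levels are delimited by a separator such as a period): deletion should match only the tag itself or its delimited children, never unrelated tags that merely share a string prefix.

```python
# Illustrative sketch of delimiter-aware hierarchy matching (the delimiter
# choice is an assumption for illustration). A plain tag.startswith(root)
# check would wrongly match "department_marketing" when deleting
# "department"; requiring the delimiter avoids that.
def in_hierarchy(tag, root, delimiter="."):
    return tag == root or tag.startswith(root + delimiter)

tags = ["department", "department.marketing", "department_marketing"]
to_delete = [t for t in tags if in_hierarchy(t, "department")]
print(to_delete)  # ['department', 'department.marketing']
```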
The Refresh External Tags button appeared on the Tag page even if no external catalogs were configured.
Users couldn't change the schema detection owners for schema projects.
Collibra: If multiple values were assigned to an attribute in Collibra, they were added as a single tag in Immuta. For example, if an attribute list called Color contained values Blue, Green, and Yellow, and Blue and Green were selected in Collibra, Immuta displayed the data tag as Color.Blue,Green. Instead, Immuta should have created two tags: Color.Blue and Color.Green.
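The expected mapping from the Collibra example can be sketched in a few lines. The function name is hypothetical; it only illustrates producing one tag per selected attribute value instead of a single combined tag:

```python
# Illustrative sketch: each selected Collibra attribute value should become
# its own Immuta tag (Color.Blue, Color.Green), not one combined tag
# (Color.Blue,Green).
def attribute_to_tags(attribute_name, selected_values):
    return [f"{attribute_name}.{value}" for value in selected_values]

print(attribute_to_tags("Color", ["Blue", "Green"]))
# ['Color.Blue', 'Color.Green']
```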
Webhooks that were listening to setUserAuthorizations were not triggered.
Deleting a Data Policy did not enable the Save Policy button.
With Approve to Promote enabled, adding a comment to a policy did not enable the Save Policy button.
Editing a schema project to point to a database that already exists fails.
Use the latest Databricks Runtime with Immuta. Databricks Runtime 11.0 is now supported in Immuta.
Connect Snowflake data to Immuta without providing your account credentials. Immuta supports Snowflake External OAuth as a non-password authentication mechanism when configuring the Snowflake integration or creating Snowflake data sources.
Let Immuta manage privileges on your Snowflake tables instead of manually granting table access to users. With Snowflake table grants enabled, Snowflake Administrators no longer have to manually grant table access to users; instead, Immuta manages privileges on Snowflake tables and views according to the subscription policies on the corresponding Immuta data sources.
Ensure that policies are adequately reviewed and approved before they are eligible for production environments. Instead of creating policies directly in production, Approve to Promote allows policy authors to create, assess, and revise policies in a policy-authoring environment. Then, the policy must be approved by a configured number of users before it is promoted to the production environment and enforced on data sources.
The undocumented deletedHandlerSubscribers attribute, which indicates a subscription policy changed, was removed from the data source notifications webhook payload. If you were depending on that attribute in your customized webhooks, that code won't work.
IAMs:
Microsoft Entra ID: When SCIM was enabled for Microsoft Entra ID, sometimes user attributes were removed from users in Immuta when they should not have been.
Policies:
Global Subscription Policies that were applied “When selected by data owners” could not be deleted when using Approve to Promote.
If a Global Subscription Policy was disabled for a data source, staging that Global Policy on the policies page caused the Subscription policy to change on the data source.
Local Policies using @columnTagged() were not properly applied to data in Databricks when the column was tagged.
Projects:
Project owners could not edit projects with approved purposes and data sources.
The baseline percent null values could not be adjusted for k-anonymized columns on the Expert Determination tab in projects.
Snowflake:
Instances that used the Snowflake integration without Snowflake Governance features were sometimes automatically migrated to using Snowflake Governance features when Immuta upgraded.
Vulnerability:
CVE-2022-25647
Tags sometimes did not update on data sources if those tags were quickly added or removed, which could cause policies to not be updated.
The data source page sometimes took several minutes to load if there were over 100,000 data sources registered in Immuta.
If a user was a member of a large number of groups (about 2,000), the UI search was sometimes slow.
When searching for data sources on an instance with over 30,000 data sources and tables with complex struct columns, the search could take several minutes to return or freeze the Immuta tenant.
An Adobe Font requirement caused timeout issues in the Immuta UI.
Editing a schema project to point to a database that already exists fails.
Application Admins can enable policy adjustments separately from HIPAA Expert Determination on the App Settings page.
Snowflake Integration:
Schema detection caused non-date columns to be incorrectly tagged "New" for data sources that were added in bulk.
Migrating from a Snowflake Using Snowflake Governance Controls integration to a Snowflake Without Using Snowflake Governance Controls integration failed.
Enabling a Snowflake Using Snowflake Governance Controls integration using the automatic setup method failed.
Sensitive Data Discovery did not automatically run when users bulk created data sources.
If Immuta was unable to communicate with an external IAM provider because of a connection failure, groups were removed from Immuta, even if the IAM was still active.
When creating 100,000 tables, the data source creation job sometimes expired.
User Admins could not delete attributes assigned to an Immuta Accounts user.
After configuring SAML and OpenID IAMs, users could not initially log in.
In Databricks Runtime 10.4, SHOW PARTITIONS commands on Delta tables failed.
Users were unable to edit Global policies that were not on the first page of results.
Automatic Subscription policies could cause out of memory issues if they added about 300 users to a data source.
Editing a schema project to point to a database that already exists fails.
Project owners are unable to edit projects with approved purposes and data sources.
IAM Signing Certificate Required for SAML. You are required to upload your IAM signing certificate to Immuta to add or edit SAML-based IAMs. If you are already using Immuta's SAML integration, provide a signing certificate to existing configured IAMs for them to continue working.
In the Snowflake Governance features integration, unmasked data was sometimes visible for a fraction of a second while data policies were being applied.
Databricks user impersonation did not work if backticks enclosed the username.
Clicking the Sync User Metadata button in the Immuta UI could queue an infinite number of profile refresh background jobs.
The enriched audit logs caused an error if data policies did not exist on a data source.
The attributes type for users was inconsistent with policy attributes type in the audit logs.
Advanced Subscription Policies: If an advanced Subscription policy that did not contain special variables was created, customers with over 100,000 users could experience OOM issues.
Okta/SCIM: When adding users to Okta to sync with Immuta, TypeError: attributeValues is not iterable appeared in the logs.
LDAP users with parentheses in their common name caused authentication to fail when group sync was enabled.
Editing a schema project to point to a database that already exists fails.
Project owners are unable to edit projects with approved purposes and data sources.
Databricks Runtime 10.4: SHOW PARTITIONS on Delta tables fails.
Access background jobs with enhanced visibility. This feature surfaces background job details so you can debug issues and identify their cause.
Use the latest Databricks Runtime with Immuta. Databricks Runtime 10.4 LTS is now supported in Immuta.
Prove compliance with Databricks audit trails that include denial events. When Immuta users query Databricks tables that have been registered in Immuta, the query audit logs will include denial events and the policies associated with the decision. Such audit trails are required by some information security teams to prove compliance with secure data access.
Snowflake:
Share policy-protected data in Snowflake with other Snowflake accounts using Snowflake Data Sharing. This integration allows you to author policies in Immuta and protect data shared with other Snowflake accounts in real time. For example, if a pharmaceutical company needed to share trial results outside their Snowflake account and needed to protect PHI, they could share that data outside their account and still have Immuta policies enforced.
Removed features are no longer available in the product.
Advanced rules DSL for data policies (deprecated 2022.1.0; removed 2022.2)
Differential privacy (deprecated 2022.1.0; removed 2022.2)
The custom / external policy handler (deprecated 2022.1.0; removed 2022.2)
Policy export/import (deprecated 2021.4)
Alternative solution: As an alternative to the policy export/import feature, use the Immuta CLI to clone your Global policies.
Creating a policy using the Advanced DSL Data policy builder in the view-based Snowflake integration sometimes caused errors.
When a user's entitlements changed, Immuta did not properly notify the integration to GRANT or REVOKE access to tables in the remote system.
Entering a single quotation mark in the search bar sometimes caused an error.
After an Alation or Collibra catalog was configured, new data sources were not linked to the catalogs automatically.
Logging in to Immuta after being logged out due to inactivity sometimes displayed a blank page.
Local policies sometimes appeared on the Global policies page.
Activity panel covered the policy builder when long SQL statements were entered for conditional policies.
Clicking the Policies icon in the left sidebar while editing a Subscription policy displayed an empty Data Policy Builder instead of the Policies page.
When configuring an External REST Catalog, users could not click the Test Connection button if the No Authentication option was selected.
The Immuta login page did not display for some older browser versions of Edge.
LDAP users with parentheses in CN cause authentication to fail if group sync is enabled.
Databricks Runtime 10.4: SHOW PARTITIONS on Delta tables fails.
The visual styles in the application have been updated.
Users can add multiple alternative owners to data sources at once.
Users can now specify column tags instead of just data source tags with the @hasTagAsAttribute Enhanced Subscription Policy variable.
Policy import/export
When attributes were added to groups that affected an Automatic Subscription policy, users were added or removed from the data source(s) appropriately, but these changes were not audited.
Deleting the last values or all values from user or group attributes caused errors when processing Automatic Subscription policies.
Local policies that were created or updated sometimes displayed on the Global Policy page.
Writing a Global ABAC Subscription policy using @username in the Advanced DSL builder did not subscribe the user to the data source.
Changing a Global Allow Individually Selected Users Subscription policy back to a Global ABAC policy that used special functions caused an error: Error: "actions[0].exceptions.conditions[0]" does not match any of allowed types.
If a policy was added through the Immuta CLI, editing that policy in the Immuta UI sometimes caused an error.
After being added to a data source through an Automatic Subscription policy, users sometimes encountered an error when making unmasking requests.
Creating a Global conditional masking policy in the Advanced DSL builder that used @iam or @username caused an error when the policy was applied to a data source.
Redshift:
Regex masking policies that used metacharacters with backslashes (\d, \s, etc.) did not mask columns.
Users' metadata was not updated in the integration if their usernames contained apostrophes.
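The regex masking fix above concerns patterns with backslash metacharacters. A minimal sketch of the expected behavior, using Python's standard `re` module purely for illustration (Immuta's actual policy enforcement happens in Redshift, not in Python):

```python
import re

# Illustrative only: a regex masking policy using backslash metacharacters
# such as \d should redact every matching character. The bug left columns
# with such patterns unmasked.
def regex_mask(value, pattern, replacement="*"):
    return re.sub(pattern, replacement, value)

print(regex_mask("card 4111-2222", r"\d"))  # 'card ****-****'
```

Note the raw string (`r"\d"`): in any pipeline that forwards patterns to another system, preserving the backslash intact is exactly the kind of detail this class of bug hinges on.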
Enhanced Subscription Policy Variables (Public Preview): This feature empowers users to write fewer, simpler ABAC (Users with Specific Groups/Attributes) policies. Previously, policy writers had to specify user attribute keys in separate policies to grant access. With Enhanced Subscription Policy Variables, Immuta's policy engine compares user attributes with data source properties (database, host, schema, table, or tag) in a single policy to determine if there is a match. When attribute keys match the property specified, users will be able to subscribe to the data source(s).
Snowflake Table Grants: With this feature enabled with the Snowflake with Governance Controls integration, Snowflake Administrators no longer have to manually grant table access to users; instead, Immuta manages privileges on Snowflake tables and views according to the subscription policies on the corresponding Immuta data sources.
Improved performance of auto-subscription policies.
If an SSL CA cert was used when setting up an LDAP IAM, clicking the Test LDAP Sync button resulted in an error.
Tags were removed from data sources if they were applied after data source creation and before the external catalog health check (which is triggered by navigating to the data source). However, tags applied to a data source during creation remained on the data source.
Group permissions were not considered when users attempted to create data sources or Global Policies. For example, if a user was a member of a group that had the GOVERNANCE permission assigned to it, that user was not inheriting the GOVERNANCE permission. Consequently, when that user tried to apply a Global Policy to a data source, they received an error. However, if a user had the GOVERNANCE permissions applied to their account directly, they were able to create a Global Policy. This same behavior occurred with the CREATE_DATA_SOURCE permission.
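The permission-inheritance fix above can be summarized as a set union. This is a conceptual sketch with hypothetical names, not Immuta's implementation: a user's effective permissions should combine their direct permissions with those of every group they belong to.

```python
# Conceptual sketch: effective permissions = direct permissions plus the
# permissions of each group the user belongs to. The bug considered only
# direct permissions, so group-granted GOVERNANCE or CREATE_DATA_SOURCE
# was ignored.
def effective_permissions(direct, user_groups, group_permissions):
    perms = set(direct)
    for group in user_groups:
        perms |= group_permissions.get(group, set())
    return perms

group_perms = {"governors": {"GOVERNANCE"}, "creators": {"CREATE_DATA_SOURCE"}}
print(effective_permissions(set(), ["governors"], group_perms))  # {'GOVERNANCE'}
```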
Creating an Immuta data source from a Databricks view that contained an implicit column alias failed.
Editing a schema project to point to a database that already exists fails.
The App Settings page freezes when a user selects Migrate Users from BIM when configuring an external IAM.
An auto-subscription policy that adds more than 64,000 users to a data source can cause errors in the logs and impact subscription reports.
Under certain load conditions, integration jobs can end up in an expired state even if they are successfully processed.
Edit configuration for integrations: Users can edit the configuration for Azure Synapse, Databricks SQL, Redshift, and Snowflake without disabling the integration.
Manual approvals in ABAC global subscription policies: Governors can now add an approval workflow as an alternative method of access to data sources if a user does not meet the conditions of the Users with Specific Groups/Attributes (ABAC) Global Subscription Policy.
Before this release, if someone was manually added by an owner or Governor and didn’t meet the ABAC policy requirements, they could query the table, but no rows would come back because they didn’t have the groups or attributes specified in the policy. Now, manually adding users overrides the ABAC policy. Therefore, any users who had been manually subscribed to a data source but could not see any data will see data after this upgrade. You can prevent this behavior by either switching the Subscription policy to auto-subscribe (which removes users who don't meet the Subscription policy) or adding a Data Policy that redacts rows for users who do not have the groups or attributes specified in the Subscription policy.
If users have existing Global Subscription policies that were combined, those will not change on the data source after the upgrade. However, the Require Manual Subscription option will automatically be enabled on those existing policies, so users who meet the conditions of the policy will not be automatically subscribed.
Sensitive data discovery global template and default sample size UI (public preview): Users can adjust these configurations on the App Settings page. If users already had a Global Template or default sample size configured in the Advanced Configuration section, these configurations will migrate to the new Sensitive Data Discovery section on the App Settings page when they upgrade their Immuta tenant.
Starburst (Trino) integration: Through this integration, Immuta applies policies directly in Starburst so that users can keep their existing tools and workflows (querying, reporting, etc.) and have per-user policies dynamically applied at query time.
Support for PrivateLink with Snowflake on AWS: Contact Immuta to enable this feature.
"Active" tags on merged Share Responsibility Global policies did not show the active number of data sources they were enforced on.
The configuration section for project workspaces could break if a handler was not enabled.
Databricks:
If a table in Databricks had been created from an AVRO schema file, queries against the table on Immuta-enabled clusters only returned results for partition columns. Additionally, trying to create tables from an AVRO schema file on Immuta-enabled clusters returned an error: "Unable to infer the schema."
Fixed Databricks init script error handling when artifacts weren't downloading correctly.
Errors occurred when using mlflow.spark.log_model on non-Machine Learning clusters.
Because Immuta's built-in identity manager (BIM) is not enabled in SaaS, the App Settings page froze when a user selected Migrate Users when configuring an external IAM.
Redshift integration performance issues related to Python UDF concurrency capabilities.
Snowflake:
When enabling a Snowflake integration with an external catalog, if the host had multiple periods in the account name, the Snowflake plugin was invalid.
When users tried to edit the Excepted Roles/Users List for the integration, the configuration saved correctly. However, when the App Settings page refreshed, the Excepted Roles/Users List was empty and the allow list in Snowflake was not updated.
Editing a schema project to point to a database that already exists fails.
Project owners are unable to edit projects with approved purposes and data sources.
Disable query engine: Application Admins can disable the Query Engine on the App Settings page.
New Immuta UI: Although the most significant change is the adjustment to the visual styles in the application, other UI changes include an expandable left navigation and dark mode support.
Support for AWS-Sydney.
Databricks init script: To use the updated Immuta init script and cluster policies, existing SaaS users must update their Databricks cluster configuration following this Manually Update Your Databricks Cluster guide.
Databricks:
Views: Although users could create views in Databricks from Immuta data sources they were subscribed to, when users tried to select from those views, they received an error saying that the Immuta data source the view was created against did not exist or that they did not have access to it.
External Delta Tables: Querying an external Delta table that had been added as an Immuta data source as a non-admin resulted in a NoSuchDataSourceException error if the table path had a space in it.
Sensitive Data Discovery failed for Databricks data sources when initiated in the UI if the cluster was configured to use ephemeral overrides.
The integration did not work with the Databricks Runtime 9.1 maintenance update.
Ephemeral Overrides:
The UI was not displaying the checkbox to apply the ephemeral override to multiple data sources.
Ephemeral overrides were not being used when calculating column detection.
Out of memory errors occurred when several actions or jobs ran simultaneously, such as
Bulk disabling data sources
Bulk creating data sources
Sensitive Data Discovery: Users could not configure sampleSize to override the default number of records sampled from a data source.
Snowflake Governance Features Integration: When a data source existed in Immuta but not in Snowflake and a user tried to refresh the policies, Immuta continuously retried to update the policies and then failed with the following error: Execution error in store procedure UPSERT_POLICIES: SQL compilation error: Table does not exist or not authorized.
Vulnerabilities
CVE-2022-0355: Information Exposure in simple-get
CVE-2022-0235: Information Exposure in node-fetch
Databricks: Errors occur when using mlflow.spark.log_model on non-Machine Learning clusters.
Editing a schema project to point to a database that already exists fails.
Because Immuta's built-in identity manager (BIM) is not enabled in SaaS, the App Settings page freezes when a user selects Migrate Users when configuring an external IAM.
This feature was also announced in the Immuta Changelog.
Deprecation announcement for IP filtering for SaaS tenants: Support for the IP filtering feature for SaaS tenants has been deprecated and will reach end of life on March 31, 2026. This feature is being replaced by the Immuta SaaS private networking offering, which addresses the same security concern by allowing you to control who has access to your tenant's UI and APIs.
For assistance with setting up Immuta SaaS private networking, please contact your Immuta representative.
This deprecation was also announced in the Immuta Changelog.
Immuta integration with MySQL: Immuta now directly integrates with MySQL hosted on Amazon Relational Database Service (RDS) or Amazon Aurora. This enhancement means you can now
Create Marketplace data products with data sources residing in your RDS or Aurora MySQL environments for unified access and discovery.
Allow data consumers to easily request access to these securables directly through the Immuta Marketplace app.
Enable approvers to efficiently manage access requests and provision access to your MySQL data safely and effectively, which simplifies management and auditing.
This integration extends Immuta’s industry-leading data governance capabilities to your key MySQL deployments in the AWS cloud.
See a demo of this new feature in the Immuta Changelog.
Behavior change release enabled for all accounts: Support for Databricks Delta Sharing
Immuta’s support for subscription policies on shares from Databricks Delta Sharing is now enabled for all accounts. This allows you to manage granting and revoking access to these securables safely and efficiently.
The release of this feature followed Immuta’s behavior change release process.
Opt-in period: 10/3 - 11/2
On by default: 11/3 - 12/2
Enabled for all accounts: 12/3
This feature was also announced in the Immuta changelog.
OAuth support for Azure Synapse Analytics integration: Immuta now supports OAuth as an additional authentication method in the Azure Synapse Analytics integration. Users can configure an Azure Synapse Analytics integration in Immuta using OAuth with a client secret.
This feature was also announced in the Immuta changelog.
Behavior change release enabled for all accounts: Improved query audit for Databricks Unity Catalog
Immuta’s Unity Catalog improved query audit is now enabled for all accounts. The Databricks Unity Catalog audit ingestion query now uses the system.query.history table from Databricks. This foundational change significantly improves the match rate between Unity Catalog query audit records and Immuta-managed data sources, delivering more accurate and valuable insights.
What's new
There's now a better linkage between queries run in Unity Catalog and their associated data sources in the Immuta audit dashboards and reports, providing a clearer, more reliable picture of data access usage within Unity Catalog.
In addition to better linkage, the rowsProduced parameter is now also getting populated for Unity Catalog query audits.
The following parameters are no longer populated, as they are not available in the system.query.history table: userAgent, requestId, clientIp.
The release of this feature followed Immuta’s behavior change release process.
Opt-in period: 10/1 - 10/31
Opt-out period: 11/1 - 11/30
Enabled for all accounts: 12/1
Note: If you were using Immuta's Databricks UC audit feature without the required privileges for this change, Immuta disabled the audit ingest feature.
This feature was also announced in the Immuta changelog.
Subscription policy support on Amazon Redshift tables: Immuta now supports subscription policies on Amazon Redshift tables through an enhanced Amazon Redshift integration.
With this release, users can provision data to their consumers by using Immuta subscription policies or the Immuta Marketplace. Users’ access will be automatically updated and enforced in Amazon Redshift based on what is dictated in Immuta.
To begin using this integration, contact your Immuta representative.
See a demo of this new feature in the Immuta changelog.
Marketplace notifications inbox: You can now view all your notifications directly within the Marketplace app by navigating to your notifications tab. This page displays the same email and Slack notifications you receive, directly in the app, providing a single, convenient view of your entire notification history. This feature was also announced in the Immuta changelog.
Databricks Unity Catalog integration validation improvements: Immuta has made backend improvements to the validations run that verify the health of Databricks Unity Catalog integrations. These improvements were released to all customers on November 25 and can result in significant cost savings. Your specific cost savings depends on how you've configured your Immuta and Databricks environments, such as your Databricks warehouse size and Immuta audit ingest interval. This change was also announced in the Immuta changelog.
Behavior change release enabled for all accounts: Fine-grained access controls for Databricks Unity Catalog materialized views and streaming tables
Immuta’s support for fine-grained access controls on Unity Catalog materialized views and streaming tables is now enabled for all accounts. This feature allows you to apply row filtering and column masking policies on Unity Catalog materialized views and streaming tables.
Since these securables now support Immuta subscription and data policies, you gain enhanced security and flexibility with consistent policy enforcement across your Unity Catalog ecosystem.
The release of this feature followed Immuta’s behavior change release process.
Opt-in period: 9/18 - 10/20
On by default period: 10/21 - 11/20
Enabled for all accounts: 11/21
This feature was also announced in the Immuta changelog.
Slack notifications default setting: Admins can now control default Slack notification preferences for all Immuta Marketplace users. Application admins can choose whether Slack notifications are on or off by default across a workspace for each Slack integration.
If enabled, all users are automatically opted in to receive real-time Marketplace notifications in Slack, without any individual user action.
This feature was also announced in the Immuta changelog.
Support for PostgreSQL materialized views: Immuta now supports provisioning access to PostgreSQL materialized views with subscription policies. This enhances the existing subscription policy support for PostgreSQL standard tables and views, streamlining access provisioning and governance with Immuta.
This feature was also announced in the Immuta changelog.
Viewless integration with Amazon Redshift: Immuta has begun releasing an enhanced version of the Amazon Redshift integration. Instead of creating Immuta-managed views, Immuta will now be able to manage base tables directly.
With this viewless integration, you can create Marketplace data products with Amazon Redshift tables, which allows consumers to request access to these securables and enables approvers to efficiently manage access requests and provision access to data safely.
To begin using this integration, contact your Immuta representative.
See a demo of this new feature in the Immuta changelog.
Slack notifications for Immuta Marketplace: Marketplace now supports Slack notifications, bringing real-time alerts directly to where your teams collaborate. This new capability extends our existing email notification system, allowing users to receive the same event-driven updates, such as approvals, data access changes, and requests, via direct messages in Slack.
Slack notifications for Marketplace requests can empower your teams and speed up processes in multiple ways.
Faster collaboration: Receive data access and governance updates instantly within Slack, without switching between tools.
Higher engagement: Direct notifications encourage faster responses from data stewards and data consumers, improving workflow velocity.
See a demo of this new feature in the Immuta changelog.
Immuta integration with Teradata: You can now use Immuta to manage access to Teradata views. Access controls defined in Immuta through subscription policies are automatically enforced in Teradata.
This integration enables users to provision and manage access to Teradata data dynamically and at scale, reducing manual overhead for administrative teams and ensuring consistent access control across Teradata and other platforms that Immuta supports.
To begin using this integration, please contact your Immuta representative.
Behavior change release on by default period: Support for Databricks Delta Sharing
What's changing: Immuta now supports enforcing subscription policies on shares from Databricks Delta Sharing, allowing you to manage granting and revoking access to these securables safely and efficiently.
On Monday, November 3, this feature will be enabled by default for all customers except those who have opted out.
Behavior change release on by default period: Improved query audit for Databricks Unity Catalog
What's changing: Immuta’s Unity Catalog audit ingestion query now uses the system.query.history table from Databricks. This foundational change significantly improves the match rate between Unity Catalog query audit records and Immuta-managed data sources, delivering more accurate and valuable insights.
There's now a better linkage between queries run in Unity Catalog and their associated data sources in the Immuta audit dashboards and reports, providing a clearer, more reliable picture of data access usage within Unity Catalog.
MariaDB integration: Immuta now directly integrates with MariaDB hosted on Amazon Relational Database Service (RDS).
You can create Marketplace data products with MariaDB data sources, which allows your consumers to request access to these securables and enables approvers to efficiently manage access requests and provision access to data safely.
To begin using this connection, contact your Immuta representative.
Improved logic for data source health status: Data source health status reporting has been improved for clarity and accuracy.
Previously, an Unhealthy data source could briefly appear as Healthy while a failed job was actively resyncing.
Now, data sources will remain Unhealthy until all resync jobs are complete. The data source health status is updated only after the resync finishes, which prevents false positive health statuses from being displayed during failed policy resyncs and eliminates confusion for users.
Webhook history for Marketplace: You are now able to view the status of historical webhook events for your configured webhook endpoints. This new history view makes it easier to diagnose webhook errors and verify successful event delivery.
Behavior change release on by default period: Fine-grained access controls for Databricks Unity Catalog materialized views and streaming tables
Immuta supports row filtering and column masking on Unity Catalog materialized views and streaming tables. Since these securables now support Immuta subscription and data policies, you gain enhanced security and flexibility with consistent policy enforcement across your Unity Catalog ecosystem.
This feature is now enabled by default on all Immuta tenants that have not opted out of it. See the behavior change release features page for details.
Performance improvement for Trino integration: Immuta has optimized policy calculations for our Trino integration. This is a back-end update that is released to all customers.
These optimizations have led to performance improvements for Trino users who are not using Immuta projects. After releasing this enhancement, query times improved by an average of 70% for users without an Immuta project set.
Python UDFs no longer created for Amazon Redshift integration: Immuta no longer creates Python UDFs in Amazon Redshift when an integration is created.
AWS is ending support for Python UDFs in Amazon Redshift and will prevent the creation of new Python UDFs on November 1, 2025.
Two policy types relied on Python UDFs when applied to Redshift data sources: reversible masking policies and non-global regex masking policies. Consequently, support for these policy types on Redshift data sources has been deprecated.
On July 1, 2026, existing Python UDFs will no longer be functional, and Immuta will no longer support non-global regex masking or reversible masking policies for the Amazon Redshift integration. These policy types must not be used after this date.
Masking exception requests: Immuta Marketplace now supports masking exception requests, enabling organizations to safely grant granular access to sensitive data. Data consumers can request access to specific masked columns within a data product, and, if approved, see the unmasked values while masking continues to apply for all other users.
Requests can be made for one or more columns within a data product.
Approvals, denials, and revocations follow the same workflow and auditing as existing Marketplace access requests.
Provisioning is handled automatically through Immuta policies, column tags, and user attributes.
Masking exception requests make column-level access manageable and auditable. Consumers only request what they need, while stewards review the request with context like classification tags. The result is simplified, repeatable workflows.
Contact your Immuta customer success representative for access to this feature.
Support for Databricks Delta Sharing: Immuta now supports enforcing subscription policies on shares from Databricks Delta Sharing, allowing you to manage granting and revoking access to these securables safely and efficiently. The release of this feature will follow Immuta’s behavior change release process. The specific dates for each phase in that process are outlined below. For details about each phase, see the Behavior change release process page.
Opt-in period: 10/3 - 11/2
On by default period: 11/3 - 12/2
Enabled for all accounts: 12/3
Ending Python UDFs support for Amazon Redshift integration: Support for non-global regex masking and reversible masking policies on Amazon Redshift data sources has been deprecated. AWS is ending support for Python UDFs in Amazon Redshift and will prevent the creation of new Python UDFs on November 1, 2025. To align with this, as of October 8 Immuta will no longer create Python UDFs in Amazon Redshift when an integration is created in Immuta. Two policy types relied on Python UDFs when applied to Redshift data sources: reversible masking policies and non-global regex masking policies. Consequently, support for these policy types on Redshift data sources has been deprecated. On July 1, 2026, existing Python UDFs will no longer be functional, and Immuta will no longer support non-global regex masking or reversible masking policies for the Amazon Redshift integration. These policy types must not be used after this date.
Improved query audit for Databricks Unity Catalog: Immuta’s Unity Catalog audit ingestion query now uses the system.query.history table from Databricks. This foundational change significantly improves the match rate between Unity Catalog query audit records and Immuta-managed data sources, delivering more accurate and valuable insights.
There's now a better linkage between queries run in Unity Catalog and their associated data sources in the Immuta audit dashboards and reports, providing a clearer, more reliable picture of data access usage within Unity Catalog.
The following parameter is now populated: rowsProduced.
These parameters are no longer populated, as they are not available in the system.query.history table: userAgent, requestId, clientIp.
The release of this feature will follow Immuta’s behavior change release process. The specific dates for each phase in that process are outlined below. For details about each phase, see the Behavior change release process page.
Opt-in period: 10/1 - 10/31
On by default period: 11/1 - 11/30
Enabled for all accounts: 12/1
If you are currently using Immuta's Databricks Unity Catalog audit feature without the required privileges for this change, you will see an in-app banner, requiring you to take action. If action is not taken by October 31, Immuta will disable the audit ingest feature on that date (query audit will be toggled to off in the integration settings).
Fine-grained access controls for Databricks Unity Catalog materialized views and streaming tables: Immuta supports row filtering and column masking on Unity Catalog materialized views and streaming tables.
Since these securables now support Immuta subscription and data policies, you gain enhanced security and flexibility with consistent policy enforcement across your Unity Catalog ecosystem.
The release of this feature will follow Immuta’s behavior change release process. The specific dates for each phase in that process are outlined below.
Opt-in period: 9/18 - 10/20
On by default period: 10/21 - 11/20
Enabled for all accounts: 11/21
Eliminating possibility of wrong Collibra assets being autolinked to Immuta data sources: Previously, if a data source was created in Immuta before it was present as an asset in Collibra, a fallback method could link the wrong Collibra asset to the data source in Immuta. This only happened in the rare case where Collibra assets with the same schema and table name already existed in Collibra at the time of the auto-linking attempt, but the correct target asset wasn’t present yet.
Now, the Immuta catalog auto-linking method for Collibra will only attempt to auto-link Collibra assets using the fully qualified asset name following the Edge naming convention, eliminating the need for any fallback linking method. This guarantees an accurate match rate every time.
Databricks Runtime 14.3 support: Immuta's Databricks Spark integration now supports Databricks Runtime 14.3. With this update, you can upgrade your Databricks environment while preserving Immuta’s core data governance capabilities.
This update is only relevant to Databricks Spark customers, not customers using Unity Catalog.
Microsoft SQL Server integration: Immuta now directly integrates with Microsoft SQL Server. This integration supports several different deployment methods, including SQL Server hosted on Amazon Relational Database Service (RDS), Azure SQL Server, and self-hosted.
You can now create Marketplace data products with SQL Server data sources.
Data consumers can request access to these securables.
Approvers can efficiently manage access requests and provision access to data safely.
To begin using this integration, please reach out to your Immuta representative.
Google BigQuery integration: The Google BigQuery integration is now enabled for all accounts, so you can explore the integration without having to request to have it enabled.
Once Google BigQuery has been configured, BigQuery administrators can author Immuta policies to enforce access controls, and users can then query policy-protected data directly in BigQuery.
Guardrail subscription policies: Guardrail subscription policies prevent users from gaining access to data unless they meet certain conditions. Unlike grant policies, guardrail policies do not subscribe users; they act as the minimum requirements that must be met before a user is even eligible to subscribe to data in the first place.
If the conditions of a guardrail subscription policy are not met, users will not be subscribed to the data source even if they meet conditions of a grant policy on the data source.
If you are using the Immuta API to generate subscription policy logic, all subscription policies with subscriptionType: policy will now always be set to shareResponsibility: true. Any other value sent for the shared responsibility setting in the payload will be ignored.
This feature will be introduced in a phased rollout. Immuta tenants where the feature has been activated will receive a notification popup on the policies page upon login. The Immuta team will contact customers who have not yet received the feature (as they may require a migration of existing policies) to coordinate the rollout, which must be performed by October 10.
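The API behavior described above can be sketched as follows. The field names subscriptionType and shareResponsibility come from the announcement; the surrounding payload structure and the non-policy subscription type are illustrative assumptions, not documented API details.

```python
# Hedged sketch: under this change, any shareResponsibility value sent in a
# subscription policy payload with subscriptionType "policy" is ignored and
# treated as true. The rest of the payload shape is hypothetical.
def effective_share_responsibility(payload: dict) -> bool:
    if payload.get("subscriptionType") == "policy":
        # Always forced to true for this subscription type.
        return True
    # Other subscription types keep whatever value was sent.
    return bool(payload.get("shareResponsibility", False))

# Example: the submitted value is overridden.
print(effective_share_responsibility(
    {"subscriptionType": "policy", "shareResponsibility": False}
))  # True
```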
Multiple approvers for data access requests: When configuring a request form with the list of data stewards that can respond to an access request, you can now require that all data steward groups, attributes, or permissions in the list approve. This option is available in addition to the existing behavior, where any data steward from the list can approve.
This feature benefits organizations that require more than one level of approvals.
For example, you may require that the domain steward and someone from the governance team both approve every request for data access.
The cleanup script for deleting a Snowflake integration or connection now removes Immuta policies and Immuta-managed roles from your Snowflake environment. This enhancement provides the following benefits:
Deletion of Immuta roles and policies happens in one place, which simplifies the cleanup process.
You have full control over cleanup actions when running the script.
You can better identify points of failure if any issues occur.
You can easily re-run the script as needed.
This change applies to any Immuta Snowflake user running this script, regardless of whether you have migrated to using connections to register data or still use the legacy integration method.
For guidance on how to delete a Snowflake integration or connection, see the Edit or remove your Snowflake integration or Deregister a connection guide. Before running the script, ensure you are using the latest version downloaded from Immuta and not a legacy version you may have saved previously.
Configuring the permissions that Immuta revokes for Databricks Unity Catalog users: You can now configure whether or not Immuta revokes Immuta users’ USE_CATALOG and USE_SCHEMA permissions when users do not have access to any of the securables within that catalog or schema.
Previously, Immuta revoked USE_CATALOG and USE_SCHEMA from users that did not have access to any securable within a particular catalog or schema. Now, you can disable this default behavior.
When this setting is disabled, Immuta users who do not have access to any securables within a catalog or schema will keep their existing USE_CATALOG and USE_SCHEMA permissions. Updating this setting will not retroactively update existing user permissions; this change will be applied on a go-forward basis only.
See the App settings guide for instructions on configuring this setting.
API response change
A portion of the response for the GET /policy/global/{policyId} endpoint (which retrieves a global policy) has changed. The syncStatuses field used to be a rollup of all relevant job statuses and their counts; now the field is a boolean:
Before
After
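As a sketch of the shape change: the individual status keys below are assumptions for illustration; only the change from a per-status rollup to a boolean is documented.

```python
# Hypothetical response fragments for GET /policy/global/{policyId}.
before_response = {
    "name": "Mask PII",
    # Previously: a rollup of relevant job statuses and their counts
    # (the specific status names here are illustrative assumptions).
    "syncStatuses": {"completed": 12, "failed": 1, "pending": 0},
}

after_response = {
    "name": "Mask PII",
    # Now: a single boolean.
    "syncStatuses": True,
}

def is_synced(response: dict) -> bool:
    """Normalize either shape to a boolean, easing client migration."""
    statuses = response["syncStatuses"]
    if isinstance(statuses, bool):
        return statuses
    return statuses.get("failed", 0) == 0 and statuses.get("pending", 0) == 0

print(is_synced(before_response), is_synced(after_response))  # False True
```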
Custom WHERE clause policies behavior change: Data policies that use custom WHERE functions that include a column name argument must now use @columnReference('Column_Name') to specify the data source column to use in the policy. The following data policy types are affected by this change:
Masking using a custom function (Mask columns tagged Discovered.PII using the custom function <custom where clause function>)
Masking policy conditions that use a custom function (Mask using hashing columns tagged Discovered.PII where <custom WHERE clause function>)
Row-level policies that use a custom function (Only show rows where <custom WHERE clause function>)
This change makes it easier to author custom WHERE clause policies and ensures greater accuracy and reliability in policy application.
Existing custom WHERE clause policies that reference column names and are applied to Snowflake and Databricks data sources have been automatically migrated to use the new @columnReference() format:
Before migration: Only show rows where 'Location'='US'
After migration: Only show rows where @columnReference('Location')='US'
Oracle integration: Immuta supports registering a connection for Oracle hosted on Amazon Relational Database Service (RDS). You can now create Marketplace data products with Oracle data sources, allowing your consumers to request access to these securables and enabling approvers to efficiently manage access requests and provision access to data safely.
Delegating steward review flow to the data product: When creating request forms, it is now possible to delegate the review flow configuration to each data product where the request form is used, rather than setting it in the request form itself. Delegating the review flow configuration to the data products allows you to reuse request form questions while still setting unique stewards per data product.
Compatibility with Collibra’s data classification module: Immuta’s integration with Collibra enterprise data catalog now also supports ingestion of Collibra data classifications as column tags into Immuta.
This allows customers that use the Collibra data classification module to leverage the resulting classifications for subscription or data policy authoring in Immuta. For example, you could use an Immuta masking policy to mask all columns by default if they were classified as sensitive by the Collibra data classification module.
Private networking support for Marketplace: Both the Immuta SaaS Marketplace and Governance applications now support private connectivity from organizations' networks. This allows organizations to meet security and compliance controls by ensuring that all traffic to the SaaS tenant applications traverses only private networks and that the applications cannot be accessed from outside their internal networks.
Deprecated features remain in the product with minimal support until their end of life date.
In October, Immuta will require Snowflake integrations to enable table grants and low-row access policy mode. Additionally, Snowflake project workspaces (which require table grants to be disabled) will be unavailable.
Before October, Snowflake customers not using these features must complete the following tasks:
Ensure the user who has set up the Snowflake integration or Snowflake connection has all permissions listed on the Register a Snowflake connection guide. There are two permissions that may need to be granted to that user that were not previously required:
CREATE ROLE ON ACCOUNT WITH GRANT OPTION
MANAGE GRANTS ON ACCOUNT WITH GRANT OPTION
Navigate to App Settings > Global Integration Settings in Immuta, check the Enable Snowflake table grants box, and save the changes.
Once these steps are complete, you can use Immuta to manage Snowflake table grants without needing to grant access manually in Snowflake.
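The two account-level privileges listed above map to standard Snowflake GRANT statements. A minimal sketch, where the role name is a placeholder for the role used by your Immuta setup user:

```python
def immuta_grant_statements(role: str) -> list[str]:
    # Privilege names come from the checklist above; GRANT ... ON ACCOUNT
    # ... WITH GRANT OPTION is standard Snowflake syntax.
    return [
        f"GRANT CREATE ROLE ON ACCOUNT TO ROLE {role} WITH GRANT OPTION;",
        f"GRANT MANAGE GRANTS ON ACCOUNT TO ROLE {role} WITH GRANT OPTION;",
    ]

# Print the statements to run in Snowflake (role name is a placeholder).
for stmt in immuta_grant_statements("IMMUTA_SETUP_ROLE"):
    print(stmt)
```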
Snowflake project workspaces: deprecated July 2025; end of life October 2025
Snowflake with low row access policy mode disabled: deprecated July 2025; end of life October 2025
Snowflake with table grants disabled: deprecated July 2025; end of life October 2025
Marketplace webhooks: Marketplace actions now fire webhooks that users can subscribe to and react to from their own systems. Actions that require additional follow-up outside of Immuta, like creating a data product, requesting access to a data product, or approving an access request, send out webhooks. You can then listen for those events and trigger your downstream actions based on your configuration.
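A listener for these webhooks can be a small HTTP endpoint. The sketch below is a minimal example assuming a JSON payload with an eventName field; the actual payload schema and event names are assumptions here and should be taken from your webhook configuration in Immuta.

```python
# Hedged sketch of a minimal Marketplace webhook listener.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def route_event(event: dict) -> str:
    """Map an incoming event to a downstream action (names are hypothetical)."""
    handlers = {
        "dataProductCreated": "notify-catalog-team",
        "accessRequested": "open-ticket",
        "accessApproved": "provision-followup",
    }
    return handlers.get(event.get("eventName", ""), "ignore")

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        action = route_event(event)
        # Trigger your downstream action here based on `action`.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(action.encode())

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), WebhookHandler).serve_forever()
```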
Manually link external catalogs on Amazon S3 data sources: Users can now manually link Amazon S3 data sources to an asset in an external catalog. Using this link, Immuta polls the external catalog to ingest and apply tags to each Amazon S3 data source. Immuta checks every 24 hours for any relevant metadata changes in the connected external catalog.
The linked external catalog can be seen on the data source's detail page. The external catalog tags can be seen on the data source dictionary page. These tags can then be used to drive policies or classification frameworks.
Multi-tag dynamic domain assignment: You can now use multiple table tags to assign data sources to domains dynamically. Before, you could only pick one tag and every data source with that tag was assigned to a certain domain. This enhancement removes that limitation for table tags and connection tags.
Table tags example: If you have a data source tagged Finance and another data source tagged HR but you want both to belong to the Operations domain, you can configure the Operations domain with dynamic tag assignment and select both the Finance and the HR tags.
Connection tags example: If you have a catalog in Databricks and a database in Snowflake that both belong to the Customer 360 domain, you can leverage connection tags and select both the Databricks and Snowflake connection tags to assign data sources to the same domain.
You can combine up to 100 tags through this mechanism.
PostgreSQL integration in public preview: This includes Immuta’s enhanced connections onboarding and support for table grants. Immuta currently offers support for the following deployment methods:
Self-managed PostgreSQL
Amazon Aurora
Amazon RDS
Neon
Crunchy Data
Email notifications for Marketplace: The following Marketplace events will trigger email notifications to the appropriate users, accelerating time to data, if user emails are mapped from your identity manager to Immuta:
Data product request
Determination made on request
Access revoked
Canceled request
Pending request(s) reminder
Expiration reminder (temporary approvals)
Data product deletion which impacts a request
Updates to governance reports for subscription status: Rendering performance for subscription status governance reports has improved by 10x.
Additionally, these reports include a new “Access Grant” column that denotes if a user has READ or WRITE access to a data source.
Marketplace is public preview: Marketplace is available to all customers to help manage their request, approval, and provisioning workflows.
Identification for Redshift in general availability: Identification support for Redshift is generally available.
Redshift identification supports multiple criteria to identify and tag your data:
Column name matching with a regex
Column contents matching where a sample of the actual data is matched with a regex
Dictionary matching where a sample of the actual data is matched with a dictionary
To perform the matching, Immuta analyzes the sample directly inside Redshift so that no data ever leaves your data lake. Immuta provides 50+ out-of-the-box identifiers to find common PII, like credit cards or social security numbers.
Review assist monitors access request determinations made by your data stewards and finds trends across two factors that determine access approvals, temporary approvals, or denials. Those factors are:
The data in the data product itself, meaning the trends are associated with the specific requested data.
Metadata about the requestors, such as the groups and attributes they possess.
Once review assist has identified trends, it begins to form a recommendation, presented with future access requests, about whether the data steward should approve, temporarily approve, or deny the request.
An AI-generated justification for the decision will accompany that recommendation, which is pre-populated to submit along with the access determination. This AI-generated justification is based on the trends across the factors above, as well as prior human-entered justifications. For temporary approvals, review assist also considers past temporary approvals to determine the recommended duration.
Customizing stewards in Marketplace: Users can now customize the stewards who manage access requests at the request-form level. This enhancement provides the following benefits:
Granular steward assignment: Steward assignment can be as granular as needed. You can assign stewards by specifying usernames, groups, attributes, or permissions.
Separation of duties: Stewards can be assigned separately from the data product owners (those with the Manage Data Products permission in the domain).
Support for Redshift datashares: Immuta can now govern Redshift datashares. Users can onboard and apply Immuta controls to Redshift datashares with the same workflows they are using for standard Redshift data sources today.
Integration with Atlan for seamless tag ingestion: Immuta now offers a new, out-of-the-box standard connector designed to seamlessly integrate Immuta with the Atlan enterprise data catalog. This connector enables automated tag enrichment within Immuta, leveraging the rich metadata and tagging information managed within Atlan. By establishing a direct link between these two platforms, organizations can significantly streamline the process of applying consistent and comprehensive tags to their data assets within Immuta.
Marketplace temporary approvals: Stewards are now able to temporarily approve access to data products by either accepting the requestor’s proposed duration of access, or entering their own. Temporary approvals allow the steward to:
Hedge riskier approvals
Reduce recertification and entitlement review load: Access expires, and if the user still needs it, they can request it again, rather than stewards reviewing and confirming access through periodic campaigns
Immuta will automatically deprovision access directly in the data platform as soon as the duration period ends.
Identification for Starburst (Trino) in general availability: Identification support for both Trino open source and Starburst Enterprise is generally available.
Starburst (Trino) identification supports multiple criteria to identify and tag your data:
Column name matching with a regex
Column contents matching where a sample of the actual data is matched with a regex
Dictionary matching where a sample of the actual data is matched with a dictionary
To perform the matching, Immuta analyzes the sample directly inside Starburst (Trino) so that no data ever leaves your data lake. Immuta provides 50+ out-of-the-box identifiers to find common PII, like credit cards or social security numbers.
Databricks Runtime 9.1 and 10.4: Support for these Databricks Runtime versions has been removed from the product.
Connections for Snowflake and Databricks Unity Catalog in general availability: Connections are Immuta’s improved, more efficient way of managing data objects.
As part of our commitment to delivering the best possible onboarding experience, by September 2025, Immuta will no longer support onboarding Snowflake and Databricks Unity Catalog data sources using the legacy method.
Over the coming weeks, a banner will be displayed to Immuta administrators for tenants that have not been upgraded yet. This will redirect them to the Upgrade Manager where they can initiate the process to upgrade any remaining integrations using the legacy onboarding pattern.
For any questions or concerns, please reach out to your Immuta support professional.
Identifiers in domains in general availability: Identifiers in domains is now generally available and will be enabled for all SaaS tenants. This functionality replaces identification frameworks, which are now deprecated and removed from the product.
New functionality and advantages of identifiers in domains allows you to do the following:
Leverage your existing domains to segregate identifiers by domain
Delegate ownership of identifiers to specific individuals by granting the Manage Identifiers permission on a domain level
Control whether or not identification should run on new data sources and columns automatically for each domain
Place tags outside of the Discovered.* hierarchy
Understand via the tag side sheet which identifier placed each tag and what the specific hit rate was (only available for tags placed with the new feature enabled)
Immuta is automatically migrating all tenants to identifiers in domains. You will see the following changes in your tenant:
Identification frameworks will disappear and become unavailable.
The SDD application setting will disappear and become unavailable. SDD is now always enabled, but will only run on data sources in domains with identifiers.
There is a new Identifier tab and a new setting for auto-scanning in each domain.
The API is changing.
A new domain has been generated for each deprecated identification framework, named after the framework with (Immuta generated) appended. These domains hold the same data sources and identifiers the identification framework held before.
If you previously set a global framework in the application settings, the respective domain will be set to auto-scanning and will have data sources assigned to it dynamically. It will now run on all data sources, regardless of whether they are in another domain.
The built-in identifiers Immuta provides have been improved and are more accurate. This won’t impact any existing identifiers that are already part of a domain (or were part of an identification framework). You will only get the latest version of a reference identifier when you copy it into a domain.
If you are currently actively using identification / SDD, your tenants have been excluded from this rollout. An Immuta customer success representative will reach out to coordinate the rollout for your specific SaaS tenants.
Customized request forms for Marketplace: This feature allows you to customize the questions you ask requestors and more easily manage data use agreements. Request forms also provide more valuable information to the data steward making access determinations by ensuring the requestor answers all relevant questions.
Enhancement
Collibra integration now supports the OAuth 2.0 authentication method: In a continuing effort to adopt industry best practices for authentication between applications, Immuta’s Collibra integration has added support for the OAuth 2.0 authentication method in addition to the existing username/password authentication method.
Removed feature
Removed ability to change the default subscription policy setting: The ability to configure the behavior of the default subscription policy on newly registered data sources has reached its end of life (EOL) date and has been removed from the product. As a result, by default Immuta will never apply a subscription policy to newly registered data sources unless an existing global policy applies to them. To set an "Allow individually selected users" subscription policy on all data sources, create a global subscription policy with that condition that applies to all data sources.
Automatic SSO login for Marketplace: Once you configure a single identity manager (IAM) with Immuta that supports SAML or OIDC SSO, users who are signed in to the SSO will be automatically logged into Marketplace. If they use a link within an external catalog, they will be automatically redirected to the request access screen in Marketplace without seeing the sign-in screen.
With an SSO configured, users will not be able to login to Marketplace with internal Immuta accounts.
Identifiers in domains: Identifiers in domains is available in public preview and enabled by default on all new SaaS tenants. This functionality replaces identification frameworks and allows you to
Leverage your existing domains to segregate identifiers by domain
Delegate ownership of identifiers to specific individuals by granting the Manage Identifiers permission on a domain level
Place tags outside of the Discovered.* hierarchy
Disabled data source behavior for Azure Synapse Analytics, Databricks Unity Catalog, Google BigQuery, Redshift, and Snowflake integrations: Immuta will remove all policies on disabled data sources for these integrations.
Previous behavior: Disabling a data source triggered a lockdown policy, which revoked all users’ access until the data source was either deleted from Immuta or re-enabled.
New behavior: Disabling a data source will remove existing Immuta subscription and data policies and prevent Immuta from adding new policies until the data source is re-enabled. Immuta policies will be removed from currently disabled data sources. For view-based integrations (Azure Synapse Analytics, Google BigQuery, and Redshift), if a user disables an object in Immuta, the Immuta-created view will be deleted.
Data source changes in the Governance app are reflected in the Marketplace app: If a data source is deleted or removed from a domain in the Governance app, it will now be removed from any corresponding data products that contain that data source in the Marketplace app.
Immuta SaaS private networking over AWS PrivateLink in public preview: The Immuta SaaS Governance application supports private connectivity from organizations' networks. This allows organizations to meet security and compliance controls by ensuring that all traffic to the SaaS tenant Governance application only traverses private networks and no users can access it outside of their internal networks.
Automated connection tags: All data sources that are registered from connections will now have an automated tag applied that represents the connection. These tags can be used like any other tags in Immuta to build policies, add data sources to domains or generate reports. Immuta fully manages those tags, and they can’t be deleted or edited.
The tag will be formatted as follows: Immuta Connections.<Technology>.<Your Connection>
AWS Lake Formation integration: This integration allows administrators to use Immuta to orchestrate Lake Formation access controls so they can better centralize and scale governance in AWS. Immuta can enforce subscription policies (grants and revokes at the table level) across three AWS services: Amazon Athena, Amazon EMR Spark, and Amazon Redshift Spectrum.
Identification and sensitive data discovery will replace orphaned legacy tags: Starting on April 22, 2025, Immuta will remove orphaned tags placed by the legacy sensitive data discovery mechanism.
Previously: The legacy sensitive data discovery mechanism was deprecated in September 2023. Tags placed by the legacy SDD mechanism are still visible on columns and can't be removed, only disabled.
New behavior: If you run identification on a data source, the tag will be removed unless identification places the same tag.
Support for Databricks Unity Catalog models and functions in public preview: Immuta has expanded its governance capabilities by introducing subscription policy support for Databricks Unity Catalog models and functions. In addition to existing support for tables, views, and volumes, Immuta now supports governing which users can execute models and functions. This feature is currently available in public preview for customers using Immuta connections and will be included in the connections upgrade.
Data source to domain assignment is now generally available: Assigning data sources to domains dynamically is now generally available and enabled by default. When you create a new domain or edit an existing domain, you can choose to either assign data sources manually or dynamically based on a table tag. All existing domains are set to manual assignment.
User interface changes
Updated color scheme: The updated color scheme improves accessibility and contrast throughout the application. This change will also provide a more consistent experience alongside the new Marketplace offering.
Improved navigation menu: The reorganized navigation menu makes the application more intuitive and user-friendly. The menu now includes a Metadata section for managing tags; an Identities section for managing users, groups, and attributes; and an Insights section for managing reports, audit, and notifications.
Detect, Discover, and Governance sections have been removed from the navigation menu. These functionalities are now integrated throughout the application.
K-anonymization policies have reached end of life and will be removed from the product.
Removed infrastructure admin permission for connections: The infrastructure admin permission has been removed, as it was a duplication of the data owner permission functionality. Any users that previously had the infrastructure admin permission on any connection, database, schema, or data object have been granted the data owner permission at the relevant hierarchy levels instead.
Since both permissions offered the same functionality, this change does not impact users’ ability to perform actions in Immuta.
Identification and sensitive data discovery timeout: Starting on April 2, 2025, Immuta queries for identification and sensitive data discovery will have a timeout of 15 minutes. This timeout ensures that there are no long-running queries that block your compute resources and helps to reduce the cost of running identification.
The majority of identification runs complete within 15 minutes. If you expect identification to run longer than 15 minutes, reach out to your Immuta representative to configure a longer timeout window for your tenant.
Running identification on complex views with large amounts of data is more likely to result in timeouts. Immuta recommends running sensitive data discovery and identification on the underlying base tables.
Databricks Runtime 10.4: Support for this Databricks Runtime has been deprecated.
Data source to domains assignment: You can now choose how data sources are assigned to your domains: either manually, as has always been possible, or dynamically, which assigns data sources to a domain based on their table tags.
Previously, a user with the GOVERNANCE permission had to manually assign every data source to a domain. Now, the governance user can select a tag, and every data source with that tag will be added to the domain. Dynamic assignment continuously updates the domain so that it contains all data sources with the tag, and only those.
Support multiple Redshift integrations with the same host on a single Immuta tenant: Immuta allows multiple Redshift integrations with the same host to exist on a single Immuta tenant. Users can create multiple Redshift integrations with the same host name, provided that each integration has a different port (which Immuta uses to differentiate each one). This support gives Redshift users the flexibility to use infrastructure setups with multiple Redshift clusters, instead of being limited to using a single cluster.
Support for Databricks Unity Catalog volumes in public preview: Immuta supports READ and WRITE access controls for volumes in Databricks Unity Catalog. This feature is currently available in public preview for customers using Immuta connections and will be included in the connections upgrade.
SDD global settings update: The global sensitive data discovery (SDD) enablement setting has been removed from the app settings page and is available by default on all tenants. To run identification on your data sources, add them to an identification framework. If you want SDD to run automatically, add an identification framework to the Global SDD Template field on the app settings page.
Marketplace now allows user-provided IDs when creating data products over the API: The create data products endpoint now accepts an optional user-provided ID for the data product. This allows users to share IDs across systems (such as between Collibra and Immuta) and push approved changes to the Immuta API.
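As a sketch of how a client might supply its own ID, the request below builds a POST body containing a user-provided id field. The endpoint path, field names, and authorization header are illustrative assumptions, not the documented API contract; consult the Marketplace API reference for the real shape.

```python
import json
import urllib.request

# Hypothetical payload: "id" carries the identifier shared across systems
# (for example, an asset ID originating in Collibra).
body = {
    "id": "collibra-asset-123",
    "name": "Customer 360",
}

# Hypothetical tenant URL and endpoint path, shown for illustration only.
req = urllib.request.Request(
    "https://your-tenant.immuta.com/marketplace/data-products",
    data=json.dumps(body).encode(),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer <token>"},
    method="POST",
)

# urllib.request.urlopen(req) would submit the request (not executed here).
print(req.get_method())  # POST
```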
Databricks Runtime 14.3 support: Immuta's Databricks Spark integration now supports Databricks Runtime 14.3. This compatibility enables users to upgrade their Databricks environments while preserving Immuta’s core data governance capabilities.
Display last 5 access determinations in Marketplace: When a data steward is making a determination on a request for access, Immuta shows the last 5 approvals and denials. This enhancement assists the data stewards in making an accurate decision, as they can see the prior decisions and the reasoning behind them.
Include account ID in Marketplace URLs: You can append the account identifier to the end of the Marketplace URL. This simplifies URL redirects to the Marketplace because the user no longer needs to know the account ID to log in.
UI tab loading issues: The permissions tab was not loading properly when customers were using connections. This was caused by a custom JavaScript object type coming from a third-party library. UI code has been changed to no longer use this custom object type, which has resolved the issue.
Snowflake object type mislabelling leading to policy lockdown: Some customers using connections for Snowflake were seeing Immuta processes sending wrong SQL statements when trying to apply data policies on their environment, such as ALTER TABLE on Snowflake objects of type VIEW.
Since these SQL statements by definition will always fail to execute successfully, Immuta revoked all users' existing access on the affected objects because it could no longer guarantee successful application of masking and row-filter policies on those objects.
The issue of mislabelling Snowflake table types was caused by an incorrect query in Immuta’s backend code, leading to erroneous overrides of Snowflake table types in the Immuta internal metadata storage. The query has since been fixed and all affected objects have been updated to contain the correct Snowflake table type.
Data connections for Snowflake and Databricks Unity Catalog in public preview: Connections is Immuta’s enhanced approach to efficient data object management, offering the following benefits:
Reduced complexity by just having one connection in Immuta for both metadata onboarding (pull) and policy application (push) with your data platform
Increased scalability by onboarding all your objects at once instead of repetitive patterns (such as schema by schema)
Improved performance to manage and track metadata changes
Fully automated metadata change detection
Connections is enabled by default on all new tenants created after February 26, 2025, and available upon request for tenants created prior. Reach out to your Immuta support professional to enable it on your tenant. Once enabled, a banner will direct you to the upgrade manager where you can initiate the process to upgrade any of your existing integrations.
Column name regex support for Google BigQuery: Sensitive data discovery now works to tag data source columns based on column name regexes for Google BigQuery data sources. Those tags can then be leveraged when building data or subscription policies to grant access to data sources and mask sensitive data. Classification is also supported to place sensitivity tags and classify data further.
Column name regex support for Azure Synapse Analytics: Sensitive data discovery now works to tag data source columns based on column name regexes for Azure Synapse Analytics data sources. Those tags can then be leveraged when building data or subscription policies to grant access to data sources and mask sensitive data. Classification is also supported to place sensitivity tags and classify data further.
Marketplace API support: All Marketplace functionality is now available to customers through the Marketplace API.
New built-in patterns: Two new built-in identifiers are available to all customers using sensitive data discovery:
SEC_STOCK_TICKER: This new pattern detects strings consistent with stock tickers recognized by the U.S. Securities and Exchange Commission (SEC).
FINANCIAL_INSTITUTIONS: This new pattern detects strings consistent with the official and alternate names of financial institutions from lists by the FDIC and OCC.
Add these identifiers to your frameworks to start detecting and automatically tagging this data.
@hasTagAsAttribute() and @hasTagAsGroup() functions for subscription policies in general availability: These functions provide a way to dynamically grant and revoke access by doing an exact match comparison between a user's information (attribute or group membership) and the tags applied to data sources or their columns.
Ultimately, these functions can combine the complexity of multiple roles or rules into a single policy that dynamically assigns access based on users’ attributes or group membership. This results in fewer policies to manage overall and a more streamlined approach to data access management, especially for the most complex use cases.
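Conceptually, the exact match comparison works like the sketch below. This is a plain-Python illustration of the idea only; the real evaluation happens inside Immuta's policy engine, and the function name and data shapes here are assumptions.

```python
def has_tag_as_attribute(user_attributes, data_source_tags):
    """Grant access when any of the user's attribute values exactly
    matches a tag applied to the data source or its columns."""
    return any(
        value in data_source_tags
        for values in user_attributes.values()
        for value in values
    )

# A user with a Department attribute of Finance matches a data source
# tagged Finance, but not one tagged only HR.
user = {"Department": ["Finance"], "Region": ["EMEA"]}
print(has_tag_as_attribute(user, {"Finance", "PII"}))  # True
print(has_tag_as_attribute(user, {"HR"}))              # False
```

Because access follows the user's attributes and the data's tags, adding a new department requires no new policy, only the matching tag and attribute values.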
K-anonymization policies: Support for k-anonymization policies has been deprecated.
Data policy support for foreign tables in Databricks Unity Catalog: Users can apply subscription and data policies to foreign tables in Databricks Unity Catalog.
Changing the default value for Default Subscription Merge Options (in app settings): Based on customer insights, Immuta has changed the default behavior of how multiple global subscription policies that apply to a single data source are merged.
Prior to this change, the global default had been that users must meet all the conditions outlined in each policy to get access. Now, the global default is that users must only meet the conditions of one policy to get access. This behavior can be configured on the app settings page.
Support for masking complex columns as NULL in Databricks Unity Catalog: Users can mask the entire column of STRUCT, MAP, and ARRAY column types in Databricks Unity Catalog as NULL.
Streamlined Databricks user management with improved handling of external IDs: Going forward, a user's external Databricks ID will be set to None if Immuta attempts to update that user's Databricks access and Databricks responds that the targeted principal does not exist. This can happen if a user is created in Immuta before that user is created in Databricks. Marking external Databricks IDs as None enables Immuta to skip future attempts to update those users' access, which streamlines the tasks Immuta must process and avoids superfluous errors.
Databricks external IDs can be updated as needed manually, either through the user profile or by setting this property to <NO IDENTITY> in the external IAM configuration.
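The handling described above can be sketched as follows. This is a hypothetical illustration; the function name and data shapes are assumptions, not Immuta's implementation.

```python
def sync_databricks_access(user, principal_exists):
    """Sketch: if Databricks reports the principal does not exist,
    record the external ID as None so future syncs skip this user."""
    if user.get("databricks_external_id") is None:
        return "skipped"  # previously marked as missing; no call made
    if not principal_exists(user["databricks_external_id"]):
        user["databricks_external_id"] = None  # principal not found
        return "marked-missing"
    return "updated"

# First sync: Databricks says the principal does not exist, so the
# external ID is cleared. Second sync: the user is skipped entirely.
u = {"databricks_external_id": "ghost@example.com"}
print(sync_databricks_access(u, lambda _id: False))  # marked-missing
print(sync_databricks_access(u, lambda _id: False))  # skipped
```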
Identifiers in domains: Identifiers can now be segregated by domain to manage which identifiers run on which data sources. Additionally, you can delegate the management of identifiers to specific users by granting them the Manage Identifiers domain permission.
Once generally available, this functionality will replace identification frameworks.
```json
{
  "syncStatuses": {
    "active": 2
  }
}
```

```json
{
  "syncStatuses": true
}
```

Global Subscription Policies that contained the @hasTagAsAttribute variable caused errors and degraded performance.
Snowflake with Snowflake Governance Features: Changing a column's masking policy type resulted in errors until users manually synced the policy in Immuta.
CVE-2022-37616
CVE-2022-39353
When a user's group was deleted in an external IAM, that update appeared in Immuta but was not syncing properly in Snowflake.
When using Snowflake controls with Excepted Roles specified, if users tried to do an outer join using a column that had a masking policy applied, it resulted in an error: SQL compilation error: Invalid expression [] in VALUES clause.
Schema detection
CVE-2022-0155: Information Exposure in follow-redirects
CVE-2021-3807: Regular Expression Denial of Service (ReDoS) in ansi-regex
CWE-451: User Interface (UI) Misrepresentation of Critical Information in swagger-ui-dist
2022.2
Opt-in period: 10/3 - 11/2
On by default: 11/3 - 12/2
Enabled for all accounts: 12/3
The rowsProduced parameter is now populated for Unity Catalog query audits.
The following parameters are no longer populated, as they are not available in the system.query.history table: userAgent, requestId, clientIp.
On Monday, November 3, this feature will be enabled by default for all customers except those who have opted out.
If you do not opt out of this feature, you must update your Databricks Unity Catalog service principal with the new required privileges for this change. If action is not taken before November 3, Immuta will disable audit ingest for your integration.
The release of this feature follows Immuta’s behavior change release process. The specific dates for each phase in that process are outlined below.
Opt-in period: 10/1 - 11/2
On by default: 11/3 - 11/30
Enabled for all accounts: 12/3
Upcoming behavior change: Security improvement to Immuta's webhook signature scheme
Immuta is updating the webhook signature scheme to use HMAC-SHA256 instead of HMAC-SHA1. The webhook payload format and shared secret remain unchanged; only the hashing algorithm used to generate the signature has been updated.
This change was originally scheduled for February 20 and has been extended to March 16.
What is changing
Currently, Immuta sends a webhook signature signed with HMAC-SHA1 via the x-immuta-webhook-signature HTTP header.
Immuta has begun sending an additional webhook signature signed with HMAC-SHA256 via a new HTTP header, x-immuta-webhook-signature-sha256.
Beginning March 16, Immuta will stop sending webhook signatures signed with HMAC-SHA1 for all customers who have not opted out before that date. Contact your Immuta representative to opt out.
Impact to you
Customers that validate webhook signatures must ensure their verification logic supports HMAC-SHA256. No action is required for customers who do not perform signature validation.
Timeline
The release of this change will follow Immuta’s behavior change release process. The specific dates for each phase in that process are outlined below.
1/20: Customers can opt in to stop receiving a webhook signature signed with HMAC-SHA1.
3/16: Immuta will stop sending webhook signatures signed with HMAC-SHA1 by default, but customers can opt out of this change during this period.
4/16: The change will be enabled for all accounts. Webhook signatures will no longer be signed using HMAC-SHA1, and nothing will be sent over the x-immuta-webhook-signature header.
Why this change
While HMAC-SHA1 has not been shown to be practically exploitable in this context, SHA-1 is deprecated and no longer recommended for new designs. This update is a proactive security-hardening measure.
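Customers that validate signatures can verify the new header along these lines. This is a Python sketch; the header name comes from the announcement above, but the hex encoding of the digest is an assumption, so confirm the exact encoding against Immuta's webhook documentation.

```python
import hmac
import hashlib

def verify_webhook(payload: bytes, shared_secret: bytes, signature_hex: str) -> bool:
    """Recompute the HMAC-SHA256 signature of the raw payload and compare it
    to the value received in the x-immuta-webhook-signature-sha256 header."""
    expected = hmac.new(shared_secret, payload, hashlib.sha256).hexdigest()
    # compare_digest performs a constant-time comparison to avoid
    # leaking information through timing differences.
    return hmac.compare_digest(expected, signature_hex)

# Hypothetical payload and secret for illustration.
payload = b'{"event": "dataSourceCreated"}'
secret = b"my-shared-secret"
sig = hmac.new(secret, payload, hashlib.sha256).hexdigest()
print(verify_webhook(payload, secret, sig))        # True
print(verify_webhook(b"tampered", secret, sig))    # False
```

Verification logic that currently hard-codes hashlib.sha1 only needs the digest algorithm swapped; the shared secret and payload handling stay the same.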
Private networking for Google BigQuery now on by default: As part of our ongoing investment in private networking and secure access to additional data platforms, Immuta is introducing private networking for Google BigQuery.
What's changing
Previously, Immuta's services connected to BigQuery via its public endpoints. Immuta has now enabled Private Service Connect (PSC) for all Google API traffic in our SaaS environment, including BigQuery.
Why this change?
The incoming caller IP seen by BigQuery will change from a public IP (e.g., 35.x.x.x) to an internal IP (e.g., 10.x.x.x). Existing allowlists that only look for public IPs will deny this new internal traffic.
Impact
If you manage a BigQuery dataset or project that uses VPC Service Controls (VPC-SC) or IAM Conditions to allow specific public IPs, the connection from Immuta will stop working unless updates are made.
To prevent this breakage, we recommend updating VPC-SC to allow our SaaS VPCs to access the BigQuery instance. This allows both the public IPs and the VPC to connect, so there is no loss of connectivity.
To get the VPC information for the new VPC-SC policy, please contact Immuta Support.
Behavior change release on by default period:
Snowflake data policy impersonation will be enabled beginning Monday, February 23, for all customers who have not opted out before that date.
If you would like to opt out of this feature release, contact your Immuta representative before February 23.
What’s changing
Impersonation is now compatible with the latest version of Immuta’s Snowflake integration, which uses low-row access policy mode for improved overall performance.
With this update:
Impersonation is available for data policy access only.
Administrators can impersonate users to view the row- and column-level restrictions applied to them.
The impersonator’s subscription policy access remains unchanged while impersonating another user.
This release follows Immuta’s standard behavior change release process.
Release timeline
Opt-in period: January 22 – February 22
On by default: February 23 – March 22
Enabled for all accounts: March 23
This feature was also announced in the .
Behavior change release on by default period:
Connections can now be used with Starburst (Trino) integrations. This feature is now enabled by default for all customers currently using connections except those that have opted out. A banner will direct you to the upgrade manager where you can initiate the process to upgrade your existing Trino integration.
The release of this feature will follow Immuta’s behavior change release process. The specific dates for each phase in that process are outlined below.
Opt-in period: 1/13-2/12
On by default: 2/13-3/12
Enabled for all accounts: 3/13
This feature was also announced in the .
: Immuta subscription policies will now grant to Immuta dynamically generated Databricks Unity Catalog groups instead of to individual users.
What's changing
For these subscription types, Immuta will create groups in your Databricks environment that represent the realized permutations of access based on existing policies against user attributes and groups, and will issue a single grant for each of those groups. Manual grants to individual users will continue to be issued as direct user grants in Databricks Unity Catalog.
This does not change how Immuta policies are authored and evaluated; it is simply an implementation detail of how the grant is executed in Databricks.
Why this change?
This update streamlines the grant process, better aligns with Databricks Unity Catalog grant limits, and allows you to operate at a greater scale.
Impact
This update requires the Immuta service principal to be a Unity Catalog workspace admin in order to create and manage groups. To learn more about this change and how to grant the necessary permission, see the .
Timeline
The release of this feature will follow Immuta’s behavior change release process. The specific dates for each phase in that process are outlined below.
Opt-in period: 2/2/26 - 3/1/26
On by default period: 3/2/26 - 4/1/26
Enabled for all accounts: 4/2/26
Customers must grant Immuta workspace admin privileges before opting in. Opt-ins will be added to a waitlist, and this feature will be enabled at a later date in February during the opt-in period. Immuta account teams will communicate specific timing.
This feature was also announced in the .
Private networking for Google BigQuery: As part of our ongoing investment in private networking and secure access to additional data platforms, Immuta is introducing private networking for Google BigQuery.
What is changing
Today, Immuta's services connect to BigQuery via its public endpoints.
Starting February 23, we will enable Private Service Connect (PSC) for all Google API traffic in our SaaS environment, including BigQuery.
Why this change? The incoming caller IP seen by BigQuery will change from a public IP (e.g., 35.x.x.x) to an internal IP (e.g., 10.x.x.x). Existing allowlists that only look for public IPs will deny this new internal traffic.
Impact and timeline If you manage a BigQuery dataset or project that uses VPC Service Controls (VPC-SC) or IAM Conditions to allow specific public IPs, the connection from Immuta will stop working on February 23 unless updates are made.
To prevent this breakage, we recommend updating VPC-SC to allow our SaaS VPCs to access the BigQuery instance. This allows both the public IPs and the VPC to connect, so there is no loss of connectivity. To get the VPC information for the new VPC-SC policy, contact Immuta Support.
This release will follow Immuta’s behavior change release process. The specific dates for each phase in that process are outlined below.
On by default: February 23 - March 17. Contact your Immuta representative before February 23 if you need to opt out during this period.
Enabled for all production global segments: March 18. You will not be able to opt out.
Please contact your Immuta representative as soon as possible to review next steps and avoid disruption.
This change was also announced in the
Catalog-integrated access workflow in private preview: We are excited to announce the private preview of our new catalog-integrated access workflow, which closes the gap between discovering data and actually using it. Your data consumers can now initiate access requests directly from the environments where they already work, including Atlan, Alation, Databricks Unity Catalog, Snowflake Horizon, Collibra, and more. By placing the request process exactly where discovery happens, Immuta eliminates the friction of manual ticketing and legacy identity governance hurdles.
To reflect this evolution, we are officially renaming the Immuta Marketplace to the Immuta Request app.
This isn't just a name change; it represents a strategic shift toward making the Request the central object of our ecosystem. While the previous marketplace focused on data products, the Request app is a purpose-built engine designed to handle the entire lifecycle of data access. Whether a user needs access to a full database, a specific schema or table, or even a specialized request to unmask sensitive columns, the process is now unified and automated.
By streamlining these approvals within the data flow, we are providing a modern, data-centric alternative to general-purpose Identity Governance and Administration (IGA) vendors. We are moving away from the "one-size-fits-all" approach of legacy governance tools in favor of a system that understands the nuances of data permissions. This ensures that security stays invisible to the end user while providing the business with a scalable, automated path to data democratization.
Key highlights of the private preview
Direct catalog integration: Start a request within Atlan, Alation, Databricks Unity Catalog, Snowflake Horizon, or Collibra. Extensible to other catalogs.
Granular request support: Automate access at the database, schema, or table level.
Advanced policy requests: Support for specialized requests, such as unmasking specific columns for approved projects.
If you would like access to the Immuta Request app in private preview, please reach out to your Immuta representative.
This change was also announced in the .
Architecture improvements and updates for Databricks Unity Catalog integration: Immuta has updated the back-end architecture supporting the Databricks Unity Catalog integration, and this update will be released to Unity Catalog customers over the coming weeks. These largely back-end updates deliver improved performance at higher scale through more efficient operations between Immuta and Databricks.
In addition to the architectural updates, there are several user-facing changes included with this release:
: The data policy resync option in the data source health checks will be updated to resync data policies as well as subscription policies for Unity Catalog data sources. If there is any policy failure on a data source, you can manually trigger a resync through the health check, which will run for both subscription and data policies.
: The architecture updates include improvements to Immuta's subscription logic that allow Immuta to be fully additive to existing Databricks grants. Previously, Immuta would take over all grants on a data source, meaning users were revoked if they were not explicitly granted access through an Immuta subscription policy. Now, Immuta-managed grants and Databricks-managed grants coexist.
This change was also announced in the .
Snowflake data policy impersonation: Impersonation is now compatible with the most recent version of Immuta's Snowflake integration, which leverages low-row access policy mode for improved overall performance.
With this update, impersonation is now available for data policy access only. Administrators can impersonate others to see the row- and column-level restrictions their users are subject to. When impersonating another user, the impersonator’s subscription policy access will remain unchanged.
The release of this feature will follow Immuta’s behavior change release process. The specific dates for each phase in that process are outlined below.
Opt-in period: 1/22 - 2/22
On by default: 2/23 - 3/22
Enabled for all accounts: 3/23
This feature was also announced in the .
Security improvement to Immuta's webhook signature scheme: Immuta is updating the webhook signature scheme to use HMAC-SHA256 instead of HMAC-SHA1.
This change aligns Immuta with current cryptographic best practices and NIST guidance. The webhook payload format and shared secret remain unchanged; only the hashing algorithm used to generate the signature has been updated.
What is changing
Currently, Immuta sends a webhook signature signed with HMAC-SHA1 via the x-immuta-webhook-signature HTTP header. Immuta has begun sending an additional webhook signature signed with HMAC-SHA256 via a new HTTP header, x-immuta-webhook-signature-sha256. As of today, you can opt in to stop receiving webhook signatures using HMAC-SHA1.
Impact to you
Customers that validate webhook signatures must ensure their verification logic supports HMAC-SHA256.
No action is required for customers who do not perform signature validation.
Timeline
The release of this change will follow Immuta’s behavior change release process. The specific dates for each phase in that process are outlined below.
1/20: Customers can opt in to stop receiving a webhook signature signed with HMAC-SHA1.
2/20: Immuta will stop sending webhook signatures signed with HMAC-SHA1 by default, but customers can opt out of this change during this period.
3/20: The change will be enabled for all accounts. Webhook signatures will no longer be signed using HMAC-SHA1, and nothing will be sent over the x-immuta-webhook-signature header.
Why this change
While HMAC-SHA1 has not been shown to be practically exploitable in this context, SHA-1 is deprecated and no longer recommended for new designs. This update is a proactive security-hardening measure.
This change was also announced in the .
Connections for Starburst (Trino) integrations: Connections is Immuta’s enhanced approach to efficient data object management, offering the following benefits:
Reduced complexity by having just one connection in Immuta for both metadata onboarding and policy application with your data platform.
Increased scalability by onboarding all your objects at once instead of repetitive patterns.
Improved performance to manage and track metadata changes.
Connections can now be used with Trino (and Starburst) integrations. Reach out to your Immuta support professional to enable it on your tenant. Once enabled, a banner will direct you to the upgrade manager where you can initiate the process to upgrade your existing Trino integration.
The release of this feature will follow Immuta’s behavior change release process. The specific dates for each phase in that process are outlined below.
Opt-in period: 1/13-2/12
On by default: 2/13-3/12
Enabled for all accounts: 3/13
This feature was also announced in the .
: Advanced masking exceptions allow data governors to consolidate complex masking matrices containing hundreds of rules into a single policy.
The new advanced masking policy options in the global data policy builder allow you to create dynamic exception rules that let users see just a subset of sensitive columns in the clear, instead of an all-or-nothing approach to granting access to restricted data, which is often too broad and creates a security risk. The two policies below illustrate how advanced masking exceptions transform a policy.
Basic masking exception (static): "Mask columns tagged sensitive except when a user is a member of group Finance or group HR."
Users that are either part of the Finance or HR department will always see all sensitive columns unmasked everywhere.
Advanced masking exception (dynamic): "Mask columns tagged sensitive for everyone except when columns are tagged with the department the user currently belongs to.”
This dynamic approach helps you maintain a high security baseline without sacrificing business agility: global masking rules protect your organization from broad data leaks and advanced masking exceptions ensure that security doesn't become a bottleneck.
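The dynamic exception can be sketched as a simple evaluation function. This is an illustration only; the tag and attribute shapes are assumptions, and actual enforcement happens in the underlying data platform, not in Python.

```python
def mask_value(value, column_tags, user_departments):
    """Sketch of the dynamic exception: unmask a sensitive column only
    when one of its tags matches a department the user belongs to."""
    if "sensitive" not in column_tags:
        return value  # non-sensitive columns are never masked
    if column_tags & set(user_departments):
        return value  # exception applies: column shown in the clear
    return None       # otherwise masked (shown here as NULL)

# A column tagged sensitive and Finance is clear for Finance users
# but masked for HR users.
tags = {"sensitive", "Finance"}
print(mask_value("4111-1111", tags, ["Finance"]))  # 4111-1111
print(mask_value("4111-1111", tags, ["HR"]))       # None
```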
See a demo of this new feature in the .
The Immuta Request App: A dedicated experience focused on the request lifecycle, continuing to support but no longer centering the data product marketplace view.
Documentation: You will notice significant changes to our documentation to reflect this change.
Databricks integrations fully managed through connections: For all actions related to editing and managing Databricks integrations, users must go through connections. Databricks integrations will no longer be present in Immuta's integrations app settings page.
Deprecating integration error banners: Immuta will no longer show the integration error banners when an integration validation is failing. Those error messages will be migrated to a new user experience through Immuta connections.
New Immuta schemas: Immuta will create new immuta_policies schemas in your Databricks environments and manage all policies through the new schemas. The original policy schemas will still be present in your Unity Catalog environment, but Immuta will no longer use those.
Users that are part of the Finance or HR department will only see a subset of sensitive columns unmasked, depending on what has been cleared for use for their department.
Azure Private Link for Starburst is generally available: Private Link connectivity to Starburst on Azure, previously in private preview, is now generally available.
Classification page is available by default: Users can now view created through the frameworks API on the Discover classification page. This page is now enabled on all tenants.
: Marketplace brings data products and data people together by exposing request and approval workflows, all backed by the existing Immuta policy engine. Integrate Marketplace with your existing catalog, or leverage the Immuta Marketplace app alone.
It allows your entire organization to provision data as one through workflows:
Publish data products. Make curated data products findable on a single, central platform.
Establish teams and authority. Define logical domains for local control and visibility. Enable business users to manage metadata and access approvals separately from data product owners.
Search and access data assets. Make it easy for users to search and filter available data assets, and establish a process to easily request access.
Watch the for a demo.
Disable randomized response by default and allow a customer to opt in: When a randomized response policy is applied to a data source, the columns targeted by the policy are queried under a fingerprinting process that contains the predicates used for the randomization. The results of this query, which may contain sensitive data, are stored in the Immuta internal database. Because this process may violate an organization's data localization regulations, you must reach out to your Immuta representative to enable this masking policy type for your account. If you have existing randomized response policies, those policies will not be affected by this change.
Global sensitive data discovery (SDD) template setting changes: If you have SDD enabled, there are template setting changes and a change in how Immuta runs SDD automatically on new data sources:
The default value for Global SDD Template Name is blank.
If you don't change the default value and leave Global SDD Template Name blank, Immuta won't run any patterns on new data sources.
If you change the default value and want a different identification framework to run, you need to enter the name of that identification framework (instead of the displayName).
: Immuta’s external catalog integration now supports auto-linking data sources with Collibra Edge. The auto-linking process performs name matching of data assets following the with their corresponding data sources in Immuta.
Deprecated features remain in the product with minimal support until their end of life date.
Fix for accurately representing disabled users’ subscription status for data sources and projects in governance reports: Addressed an issue where users with status disabled were misrepresented in governance reports as being subscribed to data sources or projects when in fact they weren’t. (Disabled users always have all their data source and project subscriptions revoked until they get re-enabled.)
The following governance reports have been fixed:
Data source:
All data sources and the users and groups subscribed to them
What users and groups are subscribed to a particular data source
Deprecated features remain in the product with minimal support until their end of life date.
Conditional tags applied by sensitive data discovery are deprecated and will be removed from the product in December 2024. If you rely on conditional tags, consult your Immuta representative for instructions on using the classification framework API to apply these tags instead of sensitive data discovery.
: The frameworks API allows users to create rules to dynamically tag their data with sensitivity tags to drive dashboards and policies. These custom rules and frameworks can then be viewed in the UI and managed through the API.
: Additional improvements have been made to the improved pack of built-in identifiers:
CREDIT_CARD_NUMBER: Previously, this identifier only detected card numbers that can currently be issued. Now, it also detects credit card numbers that were formerly issued.
Deprecated features remain in the product with minimal support until their end of life date.
The following built-in Classification Frameworks are now deprecated and will reach end of life in December 2024:
California Consumer Privacy Act
Data Security Framework
General Data Protection Regulation
Health Insurance Portability and Accountability Act
Instead, use the to create custom frameworks that replicate the functionality of any built-in framework and extend them to suit your use cases. Immuta's Product Engineering team can assist you with creating your custom framework.
Azure Private Link for Snowflake and Databricks is generally available: Azure Private Link provides private connectivity from the Immuta SaaS platform (hosted on AWS) to customer-managed Snowflake and Databricks accounts on Azure. It ensures that all traffic to the configured endpoints only traverses private networks over the Immuta private cloud exchange.
Integration error updates: This feature includes banner notifications for all users when an integration is experiencing an error. This update calls attention to critical integration errors that can have large impacts to end users to improve awareness and streamline the process of pinpointing and driving errors to resolution.
Additionally, Immuta has simplified how the integration statuses are reported within the app settings integrations page for enhanced clarity.
: This deployment includes a new standard connector (out-of-the-box) for tag enrichment from a Microsoft Purview enterprise data catalog to Immuta.
The Microsoft Purview catalog integration with Immuta currently supports tag ingestion of and as tags for Databricks Unity Catalog, Snowflake, and Azure Synapse Analytics data sources and their associated columns. Additionally, data source and column descriptions from the connected Microsoft Purview catalog will also be pulled into Immuta.
This connector simplifies tag enrichment in Immuta for customers whose tag information resides in Microsoft Purview enterprise data catalog. Previously, customers leveraging Microsoft Purview enterprise data catalog had to build an integration themselves using Immuta’s custom REST catalog interface.
: This feature allows users to configure additional workspace connections within their Databricks integrations and bind these additional workspaces to specific catalogs. This enables customers to use Databricks’ feature with their Immuta integration. Users can dictate which workspaces are authorized to access specific catalogs, allowing them to better control catalog access and isolate compute costs if desired.
: This feature allows connections to data sources over private networking that reside in a different global segment than their Immuta tenant. For example, if your Immuta tenant is in North America, you can now connect to data sources in APAC and the EU over private networking.
Databricks integration support defaulted to Unity Catalog: Eliminated the manual step of updating a global account setting prior to configuring a Unity Catalog integration. For Databricks integrations, the default support now assumes a Unity Catalog integration.
Customers using Databricks Spark must now update the default account setting before configuring their Databricks integrations.
Deprecated items remain in the product with minimal support until their end of life date.
: These improved patterns have higher accuracy out of the box, which reduces the amount of overtagging and missed tags. The result is an easier experience and reduced time to value generating actionable metadata.
New standard connector for tag enrichment from Microsoft Purview enterprise data catalog to Immuta. In addition to Purview tags, the following Purview objects will be pulled in and applied to registered data sources as either column or data source tags in Immuta:
System classifications
: All governance reports based on sensitive data discovery now have a report column showing whether the tag is used as part of a policy in Immuta.
Authentication change to accommodate Snowflake moving away from password-only authentication: This deployment includes updates to the integration setup script to accommodate Snowflake beginning to transition away from password-only authentication for new accounts. Immuta provides an updated manual setup script that permits password-only authentication by differentiating it as a legacy service with an additional parameter. Existing integrations will continue to function as-is.
Fix for Databricks audit workspace IDs: Previously, users filtering their audit by workspaces had to enter a 16-digit workspace ID. This restriction has been removed.
: This permission enables customers to delegate activity reviews to individuals for a set of audit events related to data sources within a domain, helping organizations open up access to query information to more users across the enterprise while staying compliant. For customers who use domains to define data products, the Audit Activity domain permission allows data product owners to review query activities of the data sources they manage using rich visualizations and dashboards.
SDD governance report shows whether tags are used in a policy: Under the governance reports menu, all reports based on sensitive data discovery now have a report column showing whether the tag is used as part of a policy in Immuta.
Rotating the shared secret for Starburst (Trino): Users can rotate the shared secret used for API authentication between Starburst (Trino) and Immuta, which provides improved security management, compliance with organizational policies, and the following benefits:
Enhanced security: Regularly update your API credentials to mitigate potential security risks.
Compliance support: Meet security requirements that mandate periodic rotation of API keys.
Flexibility: Change the shared secret at any time after the initial integration setup.
Existing integrations will continue to function normally. Downtime is required when rotating the shared secret; plan rotations to ensure continuous operation of your integration, and establish a regular schedule for rotating your shared secret as part of your security best practices.
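A rotation routine can be as simple as generating a new high-entropy secret on a fixed cadence. This is a generic sketch; the secret length and the 90-day interval are assumptions for illustration, not Immuta requirements:

```python
import secrets
from datetime import date, timedelta

# Assumed cadence for illustration; choose one that fits your policy.
ROTATION_INTERVAL = timedelta(days=90)

def new_shared_secret(n_bytes: int = 32) -> str:
    """Generate a URL-safe, high-entropy shared secret."""
    return secrets.token_urlsafe(n_bytes)

def rotation_due(last_rotated: date, today: date) -> bool:
    """True when the secret is older than the chosen rotation interval."""
    return today - last_rotated >= ROTATION_INTERVAL
```

A scheduler can call `rotation_due` daily and trigger the rotation window (with its required downtime) only when needed.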
Deprecated items remain in the product with minimal support until their end of life date.
Schema monitoring for Snowflake and Databricks Unity Catalog supports detecting and automatically reapplying policies on data sources that have changed their object type (for example, a VIEW that was changed into a TABLE or vice versa).
SDD supports Databricks Unity Catalog OAuth M2M: Sensitive data discovery now works with Databricks data sources that are registered in Immuta using .
Only users with the permission are authorized to use the endpoint; users without that permission are blocked and receive a 403 status.
: When schema monitoring is enabled, Immuta applies a New tag whenever a new data source is added or its columns change. This allows governors to create policies that automatically apply to all new data sources and columns (such as masking new data by default).
Previously, data owners were always asked to validate data source requests (which in turn removes the New tag) related to data source and column changes, even if there was no actual policy present targeting the New tag.
Now, data owners are only asked to validate data source requests if an actual policy is present that is targeting the New tag. Otherwise the validation request for data owners gets skipped.
As a result, in the absence of a relevant policy, data owners will now have fewer data source requests to validate which saves them time and increases efficiency.
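The new behavior amounts to a guard before queuing a validation request. This is an illustrative model of the logic described above, not Immuta's code; the policy structure is hypothetical:

```python
def owner_validation_needed(active_policies: list[dict]) -> bool:
    """Ask data owners to validate a New-tagged change only when some
    active policy actually targets the 'New' tag; otherwise skip the
    request entirely."""
    return any("New" in p.get("target_tags", []) for p in active_policies)
```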
Query text has been removed from all legacy audit records: Immuta no longer stores query text with legacy audit records, as its support has reached . Instead, use , which by default contain query text.
Snowflake External OAuth: The form field Client Secret stopped being displayed in the UI for Snowflake data source registration, which led customers to believe that Snowflake External OAuth using client secret was no longer a supported authentication mechanism. This fix reintroduced the client secret field in the UI.
Customers who had already registered data sources with Snowflake External OAuth previously via the UI, API, or CLI while the bug existed were not affected, since the issue only affected the UI but not the backend or programmatic interfaces.
Deprecated items remain in the product with minimal support until their end of life date.
Masked joins for Snowflake and Databricks Unity Catalog integrations are now generally available. This feature allows masked columns to be joined across data sources that belong to the same project, giving users additional capability for data analysis within a project while still securing sensitive data. Sensitive columns can be masked while still allowing users to join on them within a project, helping organizations strike the correct balance between access and security.
Simpler UX for sensitive data discovery: Customizing sensitive data discovery is now easier and quicker with a single entry point for configuration. Instead of navigating to multiple pages in the Immuta application, use a single form to create an identifier for sensitive data and add tags and regex patterns.
Released Immuta CLI v1.4.0: A new version of the CLI was released which includes new support for AWS IAM role authentication for audit export to S3 and some CLI breaking changes. See the CLI release note for more details.
Allow masked joins for Snowflake and Databricks Unity Catalog integrations: This feature allows masked columns to be joined across data sources that belong to the same project, giving users additional capability for data analysis within a project while still securing sensitive data. Sensitive columns can be masked while still allowing users to join on them within a project, helping organizations strike the correct balance between access and security.
Removing legacy audit records: Starting July 23rd, Immuta will begin enforcing the 90-day retention period for legacy audit records for all tenants in SaaS. This will have no impact on governance reports. Universal Audit Model (UAM) records can be exported on a configured schedule to S3 or ADLS; see the or guides.
Group membership count contains information on active and disabled users: When looking at the number of users contained in a group, you can easily distinguish between active and disabled users. This enhancement allows user admins to verify accurate user-to-group membership between their external identity access manager and Immuta faster.
Support role-based access for S3 audit export: Audit export supports AWS IAM authentication. Customers can use AWS assumed role-based authentication or access key authentication to secure access to S3 to export audit events.
Databricks Unity Catalog integration tag ingestion: Customers who have tags defined and applied in Databricks Unity Catalog can seamlessly bring those tags into Immuta to leverage them for attribute based access control (ABAC), data classification, and data monitoring.
This feature is currently in preview at the design partner level. To use this feature in preview, you must have no more than 2,500 Unity Catalog data sources registered in Immuta. See the for expectations and details, and then reach out to your Immuta representative to enable this feature.
Comply with column length and precision in a Snowflake masking policy: Snowflake is soon requiring the outputs of masked columns to comply with the length, scale, and precision of what the Snowflake columns require. To comply with this Snowflake behavior change, Immuta truncates the output values in masked columns to match the Snowflake column requirements so that users' queries continue to complete successfully.
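The effect of the change can be illustrated with a simple truncation helper. This is a sketch of the described behavior only; the real enforcement happens inside Immuta's Snowflake masking policies:

```python
def fit_masked_value(masked: str, max_length: int) -> str:
    """Truncate a masked output so it complies with the column's
    declared length, as Snowflake now requires (e.g., a VARCHAR(8)
    column cannot receive a 12-character masked value)."""
    return masked[:max_length]
```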
Trino universal audit model available with Trino 435 using the Immuta Trino plugin 435.1: For customers that are using EMR 7.1 with Trino 435.1, and have audit requirements, the Immuta Trino 435.1 plugin now supports audit in the universal audit model. The Immuta Trino 435.1 plugin audit information is on par with the Immuta Trino 443 plugin. The Immuta Trino 435.1 plugin is supported on SaaS and 2024.2 and newer.
Adding a new external catalog integration automatically backfills tags for pre-existing data sources: Prior to this change, users had to manually link pre-existing data sources to the relevant external data catalog entry after a new external data catalog integration was set up, and only newly registered data sources were linked automatically. Now, Immuta triggers an auto-linking process for all unlinked data sources when a new external data catalog integration setup is saved.
This change increases the level of automation, reduces cognitive and manual workload for data governors, and aligns external data catalog integration behavior with end user expectations.
Removing the overview tab on identification frameworks: Under Discover, each identification framework now has two tabs: rules and data sources. Prior to this change, there was an overview tab that linked to the other two tabs. When clicking into an identification framework, you now land directly on the rules tab.
: We are excited to announce that Immuta now supports establishing connections to Databricks using OAuth Machine-to-Machine (M2M) authentication. This feature enhances security and simplifies the process of integrating Databricks with Immuta, leveraging the robust capabilities of OAuth M2M authentication.
New product changelog: The new Immuta product changelog will announce the latest product updates, features, improvements, and fixes.
Immuta users can open the in-app changelog by clicking “What’s New?” in the left-hand navigation. It is also available at .
Schema monitoring enhancement for Databricks Unity Catalog: Schema monitoring for Databricks Unity Catalog now supports detecting and automatically reapplying policies on destructively recreated tables (from CREATE OR REPLACE statements), even if the table schema itself wasn’t changed.
: Immuta query audit events for Starburst (Trino) will include the following information.
Object accessed: The tables and columns that were queried
Tags: The Immuta table and column tags, including data catalog tags synchronized to Immuta, for queried tables and columns
Sensitivity classification: The columns' sensitivity in context of other queried columns if an Immuta classification framework is enabled at the time of audit event processing
Governance permission required for Discover: Starting today, the Discover UI for managing automated data identification and classification is only accessible to users with the GOVERNANCE permission in the Immuta application. Previously, Immuta users with permission to create data sources could also access the settings in the Discover UI.
Fix for external tag ingestion related to Collibra Output Module API behavioral change: Incorrect filters were being passed to Collibra’s Output Module API when fetching column tag information. This was resulting in a failed API request while linking or refreshing Collibra tags on a data source. Collibra’s Output Module API began performing additional request validation on approximately May 6, 2024, which indicated a problem. This fix ensures that the Collibra tag ingestion integration in Immuta is reflecting these changes. Without it, there was a residual risk that some incorrect column tags would get ingested.
Data owners can now see audit events for the data sources that they own without having the AUDIT Immuta permission: Data owners can see query events for their data sources on the audit page, data overview page, data source pages, and the data source activity tab. They can also inspect Immuta audit events on the audit page and activity tab for the data sources they own. This enhancement gives data owners full visibility of activity in the data sources they own.
Snowflake memoizable functions update: Immuta policies leverage Snowflake’s memoizable UDFs. When an end user references a policy-protected column in a query, the cached results are available from the memoizable function, resulting in faster, more performant queries.
Running table statistics only if required (instead of by default): Table statistics consist of row counts, identification of high cardinality columns, and a sample data fingerprint. Immuta needs to collect this information in order to support the following data access policy types:
Column masking with randomized response
Column masking with format preserving masking
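Conceptually, the statistics Immuta collects look like the following. This is a simplified sketch; the actual fingerprinting process and the cardinality threshold are internal details:

```python
def table_statistics(rows: list[dict], high_cardinality_ratio: float = 0.9) -> dict:
    """Compute a row count and flag high-cardinality columns
    (columns whose distinct-value ratio exceeds the threshold)."""
    n = len(rows)
    columns = rows[0].keys() if rows else []
    high_cardinality = [
        c for c in columns
        if n and len({r[c] for r in rows}) / n >= high_cardinality_ratio
    ]
    return {"row_count": n, "high_cardinality_columns": high_cardinality}
```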
Data policies on Snowflake Iceberg tables: Users can now apply fine-grained access controls to Snowflake Iceberg tables, making support for Immuta data policies and subscription policies consistent across standard Snowflake table types.
POST /project endpoint: Users will receive a 422 status error instead of a 400 status error when trying to create a new project name that would result in a database conflict on the project's unique name.
POST /api/v2/data endpoint response: creating will not be returned in the response when using this endpoint the first time; the response will just include bulkId and connectionString. However, when updating a data source using POST /api/v2/data
: Domains are containers of data sources that allow you to assign data ownership and access management to specific business units, subject matter experts, or teams at the nexus of cross-functional groups. Domains support organizations building a data mesh architecture and implementing a federated governance approach to data security, which can accelerate local, domain-specific decision making processes and reduce risk for the business.
This feature is being gradually rolled out to customers and may not be available in your account yet.
Improved user experience for managing users, data sources and policies: This deployment includes significant user experience updates focused on enhancing Immuta's key entities: users, data sources, and policies.
Project-scoped purpose exceptions for Snowflake and Databricks Unity Catalog integrations: Row and column-level policies can now account for purposes and projects for additional security. With this policy configuration, a user will only be able to view the data the policy applied to if they are acting under a certain purpose and that data is within their current project. Purpose exception policies ensure data is only being used for the intended purposes. This feature is in private preview.
The POST /tag/{modelType}/{modelId} endpoint (which adds tags to models that can be tagged, such as data sources and projects) can only apply tags that exist to these models. This update presents one breaking change: A 404 status will now be returned with the tag(s) that were not valid instead of a 200 status, and no tags will be processed if any invalid tags are found.
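Client code can model the all-or-nothing behavior like this. This is an illustrative sketch; only the endpoint path and status codes come from the note above:

```python
def apply_tags(existing_tags: set[str], requested: list[str]) -> tuple[int, list[str]]:
    """Model the POST /tag/{modelType}/{modelId} breaking change:
    if any requested tag does not exist, return 404 with the invalid
    tags and process nothing; otherwise return 200 and apply them all."""
    invalid = [t for t in requested if t not in existing_tags]
    if invalid:
        return 404, invalid   # no tags are processed
    return 200, requested     # all tags applied
```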
: Besides READ operations, Immuta's Amazon S3 integration now also supports fine-grained access permissions for READWRITE operations. While Immuta read policies control who can consume objects from Amazon S3 storage locations, write policies allow control of who can add and delete objects. Contact your Immuta representative to get write policies for Amazon S3 enabled in your Immuta tenant.
Disable k-anonymization by default and allow a customer to opt in: When a k-anonymization policy is applied to a data source, the columns targeted by the policy are queried under a fingerprinting process that generates rules that enforce the k-anonymity. The results of this query, which may contain sensitive data, are temporarily held in memory.
Because this process may violate an organization's data localization regulations, you must reach out to your Immuta representative to enable this masking policy type for your account. If you have existing k-anonymization policies, those policies will not be affected by this change.
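For reference, k-anonymity requires every combination of quasi-identifier values to appear at least k times. A minimal check looks like the following; this is illustrative only and is not the fingerprinting process itself:

```python
from collections import Counter

def satisfies_k_anonymity(rows: list[dict], quasi_identifiers: list[str], k: int) -> bool:
    """True when every quasi-identifier value combination occurs
    at least k times in the dataset."""
    counts = Counter(tuple(r[q] for q in quasi_identifiers) for r in rows)
    return all(c >= k for c in counts.values())
```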
Updated classification frameworks: Customers using the public preview classification frameworks feature now have access to the Data Security Framework (DSF) and Risk Assessment Framework (RAF). DSF extends sensitive data discovery tags to apply descriptive category tags to your data; RAF extends the DSF to apply sensitivity tags to your data, such as Medium, High, and Very High.
Together, these frameworks replace the less comprehensive legacy Immuta Data Security Framework, which has been deprecated and will be removed from the product.
Support protecting more than 10,000 objects with Unity Catalog row- and column-level policies: Users can now mask more than 10,000 columns or tables with row filters, removing the previous limitation in the Unity Catalog integration. This enhancement provides greater flexibility and scalability for data masking operations, allowing users to effectively secure sensitive data across larger datasets.
Updates to button labels: Two buttons have been renamed to align their labels more closely with their functionality.
The "Sync Policies" button has been renamed to "Sync Data Policies" to better reflect its function.
: Immuta can now export audit logs to Microsoft Azure ADLS Gen2 Blob in the universal audit model (UAM) format. The Immuta audit export payload contains audit records for both configuration activity in Immuta and data access activity from Snowflake, Databricks, and Starburst.
Deprecated items remain in the product with minimal support until their end of life date.
The ability to configure the behavior of the default subscription policy has been deprecated and will reach end of life in September 2024. Once this configuration setting is removed from the app settings page, Immuta will not apply a subscription policy to registered data sources unless an existing global policy applies to them. To set an "Allow individually selected users" subscription policy on all data sources, create a global policy with that condition that applies to all data sources, or apply a local policy to individual data sources.
: Immuta's universal audit model now includes query audit events from Starburst Enterprise. These query audit events are included on the new audit page, in the Detect activity views, and in the S3 export payload. This feature is currently supported in Immuta SaaS tenants with Starburst e438 and will be available in the 2024.2 LTS release.
Query duration support for Detect Monitors: Immuta Detect can now notify you via a webhook when a user executed a query that exceeded a configurable duration threshold on supported data platforms. This enhancement allows data platform owners to know when a user issued long-running queries so they can keep data warehouse running costs low. Additionally, knowing which users issued long-running queries is an opportunity to enable data consumers to query the data in an optimal way, direct them to use another optimized data set, and allow the data owner to understand new workload requirements.
Use these monitors to increase visibility of users and queries that may breach data platform latency SLOs and to control data warehouse costs.
The POST /tag/column/{datasource_id}_{column_name} endpoint (which adds tags to columns on data sources) can only tag existing columns on data sources. It does this by checking the dictionary associated with the data source to see if the desired column exists on the data source. This deployment introduces two breaking changes:
Column does not exist 404: When the column does not exist on the data source, a 404 status is now returned instead of a 200.
Dictionary does not exist 404: When an associated dictionary does not exist on the specified data source (that you have access to add tags to), a 404 status is now returned instead of a 403.
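A client handling this endpoint can branch on the documented statuses. This is a hypothetical client-side sketch; only the path and status codes come from the notes above:

```python
def interpret_tag_column_response(status: int) -> str:
    """Map POST /tag/column/{datasource_id}_{column_name} statuses to
    client actions under the new behavior (404 now replaces both the
    old 200-for-missing-column and 403-for-missing-dictionary cases)."""
    if status == 200:
        return "tag applied"
    if status == 404:
        return "column or dictionary not found on the data source"
    return f"unexpected status {status}"
```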
Audit page: The audit page that uses the legacy audit format has been removed. The legacy version of Immuta audit format continues to be maintained and accessible through the deprecated audit API until its scheduled EOL date.
Color coding for data source health: The health status for each data source on the data source list page now uses color coding to provide a visual for users so they can quickly determine whether they should take action related to the health of data sources. Additionally, unhealthy data sources are ranked at the top of the list on the data source page to ensure that when users log in to Immuta they are aware that unhealthy data sources exist in the system. Prior to this change, users had to click through all data source pages or had to explicitly set up a filter to achieve the same behavior.
“Pending” policy state: A new Pending policy state indicates when background jobs are running to update permissions after a policy is created or changed. Once the Pending state changes to Active, all policy changes have been enforced on affected data sources.
Faster query performance with Snowflake memoizable functions: When a policy is applied to a column, Immuta now uses Snowflake memoizable functions to cache the result of common lookups in the policy encapsulated in the called function.
Subsequently, when users query a column with the applied policy, Immuta leverages the cached result, resulting in significant enhancements to query performance.
To enable support for memoizable functions, contact your Immuta customer success representative.
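The speedup is the same idea as memoizing any deterministic lookup. In Python terms (an analogy only; the actual caching happens inside Snowflake's memoizable UDFs):

```python
from functools import lru_cache

calls = {"count": 0}

@lru_cache(maxsize=None)
def allowed_values(policy_id: str) -> frozenset:
    """Stand-in for an expensive policy lookup: the first call with a
    given argument computes the result, later calls return the cached
    value without recomputing."""
    calls["count"] += 1
    return frozenset({"HR", "Finance"})
```

Repeated queries against a policy-protected column behave like repeated calls with the same argument: only the first one pays the lookup cost.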
Workspace filtering for Databricks Unity Catalog audit collection: Users can limit Databricks Unity Catalog audit collection by specifying a comma-delimited list of Databricks workspace IDs in the integration's app settings.
For a more responsive Detect activity page experience, Immuta limited the number of auto-suggested filter values (such as data sources, tags, and users) to 100 of the most active values. The total item count for each filter type still reflects the number of events in the dashboard time range.
When pulling personally identifiable information (PII) from Collibra, Immuta now includes and differentiates true and false value assignments as Personally Identifiable Information.true and Personally Identifiable Information.false to more accurately reflect how PII is set in Collibra.
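The resulting tag names follow the assigned value, which can be modeled as a one-line mapping (a sketch of the naming described above):

```python
def collibra_pii_tag(is_pii: bool) -> str:
    """Map Collibra's boolean PII assignment to the corresponding
    Immuta tag name."""
    return f"Personally Identifiable Information.{str(is_pii).lower()}"
```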
Improved validation when saving sensitive data discovery patterns in the Immuta UI: When adding a regular expression pattern for sensitive data discovery, the Immuta UI validates the format of the regular expression according to the RE2 regular expression standard. Patterns that don’t conform cannot be saved, preventing those patterns from causing failures at run time.
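A client-side pre-check of a pattern can use the same idea. This is a sketch only: Python's re module is not RE2, so some constructs RE2 rejects (such as backreferences) would pass this check:

```python
import re

def pattern_compiles(pattern: str) -> bool:
    """Return True when the pattern is syntactically valid Python regex.
    Note: RE2 is stricter than Python's re, so this is only an
    approximate pre-check, not full RE2 validation."""
    try:
        re.compile(pattern)
        return True
    except re.error:
        return False
```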
: Immuta Detect monitors help you surface non-compliant data combinations and maintain data availability through data platform configuration changes. Monitors automate manual aggregation and calculation of user activity metrics based on query events. Additionally, they can notify you when the metrics exceed your intended operating thresholds. Monitors work with query tags, query execution outcomes, and Immuta Discover classification sensitivities when enabled.
This feature is in private preview and can be made available upon request. Contact your customer success manager for more details.
Fix to address a UI issue that led customers to believe that disabled users were not having their access revoked. The UI has been updated so that disabled users are now filtered out of the data source members tab.
Universal audits now include Immuta configuration audit events, domain audit events, sensitive data discovery (SDD) audit events, and user management audit events. Immuta tenants with the domain preview enabled can now audit domain structure changes.
Sensitive data discovery (SDD) pattern validation at runtime: SDD has used RE2 regular expression syntax since mid-2023, and custom patterns created since then are validated when added to the system. In limited cases, custom patterns created earlier are not RE2 compliant and cause SDD analysis to fail without apparent cause. Now, those cases raise a detailed message stating the pattern name and the full regular expression. This message is shown under the data source health check menu for any targeted data sources where SDD failed for this reason.
Activating regulatory frameworks in Discover: Fix to address an issue that prevented some customers from activating the regulatory frameworks in Discover. In some cases, customers who had used the Immuta data security framework (DSF) before gaining access to the new frameworks for GDPR, CCPA, HIPAA, and PCI were unable to activate them.
Amazon S3 integration: Immuta’s Amazon S3 integration enhances the management of permissions in complex data lakes on object storage. Eliminate scalability concerns as you enforce S3 access effortlessly. You can grant users time-bound access to files and folders, creating a security posture with zero standing permissions, a gold standard for compliance.
Additionally, you can grant access to human identities seamlessly through Identity Providers (IdPs) like Okta, Microsoft Entra ID, and more, thanks to integration with AWS IAM Identity Center. With the implementation of attribute-based access controls (ABAC) for S3, Immuta provides a simplified and efficient approach to managing data lake permissions. The privileges you set using the Amazon S3 integration can apply anywhere, from the CLI, to your applications using AWS SDKs, and on Amazon EMR Spark and Amazon SageMaker. Elevate your data governance with these advanced capabilities and experience a seamless and secure data access environment. Contact your customer success manager for more details.
Universal audits now include Immuta policy and data source changes.
Deprecated items remain in the product with minimal support until their end of life date.
Bug fix for sensitive data discovery settings: Fix to the App Settings for Sensitive Data Discovery. Previously, the field to set the global SDD framework was hidden and as a result the global SDD framework could not be updated. The field is now available when SDD is turned on.
Bug fix for SDD rules display: Fix to an issue with adding new discovery rules to an identification framework. Previously, a newly added discovery rule would not appear in the list in the UI until the page was reloaded. Newly added rules now appear in the list immediately.
Immuta could not update a group through SCIM if that group was initially created through SAML before SCIM was enabled in an IAM's configuration.
Enhancement to Classification Frameworks rule display: In Discover, under Classification frameworks, the list of rules now shows all input and output tags in the browse list. There is no need to click further into a details screen to learn everything about a rule.
Change to SDD person name rule: The built-in Sensitive Data Discovery pattern for Person Name has been adjusted to more easily match columns that are consistent with person names.
Addressed a vulnerability that could allow a malicious user to enter HTML tags to affect the page's user interface. Such an issue could increase the risk of XSS attacks or threaten users’ privacy.
Performance improvements for Immuta tenants that had data sources with more than 500 masked columns.
Redshift Spectrum data sources were not deleted when the schema project they belonged to was deleted.
Fix to address an issue that prevented users with the CREATE_DATA_SOURCE permission from being able to create a data source if a user without that permission previously tried to register data sources via the API.
Users were unable to edit an external catalog’s configuration.
Minor enhancements and fixes that are not user-facing.
The integrations API allows you to integrate your remote data platform with Immuta so that Immuta can manage and enforce access controls on your data.
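As a sketch of how a client might assemble a call to such an API, the snippet below builds a POST request with only the Python standard library. The `/integrations` path, the bearer-token header, and the payload field names are assumptions for illustration; consult the Immuta API reference for the actual contract:

```python
import json
from urllib.request import Request

def build_integration_request(base_url: str, api_key: str, payload: dict) -> Request:
    """Assemble a POST request to register an integration.
    The '/integrations' path and Authorization scheme are illustrative."""
    body = json.dumps(payload).encode("utf-8")
    return Request(
        f"{base_url}/integrations",
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

# Hypothetical Snowflake integration payload -- field names are assumptions.
req = build_integration_request(
    "https://example.immuta.com",
    "my-api-key",
    {"type": "Snowflake", "autoBootstrap": True},
)
print(req.full_url)      # https://example.immuta.com/integrations
print(req.get_method())  # POST
```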
With native SDD enabled, users will have SDD options displayed when creating a data source for Snowflake and Databricks, but those SDD options will no longer be displayed for other technologies.
An additional 19 UAM audit events are captured and can now be viewed on the Immuta audit page in the UI or exported to S3. See the documentation for the full list of supported events.
If creating a user initially failed because of an invalid payload, users encountered the following 409 error in a subsequent request with the correct payload: User with the provided userid already exists.
Projects: What users and groups are part of a particular project
Purpose: What users are members of projects with a particular purpose
User:
All users and the data sources they are subscribed to
What data sources is a particular user subscribed to
What projects is a particular user currently a member of
PERSON_NAME: Enhanced the pattern to detect a wider variety of names and reduce the number of false negatives.
DATE: Previously only worked with strings. The pattern is enhanced to now detect and apply when the data type is date.
TIME: Previously only worked with strings. The pattern is enhanced to now detect and apply when the data type is time.
Immuta Data Security Framework
Payment Card Industry Data Security Standard
Risk Assessment Framework
Custom classifications
Managed attributes
Query duration: The amount of time it took to execute the query in seconds
Database name: The name of the Starburst (Trino) catalog
Column masking with k-anonymization
Column masking with rounding
Row minimization
Before this change, table statistics were collected for every newly onboarded object by default, unless the object had a Skip_Stats tag applied. After this change, table statistics are collected on a data object only once they are required (that is, when one of the policy types listed above is applied); even then, the Skip_Stats tag continues to be respected. This change improves performance, as the number of standard operations during data object onboarding is significantly reduced.
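The decision described above reduces to a simple predicate: collect statistics only when a stats-dependent policy is applied, and never when the object is tagged Skip_Stats. A minimal sketch (the policy-type names and function are illustrative, not Immuta's internals):

```python
# Policy types that require table statistics, per the release note above.
STATS_POLICIES = {"k-anonymization", "rounding", "row-minimization"}

def needs_stats(applied_policies: set[str], tags: set[str]) -> bool:
    """Collect stats only when a stats-dependent policy is applied,
    and never when the object carries the Skip_Stats tag."""
    if "Skip_Stats" in tags:
        return False
    return bool(applied_policies & STATS_POLICIES)

print(needs_stats({"rounding"}, set()))           # True
print(needs_stats({"rounding"}, {"Skip_Stats"}))  # False
print(needs_stats(set(), set()))                  # False
```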
Alation custom fields integration: In addition to Alation standard tags, Immuta’s Alation integration now also supports pulling information from Alation custom fields as tags into Immuta.
The People section has a more intuitive experience with notable changes. Users and groups have been split into two separate tabs. The first tab provides an overview of a user or group, while the second tab contains detailed settings, such as permissions, attributes, and associated groups.
Another important enhancement in the People section is the new Attributes page, which centralizes all information about an attribute, including the users or groups it applies to.
The Data Sources section has been completely redesigned to offer a more efficient search and filter experience. Users can preview details of a data source through expandable rows on the list and access bulk actions for data sources more easily.
The Policy section includes an updated list with improved search and filter capabilities. Additionally, a policy detail page allows users to view comprehensive policy information, take action, edit policies, and see a list of targeted data sources.
These enhancements are being gradually rolled out to customers and may not be available in your account yet.
Disable external usernames with invalid Databricks identities: Databricks user identities for Immuta users will now be automatically marked as invalid when the user is not found during policy application. This will prevent them from being affected by Databricks policy until manually marked as valid again in their Immuta user profile. This change drastically improves syncing performance of subscription policies for Databricks Unity Catalog integrations when Immuta users are not present in the Databricks environment.
The "Refresh Views/Policies" button has been renamed to "Refresh Views/Data Policies" for improved accuracy.
Support access using AWS IAM role in SaaS for Amazon S3 integration: Users can now leverage an AWS IAM role for Immuta to establish a secure, cross-account connection to S3 Access Grants. This enhancement allows for seamless orchestration of access grants, providing a more secure and compliant experience for our users.
Write policies for Starburst: In addition to read operations, Immuta's Starburst integration now supports fine-grained access permissions for write operations. In its default setting, write operations control the authorization of SQL operations that perform data modification. Administrators can include more operations (such as ALTER and DROP tables) to be authorized as write operations through advanced configuration. Contact your customer success representative to learn more.
Custom URL redirects: Custom URL redirects create a second fully-qualified domain name for SaaS tenants that redirects to the primary domain name. This gives users a domain name that they can remember and that has little impact on their integrations. Contact your customer success representative if you are interested in setting up a custom URL redirect.
Sensitive Data Discovery (SDD) tag context: Introducing language to specify when tags were placed by legacy SDD; the tag side sheet now mentions that legacy SDD is deprecated and targeted for removal in March 2024.
Native SDD now leaves legacy SDD tags in place even when they are not found on a subsequent re-scan of a data source. Customers who begin using native SDD can now see results with no impact to prior legacy SDD tags.
The new user profile page separates information better and makes it easier to understand.
Keyboard shortcuts are now available for some common functions. Keep an eye out for in-app guidance that helps with how to use them.
The account menu is wider for better readability and now has an option to toggle between light and dark mode. (By default, Immuta still uses your browser settings.)
Browser tabs tell you which page you’re on, instead of all being labeled “Immuta Console.” A new, adaptive favicon allows you to still tell that it’s Immuta at a glance, whether you’re in light or dark mode.
Write policies: Write policies is a new capability to manage user write access authorizations via policy (enabling users to modify data in data source objects). This release supports the new functionality for Snowflake and Databricks Unity Catalog integrations. Contact your customer success manager for more details.
Redshift Okta authentication
November 2024
December 2024
Data inventory dashboard
October 2024
November 2024
CREATE_FILTER permission
August 2024
December 2024
Unmask requests
August 2024
December 2024
Policy exemptions
August 2024
October 2024
Databricks Spark with Unity Catalog support integration
January 2024
March 2024
dbt integration
January 2024
March 2024
Data source expiration dates
January 2024
May 2024
{
  "statusCode": 403,
  "error": "Forbidden",
  "message": "You must have the \"CREATE_DATA_SOURCE\" permission."
}

{
  "statusCode": 404,
  "error": "Not Found",
  "message": "Tags with the names [`country`, `sensitive`] do not exist."
}

{
  "statusCode": 404,
  "error": "Not Found",
  "message": "Data Source {datasourceId} does not have a column named '{columnName}' (case-sensitive)."
}

{
  "statusCode": 404,
  "error": "Not Found",
  "message": "Could not find column information for Data Source {datasourceId}."
}
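Because these error bodies are structured, a caller can branch on `statusCode` rather than parsing message text. A minimal sketch, where the category names are illustrative and not part of the API contract:

```python
import json

def classify_api_error(response_text: str) -> str:
    """Map an Immuta-style error body to a coarse category a caller can act on.
    The category strings here are illustrative, not part of the API contract."""
    err = json.loads(response_text)
    code = err.get("statusCode")
    if code == 403:
        return "missing-permission"  # e.g. CREATE_DATA_SOURCE not granted
    if code == 404:
        return "not-found"           # tag, column, or data source absent
    return "unexpected"

forbidden = '{"statusCode": 403, "error": "Forbidden", "message": "..."}'
print(classify_api_error(forbidden))  # missing-permission
```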