Immuta CLI v1.4.0 was released July 10, 2024. It adds support for authenticating an audit export configuration to S3 using AWS IAM roles.
The following CLI audit commands have been removed; use the listed replacements instead (a short example follows the list):
immuta audit exportConfig create:s3 (replaced by immuta audit exportConfig create:s3:accessKey)
immuta audit exportConfig update:s3 (replaced by immuta audit exportConfig update:s3:accessKey)
immuta audit exportConfig create:adls (replaced by immuta audit exportConfig create:adls:sasToken)
immuta audit exportConfig update:adls (replaced by immuta audit exportConfig update:adls:sasToken)
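If you have scripts that call the removed commands, point them at the new credential-specific variants. This is a minimal sketch: the command names come from the list above, while any additional flags are left to the CLI's built-in help, assuming it follows the usual --help convention.

# Previously: immuta audit exportConfig create:s3
# Now, create the S3 export configuration with access-key authentication;
# run with --help first to see the required options for your environment
immuta audit exportConfig create:s3:accessKey --help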
Linux x86_64 (amd64):
Linux ARMv8 (arm64):
Darwin x86_64 (amd64):
Darwin ARMv8 (arm64):
Download and add the binary to a directory in your system's $PATH as immuta.exe:
The SHA 256 checksum is available to verify the file at https://immuta-platform-artifacts.s3.amazonaws.com/cli/v1.4.0/immuta_cli_SHA256SUMS.
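To verify a downloaded binary against the published checksums, a typical workflow looks like the following sketch. The binary file name is whatever you saved the download as; --ignore-missing lets sha256sum skip checksum entries for platforms you did not download (on macOS you can use shasum -a 256 in place of sha256sum).

# Fetch the published checksum file for v1.4.0
curl -fsSL -O https://immuta-platform-artifacts.s3.amazonaws.com/cli/v1.4.0/immuta_cli_SHA256SUMS
# Verify the binary you downloaded into the current directory
sha256sum --ignore-missing -c immuta_cli_SHA256SUMS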
Immuta CLI v1.3.0 was released April 4, 2024. It allows you to export universal audit model (UAM) events to ADLS Gen2.
Linux x86_64 (amd64):
Linux ARMv8 (arm64):
Darwin x86_64 (amd64):
Darwin ARMv8 (arm64):
Download and add the binary to a directory in your system's $PATH as immuta.exe:
The SHA 256 checksum is available to verify the file at https://immuta-platform-artifacts.s3.amazonaws.com/cli/v1.3.0/immuta_cli_SHA256SUMS.
Immuta CLI v1.2.1 was released November 20, 2023. It fixes a bug with the integrations API.
Linux x86_64 (amd64):
Linux ARMv8 (arm64):
Darwin x86_64 (amd64):
Darwin ARMv8 (arm64):
Download and add the binary to a directory in your system's $PATH as immuta.exe:
The SHA 256 checksum is available to verify the file at https://immuta-platform-artifacts.s3.amazonaws.com/cli/v1.2.1/immuta_cli_SHA256SUMS.
Immuta CLI v1.2.0 was released October 2, 2023. It fixes a bug with the audit export.
Linux x86_64 (amd64):
Linux ARMv8 (arm64):
Darwin x86_64 (amd64):
Darwin ARMv8 (arm64):
Download and add the binary to a directory in your system's $PATH as immuta.exe:
The SHA 256 checksum is available to verify the file at https://immuta-platform-artifacts.s3.amazonaws.com/cli/v1.2.0/immuta_cli_SHA256SUMS.
Immuta CLI v1.2.0-1 was released August 19, 2022. It allows you to export universal audit model (UAM) events to S3.
Linux x86_64 (amd64):
Linux ARMv8 (arm64):
Darwin x86_64 (amd64):
Darwin ARMv8 (arm64):
Download and add the binary to a directory in your system's $PATH as immuta.exe:
The SHA 256 checksum is available to verify the file at https://immuta-platform-artifacts.s3.amazonaws.com/cli/v1.2.0-1/immuta_cli_SHA256SUMS.
Immuta CLI v1.1.0 was released August 19, 2022. It allows you to overwrite existing files in output directory targets when you specify the --force flag to clone your Immuta tenant or policies. If the --force flag is omitted, you will receive an error when the output directory exists and is not empty.
Linux x86_64 (amd64):
Linux ARMv8 (arm64):
Darwin x86_64 (amd64):
Darwin ARMv8 (arm64):
Download and add the binary to a directory in your system's $PATH as immuta.exe:
The SHA 256 checksum is available to verify the file at https://immuta-platform-artifacts.s3.amazonaws.com/cli/v1.1.0/immuta_cli_SHA256SUMS.
The Immuta CLI v1.0.0 was released April 26, 2022. It includes new commands that allow users to manage sensitive data discovery.
Linux x86_64 (amd64):
Linux ARMv8 (arm64):
Darwin x86_64 (amd64):
Darwin ARMv8 (arm64):
Download and add the binary to a directory in your system's $PATH as immuta.exe:
The SHA 256 checksum is available to verify the file at https://immuta-platform-artifacts.s3.amazonaws.com/cli/v1.0.0/immuta_cli_SHA256SUMS.
Run sensitive data discovery (SDD): The immuta sdd run command allows you to run SDD using the CLI instead of the API or UI. You can specify data sources on which to run SDD, or you can run SDD on all data sources (see the sketch after this list).
Manage patterns: The immuta sdd classifier command and its subcommands allow you to create, search for, update, and delete sensitive data discovery identifiers.
Manage identification frameworks: The immuta sdd template command and its subcommands allow you to create, search for, update, and delete sensitive data discovery frameworks. Global frameworks must be managed through the Immuta UI.
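As a quick illustration of the new SDD commands, the sketch below runs SDD from the CLI. Whether a bare immuta sdd run targets all data sources, and the availability of --help, are assumptions to confirm against the CLI's built-in help.

# Run sensitive data discovery from the CLI
immuta sdd run
# Explore the subcommands for managing identifiers and identification frameworks
immuta sdd classifier --help
immuta sdd template --help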
The --output or -o flag allows you to specify yaml or json for the output.
The --template option for the immuta api command has been changed to --outputTemplate. Additionally, this option is now available for all commands so that users can customize the output.
version is now a flag instead of a command (see the example after this list).
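For example, the new conventions look like this. This is a sketch: --version as the exact flag name, and combining --output with a particular command, are assumptions to confirm against the CLI help.

# Print the CLI version (now a flag rather than a command)
immuta --version
# Request JSON output from a command; -o json is equivalent
immuta sdd run --output json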
Running immuta policy clone when there were no policies available to clone did not indicate that a target directory was not created or updated. The CLI now prints the message No global policies available to clone.
The verbose option is deprecated in favor of the --output option.
The version command is deprecated.
This section includes deployment notes, features in preview, and a support matrix.
This page includes a list of new features, enhancements, and bug fixes.
This page includes an overview of the data platforms, identity managers, external catalogs, and web browsers that Immuta supports.
This page includes a list of new features, enhancements, and bug fixes for the Immuta CLI.
This section includes an overview of Immuta's feature preview program and a list of features currently in preview.
Immuta supports the following databases.
Amazon Redshift
Amazon Redshift Serverless
Amazon Redshift Spectrum
Amazon S3
Azure Synapse Analytics (Immuta only supports Dedicated SQL pools, not Serverless SQL pools.)
Google BigQuery
Databricks Unity Catalog:
Databricks clusters (See the Databricks Installation guide for supported Databricks runtimes.)
Databricks SQL
Snowflake
Starburst
Trino
Immuta supports these external catalogs. Click a link below for configuration instructions.
For details about the IAM protocols and providers in this section, see the support matrix in the Identity Managers Overview.
Immuta fully supports these IAM protocols:
AD/LDAP
SAML 2.0
OpenID Connect 1.0
These are common providers that support the protocols listed above. However, this list may not be all-inclusive, and if a provider stops supporting one of those protocols, Immuta may not fully support that provider.
Active Directory
ADFS
Amazon Cognito
Centrify
JumpCloud
Keycloak
Microsoft Entra ID
Okta
OneLogin
OpenLDAP and other LDAP servers
Oracle Access Manager
Ping Identity
Immuta supports the following web browsers.
Firefox
Google Chrome
Microsoft Edge
The following features are deprecated. They are still in the product but will be removed at their tentative end of life (EOL) date.
Redshift Okta authentication: deprecated November 2024; EOL December 2024
Quick create tab (no replacement): deprecated September 2024; EOL December 2024
External policy handler (no replacement): deprecated August 2024; EOL December 2024
CREATE_FILTER permission (no replacement): deprecated August 2024; EOL December 2024
Unmask requests (no replacement): deprecated August 2024; EOL December 2024
Derived data sources and the CREATE_DATA_SOURCE_IN_PROJECT permission (no replacement): deprecated March 2024; EOL November 2024
Legacy sensitive data discovery: deprecated September 2023
Conditional tags: deprecated November 2024; EOL December 2024
Legacy /audit API: deprecated September 2023; EOL November 2024
Legacy audit UI: deprecated September 2023; EOL February 2024
Legacy audit query text: deprecated September 2023; EOL August 2024
The following features have been fully removed from the product.
Legacy Immuta DSF: deprecated April 2024; removed December 2024
Data Security Framework and compliance frameworks: deprecated November 2024; removed December 2024
Data inventory dashboard (no replacement): deprecated October 2024; removed November 2024
Policy exemptions: deprecated August 2024; removed October 2024
Managing the default subscription policy: deprecated March 2024; removed September 2024
External masking (no replacement): deprecated January 2023; removed September 2024
Data source expiration dates (no replacement): deprecated January 2024; removed May 2024
Interpolated WHERE clause (replaced by other custom WHERE clause functions): deprecated April 2023; removed May 2024
Legacy Starburst (Trino) integration: deprecated June 2023; removed May 2024
dbt integration (no replacement): deprecated January 2024; removed March 2024
Databricks Spark with Unity Catalog support: deprecated January 2024; removed March 2024
Non-Unity Databricks SQL view-based integration: deprecated September 2023; removed March 2024
Discussions tab (no replacement): deprecated September 2023; removed March 2024
HIPAA expert determination and templated policies (HIPAA and CCPA) (no replacement): deprecated September 2023; removed March 2024
Legacy Snowflake view-based integration (Snowflake integration without Snowflake Governance features): deprecated September 2023; removed March 2024
Query editor: deprecated September 2023; removed October 2023
Want timely alerts for Immuta deployments?
Subscribe to the Immuta Status page to receive deployment and downtime notifications.
Marketplace private preview now available: Marketplace brings data products and data people together by exposing request and approval workflows, all backed by the existing Immuta policy engine. Integrate Marketplace with your existing catalog, or leverage the Immuta Marketplace app alone.
It allows your entire organization to provision data as one through workflows:
Publish data products. Make curated data products findable on a single, central platform.
Establish teams and authority. Define logical domains for local control and visibility. Enable business users to manage metadata and access approvals separately from data product owners.
Search and access data assets. Make it easy for users to search and filter available data assets, and establish a process to easily request access.
Provision data access. Streamline data access approvals and automatically provision access based on data use agreements.
Watch the Immuta Marketplace webinar for a demo.
Disable randomized response by default and allow a customer to opt in: When a randomized response policy is applied to a data source, the columns targeted by the policy are queried under a fingerprinting process that contains the predicates used for the randomization. The results of this query, which may contain sensitive data, are stored in the Immuta internal database. Because this process may violate an organization's data localization regulations, you must reach out to your Immuta representative to enable this masking policy type for your account. If you have existing randomized response policies, those policies will not be affected by this change.
Global sensitive data discovery (SDD) template setting changes: If you have SDD enabled, there are template setting changes and a change in how Immuta runs SDD automatically on new data sources:
The default value for Global SDD Template Name is blank.
If you don't change the default value and leave Global SDD Template Name blank, Immuta won't run any patterns on new data sources.
If you change the default value and want a different identification framework to run, you need to enter the name of that identification framework (instead of the displayName). See the API documentation on how to retrieve the name of an identification framework, or the sketch below.
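One way to look up the internal name of an identification framework is through the CLI commands described in the v1.0.0 release notes above. This is a sketch; the exact subcommand (search) and the output fields are assumptions to confirm against the CLI help and API documentation.

# List identification frameworks as JSON and read the name field (not displayName)
immuta sdd template search --output json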
Compatibility with Collibra Edge: Immuta’s external catalog integration now supports auto-linking data sources with Collibra Edge. The auto-linking process performs name matching of data assets following the Edge naming convention with their corresponding data sources in Immuta.
Deprecated features remain in the product with minimal support until their end of life date.
Redshift Okta authentication: deprecated November 2024; end of life December 2024
Fix for accurately representing disabled users’ subscription status for data sources and projects in governance reports: Addressed an issue where users with status disabled were misrepresented in governance reports as being subscribed to data sources or projects when in fact they weren’t. (Disabled users always have all their data source and project subscriptions revoked until they get re-enabled.)
The following governance reports have been fixed:
Data source:
All data sources and the users and groups subscribed to them
What users and groups are subscribed to a particular data source
What users and groups have ever subscribed to a particular data source
Projects: What users and groups are part of a particular project
Purpose: What users are members of projects with a particular purpose
User:
All users and the data sources they are subscribed to
What data sources is a particular user subscribed to
What projects is a particular user currently a member of
Deprecated features remain in the product with minimal support until their end of life date.
Conditional tags applied by sensitive data discovery are deprecated and will be removed from the product in December 2024. If you rely on conditional tags, consult your Immuta representative for instructions on using the classification framework API to apply these tags instead of sensitive data discovery.
Classification UI and Frameworks API are generally available: The frameworks API allows users to create rules to dynamically tag their data with sensitivity tags to drive dashboards and policies. These custom rules and frameworks can then be viewed in the UI and managed through the API.
New built-in pattern improvements: Additional improvements have been made to the improved pack of built-in identifiers:
CREDIT_CARD_NUMBER: Previously only detected card numbers that can be issued currently. Now, it can also detect credit card numbers that were formerly issued.
PERSON_NAME: Enhanced the pattern to detect a wider variety of names and reduce the number of false positives.
DATE: Previously only worked with strings. The pattern is enhanced to now detect and apply when the data type is date.
TIME: Previously only worked with strings. The pattern is enhanced to now detect and apply when the data type is time.
Deprecated features remain in the product with minimal support until their end of life date.
The following built-in Classification Frameworks are now deprecated and will reach end of life in December 2024:
California Consumer Privacy Act
Data Security Framework
General Data Protection Regulation
Health Insurance Portability and Accountability Act
Immuta Data Security Framework
Payment Card Industry Data Security Standard
Risk Assessment Framework
Instead, use the Classification Framework API and UI to create custom frameworks that replicate the functionality of any built-in framework and extend them to suit your use cases. Immuta's Product Engineering team can assist you with creating your custom framework.
Azure Private Link for Databricks and Snowflake is generally available: Azure Private Link provides private connectivity from the Immuta SaaS platform (hosted on AWS) to customer-managed Snowflake and Databricks accounts on Azure. It ensures that all traffic to the configured endpoints only traverses private networks over the Immuta private cloud exchange.
Integration error updates: This feature includes banner notifications for all users when an integration is experiencing an error. This update calls attention to critical integration errors that can have large impacts to end users to improve awareness and streamline the process of pinpointing and driving errors to resolution.
Additionally, Immuta has simplified how the integration statuses are reported within the app settings integrations page for enhanced clarity.
Standard integration with Microsoft Purview enterprise data catalog for tag enrichment in Immuta: This deployment includes a new standard connector (out-of-the-box) for tag enrichment from a Microsoft Purview enterprise data catalog to Immuta.
The Microsoft Purview catalog integration with Immuta currently supports tag ingestion of Classifications and Managed attributes as tags for Databricks Unity Catalog, Snowflake, and Azure Synapse Analytics data sources and their associated columns. Additionally, data source and column descriptions from the connected Microsoft Purview catalog will also be pulled into Immuta.
This connector simplifies tag enrichment in Immuta for customers whose tag information resides in Microsoft Purview enterprise data catalog. Previously, customers leveraging Microsoft Purview enterprise data catalog had to build an integration themselves using Immuta’s custom REST catalog interface.
Databricks Unity Catalog additional workspace connections: This feature allows users to configure additional workspace connections within their Databricks integrations and bind these additional workspaces to specific catalogs. This enables customers to use Databricks’ workspace-catalog binding feature with their Immuta integration. Users can dictate which workspaces are authorized to access specific catalogs, allowing them to better control catalog access and isolate compute costs if desired.
Private networking across global segments: This feature allows connections to data sources over private networking that reside in a different global segment than their Immuta tenant. For example, if your Immuta tenant is in North America, you can now connect to data sources in APAC and the EU over private networking.
Databricks integration support defaulted to Unity Catalog: Eliminated the manual step of updating a global account setting prior to configuring a Unity Catalog integration. For Databricks integrations, the default support now assumes a Unity Catalog integration.
Customers using Databricks Spark must now update the default account setting before configuring their Databricks integrations.
Deprecated items remain in the product with minimal support until their end of life date.
Data inventory dashboard: deprecated October 2024; end of life November 2024
Improvements to sensitive data patterns used to find and tag data: These improved patterns have higher accuracy out of the box, which reduces the amount of overtagging and missed tags. The result is an easier experience and reduced time to value generating actionable metadata.
Microsoft Purview enterprise data catalog support: New standard connector for tag enrichment from Microsoft Purview enterprise data catalog to Immuta. In addition to Purview tags, the following Purview objects will be pulled in and applied to registered data sources as either column or data source tags in Immuta:
System classifications
Custom classifications
Managed attributes
SDD governance report shows whether tags are used in policy: All governance reports based on sensitive data discovery now have a report column showing whether the tag is used as part of a policy in Immuta.
Authentication change to accommodate Snowflake moving away from password-only authentication: This deployment includes updates to the integration setup script to accommodate Snowflake beginning to transition away from password-only authentication for new accounts. When configuring an integration manually for a new Snowflake account, Immuta provides an updated manual setup script that permits password-only authentication by differentiating it as a legacy service with an additional parameter. Existing integrations will continue to function as-is.
Fix for Databricks audit workspace IDs: Previously, users filtering their audit by workspaces had to enter a 16-digit workspace ID. This restriction has been removed.
New domain-level permission - Audit Activity: This permission enables customers to delegate activity reviews to individuals for a set of audit events related to data sources within a domain, helping organizations open up access to query information to more users across the enterprise while staying compliant. For customers who use domains to define data products, the Audit Activity domain permission allows data product owners to review query activities of the data sources they manage using rich visualizations and dashboards.
SDD governance report shows whether tags are used in a policy: Under the governance reports menu, all reports based on sensitive data discovery now have a report column showing whether the tag is used as part of a policy in Immuta.
Rotating the shared secret for Starburst (Trino): Users can rotate the shared secret used for API authentication between Starburst (Trino) and Immuta, which provides improved security management, compliance with organizational policies, and the following benefits:
Enhanced security: Regularly update your API credentials to mitigate potential security risks.
Compliance support: Meet security requirements that mandate periodic rotation of API keys.
Flexibility: Change the shared secret at any time after the initial integration setup.
Existing integrations will continue to function normally. Downtime is required when rotating the shared secret, so follow the Starburst (Trino) integration API documentation to ensure continuous operation of your integration, and establish a regular schedule for rotating your shared secret as part of your security best practices.
Deprecated items remain in the product with minimal support until their end of life date.
CREATE_FILTER permission: deprecated August 2024; end of life December 2024
Unmask requests: deprecated August 2024; end of life December 2024
Schema monitoring for Snowflake and Databricks Unity Catalog supports detecting and automatically reapplying policies on data sources that have changed their object type (for example, a VIEW that was changed into a TABLE or vice versa).
SDD supports Databricks Unity Catalog OAuth M2M: Sensitive data discovery now works with Databricks data sources that are registered in Immuta using OAuth Machine-to-Machine (M2M) authentication.
Only users with the CREATE_DATA_SOURCE permission are authorized to use the POST /api/v2/data endpoint; users without that permission will be blocked and receive a 403 status.
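For example, a data registration call from a user without the permission now fails up front. The hostname, authorization header, and payload file below are placeholders rather than values from this note.

# Returns a 403 status when the calling user does not hold CREATE_DATA_SOURCE
curl -s -o /dev/null -w "%{http_code}\n" \
  -X POST "https://<your-immuta-tenant>/api/v2/data" \
  -H "Authorization: <your-api-token>" \
  -H "Content-Type: application/json" \
  -d @datasource-payload.json   # payload omitted here; see the V2 data API documentation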
Decreased the number of validation tasks for data owners from new data sources and columns found by schema monitoring: When schema monitoring is enabled, Immuta applies a New tag whenever a new data source is added or its columns change. This allows governors to create policies that automatically apply to all new data sources and columns (such as masking new data by default).
Previously, data owners were always asked to validate data source requests (which in turn removes the New tag) related to data source and column changes, even if no policy targeting the New tag was present.
Now, data owners are only asked to validate data source requests if a policy targeting the New tag is present; otherwise, the validation request is skipped.
As a result, in the absence of a relevant policy, data owners have fewer data source requests to validate, which saves them time and increases efficiency.
Query text has been removed from all legacy audit records: Immuta no longer stores query text with legacy audit records, as its support has reached end of life. Instead, use UAM events, which by default contain query text.
Snowflake External OAuth: The form field Client Secret stopped being displayed in the UI for Snowflake data source registration, which led customers to believe that Snowflake External OAuth using client secret was no longer a supported authentication mechanism. This fix reintroduced the client secret field in the UI.
Customers who had already registered data sources with Snowflake External OAuth previously via the UI, API, or CLI while the bug existed were not affected, since the issue only affected the UI but not the backend or programmatic interfaces.
Deprecated items remain in the product with minimal support until their end of life date.
Policy exemptions: deprecated August 2024; end of life October 2024
Masked joins for Snowflake and Databricks Unity Catalog integrations are now generally available. This feature allows masked columns to be joined across data sources that belong to the same project, giving users additional capability for data analysis within a project while still securing sensitive data. Sensitive columns can be masked while still allowing users to join on them within a project, helping organizations strike the right balance between access and security.
Simpler UX for sensitive data discovery: Customizing sensitive data discovery is now easier and quicker with a single entry point for configuration. Instead of navigating to multiple pages in the Immuta application, use a single form to create an identifier for sensitive data and add tags and regex patterns.
Released Immuta CLI v1.4.0: A new version of the CLI was released which includes new support for AWS IAM role authentication for audit export to S3 and some CLI breaking changes. See the CLI release note for more details.
Allow masked joins for Snowflake and Databricks Unity Catalog integrations: This feature allows masked columns to be joined across data sources that belong to the same project, giving users additional capability for data analysis within a project while still securing sensitive data. Sensitive columns can be masked while still allowing users to join on them within a project, helping organizations strike the right balance between access and security.
Removing legacy audit records: Starting July 23rd, Immuta will begin enforcing the 90-day retention period for legacy audit records for all tenants in SaaS. This will have no impact on governance reports. If you need to export legacy audit records older than 90 days, see the View audit logs guide for details on the legacy (deprecated) audit API. Universal audit model (UAM) records can be exported on a configured schedule to S3 or ADLS; see the Export audit logs to S3 or Export audit logs to ADLS guides.
Group membership count contains information on active and disabled users: When looking at the number of users contained in a group, you can easily distinguish between active and disabled users. This enhancement allows user admins to verify accurate user-to-group membership between their external identity access manager and Immuta faster.
Support role-based access for S3 audit export: Audit export supports AWS IAM authentication. Customers can use AWS assumed role-based authentication or access key authentication to secure access to S3 to export audit events.
Databricks Unity Catalog integration tag ingestion: Customers who have tags defined and applied in Databricks Unity Catalog can seamlessly bring those tags into Immuta to leverage them for attribute based access control (ABAC), data classification, and data monitoring.
This feature is currently in preview at the design partner level. To use this feature in preview, you must have no more than 2,500 Unity Catalog data sources registered in Immuta. See the design partner description for expectations and details, and then reach out to your Immuta representative to enable this feature.
Comply with column length and precision in a Snowflake masking policy: Snowflake will soon require the output of masked columns to comply with the length, scale, and precision of the underlying Snowflake columns. To comply with this Snowflake behavior change, Immuta truncates the output values in masked columns to match the Snowflake column requirements so that users' queries continue to complete successfully.
Trino universal audit model available with Trino 435 using the Immuta Trino plugin 435.1: For customers that are using EMR 7.1 with Trino 435.1, and have audit requirements, the Immuta Trino 435.1 plugin now supports audit in the universal audit model. The Immuta Trino 435.1 plugin audit information is on par with the Immuta Trino 443 plugin. The Immuta Trino 435.1 plugin is supported on SaaS and 2024.2 and newer.
Adding a new external catalog integration automatically backfills tags for pre-existing data sources: Prior to this change, users had to manually link pre-existing data sources to the relevant external data catalog entry after a new external data catalog integration was set up, and only newly registered data sources were linked automatically. Now, Immuta triggers an auto-linking process for all unlinked data sources when a new external data catalog integration setup is saved.
This change increases the level of automation, reduces cognitive and manual workload for data governors, and aligns external data catalog integration behavior with end user expectations.
Removing the overview tab on identification frameworks: Under Discover, each identification framework now has two tabs: rules and data sources. Prior to this change, there was an overview tab that linked to the other two tabs. When clicking into an identification framework, you now land directly on the rules tab.
OAuth M2M support for Databricks Unity Catalog: We are excited to announce that Immuta now supports establishing connections to Databricks using OAuth Machine-to-Machine (M2M) authentication. This feature enhances security and simplifies the process of integrating Databricks with Immuta, leveraging the robust capabilities of OAuth M2M authentication.
New product changelog: The new Immuta product changelog will announce the latest product updates, features, improvements, and fixes.
Immuta users can open the in-app changelog by clicking “What’s New?” in the left-hand navigation. It is also available at changelog.immuta.com.
Schema monitoring enhancement for Databricks Unity Catalog: Schema monitoring for Databricks Unity Catalog now supports detecting and automatically reapplying policies on destructively recreated tables (from CREATE OR REPLACE statements), even if the table schema itself wasn’t changed.
The Immuta Starburst (Trino) integration supports additional query audit metadata enrichment including the object accessed during the query event: Immuta query audit events for Starburst (Trino) will include the following information.
Object accessed: The tables and columns that were queried
Tags: The Immuta table and column tags, including data catalog tags synchronized to Immuta, for queried tables and columns
Sensitivity classification: The columns' sensitivity in context of other queried columns if an Immuta classification framework is enabled at the time of audit event processing
Query duration: The amount of time it took to execute the query in seconds
Database name: The name of the Starburst (Trino) catalog
Governance permission required for Discover: Starting today, the Discover UI for managing automated data identification and classification is only accessible to users with the GOVERNANCE permission in the Immuta application. Previously, Immuta users with permission to create data sources could also access the settings in the Discover UI.
Fix for external tag ingestion related to a Collibra Output Module API behavioral change: Incorrect filters were being passed to Collibra’s Output Module API when fetching column tag information, resulting in a failed API request while linking or refreshing Collibra tags on a data source. Collibra’s Output Module API began performing additional request validation on approximately May 6, 2024, which surfaced the problem. This fix ensures that the Collibra tag ingestion integration in Immuta reflects these changes. Without it, there was a residual risk that some incorrect column tags would be ingested.
Data owners can now see audit events for the data sources that they own without having the AUDIT Immuta permission: Data owners can see query events for their data sources on the audit page, data overview page, data source pages, and the data source activity tab. They can also inspect Immuta audit events on the audit page and activity tab for the data sources they own. This enhancement gives data owners full visibility of activity in the data sources they own.
Snowflake memoizable functions update: Immuta policies leverage Snowflake’s memoizable UDFs. When an end user references a policy-protected column in a query, the cached results are available from the memoizable function, resulting in faster, more performant queries.
Running table statistics only if required (instead of by default): Table statistics consist of row counts, identification of high cardinality columns, and a sample data fingerprint. Immuta needs to collect this information in order to support the following data access policy types:
Column masking with randomized response
Column masking with format preserving masking
Column masking with k-anonymization
Column masking with rounding
Row minimization
Prior to this change, table statistics were collected for every newly onboarded object by default, unless the object had a Skip_Stats tag applied. After this change, table statistics are only collected on a data object once they are required (that is, when one of the policy types listed above is applied). Even then, the Skip_Stats tag continues to be respected. This change results in performance improvements, as the number of standard operations during data object onboarding is significantly reduced.
Alation custom fields integration: In addition to Alation standard tags, Immuta’s Alation integration now also supports pulling information from Alation custom fields as tags into Immuta.
Data policies on Snowflake Iceberg tables: Users can now apply fine-grained access controls to Snowflake Iceberg tables, making support for Immuta data policies and subscription policies consistent across standard Snowflake table types.
POST /project endpoint: Users will receive a 422 status error instead of a 400 status error when trying to create a new project with a name that would result in a database conflict on the project's unique name.
POST /api/v2/data endpoint response: creating will not be returned in the response when using this endpoint for the first time; the response will just include bulkId and connectionString. However, when updating a data source using POST /api/v2/data, the response will include creating: [] (with no data source names inside the array).
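The difference between the two responses can be seen by registering and then updating the same payload. The hostname, header, and payload file are placeholders; the response shapes are limited to the fields named above.

# First-time registration: response includes only bulkId and connectionString
# Subsequent update of an existing source: response also includes "creating": []
curl -X POST "https://<your-immuta-tenant>/api/v2/data" \
  -H "Authorization: <your-api-token>" \
  -H "Content-Type: application/json" \
  -d @datasource-payload.json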
Domains in general availability: Domains are containers of data sources that allow you to assign data ownership and access management to specific business units, subject matter experts, or teams at the nexus of cross-functional groups. Domains support organizations building a data mesh architecture and implementing a federated governance approach to data security, which can accelerate local, domain-specific decision making processes and reduce risk for the business.
This feature is being gradually rolled out to customers and may not be available in your account yet.
Improved user experience for managing users, data sources and policies: This deployment includes significant user experience updates focused on enhancing Immuta's key entities: users, data sources, and policies.
The People section has a more intuitive experience with notable changes. Users and groups have been split into two separate tabs. The first tab provides an overview of a user or group, while the second tab contains detailed settings, such as permissions, attributes, and associated groups.
Another important enhancement in the People section is the new Attributes page, which centralizes all information about an attribute, including the users or groups it applies to.
The Data Sources section has been completely redesigned to offer a more efficient search and filter experience. Users can preview details of a data source through expandable rows on the list and access bulk actions for data sources more easily.
The Policy section includes an updated list with improved search and filter capabilities. Additionally, a policy detail page allows users to view comprehensive policy information, take action, edit policies, and see a list of targeted data sources.
These enhancements are being gradually rolled out to customers and may not be available in your account yet.
Disable external usernames with invalid Databricks identities: Databricks user identities for Immuta users will now be automatically marked as invalid when the user is not found during policy application. This will prevent them from being affected by Databricks policy until manually marked as valid again in their Immuta user profile. This change drastically improves syncing performance of subscription policies for Databricks Unity Catalog integrations when Immuta users are not present in the Databricks environment.
Project-scoped purpose exceptions for Snowflake and Databricks Unity Catalog integrations: Row and column-level policies can now account for purposes and projects for additional security. With this policy configuration, a user will only be able to view the data the policy applied to if they are acting under a certain purpose and that data is within their current project. Purpose exception policies ensure data is only being used for the intended purposes. This feature is in private preview.
The POST /tag/{modelType}/{modelId} endpoint (which adds tags to models that can be tagged, such as data sources and projects) can only apply tags that already exist to these models. This update presents one breaking change: a 404 status will now be returned with the tag(s) that were not valid instead of a 200 status, and no tags will be processed if any invalid tags are found.
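As an illustration, a request that includes a tag not defined in Immuta is now rejected outright. The model type, ID, header, and request body shape below are assumptions for illustration only.

# Returns 404 listing the invalid tag(s); no tags from the request are applied
curl -X POST "https://<your-immuta-tenant>/tag/datasource/<datasource-id>" \
  -H "Authorization: <your-api-token>" \
  -H "Content-Type: application/json" \
  -d '{"tags": [{"name": "Discovered.PII"}, {"name": "NoSuchTag"}]}'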
Write policies for Amazon S3: Besides READ operations, Immuta's Amazon S3 integration now also supports fine-grained access permissions for READWRITE operations. While Immuta read policies control who can consume objects from Amazon S3 storage locations, write policies allow control of who can add and delete objects. Contact your Immuta representative to get write policies for Amazon S3 enabled in your Immuta tenant.
Disable k-anonymization by default and allow a customer to opt in: When a k-anonymization policy is applied to a data source, the columns targeted by the policy are queried under a fingerprinting process that generates rules that enforce the k-anonymity. The results of this query, which may contain sensitive data, are temporarily held in memory.
Because this process may violate an organization's data localization regulations, you must reach out to your Immuta representative to enable this masking policy type for your account. If you have existing k-anonymization policies, those policies will not be affected by this change.
Updated classification frameworks: Customers using the public preview classification frameworks feature now have access to the Data Security Framework (DSF) and Risk Assessment Framework (RAF). DSF extends sensitive data discovery tags to apply descriptive category tags to your data; RAF extends the DSF to apply sensitivity tags to your data, such as Medium, High, and Very High.
Together, these frameworks replace the less comprehensive legacy Immuta Data Security Framework, which has been deprecated and will be removed from the product.
Support protecting more than 10,000 objects with Unity Catalog row- and column-level policies: Users can now mask more than 10,000 columns or apply row filters to more than 10,000 tables, removing the previous limitation in the Unity Catalog integration. This enhancement provides greater flexibility and scalability for data masking operations, allowing users to effectively secure sensitive data across larger datasets.
Updates to button labels: Two buttons have been renamed to align their labels more closely with their functionality.
The "Sync Native Policies" button has been renamed to "Sync Data Policies" to better reflect its function.
The "Refresh Native Views/Policies" button has been renamed to "Refresh Native Views/Data Policies" for improved accuracy.
Support access using AWS IAM role in SaaS for Amazon S3 integration: Users can now leverage an AWS IAM role for Immuta to establish a secure, cross-account connection to S3 Access Grants. This enhancement allows for seamless orchestration of access grants, providing a more secure and compliant experience for our users.
Support exporting audit to Azure ADLS Gen2: Immuta can now export audit logs to Microsoft Azure ADLS Gen2 blob storage in universal audit model (UAM) format. The Immuta audit export payload contains audit records for both configuration activity in Immuta and data access activity from Snowflake, Databricks, and Starburst.
Deprecated items remain in the product with minimal support until their end of life date.
The ability to configure the behavior of the default subscription policy has been deprecated and will reach end of life in September 2024. Once this configuration setting is removed from the app settings page, Immuta will not apply a subscription policy to registered data sources unless an existing global policy applies to them. To set an "Allow individually selected users" subscription policy on all data sources, create a global subscription policy with that condition that applies to all data sources or apply a local subscription policy to individual data sources.
UAM support for Starburst: Immuta's universal audit model now includes query audit events from Starburst Enterprise. These query audit events are included on the new audit page, in the Detect activity views, and in the S3 export payload. This feature is currently supported in Immuta SaaS tenants with Starburst e438 and will be available in the 2024.2 LTS release.
Query duration support for Detect Monitors: Immuta Detect can now notify you via a webhook when a user executed a query that exceeded a configurable duration threshold on supported data platforms. This enhancement allows data platform owners to know when a user issued long-running queries so they can keep data warehouse running costs low. Additionally, knowing which users issued long-running queries is an opportunity to enable data consumers to query the data in an optimal way, direct them to use another optimized data set, and allow the data owner to understand new workload requirements.
Use Detect monitors with query duration thresholds to increase visibility into users and queries that may breach data platform latency SLOs and to control data warehouse costs.
Write policies for Starburst: In addition to read operations, Immuta's Starburst integration now supports fine-grained access permissions for write operations. In its default setting, write operations control the authorization of SQL operations that perform data modification. Administrators can include more operations (such as ALTER and DROP tables) to be authorized as write operations through advanced configuration. Contact your customer success representative to learn more.
The POST /tag/column/{datasource_id}_{column_name} endpoint (which adds tags to columns on data sources) can only tag existing columns on data sources. It does this by checking the dictionary associated with the data source to see whether the desired column exists. This deployment introduces two breaking changes (illustrated in the sketch after this list):
Column does not exist 404: When the column does not exist on the data source, a 404 status is now returned instead of a 200.
Dictionary does not exist 404: When an associated dictionary does not exist on the specified data source (that you have access to add tags to), a 404 status is now returned instead of a 403.
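A sketch of the column-tagging call affected by these changes; the data source ID, column name, header, and body shape are placeholders rather than values from this note.

# Returns 404 if the column (or the data source's dictionary) does not exist
curl -X POST "https://<your-immuta-tenant>/tag/column/<datasource-id>_<column-name>" \
  -H "Authorization: <your-api-token>" \
  -H "Content-Type: application/json" \
  -d '{"tags": [{"name": "Discovered.PII"}]}'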
Audit page: The audit page that uses the legacy audit format has been removed. The legacy version of Immuta audit format continues to be maintained and accessible through the deprecated audit API until its scheduled EOL date.
Color coding for data source health: The health status for each data source on the data source list page now uses color coding to provide a visual for users so they can quickly determine whether they should take action related to the health of data sources. Additionally, unhealthy data sources are ranked at the top of the list on the data source page to ensure that when users log in to Immuta they are aware that unhealthy data sources exist in the system. Prior to this change, users had to click through all data source pages or had to explicitly set up a filter to achieve the same behavior.
“Pending” policy state: A new Pending policy state indicates when background jobs are running to update permissions after a policy is created or changed. Once the Pending state changes to Active, all policy changes have been enforced on affected data sources.
Custom URL redirects: Custom URL redirects create a second fully-qualified domain name for SaaS tenants that redirects to the primary domain name. This gives users a domain name that they can remember and that has little impact on their integrations. Contact your customer success representative if you are interested in setting up a custom URL redirect.
Sensitive Data Discovery (SDD) tag context: Introducing language to specify when tags were placed by legacy SDD; the tag side sheet now mentions that legacy SDD is deprecated and targeted for removal in March 2024.
Native SDD now leaves legacy SDD tags in place when they are not found upon a subsequent re-scan of a data source. Customers who begin using native SDD can now see results with no impact to prior legacy SDD tags. See the Migrate legacy to native SDD page for more details.
Faster query performance with Snowflake memoizable functions: When a policy is applied to a column, Immuta now uses Snowflake memoizable functions to cache the result of common lookups in the policy encapsulated in the called function.
Subsequently, when users query a column with the applied policy, Immuta leverages the cached result, resulting in significant enhancements to query performance.
To enable support for memoizable functions, contact your Immuta customer success representative.
Workspace filtering for Databricks Unity Catalog audit collection: Users can limit Databricks Unity Catalog audit collection by specifying a comma-delimited list of Databricks workspace IDs in the integration's app settings.
For a more responsive Detect activity page experience, Immuta limited the number of auto-suggested filter values (such as data sources, tags, and users) to 100 of the most active values. The total item count for each filter type still reflects the number of events in the dashboard time range.
When pulling personally identifiable information (PII) from Collibra, Immuta now includes and differentiates true and false value assignments as Personally Identifiable Information.true and Personally Identifiable Information.false to more accurately reflect how PII is set in Collibra.
Improved validation when saving sensitive data discovery patterns in the Immuta UI: When adding a regular expression pattern for sensitive data discovery, the Immuta UI validates the format of the regular expression according to the RE2 regular expression standard. Patterns that don’t conform cannot be saved, preventing those patterns from causing failures at run time.
Snowflake query monitoring with notifications: Immuta Detect monitors help you surface non-compliant data combinations and maintain data availability through data platform configuration changes. Monitors automate manual aggregation and calculation of user activity metrics based on query events. Additionally, they can notify you when the metrics exceed your intended operating thresholds. Monitors work with query tags, query execution outcomes, and Immuta Discover classification sensitivities when enabled.
This feature is in private preview and can be made available upon request. Contact your customer success manager for more details.
Fix to address a UI issue that led customers to believe that disabled users were not getting their access revoked. The UI has been updated and disabled users are now being filtered out from the data source members tab.
Immuta audit events in the universal audit model (UAM): Universal audits now include Immuta configuration audit events, domain audit events, sensitive data discovery (SDD) audit events, and user management audit events. Immuta tenants with the domain preview enabled can now audit domain structure changes.
Sensitive data discovery (SDD) pattern validation at runtime: SDD has used RE2 regular expression syntax since mid-year of 2023, and custom patterns created since that time are validated when added to the system. In limited cases, custom patterns created prior to this are not RE2 compliant and cause SDD analysis to fail without apparent cause. Now, those cases raise a detailed message stating the pattern name and the full regular expression. This message is shown under the data source health check menu for any targeted data sources where SDD failed for this reason.
Usability updates:
The new user profile page separates information better and makes it easier to understand.
Keyboard shortcuts are now available for some common functions. Keep an eye out for in-app guidance that helps with how to use them.
The account menu is wider for better readability and now has an option to toggle between light and dark mode. (By default, Immuta still uses your browser settings.)
Browser tabs tell you which page you’re on, instead of all being labeled “Immuta Console.” A new, adaptive favicon allows you to still tell that it’s Immuta at-a-glance, whether you’re in light or dark mode.
Activating regulatory frameworks in Discover: Fix to address an issue that prevented some customers from activating the regulatory frameworks in Discover. In some cases, customers who previously used the Immuta data security framework (DSF) before getting access to the new frameworks for GDPR, CCPA, HIPAA and PCI were unable to activate the new frameworks.
Amazon S3 integration: Immuta’s Amazon S3 integration enhances the management of permissions in complex data lakes on object storage. Eliminate scalability concerns as you enforce S3 access effortlessly. You can grant users time-bound access to files and folders, creating a security posture with zero-standing permissions, a gold-standard for compliance.
Additionally, you can grant access to human identities seamlessly through Identity Providers (IdPs) like Okta, Microsoft Entra ID, and more, thanks to integration with AWS IAM Identity Center. With the implementation of attribute-based access controls (ABAC) for S3, Immuta provides a simplified and efficient approach to managing data lake permissions. The privileges you set using the Amazon S3 integration can apply anywhere, from the CLI, to your applications using AWS SDKs, and on Amazon EMR Spark and Amazon SageMaker. Elevate your data governance with these advanced capabilities and experience a seamless and secure data access environment. Contact your customer success manager for more details.
Immuta audit events in the universal audit model (UAM): Universal audits now include Immuta policy and data sources changes.
Write policies: Write policies is a new capability to manage user write access authorizations via policy (enabling users to modify data in data source objects). This release supports the new functionality for Snowflake and Databricks Unity catalog integrations. Contact your customer success manager for more details.
Deprecated items remain in the product with minimal support until their end of life date.
Databricks Spark with Unity Catalog support integration: deprecated January 2024; end of life March 2024
dbt integration: deprecated January 2024; end of life March 2024
Data source expiration dates: deprecated January 2024; end of life May 2024
Bug fix for sensitive data discovery settings: Fix to the app settings for sensitive data discovery. Previously, the field to set the global SDD framework was hidden and, as a result, the global SDD framework could not be updated. The field is now available when SDD is turned on.
Bug fix for SDD rules display: Fix to an issue with adding new discovery rules to an identification framework. Previously, a newly added discovery rule would not appear in the list in the UI until the page was reloaded. Newly added rules now appear in the list immediately.
Immuta could not update a group through SCIM if that group was initially created through SAML before SCIM was enabled in an IAM's configuration.
Enhancement to Classification Frameworks rule display: In Discover, under Classification frameworks, the list of rules now shows all input and output tags in the browse list. There is no need to click further into a details screen to learn everything about a rule.
Change to SDD person name rule: The built-in Sensitive Data Discovery pattern for Person Name has been adjusted to more easily match columns that are consistent with person names.
Addressed a vulnerability that could allow a malicious user to enter HTML tags to affect the page's user interface. Such an issue could increase the risk of XSS attacks or threaten users’ privacy.
Performance improvements for Immuta tenants that had data sources with more than 500 masked columns.
Redshift Spectrum data sources were not deleted when the schema project they belonged to was deleted.
Fix to address issue that prevented users with the CREATE_DATA_SOURCE permission from being able to create a data source if a user without that permission previously tried to register data sources via the API.
Users were unable to edit an external catalog’s configuration.
Minor enhancements and fixes that are not user-facing.
Integrations API: The integrations API allows you to integrate your remote data platform with Immuta so that Immuta can manage and enforce access controls on your data.
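For instance, existing integrations can be listed over HTTPS. The endpoint path and header below are assumptions based on the API's description here, so confirm them against the integrations API reference before relying on them.

# List the integrations configured in your Immuta tenant
curl -X GET "https://<your-immuta-tenant>/integrations" \
  -H "Authorization: <your-api-token>"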
With native SDD enabled, users will have SDD options displayed when creating a data source for Snowflake and Databricks, but those SDD options will no longer be displayed for other technologies.
An additional 19 UAM audit events are captured and can now be viewed on the Immuta audit page in the UI or exported to S3. See the full list of supported events on the Universal audit model (UAM) page.
If creating a user initially failed because of an invalid payload, users encountered the following 409 error in a subsequent request with the correct payload: User with the provided userid already exists.
The Immuta system account user for the Unity Catalog integration requires the OWNER permission on catalogs with schemas and tables registered as Immuta data sources. This permission allows Immuta to administer Unity Catalog row-level and column-level security controls. The permission can be applied by granting OWNER on a catalog to a Databricks group that includes the Immuta system account user, which allows for multiple owners. If the OWNER permission cannot be applied at the catalog or schema level, each table registered as an Immuta data source must individually have the OWNER permission granted to the Immuta system account user.
Uploading a non-existent data source through the databricks/handler API endpoint resulted in a 500 error instead of a 404 error.
After a Redshift integration connection test was successful in the Immuta UI, users encountered an Internal server error when attempting to save the integration settings.
Immuta was not correctly granting access to data sources with a hasTagAs policy applied. If users did not have the specified attribute when the policy was created, they were not granted access to the data source even after they were later given the attribute.
Snowflake lineage was not propagating tags properly to child data sources.
Fixes to address validation test failures when configuring a Redshift integration.
Minor enhancements and fixes that are not user-facing.
Performance improvements when disabling a Snowflake integration.
The Databricks Unity Catalog OAuth certificate field was broken when users attempted to add certificates on the integrations page.
If the token used to configure the Databricks Unity Catalog integration was expired or revoked, applying masking policies to data sources or syncing policies displayed as being successful in the Immuta UI even though the job failed.
Vulnerability: CVE-2023-44270
Snowflake user impersonation roles were being removed incorrectly.
Users can select a light or dark mode theme for the Immuta UI from the user profile menu.
Design improvements of the user profile page.
CVE-2023-45803
CVE-2023-43804
CVE-2023-46136
Minor enhancements and fixes that are not user-facing.
When users attempted to register data sources from two different Starburst (Trino) catalogs, they encountered a remote table validation error if the table and schema names were the same.
Update to the deprecation of the legacy audit UI and /audit API: originally the EOL was set to March 2024; however, the EOL time frame has been delayed based on customer feedback. Check future release notes for the updated EOL date.
The Databricks Unity Catalog integration supports rotating personal access tokens.
Pages in the UI have a branded Detect footer to signify that they belong to the Detect module.
Fixes related to Databricks Unity Catalog custom certificate authority configuration. This feature is currently in preview and only available to select accounts.
Snowflake low row access policy mode is generally available. This mode improves query performance in Immuta's Snowflake integration by decreasing the number of Snowflake row access policies Immuta creates.
The Databricks Unity Catalog integration supports OAuth token passthrough as an authentication method for configuring the integration and registering data sources. This feature is currently in preview and only available to select accounts.
Fixes to address performance degradation in the Immuta UI.
Vulnerability: CVE-2023-45857
To create a classification framework to discover data sensitivity using the /framework API endpoint, users must now include the parameter rule.name. This will not affect any current behavior and will only impact newly created frameworks.
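A minimal request sketch follows; apart from the rule.name requirement stated above, every field, the hostname, and the header are assumptions, so check the framework API reference for the real schema.

# Each rule in the framework payload must now include a name
curl -X POST "https://<your-immuta-tenant>/framework" \
  -H "Authorization: <your-api-token>" \
  -H "Content-Type: application/json" \
  -d '{"name": "Example framework", "rules": [{"name": "tag-pii-columns"}]}'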
Users can configure their Databricks Unity Catalog integration to support their proxy server.
The Databricks Unity Catalog integration supports OAuth token passthrough. This feature is currently in preview and only available to select accounts.
The query editor page has been removed from the product. Users can no longer enable the query editor on the app settings page.
Creating a governance report on all data sources failed for instances with more than 10,000 data sources.
The Immuta CLI returned a 500 error when creating data sources if the payload had an empty string for the columnDescriptions.description parameter.
Schema monitoring did not create or delete views in Redshift Spectrum if data sources were registered through the Immuta V2 API /data endpoint.
If data sources had tags applied through Snowflake lineage and then an external catalog was updated with new tags, the lineage tags were dropped and the new tags were applied to the column.
The /detectRemoteChanges endpoint behaved inconsistently for Snowflake integrations.
Fixes to address a Snowflake table grants issue that caused data source background jobs to fail.
Vulnerability: CVE-2023-43804
Users can now adjust the audit frequency for Snowflake and Databricks Unity Catalog native query audit from the app settings page.
The new Load Audit Events button on the events page will manually sync audit events from Snowflake or Databricks into Immuta outside of the scheduled ingestion.
The option to enable the dbt integration has been removed from the Immuta application for new instances.
Databricks Spark project workspaces could not be created for Databricks integrations using metastore magic.
Minor write policy (private preview) fixes and enhancements.
Attempting to GRANT SELECT on a shared view in Snowflake failed with the following error: UDF IMMUTA_PROD.IMMUTA_SYSTEM.GET_ALLOW_LIST is not secure.
The data source health check was not running on Snowflake data sources.
Vulnerability addressed: CVE-2023-45133
Native SDD is enabled by default in all new Immuta tenants.
After editing a Databricks Unity Catalog data source, the configuration could not be saved.
Users encountered this error when disabling Snowflake table grants: Error: Query timed out. The connection information may be incorrect. Please double check and try again.
Native SDD can now be fully customized using the UI.
Fixes to address Immuta UI performance issues.
Deprecated items remain in the product with minimal support until their end of life date.
Deprecated September 2023; end of life October 2024
Deprecated September 2023; end of life March 2024
Discussions tab on projects and data sources: deprecated September 2023; end of life March 2024
HIPAA Expert Determination: deprecated September 2023; end of life March 2024
Query editor: deprecated September 2023; end of life October 2023
Deprecated September 2023; end of life January 2024
Deprecated September 2023; end of life March 2024
Users could not add all schemas when registering Databricks data sources in the Unity Catalog integration.
Users could not query Starburst data sources registered using OAuth authentication and got the following 400 error: This data source was created using anonymous authentication.
Users must now set an admin username globally in Immuta when using OAuth or asynchronous authentication to create Starburst data sources.
Schema monitoring was not properly creating new data sources in the Databricks Unity Catalog integration when new tables were detected.
The data source members tab did not display all subscribed users when a subscription policy that used advanced DSL rules with special subscription variables was enforced on the data source.
Vulnerability: CVE-2023-41419
Global subscription policies that used the @hasTagAsGroup or @hasTagAsAttribute variable were not granting and revoking users' access to tables properly. This fix addresses the issue for the Databricks Unity Catalog integration.
The data source details tab UI has been redesigned to consolidate data source connection information and remove the query editor button, the SQL connection snippets, and the copy schema button. This redesign aligns the format of this data source details page with the Detect dashboards.
Global subscription policies that used the @hasTagAsGroup or @hasTagAsAttribute variable were not granting and revoking users' access to tables properly. This fix addresses the issue for Azure Synapse Analytics, Databricks Spark, Redshift, and Snowflake integrations.
Databricks Unity Catalog integration: Write your policies in Immuta and have them enforced automatically by Databricks across data in your Unity Catalog metastore.
Fixes to address slow or unresponsive Immuta tenants.
Data source health status warning messages were not properly displayed for views.
Fixes to the Redshift integration configuration to address the impact of a change in the Okta Redshift application, which now requires usernames to have the prefix IAM.
The user profile menu icon is now a user icon instead of the user's first initial.
When an automatic subscription policy using the @hasTagAsAttribute variable was applied to a Snowflake data source, users were not granted access to the table in Snowflake.
Users can override the default storage URI for Databricks Spark project workspaces, so they can create project workspaces against storage in a different location if they have an alternative hostname, DNS, or other requirements.
The schema evolution owner was unset when data sources were removed from a schema project.
Fixes to address Immuta UI performance issues.
Vulnerability: CVE-2023-41037
Immuta allows masked columns to be used in row-level policies in the Snowflake and Databricks Unity Catalog integrations. This feature is currently in public preview and available to all accounts.
Syncing a Snowflake external catalog failed on data sources with more than 300 tagged columns.
The local subscription policy builder and project subscription policy builder now align with the format of the global subscription policy builder.
Fix to prevent enabling column detection on derived data sources, as column detection is unsupported for derived data sources.
Vulnerability addressed: CVE-2022-25883
Users can view license usage via the Immuta API to track the number of licensed users.
Users were able to change a schema project owner's role, which could leave Immuta in a state where the schema project could not be deleted.
Fix to address a validate connection error with Snowflake External OAuth.
Vulnerability addressed: CVE-2023-37920
Data source and user activity views for Snowflake are now GA.
All new SaaS accounts will have Detect on by default with the activity dashboards visible to users with the AUDIT or GOVERNANCE permission. Current customers are not affected by this change.
With native SDD enabled, users now have access to the Discover tab, where they can view their identification frameworks and adjust the rules within a framework.
When an IAM was created on the app settings page with immuta set as the ID, users could not sign in to Immuta using their Immuta Account on the login screen.
Sensitive data discovery failed to run on data sources that were registered using Snowflake External OAuth.
Redshift validation tests incorrectly required CREATE ON PUBLIC for the Immuta system account.
If a user other than the data owner navigated to the policies page of a Snowflake or Redshift data source, the activity panel displayed that "undefined" created the data source.
Fix to re-sync automatic subscription policies after schema detection runs on Snowflake tables that use CREATE OR REPLACE.
Vulnerabilities addressed:
CVE-2021-46708: Immuta no longer publishes the Swagger API, which removes the ability to exploit this vulnerability. Although the affected library is a downstream dependency of a package Immuta uses, the library that contains the vulnerability is not used by Immuta.
CVE-2023-37920
CVE-2023-38704
Previously, if users did not have a global framework set for sensitive data discovery, Immuta would run all built-in and custom identifiers by default and any new identifiers required no additional action to be run. Now, a global template must be set. A default template is set automatically with all current built-in and custom identifiers. However, any new identifiers you create must be manually added to the global template.
Immuta can pass a client secret to obtain token credentials in the Snowflake External OAuth authentication method.
External catalog health checks now include a timestamp so that users can easily determine when the catalog last attempted to sync with Immuta.
Fix to address column detection error on Snowflake data sources: TypeError: Cannot read properties of null.
Fix to address audit ingestion failures.
Performance improvements for identity managers with SCIM support enabled.
Native Snowflake policies and grants were not properly synced when users performed CREATE OR REPLACE on a table.
If OAuth was used as the authentication method, users encountered an error when creating a data source with schema monitoring enabled or enabling schema monitoring for an existing data source.
Fix to mitigate audit ingestion failures.
Fix to address the impact of a recent Databricks change that caused a NoSuchFieldException error when querying data on Databricks clusters with Unity Catalog enabled.
If whitespaces trailed or prefixed a project name when creating a Google BigQuery data source, the view was not created in Google BigQuery.
The duration of a Databricks Unity Catalog query is available on the Events page.
Immuta governance reports include native query records for Snowflake and Databricks Unity Catalog.
Fixes to address Snowflake audit record collection errors.
Vulnerability addressed: CVE-2023-37466
Unity Catalog native query audit requires the public preview version of system tables in Unity Catalog to be enabled. Follow the Databricks documentation to enable system tables.
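Once system tables are enabled, they can be queried like any other table. The sketch below is just a sanity check that audit data is flowing (Immuta ingests these records automatically); the table and column names are taken from Databricks' system tables documentation, so verify them against your workspace.

```sql
-- Quick check that Unity Catalog audit system tables are enabled and populated.
SELECT event_time, user_identity, action_name
FROM system.access.audit
ORDER BY event_time DESC
LIMIT 10;
```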
The data sources overview and user activity dashboards can be used with Databricks Unity Catalog integrations.
Fix to address an issue that caused schema detection and audit record ingestion to fail in Snowflake when using Snowflake External OAuth for authentication.
Immuta data sources were inconsistently linked to the Snowflake external catalog when automatically ingesting Snowflake object tags.
Vulnerabilities addressed:
CVE-2022-25883
CVE-2023-36665
Members with timed access to a data source in Immuta could still query data in Snowflake after their access had been revoked in Immuta.
If a Snowflake integration was configured with a Snowflake catalog, users could not configure another external catalog because the test connection button remained disabled.
Removing users from a group in Okta did not remove them from that group in Immuta.
User access events from Databricks Unity Catalog are now captured in UAM and can be exported to S3.
User attributes that included a period (.) were not handled properly by Unity Catalog policies.
Fix to address issue that caused some Snowflake audit records to be missing.
Native query audit is now available for the Databricks Unity Catalog integration: Data access activity from Unity Catalog is audited and can be viewed as Immuta audit logs in the UI or exported.
The example query on the data source overview page for native Databricks data sources was missing the catalog, schema, and table name.
Fix to address loading time and error when switching between data source activity monitoring dashboard and other data source tabs.
Multiple data sources could appear to have the same name in the UI because of white space between characters.
Snowflake data sources could not be created if the name contained a single quote (').
Sensitive data discovery customization is now GA: Sensitive data discovery (SDD) is an Immuta feature that uses sensitive data patterns to determine what type of data your column represents. SDD customization allows for organizations to create and insert their own patterns into SDD which will be recognized and then tagged when found.
Snowflake integration manual installation: After editing a setting on the app settings page (such as the custom login message), the key pair for the Snowflake integration authentication method disappeared when the configuration was saved.
Fix to address an issue with the Databricks Spark integration with Unity Catalog Support that caused an error when creating external tables.
Vulnerability: CVE-2023-32681
Support for configuring data source expiration dates has been deprecated.
Support for the Snowflake integration without Snowflake governance features has been deprecated and will be removed in December 2023.
Support for the legacy Starburst integration has been deprecated. Use the Starburst v2.0 integration instead.
Tags improvements: Tags now have a details page that provides valuable information about the tag itself and where it is applied within your data environment.
Fix to address the impact of a recent Databricks change that caused a NoSuchFieldException error when querying data in Unity Catalog.
Subscription policies with enhanced variables did not work when Snowflake table grants was enabled.
Vulnerability: CVE-2023-34104
Native schema monitoring for Snowflake: Monitor data in your Snowflake environment. This feature detects when new tables or columns are created or deleted and automatically registers (or disables) those tables in Immuta for you. Native schema monitoring for Snowflake also improves performance of legacy schema monitoring and enhances it by detecting destructively recreated tables (from CREATE OR REPLACE statements), even if the table schema wasn’t changed.
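For example, a destructive recreation such as the following (the table names are hypothetical) replaces the underlying Snowflake table object even when the column definitions stay the same; native schema monitoring detects this and keeps the Immuta data source in sync.

```sql
-- Hypothetical tables: the schema may be unchanged, but the table object itself is recreated.
CREATE OR REPLACE TABLE analytics.public.orders AS
SELECT * FROM analytics.staging.orders_raw;
```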
The data sources overview and user activity dashboards can be used with both Snowflake and Databricks integrations together.
The data source overview page shows an icon of the data access technology.
Create a row-level policy using a custom WHERE clause without Immuta validating your custom SQL. Previously, Immuta checked these custom SQL policies by running a query with the WHERE clause in the data platform. For organizations that do not grant Immuta SELECT access to their data platforms, this validation returned an error and locked down the tables. This validation check no longer exists.
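As a sketch, a custom predicate like the one below (table and column names are hypothetical) is now accepted as written; the SELECT wrapper is shown only for context when testing the clause manually, since Immuta no longer runs a validation query against the platform.

```sql
-- Hypothetical example of the kind of predicate a custom WHERE clause policy might use.
SELECT *
FROM analytics.public.orders
WHERE order_date >= DATEADD(day, -365, CURRENT_DATE());
```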
With Snowflake table grants enabled, changing a user's attribute through a group updated the Snowflake profiles table to reflect the entitlement changes. However, if a subscription policy specifying that group had already been applied to a data source, the visibility of the table did not change in Snowflake for the user. Instead, users who should have been restricted access from the table could still see that the table existed in Snowflake (but they could not query it to access data). Conversely, users who should have been granted access to the table could not see it.
Native SDD for Snowflake and Databricks is now public preview: Native SDD automatically discovers and tags your data based on the identifiers it matches but, unlike non-native SDD, it does not persist or move any of your data.
Filter the data sources overview dashboard by data platform type (Databricks or Snowflake).
Fix to address the following OpenID Connect login error: type error: cb is not a function uncaught exception detected.
Users could not save their SAML configuration on the app settings page after enabling SAML single log out and received the following error: options.allowIdPInitiatedSLO is not allowed.
SAML single log out: Minimize security risks by enabling SAML single log out, which terminates abandoned sessions after a timeout event occurs or after a user logs out of Immuta, their identity provider, or another application.
Fix to address an issue that caused sensitive data discovery to run on data sources added by schema detection, even if sensitive data discovery was disabled.
Databricks metastore magic: Migrate your data from the Databricks legacy Hive metastore to the Unity Catalog metastore while protecting data and maintaining your current processes in a single Immuta tenant.
The Redshift integration did not properly create views for tables that included column names with special characters. When users queried those views, they received column doesn't exist errors.
When configuring Snowflake object tag ingestion, the connection failed if the host provided was a Snowflake PrivateLink URL.
Vulnerability: CVE-2023-32314
Fix to address a race condition that prevented job clusters from starting properly on Databricks runtimes 9.1 and 10.4.
New tag side sheet: Tag experience has been improved with the addition of tag side sheets, which provide contextual information about tags and can be accessed wherever tags are applied.
An additional 20 UAM audit events are captured and can now be exported to S3. See the full list of supported events on the Universal audit model (UAM) page.
The audit Events page will now show multiple targets for queries that join tables.
Running an external catalog sync did not trigger policy updates when only table tags had changed. If users only added or removed table tags, global policy updates were not applied to data sources.
The data source activity monitoring for Snowflake charts were showing the largest value for each data point on the chart rather than the sum of the values.
Data source and user activity monitoring for Snowflake are now public preview and can be used without classification enabled. Immuta users with Snowflake data sources can use these features to view visualizations of the audit information with no configuration.
Data source and user activity monitoring dashboards can now be filtered by Snowflake database or Snowflake schema.
Snowflake connection validation failed if users created a custom system account role name.
The data source overview and person overview queries charts were identical to the data overview queries chart, no matter what data source or person was selected.
A backend query was modified to improve the response time of the data source and user activity monitoring dashboards.
Deprecated items remain in the product with minimal support until their end of life date.
Support for the interpolated comparison WHERE clause function has been deprecated.
This deployment addresses a SAML login issue discovered in the original deployment on April 17. Consequently, the April 17 release notes entry has been replaced with the content below.
Snowflake integration using Snowflake governance features: Users can create conditional masking, minimization, WHERE clause, and time-based restriction policies that use masked columns as input.
The enhanced subscription policy variable @hasTagAsAttribute did not unsubscribe users with that attribute from the data source when a matching column tag was removed.
Snowflake table grants did not properly update user subscriptions to data sources if their group in Immuta was renamed and the group name was used in an automatic subscription policy.
Vulnerabilities:
CVE-2023-0842
CVE-2023-29199
The data source health check button has been removed from the data source health menu. Use these health checks instead.
Data source and user activity monitoring dashboards can now be filtered by Snowflake cluster, warehouse, and role.
Performance improvements of the data source monitoring for Snowflake overview dashboard.
Users could not include duplicate tags in a single row-level policy when using the policy builder.
When configuring an external REST catalog, testing the data source link timed out after three seconds, and users received a failed to retrieve data error.
Vulnerabilities:
CVE-2023-0842
CVE-2023-29017
Tag enhancements are generally available and update various components of the UI.
Snowflake integration: If a group's access was revoked from a data source in Immuta (manually or through a policy), table grants was not issuing revokes in Snowflake for members of the group that lost its subscription status, allowing them to still access that data. However, if low row access policies for Snowflake was disabled, all the rows in the data source were appropriately hidden.
Snowflake external catalog tags were not synced or pulled in to Immuta.
Users could not enable column detection if they had not made all columns visible in the data source during data source creation.
Data source and user activity monitoring dashboards will persist the date range selected for all dashboards in that user's session. Once logged out, the date range will return to default.
When using SCIM to sync an identity manager with Immuta, removing a user from a group in the identity manager did not remove the user from that group in the remote database in the following integrations:
Snowflake
Redshift
Synapse
This issue could allow that user to retain access to data if they were removed from a group that was granted access by a policy.
If an Advanced DSL policy used the @columnsTagged function and the policy had multiple conditions, all users were restricted from seeing data.
Unity Catalog clusters: A breaking change in Databricks caused a wrong number of arguments error when users ran Unity Catalog queries.
When Databricks query plans for tables registered in Immuta were too large, Immuta could not process the audit record.
Vulnerabilities:
CVE-2023-24807
CVE-2023-28154
Block a set of Immuta's custom user-defined functions (UDFs) from being used on your Databricks Spark clusters. Blocking use of these functions allows you to restrict users from changing projects within a session.
Left navigation UI enhancement. The left navigation includes two tiers and reorganizes several pages:
Data includes the data sources and projects pages.
People includes the admin page.
Policies includes the subscription policies and data policies pages.
Support for Databricks Runtime 11.3 LTS.
Vulnerability: CVE-2022-23529
The number of months for historical ingestion of data source and user activity monitoring for Snowflake can be configured from the app settings page.
A single query for multiple data sources will result in a single Snowflake universal audit model (UAM) event and appear as one event on the Events page.
The custom date range for data source and user activity monitoring dashboards supports custom time ranges.
When executing the Immuta Data Security Framework, the status of the classification job for individual data sources can now be found in the data source health dropdown. The options include the following:
Classification complete: Classification has run on the data source and applied the appropriate classification tags.
Classification pending: A framework has been created, activated, or updated and will run on the data source.
Classification is not applicable: The data source is not affected by classification.
The Databricks Spark integration sometimes provided an incomplete list of databases in the Data Explorer UI or in Databricks clusters after running SHOW DATABASES.
Under rare circumstances, a global data policy using a tag failed to apply to some data sources.
User accounts created with IAM integrations using the SAML 2.0 protocol before SCIM was enabled were not updated by SCIM provisioning after SCIM was enabled.
With data source and user activity monitoring for Snowflake enabled, users without AUDIT permission were brought to an empty overview dashboard when logging in.
Users can no longer register multiple data sources that reference the same underlying table in their remote data platform. Existing duplicate data sources that point to the same remote table will not be affected by this change; this feature removal only applies to data source creation.
Fix to repair the impact of a recent Databricks Data Explorer change that issues the use catalog hive_metastore command on Databricks runtimes older than 11.x. The Databricks Spark integration now handles this command issued by Databricks Data Explorer.
The Default subscription policy option allows you to choose whether or not a subscription policy will automatically restrict access to tables when they are registered as Immuta data sources. By default, Immuta does not apply a subscription policy on data you register (unless an existing global policy applies to it) so that you can preserve policies applied by your underlying data platform on those tables, leaving existing access controls and workflows intact.
Snowflake low row access policy mode improves query performance in Immuta's Snowflake integration by decreasing the number of Snowflake row access policies Immuta creates.
With data source and user activity monitoring for Snowflake enabled, the Audit tab on the navigation menu defaults to the Events page.
Classification for query sensitivity is now dynamic. For a query that joins tables, Immuta uses the same classification rules applied to tables and applies those rules to columns of the query. Immuta applies a new set of classification tags to the query columns and calculates sensitivity for the query event in the audit record. These query classification tags are not included on the tables' data dictionary.
When applying a global subscription policy that uses the @hasTagAsGroup or @hasTagAsAttribute enhanced subscription policy variable (for example, "Allow users to subscribe when @hasTagAsAttribute('AllowedAccess', 'dataSource') on all data sources") to a data source, user access was restricted as expected; however, if the data source tag changed through the Immuta V2 API, access wasn't changed, which could potentially allow users to see data that they shouldn't. Additionally, access wasn't changed if the policy was removed.
Users could not save configuration changes if they enabled Snowflake table grants after creating the integration.
Users could not save configuration changes if they edited an existing Snowflake integration.
Detect pages with over ten thousand (10,000) results would error. There is now a notification that only ten thousand (10,000) of the results are available with the recommendation to refine the page by filter or search.
Vulnerabilities:
CVE-2022-32149
CVE-2022-23491
When applying a global subscription policy that uses the @hasTagAsGroup or @hasTagAsAttribute enhanced subscription policy variable (for example, "Allow users to subscribe when @hasTagAsAttribute('AllowedAccess', 'dataSource') on all data sources") to a data source, user access was restricted as expected; however, if the data source tag changed, access wasn't changed, which could potentially allow users to see data that they shouldn't. Additionally, access wasn't changed if the policy was removed.
Users were able to query system tables in the query editor by using some specific Postgres functions.
Users can no longer set schema to null when bulk updating data sources using the api/v2/data endpoint.
Snowflake table grants is generally available. Let Immuta manage privileges on your Snowflake tables instead of manually granting table access to users. With Snowflake table grants enabled, Snowflake Administrators don't have to manually grant table access to users; instead, Immuta manages privileges on Snowflake tables and views according to the subscription policies on the corresponding Immuta data sources.
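Conceptually, Immuta issues Snowflake grants of roughly the following shape on your behalf; the role, table, and user names here are illustrative only and are not the exact objects Immuta creates.

```sql
-- Illustrative only: Immuta manages equivalent grants automatically from subscription policies.
GRANT SELECT ON TABLE analytics.public.orders TO ROLE IMMUTA_ORDERS_SUBSCRIBERS;
GRANT ROLE IMMUTA_ORDERS_SUBSCRIBERS TO USER "alice@example.com";
```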
Starburst Integration v2.0: Immuta’s Starburst integration v2.0 allows you to access policy-protected data directly in your Starburst catalogs without rewriting queries or changing your workflows. Instead of generating policy-enforced views and adding them to an Immuta catalog that users have to query (like in the legacy Starburst integration), Immuta policies are translated into Starburst rules and permissions and applied directly to tables within users’ existing catalogs.
Immuta Detect is released for private preview. Detect is a tool that monitors your data environment and provides analytic dashboards in the Immuta UI based on audit information of your data use.
Deprecated items remain in the product with minimal support until they are removed from the product.
External masking
Snowflake, Redshift, and Azure Synapse integrations:
If a combined global subscription policy was applied to a data source and a user updated a global data policy (create, update, delete) that also applied to that data source, the data policy was not applied to the data source. Consequently, a user querying that table could see values of masked columns in plaintext.
If an existing global subscription policy and an existing global data policy applied to the same data source, then when that data source was modified (or a new data source targeted by those policies was created), only the global subscription policy was applied to the data source. Consequently, a user querying that table could see values of masked columns in plaintext.
Vulnerabilities:
CVE-2022-23529
CVE-2022-40899
Editing a schema project to a database that already exists fails.
If users registered tables from the same schema as Immuta data sources, users could break data sources they didn't own if they deleted or changed the schema project connection.
The Databricks Unity Catalog integration configuration on the App Settings page asked for an "Instance Role ARN" instead of the "Instance Profile ARN."
Users were unable to add data sources from the Hive Metastore in the Databricks Spark integration with Unity Catalog.
Databricks Spark Integration with Unity Catalog Support: Enable Unity Catalog support on Immuta clusters to use the Metastore across your Databricks workspaces and enforce Immuta policies on your data. This integration provides a migration pathway for you to add your tables in Unity Catalog while using Immuta policies. Consequently, when additional Unity Catalog features are available, you will be ready to use them. Databricks SQL policies will continue to be enforced through a view-based method, and interactive cluster policies through the Immuta plugin method.
Databricks Runtime 11.2 support.
Write Fewer, Simpler ABAC Policies. Enhanced Subscription Policy Variables (Public Preview) empower users to write fewer, simpler ABAC (Users with Specific Groups/Attributes) policies. Previously, policy writers had to specify groups in separate policies to grant access. With Enhanced Subscription Policy Variables, Immuta's policy engine compares users' groups with data source or column tags in a single policy to determine if there is a match. Users who have a group that matches a tag on a data source or column will be subscribed to that data source.
Immuta supports registering data sources that exceed 1600 columns. However, sensitive data discovery and health checks will not run on those data sources.
The maximum length for the Snowflake role prefix when using Snowflake Table Grants is 50 characters.
Users cannot enable or disable native impersonation when editing a previously configured integration.
Alternative owners of data sources were not included in the subscription audit records if the data source was created using the Immuta V2 API.
Snowflake Table Grants: If a user who was added to a Snowflake data source through a group Subscription Policy was removed from a data source, that user could see the columns (without any data) of the table when they queried that data in Snowflake.
When users edited a Snowflake integration configuration and changed the authentication type to Snowflake External OAuth, the configuration was still saved as Username and Password for the authentication type.
Vulnerability: CVE-2022-39299
Editing a schema project to a database that already exists fails.
The following UI elements and workflows have been removed. Reach out to your Immuta representative if you need one of these elements re-enabled.
Data source Metrics tab.
Data source Queries tab.
Creating data sources with a SQL statement.
Selecting specific columns to hide when creating a data source in the UI or V2 API.
Tag enhancements (public preview): The tag enhancements feature will improve user experience by updating various components of the UI.
Azure Synapse Analytics: If a user was granted access to about 1300 data sources, access to those tables was delayed.
Deleting an integration on the App Settings page and saving the configuration caused the Immuta UI to crash.
Editing a schema project to a database that already exists fails.
Collibra integration performance improvements.
Immuta's Collibra integration recognizes the implicit relationship between the Database View in Collibra and Immuta data source columns so that tags are properly applied to those columns in Immuta.
The Immuta V1 API /dataSource endpoint returns the remote table name so that users can get the schema and table name of a data source in one API call.
The data source Relationships tab only displayed up to 10 associated projects.
If creating the Immuta database failed in the Snowflake without Snowflake Governance Controls or Databricks SQL integration, the error returned was incorrect.
Removed historical schema monitoring metrics that contained database connection strings.
Subqueries that referenced a table that didn't exist never resolved.
Policies:
Disabling a Global conditional masking policy on a data source could sometimes disable all policies or none of the policies on the data source.
If users submitted a Global Policy payload to the API that was missing the subscriptionType from the actions, the Global Policies page broke when trying to display Subscription Policies.
Global Subscription Policies that contained the @hasTagAsAttribute variable caused errors and degraded performance.
Snowflake with Snowflake Governance Features: Changing a column's masking policy type resulted in errors until users manually synced the policy in Immuta.
Redshift:
Users were unable to query tables that had a Limit usage to purpose(s) <ANY PURPOSE> policy applied to them.
There were error-handling inconsistencies between the Immuta UI and the database logs.
Vulnerabilities:
CVE-2022-3517
CVE-2022-3602
CVE-2022-37616
CVE-2022-39353
Editing a schema project to a database that already exists fails.
Deleting a tag hierarchy deleted any tags with a like name. For example, deleting the tag department would also delete the tag department_marketing.
The Refresh External Tags button appeared on the Tag page even if no external catalogs were configured.
Users couldn't change the schema detection owners for schema projects.
Collibra: If multiple values were assigned to an attribute in Collibra, they were added as a single tag in Immuta. For example, if an attribute list called Color contained values Blue, Green, and Yellow, and Blue and Green were selected in Collibra, Immuta displayed the data tag as Color.Blue,Green. Instead, Immuta should have created two tags: Color.Blue and Color.Green.
Webhooks that were listening to setUserAuthorizations were not triggered.
Deleting a Data Policy did not enable the Save Policy button.
With Approve to Promote enabled, adding a comment to a policy did not enable the Save Policy button.
Editing a schema project to a database that already exists fails.
Use the latest Databricks Runtime with Immuta. Databricks Runtime 11.0 is now supported in Immuta.
Connect Snowflake data to Immuta without providing your account credentials. Immuta supports Snowflake External OAuth as a non-password authentication mechanism when configuring the Snowflake integration or creating Snowflake data sources.
Let Immuta manage privileges on your Snowflake tables instead of manually granting table access to users. With Snowflake table grants enabled, Snowflake Administrators no longer have to manually grant table access to users; instead, Immuta manages privileges on Snowflake tables and views according to the subscription policies on the corresponding Immuta data sources.
Ensure that policies are adequately reviewed and approved before they are eligible for production environments. Instead of creating policies directly in production, Approve to Promote allows policy authors to create, assess, and revise policies in a policy-authoring environment. Then, the policy must be approved by a configured number of users before it is promoted to the production environment and enforced on data sources.
The undocumented deletedHandlerSubscribers attribute, which indicates a subscription policy changed, was removed from the data source notifications webhook payload. If you were depending on that attribute in your customized webhooks, that code won't work.
IAMs:
Microsoft Entra ID: When SCIM was enabled for Microsoft Entra ID, sometimes user attributes were removed from users in Immuta when they should not have been.
Policies:
Global Subscription Policies that were applied “When selected by data owners” could not be deleted when using Approve to Promote.
If a Global Subscription Policy was disabled for a data source, staging that Global Policy on the policies page caused the Subscription policy to change on the data source.
Local Policies using @columnTagged() were not properly applied to data in Databricks when the column was tagged.
Projects:
Project owners could not edit projects with approved purposes and data sources.
The baseline percent null values could not be adjusted for k-anonymized columns on the Expert Determination tab in projects.
Snowflake:
Instances that used the Snowflake integration without Snowflake Governance features were sometimes automatically migrated to using Snowflake Governance features when Immuta upgraded.
Vulnerability:
CVE-2022-25647
Tags sometimes did not update on data sources if those tags were quickly added or removed, which could cause policies to not be updated.
The data source page sometimes took several minutes to load if there were over 100,000 data sources registered in Immuta.
If a user was a member of a large number of groups (about 2,000), the UI search was sometimes slow.
When searching for data sources on an instance with over 30,000 data sources and tables with complex struct columns, the search could take several minutes to return or freeze the Immuta tenant.
An Adobe Font requirement caused timeout issues in the Immuta UI.
Editing a schema project to a database that already exists fails.
Application Admins can enable Policy Adjustments separately from HIPAA Expert Determination on the App Settings page.
Snowflake Integration:
Schema detection caused non-date columns to be incorrectly tagged "New" for data sources that were added in bulk.
Migrating from a Snowflake Using Snowflake Governance Controls integration to a Snowflake Without Using Snowflake Governance Controls integration failed.
Enabling a Snowflake Using Snowflake Governance Controls integration using the automatic setup method failed.
Sensitive Data Discovery did not automatically run when users bulk created data sources.
If Immuta was unable to communicate with an external IAM provider because of a connection failure, groups were removed from Immuta, even if the IAM was still active.
When creating 100,000 tables, the data source creation job sometimes expired.
User Admins could not delete attributes assigned to an Immuta Accounts user.
After configuring SAML and OpenID IAMs, users could not initially log in.
In Databricks runtime 10.4, ShowPartitions commands on Delta tables failed.
Users were unable to edit Global policies that were not on the first page of results.
Automatic Subscription policies could cause out of memory issues if they added about 300 users to a data source.
Editing a schema project to a database that already exists fails.
Project owners are unable to edit projects with approved purposes and data sources.
IAM Signing Certificate Required for SAML. You are required to upload your IAM signing certificate to Immuta to add or edit SAML-based IAMs. If you are already using Immuta's SAML integration, provide a signing certificate to existing configured IAMs for them to continue working.
In the Snowflake Governance features integration, unmasked data was sometimes visible for a fraction of a second while data policies were being applied.
Databricks user impersonation did not work if backticks enclosed the username.
Clicking the Sync User Metadata button in the Immuta UI could queue an infinite number of profile refresh background jobs.
The enriched audit logs created an error if data policies did not exist on a data source.
The attributes type for users was inconsistent with policy attributes type in the audit logs.
Advanced Subscription Policies: If an advanced Subscription policy that did not contain special variables was created, customers with over 100,000 users could experience OOM issues.
Okta/SCIM: When adding users to Okta to sync with Immuta, TypeError: attributeValues is not iterable appeared in the logs.
LDAP users with parentheses in their common name caused authentication to fail when group sync was enabled.
Editing a schema project to a database that already exists fails.
Project owners are unable to edit projects with approved purposes and data sources.
Databricks Runtime 10.4: Show partitions on delta table fails.
Access background jobs with enhanced visibility. This feature allows you to access information to debug issues and identify the cause.
Use the latest Databricks Runtime with Immuta. Databricks Runtime 10.4 LTS is now supported in Immuta.
Prove compliance with Databricks audit trails that include denial events. When Immuta users query Databricks tables that have been registered in Immuta, the query audit logs will include denial events and the policies associated with the decision. Such audit trails are required by some information security teams to prove compliance with secure data access.
Snowflake:
Share policy-protected data in Snowflake with other Snowflake accounts using Snowflake Data Sharing. This integration allows you to author policies in Immuta and protect data shared with other Snowflake accounts in real time. For example, if a pharmaceutical company needed to share trial results outside their Snowflake account and needed to protect PHI, they could share that data outside their account and still have Immuta policies enforced.
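On the Snowflake side, the share itself is set up with standard data sharing commands like the sketch below (the share, database, schema, table, and consuming account identifiers are placeholders); Immuta's role is to keep its policies enforced on the shared data.

```sql
-- Placeholder names throughout; standard Snowflake data sharing setup.
CREATE SHARE trial_results_share;
GRANT USAGE ON DATABASE clinical_db TO SHARE trial_results_share;
GRANT USAGE ON SCHEMA clinical_db.trials TO SHARE trial_results_share;
GRANT SELECT ON TABLE clinical_db.trials.results TO SHARE trial_results_share;
ALTER SHARE trial_results_share ADD ACCOUNTS = partner_org.partner_account;
```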
Removed features are no longer available in the product.
Advanced rules DSL for data policies: deprecated in 2022.1.0; removed in 2022.2
Differential privacy: deprecated in 2022.1.0; removed in 2022.2
The custom / external policy handler: deprecated in 2022.1.0; removed in 2022.2
Policy export/import: deprecated in 2021.4; removed in 2022.2
Alternative solutions
Instead of using differential privacy, combine k-anonymization and randomized response policies on your data. Immuta requires that you opt in to use k-anonymization. To enable k-anonymization for your account, contact your Immuta representative.
As an alternative to the policy export/import feature, use the Immuta CLI to clone your Global policies.
Creating a policy using the Advanced DSL Data policy builder in the view-based Snowflake integration sometimes caused errors.
When a user's entitlements changed, Immuta did not properly send notification to the integration to GRANT or REVOKE access to tables in the remote system.
Entering a single quotation mark in the search bar sometimes caused an error.
After an Alation or Collibra catalog was configured, new data sources were not linked to the catalogs automatically.
Logging in to Immuta after being logged out due to inactivity sometimes displayed a blank page.
Local policies sometimes appeared on the Global policies page.
Activity panel covered the policy builder when long SQL statements were entered for conditional policies.
Clicking the Policies icon in the left sidebar while editing a Subscription policy displayed an empty Data Policy Builder instead of the Policies page.
When configuring an External REST Catalog, users could not click the Test Connection button if the No Authentication option was selected.
The Immuta login page did not display for some older browser versions of Edge.
LDAP users with parentheses in CN cause authentication to fail if group sync is enabled.
Databricks Runtime 10.4: Show partitions on delta table fails.
The visual styles in the application have been updated.
Users can add multiple alternative owners to data sources at once.
Users can now specify column tags instead of just data source tags with the @hasTagAsAttribute Enhanced Subscription Policy variable.
Policy import/export
When attributes were added to groups that affected an Automatic Subscription policy, users were added or removed from the data source(s) appropriately, but these changes were not audited.
Deleting the last values or all values from user or group attributes caused errors when processing Automatic Subscription policies.
Local policies that were created or updated sometimes displayed on the Global Policy page.
Writing a Global ABAC Subscription policy using @username in the Advanced DSL builder did not subscribe the user to the data source.
Changing a Global Allow Individually Selected Users Subscription policy back to a Global ABAC policy that used special functions caused an error: Error: "actions[0].exceptions.conditions[0]" does not match any of the allowed types.
If a policy was added through the Immuta CLI, editing that policy in the Immuta UI sometimes caused an error.
After being added to a data source through an Automatic Subscription policy, users sometimes encountered an error when making unmasking requests.
Creating a Global conditional masking policy in the Advanced DSL builder that used @iam or @username caused an error when the policy was applied to a data source.
Redshift:
Regex masking policies that used metacharacters with backslashes (\d, \s, etc.) did not mask columns.
Users' metadata was not updated in the integration if their usernames contained apostrophes.
Enhanced Subscription Policy Variables (Public Preview): This feature empowers users to write fewer, simpler ABAC (Users with Specific Groups/Attributes) policies. Previously, policy writers had to specify user attribute keys in separate policies to grant access. With Enhanced Subscription Policy Variables, Immuta's policy engine compares user attributes with data source properties (database, host, schema, table, or tag) in a single policy to determine if there is a match. When attribute keys match the property specified, users will be able to subscribe to the data source(s).
Snowflake Table Grants: With this feature enabled with the Snowflake with Governance Controls integration, Snowflake Administrators no longer have to manually grant table access to users; instead, Immuta manages privileges on Snowflake tables and views according to the subscription policies on the corresponding Immuta data sources.
Improved performance of auto-subscription policies.
If an SSL CA cert was used when setting up an LDAP IAM, clicking the Test LDAP Sync button resulted in an error.
Tags were removed from data sources if they were applied after data source creation and before the external catalog health check (which is triggered by navigating to the data source). However, tags applied to a data source during creation remained on the data source.
Group permissions were not considered when users attempted to create data sources or Global Policies. For example, if a user was a member of a group that had the GOVERNANCE permission assigned to it, that user was not inheriting the GOVERNANCE permission. Consequently, when that user tried to apply a Global Policy to a data source, they received an error. However, if a user had the GOVERNANCE permissions applied to their account directly, they were able to create a Global Policy. This same behavior occurred with the CREATE_DATA_SOURCE permission.
Creating an Immuta data source from a Databricks view that contained an implicit column alias failed.
Editing a schema project to a database that already exists fails.
The App Settings page freezes when a user selects Migrate Users from BIM when configuring an external IAM.
An auto-subscription policy that adds more than 64,000 users to a data source can cause errors in the logs and impact subscription reports.
Integration jobs can end up in an expired state, even if they successfully are processed, under certain load conditions.
Edit configuration for integrations: Users can edit the configuration for Azure Synapse, Databricks SQL, Redshift, and Snowflake without disabling the integration.
Manual approvals in ABAC global subscription policies: Governors can now add an approval workflow as an alternative method of access to data sources if a user does not meet the conditions of the Users with Specific Groups/Attributes (ABAC) Global Subscription Policy.
Before this release, if someone was manually added by an owner or Governor and didn’t meet the ABAC policy requirements, they could query the table, but no rows would come back because they didn’t have the groups or attributes specified in the policy. Now, manually adding users overrides the ABAC policy. Therefore, any users who had been manually subscribed to a data source but could not see any data will see data after this upgrade. You can prevent this behavior by either switching the Subscription policy to auto-subscribe (which removes users who don't meet the Subscription policy) or adding a Data Policy that redacts rows for users who do not have the groups or attributes specified in the Subscription policy.
If users have existing Global Subscription policies that were combined, those will not change on the data source after the upgrade. However, the Require Manual Subscription option will automatically be enabled on those existing policies, so users who meet the conditions of the policy will not be automatically subscribed.
Sensitive data discovery global template and default sample size UI (public preview): Users can adjust these configurations on the App Settings page. If users already had a Global Template or default sample size configured in the Advanced Configuration section, these configurations will migrate to the new Sensitive Data Discovery section on the App Settings page when they upgrade their Immuta tenant.
Starburst integration: Through this integration, Immuta applies policies directly in Starburst so that users can keep their existing tools and workflows (querying, reporting, etc.) and have per-user policies dynamically applied at query time.
Support for PrivateLink with Snowflake on AWS: Contact Immuta to enable this feature.
"Active" tags on merged Share Responsibility Global policies did not show the active number of data sources they were enforced on.
The configuration section for Native Workspaces could break if a native handler was not enabled.
Databricks:
If a table in Databricks had been created from an AVRO schema file, queries against the table on Immuta-enabled clusters only returned results for partition columns. Additionally, trying to create tables from an AVRO schema file on Immuta-enabled clusters returned an error: "Unable to infer the schema."
Fixed Databricks init script error handling when artifacts weren't downloading correctly.
Errors occurred when using mlflow.spark.log_model on non-Machine Learning clusters.
Because Immuta's built-in identity manager (BIM) is not enabled in SaaS, the App Settings page froze when a user selected Migrate Users when configuring an external IAM.
Redshift integration performance issues related to Python UDF concurrency capabilities.
Snowflake:
When enabling a native Snowflake integration with an external catalog, if the host had multiple periods in the account name, the Snowflake plugin was invalid.
When users tried to edit the Excepted Roles/Users List for the integration, the configuration saved correctly. However, when the App Settings page refreshed, the Excepted Roles/Users List was empty and the allow list in Snowflake was not updated.
When a user's group was deleted in an external IAM, that update appeared in Immuta but was not syncing properly in Snowflake.
When using Snowflake native controls with Excepted Roles specified, if users tried to do an outer join using a column that had a masking policy applied, it resulted in an error: SQL compilation error: Invalid expression [] in VALUES clause.
Editing a schema project to a database that already exists fails.
Project owners are unable to edit projects with approved purposes and data sources.
Disable query engine: Application Admins can disable the Query Engine on the App Settings page.
New Immuta UI: Although the most significant change is the adjustment to the visual styles in the application, other UI changes include an expandable left navigation and dark mode support.
Support for AWS-Sydney.
Databricks init script: To use the updated Immuta init script and cluster policies, existing SaaS users must update their Databricks cluster configuration following this Manually Update Your Databricks Cluster guide.
Databricks:
Views: Although users could create views in Databricks from Immuta data sources they were subscribed to, when users tried to select from those views, they received an error saying that the Immuta data source the view was created against did not exist or that they did not have access to it.
External Delta Tables: Querying an external Delta table that had been added as an Immuta data source as a non-admin resulted in a NoSuchDataSourceException error if the table path had a space in it.
Sensitive Data Discovery failed for Databricks data sources when initiated in the UI if the cluster was configured to use ephemeral overrides.
The integration did not work with the Databricks Runtime 9.1 maintenance update.
Ephemeral Overrides:
The UI was not displaying the checkbox to apply the ephemeral override to multiple data sources.
Ephemeral overrides were not being used when calculating column detection.
Out of memory errors occurred when several actions or jobs ran simultaneously, such as:
Bulk disabling data sources
Bulk creating data sources
Column detection
Schema detection
Sensitive Data Discovery: Users could not configure sampleSize to override the default number of records sampled from a data source.
Snowflake Governance Features Integration: When a data source existed in Immuta but not in Snowflake and a user tried to refresh the native policies, Immuta continuously retried to update the policies and then failed with the following error: Execution error in store procedure UPSERT_POLICIES: SQL compilation error: Table does not exist or not authorized.
Vulnerabilities
CVE-2022-0355: Information Exposure in simple-get
CVE-2022-0235: Information Exposure in node-fetch
CVE-2022-0155: Information Exposure in follow-redirects
CVE-2021-3807: Regular Expression Denial of Service (ReDoS) in ansi-regex
CWE-451: User Interface (UI) Misrepresentation of Critical Information in swagger-ui-dist
Databricks: Errors occur when using mlflow.spark.log_model on non-Machine Learning clusters.
Editing a schema project to a database that already exists fails.
Because Immuta's built-in identity manager (BIM) is not enabled in SaaS, the App Settings page freezes when a user selects Migrate Users when configuring an external IAM.
Bulk Approve Subscription Requests: Data Owners can approve all pending access requests at once.
Databricks:
Databricks Runtime 9.1 LTS Support.
User Impersonation in Databricks: Databricks users can impersonate Immuta users.
Multiple Immuta tenants are supported in a single Databricks workspace: This change adds a new field in the Databricks Integration UI: a Unique ID that ties the set of cluster policies to their instance of Immuta. This feature makes it easier for users to configure the integration and avoid cluster policy conflicts.
Support for Notebook-Scoped Libraries on Machine Learning Clusters: Users on Databricks runtimes 8+ can manage notebook-scoped libraries with %pip commands.
GCM TLS ciphers: enabled by default in Databricks init script.
TLS verification can be disabled in the Databricks init script when necessary, such as when JAR files for the init script are hosted where a self-signed or internal TLS CA is used.
Data source creation performance improvements.
Redshift: Support for Redshift is now generally available.
Permanently Delete Users: User data can be deleted and permanently removed from Immuta, which aligns with the GDPR requirement.
Spark Direct File Reads: Users can manage Immuta policies against direct file reads in Spark.
Apply Immuta Attributes to Groups from External IAMs: User Admins can apply attributes in Immuta to groups from external IAMs.
User profile sync performance improvements.
CVE-2021-3918
Databricks:
Views with where clauses that included a string with the SQL comment characters -- caused Immuta data source failures.
Aliases in view create statements were case-sensitive.
Using mlflow.spark.save_model and mlflow.spark.log_model was blocked by the Immuta Security Manager and caused other errors.
Databricks and Redshift integrations: Attributes with two or more single quotes were not handled correctly.
Snowflake row access policy performance improvements.
Requesting access to a schema project with a large number of data sources (approximately ten thousand) caused 502 errors.
When creating data sources after an Alation catalog was configured, tags were not automatically added to the data sources.
Databricks:
Change Data Feed: Immuta supports Change Data Feed (CDF), which shows the row-level changes between versions of a Delta table. The changes displayed include row data and metadata indicating whether the row was inserted, deleted, or updated; a short read sketch appears after this list.
Databricks Runtime 8.4
Databricks Runtime 9.1: This runtime is only supported on Python/SQL clusters.
User Impersonation: User impersonation allows users to natively query data in Snowflake, Redshift, and Synapse as another Immuta user.
Snowflake:
Snowflake as a Catalog: When enabled, Immuta automatically registers an external Snowflake catalog using the provided hostname and the Immuta system account that is generated. Any Snowflake sources registered from that host automatically have their relevant tags ingested into Immuta.
Snowflake Integration: Immuta manages and applies Snowflake row access and column masking policies on individual tables directly in Snowflake instead of creating views for Immuta data sources.
Snowflake Audit: Users can view audit records on the Immuta Audit page for queries run natively in Snowflake.
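To illustrate the Change Data Feed item above, the sketch below reads row-level changes from a Delta table; the table name and starting version are illustrative, and the sketch assumes CDF is already enabled on that table.

```python
# Minimal sketch of reading Change Data Feed (CDF) from a Delta table; the table
# name and starting version are illustrative. Assumes CDF is enabled on the table
# and that `spark` is predefined in the Databricks notebook.
changes = (
    spark.read.format("delta")
    .option("readChangeFeed", "true")
    .option("startingVersion", 1)
    .table("analytics.orders")
)

# CDF exposes metadata columns describing how each row changed.
changes.select("_change_type", "_commit_version", "_commit_timestamp").show()
```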
Security Manager error on AWS metadata service.
Schema detection failure occurred in Snowflake instances with tens of thousands of tables.
Case mismatch between Databricks and SCIM users.
Error occurred ("You may not access raw data directly") when querying Delta tables in Databricks.
Advanced Subscription policies AND'ed together could prevent auto-subscription policies from applying correctly.
Snowflake Governance Features Integration: If users created a database with non-default collation, and then they created a table with an explicit collation that didn't match the default collation, applying native policies failed.
LDAP Sync:
Each web worker (instead of a single web worker) kicked off LDAP Sync.
Users who belonged to more than one group that had the same name in Immuta could not log in.
Native Snowflake: Schema monitoring, Global or Local policy changes, and data source creation timed out or failed to make updates until web pods were restarted.
Power BI: The client failed to list out databases on a Databricks cluster because of a query timeout.
Creating a view in a scratch path database from a Snowflake data source resulted in an error: Error in SQL statement: NoSuchElementException: key not found: <masked column>
The table below outlines the available features currently in preview for this release and when they were introduced.
Preview levels
The design partner level is for SaaS customers only.
In this preview level, Immuta launches an initial limited-functionality feature with a select group of customers to solve a specific challenge. The goal of this preview level is to validate that the solution solves the challenge in a way that is valuable, usable, and feasible.
Throughout the feature development and launch processes, Product Management and Engineering meet regularly with the customer to gather feedback and help implement the feature. When the process starts, entire portions of the feature may be missing from the product, but the customer receives regular (potentially weekly) updates of the feature from the Engineering team.
Design partner level features do not have support SLAs or Immuta customer support engagement; the customer works solely with the Immuta Product team. Design partner feature functionality is subject to change, discontinuation, and discontinuation of support at Immuta’s sole discretion. Immuta makes no delivery date commitments.
Private preview features approximately match the product offered to the general public. Immuta only makes changes to the feature after gathering feedback or discovering unexpected implications of the feature.
Immuta invites customers to the private preview, and they are required to engage with Immuta Product Management to provide feedback about the feature.
Immuta makes commercially reasonable efforts to support private preview functionality; however, such support is not subject to SLA targets or processes. Immuta immediately closes support tickets that are filed and redirects customers to the Product Manager in charge of the feature. Private preview functionality is subject to change, discontinuation, and discontinuation of support at Immuta’s sole discretion.
Public preview features match the product offered to the general public. Immuta only changes the feature to address bugs.
Public preview features are fully documented on the Immuta website, but customers are encouraged (not required) to engage with Product Management and Customer Success to enable the feature.
Immuta makes commercially reasonable efforts to support public preview functionality; however, such support is not subject to the normal SLA targets and will not be considered priority level 1 or 2. If public preview functionality impacts or is believed to reasonably impact other fully supported functionality, the customer must disable the public preview functionality; SLA targets and processes only apply once the public preview functionality is disabled. Issues discovered (even at priority levels 3 and 4) with public preview functionality will be resolved at Immuta’s sole discretion.
GA features are complete and available to all customers. Full SLA targets and processes apply.
Use one of the to apply these tags
Export UAM events to or
Use , which still contain query text
Specify exempted users directly in your policies using the principles of
Create an "Allow individually selected users"
Legacy audit UI and /audit API (Use instead.)
Legacy Databricks SQL integration (Use the instead.)
Non-native sensitive data discovery (Use instead.)
Snowflake integration with low row access policy mode disabled (Follow this to enable low row access policy mode. You must also .)
Feature | Preview level | Introduced
Allow masked columns as input for row-level policies in the Snowflake and Databricks Unity Catalog integrations | Public preview | September 2023
| Private preview | January 2024
| Public preview | March 2024
| Private preview | September 2022
Connections for Snowflake and Databricks Unity Catalog | Public preview | December 2024
| Design partner | May 2024
| Public preview | July 2022
| Private preview | October 2022
| Public preview | September 2024
| Public preview | 2021
| Private preview | September 2024
| Public preview | February 2024
| Public preview | November 2021
| Public preview | April 2024
| Public preview | April 2024
| Private preview | October 2022
| Public preview | 2021
| Private preview | December 2022
| Private preview (SaaS only) | May 2024
| Private preview | September 2023
| Private preview | February 2024
| Private preview | April 2024
Preview level | Availability | Support | Documentation | Point of contact
Design partner | Invitation only | None | None | Product Management (required)
Private preview | Invitation only | Best effort | Yes | Product Management (required)
Public preview | Customer request | Limited SLAs | Yes | Customer Success and Sales Engineering
GA | No action required | Full SLAs | Yes | Customer Success and Sales Engineering