
Immuta v2024.2 Release Notes

Immuta v2024.2.1

Immuta v2024.2.1 was released June 7, 2024.

Enhancements

  • Trino universal audit model available with Trino 435 using the Immuta Trino plugin 435.1: For customers who are using EMR 7.1 with Trino 435.1 and have audit requirements, the Immuta Trino 435.1 plugin now supports audit in the universal audit model. Its audit information is on par with that of the Immuta Trino 443 plugin. The Immuta Trino 435.1 plugin is supported on SaaS and on versions 2024.2 and later.

  • Data owners can now see audit events for the data sources that they own without having the AUDIT Immuta permission: Data owners can see query events for their data sources on the audit page, data overview page, data source pages, and the data source activity tab. They can also inspect Immuta audit events on the audit page and activity tab for the data sources they own. This enhancement gives data owners full visibility of activity in the data sources they own.

Bug fixes

  • Fixed general UI performance issues.
  • Fixed issues that caused Immuta to fail to pass the SSL certificate supplied by customers using an external metadata database.
  • Fixed IAM integrations with SCIM enabled, which did not support backslashes (\) in usernames.
  • Fixed UI performance issues caused by subscription policies that included variables (@host, @database, @schema, @table).
  • Fixed an issue where Immuta did not escape or encode special slash and backslash characters (/, \) in usernames, which resulted in bad API requests.
  • Fixed issues with data sources that had custom schema/table names and formats after a Redshift integration was deleted and re-enabled.
  • Fixed the Databricks Unity Catalog integration configuration failing to save when OAuth token passthrough was used as the authentication method.
  • Fixed an issue that caused Redshift data source subscriptions to fail when users were subscribed to a large number of data sources.

Immuta v2024.2.0

Immuta v2024.2.0 was released May 10, 2024.

New features

Immuta Detect

Immuta Detect is a tool that monitors your data environment and provides analytic dashboards in the Immuta UI based on audit information of your data use.

  • Query monitoring with webhook notifications for Databricks, Snowflake, and Starburst (Trino) in public preview: Immuta Detect monitors help you surface non-compliant data combinations and maintain data availability through data platform configuration changes. Monitors can notify you when user activity metrics exceed your intended operating thresholds, and they work with query tags, query execution outcomes, and Immuta Discover classification sensitivities when enabled.

  • Dynamic query classification in private preview: For a query that joins tables, Immuta applies the same classification rules used for the underlying tables to the columns of the query, assigns a new set of classification tags to those query columns, and calculates sensitivity for the query event in the audit record. These query classification tags are not included in the table's data dictionary.

  • Universal audit model: Over 90 audit events are captured and can be exported to S3 or ADLS Gen2. See the full list of supported events on the Universal audit model (UAM) page.

Immuta Discover

  • Native sensitive data discovery (SDD): Native SDD is available for Snowflake and Databricks in general availability, and Starburst (Trino) and Redshift in private preview. Native SDD automatically discovers and tags your data based on the identifiers it matches but, unlike non-native SDD, it does not persist or move any of your data. It is enabled by default.

    • SDD tag context: Native SDD leaves legacy SDD tags in place even when a subsequent re-scan of a data source does not find them, so customers who begin using native SDD see results with no impact to prior legacy SDD tags. See the Migrate legacy to native SDD page for more details.
  • Built-in classification frameworks in private preview: Immuta comes preconfigured with a bundle of classification frameworks that are ready to use out of the box once endorsed by your organization's admins. These frameworks are designed by Immuta’s Legal Engineering and Research Engineering teams and informed by data privacy regulations and security standards: GDPR, CCPA, GLBA, HIPAA, PCI, and global best practices. They are a starting point for companies to customize to their own classification, security, and risk policies.

Immuta Secure

  • Write policies for Starburst (Trino) and Amazon S3 in private preview: In addition to read operations, Immuta's Starburst (Trino) and Amazon S3 integrations now support fine-grained access permissions for write operations.

  • Project-scoped purpose exceptions for Snowflake and Databricks Unity Catalog integrations in public preview: Row- and column-level policies can now account for purposes and projects for additional security. With this policy configuration, a user can view the data a policy applies to only if they are acting under a certain purpose and that data is within their current project. Purpose exception policies ensure data is only used for its intended purposes.

  • Support for protecting more than 10,000 objects with Unity Catalog row- and column-level policies: Users can now mask more than 10,000 columns and protect more than 10,000 tables with row filters, removing the previous limitation in the Unity Catalog integration. This enhancement provides greater flexibility and scalability for data masking operations, allowing users to effectively secure sensitive data across larger datasets.

  • Faster query performance with Snowflake memoizable functions in public preview: When a policy is applied to a column, Immuta now uses Snowflake memoizable functions to cache the results of common policy lookups. When users subsequently query a column with the applied policy, Immuta leverages the cached result, which significantly enhances query performance. Contact your customer success manager for more details.

  • Disable external usernames with invalid Databricks identities: Databricks user identities for Immuta users will now be automatically marked as invalid when the user is not found during policy application. This will prevent them from being affected by Databricks policy until manually marked as valid again in their Immuta user profile. This change drastically improves syncing performance of subscription policies for Databricks Unity Catalog integrations when Immuta users are not present in the Databricks environment.

  • Disable k-anonymization by default and allow users to opt in: When a k-anonymization policy is applied to a data source, the columns targeted by the policy are queried under a fingerprinting process that generates rules enforcing k-anonymity. The results of this query, which may contain data that is subject to regulatory constraints such as GDPR or HIPAA, are stored in Immuta's metadata database.

    To ensure this process does not violate your organization's data localization regulations, you need to first activate this masking policy type before you can use it in your Immuta tenant. To enable k-anonymization, adjust the setting on the Immuta app settings page. If you have existing k-anonymization policies, those policies will not be affected by this change.

  • Collibra PII assignments: When pulling personally identifiable information (PII) from Collibra, Immuta now includes and differentiates true and false value assignments as Personally Identifiable Information.true and Personally Identifiable Information.false to more accurately reflect how PII is set in Collibra.

  • Integrations API: The Integrations API will be enabled by default when users upgrade to 2024.2. With this feature, the Integrations UI is in a new section of the product. Also, when creating native Snowflake integrations, tag extraction will no longer be an option. Users can set up tag extraction and manage existing Snowflake external catalogs via the External Catalog section of the Immuta app settings page.

  • Running table statistics only if required (instead of by default): Table statistics consist of row counts, identification of high cardinality columns, and a sample data fingerprint. Immuta needs to collect this information in order to support the following data access policy types:

    • Column masking with randomized response
    • Column masking with format preserving masking
    • Column masking with k-anonymization
    • Column masking with rounding
    • Row minimization

    Prior to this change, table statistics were collected for every newly onboarded object by default, unless the object had a Skip_Stats tag applied. After this change, table statistics are collected on a data object only once they are required (i.e., when one of the policy types above is applied); even then, the Skip_Stats tag continues to be respected. This change significantly reduces the number of standard operations during data object onboarding, which results in performance improvements.

  • Alation custom fields integration: In addition to Alation standard tags, Immuta’s Alation integration now also supports pulling information from Alation custom fields as tags into Immuta.

User experience updates

  • Improved user experience for managing users, data sources, and policies in public preview: This deployment includes significant user experience updates focused on enhancing Immuta's key entities: users, data sources, and policies.

    • The People section has a more intuitive experience with notable changes. Users and groups have been split into two separate tabs. The first tab provides an overview of a user or group, while the second tab contains detailed settings, such as permissions, attributes, and associated groups.

      Another enhancement in the People section is the new Attributes page, which centralizes all information about an attribute, including the users or groups it applies to.

    • The Data Sources section has been completely redesigned to offer a more efficient search and filter experience. Users can preview details of a data source through expandable rows on the list and access bulk actions for data sources more easily.

    • The Policy section includes an updated list with improved search and filter capabilities. Additionally, a policy detail page allows users to view comprehensive policy information, take action, edit policies, and see a list of targeted data sources.
  • “Pending” policy state: A new Pending policy state indicates when background jobs are running to update permissions after a policy is created or changed. Once the Pending state changes to Active, all policy changes have been enforced on affected data sources.

  • Color coding for data source health: The health status for each data source on the data source list page now uses color coding so users can quickly determine whether they need to take action related to data source health. Additionally, unhealthy data sources are ranked at the top of the list so that users are aware of them as soon as they log in to Immuta. Prior to this change, users had to click through all data source pages or explicitly set up a filter to achieve the same behavior.

  • Updates to button labels: Two buttons have been renamed to align their labels more closely with their functionality.

    • The "Sync Native Policies" button has been renamed to "Sync Data Policies" to better reflect its function.
    • The "Refresh Native Views/Policies" button has been renamed to "Refresh Native Views/Data Policies" for improved accuracy.

Dark mode and other usability updates

  • The new user profile page organizes information more clearly, making it easier to understand.
  • Keyboard shortcuts are now available for some common functions. Keep an eye out for in-app guidance that shows how to use them.
  • The account menu is wider for better readability and now has an option to toggle between light and dark mode. By default, Immuta still uses your browser settings.
  • Browser tabs now tell you which page you’re on, instead of all being labeled “Immuta Console.” A new, adaptive favicon lets you tell at a glance that it’s Immuta, whether you’re in light or dark mode.
  • Fixed a UI issue that led customers to believe that disabled users were not getting their access revoked; the UI now filters disabled users out of the data source members tab.

Deprecations and breaking changes

Deprecation announcements

Deprecated items remain in the product with minimal support until their end of life date.

| Feature | Deprecation notice | End of life (EOL) |
|---|---|---|
| Derived data sources | 2024.2 | 2024.4 |
| Managing the default subscription policy | 2024.2 | 2024.4 |

Removed features (EOL)

| Feature | Deprecation notice | End of life (EOL) |
|---|---|---|
| Amazon EMR Spark & Hive proxy connector | 2023.2 | 2024.2 |
| Azure Data Lake Storage proxy connector | 2023.3 | 2024.2 |
| Azure SQL proxy connector | 2023.3 | 2024.2 |
| Data source expiration dates | 2023.2 | 2024.2 |
| dbt integration | 2024.1 | 2024.2 |
| Databricks Spark with Unity Catalog support | 2024.1 | 2024.2 |
| Non-Unity Databricks SQL view-based integration | 2023.3 | 2024.2 |
| Discussions tab | 2023.3 | 2024.2 |
| HIPAA expert determination and templated policies (HIPAA and CCPA) | 2023.3 | 2024.2 |
| Interpolated WHERE clause | 2023.2 | 2024.2 |
| Legacy Amazon S3 proxy | 2023.3 | 2024.2 |
| Legacy sensitive data discovery (SDD) | 2023.3 | Starting to remove after 2024.2 |
| Legacy Starburst (Trino) integration | 2023.2 | 2024.2 |
| MySQL proxy connector | 2024.1 | 2024.2 |
| Query editor (now turned off by default) | 2023.3 | Starting to remove with 2024.2 |
| Single Node Docker installation | 2023.2 | 2024.2 |
| Legacy Snowflake view-based integration (Snowflake integration without Snowflake Governance features) | 2023.2 | 2024.2 |
| Tableau connector | 2023.3 | 2024.2 |

Breaking changes

  • Change to POST /tag/{modelType}/{modelId} endpoint: The POST /tag/{modelType}/{modelId} endpoint (which adds tags to models that can be tagged, such as data sources and projects) can now only apply tags that already exist. This update presents one breaking change: a 404 status listing the invalid tag(s) is now returned instead of a 200 status, and no tags are processed if any invalid tags are found.

    {
      "statusCode": 404,
      "error": "Not Found",
      "message": "Tags with the names [`country`, `sensitive`] do not exist."
    }
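Clients that previously inspected only the status code may need updating for the new all-or-nothing behavior. A minimal sketch, assuming a hypothetical post_tags callable (not part of any Immuta SDK) that performs the HTTP request and returns the status code and parsed JSON body:

```python
def apply_tags_or_report(post_tags, model_type, model_id, tag_names):
    """Apply tags via POST /tag/{modelType}/{modelId}, surfacing the new 404.

    post_tags is a hypothetical callable returning (status_code, json_body);
    it stands in for whatever HTTP client the caller uses.
    """
    status, body = post_tags(model_type, model_id, tag_names)
    if status == 404:
        # As of 2024.2, no tags are processed if any name is invalid,
        # so the whole batch must be retried after fixing the names.
        raise ValueError("No tags applied: " + body["message"])
    return body
```
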
    
  • Change to POST /tag/column/{datasource_id}_{column_name} endpoint: The POST /tag/column/{datasource_id}_{column_name} endpoint (which adds tags to columns on data sources) can only tag existing columns on data sources. It does this by checking the dictionary associated with the data source to see if the desired column exists on the data source. This deployment introduces two breaking changes:

    • Column does not exist 404: When the column does not exist on the data source, a 404 status is now returned instead of a 200.

      {
        "statusCode": 404,
        "error": "Not Found",
        "message": "Data Source {datasourceId} does not have a column named '{columnName}' (case-sensitive)."
      }
      
    • Dictionary does not exist 404: When an associated dictionary does not exist on the specified data source (that you have access to add tags to), a 404 status is now returned instead of a 403.

      {
        "statusCode": 404,
        "error": "Not Found",
        "message": "Could not find column information for Data Source {datasourceId}."
      }
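Both new responses share the same 404 status, so a client that needs to branch on the cause can only inspect the message text. A minimal sketch under that assumption (matching on message wording is illustrative, not a documented contract):

```python
def classify_column_tag_404(body):
    """Distinguish the two 404 bodies returned by
    POST /tag/column/{datasource_id}_{column_name} as of 2024.2.

    Matching on message text is an assumption; no dedicated
    error code separates the two cases.
    """
    message = body.get("message", "")
    if "does not have a column named" in message:
        return "missing_column"
    if "Could not find column information" in message:
        return "missing_dictionary"
    return "unknown"
```
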
      
  • Change to POST /project: Users will now receive a 422 status error instead of a 400 status error when attempting to create a project whose name conflicts with an existing project's unique name.

  • Change to POST /api/v2/data response: The creating field will not be returned in the response the first time this endpoint is used; the response will include only bulkId and connectionString. However, when updating a data source using POST /api/v2/data, the response will include creating: [] (with no data source names inside the array).
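Client code that reads creating unconditionally will now fail with a missing key on first use. One way to stay compatible with both response shapes is to treat an absent creating field as an empty list; a minimal sketch of that field handling (not an official client):

```python
def summarize_v2_data_response(body):
    """Normalize a POST /api/v2/data response across both 2024.2 shapes.

    On first use, creating is absent; on updates it is present but may be
    an empty list. Treating the missing key as an empty list keeps
    downstream code uniform.
    """
    return {
        "bulkId": body["bulkId"],
        "connectionString": body["connectionString"],
        "creating": body.get("creating", []),
    }
```
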

v2024.2 migration notes

  • You must be on Immuta version 2022.5 or newer to migrate directly to 2024.2.

  • Integrations API: If you did not have the Integrations API turned on prior to 2024.2.0, then when the tenants are restarted after upgrading, the system will perform a short migration of the native integrations from the global configuration to the new native integration metadata tables that support the Integrations API.