Data Discovery

Sensitive data discovery (SDD) is an Immuta feature that uses data patterns to determine what type of data your columns contain. Using frameworks, rules, and patterns, Immuta evaluates your data and can assign the appropriate tags in your data dictionary based on what it finds. This saves you the time of identifying your data manually and provides a standard taxonomy across all your data sources in Immuta.

Supported technologies

Native SDD supports data discovery on data sources from the following technologies:

  • Snowflake
  • Databricks or Databricks Unity Catalog
  • Starburst (Trino)

    Public preview

    Native SDD for Starburst (Trino) is currently in public preview and available to all accounts. Please reach out to your Immuta representative to enable it on your tenant.

  • Redshift

    Private preview

Native SDD for Redshift is currently in private preview. Please reach out to your Immuta representative to enable it on your tenant.

Architecture

To evaluate your data, SDD generates a SQL query from the identification framework's rules; the Immuta system account then executes that query in the native technology. Immuta receives only the query result, which contains the column names and the rules they matched but no raw data values. Immuta then applies the corresponding tags to the appropriate columns.
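The exact SQL that Immuta generates is not published, but the idea can be sketched as follows: the pattern evaluation happens inside the warehouse, and only aggregate match counts per column are returned, never raw values. The function, table names, sample size, and Snowflake-style `REGEXP_LIKE`/`COUNT_IF` usage below are illustrative assumptions, not Immuta's actual query.

```python
# Illustrative sketch only: builds a Snowflake-style query that counts regex
# matches per column over a row sample, so the warehouse returns match counts
# (used to decide tags) rather than any raw data values.
def build_discovery_query(table: str, columns: list[str], pattern: str) -> str:
    """Assemble a query counting, per column, how many sampled values match
    the given regex. All names and the sample size are assumptions."""
    match_exprs = ",\n  ".join(
        f"COUNT_IF(REGEXP_LIKE(CAST({c} AS VARCHAR), '{pattern}')) AS {c}_matches"
        for c in columns
    )
    return (
        f"SELECT\n  COUNT(*) AS sampled_rows,\n  {match_exprs}\n"
        f"FROM (SELECT * FROM {table} SAMPLE (1000 ROWS))"
    )

query = build_discovery_query("orders", ["email", "notes"], r"^[^@]+@[^@]+[.][^@]+$")
```

Because only counts cross the wire, this style of evaluation keeps sensitive values inside the native technology.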

This evaluating and tagging process occurs when SDD runs, which is triggered automatically by the following events:

  • A new data source is created.
  • Schema monitoring is enabled and a new data source is detected.
  • Column detection is enabled and new columns are detected. Here, SDD will only run on new columns and no existing tags will be removed or changed.

Users can also manually trigger SDD to run from a data source's overview page or the identification frameworks page.

Components

Sensitive data discovery (SDD) runs frameworks to discover data. A framework is a collection of rules, and each rule contains a single criteria and the resulting tags that are applied when the criteria's conditions are met. See the sections below for more information on each component.

Identification framework

An identification framework is a collection of rules that look for particular criteria and tag any columns where those conditions are met. While organizations can have multiple frameworks, only one may be applied to each data source. Immuta includes the built-in Default Framework, which contains all the built-in patterns and assigns the built-in Discovered tags based on pattern matching.

For a how-to on the framework actions users can take, see the Manage frameworks page.

Global framework

Each organization has a single global framework that will apply to all the data sources in Immuta by default, unless they have a different framework assigned. It is labeled on the frameworks page with a globe icon. Users can bypass this global framework by applying a specific framework to a set of data sources.

Rule

A rule pairs a criteria with the resulting tags to apply to data that matches it. When Immuta recognizes the criteria, it tags the data to describe its type. Each rule belongs to a single framework, but all of a framework's rules can be copied to create a new framework.

For a how-to on the rule actions users can take, see the Manage rules page.

Criteria

Criteria are the conditions that need to be met for resulting tags to be applied to data.

Supported criteria types

  • Competitive pattern analysis: This criteria reviews all the regex and dictionary patterns within the rules of the framework and selects the pattern with the best fit. If multiple rules in a framework use competitive pattern analysis, only one will be applied to any given column. To learn more, see the How competitive pattern analysis works guide.
  • Column name: This criteria matches a column name pattern to the column names in the data sources. The rule's resulting tags will be applied to the column where the name is found.
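The competitive selection above can be sketched in miniature. Immuta's actual scoring is not public; this sketch assumes a simple rule of "fraction of sampled values matched", with a minimum confidence threshold and a single winner per column, purely to illustrate why at most one competitive rule tags any column.

```python
# Hypothetical sketch of competitive pattern analysis: every candidate
# pattern is scored against the same sample, and only the single best-scoring
# pattern above a confidence threshold "wins" the column.
import re

def pick_winner(values, patterns, min_confidence=0.8):
    """patterns: dict of name -> compiled regex. Returns the one winning
    pattern name or None. The scoring rule is an assumption."""
    best_name, best_score = None, 0.0
    for name, rx in patterns.items():
        score = sum(bool(rx.fullmatch(v)) for v in values) / len(values)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= min_confidence else None

values = ["123-45-6789", "987-65-4321", "111-22-3333", "222-33-4444"]
patterns = {
    "ssn": re.compile(r"\d{3}-\d{2}-\d{4}"),
    "phone": re.compile(r"\d{3}-\d{3}-\d{4}"),
}
winner = pick_winner(values, patterns)  # "ssn" outcompetes "phone" here
```

Even though both patterns are plausible for hyphenated digits, only the better-fitting one tags the column.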

Pattern

A pattern defines the type of data Immuta looks for when deciding whether to tag a column. Patterns can be used in rules across multiple frameworks, but only once within each framework. Immuta comes with built-in patterns to discover common categories of data; these patterns cannot be modified and are used in preset rules with preset tags. Users can also create their own patterns to find data specific to their organization. SDD only supports regex patterns written in RE2 syntax.

Supported pattern types

The three types of patterns are described below:

  • Regex: This pattern contains a case-insensitive regular expression that searches for matches against column values.
  • Column name: This pattern includes a case-insensitive regular expression that is only matched against column names, not against the values in the column.
  • Dictionary: This pattern contains a list of words and phrases to match against column values.
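The three pattern types can be illustrated with a few lines of code. Note that Immuta requires RE2 syntax; Python's `re` module is used here only because simple expressions like these behave the same in both engines, and the patterns themselves are made-up examples.

```python
# Illustration of the three pattern types (example patterns are assumptions).
import re

# Regex pattern: case-insensitive, matched against column VALUES.
value_pattern = re.compile(r"[A-Z]{2}\d{6}", re.IGNORECASE)
value_hit = bool(value_pattern.fullmatch("ab123456"))   # matches despite lowercase

# Column name pattern: matched against column NAMES only, never values.
name_pattern = re.compile(r".*(ssn|social_security).*", re.IGNORECASE)
name_hit = bool(name_pattern.fullmatch("Customer_SSN"))

# Dictionary pattern: a list of words/phrases matched against column values.
dictionary = {"alabama", "alaska", "arizona"}
dict_hit = "alaska" in dictionary
```

The key distinction is what each pattern is compared against: values for regex and dictionary patterns, names for column name patterns.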

Configuration

Only application admins can enable sensitive data discovery (SDD) globally on the Immuta app settings page. Then, data source creators can disable SDD on a data-source-by-data-source basis.

Tag mutability

When SDD is manually triggered by a data owner, all column tags that were previously applied by SDD are removed and the tags prescribed by the latest run are applied. However, if SDD is triggered because a new column is detected by schema monitoring, tags will only be applied to the new column, and no tags will be modified on existing columns. Additionally, governors, data source owners, and data source experts can disable any unwanted Discovered tags in the data dictionary to prevent them from being used and auto-tagged on that data source in the future.

Performance

The amount of time it takes to identify data depends on the number of text columns in the data source and the number of patterns in the framework. For Snowflake, the number of rows has little impact because data sampling has near-constant performance. However, views perform significantly worse due to extra query compilation time. Additionally, performance may vary based on row count depending on the sampling used by each technology.

The time it takes to run SDD for all newly onboarded data sources in Immuta is not limited by SDD performance but by the execution of background jobs in Immuta. Consult your Immuta account manager when onboarding a large number of data sources to ensure the advanced settings are set appropriately for your organization.

Testing

For users interested in testing SDD, note that Immuta's built-in patterns require a minimum level of confidence before a tag is assigned to a column. This means that synthetic data may not be realistic enough to reach the confidence needed to match patterns. To test SDD, use a dev environment, create copies of your tables, or use the API to run a dryRun and preview the tags SDD would apply to your data.
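A dry run is invoked over Immuta's HTTP API. The endpoint path, payload shape, and header below are assumptions for illustration only; consult the Immuta API reference for the real dryRun route. The sketch builds the request without sending it, so it can be inspected first.

```python
# Sketch of preparing an SDD dry-run API call. The "/sdd/dryRun" path and
# the payload field names are ASSUMPTIONS, not the documented Immuta API.
import json
from urllib.request import Request

def build_dry_run_request(base_url: str, api_key: str, datasource_id: int) -> Request:
    """Build (but do not send) a request that would preview SDD tags."""
    url = f"{base_url}/sdd/dryRun"  # hypothetical path
    body = json.dumps({"dataSourceId": datasource_id}).encode()
    return Request(url, data=body, method="POST",
                   headers={"Authorization": api_key,
                            "Content-Type": "application/json"})

req = build_dry_run_request("https://immuta.example.com/api", "my-key", 42)
```

Sending such a request (e.g. with `urllib.request.urlopen`) would return the tags SDD would apply, without actually modifying the data dictionary.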

Considerations

  • Deleting the built-in Discovered tags is not recommended: If you delete built-in Discovered tags and use the Default Framework, columns will not be tagged when the pattern is matched. As an alternative, tags can be disabled on a column-by-column basis from the data dictionary, or SDD can be turned off on a data-source-by-data-source basis when creating a data source.

  • Limitations with regex patterns:

    • Regex patterns are case sensitive.
    • Regex patterns are only supported on columns with the data type string.
  • Limitations with dictionary patterns:

    • Immuta compiles dictionary patterns into a regex that is sent in the body of a query. For Snowflake, the size of the dictionary is limited by the overall query text size limit in Snowflake of 1 MB.
    • Dictionary patterns are only supported on columns with the data type string.
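The dictionary-to-regex compilation described above can be sketched directly: each term is escaped and joined into a single alternation, and on Snowflake the resulting query text must stay under the 1 MB limit. How Immuta actually escapes and assembles the pattern is an assumption here.

```python
# Sketch of compiling a dictionary pattern into one alternation regex and
# checking it against Snowflake's 1 MB query-text limit.
import re

SNOWFLAKE_QUERY_LIMIT_BYTES = 1_000_000  # Snowflake's 1 MB query text limit

def compile_dictionary(words: list[str]) -> str:
    """Escape each term and join into a single alternation pattern."""
    pattern = "|".join(re.escape(w) for w in words)
    if len(pattern.encode("utf-8")) > SNOWFLAKE_QUERY_LIMIT_BYTES:
        raise ValueError("dictionary too large for Snowflake's 1 MB query limit")
    return pattern

pattern = compile_dictionary(["alabama", "alaska", "new york"])
```

Because the whole word list travels inside the query text, very large dictionaries hit the query-size ceiling long before any row-count limit matters.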

Databricks limitations

  • For Databricks, Immuta will start up a Databricks cluster to complete the SDD job if one is not already running. This can cause unnecessary cost if the cluster becomes idle. Follow Databricks best practices to automatically terminate inactive clusters after a set period of time.

  • SDD for Databricks only checks for rules on columns with the data type string.

  • Native SDD for Databricks Unity Catalog will only work on data sources authenticated with a personal access token (PAT). OAuth machine-to-machine (M2M) is not supported with SDD.

Starburst (Trino) limitation

Native SDD will only work on Starburst (Trino) data sources authenticated with username and password. OAuth 2.0 is not supported with SDD.

Redshift limitations

  • Redshift Spectrum is not supported with SDD.
  • The Redshift cluster must be up and running for SDD to successfully run.

Redshift supported authentication methods

  • Username and password is fully supported with native SDD.
  • Okta is not supported with native SDD.

  • AWS access key is supported with limitations with native SDD:

    • The AWS access key used to register the data source must be able to perform, at a minimum, the following redshift-data API actions:

      • redshift-data:BatchExecuteStatement
      • redshift-data:CancelStatement
      • redshift-data:DescribeStatement
      • redshift-data:ExecuteStatement
      • redshift-data:GetStatementResult
      • redshift-data:ListStatements
    • The AWS access key used to register the data source must have redshift:GetClusterCredentials for the cluster, user, and database used to onboard the data sources.

    • If using a custom URL, then the data source registered with the AWS access key must have the region and clusterid included in the additional connection string options formatted like the following:

      region=us-east-2;clusterid=12345
      
    • Redshift Serverless data sources are not supported for native SDD with the AWS access key authentication method.
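The Redshift permissions listed above could be granted with an IAM policy along these lines. This is a sketch: the account ID, region, cluster ID, database user, and database name in the ARNs are placeholders you must replace, and you may want to scope the redshift-data statement's `Resource` more tightly than `*`.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "redshift-data:BatchExecuteStatement",
        "redshift-data:CancelStatement",
        "redshift-data:DescribeStatement",
        "redshift-data:ExecuteStatement",
        "redshift-data:GetStatementResult",
        "redshift-data:ListStatements"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "redshift:GetClusterCredentials",
      "Resource": [
        "arn:aws:redshift:us-east-2:123456789012:cluster:12345",
        "arn:aws:redshift:us-east-2:123456789012:dbuser:12345/immuta_user",
        "arn:aws:redshift:us-east-2:123456789012:dbname:12345/analytics"
      ]
    }
  ]
}
```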

Migrating from legacy to native SDD

These limitations are only relevant to users who have previously enabled and run Immuta SDD.

If you had legacy SDD enabled, running native SDD can result in different tags being applied because native SDD is more accurate and has fewer false positives than legacy SDD. Running a new SDD scan against a table will change the context of the resulting tags, but no Discovered tags previously applied by legacy SDD will be removed.

See the Migrate from legacy to native SDD page for more information.