Data Discovery

Sensitive data discovery (SDD) is an Immuta feature that uses data patterns to determine what type of data your columns represent. Using identification frameworks and identifiers, Immuta evaluates your data and can assign the appropriate tags to your data dictionary based on what it finds. This saves you the time of manually identifying your data and provides the benefit of a standard taxonomy across all your data sources in Immuta.

Architecture

To evaluate your data, SDD generates a SQL query using the identification framework's identifiers; the Immuta system account then executes that query in the remote technology. Immuta receives the query result, which contains the column names and the identifiers they matched but no raw data values. Immuta then uses these results to apply the corresponding tags to the appropriate columns.
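
Immuta does not publish the exact SQL that SDD generates, but a rough sketch of the idea is shown below: a query built from the identifiers' patterns that returns only match counts per column, never raw values. The function names are Snowflake-flavored, and the table, columns, and identifier pattern are hypothetical.

```python
# Illustrative sketch only: this shows the general shape of a pattern-matching
# query -- match counts per column and identifier over a sample, so only column
# names and counts (never raw values) leave the warehouse. COUNT_IF, REGEXP_LIKE,
# and SAMPLE are Snowflake syntax; the table and identifier are hypothetical.

def build_sdd_query(table: str, columns: list[str], identifiers: dict[str, str],
                    sample_rows: int = 1000) -> str:
    """Build a sampling query that returns regex match counts per column and identifier."""
    projections = []
    for column in columns:
        for name, regex in identifiers.items():
            projections.append(
                f"COUNT_IF(REGEXP_LIKE({column}, '{regex}')) AS {column}__{name}"
            )
    return (
        f"SELECT {', '.join(projections)} "
        f"FROM {table} SAMPLE ({sample_rows} ROWS)"
    )

print(build_sdd_query(
    table="analytics.customers",   # hypothetical table
    columns=["email", "phone"],
    identifiers={"EMAIL": "^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+[.][A-Za-z]{2,}$"},
))
```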

This evaluation and tagging process occurs when identification runs. If a global framework is set, identification runs automatically after the following events:

  • A new data source is created.

  • Schema monitoring is enabled, and a new data source is detected.

The following actions will also trigger identification:

  • Column detection is enabled, and new columns are detected. Here, SDD will only run on the new columns, and no existing tags will be removed or changed. Note that this will use the identification framework that already ran on the data source.

  • A user manually triggers it from the data source health check menu. Note that this will use the identification framework that already applies to the data source or the global framework, if set.

  • A user manually triggers it from the identification frameworks page.

  • A user manually triggers it through the API.

Users can manually run identification from a data source's overview page, from the identification frameworks page, or through the API.
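
The exact route and payload for triggering identification through the API depend on your Immuta version, so the sketch below uses placeholder values throughout; consult the Immuta API reference for the real endpoint.

```python
# Hypothetical sketch of triggering identification via the API. The endpoint
# path, payload, and authentication header are placeholders, not the documented
# API -- check the Immuta API reference for your version.
import requests

IMMUTA_URL = "https://your-immuta-instance.example.com"   # placeholder
API_KEY = "your-api-key"                                   # placeholder

response = requests.post(
    f"{IMMUTA_URL}/sdd/run",            # placeholder path
    headers={"Authorization": API_KEY},
    json={"dataSourceIds": [42]},       # hypothetical payload
    timeout=30,
)
response.raise_for_status()
print(response.json())
```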

Components

Sensitive data discovery (SDD) runs frameworks to discover data. Each framework is a collection of identifiers, and each identifier contains a single criteria and the tags that will be applied when that criteria's conditions are met. See the sections below for more information on each component.

Identification framework

An identification framework is a group of identifiers that will look for particular criteria and tag any columns where those conditions are met.

While organizations can have multiple frameworks, only one may be applied to each data source. Immuta has the built-in "Default Framework," which contains all the built-in identifiers and assigns the built-in Discovered tags.
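
As a conceptual sketch only, a framework can be thought of as a named group of identifiers that is applied to a data source as a unit; the field names and identifier names below are hypothetical and do not reflect the Immuta API schema.

```python
# Illustrative sketch only: a framework is conceptually a named collection of
# identifiers, applied to data sources one framework at a time. Field names
# and identifier names are hypothetical.
hr_framework = {
    "name": "HR Framework",
    "description": "Identifiers for data owned by the HR team",
    "identifiers": ["EMPLOYEE_ID", "EMAIL", "US_PHONE"],  # identifiers grouped in this framework
}
```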

For a how-to on the framework actions users can take, see the Manage frameworks page.

Global framework

Each organization can set a global framework that applies to all data sources in Immuta by default, unless a data source has a different framework assigned. The global framework is labeled on the frameworks page with a globe icon. If a global framework is set, identification will run on all new data sources. If a global framework is not set, identification will only run on data sources that have been manually added to an identification framework.

Users can set any framework as the global framework or leave the global framework field blank.

Identifier

An identifier is a criteria paired with the tags to apply to data that matches that criteria. When Immuta recognizes the criteria, it can tag the data to describe its type.

Immuta comes with built-in identifiers to discover common categories of data. These identifiers cannot be modified or deleted. Users can also create their own unique identifiers to find their specific data.
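
As a conceptual sketch only, a custom identifier pairs one criteria with the tags to apply when it matches; the field names, pattern, and tag below are hypothetical rather than the exact Immuta schema.

```python
# Illustrative sketch only: an identifier pairs one criteria with the tags to
# apply on a match. Field names and the tag name are hypothetical.
employee_id_identifier = {
    "name": "EMPLOYEE_ID",
    "description": "Internal employee IDs such as EMP-12345",
    "criteria": {
        "type": "regex",                 # one of: regex, dictionary, column name
        "regex": "^EMP-[0-9]{5}$",       # RE2-compatible pattern
    },
    "tags": ["Discovered.Employee ID"],  # hypothetical custom tag
}
```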

Improved identifiers

A new and improved pack of built-in identifiers was released in October 2024.

If you are interested in these improved identifiers, reach out to your Immuta support professional.

For a how-to on the identifier actions users can take, see the Create an identifier page.

Supported criteria types for identifiers

  • Competitive criteria analysis: This criteria reviews all the regex and dictionary criteria within the framework's identifiers and searches for the identifier with the best fit. In this review, each competitive criteria analysis identifier in the framework competes against the others to find the best, most specific identifier that fits the data. The tags of that best identifier are then applied to the column; only one competitive criteria analysis identifier will apply per column. Competitive criteria identifiers, both built-in and custom, must match at least 90% of the data sampled (a simplified sketch of this scoring follows this list). To learn more about the competitive nature, see the How competitive criteria analysis works guide.

    • Regex: This criteria contains a case-insensitive regular expression that searches for matches against column values. SDD only supports regular expressions (regex) written in RE2 syntax.

    • Dictionary: This criteria contains a list of words and phrases to match against column values.

  • Column name: This criteria includes a case-insensitive regular expression matched against column names, not against the values in the column. The identifier's tags will be applied to the column where the name is found. Multiple column name identifiers can match a column and be applied. SDD only supports regular expressions (regex) written in RE2 syntax.
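
The sketch below illustrates the scoring idea under assumed behavior: each competitive identifier's pattern is evaluated against a sample of column values, identifiers under the 90% threshold are discarded, and the most specific survivor wins the column. The sample values and identifiers are made up, and Immuta's actual scoring is not published.

```python
# A minimal sketch of the competitive matching described above, under assumed
# behavior. SDD requires RE2-syntax regexes; Python's re module is used here
# purely for illustration.
import re

SAMPLE = ["jane@example.com", "joe@example.org", "n/a", "ops@example.net",
          "sara@example.com", "lee@example.io", "pat@example.com",
          "kim@example.com", "ana@example.com", "raj@example.com"]

IDENTIFIERS = {
    "EMAIL": r"^[\w.+-]+@[\w-]+\.[\w.]+$",   # specific pattern
    "FREE_TEXT": r"^.+$",                    # matches almost anything
}

def match_rate(regex: str, values: list[str]) -> float:
    """Fraction of sampled values that match the identifier's pattern."""
    pattern = re.compile(regex)
    return sum(bool(pattern.search(v)) for v in values) / len(values)

scores = {name: match_rate(rx, SAMPLE) for name, rx in IDENTIFIERS.items()}
qualified = {name: s for name, s in scores.items() if s >= 0.90}
print(scores)       # {'EMAIL': 0.9, 'FREE_TEXT': 1.0}
print(qualified)    # both clear the 90% bar; the more specific identifier would win the column
```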

Supported technologies

Sensitive data discovery has varied support for data sources from different technologies based on the identifier type.

| Technology | Regex | Dictionary | Column name regex |
| --- | --- | --- | --- |
| Snowflake | Supported | Supported | Supported |
| Databricks | Supported | Supported | Supported |
| Starburst (Trino) | Supported in public preview (see limitations) | Supported in public preview (see limitations) | Supported |
| Redshift | Supported in private preview (see limitations) | Supported in private preview (see limitations) | Supported |
| Azure Synapse Analytics | Not supported | Not supported | Supported |
| Amazon S3 | Not supported | Not supported | Supported |
| Google BigQuery | Not supported | Not supported | Supported |

Configuration

To run SDD on your data sources, configure a global framework or add the data sources to an identification framework.

Tag mutability

When SDD is manually triggered by a data owner, all column tags previously applied by SDD are removed and the tags prescribed by the latest run are applied. However, if SDD is triggered because a new column is detected by schema monitoring, tags will only be applied to the new column, and no tags will be modified on existing columns. Additionally, governors, data source owners, and data source experts can disable any unwanted Discovered tags in the data dictionary to prevent them from being used and auto-tagged on that data source in the future.

Performance

The amount of time it takes to run identification on a data source depends on several factors:

  • Columns: The time to run identification grows nearly linearly with the number of text columns in the data source.

  • Identifiers: The number of identifiers in use has only a minor impact on the time to run identification.

  • Row count: Performance of identification may vary depending on the sampling method used by each technology. For Snowflake, the number of rows has little impact on the time because data sampling has near-constant performance.

  • Views: Performance on views is limited by the performance of the query that defines the view.

The time it takes to run identification for all newly onboarded data sources in Immuta is not limited by SDD performance but by the execution of background jobs in Immuta. Consult your Immuta account manager when onboarding a large number of data sources to ensure the advanced settings are set appropriately for your organization.

Testing

For users interested in testing SDD, note that Immuta's built-in identifiers require a 90% match against the sampled data before a tag is assigned to a column. With synthetic data, the values may not be realistic enough to reach that confidence threshold. To test SDD, use a dev environment, create copies of your tables, or use the API to run a dryRun and see the tags that SDD would apply to your data.
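
A dryRun preview might look like the hedged sketch below; the endpoint path, payload, and response shape are placeholders, so check the Immuta API reference for the exact dryRun option available in your version.

```python
# Hypothetical sketch of previewing SDD results without writing tags. The
# endpoint path, payload, and response shape are placeholders.
import requests

IMMUTA_URL = "https://your-immuta-instance.example.com"   # placeholder
API_KEY = "your-api-key"                                   # placeholder

response = requests.post(
    f"{IMMUTA_URL}/sdd/run",                          # placeholder path
    headers={"Authorization": API_KEY},
    json={"dataSourceIds": [42], "dryRun": True},     # dryRun: report tags, don't apply them
    timeout=60,
)
response.raise_for_status()
for column in response.json().get("columns", []):     # hypothetical response shape
    print(column.get("name"), column.get("tags"))
```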

Considerations

Deleting the built-in Discovered tags is not recommended: if you delete built-in Discovered tags and use the Default Framework, then when an identifier matches, the column will not be tagged. As an alternative, tags can be disabled on a column-by-column basis from the data dictionary, or SDD can be turned off on a data-source-by-data-source basis when creating a data source.

Supported data types and casing

| Type of identifier | Supported data types | Case sensitivity |
| --- | --- | --- |
| Data regex* | Text string columns | Case-sensitive |
| Column name regex | Any column | Not case-sensitive |
| Dictionary | Text string columns | Can be toggled in the identifier definition |

*Two built-in patterns also match based on additional data types:

  • DATE: Columns will match this identifier if they are string columns and the regex matches, or if the data type is date, date+time, or timestamp.

  • TIME: Columns will match this identifier if they are string columns and the regex matches, or if the data type is time. Note that if the date is included in the data, it will not match this identifier.

Limitations with dictionary patterns

Immuta compiles dictionary patterns into a regex that is sent in the body of a query.

For Snowflake, the size of the dictionary is limited by the overall query text size limit in Snowflake of 1 MB.
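
As a minimal sketch of why the limit matters, assuming each dictionary entry becomes one branch of an alternation regex embedded in the query text, the snippet below estimates how much query text a dictionary produces against Snowflake's 1 MB limit. The word list and query shape are illustrative.

```python
# A minimal sketch of the constraint described above, under the assumption
# that a dictionary is compiled into a single alternation regex embedded in
# the query text. The word list, query shape, and escaping are illustrative.
import re

dictionary = ["cardiology", "oncology", "radiology"]   # hypothetical word list

# Compile the dictionary into one alternation pattern.
pattern = "|".join(re.escape(term) for term in dictionary)

# Snowflake rejects query text larger than 1 MB, so a very large dictionary
# can push the generated query past that limit.
query = f"SELECT COUNT_IF(REGEXP_LIKE(col, '({pattern})')) FROM t"
print(len(query.encode("utf-8")), "bytes of query text (limit: 1,048,576)")
```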

Databricks limitation

For Databricks, Immuta will start up a Databricks cluster to complete the SDD job if one is not already running. This can cause unnecessary costs if the cluster becomes idle. Follow Databricks best practices to automatically terminate inactive clusters after a set period of time.
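
One way to follow that guidance, sketched below under the assumption that Immuta attaches to an existing interactive cluster, is to set autotermination_minutes on that cluster through the Databricks Clusters API. The workspace URL, token, and cluster ID are placeholders, and the edit endpoint also expects the rest of the cluster definition.

```python
# Hedged sketch: cap idle-cluster cost by setting autotermination_minutes on
# the cluster Immuta uses. The payload below is only a fragment of a Databricks
# Clusters API edit request; placeholders throughout.
import requests

WORKSPACE_URL = "https://your-workspace.cloud.databricks.com"  # placeholder
TOKEN = "your-databricks-token"                                # placeholder

cluster_spec = {
    "cluster_id": "1234-567890-abcde123",   # placeholder
    "autotermination_minutes": 30,          # terminate after 30 idle minutes
    # The edit endpoint also requires the rest of the cluster definition
    # (spark_version, node_type_id, workers, etc.), omitted here.
}

resp = requests.post(
    f"{WORKSPACE_URL}/api/2.0/clusters/edit",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=cluster_spec,
    timeout=30,
)
resp.raise_for_status()
```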

Starburst (Trino) limitation

| Authentication method | Column name regex identifiers | Competitive criteria analysis identifiers |
| --- | --- | --- |
| Username and password | Supported | Supported |
|  | Supported | Not supported |

Redshift limitations

  • The Redshift cluster must be up and running for SDD to run successfully.

  • Redshift Spectrum is only supported with column name regex identifiers.

Redshift supported authentication methods

| Authentication method | Column name regex identifiers | Competitive criteria analysis identifiers |
| --- | --- | --- |
| Username and password | Supported | Supported |
| AWS access key | Supported | Supported (see limitations) |
|  | Supported | Not supported |

AWS access key limitations

To use AWS access key authentication on a Redshift data source with competitive criteria analysis identifiers supported, the following requirements must be met (an example policy sketch follows this list):

  • The AWS access key used to register the data source must be permitted to perform, at a minimum, the following redshift-data API actions:

    • redshift-data:BatchExecuteStatement

    • redshift-data:CancelStatement

    • redshift-data:DescribeStatement

    • redshift-data:ExecuteStatement

    • redshift-data:GetStatementResult

    • redshift-data:ListStatements

  • The AWS access key used to register the data source must have redshift:GetClusterCredentials permission for the cluster, user, and database used to onboard the data sources.

  • If using a custom URL, then the data source registered with the AWS access key must have the region and clusterid included in the additional connection string options formatted like the following:

      region=us-east-2;clusterid=12345
  • Redshift Serverless data sources are not supported for competitive criteria analysis identifiers with the AWS access key authentication method.
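
As a rough illustration of the permissions listed above, the sketch below prints an IAM policy document containing the redshift-data actions and redshift:GetClusterCredentials. The account ID, region, cluster, database, and user in the ARNs are placeholders; scope the resources to your own environment.

```python
# Hedged sketch of an IAM policy fragment granting the redshift-data actions
# listed above plus redshift:GetClusterCredentials. ARNs are placeholders.
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "redshift-data:BatchExecuteStatement",
                "redshift-data:CancelStatement",
                "redshift-data:DescribeStatement",
                "redshift-data:ExecuteStatement",
                "redshift-data:GetStatementResult",
                "redshift-data:ListStatements",
            ],
            "Resource": "*",   # narrow this in practice
        },
        {
            "Effect": "Allow",
            "Action": "redshift:GetClusterCredentials",
            "Resource": [
                "arn:aws:redshift:us-east-2:111122223333:dbuser:my-cluster/immuta_user",  # placeholder
                "arn:aws:redshift:us-east-2:111122223333:dbname:my-cluster/my_database",  # placeholder
            ],
        },
    ],
}

print(json.dumps(policy, indent=2))
```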

Legacy SDD

Legacy SDD was available before October 2023. It is no longer available, but some users may still see the term "legacy SDD" in the context of tags applied to specific data sources. These tags can be disabled from data sources but cannot be removed.
