Data Policy Types


Once a user is subscribed to a data source, the data policies that are applied to that data source determine what data the user sees.

For all data policies, you must establish the conditions under which they will be enforced. Immuta allows you to append multiple conditions to the data. Those conditions are based on user attributes and groups (which can come from multiple identity management systems and be applied as conditions in the same policy) or on the purposes users are acting under through Immuta projects.

Conditions can be directed as exclusionary or inclusionary, depending on the policy that's being enforced:

  • exclusionary condition example: Mask using hashing the values in columns tagged PII on all data sources for everyone except users in the group AUDIT.

  • inclusionary condition example: Only show rows where user is a member of a group that matches the value in the column tagged Department.

Data policy support matrix

The data policy support matrix outlines the policy types supported by each integration. Details about each of these policies are included in the policy types section.

  • On Databricks data sources, joins will not be allowed on data protected with replace with NULL or constant policies.

  • Databricks Unity Catalog ARRAY, MAP, or STRUCT type columns only support masking with NULL.

Policy types

Inclusionary policies

For all policies except purpose-based restriction policies, inclusionary logic allows governors to vary policy actions with an Otherwise clause. For example, governors could mask values using hashing for users acting under a specified purpose while masking those same values by making them null for everyone else who accesses the data.

This variation can be created by selecting everyone who (when available) from the condition dropdown menus and then completing the Otherwise clause.

Limit to purpose policies

Purposes help define the scope and use of data within a project and allow users to meet purpose restrictions on policies. Governors create and manage purposes and their sub-purposes, which project owners then add to their project(s) and use to drive data policies.

Purposes can be constructed as a hierarchy, meaning that purposes can contain nested sub-purposes, much like tags in Immuta. This design allows more flexibility in managing purpose-based restriction policies and transparency in the relationships among purposes.

For example, if the purpose Research included Marketing, Product, and Onboarding as sub-purposes, a governor could write the following global policy:

Limit usage to purpose(s) Research for everyone on data sources tagged PHI.

This hierarchy allows you to manage Research as a single purpose instead of creating separate purposes, each of which must then be added to policies as they evolve.

Now, any user acting under the purpose or sub-purpose of Research - whether Research.Marketing or Research.Onboarding - will meet the criteria of this policy. Consequently, purpose hierarchies eliminate the need for a governor to rewrite these global policies when sub-purposes are added or removed. Furthermore, if new projects with new Research purposes are added, the relevant global policy will automatically be enforced.

Refer to the data governor policy guide for a tutorial on purpose-based restrictions on data.

Masking policies

Masking policies hide values in data, providing various levels of utility while still preserving privacy.

Complex data types: masking fields within struct columns

Public preview: This feature is available to all accounts.

Spark supports a class of data types called complex types, which can represent multiple data values in a single column. Immuta supports masking fields within array and struct columns:

  • Array: an ordered collection of elements

  • Struct: a collection of elements that are primitive or complex types

Without this feature enabled, the struct and array columns of a data source default to jsonb in the Data Dictionary, and the masking policies that users can apply to jsonb columns are limited. For example, to mask PII inside a struct column named patient, a user would have to apply null masking to the entire column or use a custom function instead of masking just the name or address fields.

After Complex Data Types is enabled on the App Settings page, the column type for struct columns of new data sources will display as struct in the Data Dictionary. (For data sources that are already in Immuta, users can edit the data source and change the column types for the appropriate columns from jsonb to struct.) Once struct fields are available, they can be searched, tagged, and used in masking policies. For example, a user could tag name, ssn, and street as PII instead of the entire patient column.

After a global or local policy masks the columns containing PII, users who do not meet the exception specified in the policy will see those values masked.

Note: Immuta uses the > delimiter (instead of .) to indicate that a field is nested, since field and column names can themselves contain a period.

Feature limitations

  • This feature is only available for Databricks data sources.

  • The Databricks Unity Catalog integration only supports masking with NULL on STRUCT, ARRAY, and MAP type columns.

  • This feature only supports Parquet and Delta table types.

Struct columns with many fields

If users have struct columns with many fields, they will need to either

  • create the data source against a cluster running Spark 3 or

  • add spark.debug.maxToStringFields 1000 to their Spark 2 cluster's configuration.

To get column information about a data source, Immuta executes a DESCRIBE call for the table. In this call, Spark returns a simple string representation of the schema for each column in the table. For the patient column above, the simple string would look like this:

struct<name:string,ssn:string,age:int,address:struct<city:string,state:string,zipCode:string,street:string>>

Immuta then parses this string into the following format for the data source's dictionary:

{
  dataType: 'struct',
  children: [
    {
      name: 'name',
      dataType: 'text'
    },
    {
      name: 'ssn',
      dataType: 'text'
    },
    {
      name: 'age',
      dataType: 'integer'
    },
    {
      name: 'address',
      dataType: 'struct',
      children: [
        {
          name: 'city',
          dataType: 'text'
        },
        {
          name: 'state',
          dataType: 'text'
        },
        {
          name: 'zipCode',
          dataType: 'text'
        },
        {
          name: 'street',
          dataType: 'text'
        },
      ]
    }
  ]
}

However, if the struct contains more than 25 fields, Spark truncates the string, causing the parser to fail and fall back to jsonb. Immuta will attempt to avoid this failure by increasing the number of fields allowed in the server-side property setting, maxToStringFields; however, this only works with clusters on a Spark 3 runtime. The maxToStringFields configuration in Spark 2 cannot be set through the ODBC driver and can only be set through the Spark configuration on the cluster with spark.debug.maxToStringFields 1000 on cluster startup.

Conditional masking

You can conditionally mask a value based on a value in another column. This allows you to build a policy such as "Mask bank account number where country = 'USA'" instead of stating unconditionally that the bank account column should always be masked.

Note: When building conditional masking policies with custom SQL statements, avoid using a column that is masked using randomized response in the SQL statement, as this can lead to different behavior depending on your data platform and may produce unexpected results.
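
Conceptually, the enforced policy behaves like a CASE expression in the underlying platform. The sketch below is illustrative only; the table and column names are assumptions, and hashing is just one possible masking type:

-- Illustrative only: roughly the logic a conditional masking policy enforces.
-- Table "transactions" and columns "bank_account"/"country" are assumed names.
SELECT
  CASE
    WHEN country = 'USA' THEN SHA2(bank_account, 256)  -- mask only when the condition is met
    ELSE bank_account                                  -- otherwise return the raw value
  END AS bank_account,
  country
FROM transactions;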

Constant

With this policy, you can replace the values with the same constant value you choose, such as 'Redacted', removing any utility of that data.

Custom function

This option uses functions native to the underlying database to transform the column. Single quotes enclosing the regex and escaping special characters are required. The following example masks telephone numbers variably depending on the presence of a dash (implying a prefix), space, or only digits:

REGEXP_REPLACE(@column, '(\\+?\\d{0,3}[-\\s]?)?\\d{4}', '****')

When authoring a global policy with this custom function, note the use of @column to specify the column to which the function should apply.

Limitations

  • The masking functions are executed against the remote database directly. A poorly written function could lead to poor quality results, data leaks, and performance hits.

  • Using custom functions can result in changes to the original data type. To prevent query errors, you must ensure that you cast the result back to the original type (see the sketch after this list).

  • The function must be valid for the data type of the selected column. If it is not

    • Local policies will error and show a message that the function is not valid.

    • Global policies will error and change to the default masking type (hashing for text and NULL for all others).
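
To address the data type limitation noted above, the cast back to the original type can be written into the custom function itself. A minimal sketch, assuming the masked column is a VARCHAR of telephone numbers:

-- Illustrative only: wrap the custom function in a cast back to the column's
-- original type (VARCHAR here) to avoid query errors from a changed data type.
CAST(REGEXP_REPLACE(@column, '(\\+?\\d{0,3}[-\\s]?)?\\d{4}', '****') AS VARCHAR)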

Format preserving masking

Support limitation: This policy is only supported in the Snowflake integration.

This option masks the value, but preserves the length and type of the value. This option also allows users to submit an unmasking request to users who meet the exceptions of the policy.

Preserving the data format is important if the format has some relevance to the analysis at hand. For example, if you need to retain the integer column type or if the first 6 digits of a 12-digit number have an important meaning.

Hashing

This policy masks the values with an irreversible SHA-256 hash, which is consistent for the same value throughout the data source, so you can count or track specific values without knowing the true raw value.

Hashed values are different across data sources, so you cannot join on hashed values unless you enable masked joins on data sources within a project. Immuta prevents joins on hashed values to protect against link attacks, where two data owners may have exposed data with the same masked column (a quasi-identifier), but their data combined by that masked value could result in a sensitive data leak.
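
The general idea can be sketched in SQL. This is an illustration of consistent, salted hashing rather than Immuta's exact implementation; the salt, table, and column names are assumptions:

-- Illustrative only: a per-data-source salt keeps hashes consistent within a
-- data source while producing different hashes for the same value elsewhere.
SELECT SHA2(CONCAT('per_data_source_salt', email), 256) AS email
FROM customers;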

K-anonymization

Support limitation: This policy is only supported in the Snowflake integration.

K-anonymity is measured by grouping records in a data source that contain the same values for a common set of quasi identifiers (QIs) - publicly known attributes (such as postal codes, dates of birth, or gender) that are consistently, but ambiguously, associated with an individual.

The k-anonymity of a data source is defined as the number of records within the least populated cohort, which means that the QIs of any single record cannot be distinguished from at least k other records. In this way, a record with QIs cannot be uniquely associated with any one individual in a data source, provided k is greater than 1.

In Immuta, masking with k-anonymization examines pairs of values across columns and hides groups that do not appear at least the specified number of times (k). For example, if one column contains street numbers and another contains street names, the group 123, "Main Street" probably would appear frequently while the group 123, "Diamondback Drive" probably would show up much less. Since the second group appears infrequently, the values could potentially identify someone, so this group would be masked.
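
The grouping logic can be sketched in SQL. This is an illustration of the concept only (Immuta derives its rules during fingerprinting rather than at query time), with assumed table and column names and k = 5:

-- Illustrative only: null out quasi-identifier pairs that appear fewer than k (= 5) times.
SELECT
  CASE WHEN COUNT(*) OVER (PARTITION BY street_number, street_name) >= 5
       THEN street_number END AS street_number,
  CASE WHEN COUNT(*) OVER (PARTITION BY street_number, street_name) >= 5
       THEN street_name END AS street_name
FROM addresses;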

After the fingerprint service identifies columns with a low number of distinct values, users will only be able to select those columns when building the policy. Users can either use a minimum group size (k) given by the fingerprint or manually select the value of k.

Note: The default cardinality cutoff for columns to qualify for k-anonymization is 500.

Sample data is processed during computation of k-anonymization policies

When a k-anonymization policy is applied to a data source, the columns targeted by the policy are queried under a fingerprinting process that generates rules enforcing k-anonymity. The results of this query, which may contain data that is subject to regulatory constraints such as GDPR or HIPAA, are stored in Immuta's metadata database.

The location of the metadata database depends on your deployment:

  • Self-managed Immuta deployment: The metadata database is located in the server where you have your external metadata database deployed.

  • SaaS Immuta deployment: The metadata database is located in the AWS global segment in which you have chosen to deploy Immuta.

To ensure this process does not violate your organization's data localization regulations, you must activate this masking policy type before you can use it in your Immuta tenant.

Masking multiple columns with k-anonymization

Governors can write global data policies using k-anonymization in the global data policy builder.

When this global policy is applied to data sources, it will mask all columns matching the specified tag.

Applying k-anonymization over disjoint sets of columns in separate policies does not guarantee k-anonymization over their union.

If you select multiple columns to mask with k-anonymization in the same policy, the policy is driven by how many times these values appear together. If the groups appear fewer than k times, they will be masked.

For example, if Policy A

Policy A: Mask with k-anonymization the values in the columns gender and state requiring a group size of at least 2 for everyone

was applied to this data source:

Gender    State
Female    Ohio
Female    Florida
Female    Florida
Female    Arkansas
Male      Florida

the values would be masked like this:

Gender    State
Null      Null
Female    Florida
Female    Florida
Null      Null
Null      Null

Note: Selecting many columns to mask with k-anonymization increases the processing that must occur to calculate the policy, so saving the policy may take time.

However, if you select to mask the same columns with k-anonymization in separate policies, Policy C and Policy D,

Policy C: Mask with k-anonymization the values in the column gender requiring a group size of at least 2 for everyone

Policy D: Mask with k-anonymization the values in the column state requiring a group size of at least 2 for everyone

the values in the columns will be masked separately instead of as groups. Therefore, the values in that same data source would be masked like this:

Gender    State
Female    Null
Female    Florida
Female    Florida
Female    Null
Null      Florida

Mixing masking policies on the same column

In some cases, you may want several different masking policies applied to the same column through Otherwise policies. To build these policies, select everyone who instead of everyone or everyone except. After you specify who the masking policy applies to, select how it applies to everyone else in the Otherwise condition.

You can add and remove tags in Otherwise conditions for global policies (unlike local policy Otherwise conditions); however, all tags or regular expressions included in the initial everyone who rule must be included in an everyone or everyone except rule in the additional clauses.

NULL

This policy makes values null, removing any utility of the data the policy applies to.

Randomized response

Support limitation: This policy is only supported in the Snowflake integration.

This policy masks data by slightly randomizing the values in a column, preserving the utility of the data while preventing outsiders from inferring the content of specific records.

For example, if an analyst wanted to publish data from a health survey she conducted, she could remove direct identifiers and apply k-anonymization to indirect identifiers to make it difficult to single out individuals. However, consider these survey participants, a cohort of male welders who share the same zip code:

participant_id    zip_code    gender    occupation    substance_abuse
...               ...         ...       ...           ...
880d0096          75002       Male      Welder        Y
f267334b          75002       Male      Welder        Y
bfdb43db          75002       Male      Welder        Y
260930ce          75002       Male      Welder        Y
046dc7fb          75002       Male      Welder        Y
...               ...         ...       ...           ...

All members of this cohort have indicated substance abuse, sensitive personal information that could have damaging consequences, and, even though direct identifiers have been removed and k-anonymization has been applied, outsiders could infer substance abuse for an individual if they knew a male welder in this zip code.

In this scenario, using randomized response would change some of the Y's in substance_abuse to N's and vice versa; consequently, outsiders couldn't be sure of the displayed value of substance_abuse given in any individual row, as they wouldn't know which rows had changed.

How the randomization works

Sample data is processed during computation of randomized response policies

When a randomized response policy is applied to a data source, the columns targeted by the policy are queried under a fingerprinting process. To enforce the policy, Immuta generates and stores predicates and a list of allowed replacement values that may contain data that is subject to regulatory constraints (such as GDPR or HIPAA) in Immuta's metadata database.

The location of the metadata database depends on your deployment:

  • Self-managed Immuta deployment: The metadata database is located in the server where you have your external metadata database deployed.

  • SaaS Immuta deployment: The metadata database is located in the AWS global segment in which you have chosen to deploy Immuta.

To ensure this process does not violate your organization's data localization regulations, you must activate this masking policy type before you can use it in your Immuta tenant.

Immuta applies a random number generator (RNG) that is seeded with some fixed attributes of the data source, column, backing technology, and the value of the high cardinality column, an approach that simulates cached randomness without having to actually cache anything.

For string data, the random number generator essentially flips a biased coin. If the coin comes up tails, which it does with the frequency of the replacement rate configured in the policy, the value is changed to any other possible value in the column, selected uniformly at random from among those values. If the coin comes up heads, the true value is released.

For numeric data, Immuta uses the RNG to add a random shift from a 0-centered Laplace distribution with the standard deviation specified in the policy configuration. For most purposes, knowing the distribution is not important; the net effect is that, on average, the reported values are the true value plus or minus the specified deviation.
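
As a rough illustration of the numeric case, Laplace noise with standard deviation s can be drawn by inverse-CDF sampling from a uniform variable (scale b = s / sqrt(2)). The sketch below is not Immuta's implementation (in particular, it is not seeded from the data source attributes described above); it assumes Snowflake-style UNIFORM and RANDOM functions, an example column age, and s = 10:

-- Illustrative only: add 0-centered Laplace noise with standard deviation 10.
-- u is uniform on (-0.5, 0.5); the Laplace scale is stddev / sqrt(2).
SELECT age - (10 / SQRT(2)) * SIGN(u) * LN(1 - 2 * ABS(u)) AS age_randomized
FROM (
  SELECT age, UNIFORM(0::float, 1::float, RANDOM()) - 0.5 AS u
  FROM survey_responses
) AS t;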

Preserving data utility

Using randomized response doesn't destroy the data because data is only randomized slightly; aggregate utility can be preserved because analysts know how and what proportion of the values will change. Through this technique, values can be interpreted as hints, signals, or suggestions of the truth, but it is much harder to reason about individual rows.

Additionally, randomized response provides deniability of record content, not of dataset participation, so individual rows can still be displayed.

Regular expression (regex)

This policy is similar to replacing with a constant, but it provides more utility because you can retain portions of the true value. When authoring the policy in Immuta, the regex and the replacement value do not need to be in single or double quotes.

The following regex rule would mask the final digits of an IP address:

Mask using a regex \d+$ the value in the columns ip_address for everyone.

In this case, the regular expression \d+$ breaks down as follows:

  • \d matches a digit (equivalent to [0-9]).

  • + is a quantifier that matches the preceding token one or more times, as many times as possible, giving back as needed (greedy).

  • $ asserts position at the end of the string (or before the line terminator at the end of the string, if any).

This captures the final digit(s) after the last . in the IP address. The replacement value, XXX in this case, is then substituted for the captured text, so the policy produces output like 164.16.13.XXX.
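
In the underlying platform, this policy is roughly equivalent to the following (illustrative only; the table and column names are assumptions):

-- Illustrative only: mask the final digits of an IPv4 address.
SELECT REGEXP_REPLACE(ip_address, '\\d+$', 'XXX') AS ip_address  -- 164.16.13.216 -> 164.16.13.XXX
FROM web_logs;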

This regex rule applies masking to telephone numbers variably depending on the presence of a dash (implying a prefix), space, or only digits:

Mask using a regex (\+?\d{0,3}[-\s]?)?\d{4} the value in the column tagged Discovered...Telephone Number for everyone.

When authoring a regex global policy that will apply to Databricks Unity Catalog data sources, Global pattern must be enabled and Case insensitivity must be disabled.

Databricks Unity Catalog integration regex_replace function

The Databricks Unity Catalog integration uses Spark's built-in regex_replace function. That Databricks function currently only supports global pattern flags set as global (g) and case-sensitive. Regex policies will not work on this platform unless these settings are configured accordingly.

Reversibility

Deprecation notice

Support for unmask requests has been deprecated.

This option masks the values using hashing, but allows users to submit an unmasking request to users who meet the exceptions of the policy.

Note: The user receiving the unmasking request must send the unmasked value to the requester.

With reversible masking, the raw values are switched out with consistent values to allow analysis without revealing the underlying sensitive data. The direct identifier is replaced with a token that can still be tracked or counted.

Rounding

This is a technique to hide precision from numeric values while providing more utility than simply hashing. For example, you could remove precision from a geospatial coordinate. You can also use this type of policy to remove precision from dates and times by rounding to the nearest hour, day, month, or year.
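
As an illustration (not Immuta's implementation; table and column names are assumptions), precision can be reduced with standard truncation and rounding functions:

-- Illustrative only: reduce precision of a timestamp and a geospatial coordinate.
SELECT DATE_TRUNC('hour', event_time) AS event_time,  -- keep only the hour
       ROUND(latitude, 1)             AS latitude     -- roughly 11 km of precision
FROM events;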

Row-level security policies

These policies hide entire rows or objects of data based on the policy being enforced; some of these policies require the data to be tagged as well.

Masked columns as input for row-level policies

Public preview: This feature is available to all accounts.

If a global masking policy applies to a column, you can still use that masked column in a global row-level policy.

Consider the following policy examples:

  • Masking policy: Mask values in columns tagged Country for everyone except users in group Admin.

  • Row-level policy: Only show rows where user possesses an attribute in OfficeLocation that matches the value in column tagged Country for everyone.

Both of these policies use the Country tag to restrict access. Therefore, the masking policy and the row-level policy would apply to data source columns with the tag Country for users who are not in the Admin group.

Limitations

  • This feature is only available for Snowflake and Databricks Unity Catalog integrations.

  • This feature is only supported for global data policies, not local data policies.

Matching

These policies match a user attribute with a row/object/file attribute to determine whether that row/object/file should be visible. This process uses a direct string match, so the user attribute must exactly match the data attribute for that row of data to be visible.

For example, to restrict access to insurance claims data to the state in which the user's home office is located, you could build a policy such as this:

Only show rows where user possesses an attribute in Office Location that matches the value in the column State for everyone except when user is a member of group Legal.

In this case, the Office Location is retrieved by the identity management system as a user attribute or group. If the user's attribute (Office Location) was Missouri, rows containing the value Missouri in the State column in the data source would be the only rows visible to that user.
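
Because the comparison is a direct string match, that user effectively queries through a filter like the following (illustrative only; the table name is an assumption):

-- Illustrative only: the effective row filter for a user whose Office Location
-- attribute is exactly 'Missouri' (no partial or case-insensitive matching).
SELECT * FROM insurance_claims
WHERE State = 'Missouri';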

Minimization

These policies return a limited percentage of the data, randomly sampled at query time, but it is the same sample for all users. For example, you could limit certain users to only 10% of the data. Immuta uses a hashing policy to return approximately 10% of the data, and the data returned will always be the same; however, the exact number of rows exposed depends on the distribution of high-cardinality columns in the database and the hashing type available. Additionally, Immuta will adjust the data exposed when new rows are added or removed.
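
The general approach can be sketched as a deterministic filter over a hashed high-cardinality column, which is why the same rows come back on every query. This is an illustration only, not Immuta's exact implementation, and the table and column names are assumptions:

-- Illustrative only: return roughly 10% of rows, deterministically.
SELECT * FROM claims
WHERE MOD(ABS(HASH(claim_id)), 100) < 10;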

Best practice: row count

Immuta recommends you use a table with over 1,000 rows for the best results when using a data minimization policy.

Time-based restrictions

These policies restrict access to rows/objects/files that fall within the time restrictions set in the policy. If a data source has time-based restriction policies, queries run against the data source by a user will only return rows/blobs with a date in its event-time column/attribute from within a certain range.

The time window is based on the event time you select when creating the data source. This value will come from a date/time column in relational sources.
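
For example, a 30-day restriction behaves roughly like the following filter on the event-time column (illustrative only; the table and column names are assumptions):

-- Illustrative only: return rows whose event time falls within the last 30 days.
SELECT * FROM claims
WHERE event_time >= DATEADD(day, -30, CURRENT_TIMESTAMP());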

WHERE clause policy

This policy can be thought of as a table "view" created automatically for the user based on the condition of the policy. For example, in the policy below, users who are not members of the Admins group will only see taxi rides where passenger_count < 2.

Only show rows where public.us.taxis.passenger_count < 2 for everyone except when user is a member of group Admins.

You can put any valid SQL in the WHERE clause. See the Custom WHERE clause functions reference guide for a list of custom functions.
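
Conceptually, users outside the Admins group query through a view like the one below (illustrative only; the view name is an assumption):

-- Illustrative only: the "view" non-Admins effectively query under this policy.
CREATE VIEW taxis_restricted AS
SELECT * FROM public.us.taxis
WHERE public.us.taxis.passenger_count < 2;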

WHERE clause policy requirement

All columns referenced in the policy must have fully qualified names. Any column name that is unqualified (just the column name) will default to a column of the data source the policy is being applied to (if one matches the name).

Note: When building row-level policies with custom SQL statements, avoid using a column that is masked using randomized response in the SQL statement, as this can lead to different behavior depending on whether you are using the Spark or Snowflake integration and may produce unexpected results.

New column added policy

When this policy is activated by a governor, it is automatically enforced on data sources that have the New tag applied to them.

This policy pairs with schema monitoring to mask newly added columns to data sources until data owners review and approve these changes from the requests tab of their profile page. To learn how to activate this policy, see the schema monitoring tutorial.
