All Data Policy Types
Once a user is subscribed to a data source, the data policies that are applied to that data source determine what data the user sees.
For all data policies, you must establish the conditions under which they will be enforced. Immuta allows you to append multiple conditions to the data. Those conditions are based on user attributes and groups (which can come from multiple identity management systems and be applied as conditions in the same policy) or the purposes users are acting under through Immuta projects.
Conditions can be directed as exclusionary or inclusionary, depending on the policy that's being enforced:
- Exclusionary condition example: Mask using hashing values in columns tagged `PII` on all data sources for everyone except users in the group `AUDIT`.
- Inclusionary condition example: Only show rows where user is a member of a group that matches the value in the column tagged `Department`.
The table below outlines the policy types supported by each integration. Details about each of these policies are included in the policy types section.
*Supported with Caveats:
- On Databricks data sources, joins will not be allowed on data protected with replace with NULL/constant policies.
- On Trino data sources, the Immuta function `@iam` for WHERE clause policies can block the creation of views.
For all policies except purpose-based restriction policies, inclusionary logic allows governors to vary policy actions with an Otherwise clause.
For example, governors could mask values using hashing for users acting under a specified purpose while masking those same values by making null for everyone else who accesses the data.
This variation can be created by selecting for everyone who (when available) from the condition dropdown menus and then completing the Otherwise clause.
Purposes help define the scope and use of data within a project and allow users to meet purpose restrictions on policies. Governors create and manage purposes and their sub-purposes, which project owners then add to their project(s) and use to drive Data Policies.
Purposes can be constructed as a hierarchy, meaning that purposes can contain nested sub-purposes, much like tags in Immuta. This design allows more flexibility in managing purpose-based restriction policies and transparency in the relationships among purposes.
For example, if the purpose `Research` included `Marketing`, `Product`, and `Onboarding` as sub-purposes, a governor could write the following global policy:
Limit usage to purpose(s) Research for everyone on data sources tagged PHI.
This hierarchy allows you to create a single purpose instead of separate purposes that would each have to be added to policies as they evolve.
Now, any user acting under the purpose or sub-purpose of `Research` - whether `Research.Marketing` or `Research.Onboarding` - will meet the criteria of this policy. Consequently, purpose hierarchies eliminate the need for a governor to rewrite these global policies when sub-purposes are added or removed. Furthermore, if new projects with new Research purposes are added, for example, the relevant global policy will automatically be enforced.
Refer to the data governor policy guide for a tutorial on purpose-based restrictions on data.
Masking policies hide values in data, providing various levels of utility while still preserving privacy.
This policy masks the values with an irreversible sha256 hash, which is consistent for the same value throughout the data source, so you can count or track the specific values, but not know the true raw value.
Hashed values are different across data sources, so you cannot join on hashed values unless you enable masked joins on data sources within a project. Immuta prevents joins on hashed values to protect against link attacks where two data owners may have exposed data with the same masked column (a quasi-identifier), but their data combined by that masked value could result in a sensitive data leak.
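As a rough illustration of why hashed values remain useful for counting while staying irreversible, consider this conceptual sketch (this is not Immuta's implementation, which also varies hashes across data sources):

```python
import hashlib

# Conceptual sketch only: the same input always yields the same irreversible
# digest, so masked values can still be counted or grouped within a data source.
# (Immuta additionally varies hashes across data sources, which is why joins on
# hashed values are blocked unless masked joins are enabled in a project.)
def mask_with_hash(value: str) -> str:
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

print(mask_with_hash("jane.doe@example.com") == mask_with_hash("jane.doe@example.com"))  # True
print(mask_with_hash("jane.doe@example.com"))  # consistent, irreversible digest
```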
This policy makes values null, removing any utility of the data the policy applies to.
With this policy, you can replace the values with the same constant value you choose, such as 'Redacted', removing any utility of that data.
This policy is similar to replacing with a constant, but it provides more utility because you can retain portions of the true value. When authoring the policy in Immuta, the regex and the replacement value do not need to be in single or double quotes.
The following regex rule would mask the final digits of an IP address:
Mask using a regex `\d+$` the value in the column `ip_address` for everyone.
In this case, the regular expression `\d+$` breaks down as follows:
- `\d` matches a digit (equivalent to `[0-9]`).
- `+` is a quantifier that matches between one and unlimited times, as many times as possible, giving back as needed (greedy).
- `$` asserts position at the end of the string, or before the line terminator right at the end of the string (if any).
This ensures we capture the last digit(s) after the last `.` in the IP address. We then enter the replacement for what we captured, which in this case is `XXX`. So the outcome of the policy would look like this: `164.16.13.XXX`
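You can preview the effect of the pattern and replacement with any standard regex engine before authoring the policy; the sketch below is only a sanity check of the regex itself, not of how Immuta pushes the policy down to the database:

```python
import re

# Preview of the regex rule described above: strip the final digits of an IP
# address and replace them with XXX.
pattern = r"\d+$"      # one or more digits at the end of the string
replacement = "XXX"

print(re.sub(pattern, replacement, "164.16.13.42"))  # -> 164.16.13.XXX
```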
The image below illustrates authoring a regex global policy that will apply to Databricks Unity Catalog data sources:
Databricks Unity Catalog integration regex_replace function
The Databricks Unity Catalog integration uses Spark's built-in `regex_replace` function. That Databricks function currently only supports pattern flags set as global (`g`) and case-sensitive. Regex will not work on this platform unless these settings are appropriately configured.
This is a technique to hide precision from numeric values while providing more utility than simply hashing. For example, you could remove precision from a geospatial coordinate. You can also use this type of policy to remove precision from dates and times by rounding to the nearest hour, day, month, or year.
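As a rough sketch of what rounding-based masking does to a timestamp (illustrative only; the actual rounding is applied by the policy in the underlying database, and this example truncates to the chosen precision rather than rounding to the nearest unit):

```python
from datetime import datetime

# Illustrative only: reducing a timestamp to coarser precision, similar in
# spirit to rounding dates to the nearest hour, day, month, or year.
ts = datetime(2023, 7, 14, 9, 42, 17)

to_hour = ts.replace(minute=0, second=0, microsecond=0)
to_day = ts.replace(hour=0, minute=0, second=0, microsecond=0)
to_month = ts.replace(day=1, hour=0, minute=0, second=0, microsecond=0)
to_year = ts.replace(month=1, day=1, hour=0, minute=0, second=0, microsecond=0)

print(to_hour, to_day, to_month, to_year, sep="\n")
```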
Deprecation notice
Support for unmask requests has been deprecated.
This option masks the values using hashing, but allows users to submit an unmasking request to users who meet the exceptions of the policy.
Note: The user receiving the unmasking request must send the unmasked value to the requester.
With Reversible Masking, the raw values are switched out with consistent values to allow analysis without revealing the underlying sensitive data. The direct identifier is replaced with a token that can still be tracked or counted.
This option masks the value, but preserves the length and type of the value.
This option also allows users to submit an unmasking request to users who meet the exceptions of the policy.
Preserving the data format is important if the format has some relevance to the analysis at hand. For example, if you need to retain the integer column type or if the first 6 digits of a 12-digit number have an important meaning.
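A hypothetical sketch of that idea is shown below (this is not Immuta's masking algorithm; the column name, prefix length, and replacement character are all assumptions made for illustration):

```python
# Hypothetical illustration only: mask a 12-digit identifier while preserving
# its length and numeric format, keeping the meaningful first 6 digits intact.
def mask_keep_prefix(value: str, keep: int = 6) -> str:
    return value[:keep] + "".join("0" if ch.isdigit() else ch for ch in value[keep:])

print(mask_keep_prefix("482915307164"))  # -> 482915000000 (same length, still numeric)
```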
This option uses functions native to the underlying database to transform the column. Single quotes enclosing the regex and escaping special characters are required. The following example masks telephone numbers variably depending on the presence of a dash (implying a prefix), space, or only digits:
The image below illustrates authoring a global policy using this custom function:
Limitations
The masking functions are executed against the remote database directly. A poorly written function could lead to poor quality results, data leaks, and performance hits.
Using custom functions can result in changes to the original data type. In order to prevent query errors you must ensure that you cast this result back to the original type.
The function must be valid for the data type of the selected column. If it is not:
- Local policies will error and show a message that the function is not valid.
- Global policies will error and change to the default masking type (hashing for text and NULL for all others).
For all of the policies above, at both the local and global policy levels, you can conditionally mask the value based on a value in another column. This allows you to build a policy such as "Mask bank account number where country = 'USA'" instead of always masking the bank account number outright.
Note: When building conditional masking policies with custom SQL statements, avoid using a column that is masked using randomized response in the SQL statement, as this can lead to different behavior depending on your data platform and may produce results that are unexpected.
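A conceptual sketch of that row-by-row behavior (illustrative only; Immuta enforces the condition in the underlying database, and the column names here are made up for the example):

```python
import hashlib

# Illustrative only: mask bank_account just on rows where country == 'USA'.
rows = [
    {"country": "USA", "bank_account": "111-222-333"},
    {"country": "CAN", "bank_account": "444-555-666"},
]

def mask(value: str) -> str:
    return hashlib.sha256(value.encode()).hexdigest()

for row in rows:
    shown = mask(row["bank_account"]) if row["country"] == "USA" else row["bank_account"]
    print(row["country"], shown)
```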
Sample data is processed during computation of k-anonymization policies
When a k-anonymization policy is applied to a data source, the columns targeted by the policy are queried under a fingerprinting process that generates rules enforcing k-anonymity. The results of this query, which may contain data that is subject to regulatory constraints such as GDPR or HIPAA, are stored in Immuta's metadata database.
The location of the metadata database depends on your deployment:
Self-managed Immuta deployment: The metadata database is located in the server where you have your external metadata database deployed.
SaaS Immuta deployment: The metadata database is located in the AWS global segment you have chosen to deploy Immuta.
To ensure this process does not violate your organization's data localization regulations, you need to first activate this masking policy type before you can use it in your Immuta tenant. To enable k-anonymization for your account, contact your Immuta representative.
K-anonymity is measured by grouping records in a data source that contain the same values for a common set of quasi identifiers (QIs) - publicly known attributes (such as postal codes, dates of birth, or gender) that are consistently, but ambiguously, associated with an individual.
The k-anonymity of a data source is defined as the number of records within the least populated cohort, which means that the QIs of any single record cannot be distinguished from at least k other records. In this way, a record with QIs cannot be uniquely associated with any one individual in a data source, provided k is greater than 1.
In Immuta, masking with k-anonymization examines pairs of values across columns and hides groups that do not appear at least the specified number of times (k). For example, if one column contains street numbers and another contains street names, the group `123, "Main Street"` probably would appear frequently, while the group `123, "Diamondback Drive"` probably would show up much less. Since the second group appears infrequently, the values could potentially identify someone, so this group would be masked.
After the fingerprint service identifies columns with a low number of distinct values, users will only be able to select those columns when building the policy. Users can either use a minimum group size (k) given by the fingerprint or manually select the value of k.
Note: The default cardinality cutoff for columns to qualify for k-anonymization is 500.
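The grouping logic can be sketched roughly as follows (conceptual only; Immuta computes and enforces these rules through its fingerprinting process, not in application code):

```python
from collections import Counter

# Conceptual sketch of k-anonymization with k = 2: combinations of
# quasi-identifier values that appear fewer than k times are masked.
rows = [
    ("123", "Main Street"),
    ("123", "Main Street"),
    ("123", "Diamondback Drive"),
]
k = 2
counts = Counter(rows)

masked = [row if counts[row] >= k else (None, None) for row in rows]
print(masked)  # the rare ('123', 'Diamondback Drive') pair is masked
```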
Masking multiple columns with k-anonymization
Governors can write global data policies using k-anonymization in the global data policy builder.
When this global policy is applied to data sources, it will mask all columns matching the specified tag.
Applying k-anonymization over disjoint sets of columns in separate policies does not guarantee k-anonymization over their union.
If you select multiple columns to mask with k-anonymization in the same policy, the policy is driven by how many times these values appear together. If the groups appear fewer than k times, they will be masked.
For example, if Policy A
Policy A: Mask with k-anonymization the values in the columns `gender` and `state` requiring a group size of at least 2 for everyone
was applied to this data source
| Gender | State |
| --- | --- |
| Female | Ohio |
| Female | Florida |
| Female | Florida |
| Female | Arkansas |
| Male | Florida |
the values would be masked like this:
| Gender | State |
| --- | --- |
| Null | Null |
| Female | Florida |
| Female | Florida |
| Null | Null |
| Null | Null |
Note: Selecting many columns to mask with k-anonymization increases the processing that must occur to calculate the policy, so saving the policy may take time.
However, if you select to mask the same columns with k-anonymization in separate policies, Policy C and Policy D,
Policy C: Mask with k-anonymization the values in the column `gender` requiring a group size of at least 2 for everyone
Policy D: Mask with k-anonymization the values in the column `state` requiring a group size of at least 2 for everyone
the values in the columns will be masked separately instead of as groups. Therefore, the values in that same data source would be masked like this:
| Gender | State |
| --- | --- |
| Female | Null |
| Female | Florida |
| Female | Florida |
| Female | Null |
| Null | Florida |
This policy masks data by slightly randomizing the values in a column, preserving the utility of the data while preventing outsiders from inferring content of specific records.
For example, if an analyst wanted to publish data from a health survey she conducted, she could remove direct identifiers and apply k-anonymization to indirect identifiers to make it difficult to single out individuals. However, consider these survey participants, a cohort of male welders who share the same zip code:
| participant_id | zip_code | gender | occupation | substance_abuse |
| --- | --- | --- | --- | --- |
| ... | ... | ... | ... | ... |
|  | 75002 | Male | Welder | Y |
|  | 75002 | Male | Welder | Y |
|  | 75002 | Male | Welder | Y |
|  | 75002 | Male | Welder | Y |
|  | 75002 | Male | Welder | Y |
| ... | ... | ... | ... | ... |
All members of this cohort have indicated substance abuse, sensitive personal information that could have damaging consequences, and, even though direct identifiers have been removed and k-anonymization has been applied, outsiders could infer substance abuse for an individual if they knew a male welder in this zip code.
In this scenario, using randomized response would change some of the Y's in `substance_abuse` to N's and vice versa; consequently, outsiders couldn't be sure of the displayed value of `substance_abuse` given in any individual row, as they wouldn't know which rows had changed.
How the randomization works
Immuta applies a random number generator (RNG) that is seeded with some fixed attributes of the data source, column, backing technology, and the value of the high cardinality column, an approach that simulates cached randomness without having to actually cache anything.
For string data, the random number generator essentially flips a biased coin. If the coin comes up as tails, which it does with the frequency of the replacement rate configured in the policy, then the value is changed to any other possible value in the column, selected uniformly at random from among those values. If the coin comes up as heads, the true value is released.
For numeric data, Immuta uses the RNG to add a random shift from a 0-centered Laplace distribution with the standard deviation specified in the policy configuration. For most purposes, knowing the distribution is not important, but the net effect is that on average the reported values should be the true value plus or minus the specified deviation value.
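A simplified sketch of the string case is shown below (illustrative only; Immuta's seed derivation and in-database enforcement differ, and the seed string here is a stand-in):

```python
import random

# Simplified sketch of randomized response for a categorical (string) column.
# Immuta seeds its RNG from fixed attributes of the data source, column,
# backing technology, and a high-cardinality column value; a plain seed string
# stands in for that here. Numeric columns instead receive a random shift drawn
# from a 0-centered Laplace distribution.
def randomize(value, possible_values, replacement_rate, seed):
    rng = random.Random(seed)
    if rng.random() < replacement_rate:                       # "tails": replace the value
        return rng.choice([v for v in possible_values if v != value])
    return value                                              # "heads": release the true value

print(randomize("Y", ["Y", "N"], replacement_rate=0.2, seed="source|column|row-42"))
```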
Preserving data utility
Using randomized response doesn't destroy the data because data is only randomized slightly; aggregate utility can be preserved because analysts know how and what proportion of the values will change. Through this technique, values can be interpreted as hints, signals, or suggestions of the truth, but it is much harder to reason about individual rows.
Additionally, randomized response gives deniability of record content not dataset participation, so individual rows can be displayed.
In some cases, you may want several different masking policies applied to the same column through Otherwise policies. To build these policies, select everyone who instead of everyone or everyone except. After you specify who the masking policy applies to, select how it applies to everyone else in the Otherwise condition.
You can add and remove tags in Otherwise conditions for global policies (unlike local policy Otherwise conditions), as illustrated above; however, all tags or regular expressions included in the initial everyone who rule must be included in an everyone or everyone except rule in the additional clauses.
Feature limitations
Masking struct and array columns is only available for Databricks data sources.
Immuta only supports Parquet and Delta table types.
Spark supports a class of data types called complex types, which can represent multiple data values in a single column. Immuta supports masking fields within array and struct columns:
array: an ordered collection of elements
struct: a collection of elements that are primitive or complex types
Without this feature enabled, the struct and array columns of a data source default to `jsonb` in the Data Dictionary, and the masking policies that users can apply to `jsonb` columns are limited. For example, if a user wanted to mask PII inside the column `patient` in the image below, they would have to apply null masking to the entire column or use a custom function instead of just masking `name` or `address`.
After Complex Data Types is enabled on the App Settings page, the column type for struct columns for new data sources will display as `struct` in the Data Dictionary. (For data sources that are already in Immuta, users can edit the data source and change the column types for the appropriate columns from `jsonb` to `struct`.) Once struct fields are available, they can be searched, tagged, and used in masking policies. For example, a user could tag `name`, `ssn`, and `street` as PII instead of the entire `patient` column.
After a global or local policy masks the columns containing PII, users who do not meet the exception specified in the policy will see these values masked:
Note: Immuta uses the `>` delimiter to indicate that a field is nested instead of the `.` delimiter, since field and column names could include `.`.
Caveats
Struct columns with many fields
If users have struct columns with many fields, they will need to either
- create the data source against a cluster running Spark 3, or
- add `spark.debug.maxToStringFields 1000` to their Spark 2 cluster's configuration.
To get column information about a data source, Immuta executes a `DESCRIBE` call for the table. In this call, Spark returns a simple string representation of the schema for each column in the table. For the `patient` column above, the simple string would look like this:
`struct<name:string,ssn:string,age:int,address:struct<city:string,state:string,zipCode:string,street:text>>`
Immuta then parses this string into the following format for the data source's dictionary:
However, if the struct contains more than 25 fields, Spark truncates the string, causing the parser to fail and fall back to `jsonb`. Immuta will attempt to avoid this failure by increasing the number of fields allowed in the server-side property setting, `maxToStringFields`; however, this only works with clusters on a Spark 3 runtime. The `maxToStringFields` configuration in Spark 2 cannot be set through the ODBC driver and can only be set through the Spark configuration on the cluster with `spark.debug.maxToStringFields 1000` on cluster startup.
These policies hide entire rows or objects of data based on the policy being enforced; some of these policies require the data to be tagged as well.
Note: When building row-level policies with custom SQL statements, avoid using a column that is masked using randomized response in the SQL statement, as this can lead to different behavior depending on whether you're using Spark or Snowflake and may produce unexpected results.
These policies match a user attribute with a row/object/file attribute to determine if that row/object/file should be visible. This process uses a direct string match, so the user attribute would have to match exactly the data attribute in order to see that row of data.
For example, to restrict access to insurance claims data to the state for which the user's home office is located, you could build a policy such as this:
Only show rows where user possesses an attribute in `Office Location` that matches the value in the column `State` for everyone except when user is a member of group `Legal`.
In this case, the `Office Location` is retrieved by the identity management system as a user attribute or group. If the user's attribute (`Office Location`) was `Missouri`, rows containing the value `Missouri` in the `State` column in the data source would be the only rows visible to that user.
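Conceptually, the matching behaves like a simple equality filter between the user's attribute and the row's value (illustrative sketch only; enforcement happens in the underlying database, and the column names here are for the example above):

```python
# Illustrative sketch only: the user's attribute must exactly match the row's
# column value (a direct string match) for the row to be visible.
user_attributes = {"Office Location": "Missouri"}

rows = [
    {"claim_id": 1, "State": "Missouri"},
    {"claim_id": 2, "State": "Kansas"},
]

visible = [r for r in rows if r["State"] == user_attributes["Office Location"]]
print(visible)  # only the Missouri row is returned
```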
This policy can be thought of as a table "view" created automatically for the user based on the condition of the policy. For example, in the policy below, users who are not members of the `Admins` group will only see taxi rides where `passenger_count < 2`.
Only show rows where `public.us.taxis.passenger_count < 2` for everyone except when user is a member of group Admins.
You can put any valid SQL WHERE clause in the policy. See the Custom WHERE clause functions for a list of custom functions.
WHERE clause policy requirement
All columns referenced in the policy must have fully qualified names. Any column names that are unqualified (just the column name) will default to a column of the data source the policy is being applied to (if one matches the name).
These policies restrict access to rows/objects/files that fall within the time restrictions set in the policy. If a data source has time-based restriction policies, queries run against the data source by a user will only return rows/blobs with a date in its `event-time` column/attribute from within a certain range.
The time window is based on the event time you select when creating the data source. This value will come from a date/time column in relational sources.
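As a rough illustration of the filtering (conceptual only; the window length and event-time column come from the data source and policy configuration, and the values here are invented):

```python
from datetime import datetime, timedelta

# Conceptual sketch: only rows whose event time falls within the configured
# window (here, the last 30 days relative to "now") are returned.
now = datetime(2024, 1, 31)
window = timedelta(days=30)

rows = [
    {"id": 1, "event_time": datetime(2024, 1, 20)},
    {"id": 2, "event_time": datetime(2023, 11, 2)},
]

visible = [r for r in rows if now - r["event_time"] <= window]
print(visible)  # only the recent row is returned
```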
These policies return a limited percentage of the data, randomly sampled at query time, but it is the same sample for all users. For example, you could limit certain users to only 10% of the data. Immuta uses a hashing policy to return approximately 10% of the data, and the data returned will always be the same; however, the exact number of rows exposed depends on the distribution of high cardinality columns in the database and the hashing type available. Additionally, Immuta will adjust the data exposed when new rows are added or removed.
Best practice: row count
Immuta recommends you use a table with over 1,000 rows for the best results when using a data minimization policy.
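A rough sketch of why hash-based sampling yields a stable subset rather than a fresh random sample (illustrative only; Immuta's hashing and column choice differ by platform):

```python
import hashlib

# Illustrative sketch: a deterministic hash of a high-cardinality column decides
# whether a row is in the ~10% sample, so every user and every query sees the
# same rows.
def in_sample(key: str, percentage: int = 10) -> bool:
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % 100 < percentage

rows = [{"ride_id": str(i)} for i in range(1000)]
sampled = [r for r in rows if in_sample(r["ride_id"])]
print(len(sampled))  # approximately 100 of the 1000 rows
```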
Public preview: This feature is currently in public preview and available to all accounts.
If a global masking policy applies to a column, you can still use that masked column in a global row-level policy.
Consider the following policy examples:
Masking policy: Mask values in columns tagged `Country` for everyone except users in group `Admin`.
Row-level policy: Only show rows where user possesses an attribute in `OfficeLocation` that matches the value in column tagged `Country` for everyone.
Both of these policies use the `Country` tag to restrict access. Therefore, the masking policy and the row-level policy would apply to data source columns with the tag `Country` for users who are not in the `Admin` group.
Limitations
This feature is only available for Snowflake and Databricks Unity Catalog integrations.
This feature is only supported for global data policies, not local data policies.
This policy pairs with schema monitoring to mask newly added columns to data sources until data owners review and approve these changes from the requests tab of their profile page.
When this policy is activated by a governor, it will automatically be enforced on data sources that have the `New` tag applied to them.
To learn how to activate this policy, navigate to the tutorial.