When used with sensitive data discovery and Discovered tags, global policies are enforced on data sources as they are created.
For example, if an organization's compliance requirements state that access to personal information is restricted to users within the corresponding country or geographic region, they could write a global policy in Immuta that enforces that requirement before users have begun connecting data:
Only show rows where user possesses an attribute in OfficeLocation that matches the value in the column tagged Discovered.Country for everyone.
Best practices for writing global policies
The best practices outlined below will also appear in callouts within relevant tutorials.
Use schema monitoring to assess changes to data sources.
Activate the new column added templated global policy to protect potentially sensitive data before data owners can review new columns that have been added.
Write global policies using Discovered tags and attributes before connecting data.
Use global policies instead of local policies to manage data access.
In most cases, the goal is to share as much data as possible while still complying with privacy regulations. Immuta recommends pairing broad subscription policies with specific data policies to grant as much access as possible.
Use the minimum number of policies needed to achieve the required data privacy.
This section includes conceptual, reference, and how-to guides for creating policies. Some of these guides are provided below. See the left navigation for a complete list of resources.
When activated by data governors, these templated policies are automatically enforced on data sources that have had relevant tags applied to them by users or Sensitive Data Discovery.
This page outlines the types of templated policies users can manage in Immuta. To learn how to activate a templated policy, navigate to the tutorial.
Deprecation notice
Support for this policy has been deprecated.
HIPAA De-identification requires that:
18 direct identifiers are removed from data sources.
Data owners do not have actual knowledge that Data Users could re-identify individuals.
The HIPAA De-identification policy is a Global Policy included in Immuta by default. When combined with Sensitive Data Discovery, this policy automatically applies to relevant data sources. However, to fully comply with HIPAA Safe Harbor, data owners will need to certify that tags on data sources are accurate; after the policy is applied, multiple warnings indicate that certification is required, including a "Policy Certification Required" label on the data source and on the policy. Additionally, owners will receive a notification to certify the policy.
Note: The HIPAA De-identification Policy is staged by default and cannot be edited by any user. However, governors can clone this policy and then edit the clone.
The data owner and data user certifications serve as official acknowledgements that the users and data comply with HIPAA Safe Harbor:
Data Owner Certification: Data owners certify that all 18 identifiers have been correctly tagged and that they have no knowledge that the information in the data sources could be used by Data Users to identify individuals.
Data User Certification: Data Users agree to use the data only for the stated purpose of the project; refrain from sharing that data outside the project; not re-identify or take any steps to re-identify individuals' health information; notify the Project Owner or Governance team in the event that individuals have been identified or could be identified; and refrain from contacting individuals who might be identified.
HIPAA Expert Determination allows data scientists to increase the utility of datasets while still complying with strict HIPAA regulations that require a "very low" re-identification risk. It does this through a project, allowing the project owner to adjust the k-anonymization noise across multiple columns to gain utility. To learn more about Expert Determination, see Policy Adjustments and HIPAA Expert Determination in Immuta.
Deprecation notice
Support for this policy has been deprecated.
The CCPA policy is a Global Policy included in Immuta by default. When combined with Sensitive Data Discovery, this policy automatically applies to relevant data sources.
CCPA sets forth two routes to achieve compliance:
businesses processing consumer personal information abide by all applicable restrictions (e.g., purpose restrictions or consumer rights), and/or
businesses transform consumer personal information into de-identified or aggregate data so that restrictions, such as consumer rights, become inapplicable.
Under CCPA, de-identification is successfully performed if data “cannot reasonably identify, relate to, describe, be capable of being associated with, or be linked, directly or indirectly, to a particular consumer,” provided that an organization that uses de-identified information
implements technical safeguards that prohibit re-identification of the consumer to whom the information may pertain,
implements business processes that specifically prohibit re-identification of the information,
implements business processes to prevent inadvertent release of de-identified information, and
makes no attempt to re-identify the information.
Immuta’s CCPA de-identification policy was created to comply with this definition and consists of 4 main components (each of which addresses at least one prong of CCPA's de-identification test):
a self-executing data policy that applies a de-identification technique that serves as a technical safeguard to prohibit re-identification of the consumer.
certifications by the data owner. These serve as an official acknowledgement that the covered business has initially appropriately labeled consumer information and is not aware that the data user is in position to re-identify consumers prior to the re-use of the data. This component is crucial to prevent inadvertent release of de-identified information.
certifications by the data user. These serve as official acknowledgements that the data user is subject to business processes that prohibit re-identification and inadvertent release of de-identified information to third parties.
functionalities to enable real-time monitoring and auditing of query-based access to data. These aim to deter and detect attempts to re-identify.
Note: The language used in certifications can be customized to meet specific needs of customers, such as when customers want to use specific language found in data-sharing agreements.
The data policy is made of four rules.
The first rule ensures that access to data can only happen for two types of use cases: those that require access to de-identified data (Re-identification Prohibited.CCPA) and those that require access to identifying data (Use Case Outside De-identification). Data Users are then strictly segmented by use case through attribute-based access control and purpose acknowledgement.
The second rule nulls direct identifiers and undetermined identifiers for Data Users with access to de-identified data.
The third rule generalizes indirect identifiers with k-anonymization so that the re-identifiability probability is always equal to or below 5% for Data Users with access to de-identified data. Note: Immuta has analyzed industry standards and thresholds recommended by statistical methods experts and selected the most restrictive value of 5% for the maximum re-identifiability probability.
The fourth rule applies the first three rules to all data sources containing columns tagged Discovered.Identifier Direct, Discovered.Identifier Indirect, or Discovered.Identifier Undetermined.
Immuta's CCPA policy addresses both direct and indirect identifiers because robust de-identification requires considering all types of identifying attributes, and the identifiers are masked differently to maximize utility. With this combination of masking techniques, the data re-identification risk (the amount of re-identification possible for each data source) meets CCPA's de-identification criteria.
Note: The CCPA policy is staged by default and cannot be edited by any user. However, governors can clone this policy and then edit the clone; after customization, customers will have to verify that the overall re-identification risk is still acceptable.
When paired with Schema Monitoring, this policy masks newly added columns to data sources until data owners review and approve these changes from the Requests tab of their profile page.
Immuta policies restrict access to data and apply to data sources at the local or global level:
Local policies refer to specific tables.
Global policies refer to tags instead of specific tables, allowing you to build a single policy that impacts a large percentage of your data rather than building separate local policies for each table.
Consider the following local and global policy examples:
Local policy example: Mask using hashing the values in the columns ssn, last_name, and home_address.
Global policy example: Mask using hashing values in columns tagged PII on all data sources.
In this scenario, the local policy would mask the sensitive columns specified in the data policy on the single data source it was created for. If only using local policies, a data owner or governor would have to write that policy for every data source on which they wanted to mask sensitive data. The global policy would mask any column on any data source that had the PII tag applied. Because this global policy automatically applies to those qualifying data sources, the policy only needs to be written once.
Consequently, global policies are the best practice for using Immuta: they provide the most scalability and manageability of access control.
For details about subscription and data policy types, see the subscription policies reference guide or data policies reference guide.
Best Practice: Access Controls
In most cases, the goal is to share as much data as possible while still complying with privacy regulations. Immuta recommends pairing broad subscription policies with specific data policies to grant as much access as possible.
Global policies can be authored by users with GOVERNANCE permission or data owners. Data owners' global policies will only attach to data sources they own (also called restricted global policies), even if the tags their policies target go beyond their data sources.
Immuta policies compare data and user attributes at query-time to determine whether or not the querying user should access the data.
Data attributes are information about the data within the data source. These attributes are then matched against policy logic to determine if a row or object should be visible to a specific user. This matching is usually done between the data attribute and the user attribute.
For example, consider the policy below:
Only show rows where user is a member of a group that matches the value in the column tagged Department.
The data attribute (the value in the column tagged Department) is matched against the user attribute (their group) to determine whether or not rows will be visible to the user accessing the data.
User attributes are values connected to specific Immuta user accounts and are used in policies to restrict access to data. These attributes fall into three categories:
attributes: Attributes are custom tags that are applied to users to restrict what data users can see. Attributes can be added manually or mapped from an external catalog.
groups: Groups allow System Administrators to group sets of users together. Users can belong to any number of groups and can be added or removed from groups at any time. Like attributes, groups can be used to restrict what data a set of users has access to.
permissions: Permissions control what actions a user can take in Immuta, both API and UI actions. Permissions can be added and removed from user accounts by a System Administrator (an Immuta user with the USER_ADMIN permission); however, the permissions themselves are managed by Immuta, and the actions associated with the permissions cannot be altered.
The table below outlines the various states of global policies in Immuta.
Note: Policies that contain the circumstance When selected by data owners cannot be staged.
Data owners who are not governors can write restricted global policies for data sources that they own. With this feature, data owners have higher-level policy controls and can write and enforce policies on multiple data sources simultaneously, eliminating the need to write redundant local policies on data sources.
Unlike global policies, the application of these policies is restricted to the data sources owned by the users or groups specified in the policy and will change as users' ownerships change.
Data owners and governors can access a data source's policies on the policies tab of the data source. There, these users can view existing policies or apply new policies to the data source with the following features:
Apply Existing Policies Button: By clicking this button in the top right corner of the policies tab, users can search for and apply existing policies to the data source from other data sources or global policies.
Subscription Policy Builder: In this section, users can determine who may access the data source. If a subscription policy has already been set by a global policy, a notification and a Disable button appear at the bottom of this section. Users can click the Disable button to make changes to the subscription policy.
Data Policy Builder: In this section, users can create policies to enforce privacy controls on the data source and see a list of data policies currently applied to the data source.
All changes made to policies by data owners or governors appear in a collapsible Activity panel on the right side of the screen.
The information recorded in the activity panel includes when the data source was created, the name and type of the policy, when the policy was applied or changed, and if the policy is in conflict on the data source. Additionally, global policy changes are identified by the governance icon; all other updates are labeled by the data sources icon.
Categorical Randomized Response: Categorical values are randomized by replacing a value with some non-zero probability. Not all values are randomized, and the consumer of the data is not told which values are randomized and which ones remain unchanged. Values are replaced by selecting a different value uniformly at random from among all other values. For example, if a randomized response policy were applied to a “state” column, a person’s residency could flip from Maryland to Virginia, which would provide ambiguity to the actual state of residency. This policy is appropriate when obscuring sensitive values such as medical diagnosis or survey responses.
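As a rough illustration of the mechanism (not Immuta's implementation; the flip probability and domain handling below are assumptions for the sketch), categorical randomized response can be expressed as:

```python
import random

def randomized_response(value, domain, flip_probability=0.2, rng=None):
    """With probability `flip_probability`, replace `value` with a different
    domain value chosen uniformly at random; otherwise return it unchanged.
    The consumer is never told which values were flipped."""
    rng = rng or random.Random()
    if rng.random() < flip_probability:
        return rng.choice([v for v in domain if v != value])
    return value

# A Maryland resident may be reported as Virginia, giving deniability.
states = ["Maryland", "Virginia", "Ohio", "Texas"]
masked = randomized_response("Maryland", states)
```

Because only a known fraction of values is flipped, population-level statistics can still be estimated even though no single row can be trusted.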
Custom Function: This function uses SQL functions native to the underlying database to transform the values in a column. This can be used in numerous use cases, but notional examples include top-coding to some upper limit, a custom hash function, and string manipulation.
K-Anonymization: Masking through k-anonymization is a distinct policy that can operate over multiple attributes. A k-anonymization policy applies rounding and NULL masking policies over multiple columns so that the columns contain at least “K” records, where K is a positive integer. As a result, attributes will only be disclosed when there is a sufficient number of observations. This policy is appropriate to apply over indirect identifiers, such as zip code, gender, or age. Generally, each of these identifiers is not uniquely linked to an individual, but when combined with other identifiers can be associated with a single person. Applying k-anonymization to these attributes provides the anonymity of crowds so that individual rows are made indistinct from each other, reducing the re-identification risk by making it unclear which record corresponds to a specific person. Immuta supports k-anonymization of text, numeric, and time-based data types.
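A minimal sketch of the suppression half of this idea (Immuta also applies rounding/generalization; the helper below only NULLs rare combinations, and its names are illustrative):

```python
from collections import Counter

def k_anonymize(rows, quasi_identifiers, k):
    """NULL out any quasi-identifier combination that occurs fewer than k
    times, so every disclosed combination is shared by at least k records."""
    key = lambda row: tuple(row[c] for c in quasi_identifiers)
    counts = Counter(key(r) for r in rows)
    masked_rows = []
    for row in rows:
        masked = dict(row)
        if counts[key(row)] < k:
            for c in quasi_identifiers:
                masked[c] = None  # too rare to disclose safely
        masked_rows.append(masked)
    return masked_rows

rows = [
    {"zip": "20001", "age": 34, "diagnosis": "A"},
    {"zip": "20001", "age": 34, "diagnosis": "B"},
    {"zip": "99999", "age": 70, "diagnosis": "C"},  # unique combination
]
anonymized = k_anonymize(rows, ["zip", "age"], k=2)
```

In the example, the unique (zip, age) pair is suppressed while the pair shared by two records survives, so every disclosed combination is shared by at least k people.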
Mask with Format Preserving Masking: This function masks using a reversible function but does so in a way that the underlying structure of a value is preserved. This means the length and type of a value are maintained. This is appropriate when the masked value should appear in the same format as the underlying value. Examples of this would include social security numbers and credit card numbers where Mask with Format Preserving Masking would return masked values in a format consistent with credit cards or social security numbers, respectively. There is larger overhead with this masking type, and it should really only be used when format is critically valuable, such as situations when an engineer is building an application where downstream systems validate content. In almost all analytical use cases, format should not matter.
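To illustrate what "format preserving" means, here is a toy sketch that keeps length, separators, and digit positions intact. Note the caveats: unlike Immuta's function it is not reversible, and production systems use dedicated FPE schemes (such as NIST's FF1) rather than this keyed-digest trick.

```python
import hashlib
import hmac

def format_preserving_mask(value, key=b"demo-key"):
    """Replace each digit with a keyed pseudo-random digit while keeping
    non-digit characters (dashes, spaces) and overall length intact."""
    out = []
    for i, ch in enumerate(value):
        if ch.isdigit():
            digest = hmac.new(key, f"{i}:{ch}".encode(), hashlib.sha256).digest()
            out.append(str(digest[0] % 10))
        else:
            out.append(ch)  # preserve separators so the format survives
    return "".join(out)

masked_ssn = format_preserving_mask("123-45-6789")  # still looks like an SSN
```

A downstream validator that checks "three digits, dash, two digits, dash, four digits" would still accept the masked value.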
Mask with Reversibility: This function masks in a way that an authorized user can “unmask” a value and reveal the value to an authorized user. Masking with Reversibility is appropriate when there is a need to obscure a value while allowing an authorized user to recover the underlying value. All of the same use cases and caveats that apply to Replace with Hashing apply to this function. Reversibly masked fields can leak the length of their contents, so it is important to consider whether or not this may be an attack vector for applications involving its use.
Randomized Response: This function randomizes the displayed value to make the true value uncertain, but maintains some analytic utility. The randomization is applied differently to both categorical and quantitative values. In both cases, the noise can be increased to enhance privacy or reduced to preserve more analytic value.
Datetime and Numeric Randomized Response: Numeric and datetime randomized response apply a tunable, unbiased noise to the nominal value. This noise can obscure the underlying value, but the impact of the noise is reduced in aggregate. This masking type can be applied to sensitive numerical attributes, such as salary, age, or treatment dates.
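A sketch of how unbiased noise obscures single values yet washes out in aggregate (the uniform distribution and scale are assumptions for illustration, not Immuta's exact mechanism):

```python
import random

def noisy_value(value, scale, rng):
    """Add zero-mean uniform noise in [-scale, +scale]. Any single masked
    value is uncertain, but because the noise is unbiased, averages over
    many rows stay close to the true mean."""
    return value + rng.uniform(-scale, scale)

rng = random.Random(42)
salaries = [70000] * 5000
masked = [noisy_value(s, 10000, rng) for s in salaries]
approx_mean = sum(masked) / len(masked)  # close to 70000 in aggregate
```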
Replace with Constant: This function replaces any value in a column with a specified value. The underlying data will appear to be a constant. This masking carries the same privacy and utility guarantees as Replace with NULL. Apply this policy to strings that require a specific repeated value.
Replace with Hashing: This function masks the values with an irreversible hash, which is consistent for the same value throughout the data source, so you can count or track the specific values, but not know the true raw value. This is appropriate for cases where the underlying value is sensitive, but there is a need to segment the population. Such attributes could be addresses, time segments, or countries. It is important to note that hashing is susceptible to inference attacks based on prior knowledge of the population distribution. For example, if “state” is hashed, and the dataset is a sample across the United States, then an adversary could assume that the most frequently occurring hash value is California. As such, it's most secure to use the hashing mask on attributes that are evenly distributed across a population.
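The core property (consistency without reversibility) can be sketched as follows; the salt parameter is an illustrative assumption, not a documented Immuta setting:

```python
import hashlib

def mask_with_hash(value, salt="per-data-source-salt"):
    """Irreversible, consistent mask: identical inputs always produce the
    same digest, so masked columns still support grouping and counting."""
    return hashlib.sha256((salt + str(value)).encode("utf-8")).hexdigest()

# Consistent for equal values, distinct for different ones:
assert mask_with_hash("California") == mask_with_hash("California")
assert mask_with_hash("California") != mask_with_hash("Texas")
```

Note that a salt does not defeat the frequency-inference attack described above: hashing preserves value counts, which is exactly why evenly distributed attributes are the safest candidates.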
Replace with Null: This function replaces any value in a column with NULL. This removes any identifiability from the column and removes all utility of the data. Apply this policy to numeric or text attributes that have a high re-identification risk but little analytic value (names and personal identifiers).
Replace with REGEX: This function uses a regular expression to replace all or a portion of an attribute. REGEX replacement allows for some groupings to be maintained, while providing greater ambiguity to the disclosed value. This masking technique is useful when the underlying data has some consistent structure, the remasked underlying data represents some re-identification risk, and a regular expression can be used to mask the underlying data to be less identifiable.
Rounding: Immuta’s rounding policy reduces, rounds, or truncates numeric or datetime values to a fixed precision. This policy is appropriate when it is important to maintain analytic value of a quantity, but not at its native precision.
Date/Time Rounding: This policy truncates the precision of a datetime value to a user-defined precision. The supported precisions are minute, hour, day, month, and year.
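A sketch of datetime truncation at each supported precision (a hypothetical helper, not Immuta's implementation):

```python
from datetime import datetime

def truncate_to(dt, precision):
    """Truncate a datetime to minute, hour, day, month, or year precision
    by dropping every finer-grained field."""
    fields = ["year", "month", "day", "hour", "minute"]
    keep = fields[: fields.index(precision) + 1]
    kwargs = {f: getattr(dt, f) for f in keep}
    kwargs.setdefault("month", 1)  # datetime() requires month and day
    kwargs.setdefault("day", 1)
    return datetime(**kwargs)

visit = datetime(2024, 5, 17, 13, 42, 9)
by_hour = truncate_to(visit, "hour")   # 2024-05-17 13:00:00
by_year = truncate_to(visit, "year")   # 2024-01-01 00:00:00
```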
Numeric Rounding: This policy maps the nominal value to the ceiling of some specified bandwidth. Immuta has a recommended bandwidth based on the Freedman-Diaconis rule.
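Mapping to the ceiling of a bandwidth works as sketched below; the positional quartile computation in the Freedman-Diaconis helper is a simplified assumption for illustration:

```python
import math

def round_to_bandwidth(value, bandwidth):
    """Map a value to the ceiling of its bandwidth bucket,
    e.g. with bandwidth=10: 12 -> 20, 20 -> 20, 21 -> 30."""
    return math.ceil(value / bandwidth) * bandwidth

def freedman_diaconis_bandwidth(values):
    """Freedman-Diaconis rule: 2 * IQR / n^(1/3), using simple
    positional quartiles for illustration."""
    vs = sorted(values)
    n = len(vs)
    iqr = vs[(3 * n) // 4] - vs[n // 4]
    return 2 * iqr / n ** (1 / 3)

bw = freedman_diaconis_bandwidth(list(range(1, 101)))  # about 21.5
rounded = round_to_bandwidth(37, 10)                   # 40
```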
The masking functions described above can be implemented in a variety of use cases. Use the table below to determine the circumstance under which a function should be used.
Applicable to Numeric Data: The masking function can be applied to numeric values.
Column-Value Determinism: Repeated values in the same column are masked with the same output.
Introduces NULLs: The masking function may, under normal or irregular circumstances, return NULL values.
Performance: How performant the masking function will be (10/10 being the best).
Preserves Appearance: The output masked value resembles the valid column values. For example, a masking function would output phone numbers when given phone numbers. Here, NULL values are not counted against this property.
Preserves Averages: The average of the masked values (avg(mask(v))) will be near the average of the values in the clear (avg(v)).
Suitable for De-Identification: The masking function can be used to obscure record identifiers, hiding data subject identities and preventing future linking against other identified data.
Provides Deniability of Record Content: A (possibly identified) person can plausibly attribute the appearance of the value to the masking function. This is a desirable property of masking functions that retain analytic utility, as such functions must necessarily leak information about the original value. Fields masked with these functions provide strong protections against value inference attacks.
Preserves Equality and Grouping: Each value will be masked to the same value consistently without colliding with others. Therefore, equal values remain equal under masking while unequal values remain unequal, preserving equality. This implies that counting statistics are also preserved.
Preserves Message Length: The length of the masked value is equal to the length of the original value.
Preserves Range Statistics: The number of data values falling in a particular range is preserved. For strings, this can be interpreted as the number of strings falling between any two values by alphabetical order.
Preserves Value Locality: The output will remain near the input, which may be important for analytic purposes.
Reversible: Qualified individuals can reveal the original input value.
Masking Policy Support by Integration
Since Global Policies can apply masking across multiple different databases at once, Immuta will revert to NULLing a column when the applied masking policy is unsupported in that integration.
See the integration support matrix for an outline of masking policies supported by each integration.
Once a subscription policy is applied to a data source, Immuta users must be subscribed to that data source to access the data. The subscription policy determines who can request access and has one of four possible restriction levels:
Anyone: Users will automatically be granted access (least restricted).
Anyone who asks (and is approved): Users will need to request access and be granted permission by the configured approvers (moderately restricted).
Users with specific groups or attributes: Only users with the specified groups/attributes will be able to see the data source and subscribe (moderately restricted). This restriction type is referred to as an attribute-based access control (ABAC) global subscription policy throughout this page.
Individual users you select: The data source will not appear in search results; data owners must manually add/remove users (most restricted).
In some cases, multiple global subscription policies created by a data governor may apply to a single data source. For example, policies with different restriction levels could apply simultaneously:
Anyone
Anyone who asks (and is approved)
Individual users you select
When such conflicts occur, data owners can manually choose which policy will apply. To do this, the data owner must
Disable the applied global subscription policy in the policies tab on a data source.
Provide a reason the global policy should be disabled.
Select which conflicting global subscription policy they want to apply.
You can build ABAC global subscription policies in Immuta by selecting the users with specific groups or attributes restriction level. These policies will automatically subscribe users who possess the user entitlements specified in the policy.
In some cases, multiple ABAC global policies may apply to a single data source. Rather than allowing the two policies to conflict, Immuta combines the conditions of the subscription policies.
When creating a users with specific groups or attributes global policy, data governors select whether the global subscription policy should be
Always Required: Users must meet all the conditions outlined in each policy to get access (i.e., the conditions of the policies are combined with AND).
Share Responsibility: Users need to meet the condition of at least one policy that applies (i.e., the conditions of the policies are combined with OR).
Consider the following global subscription policies created by a data governor:
Policy 1: (Always Required) Allow users to subscribe to the data source when user is a member of group HR; otherwise, allow users to subscribe when approved by an Owner of the data source.
Policy 2: (Shared Responsibility) Allow users to subscribe to the data source when user is a member of group Analytics; otherwise, allow users to subscribe when approved by anyone with permission Governance.
Policy 3: (Shared Responsibility) Allow users to subscribe to the data source when user has attribute Office Location Ohio; otherwise, allow users to subscribe when approved by anyone with permission Audit.
If a data owner creates a data source and all of these policies apply, the subscription policies are combined, so the subscribing user must meet the requirements of the Always Required policy and the requirements of at least one of the Shared Responsibility policies to get access to the data source.
By default, users must meet all the conditions outlined in each global subscription policy that has been combined on a data source to get access (i.e., the conditions of the policies are combined with AND). However, governors can opt to check the Shared Responsibility box if they would like users to meet the condition of at least one policy that applies (i.e., the conditions of the policies are combined with OR).
Once enabled on a data source, combined global subscription policies can be edited and disabled by data owners.
Users can create more complex policies using functions and variables in the advanced DSL policy builder than the subscription policy builder allows.
Users can specify more than one level of criteria for the ABAC global subscription policy. Specifically, users can combine manual approvals with an ABAC subscription policy.
After a user selects Users with Specific Groups/Attributes in the global subscription policy builder, they can also enable Request Approval to Access. Selecting this option allows users to request access to a data source and be manually approved by a specified user, even if the requesting user does not meet the group or attribute conditions in the policy. By allowing an approval workflow as an alternative method of access if a user does not meet the ABAC conditions, this feature can reduce the number of policies required and allow more flexibility in approval workflows.
Manually added users will now see data regardless of meeting policy conditions.
In previous versions of Immuta, users who had been manually subscribed to a data source could not see any data. Now, these manually added users will see data even though they don't meet the ABAC conditions in the policy. Governors can change the behavior by switching the subscription policy to auto-subscribe (which removes any users who don't meet the subscription policy) or by adding a data policy that redacts rows for users who do not have the groups or attributes specified in the subscription policy.
Beyond Request Approval to Access, these options are also available:
Allow Data Source Discovery: Users can still see that this data source exists in Immuta, even if they do not have these attributes and groups. This option will automatically be selected and locked if users select Request Approval to Access.
Note: When these policies merge, if one or more policies do not have discovery enabled, then approvals will be removed from the final merged policy.
Require Manual Subscription: Users will not be automatically subscribed to the data source. They must manually subscribe to gain access.
Governors must also select how this policy should be merged with other policies that apply to the same data source:
Always Required: Users must always meet this condition, no matter what other policies apply (it will be AND'ed with other policies that apply to the data source) or
Share Responsibility: Users need to meet the conditions in this policy OR another share responsibility policy that applies to the data source. (It will be OR'ed with other shared responsibility policies that apply.)
Once a user is subscribed to a data source, the data policies that are applied to that data source determine what data the user sees.
For all data policies, you must establish the conditions for which they will be enforced. Immuta allows you to append multiple conditions to the data. Those conditions are based on user attributes and groups (which can come from multiple identity management systems and applied as conditions in the same policy), or purposes they are acting under through Immuta projects.
Conditions can be directed as exclusionary or inclusionary, depending on the policy that's being enforced:
exclusionary condition example: Mask using hashing values in columns tagged PII on all data sources for everyone except users in the group AUDIT.
Certain policies are not supported, or supported with caveats*, depending on the integration:
*Supported with Caveats:
On Databricks data sources, joins will not be allowed on data protected with replace with NULL/constant policies.
On Trino data sources, the Immuta functions @iam and @interpolatedComparison for WHERE clause policies can block the creation of views.
For example, governors could mask values using hashing for users acting under a specified purpose while masking those same values by making null for everyone else who accesses the data.
This variation can be created by selecting the for everyone who condition (when available) from the condition dropdown menus and then completing the Otherwise clause.
For example, if the purpose Research included Marketing, Product, and Onboarding as sub-purposes, a governor could write the following global policy:
Limit usage to purpose(s) Research for everyone on data sources tagged PHI.
This hierarchy allows you to create this as a single purpose instead of creating separate purposes, which must then each be added to policies as they evolve.
Now, any user acting under the purpose or sub-purpose of Research (whether Research.Marketing or Research.Onboarding) will meet the criteria of this policy. Consequently, purpose hierarchies eliminate the need for a governor to rewrite these global policies when sub-purposes are added or removed. Furthermore, if new projects with new Research purposes are added, for example, the relevant global policy will automatically be enforced.
Masking policies hide values in data, providing various levels of utility while still preserving privacy. In order to create masking policies on object-backed data sources, you must create Data Dictionary entries, and the data format must be CSV, TSV, or JSON.
This policy masks the values with an irreversible sha256 hash, which is consistent for the same value throughout the data source, so you can count or track the specific values, but not know the true raw value.
This policy makes values null, removing any utility of the data the policy applies to.
With this policy, you can replace the values with the same constant value you choose, such as 'Redacted', removing any utility of that data.
This policy is similar to replacing with a constant, but it provides more utility because you can retain portions of the true value. For example, the following regex rule would mask the final digits of an IP address:
Mask using a regex \d+$ the value in the column ip_address for everyone.
In this case, the regular expression \d+$ works as follows:
\d matches a digit (equivalent to [0-9]).
+ matches the preceding token between one and unlimited times, as many times as possible, giving back as needed (greedy).
$ asserts position at the end of the string, or before the line terminator at the end of the string (if any).
This ensures we capture the last digit(s) after the last . in the IP address. We can then enter the replacement for what we captured, which in this case is XXX. So the outcome of the policy would look like this: 164.16.13.XXX
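The transformation this policy performs can be sketched in plain Python with the standard `re` module. The actual policy runs inside the underlying database; the function name here is illustrative only:

```python
import re

def mask_ip_last_octet(ip: str, replacement: str = "XXX") -> str:
    """Replace the trailing digits (the last octet) of an IP address."""
    # \d+$ anchors on the final run of digits at the end of the string.
    return re.sub(r"\d+$", replacement, ip)

print(mask_ip_last_octet("164.16.13.37"))  # 164.16.13.XXX
```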
This is a technique to hide precision from numeric values while providing more utility than simply hashing. For example, you could remove precision from a geospatial coordinate. You can also use this type of policy to remove precision from dates and times by rounding to the nearest hour, day, month, or year.
Note: The user receiving the unmasking request must send the unmasked value to the requester.
With Reversible Masking, the raw values are switched out with consistent values to allow analysis without revealing the underlying sensitive data. The direct identifier is replaced with a token that can still be tracked or counted.
This option masks the value, but preserves the length and type of the value.
Preserving the data format is important if the format has some relevance to the analysis at hand. For example, if you need to retain the integer column type or if the first 6 digits of a 12-digit number have an important meaning.
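As a toy illustration of the idea (not Immuta's actual algorithm, which is cryptographic), a mask can keep each character's class and the value's length while replacing the content. The function name and seeding scheme here are assumptions made for the sketch:

```python
import random

LETTERS = "abcdefghijklmnopqrstuvwxyz"

def mask_preserving_format(value: str, seed: str = "demo-key") -> str:
    """Toy mask: keep the value's length and character classes."""
    rng = random.Random(f"{seed}:{value}")  # deterministic per input value
    out = []
    for ch in value:
        if ch.isdigit():
            out.append(str(rng.randrange(10)))      # digit -> digit
        elif ch.isalpha():
            repl = rng.choice(LETTERS)              # letter -> letter
            out.append(repl.upper() if ch.isupper() else repl)
        else:
            out.append(ch)                          # keep separators as-is
    return "".join(out)

masked = mask_preserving_format("4111-1111-1111-1111")
# Same length, digits stay digits, and the dashes are preserved.
```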
This option uses functions native to the underlying database to transform the column.
Limitations
The masking functions are executed against the remote database directly. A poorly written function could lead to poor quality results, data leaks, and performance hits.
Using custom functions can result in changes to the original data type. In order to prevent query errors, you must ensure that you cast this result back to the original type.
The function must be valid for the data type of the selected column. If it is not:
Local policies will error and show a message that the function is not valid.
Global policies will error and change to the default masking type (hashing for text and NULL for all others).
For all of the policies above, at both the local and global policy levels, you can conditionally mask a value based on a value in another column. This allows you to build a policy such as "Mask bank account number where country = 'USA'" instead of stating that the bank account number should always be masked.
K-anonymity is measured by grouping records in a data source that contain the same values for a common set of quasi identifiers (QIs) - publicly known attributes (such as postal codes, dates of birth, or gender) that are consistently, but ambiguously, associated with an individual.
The k-anonymity of a data source is defined as the number of records within the least populated cohort, which means that the QIs of any single record cannot be distinguished from at least k other records. In this way, a record with QIs cannot be uniquely associated with any one individual in a data source, provided k is greater than 1.
In Immuta, masking with k-anonymization examines pairs of values across columns and hides groups that do not appear at least the specified number of times (k). For example, if one column contains street numbers and another contains street names, the group 123, "Main Street" would probably appear frequently, while the group 123, "Diamondback Drive" would probably show up much less often. Since the second group appears infrequently, the values could potentially identify someone, so this group would be masked.
After the fingerprint service identifies columns with a low number of distinct values, users will only be able to select those columns when building the policy. Users can either use a minimum group size (k) given by the fingerprint or manually select the value of k.
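The masking rule described above (suppress quasi-identifier groups that appear fewer than k times) can be sketched in a few lines of Python; the function and field names are illustrative, not Immuta's implementation:

```python
from collections import Counter

def k_anonymize(rows, cols, k):
    """Suppress quasi-identifier groups that appear fewer than k times."""
    counts = Counter(tuple(r[c] for c in cols) for r in rows)
    masked = []
    for r in rows:
        group = tuple(r[c] for c in cols)
        if counts[group] < k:
            r = {**r, **{c: None for c in cols}}  # mask the rare group
        masked.append(r)
    return masked

rows = [
    {"street_no": 123, "street": "Main Street"},
    {"street_no": 123, "street": "Main Street"},
    {"street_no": 123, "street": "Diamondback Drive"},
]
result = k_anonymize(rows, ["street_no", "street"], k=2)
# The unique (123, "Diamondback Drive") group is suppressed; the rest survive.
```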
Masking multiple columns with k-anonymization
Governors can write global data policies using k-anonymization in the global data policy builder.
When this global policy is applied to data sources, it will mask all columns matching the specified tag.
Applying k-anonymization over disjoint sets of columns in separate policies does not guarantee k-anonymization over their union.
If you select multiple columns to mask with k-anonymization in the same policy, the policy is driven by how many times these values appear together. If the groups appear fewer than k times, they will be masked.
For example, suppose Policy A was applied to a data source:
Policy A: Mask with k-anonymization the values in the columns gender and state requiring a group size of at least 2 for everyone
The values would then be masked like this:
Note: Selecting many columns to mask with k-anonymization increases the processing that must occur to calculate the policy, so saving the policy may take time.
However, if you select to mask the same columns with k-anonymization in separate policies, Policy C and Policy D,
Policy C: Mask with k-anonymization the values in the column gender requiring a group size of at least 2 for everyone
Policy D: Mask with k-anonymization the values in the column state requiring a group size of at least 2 for everyone
the values in the columns will be masked separately instead of as groups. Therefore, the values in that same data source would be masked like this:
All members of this cohort have indicated substance abuse, sensitive personal information that could have damaging consequences, and, even though direct identifiers have been removed and k-anonymization has been applied, outsiders could infer substance abuse for an individual if they knew a male welder in this zip code.
In this scenario, using randomized response would change some of the Y's in substance_abuse to N's and vice versa; consequently, outsiders couldn't be sure of the displayed value of substance_abuse given in any individual row, as they wouldn't know which rows had changed.
How the randomization works
Immuta applies a random number generator (RNG) that is seeded with some fixed attributes of the data source, column, backing technology, and the value of the high cardinality column, an approach that simulates cached randomness without having to actually cache anything.
For numeric data, Immuta uses the RNG to add a random shift from a 0-centered Laplace distribution with the standard deviation specified in the policy configuration. For most purposes, knowing the distribution is not important, but the net effect is that on average the reported values should be the true value plus or minus the specified deviation value.
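A plausible pure-Python sketch of the approach just described (seeded randomness plus Laplace noise with the specified standard deviation); the seeding format and function name are assumptions, not Immuta's internal code:

```python
import math
import random

def randomized_numeric(value, column_seed, deviation):
    """Shift a value by seeded Laplace noise whose std dev equals `deviation`."""
    # Seeding on fixed attributes plus the value simulates cached randomness.
    rng = random.Random(f"{column_seed}:{value}")
    u = rng.random() - 0.5                # uniform in [-0.5, 0.5)
    b = deviation / math.sqrt(2)          # Laplace scale: std dev = sqrt(2)*b
    # Inverse-CDF sampling of a 0-centered Laplace distribution.
    noise = -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return value + noise

# The same inputs always produce the same reported value.
reported = randomized_numeric(100.0, "claims.amount", deviation=5.0)
```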
Preserving data utility
Using randomized response doesn't destroy the data because data is only randomized slightly; aggregate utility can be preserved because analysts know how and what proportion of the values will change. Through this technique, values can be interpreted as hints, signals, or suggestions of the truth, but it is much harder to reason about individual rows.
Additionally, randomized response gives deniability of record content, not dataset participation, so individual rows can be displayed.
In some cases, you may want several different masking policies applied to the same column through Otherwise policies. To build these policies, select everyone who instead of everyone or everyone except. After you specify who the masking policy applies to, select how it applies to everyone else in the Otherwise condition.
You can add and remove tags in Otherwise conditions for global policies (unlike local policy Otherwise conditions), as illustrated above; however, all tags or regular expressions included in the initial everyone who rule must be included in an everyone or everyone except rule in the additional clauses.
Feature limitations
Masking struct and array columns is only available for Databricks data sources.
Immuta only supports Parquet and Delta table types.
If the table is queried through the Query Engine instead of natively in Databricks, any struct fields that have an applied masking policy will have the root struct column made null.
Spark supports a class of data types called complex types, which can represent multiple data values in a single column. Immuta supports masking fields within array and struct columns:
array: an ordered collection of elements
struct: a collection of elements that are primitive or complex types
Without this feature enabled, the struct and array columns of a data source default to jsonb in the Data Dictionary, and the masking policies that users can apply to jsonb columns are limited. For example, if a user wanted to mask PII inside the column patient in the image below, they would have to apply null masking to the entire column or use a custom function instead of just masking name or address.
After a global or local policy masks the columns containing PII, users who do not meet the exception specified in the policy will see these values masked:
Note: Immuta uses the > delimiter to indicate that a field is nested instead of the . delimiter, since field and column names could themselves include a period (.).
Caveats
Struct Columns with Many Fields
If users have struct columns with many fields, they will need to either create the data source against a cluster running Spark 3 or add spark.debug.maxToStringFields 1000 to their Spark 2 cluster's configuration.
To get column information about a data source, Immuta executes a DESCRIBE call for the table. In this call, Spark returns a simple string representation of the schema for each column in the table. For the patient column above, the simple string would look like this:
struct<name:string,ssn:string,age:int,address:struct<city:string,state:string,zipCode:string,street:text>>
Immuta then parses this string into the following format for the data source's dictionary:
However, if the struct contains more than 25 fields, Spark truncates the string, causing the parser to fail and fall back to jsonb. Immuta will attempt to avoid this failure by increasing the number of fields allowed in the server-side property setting, maxToStringFields; however, this only works with clusters on a Spark 3 runtime. The maxToStringFields configuration in Spark 2 cannot be set through the ODBC driver and can only be set through the Spark configuration on the cluster with spark.debug.maxToStringFields 1000 on cluster startup.
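A minimal sketch of parsing such a simple string into a nested structure (Immuta's actual parser may differ):

```python
def parse_struct(simple: str):
    """Parse a Spark simple-string struct schema into nested dicts."""
    def split_fields(body):
        # Split on commas at nesting depth 0 only.
        fields, depth, start = [], 0, 0
        for i, ch in enumerate(body):
            if ch == "<":
                depth += 1
            elif ch == ">":
                depth -= 1
            elif ch == "," and depth == 0:
                fields.append(body[start:i])
                start = i + 1
        fields.append(body[start:])
        return fields

    if simple.startswith("struct<") and simple.endswith(">"):
        body = simple[len("struct<"):-1]
        return {
            name: parse_struct(ftype)
            for name, ftype in (f.split(":", 1) for f in split_fields(body))
        }
    return simple  # primitive type name

schema = parse_struct(
    "struct<name:string,ssn:string,age:int,"
    "address:struct<city:string,state:string,zipCode:string,street:text>>"
)
```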
Deprecation notice
Support for this feature has been deprecated.
This feature allows Immuta to unmask data that is masked at rest in a remote database using a customer-provided encryption or masking algorithm. To do so,
Data owners apply these tags to columns that are masked (with encryption or another algorithm) in the remote database.
Unmasking process
Immuta will only unmask externally masked data if two conditions are met:
A masking policy is applied against that tagged column.
The querying user is exempt from that policy.
When a user who is exempt from the policy restrictions queries that masked column using a filter, Immuta converts the literal being queried using the external algorithm provided. Consider the following example:
The social_security_number column is masked on-ingest and has the tag externally_masked_data applied to it.
This masking policy is applied to the data source in Immuta: Mask using hashing the values in the column tagged externally_masked_data except for users who belong to the group view_masked_values.
The querying user belongs to the view_masked_values group.
When the user above runs the query select * from table A where social_security_number = 220869988, Immuta converts 220869988 to the masked value using the provided algorithm to query the database and return matching rows.
Use Equality Queries Only
Queries against masked values on-ingest should be equality queries only. For example, if an exempt user ran a query like select * from table A where social_security_number > 220869988, the results may not make sense (depending on the algorithm used for masking the data).
Tutorials
These policies hide entire rows or objects of data based on the policy being enforced; some of these policies require the data to be tagged as well.
These policies match a user attribute with a row/object/file attribute to determine if that row/object/file should be visible. This process uses a direct string match, so the user attribute must exactly match the data attribute in order for that row of data to be visible.
For example, to restrict access to insurance claims data to the state for which the user's home office is located, you could build a policy such as this:
Only show rows where user possesses an attribute in Office Location that matches the value in the column State for everyone except when user is a member of group Legal.
In this case, the Office Location is retrieved by the identity management system as a user attribute or group. If the user's attribute (Office Location) was Missouri, rows containing the value Missouri in the State column in the data source would be the only rows visible to that user.
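A sketch of this matching logic in Python, using the example policy above (the function and parameter names are illustrative, not Immuta's API):

```python
def visible_rows(rows, user_attributes, user_groups=(), column="State",
                 attribute="Office Location", exempt_group="Legal"):
    """Return only the rows whose column value exactly matches a user attribute."""
    if exempt_group in user_groups:
        return list(rows)  # members of the exempt group see every row
    allowed = set(user_attributes.get(attribute, []))
    return [row for row in rows if row[column] in allowed]

claims = [{"State": "Missouri", "claim": 1}, {"State": "Kansas", "claim": 2}]
missouri_view = visible_rows(claims, {"Office Location": ["Missouri"]})
legal_view = visible_rows(claims, {}, user_groups=("Legal",))
```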
This policy can be thought of as a table "view" created automatically for the user based on the condition of the policy. For example, in the policy below, users who are not members of the Admins group will only see taxi rides where passenger_count < 2.
Only show rows where public.us.taxis.passenger_count < 2 for everyone except when user is a member of group Admins.
WHERE clause policy requirement
All columns referenced in the policy must have fully qualified names. Any column names that are unqualified (just the column name) will default to a column of the data source the policy is being applied to (if one matches the name).
These policies restrict access to rows/objects/files that fall within the time restrictions set in the policy. If a data source has time-based restriction policies, queries run against the data source by a user will only return rows/blobs with a date in its event-time column/attribute from within a certain range.
The time window is based on the event time you select when creating the data source. This value will come from a date/time column in relational sources.
These policies return a limited percentage of the data, which is randomly sampled at query time, but it is the same sample for all users. For example, you could limit certain users to only 10% of the data. Immuta uses a hashing policy to return approximately 10% of the data, and the data returned will always be the same; however, the exact number of rows exposed depends on the distribution of high cardinality columns in the database and the hashing type available. Additionally, Immuta will adjust the data exposed when new rows are added or removed.
Best practice: row count
Immuta recommends you use a table with over 1,000 rows for the best results when using a data minimization policy.
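The deterministic sampling described above can be sketched with a stable hash: a row is kept when its hash bucket falls below the requested percentage, so the same rows are returned on every query. This is an illustration only; Immuta's actual hashing details are not specified here:

```python
import hashlib

def in_sample(key: str, percent: int) -> bool:
    """Deterministically keep roughly percent% of rows via a stable hash."""
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    bucket = int.from_bytes(digest[:8], "big") % 100  # stable 0-99 bucket
    return bucket < percent

kept = [key for key in (f"row-{i}" for i in range(1000)) if in_sample(key, 10)]
# kept is identical on every run and is roughly 10% of the 1,000 rows.
```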
Public preview
This feature is currently in public preview and available to all accounts.
If a global masking policy applies to a column, you can still use that masked column in a global row-level policy.
Consider the following policy examples:
Masking policy: Mask values in columns tagged Country for everyone except users in group Admin.
Row-level policy: Only show rows where user possesses an attribute in OfficeLocation that matches the value in column tagged Country for everyone.
Both of these policies use the Country tag to restrict access. Therefore, the masking policy and the row-level policy would apply to data source columns with the tag Country for users who are not in the Admin group.
Limitations
Deprecation notice
Support for these policies has been deprecated.
See for a tutorial.
By default, Immuta does not apply subscription policies to data sources when you create them. This behavior ensures that existing policies in your remote data platform remain on that data and that current workflows stay intact. However, you can change this default setting on the so that data sources are immediately locked down when data is registered in Immuta.
For details, see the .
When two or more global subscription policies of the types listed below apply to the same data source, they may conflict:
For instructions on creating these policies, see the .
After an application admin has , data governors and owners can create global subscription policies using all the functions and variables outlined below.
Variable/Function | Description | Example |
---|---|---
See for a tutorial.
: Only show rows where user is a member of a group that matches the value in the column tagged Department.
For all policies except , inclusionary logic allows governors to vary policy actions with an Otherwise clause.
Purposes help define the scope and use of data within a project and allow users to meet . Governors create and manage purposes and their sub-purposes, which project owners then add to their project(s) and use to drive Data Policies.
Purposes can be constructed as a hierarchy, meaning that purposes can contain nested sub-purposes, much like in Immuta. This design allows more flexibility in managing purpose-based restriction policies and transparency in the relationships among purposes.
Please refer to the for a tutorial on purpose-based restrictions on data.
Hashed values are different across data sources, so you cannot join on hashed values unless you . Immuta prevents joins on hashed values to protect against link attacks where two data owners may have exposed data with the same masked column (a quasi-identifier), but their data combined by that masked value could result in a sensitive data leak.
This option masks the values using hashing, but allows users to submit an to users who meet the exceptions of the policy.
This option also allows users to submit an to users who meet the exceptions of the policy.
Note: When building conditional masking policies with custom SQL statements, avoid using a column that is masked using in the SQL statement, as this can lead to different behavior depending on whether you’re using the Query Engine, Spark, or Snowflake and may produce results that are unexpected.
Note: The default cardinality cutoff for columns to qualify for k-anonymization is 500. For details about adjusting this setting, navigate to the .
This policy masks data by slightly randomizing the values in a column, while preventing outsiders from inferring content of specific records.
For example, if an analyst wanted to publish data from a health survey she conducted, she could remove direct identifiers and apply to indirect identifiers to make it difficult to single out individuals. However, consider these survey participants, a cohort of male welders who share the same zip code:
participant_id | zip_code | gender | occupation | substance_abuse |
---|---|---|---|---
For string data, the random number generator essentially flips a biased coin. If the coin comes up as tails, which it does with the frequency of the replacement rate, then the value is changed to any other possible value in the column, selected uniformly at random from among those values. If the coin comes up as heads, the true value is released.
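That biased coin flip can be sketched as follows; seeding per row simulates cached randomness, as described earlier (the names and seed format are assumptions for the sketch):

```python
import random

def randomized_response(value, domain, row_key, replacement_rate=0.1):
    """Seeded biased coin: release the truth or another value from the domain."""
    rng = random.Random(f"rr:{row_key}")   # per-row seed simulates cached randomness
    if rng.random() < replacement_rate:    # "tails": replace the value
        return rng.choice([v for v in domain if v != value])
    return value                           # "heads": release the true value

answer = randomized_response("Y", ["Y", "N"], row_key="participant-17")
```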
After Complex Data Types is enabled on the , the column type for struct columns for new data sources will display as struct in the Data Dictionary. (For data sources that are already in Immuta, users can edit the data source and change the column types for the appropriate columns from jsonb to struct.) Once struct fields are available, they can be searched, tagged, and used in masking policies. For example, a user could tag name, ssn, and street as PII instead of the entire patient column.
Immuta’s External Masking feature expects data to be masked at rest by an external tool consistently on a per-cell basis in the remote database. Immuta then provides policy-based unmasking (and additional masking on top of this using ) through the Query Engine.
System Administrators build their own custom logic and security in an . Because Immuta always pushes down the masked version of the literal when the user is exempt from the policy, the organization should use deterministic IVs/salt to ensure the same value is masked consistently throughout the data.
System Administrators give Immuta access to the that will be used by data owners to indicate that data is masked at rest in the remote database.
Data owners or governors that allow Immuta to reach out to this external REST service to unmask data according to the specifications in the policy.
To configure External Masking, see the .
For an implementation guide, see the .
Note: When building row-level policies with custom SQL statements, avoid using a column that is masked using in the SQL statement, as this can lead to different behavior depending on whether you’re using the Query Engine, Spark, or Snowflake and may produce results that are unexpected.
You can put any valid SQL in the policy. See the for a list of custom functions.
This feature is only available for and integrations.
This feature is only supported for , not local data policies.
Immuta includes the and the . Governors can activate these global policies to automatically enforce restrictions on data sources that have had relevant tags applied to them by users or .
To learn how to activate a templated policy, navigate to the .
Policy state | Enforcement | Description
---|---|---
Active policies | Enforced | If policies are edited in this state, the changes will be immediately enforced on data sources when the changes are saved.
Deleted policies | Not enforced | Once a policy has been deleted, it cannot be recovered or reactivated.
Disabled policies | Not enforced | Data owners or governors can place a policy in this state at the local level for a specific data source. Although this is similar to the staged policy state, this policy will still be enforced on other data sources after it is disabled for a specific data source.
Staged policies | Not enforced | This state is useful when regularly editing and reviewing policies. This state also allows you to lift a policy's enforcement without deleting the policy so that it can easily be re-enforced. See Clone, Activate, or Stage a Global Policy for a tutorial.
@database | Users who have an attribute key that matches a database will be subscribed to the data source(s) within the database. | @hasAttribute('SpecialAccess', '@hostname.@database.*'): If a user had the attribute |
@hasAttribute('Attribute Name', 'Attribute Value') | Users who have the specified attribute are subscribed to the data source. | @hasAttribute('Occupation', 'Manager'): Any user who has the attribute |
@hasTagAsAttribute('Attribute Name', 'dataSource' or 'column' ) | Users who have an attribute key that matches a tag on a data source or column will be subscribed to that data source. | @hasTagAsAttribute('PersonalData', 'dataSource'): Users who have the attribute key |
@hasTagAsGroup('dataSource' or 'column' ) | Users who are members of a group that matches a tag on a data source or column (respectively) will be subscribed to that data source. | @hasTagAsGroup('dataSource'): If Data Source 1 has the tags |
@hostname | Users who have an attribute key that matches a hostname will be subscribed to the data source(s) with that hostname. | @hasAttribute('SpecialAccess', '@hostname.*'): If a user had the attribute |
@iam | Users who sign in with the IAM with the specified ID (ID that displays on the App Settings page) will be subscribed to the data source. | @iam == 'oktaSamlIAM': Any user whose IAM ID is |
@isInGroups('List', 'of', 'Groups') | Users who are members of the specified group(s) can be subscribed to the data source. | @isInGroups('finance','marketing','newhire'): Users who are members of the groups |
@schema | Users who have an attribute key that matches this schema will be subscribed to the data source(s) under that schema. | @hasAttribute('SpecialAccess', '@hostname.@database.@schema'): If a user had the attribute |
@table | Users who have an attribute key that matches this table will be subscribed to the data source(s). | @hasAttribute('SpecialAccess', '@hostname.@database.@schema.@table'): If a user had the attribute |
Original data:

Gender | State
---|---
Female | Ohio
Female | Florida
Female | Florida
Female | Arkansas
Male | Florida

Masked by Policy A (gender and state masked together as a group):

Gender | State
---|---
Null | Null
Female | Florida
Female | Florida
Null | Null
Null | Null

Masked by Policy C and Policy D (each column masked separately):

Gender | State
---|---
Female | Null
Female | Florida
Female | Florida
Female | Null
Null | Florida
participant_id | zip_code | gender | occupation | substance_abuse
---|---|---|---|---
... | ... | ... | ... | ...
 | 75002 | Male | Welder | Y
 | 75002 | Male | Welder | Y
 | 75002 | Male | Welder | Y
 | 75002 | Male | Welder | Y
 | 75002 | Male | Welder | Y
... | ... | ... | ... | ...
When building a global data policy, governors can create custom certifications, which must then be acknowledged by data owners when the policy is applied to data sources.
For example, data governors could add a custom certification that states that data owners must verify that tags have been added correctly to their data sources before certifying the policy.
When a global data policy with a custom certification is cloned, the certification is also cloned. If the user who clones the policy and custom certification is not a governor, the policy will only be applied to data sources that user owns.
Private preview
This feature is only available to select accounts. Contact your Immuta representative to enable this feature.
Orchestrated masking policies (OMP) reduces conflicts between masking policies that apply to a single column, allowing policies to scale more effectively across your organization. Furthermore, OMP fosters distributed data stewardship, empowering policy authors who share responsibility of a data set to protect it while allowing data consumers acting under various roles or purposes to access the data.
When multiple masking policies apply to a column, Immuta combines the exception conditions of the masking policies so that data subscribers can access the data when they satisfy one of those exception conditions. Multiple masking policies will be enforced on a column if the following conditions are true:
Policies use the same masking type.
Policies use the for everyone except condition.
Databricks Spark or Starburst (Trino) integration
OMP supports the following masking types:
Constant
Hashing
Format preserving masking
Null
Regex
Rounding
Governors can apply policies to all columns in a data source or target specific columns with tags or a regular expression. Without orchestrated masking policies enabled, when multiple global policies apply to the same columns, Immuta could only apply one of those policies.
Consider the following example to examine how policies behaved when one tag is used in two different policies:
Mask PII Global Policy 1: Mask using hashing the value in columns tagged email except when user is acting under the purpose Email Campaign.
Mask PII Global Policy 2: Mask using hashing the value in columns tagged email except when user is acting under purpose Marketing.
For columns tagged email, only one of these policies is enforced. The Mask PII Global Policy 2 is not applied to the data source, so Immuta is not enforcing the masking policy properly for users who should be able to see emails because they are acting under the Marketing purpose.
Consider the following example where multiple masking policies apply to columns that have multiple tags, resulting in one policy applying:
Global Policy 3: Mask using hashing the value in columns tagged Employee Data unless users are acting under the purpose Retention Analysis.
Global Policy 4: Mask using hashing the value in columns tagged HR Data unless users are acting under the purpose Employee Satisfaction Survey.
If a column is tagged Employee Data and HR Data, Immuta will only apply one of the policies.
With orchestrated masking policies, Immuta applies multiple global masking policies that apply to a single column by combining the policy exceptions with OR. For these policies to combine, the masking type must be identical and the policy must use the for everyone except condition.
Consider the following example, in which both policies will apply to the data source:
Mask PII Global Policy 1: Mask using hashing the value in columns tagged email except when user is acting under the purpose Email Campaign.
Mask PII Global Policy 2: Mask using hashing the value in columns tagged email except when user is acting under purpose Marketing.
Users acting under the purpose Marketing or Email Campaign will be able to see emails in the clear.
However, in the following example, only one of these policies will apply to the data source because one masks using a constant and the other masks using hashing:
Global Policy 5: Mask using the constant REDACTED the value in columns tagged Employee Data unless users are acting under the purpose Retention Analysis.
Global Policy 6: Mask using hashing the value in columns tagged HR Data unless users are acting under the purpose Employee Satisfaction Survey.
No UI enhancements were made in this release. Multiple masking policies applied to the same column are visible on a data source, but there is no indication that the exceptions are combined with OR.
Masking types must match exactly for the policies to be combined. For example, both policies must mask using rounding.
Existing policies will not automatically migrate to the new policy logic when you enable the feature. To re-compute existing policies with the new logic, you must manually trigger global policy changes by staging and re-enabling each policy.
In some cases, two conflicting global data policies may apply to a single data source. When this happens, the policy containing a tag deeper in the hierarchy will apply to the data source to resolve the conflict.
Consider the following global data policies created by a data governor:
Data Policy 1: Mask columns tagged PII by making null for everyone on data sources with columns tagged PII
Data Policy 2: Mask columns tagged PII.SSN using hashing for everyone on data sources with columns tagged PII.SSN
If a data owner creates a data source and applies the PII.SSN tag, both of these global data policies will apply. Instead of having a conflict, the policy containing a deeper tag in the hierarchy will apply:
In this example, Data Policy 1 cannot be applied to the data source, since Data Policy 2 contains the deeper tag. If data owners wanted to use Data Policy 1 on the data source instead, they would need to disable Data Policy 2.
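The resolution rule (the policy whose tag sits deeper in the hierarchy wins) can be sketched by comparing tag depth; this is an illustration, not Immuta's implementation:

```python
def resolve_conflict(policies):
    """Apply the policy whose tag sits deepest in the tag hierarchy."""
    return max(policies, key=lambda p: p["tag"].count("."))

policies = [
    {"name": "Data Policy 1", "tag": "PII"},       # depth 0
    {"name": "Data Policy 2", "tag": "PII.SSN"},   # depth 1 -> wins
]
winner = resolve_conflict(policies)
print(winner["name"])  # Data Policy 2
```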
Once enabled on a data source, global data policies can be edited and disabled by data owners.
Deprecation notice
Support for this feature has been deprecated.
Use Deterministic IVs/Salt
Use deterministic IVs/salt to ensure the same value is masked consistently throughout the data, as Immuta always pushes down the masked version of the literal when the querying user is exempt from the policy.
Immuta can make requests to your External Masking service with one of two authentication methods:
PKI Certificate: Immuta can send requests using a CA certificate, a certificate, and a key.
Alternatively, Immuta can make unauthenticated requests to your REST masking service. This is recommended only if you have other security measures in place (e.g., if the service is in an isolated network that's reachable only by your Immuta environment).
The unmask action allows Immuta to build predicates that can be used to query data that is consistently masked at rest in the remote database; it does not dynamically mask data at query time.
This endpoint accepts a set of values and a directive to either mask or unmask them.
Your service will need to parse and process the following body parameters:
Below is an example request payload to mask values in the ssn and ccn columns:
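A mask request of this shape might look like the following sketch. The action and values field names here are assumptions inferred from the body-parameter descriptions, not a confirmed schema.

```python
import json

# Hypothetical mask request; the "action" and "values" key names are assumptions.
mask_request = {
    "action": "mask",
    "values": {
        "ssn": ["123-45-6789", "987-65-4321"],
        "ccn": ["4111111111111111"],
    },
}
print(json.dumps(mask_request, indent=2))
```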
Below is an example request payload to unmask values in the ssn and ccn columns:
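An unmask request might look like the following sketch. As above, the field names are assumptions, and the masked values shown are placeholders.

```python
import json

# Hypothetical unmask request; key names are assumptions and the masked
# values are placeholders, not output of a real masking algorithm.
unmask_request = {
    "action": "unmask",
    "values": {
        "ssn": ["bWFza2VkLTE=", "bWFza2VkLTI="],
        "ccn": ["bWFza2VkLTM="],
    },
}
print(json.dumps(unmask_request, indent=2))
```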
Your service will need to return a map of values that corresponds to the columns and values that were specified in the request. It is important that your service returns the same column keys and that the position of each masked/unmasked value in your response corresponds to the masked/unmasked value from the request.
For example, the following request could return the following body:
Notice that both the ssn and ccn columns are present and that each of them contains the exact number of values specified in the request. Immuta will fail to validate responses to its request under the following circumstances:
The response contains column keys that were not present in the request.
The response is missing column keys that were present in the request.
The response doesn't contain the exact number of values for each of the corresponding column keys in the request.
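The three rules above amount to a simple shape check. The sketch below is illustrative (not Immuta's code) of how such a response could be validated.

```python
def validate_response(request_values, response_values):
    """A response is valid only if it has exactly the request's column keys
    and the same number of values per column."""
    if set(response_values) != set(request_values):
        return False  # missing or extra column keys
    return all(
        len(response_values[col]) == len(request_values[col])
        for col in request_values
    )

req = {"ssn": ["a", "b"], "ccn": ["c"]}
ok = validate_response(req, {"ssn": ["x", "y"], "ccn": ["z"]})
```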
Below are some simplified example implementations of a service with mask() and unmask() functions:
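In that spirit, here is a hedged stand-in, not production guidance: a reversible base64 "masking" so that unmask(mask(v)) == v. A real service would use actual encryption with deterministic IVs/salt, exposed behind the REST endpoint described above.

```python
import base64

def mask(values_by_column):
    """Toy masking: base64-encode every value in each column."""
    return {col: [base64.b64encode(v.encode()).decode() for v in vals]
            for col, vals in values_by_column.items()}

def unmask(values_by_column):
    """Inverse of mask(): base64-decode every value in each column."""
    return {col: [base64.b64decode(v.encode()).decode() for v in vals]
            for col, vals in values_by_column.items()}

payload = {"ssn": ["123-45-6789"], "ccn": ["4111111111111111"]}
```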
Best practice: write global policies
Build global subscription policies using attributes and Discovered tags instead of writing local policies to manage data access. This practice prevents you from having to write or rewrite single policies for every data source added to Immuta and from manually approving data access.
Determine your policy scope:
Select the level of access restriction you would like to apply:
Allow Anyone: Check the Require users to take action to subscribe checkbox to turn off automatic subscription. Enabling this feature will require users to manually subscribe to the data source if they meet the policy.
Allow Anyone Who Asks (and Is Approved):
Click anyone or an individual selected by user from the first dropdown menu in the subscription policy builder.
Note: If you choose an individual selected by user, when users request access to a data source they will be prompted to identify an approver with the permission specified in the policy and how they plan to use the data.
Select the Owner (of the data source), User_Admin, Governance, or Audit permission from the subsequent dropdown menu.
Note: You can add more than one approving party by selecting + Add.
Allow Individually Selected Users
For global policies: From the Where should this policy be applied dropdown menu, select When selected by data owners, On all data sources, or On data sources. If you selected On data sources, finish the condition in one of the following ways:
tagged: Select this option and then search for tags in the subsequent dropdown menu.
with columns tagged: Select this option and then search for tags in the subsequent dropdown menu.
with column names spelled like: Select this option, and then enter a regex and choose a modifier in the subsequent fields.
in server: Select this option and then choose a server from the subsequent dropdown menu to apply the policy to data sources that share this connection string.
created between: Select this option and then choose a start date and an end date in the subsequent dropdown menus.
Click Create Policy. If creating a global policy, you then need to click Activate Policy or Stage Policy.
The Immuta policy builder allows you to use custom functions that reference important Immuta metadata from within your where clause. These custom functions are utilities that help you create policies more easily. Using the Immuta policy builder, you can include these functions in your policy queries by choosing where in the sub-action dropdown menu.
@attributeValuesContains() Function
This function returns true for a given row if the provided column evaluates to an attribute value for which the querying user has a corresponding attribute value. This function requires two arguments and accepts no more than three arguments.
@columnTagged() Function
This function returns the column name with the specified tag. If this function is used in a global policy and the tag doesn't exist on a data source, the policy will not be applied.
@groupsContains() Function
This function returns true for a given row if the provided column evaluates to a group to which the querying user belongs. This function requires at least one argument.
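The row-level effect can be sketched as a simple membership check. This illustrates the semantics described above, not the actual SQL Immuta generates.

```python
def groups_contains(column_value, user_groups):
    """True when the row's column value is one of the querying user's groups."""
    return column_value in user_groups

rows = [{"id": 1, "group": "engineers"}, {"id": 2, "group": "finance"}]
user_groups = {"engineers", "founders"}
visible = [r for r in rows if groups_contains(r["group"], user_groups)]
```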
@hasAttribute() Function
This function returns a boolean indicating whether the current user has the specified attribute name and value combination. If the specified attribute name or attribute value has a single quote, you will need to escape it using a \'\' expression within a custom WHERE policy.
@iam Function
This function returns the IAM ID for the current user. It takes no arguments.
@interpolatedComparison() Function
Deprecation notice: Support for this function has been deprecated.
The interpolated comparison function is the most complex custom WHERE clause function: it generates chained WHERE clauses. Unlike the rest of the custom WHERE clause functions, its output is not a list of values or a boolean, but rather the comparison itself.
¹ A custom function will not accept a placeholder argument when used inside @interpolatedComparison.
The following invocation of @interpolatedComparison():
WHERE @interpolatedComparison("group", "=", "upper('##')", "##", @groupsContains, "OR")
is equivalent to writing the following SQL WHERE clause, given that a user belongs to two groups called founders and engineers:
WHERE ((group = upper('founders')) or (group = upper('engineers')))
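The expansion above can be reproduced with a short sketch, illustrative only, that builds one clause per value and joins the clauses with the chain operator:

```python
def interpolated_comparison(column, op, template, placeholder, values, chain_op):
    """Build chained comparison clauses, mirroring the founders/engineers
    example above: one clause per value, joined by the chain operator."""
    clauses = [
        f"({column} {op} {template.replace(placeholder, v)})" for v in values
    ]
    return "(" + f" {chain_op} ".join(clauses) + ")"

where = interpolated_comparison(
    "group", "=", "upper('##')", "##", ["founders", "engineers"], "or"
)
```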
@isInGroups() Function
This function returns a boolean indicating whether the current user is a member of all of the specified groups. If any of the specified groups has a single quote, you will need to escape it using a \'\' expression within a custom WHERE policy.
@isUsingPurpose() Function
This function returns a boolean indicating whether the current user is using the specified purpose. If the specified purpose has a single quote, you will need to escape it using a \'\' expression within a custom WHERE policy.
@purposesContains() Function
This function returns true for a given row if the provided column evaluates to a purpose under which the querying user is currently acting. This function requires at least one argument and accepts no more than two arguments.
@username Function
This function returns the current user's username. It takes no arguments.
Username and password authentication: Immuta can send requests with a username and a password in the Authorization HTTP header. In this case, your service will need to parse the Authorization header and validate the credentials sent with it.
To dynamically mask data, use .
Global policy: Click the Policies page icon in the left sidebar and select the Subscription Policies tab. Click Add Subscription Policy and complete the Enter Name field.
Local policy: Navigate to a specific data source and click the Policies tab. Click Add Subscription Policy and select New Local Subscription Policy.
Allow Users with Specific Groups/Attributes: See the for instructions.
When you have multiple global ABAC subscription policies to enforce, create separate global ABAC subscription policies, and then Immuta will use boolean logic to merge all the relevant policies on the tables they map to.
External masking request body parameters:
| Parameter | Type | Description |
| --- | --- | --- |
|  | Enum( | Either |
|  | Map( | A map of columns containing an array of masked/unmasked values |

@attributeValuesContains() arguments:
| # | Parameter | Type | Required | Description |
| --- | --- | --- | --- | --- |
| 1 | Attribute Name | String | Required | The name of the attribute to retrieve values for |
| 2 | Column Name/SQL Expression | String | Required | The column that contains the value to match the attribute key against |
| 3 | Placeholder | String | Optional | A placeholder in case the list of values is empty |

@columnTagged() arguments:
| # | Parameter | Type | Required | Description |
| --- | --- | --- | --- | --- |
| 1 | Tag Name | String | Required | The name of the tag |

@groupsContains() arguments:
| # | Parameter | Type | Required | Description |
| --- | --- | --- | --- | --- |
| 1 | Column Name/SQL Expression | String | Required | The column that contains the value to match the group against |
| 2 | Placeholder | String | Optional | A placeholder in case the list of values is empty |

@hasAttribute() arguments:
| # | Parameter | Type | Required | Description |
| --- | --- | --- | --- | --- |
| 1 | Attribute Name | String | Required | The name of the attribute |
| 2 | Attribute Value | String | Required | The value to correspond with the attribute name |

@interpolatedComparison() arguments:
The @interpolatedComparison() function does not work with Immuta's .
| # | Parameter | Type | Required | Description |
| --- | --- | --- | --- | --- |
| 1 | Column Name | String | Required | The name of a column in the remote database to use in the left-hand side of the clauses |
| 2 | Comparison Operator | String | Required | The comparison operator to use to compare the column and array value |
| 3 | Literal Template | String | Required | A necessary SQL logic and the template string which will be replaced with an array |
| 4 | Template String to Replace | String | Required | The string to replace in Literal Template with the array value |
| 5 | Custom function | Function | Required | A tagged function¹. Valid functions: |
| 6 | Logical (Chain) Operator | String | Required | The logical operator used to link each generated clause ( |

@isInGroups() arguments:
| # | Parameter | Type | Required | Description |
| --- | --- | --- | --- | --- |
| 1 | Group names | Array (String) | Required | A list of group names, e.g. |

@isUsingPurpose() arguments:
| # | Parameter | Type | Required | Description |
| --- | --- | --- | --- | --- |
| 1 | Purpose | String | Required | The name of the purpose to check the user against |

@purposesContains() arguments:
| # | Parameter | Type | Required | Description |
| --- | --- | --- | --- | --- |
| 1 | Column Name/SQL Expression | String/Expression | Required | The column that contains the value to match the purpose against |
| 2 | Placeholder | String | Optional | A placeholder in case the list of values is empty |
This guide demonstrates how to build attribute-based access control (ABAC) subscription policies using the policy builder in the Immuta UI. To build more complex policies than the builder allows, follow the Advanced rules DSL policy guide.
Determine your policy scope:
Global policy: Click the Policies page icon in the left sidebar and select the Subscription Policies tab. Click Add Subscription Policy and complete the Enter Name field.
Local policy: Navigate to a specific data source and click the Policies tab. Click Add Subscription Policy and select New Local Subscription Policy.
Select Allow Users with Specific Groups/Attributes.
Choose the condition that will drive the policy: when user is a member of a group or possesses attribute.
Use the subsequent dropdown to choose the group or attribute for your condition. You can add more than one condition by selecting + ADD. The dropdown menu in the subscription policy builder contains conjunctions for your policy. If you select or, only one of your conditions must apply to a user for them to see the data. If you select and, all of the conditions must apply.
If you would like to make your data source visible in the list of all data sources in the UI to all users, click the Allow Discovery checkbox. Otherwise, this data source will not be discoverable by users who do not meet the criteria established in the policy.
Check the Require users to take action to subscribe checkbox to turn off automatic subscription. Enabling this feature will require users to manually subscribe to the data source if they meet the policy.
If you would like users to have the ability to request approval to the data source, even if they do not have the required attributes or traits, check the Request Approval to Access checkbox. This will require an approver with permissions to be set.
For global policies: Select how you want Immuta to merge multiple global subscription policies that apply to a single data source.
Always Required: Users must meet all the conditions outlined in each policy to get access (i.e., the conditions of the policies are combined with AND).
Share Responsibility: Users need to meet the condition of at least one policy that applies (i.e., the conditions of the policies are combined with OR).
Note: To make this option selected by default, see the app settings page.
For global policies: Click the dropdown menu beneath Where should this policy be applied and select When selected by data owners, On all data sources, or On data sources. If you selected On data sources, finish the condition in one of the following ways:
tagged: Select this option and then search for tags in the subsequent dropdown menu.
with columns tagged: Select this option and then search for tags in the subsequent dropdown menu.
with column names spelled like: Select this option, and then enter a regex and choose a modifier in the subsequent fields.
in server: Select this option and then choose a server from the subsequent dropdown menu to apply the policy to data sources that share this connection string.
created between: Select this option and then choose a start date and an end date in the subsequent dropdown menus.
Click Create Policy. If creating a global policy, you then need to click Activate Policy or Stage Policy.
When you have multiple global ABAC subscription policies to enforce, create separate global ABAC subscription policies, and then Immuta will use boolean logic to merge all the relevant policies on the tables they map to.
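The merge behavior can be sketched as plain boolean logic. This is an illustration of the two merge modes described above, with hypothetical user and condition shapes.

```python
# Two hypothetical subscription-policy conditions that map to the same table.
checks = [
    lambda user: "hr" in user["groups"],       # policy 1: member of group hr
    lambda user: user.get("country") == "US",  # policy 2: has country US
]
user = {"groups": ["hr"], "country": "FR"}

# "Always Required" merges with AND; "Share Responsibility" merges with OR.
always_required = all(check(user) for check in checks)
share_responsibility = any(check(user) for check in checks)
```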
Data owners who are not governors can write restricted subscription and data policies, which allow them to enforce policies on multiple data sources simultaneously, eliminating the need to write redundant local policies.
Unlike global policies, the application of these policies is restricted to the data sources owned by the users or groups specified in the policy and will change as users' ownerships change.
Click Policies in the left sidebar and select Subscription Policies.
Click Add Policy, complete the Enter Name field, and then select the level of access restriction you would like to apply to your data source:
Allow Anyone: Check the Require users to take action to subscribe checkbox to turn off automatic subscription. Enabling this feature will require users to manually subscribe to the data source if they meet the policy.
Allow Anyone Who Asks (and Is Approved):
Click anyone or an individual selected by user from the first dropdown menu in the subscription policy builder.
Note: If you choose an individual selected by user, when users request access to a data source they will be prompted to identify an approver with the permission specified in the policy and how they plan to use the data.
Select the Owner (of the data source), User_Admin, Governance, or Audit permission from the subsequent dropdown menu.
Note: You can add more than one approving party by selecting + Add.
Allow Users with Specific Groups/Attributes:
Choose the condition that will drive the policy: when user is a member of a group or possesses attribute. Note: To build more complex policies than the builder allows, follow the Advanced rules DSL policy guide.
Use the subsequent dropdown to choose the group or attribute for your condition. You can add more than one condition by selecting + ADD. The dropdown menu in the subscription policy builder contains conjunctions for your policy. If you select or, only one of your conditions must apply to a user for them to see the data. If you select and, all of the conditions must apply.
If you would like to make your data source visible in the list of all data sources in the UI to all users, click the Allow Discovery checkbox. Otherwise, this data source will not be discoverable by users who do not meet the criteria established in the policy.
Check the Require users to take action to subscribe checkbox to turn off automatic subscription. Enabling this feature will require users to manually subscribe to the data source if they meet the policy.
Allow Individually Selected Users
From the Where should this policy be applied dropdown menu, select When selected by data owners, On all data sources, or On data sources. If you selected On data sources, finish the condition in one of the following ways:
tagged: Select this option and then search for tags in the subsequent dropdown menu.
with columns tagged: Select this option and then search for tags in the subsequent dropdown menu.
with column names spelled like: Select this option, and then enter a regex and choose a modifier in the subsequent fields.
in server: Select this option and then choose a server from the subsequent dropdown menu to apply the policy to data sources that share this connection string.
created between: Select this option and then choose a start date and an end date in the subsequent dropdown menus.
Beneath Whose Data Sources should this policy be restricted to, add users or groups to the policy restriction by typing in the text fields and selecting from the dropdown menus that appear.
Opt to complete the Enter Rationale for Policy (Optional) field.
Click Create Policy, and then click Activate Policy or Stage Policy.
This page details how users can use functions and variables in the Advanced DSL policy builder to create more complex policies than the Subscription Policy builder allows.
For instructions on writing Global Subscription Policies, see the following tutorial.
Navigate to the App Settings Page.
Click Advanced Settings in the left panel, and scroll to the Preview Features section.
Check the Enable Enhanced Subscription Policy Variables checkbox.
Click Save.
Navigate to the Policies Page.
Select Subscription Policies and click + Add Subscription Policy.
Choose a name for your policy and select how the policy should grant access.
Select Create using Advanced DSL.
Select the rules for your policy from the Advanced DSL options. For example, a policy with @hasTagsAsAttribute('Department', 'dataSource') would subscribe all users who have an attribute that matches a tag on a data source to that data source, so users with the attribute Department.Marketing would be subscribed to data sources with the tag Marketing.
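The matching behavior can be sketched as follows. This illustrates the rule described above; the function and data shapes are hypothetical.

```python
def has_tags_as_attribute(user_attributes, key, data_source_tags):
    """True when any of the user's attribute values under `key` matches a
    tag on the data source (e.g., Department.Marketing vs. tag Marketing)."""
    return any(v in data_source_tags for v in user_attributes.get(key, []))

marketing_user = {"Department": ["Marketing"]}
subscribed = has_tags_as_attribute(marketing_user, "Department", {"Marketing"})
```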
Select how you want Immuta to merge multiple Global Subscription policies that apply to a single data source.
Always Required: Users must meet all the conditions outlined in each policy to get access (i.e., the conditions of the policies are combined with AND).
Share Responsibility: Users need to meet the condition of at least one policy that applies (i.e., the conditions of the policies are combined with OR).
Select where this policy should be applied: On data sources, When selected by data owners, or On all data sources. If you select On data sources, options include with columns tagged, with column names spelled like, in server, and created between.
Click Create Policy.
Best practice: write global policies
Build global policies with tags instead of writing local policies to manage data access. This practice will prevent you from having to write or rewrite single policies for every data source added to Immuta.
Determine your policy scope:
Global policy: Click the Policies page icon in the left sidebar and select the Data Policies tab. Click Add Policy and enter a name for your policy.
Local policy: Navigate to a specific data source and click the Policies tab. Scroll to the Data Policies section and click Add Policy.
Select Mask from the first dropdown menu.
Select columns tagged, columns with any tag, columns with no tags, all columns, or columns with names spelled like.
Select a masking type:
using a constant: Enter a constant in the field that appears next to the masking type dropdown.
using a regex: Enter a regular expression and replacement value in the fields that appear next to the masking type dropdown. From the next dropdown, choose to make the regex Case Insensitive and/or Global.
by rounding: Select using fingerprint or specifying the bucket from the subsequent dropdown menu. If specifying the bucket, select the Bucket Type and then enter the bucket size. Note: If you choose by rounding as your masking type, the statistics of the data fingerprint will autogenerate the bucket size when the policy is applied to a data source.
with K-Anonymization: Select either using fingerprint or requiring group size of at least and enter a group size in the subsequent dropdown menu.
using the custom function: Enter the custom function native to the underlying database. Note: The function must be valid for the data type of the column. If it is not, the default masking type will be applied to the column.
Select everyone except, everyone, or everyone who to continue the condition.
everyone except: In the subsequent dropdown menus, choose is a member of group, possesses attribute, or is acting under purpose. Complete the condition with the subsequent dropdown menus.
for everyone who: Complete the Otherwise clause. You can add more than one condition by selecting + ADD. The dropdown menu in the policy builder contains conjunctions for your policy. If you select or, only one of your conditions must apply to a user for them to see the data. If you select and, all of the conditions must apply.
Opt to complete the Enter Rationale for Policy (Optional) field, and then click Add.
For global policies: Click the dropdown menu beneath Where should this policy be applied and select When selected by data owners, On all data sources, or On data sources. If you selected On data sources, finish the condition in one of the following ways:
tagged: Select this option and then search for tags in the subsequent dropdown menu.
with columns tagged: Select this option and then search for tags in the subsequent dropdown menu.
with column names spelled like: Select this option, and then enter a regex and choose a modifier in the subsequent fields.
in server: Select this option and then choose a server from the subsequent dropdown menu to apply the policy to data sources that share this connection string.
created between: Select this option and then choose a start date and an end date in the subsequent dropdown menus.
Click Create Policy. If creating a global policy, you then need to click Activate Policy or Stage Policy.
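As a concrete illustration of the by rounding option above (not Immuta's algorithm), bucketing coarsens numeric values to a fixed bucket size, hiding exact values while keeping rough magnitude:

```python
def round_to_bucket(value, bucket_size):
    """Coarsen a numeric value to the floor of its bucket."""
    return (value // bucket_size) * bucket_size

# Hypothetical salary column masked into buckets of 10,000.
buckets = [round_to_bucket(v, 10000) for v in (41250, 58900, 60000)]
```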
This step is optional, but data governors can add certifications that outline acknowledgements or require approvals from data owners. For example, data governors could add a custom certification that states that data owners must verify that tags have been added correctly to their data sources before certifying the policy.
Click Add Certification in the data policy builder.
Enter a Certification Label and Certification Text in the corresponding fields of the dialog that appears.
Click Save.
Determine your policy scope:
Select Minimize data source from the first dropdown.
Complete the enter percentage field to limit the amount of data returned at query-time.
Select for everyone except from the next dropdown menu to continue the condition. Additional options include for everyone and for everyone who.
Use the next field to choose the attribute, group, or purpose that you will match values against.
Notes:
If you choose for everyone who as a condition, complete the Otherwise clause before continuing to the next step.
You can add more than one condition by selecting + ADD. The dropdown menu in the far right of the Policy Builder contains conjunctions for your policy. If you select or, only one of your conditions must apply to a user for them to see the data. If you select and, all of the conditions must apply.
Opt to complete the Enter Rationale for Policy (Optional), and then click Add.
For global policies: Click the dropdown menu beneath Where should this policy be applied, and select On all data sources, On data sources, or When selected by data owners. If you select On data sources, finish the condition in one of the following ways:
tagged: Select this option and then search for tags in the subsequent dropdown menu.
with columns tagged: Select this option and then search for tags in the subsequent dropdown menu.
with column names spelled like: Select this option, and then enter a regex and choose a modifier in the subsequent fields.
in server: Select this option and then choose a server from the subsequent dropdown menu to apply the policy to data sources that share this connection string.
created between: Select this option and then choose a start date and an end date in the subsequent dropdown menus.
Click Create Policy. If creating a global policy, you then need to click Activate Policy or Stage Policy.
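Conceptually, a minimization policy returns only a percentage of rows at query time. The toy sketch below is illustrative only; Immuta's actual sampling is handled by the policy engine.

```python
import random

def minimize(rows, percentage, seed=0):
    """Return roughly `percentage` percent of rows (seeded for a repeatable demo)."""
    k = max(1, int(len(rows) * percentage / 100))
    return random.Random(seed).sample(rows, k)

sample = minimize(list(range(100)), 10)  # keep ~10% of 100 rows
```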
Determine your policy scope:
Select Limit usage to purpose(s) in the first dropdown menu.
In the next field, select a specific purpose that you would like to restrict usage of this data source to or ANY PURPOSE. You can add more than one condition by selecting + ADD. The dropdown menu in the policy builder contains conjunctions for your policy. If you select or, only one of your conditions must apply to a user for them to see the data. If you select and, all of the conditions must apply.
Select for everyone or for everyone except. If you select for everyone except, you must select conditions that will drive the policy such as group, purpose, or attribute.
Opt to complete the Enter Rationale for Policy (Optional) field, and then click Add.
For global policies: Click the dropdown menu beneath Where should this policy be applied, and select On all data sources, On data sources, or When selected by data owners. If you select On data sources, finish the condition in one of the following ways:
tagged: Select this option and then search for tags in the subsequent dropdown menu.
with columns tagged: Select this option and then search for tags in the subsequent dropdown menu.
with column names spelled like: Select this option, and then enter a regex and choose a modifier in the subsequent fields.
in server: Select this option and then choose a server from the subsequent dropdown menu to apply the policy to data sources that share this connection string.
created between: Select this option and then choose a start date and an end date in the subsequent dropdown menus.
Click Create Policy. If creating a global policy, you then need to click Activate Policy or Stage Policy.
Global policy: Click the Policies page icon in the left sidebar and select the Data Policies tab. Click Add Policy and enter a name for your policy.
Local policy: Navigate to a specific data source and click the Policies tab. Scroll to the Data Policies section and click Add Policy.
Global policy: Click the Policies page icon in the left sidebar and select the Data Policies tab. Click Add Policy and enter a name for your policy.
Local policy: Navigate to a specific data source and click the Policies tab. Scroll to the Data Policies section and click Add Policy.
Determine your policy scope:
Global policy: Click the Policies page icon in the left sidebar and select the Data Policies tab. Click Add Policy and enter a name for your policy.
Local policy: Navigate to a specific data source and click the Policies tab. Scroll to the Data Policies section and click Add Policy.
Select the Only show rows action from the first dropdown.
Choose one of the following policy conditions:
Where user
Choose the condition that will drive the policy from the next dropdown: is a member of a group or possesses an attribute.
Use the next field to choose the attribute, group, or purpose that you will match values against.
Use the next dropdown menu to choose the tag that will drive this policy. You can add more than one condition by selecting + ADD. The dropdown menu in the far right of the policy builder contains conjunctions for your policy. If you select or, only one of your conditions must apply to a user for them to see the data. If you select and, all of the conditions must apply.
Where the value in the column tagged
Select the tag from the next dropdown menu.
From the subsequent dropdown, choose is or is not in the list, and then enter a list of comma-separated values.
Where
Enter a valid SQL WHERE clause in the subsequent field. When you place your cursor in this field, a tooltip details valid input and the column names of your data source. See Custom WHERE Clause Functions for more information about specific functions.
Never
The never condition blocks all access to the data source.
Choose the condition that will drive the policy from the next dropdown: for everyone, for everyone except, or for everyone who.
Select the condition that will further define the policy: is a member of group, is acting under a purpose, or possesses attribute.
Use the next field to choose the group, purpose, or attribute that you will match values against.
Choose for everyone, everyone except, or for everyone who to drive the policy. If you choose for everyone except, use the subsequent dropdown to choose the group, purpose, or attribute for your condition. If you choose for everyone who as a condition, complete the Otherwise clause before continuing to the next step.
Opt to complete the Enter Rationale for Policy (Optional) field, and then click Add.
For global policies: Click the dropdown menu beneath Where should this policy be applied, and select On all data sources, On data sources, or When selected by data owners. If you select On data sources, finish the condition in one of the following ways:
tagged: Select this option and then search for tags in the subsequent dropdown menu.
with columns tagged: Select this option and then search for tags in the subsequent dropdown menu.
with column names spelled like: Select this option, and then enter a regex and choose a modifier in the subsequent fields.
in server: Select this option and then choose a server from the subsequent dropdown menu to apply the policy to data sources that share this connection string.
created between: Select this option and then choose a start date and an end date in the subsequent dropdown menus.
Click Create Policy. If creating a global policy, you then need to click Activate Policy or Stage Policy.
Data owners who are not governors can write restricted subscription and data policies, which allow them to enforce policies on multiple data sources simultaneously, eliminating the need to write redundant local policies.
Unlike global policies, the application of these policies is restricted to the data sources owned by the users or groups specified in the policy and will change as users' ownerships change.
Click Policies in the left sidebar and select Data Policies.
Click Add Policy and complete the Enter Name field.
Select how the policy should protect the data. Click a link below for instructions on building that specific data policy:
Opt to complete the Enter Rationale for Policy (Optional) field, and then click Add.
From the Where should this policy be applied dropdown menu, select When selected by data owners, On all data sources, or On data sources. If you selected On data sources, finish the condition in one of the following ways:
tagged: Select this option and then search for tags in the subsequent dropdown menu.
with columns tagged: Select this option and then search for tags in the subsequent dropdown menu.
with column names spelled like: Select this option, and then enter a regex and choose a modifier in the subsequent fields.
in server: Select this option and then choose a server from the subsequent dropdown menu to apply the policy to data sources that share this connection string.
created between: Select this option and then choose a start date and an end date in the subsequent dropdown menus.
Beneath Whose Data Sources should this policy be restricted to, add users or groups to the policy restriction by typing in the text fields and selecting from the dropdown menus that appear.
Click Create Policy, and then click Activate Policy or Stage Policy.
Determine your policy scope:
Global policy: Click the Policies page icon in the left sidebar and select the Data Policies tab. Click Add Policy and enter a name for your policy.
Local policy: Navigate to a specific data source and click the Policies tab. Scroll to the Data Policies section and click Add Policy.
Select Only show data by time from the first dropdown.
Select where data is more recent than or older than from the next dropdown, and then enter the number of minutes, hours, days, or years that you would like to restrict the data source to. Note that unlike many other policies, there is no field to select a column to drive the policy. This type of policy will be driven by the data source's event-time column, which is selected at data source creation.
Choose for everyone, everyone except, or for everyone who to drive the policy. If you choose for everyone except, use the subsequent dropdown to choose the group, purpose, or attribute for your condition. If you choose for everyone who as a condition, complete the Otherwise clause before continuing to the next step.
Opt to complete the Enter Rationale for Policy (Optional) field, and then click Add.
For global policies: Click the dropdown menu beneath Where should this policy be applied, and select On all data sources, On data sources, or When selected by data owners. If you select On data sources, finish the condition in one of the following ways:
tagged: Select this option and then search for tags in the subsequent dropdown menu.
with columns tagged: Select this option and then search for tags in the subsequent dropdown menu.
with column names spelled like: Select this option, and then enter a regex and choose a modifier in the subsequent fields.
in server: Select this option and then choose a server from the subsequent dropdown menu to apply the policy to data sources that share this connection string.
created between: Select this option and then choose a start date and an end date in the subsequent dropdown menus.
Click Create Policy. If creating a global policy, you then need to click Activate Policy or Stage Policy.
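For reference, the row filter this policy produces behaves like a simple time-window predicate over the data source's event-time column. The sketch below is illustrative only (Immuta enforces the filter natively in the data platform); the column and row shapes are hypothetical:

```python
from datetime import datetime, timedelta, timezone

def more_recent_than(event_time, days, now=None):
    """Return True if the row's event-time falls within the last `days` days."""
    if now is None:
        now = datetime.now(timezone.utc)
    return event_time >= now - timedelta(days=days)

# Hypothetical "Only show data by time: more recent than 30 days" policy
now = datetime(2024, 6, 30, tzinfo=timezone.utc)
rows = [
    {"id": 1, "event_time": datetime(2024, 6, 15, tzinfo=timezone.utc)},
    {"id": 2, "event_time": datetime(2024, 1, 1, tzinfo=timezone.utc)},
]
visible = [r["id"] for r in rows if more_recent_than(r["event_time"], 30, now)]
print(visible)  # [1]
```

An "older than" policy is the same predicate with the comparison reversed.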
Click the Policies icon in the left sidebar and navigate to the Data Policies or Subscription Policies tab.
Click the dropdown menu in the Actions column of a policy and select Clone.
Open the dropdown menu and click Edit in the Global Policy Builder. Then make your changes using the dropdown menus.
Click Create Policy, select Activate Policy, and then click Confirm.
Note: If a cloned policy contains custom certifications, the certifications will also be cloned.
Click the Policies icon in the left sidebar and navigate to the Data Policies tab.
Click the dropdown menu in the Actions column of one of the templated policies and select Activate. Note: If data governors decide to stage an active policy, they select Stage from this dropdown menu.
The templated policy is now applied to all relevant data sources.
Click the dropdown menu in the Actions column of a policy and select Stage. Note: If data governors decide to make a staged policy active, they select Activate from this dropdown menu.
Click Confirm in the dialog that appears.
The policy is now removed from data sources.
To manage and apply existing policies to data sources, a user must either have the CREATE_DATA_SOURCE Immuta permission or be manually assigned the owner role on a data source.
After a policy with a certification requirement is applied to a data source, data owners will receive a notification indicating that they need to certify the policy.
Navigate to the Policies tab of the affected data source, and review the policy in the Data Policies section.
Click Certify Policy.
In the Policy Certification modal, click Sign and Certify.
Once this setting is enabled on the app settings page, data owners can exempt users from policies on a per-data-source basis to allow those users to see all the data, regardless of the global or local policies applied. Note: By default, policy exemptions are disabled in Immuta.
Select a data source and click the Policies tab.
In the Data Policies menu, click Add Exemptions. This button will only be visible if policy exemptions have been enabled in your Immuta instance.
Enter the names of the users or groups to exempt from your policies.
Click Create to finish your exemption policy.
Click Save All to apply the policy to your data source.
If you created a data source that is similar to an existing data source and you would like to apply the same policies, you can copy those policies in the policy builder.
Select a data source and click the Policies tab.
Click Edit Subscription Policy and then select Apply Existing Policies from Data Source.
Search for and select the data source that you would like to copy policies from. If your data source is capable of supporting these policies, they will appear in the policy builder. If not, you will receive an error. Be sure that the data source that you are copying policies from follows a similar structure to your current data source so that the policies remain relevant.
Once you have a data policy in effect, you can view the changes in your policies by clicking the Policy Diff button in the data policies section on a data source's policies tab.
The Policy Diff button displays previous policies and the current policy applied to the data source.
When executing transforms in your data platform, new tables and views are constantly being created, columns are added, and data is changed through transform DDL. This constant transformation can cause latency between the DDL and Immuta discovering, adapting, and attaching policies to those changes, which can result in data leaks. This policy latency is referred to as policy downtime.
The goal is to have as little policy downtime as possible. However, because Immuta is separate from data platforms, and those data platforms do not currently offer webhooks or an eventing service, Immuta does not receive alerts of DDL events. This causes policy downtime.
This page describes the appropriate steps to minimize policy downtime as you execute transforms using dbt or any other transform tool and links to tutorials that will help you complete these steps. This page is specific to Snowflake integrations, but the best practices outlined below can be used with other integrations.
Required:
Native schema monitoring enabled: This feature improves performance of legacy schema monitoring and enhances it by detecting destructively recreated tables (from CREATE OR REPLACE statements), even if the table schema wasn't changed. This feature is enabled by default.
Recommended:
Snowflake table grants enabled: This feature implements Immuta subscription policies as table GRANTs in Snowflake rather than as Snowflake row access policies. Note that this feature may not be automatically enabled if you were an Immuta customer before January 2023; see Enable Snowflake table grants to enable it.
Low row access mode enabled: This feature removes unnecessary Snowflake row access policies when Immuta Project workspaces or impersonation are disabled, which improves the query performance for data consumers.
Sensitive data discovery (SDD) enabled: This feature is only necessary if you are using auto-tagging.
Consult your Immuta professional for instructions to enable private preview features.
To benefit from the scalability and manageability provided by Immuta, you should author all Immuta policies as global policies. Global policies are built at the semantic layer using tags, rather than referencing individual tables. With global policies, as soon as Immuta discovers a new tag, any applicable policy is automatically applied. This is the most efficient approach for reducing policy downtime.
There are three different approaches for tagging in Immuta:
Auto-tagging (recommended): This approach uses SDD to automatically tag data.
Manually tagging with an external catalog: This approach pulls in the tags from an external catalog. Immuta supports Snowflake, Alation, and Collibra to pull in external tags.
Manually tagging in Immuta: This approach requires a user to create and manually apply tags to all data sources using the Immuta API or UI.
Note that there is added complexity when manually tagging new columns with Alation, Collibra, or Immuta: these catalogs can only tag columns that are already registered in Immuta. If you have a new column in Snowflake, you must wait until schema monitoring runs and detects that new column; then the column must be manually tagged. This issue is not present when manually tagging with Snowflake (because Snowflake is already aware of the column) or when using SDD (because SDD runs after schema monitoring).
Using this approach, Immuta can automatically tag your data after it is registered by schema monitoring using sensitive data discovery (SDD). SDD is made of algorithms you can customize to discover and tag data most important to you and your organization's policies. Once customized and deployed, any time Immuta discovers a new table or column through schema monitoring, SDD will run and automatically tag the new columns without the need for any manual intervention. This is the recommended option because once SDD is customized for your organization, it will eliminate the human error associated with manually tagging and is more proactive than manual tagging, further reducing policy downtime.
SDD should be enabled and customized before registering any data with Immuta.
Using this approach, you will rely on humans to tag. Those tags will be stored in the data platform (Snowflake) or catalog (Alation, Collibra) and then synchronized back to Immuta. If using this option, Immuta recommends storing the tags in Snowflake, because writing the tags to Snowflake can be managed and reused with SQL from tools like dbt, removing the burden of manually tagging on every run. API calls to Alation and Collibra are also possible, but they are not accessible over SQL the way dbt-to-Snowflake writes are. Manually tagging through the Alation or Collibra UI will increase data downtime.
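To illustrate how Snowflake-stored tags can be managed and reused with SQL, the sketch below generates Snowflake `ALTER TABLE ... MODIFY COLUMN ... SET TAG` statements (real Snowflake syntax) for hypothetical table, column, and tag names, the way a dbt hook might:

```python
def column_tag_ddl(table, column_tags):
    """Build Snowflake DDL that applies governance tags to columns.

    column_tags maps column name -> {tag_name: tag_value}.
    """
    stmts = []
    for column, tags in column_tags.items():
        for tag, value in tags.items():
            stmts.append(
                f"ALTER TABLE {table} MODIFY COLUMN {column} "
                f"SET TAG {tag} = '{value}';"
            )
    return stmts

# Hypothetical model: tag two columns on analytics.public.customers
ddl = column_tag_ddl(
    "analytics.public.customers",
    {
        "email": {"pii_type": "email"},
        "country": {"pii_type": "country"},
    },
)
for stmt in ddl:
    print(stmt)
```

Because the statements are plain SQL, they can run in the same job as the transform itself, keeping tags in lockstep with the tables they describe.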
Your catalog configuration should be enabled before registering any data with Immuta.
Preventing automatic table statistics when registering tables with the Immuta API
If using this approach, you must use a tag from your external catalog to prevent automatic table statistics. This means the preventing tag must be configured on the app settings page and must be identified in the API payload when registering the data source. If you instead use an Immuta tag, your external catalog tags will be overridden and not applied to your data source; this behavior ensures a single source of truth for tags. If for any reason you must mix Immuta tags (with the exception of SDD tags) with external catalog tags, then you must use the Immuta UI to register data sources.
Using this approach, you will rely on humans to tag, but the tags will be stored directly in Immuta. This can be done using the Immuta API or through the Immuta UI. However, manually tagging through the Immuta UI will increase data downtime.
When registering tables with Immuta, you must register each database or catalog with schema monitoring enabled. With schema monitoring, you do not need to individually register tables; instead, you make Immuta aware of databases, and Immuta periodically scans each database for changes and registers any new changes for you. You can also manually run schema monitoring using the Immuta API.
If you are not going to use any of the advanced masking techniques provided by Immuta (format preserving masking, K-Anonymization, or randomized response), you should configure Immuta to prevent automatic table statistics, which will improve data registration performance.
Access to and registration of views created from Immuta-protected tables only need to be taken into consideration if you are using both data and subscription policies.
Views will have existing data policies (row-level security, masking) enforced on them that exist on the backing tables by nature of how views work (the query is passed down to the backing tables). So when you tag and register a view with Immuta, you are re-applying the same data policies on the view that already exist on the backing tables, assuming the tags that drive the data policies are the same on the view’s columns.
If you do not want this behavior or its possible negative performance consequences, then Immuta recommends the following based on how you are tagging data:
For auto-tagging, place your incremental views in a separate database that is not being monitored by Immuta. Do not register them with Immuta, and schema monitoring will not detect them from the separate database.
For either manually tagging option, do not tag view columns.
Using either option, the views will only be accessible to the person who created them. The views will not have any subscription policies applied to give other users access because the views are either not registered in Immuta or there are no tags. To give other users access to the data in the view, they should subscribe to the table at the end of the transform pipeline.
However, if you do want to share the views using subscription policies, you should ensure that the tags that drive the subscription policies exist on the view and that those tags are not shared with tags that drive your data policies. It is possible to target subscription policies on all tables or tables from a specific database rather than using tags.
Policy is enforced on READ. Therefore, if you run a transform that creates a new table, the data in that new table will represent the policy-enforced data.
For example, if the credit_card_number column is masked for Steve, then on read the real credit card numbers are dynamically masked. If Steve then copies them into a new table via the transform, he physically loads masked credit card numbers into that table. Now if another user, Jane, who is allowed to see credit card numbers, queries the table, her query will not show the real credit card numbers, because the numbers in that table are already masked. This problem only exists for tables, not views, since tables have the data physically copied into them.
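The read-then-copy behavior can be simulated in a few lines. The masking function and table shapes below are illustrative stand-ins, not Immuta internals:

```python
def read(table, user, masked_for):
    """Simulate policy-on-READ: mask credit_card_number unless the user is cleared."""
    def mask(value):
        return "XXXX-XXXX-XXXX-" + value[-4:]
    out = []
    for row in table:
        row = dict(row)  # copy so the source table is untouched
        if user in masked_for:
            row["credit_card_number"] = mask(row["credit_card_number"])
        out.append(row)
    return out

source = [{"id": 1, "credit_card_number": "4111-1111-1111-1234"}]
masked_for = {"steve"}          # Steve's reads are masked; Jane's are not

# Steve's transform copies *policy-enforced* data into a new table.
new_table = read(source, "steve", masked_for)

# Jane is allowed to see real numbers, but the copy already holds masked values.
jane_view = read(new_table, "jane", masked_for)
print(jane_view[0]["credit_card_number"])  # XXXX-XXXX-XXXX-1234
```

Had the transform produced a view instead of a table, Jane's query would pass through to the source table and return the real values.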
To address this situation, you can do one of the following:
Use views for all transforms.
Ensure the users who are executing the transforms always have a higher level of access than the users who will consume the results of the transforms. Or, if this is not possible,
Set up a dev environment for creating the transformation code; then, when ready for production, have a promotion process to execute those production transformations using a system account free of all policies. Once the jobs execute as that system account, Immuta will discover, tag, and apply the appropriate policy.
Data downtime refers to techniques that hide data after transformations until Immuta policies have had a chance to synchronize. It makes data temporarily inaccessible; however, this is preferable to the data leaks that could occur while waiting for policies to sync.
Whenever DDL occurs, it can result in policy downtime, such as in the following examples:
An existing table or view is recreated in Snowflake with the CREATE OR REPLACE statement. This drops all policy.
A new column is added to a table that needs to be masked from users that have access to that table.
A new table is created in a space where other users have read access.
A tag that drives a policy is updated, deleted, or added in Snowflake with no other changes to the schema or table.
Immuta recommends all of the following best practices to ensure data downtime occurs during policy downtime:
Do not use COPY GRANTS when executing a CREATE OR REPLACE statement.
Do not use GRANT SELECT ON FUTURE TABLES.
Use CREATE OR REPLACE for all DDL, including altering tags, so that access is always revoked.
Without these best practices, you are making unintentional policy decisions in Snowflake that may conflict with your organization's actual policies enforced by Immuta.
For example, if a CREATE OR REPLACE statement adds a new column that contains sensitive data and the user copies grants with COPY GRANTS, that column is opened to users, causing a data leak. Instead, access must be blocked using the data downtime techniques above until Immuta has synchronized.
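One way to hold the line on these practices is to lint the SQL your transform tool generates (dbt compiles everything to SQL) before it runs. This check is a pipeline suggestion, not an Immuta feature:

```python
import re

# Patterns that violate the data downtime best practices above
FORBIDDEN = [
    (r"\bCOPY\s+GRANTS\b", "COPY GRANTS re-applies old grants after CREATE OR REPLACE"),
    (r"\bGRANT\s+SELECT\s+ON\s+FUTURE\s+TABLES\b", "future grants bypass Immuta policy"),
]

def lint_sql(sql):
    """Return a list of violation messages for the given SQL text."""
    return [msg for pattern, msg in FORBIDDEN if re.search(pattern, sql, re.IGNORECASE)]

good = "CREATE OR REPLACE TABLE t AS SELECT * FROM s;"
bad = "CREATE OR REPLACE TABLE t COPY GRANTS AS SELECT * FROM s;"
print(lint_sql(good))  # []
print(lint_sql(bad))   # one violation
```

Failing the CI job on any violation keeps unintentional Snowflake-side policy decisions out of production.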
As discussed above, data platforms do not currently have webhooks or an eventing service, so Immuta does not receive alerts of DDL events. Schema monitoring runs every 24 hours by default to detect changes, but you should also run schema monitoring across your databases after you make changes to them. You can manually run schema monitoring using the Immuta API, and the payload can be scoped down to run schema monitoring on a specific database or schema, or column detection on a specific table.
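As a hedged sketch, a scoped schema monitoring run can be triggered over HTTP. The /dataSource/detectRemoteChanges endpoint name comes from this page; the instance URL, auth header, and payload field names below are assumptions to verify against your Immuta API reference:

```python
import json
from urllib import request

IMMUTA_URL = "https://immuta.example.com"   # hypothetical instance URL
API_KEY = "REPLACE_ME"                      # hypothetical auth token

def schema_monitoring_request(database=None, schema=None):
    """Build a POST to /dataSource/detectRemoteChanges, optionally scoped.

    The scoping field names ("database", "schema") are assumptions.
    """
    payload = {}
    if database:
        payload["database"] = database
    if schema:
        payload["schema"] = schema
    return request.Request(
        f"{IMMUTA_URL}/dataSource/detectRemoteChanges",
        data=json.dumps(payload).encode(),
        headers={"Authorization": API_KEY, "Content-Type": "application/json"},
        method="POST",
    )

req = schema_monitoring_request(database="ANALYTICS")
print(req.full_url)
# To actually send it (not executed here):
# with request.urlopen(req) as resp:
#     print(resp.status)
```

Calling this at the end of each transform job shrinks policy downtime to the gap between the DDL and the next synchronization.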
When schema monitoring is run globally, it will detect the following:
Any new table
Any new view
Any existing table destructively recreated through CREATE OR REPLACE (even if there are no schema changes)
Any existing view destructively recreated through CREATE OR REPLACE (even if there are no schema changes)
Any dropped table
Any new column
Any dropped column
Any column type change (which can impact policy application)
Any tag created, updated, or deleted (but only if the schema changed; otherwise tag changes alone are detected with Immuta’s health check)
Then, if any of the above is detected, for those tables or views, Immuta will complete the following:
Synchronize the existing policy back to the table or view to reduce data downtime
If SDD is enabled, execute SDD on any new columns or tables
If an external catalog is configured, execute a tag synchronization
Synchronize the final updated policy based on the SDD results and tag synchronization
Apply "New" tags to all tables and columns not previously registered in Immuta and lock them down with the "New Column Added" templated global policy
The two options for running schema monitoring are described in the sections below. You can implement them together or separately.
If the data platform supports custom UDFs and external functions, you can wrap the /dataSource/detectRemoteChanges endpoint with one. Then, as your transform jobs complete, you can use SQL to call this UDF or external function to tell Immuta to execute schema monitoring. The reason for wrapping the endpoint in a UDF or external function is that dbt and transform jobs always compile to SQL, and the best way to trigger schema monitoring immediately after the table is created (after the transform job completes) is to execute more SQL in the same job.
Consult your Immuta professional for a custom UDF compatible with Snowflake.
The default schedule for Immuta to run schema monitoring is every night at 12:30 a.m. UTC. However, this schedule can be updated through advanced configuration. The processing time for schema monitoring is dependent on the number of tables and columns changed in your data environment. If you want to change the schedule to run more frequently than daily, Immuta recommends you test the runtime (with a representative set of DDL changes) before making the configuration change.
There are some use cases where you want all users to have access to all tables, but want to mask sensitive data within those tables. While you could do this using just data policies, Immuta recommends you still utilize subscription policies to ensure users are granted access in Snowflake.
Subscription policies give Immuta a state to move table access into after data downtime, realizing policy uptime. Without subscription policies, when Immuta synchronizes policy, users will continue to lack access to tables because there is no subscription policy granting them access. If you want all users to have access to all tables, use a global "Anyone" subscription policy in Immuta for all your tables. This will ensure users are granted access back to the tables after data downtime.