Data Engineering with Limited Policy Downtime

When executing transforms in your data platform, new tables and views are constantly being created, columns added, and data changed (transform DDL). This constant change creates latency between the DDL and Immuta discovering, adapting, and attaching policy to those changes, which can cause data leaks. This policy latency is referred to as policy downtime.

The goal is to have as little policy downtime as possible. However, because Immuta is separate from the data platforms, and those platforms do not currently provide webhooks or an eventing service, Immuta does not receive alerts of DDL events. This causes policy downtime.

This page describes the appropriate steps to minimize policy downtime as you execute transforms using dbt or any other transform tool and links to how-tos that will help you complete these steps.

Prerequisites

Requirement (one of the following, depending on how your data is registered):

  • Object sync (enabled by default): This feature detects destructively recreated tables (from CREATE OR REPLACE statements) and matches them with data sources in Immuta, even if the table schema wasn't changed.

  • Schema monitoring: This feature detects destructively recreated tables (from CREATE OR REPLACE statements) even if the table schema wasn't changed. Enable schema monitoring when registering your data sources.

Recommended (if you are using Snowflake):

  • Snowflake table grants enabled: This feature implements Immuta subscription policies as table GRANTS in Snowflake rather than Snowflake row access policies. Note this feature may not be automatically enabled if you were an Immuta customer before January 2023; see Enable Snowflake table grants to enable it.

  • Low row access mode enabled: This feature removes unnecessary Snowflake row access policies when Immuta project workspaces or impersonation are disabled, which improves the query performance for data consumers.

Step 1: Create global policies and prepare tags for data sources

To benefit from the scalability and manageability provided by Immuta, you should author all Immuta policies as global policies. Global policies are built at the semantic layer using tags rather than referencing individual tables. When using global policies, as soon as a new tag is discovered by Immuta, any applicable policy will automatically be applied. This is the most efficient approach for reducing policy downtime.

There are three different approaches for tagging in Immuta:

  1. Auto-tagging (recommended): This approach uses identification to automatically tag data.

  2. Manually tagging with an external catalog: This approach pulls in the tags from an external catalog. See the support matrix for a list of the supported external catalogs.

  3. Manually tagging in Immuta: This approach requires a user to create and manually apply tags to all data sources using the Immuta API or UI.

Note that there is added complexity when manually tagging new columns with Alation, Collibra, Microsoft Purview, or Immuta: these catalogs can only tag columns that are already registered in Immuta. If you have a new column, you must wait until object sync or schema detection runs and detects that new column; only then can the column be manually tagged. This issue is not present when manually tagging with Snowflake or Databricks Unity Catalog, because they are already aware of the column, or when using identification, because it runs automatically with object sync or schema monitoring.

Auto-tagging (recommended)

Using this approach, Immuta can automatically tag your data after it is registered by object sync or schema monitoring using identification. Identification is made of algorithms you can customize to discover and tag the data most important to you and your organization's policies. Once customized and deployed, any time Immuta discovers a new table or column through object sync or schema monitoring, identification will run and automatically tag the new columns without the need for any manual intervention. This is the recommended option because, once identification is customized for your organization, it eliminates the human error associated with manual tagging and is more proactive than manual tagging, further reducing policy downtime.

Manually tagging with an external catalog

Using this approach, you will rely on humans to tag. Those tags will be stored in the data platform or catalog and then synchronized back to Immuta. If using this option, Immuta recommends storing the tags in the data platform, because the write of the tags to Snowflake or Databricks Unity Catalog can be managed and reused with SQL from tools like dbt, removing the burden of manually tagging on every run. API calls to external catalogs are also possible, but are not accessible over dbt. Manually tagging through the Alation, Collibra, or Microsoft Purview UI will negatively impact data downtime.
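Because tags written in the data platform can be reapplied with SQL, they fit naturally at the end of a transform job. A minimal Snowflake sketch (the governance.pii tag and object names are illustrative, and the tag must already exist):

```sql
-- Reapply the column tag as the last step of the transform job, so the
-- tag survives a destructive CREATE OR REPLACE of the table.
ALTER TABLE analytics.customers
  MODIFY COLUMN email SET TAG governance.pii = 'email';
```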

Manually tagging in Immuta

Using this approach, you will rely on humans to tag, but the tags will be stored directly in Immuta. This can be done using the Immuta API or through the Immuta UI. However, manually tagging through the Immuta UI will negatively impact data downtime.

Step 2: Register your data in Immuta

Using connections: When registering a connection with Immuta, object sync is enabled by default. Object sync registers all data objects that the Immuta system user can access and periodically syncs that connection for changes, registering any new data objects for you. You can also manually run object sync using the Immuta API.

Using integrations: When registering tables with Immuta, you must register each database or catalog with schema monitoring enabled. Schema monitoring means that you do not need to individually register tables; rather, you make Immuta aware of databases, and Immuta will periodically scan each database for changes and register any new changes for you. You can also manually run schema monitoring using the Immuta API.

Step 3: Consider the result and user making transformations

Views vs tables

Access to and registration of views created from Immuta-protected tables only need to be taken into consideration if you are using both data and subscription policies.

Because of how views work (the query is passed down to the backing tables), views inherit the data policies (row-level security, masking) that exist on their backing tables. So when you tag and register a view with Immuta, you are re-applying the same data policies to the view that already exist on the backing tables, assuming the tags that drive the data policies are the same on the view's columns.
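For illustration (names are hypothetical), the view below simply passes its query down to the backing table, so any Immuta masking on that table is enforced through the view as well:

```sql
-- Querying this view executes its SELECT against raw.customers, so the
-- masking policy on raw.customers.credit_card_number applies here too.
CREATE VIEW analytics.v_customers AS
SELECT customer_id, credit_card_number
FROM raw.customers;
```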

If you do not want this behavior or its possible negative performance consequences, then Immuta recommends the following based on how you are tagging data:

  • For auto-tagging, place your incremental views in a separate database that is not monitored by Immuta. Do not register them with Immuta; schema monitoring will not detect them in the separate database.

  • For either manually tagging option, do not tag view columns.

With either option, the views will only be accessible to the person who created them. The views will not have any subscription policies applied to give other users access, because the views are either not registered in Immuta or carry no tags. To give other users access to the data in a view, they should subscribe to the table at the end of the transform pipeline.

However, if you do want to share the views using subscription policies, you should ensure that the tags that drive the subscription policies exist on the view and that those tags are not shared with tags that drive your data policies. It is possible to target subscription policies on all tables or tables from a specific database rather than using tags.

Access level of job executioner

Policy is enforced on READ. Therefore, if you run a transform that creates a new table, the data in that new table will represent the policy-enforced data.

For example, if the credit_card_number column is masked for Steve, the real credit card numbers are dynamically masked on read. If Steve then copies them into a new table via a transform, he physically loads masked credit card numbers into that table. Now if another user, Jane, who is allowed to see credit card numbers, queries the table, her query will still not show the real credit card numbers, because masked values are what was physically written. This problem only exists for tables, not views, since tables have the data physically copied into them.
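A minimal sketch of the problem (table, column, and schema names are illustrative):

```sql
-- Run as Steve, for whom credit_card_number is masked: the SELECT returns
-- masked values, so masked values are what get physically stored.
CREATE TABLE analytics.cc_copy AS
SELECT customer_id, credit_card_number
FROM raw.customers;

-- Run as Jane, who is allowed to see real credit card numbers: she still
-- gets masked values, because masking was baked in at write time.
SELECT credit_card_number FROM analytics.cc_copy;
```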

To address this situation, you can do one of the following:

  • Use views for all transforms.

  • Ensure the users who are executing the transforms always have a higher level of access than the users who will consume the results of the transforms. Or, if this is not possible,

  • Set up a dev environment for creating the transformation code; then, when ready for production, have a promotion process to execute those production transformations using a system account free of all policies. Once the jobs execute as that system account, Immuta will discover, tag, and apply the appropriate policy.

Step 4: Force data downtime

Data downtime refers to deliberately making data inaccessible after transformations until Immuta policies have had a chance to synchronize. While it renders data temporarily unavailable, that is preferable to the data leaks that could occur while waiting for policies to sync.

Whenever DDL occurs, it can cause policy downtime, such as in the following examples:

  • A new column is added to a table that needs to be masked from users that have access to that table (potential data leak).

  • A new table is created in a space where other users have read access on future tables (potential data leak).

  • A tag that drives a policy is updated, deleted, or newly added with no other changes to the schema or table. This is a limitation of how Immuta can discover changes: it is too inefficient to search for tag changes, so schema changes are what drive Immuta to take action.

  • An existing table or view is recreated with the CREATE OR REPLACE statement. This drops all policies, resulting in users losing all access. There is one exception: with Databricks Unity Catalog, the grants on the table are not dropped, which means masking and row-filtering policies are dropped but table access is not, making policy downtime even more critical to manage.

Best practices for Snowflake

Immuta recommends all of the following best practices to ensure data downtime occurs during policy downtime:

  • Use CREATE OR REPLACE for all DDL, including altering tags, so that access is always revoked.

  • Do not use COPY GRANTS when executing a CREATE OR REPLACE statement.

  • Do not use GRANT SELECT ON FUTURE TABLES.

Without these best practices, you are making unintentional policy decisions in Snowflake that may be in conflict with your organization's actual policies enforced by Immuta.

For example, if the CREATE OR REPLACE adds a new column that contains sensitive data, and the statement includes COPY GRANTS, it opens that column to existing users, causing a data leak. Instead, access must be blocked using the above data downtime techniques until Immuta has synchronized.
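To make the difference concrete (names are illustrative), compare the recommended pattern with the one to avoid:

```sql
-- Recommended: recreating without COPY GRANTS revokes prior grants, forcing
-- data downtime until Immuta re-synchronizes policy on the new ssn column.
CREATE OR REPLACE TABLE analytics.orders AS
SELECT o.*, c.ssn
FROM staging.orders o
JOIN staging.customers c ON o.customer_id = c.customer_id;

-- Avoid: COPY GRANTS carries the old grants over to the replaced table,
-- exposing the new ssn column before Immuta has applied masking.
CREATE OR REPLACE TABLE analytics.orders COPY GRANTS AS
SELECT o.*, c.ssn
FROM staging.orders o
JOIN staging.customers c ON o.customer_id = c.customer_id;
```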

Best practices for Databricks Unity Catalog

Immuta recommends all of the following best practices to ensure data downtime occurs during policy downtime:

  • Do not GRANT SELECT to catalogs or schemas, because those grants apply to current and future objects (future is the problematic piece).

  • There is no way to stop Databricks Unity Catalog from carrying over table grants on CREATE OR REPLACE statements, so ensure you initiate policy uptime as quickly as possible if you have row filters or masking policies on that table.

Without these best practices, you are making unintentional policy decisions in Databricks Unity Catalog that may be in conflict with your organization's actual policies enforced by Immuta.

For example, if you GRANT SELECT on a schema and someone later writes a new table with sensitive data into that schema, it could cause a data leak. Instead, access must be blocked using the above data downtime techniques until Immuta has synchronized.
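For example (catalog, schema, and group names are illustrative):

```sql
-- Avoid: schema- and catalog-level grants apply to current AND future tables,
-- so the next table written here is exposed before Immuta can apply policy.
GRANT SELECT ON SCHEMA prod.analytics TO `analysts`;

-- Prefer: let Immuta manage grants, or grant only on specific existing tables
-- after Immuta has synchronized policy on them.
GRANT SELECT ON TABLE prod.analytics.orders TO `analysts`;
```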

Step 5: Initiate policy uptime

When object sync or schema monitoring is run globally, it will detect the following:

  • Any new table

  • Any new view

  • Changes to the object type backing an Immuta data source (for example, when a TABLE changes to a VIEW); when an object type changes, Immuta reapplies existing policies to the data source.

  • Any existing table destructively recreated through CREATE OR REPLACE (even if there are no schema changes)

  • Any existing view destructively recreated through CREATE OR REPLACE (even if there are no schema changes)

  • Any dropped table

  • Any new column

  • Any dropped column

  • Any column type change (which can impact policy application)

  • Any tag created, updated, or deleted (but only if the schema changed; otherwise tag changes alone are detected with Immuta’s health check)

Then, if any of the above is detected, for those tables or views, Immuta will complete the following:

  1. Synchronize the existing policy back to the table or view to reduce data downtime.

  2. If identification is enabled, execute identification on any new columns or tables.

  3. If an external catalog is configured, execute a tag synchronization.

  4. Synchronize the final updated policy based on the identification results and tag synchronization.

  5. Apply New tags:

    1. Using connections: If a global policy uses the new tag, all new columns not previously registered in Immuta will have the New tag applied, which will lock them down.

    2. Using integrations: All new tables and columns not previously registered in Immuta will have the New tag applied, which will lock them down with the "New Column Added" templated global policy.


The two options for running object sync or schema monitoring are described in the sections below. You can implement them together or separately.

Alert Immuta through the API or a custom function

As discussed above, data platforms do not currently have webhooks or an eventing service, so Immuta does not receive alerts of DDL events. Object sync or schema monitoring will run every 24 hours by default to detect changes, but either can be manually run across your databases after you make changes to them. You can manually run object sync or schema monitoring using the Immuta API, and the payload can be scoped down to run on a specific database, schema, or table.

Using connections: If the data platform supports custom UDFs and external functions, you can wrap the /data/crawl/{objectPath} endpoint with one. Then, as your transform jobs complete, you can use SQL to call this UDF or external function to tell Immuta to execute object sync. The reason for wrapping in a UDF or external function is that dbt and transform jobs always compile to SQL, and the best way to make this happen immediately after the table is created (after the transform job completes) is to execute more SQL in the same job.

Using integrations: If the data platform supports custom UDFs and external functions, you can likewise wrap the /dataSource/detectRemoteChanges endpoint to tell Immuta to execute schema monitoring.

Consult your Immuta professional for a custom UDF compatible with Snowflake or Databricks Unity Catalog.
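For illustration, a dbt model could invoke such a function in a post-hook (immuta_run_object_sync is a hypothetical name for the wrapper your Immuta professional would supply):

```sql
-- models/orders.sql (dbt): rebuild the table, then alert Immuta in the same
-- job. The post-hook compiles to SQL and runs right after the model builds.
{{ config(
    materialized = 'table',
    post_hook = "select immuta_run_object_sync('{{ this.database }}.{{ this.schema }}')"
) }}

select * from {{ ref('stg_orders') }}
```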

Automatically run object sync or schema detection

The default schedule for Immuta to run object sync is every night at 1:00 a.m. UTC. The processing time for object sync is dependent on the number of tables and columns changed in your data environment.

The default schedule for Immuta to run schema monitoring is every night at 12:30 a.m. UTC. However, this schedule can be updated by changing some advanced configuration. The processing time for schema monitoring is dependent on the number of tables and columns changed in your data environment. If you want to change the schedule to run more frequently than daily, Immuta recommends you test the runtime (with a representative set of DDL changes) before making the configuration change.

Consult your Immuta professional to update the schema monitoring schedule, if desired.

Recommended Immuta policy types

There are some use cases where you want all users to have access to all tables but want to mask sensitive data within those tables. While you could do this using just data policies, Immuta recommends you still utilize subscription policies to ensure users are granted access.

Subscription policies give Immuta a state to move table access into after data downtime, realizing policy uptime. Without subscription policies, when Immuta synchronizes policy, users will continue to not have access to tables, because there is no subscription policy granting them access. If you want all users to have access to all tables, use a global "Anyone" subscription policy in Immuta for all your tables. This will ensure users are granted access back to the tables after data downtime.
