App Settings


Navigate to the App Settings Page

  1. Click the App Settings icon in the navigation menu.

  2. Click the link in the App Settings panel to navigate to that section.

Use Existing Identity Access Manager

See the identity manager pages for a tutorial to connect a Microsoft Entra ID, Okta, or OneLogin identity manager.

To configure Immuta to use all other existing IAMs,

  1. Click the Add IAM button.

  2. Complete the Display Name field and select your IAM type from the Identity Provider Type dropdown: LDAP/Active Directory, SAML, or OpenID.

Once you have selected LDAP/Active Directory from the Identity Provider Type dropdown menu,

  1. Adjust Default Permissions granted to users by selecting from the list in this dropdown menu, and then complete the required fields in the Credentials and Options sections. Note: Either User Attribute OR User Search Filter is required, not both. Completing one of these fields disables the other.

  2. Opt to have Case-insensitive user names by clicking the checkbox.

  3. Opt to Enable Debug Logging or Enable SSL by clicking the checkboxes.

  4. In the Profile Schema section, map attributes in LDAP/Active Directory to automatically fill in a user's Immuta profile. Note: Fields that you specify in this schema will not be editable by users within Immuta.

  5. Opt to Enable scheduled LDAP Sync support for LDAP/Active Directory and Enable pagination for LDAP Sync. Once enabled, confirm the sync schedule, written as a cron rule (a sample appears after these steps); the default is every hour. Confirm the LDAP page size for pagination; the default is 1,000.

  6. Opt to Sync groups from LDAP/Active Directory to Immuta. Once enabled, map attributes in LDAP/Active Directory to automatically pull information about the groups into Immuta.

  7. Opt to Sync attributes from LDAP/Active Directory to Immuta. Once enabled, add attribute mappings in the attribute schema. The desired attribute prefix should be mapped to the relevant schema URN.

  8. Opt to enable External Groups and Attributes Endpoint, Make Default IAM, or Migrate Users from another IAM by selecting the checkbox.

  9. Then click the Test Connection button.

  10. Once the connection is successful, click the Test User Login button.

  11. Click the Test LDAP Sync button if scheduled sync has been enabled.
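
For reference, a sync schedule is a standard five-field cron rule (minute, hour, day of month, month, day of week). A few hypothetical schedules:

    0 * * * *      # every hour (the default)
    0 */6 * * *    # every six hours
    30 2 * * 0     # 02:30 every Sunday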

Once you have selected SAML from the Identity Provider Type dropdown menu, see the SAML protocol configuration guide.

Once you have selected OpenID from the Identity Provider Type dropdown menu,

  1. Take note of the ID. You will need this value to reference the IAM in the callback URL in your identity provider with the format <base url>/bim/iam/<id>/user/authenticate/callback. (An example callback URL appears after these steps.)

  2. Note the SSO Callback URL shown. Navigate out of Immuta and register the client application with the OpenID provider. If prompted for client application type, choose web.

  3. Adjust Default Permissions granted to users by selecting from the list in this dropdown menu.

  4. Back in Immuta, enter the Client ID, Client Secret, and Discover URL in the form fields.

  5. Configure OpenID provider settings. There are two options:

    1. Set Discover URL to the /.well-known/openid-configuration URL provided by your OpenID provider.

    2. If you are unable to use the Discover URL option, you can fill out Authorization Endpoint, Issuer, Token Endpoint, JWKS Uri, and Supported ID Token Signing Algorithms.

  6. If necessary, add additional Scopes.

  7. Opt to Enable SCIM support for OpenID by clicking the checkbox, which will generate a SCIM API Key.

  8. In the Profile Schema section, map attributes in OpenID to automatically fill in a user's Immuta profile. Note: Fields that you specify in this schema will not be editable by users within Immuta.

  9. Opt to Allow Identity Provider Initiated Single Sign On or Migrate Users from another IAM by selecting the checkboxes.

  10. Click the Test Connection button.

  11. Once the connection is successful, click the Test User Login button.
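
For example, assuming a hypothetical base URL of https://immuta.example.com and an IAM ID of oidc1, the callback URL registered with your OpenID provider in step 1 would be:

    https://immuta.example.com/bim/iam/oidc1/user/authenticate/callback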

Immuta Accounts

To set the default permissions granted to users when they log in to Immuta, click the Default Permissions dropdown menu, and then select permissions from this list.

Link External Catalogs

See the External Catalogs page.

Enable External Masking

Deprecation notice: Support for this feature has been deprecated.

To enable external masking,

  1. Navigate to the App Settings page and click External Masking in the left sidebar.

  2. Click Add Configuration and specify an external endpoint in the External URI field.

  3. Click Configure, and then add at least one tag by selecting from the Search for tags dropdown menu. Note: Tag hierarchies are supported, so tagging a column as Sensitive.Customer would drive the policy if external masking was configured with the tag Sensitive.

  4. Select Authentication Method and then complete the authentication fields (when applicable).

  5. Click Test Connection and then Save.

Add a Workspace

  1. Select Add Workspace.

  2. Use the dropdown menu to select the Workspace Type and refer to the section below.

Databricks

Databricks cluster configuration

Before creating a workspace, the cluster must send its configuration to Immuta. To trigger this, run a simple query on the cluster (e.g., show tables); otherwise, an error message will occur when users attempt to create a workspace.

Databricks API Token expiration

The Databricks API Token used for workspace access must be non-expiring. Using a token that expires risks losing access to projects that are created using that configuration.

Use the dropdown menu to select the Schema and refer to the corresponding section below (AWS, Azure, or Google Cloud).

Required AWS S3 Permissions

When configuring a workspace using Databricks with S3, the following permissions must be applied to arn:aws:s3:::immuta-workspace-bucket/workspace/base/path/* and arn:aws:s3:::immuta-workspace-bucket/workspace/base/path. Two of these values are found on the App Settings page: immuta-workspace-bucket comes from the S3 Bucket field, and workspace/base/path comes from the Workspace Remote Path field. The required permissions are:

  • s3:Get*

  • s3:Delete*

  • s3:Put*

  • s3:AbortMultipartUpload

Additionally, these permissions must be applied to arn:aws:s3:::immuta-workspace-bucket (where immuta-workspace-bucket again comes from the S3 Bucket field on the App Settings page):

  • s3:ListBucket

  • s3:ListBucketMultipartUploads

  • s3:GetBucketLocation
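
For reference, the two permission sets above can be expressed as a single IAM policy. This is a minimal sketch assuming the placeholder bucket immuta-workspace-bucket and remote path workspace/base/path; substitute the values from your own S3 Bucket and Workspace Remote Path fields:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "ImmutaWorkspaceObjects",
          "Effect": "Allow",
          "Action": ["s3:Get*", "s3:Delete*", "s3:Put*", "s3:AbortMultipartUpload"],
          "Resource": [
            "arn:aws:s3:::immuta-workspace-bucket/workspace/base/path",
            "arn:aws:s3:::immuta-workspace-bucket/workspace/base/path/*"
          ]
        },
        {
          "Sid": "ImmutaWorkspaceBucket",
          "Effect": "Allow",
          "Action": ["s3:ListBucket", "s3:ListBucketMultipartUploads", "s3:GetBucketLocation"],
          "Resource": "arn:aws:s3:::immuta-workspace-bucket"
        }
      ]
    }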

AWS

  1. Enter the Name.

  2. Click Add Workspace.

  3. Enter the Hostname.

  4. Opt to enter the Workspace ID (required with Azure Databricks).

  5. Enter the Databricks API Token.

  6. Use the dropdown menu to select the AWS Region.

  7. Enter the S3 Bucket.

  8. Opt to enter the S3 Bucket Prefix.

  9. Click Test Workspace Bucket.

  10. Once the credentials are successfully tested, click Save.

Azure

  1. Enter the Name.

  2. Click Add Workspace.

  3. Enter the Hostname, Workspace ID, Account Name, Databricks API Token, and Storage Container.

  4. Enter the Workspace Base Directory.

  5. Click Test Workspace Directory.

  6. Once the credentials are successfully tested, click Save.

Google Cloud

  1. Enter the Name.

  2. Click Add Workspace.

  3. Enter the Hostname, Workspace ID, Account Name, and Databricks API Token.

  4. Use the dropdown menu to select the Google Cloud Region.

  5. Enter the GCS Bucket.

  6. Opt to enter the GCS Object Prefix.

  7. Click Test Workspace Directory.

  8. Once the credentials are successfully tested, click Save.

Add An Integration

  1. Select Add Integration.

  2. Use the dropdown menu to select the Integration Type.

To enable Azure Synapse Analytics, see the Azure Synapse Analytics configuration page.

To enable Databricks Spark, see the Simplified Databricks Spark configuration page.

To enable Databricks Unity Catalog, see the Getting started with the Databricks Unity Catalog integration page.

To enable Redshift, see the Redshift configuration page.

To enable Snowflake, see the Snowflake configuration page.

To enable Starburst, see the Starburst configuration page.

Global Integration Settings

Snowflake Audit Sync Schedule

Requirements: See the requirements for Snowflake audit on the Snowflake query audit logs page.

To configure the audit ingest frequency for Snowflake,

  1. Click the App Settings icon in the navigation menu.

  2. Navigate to the Global Integration Settings section and find the Snowflake Audit Sync Schedule.

  3. Enter an integer into the textbox. For example, if you enter 12, the audit sync will happen once every 12 hours (twice a day).

Databricks Unity Catalog Configuration

Requirements: See the requirements for Databricks Unity Catalog audit on the Databricks Unity Catalog query audit logs page.

To configure the audit ingest frequency for Databricks Unity Catalog,

  1. Click the App Settings icon in the navigation menu.

  2. Navigate to the Global Integration Settings section and find the Databricks Unity Catalog Configuration.

  3. Enter an integer into the textbox. For example, if you enter 12, the audit sync will happen once every 12 hours (twice a day).

Manage Data Providers

You can enable or disable the types of data sources users can create in this section. Some of these types will require you to upload an ODBC driver before they can be enabled. The list of currently supported drivers is on the ODBC Drivers page.

To enable a data provider,

  1. Click the menu button in the upper right corner of the provider icon you want to enable.

  2. Select Enable from the dropdown.

If an ODBC driver needs to be uploaded,

  1. Click the menu button in the upper right corner of the provider icon, and then select Upload Driver from the dropdown.

  2. Click in the Add Files to Upload box and upload your file.

  3. Click Close.

  4. Click the menu button again, and then select Enable from the dropdown.

Enable Email

Application Admins can configure the SMTP server that Immuta will use to send emails to users. If this server is not configured, users will only be able to view notifications in the Immuta console.

To configure the SMTP server,

  1. Complete the Host and Port fields for your SMTP server.

  2. Enter the username and password Immuta will use to log in to the server in the User and Password fields, respectively.

  3. Enter the email address that will send the emails in the From Email field.

  4. Opt to Enable TLS by clicking this checkbox, and then enter a test email address in the Test Email Address field.

  5. Finally, click Send Test Email.

Once SMTP is enabled in Immuta, any Immuta user can request access to notifications as emails, which will vary depending on the permissions that user has. For example, to receive email notifications about group membership changes, the receiving user will need the GOVERNANCE permission. Once a user requests access to receive emails, Immuta will compile notifications and distribute these compilations via email at 8-hour intervals.

Initialize Kerberos

To configure Immuta to protect data in a kerberized Hadoop cluster,

  1. Upload your Kerberos Configuration File; you can then add to or modify the Kerberos configuration in the window that appears.

  2. Upload your Keytab File.

  3. Enter the principal Immuta will use to authenticate with your KDC in the Username field. Note: This must match a principal in the Keytab file.

  4. Adjust how often (in milliseconds) Immuta needs to re-authenticate with the KDC in the Ticket Refresh Interval field.

  5. Click Test Kerberos Initialization.

Generate System API Key

  1. Click the Generate Key button.

  2. Save this API key in a secure location.

Configure HDFS Cache Settings

To improve performance when using Immuta to secure Spark or HDFS access, a user's access level is cached momentarily. These cache settings are configurable, but decreasing the Time to Live (TTL) on any cache too low will negatively impact performance.

To configure cache settings, enter the time in milliseconds in each of the Cache TTL fields.

Set Public URLs

You can set the URL users will use to access the Immuta application. Note: Proxy configuration must be handled outside Immuta.

  1. Complete the Public Immuta URL field.

  2. Click Save and confirm your changes.

Enable Sensitive Data Discovery

To enable sensitive data discovery and configure its settings, see the Sensitive Data Discovery page.

Audit Settings

Enable Exclude Query Text

By default, query text is included in query audit events from Snowflake, Databricks, and Starburst (Trino).

When query text is excluded from audit events, Immuta will retain query event metadata such as the columns and tables accessed. However, the query text used to make the query will not be included in the event. This setting is a global control for all configured integrations.

To exclude query text from audit events,

  1. Scroll to the Audit section.

  2. Check the box to Exclude query text from audit events.

  3. Click Save.

Allow Policy Exemptions

Deprecation notice: The ability to exclude users from all data policies at the data source level has been deprecated. To exempt users from having policies applied to them, add them to the exception criteria of your policies.

Click the Allow Policy Exemptions checkbox to allow users to specify who can bypass all policies on a data source.

Manage the Default Subscription Policy

Deprecation notice: The ability to configure the behavior of the default subscription policy has been deprecated. Once this configuration setting is removed from the app settings page, Immuta will not apply a subscription policy to registered data sources unless an existing global policy applies to them. To set an "Allow individually selected users" subscription policy on all data sources, create a global subscription policy with that condition that applies to all data sources, or apply a local subscription policy to individual data sources.

  1. Click the App Settings icon in the navigation menu.

  2. Scroll to the Default Subscription Policy section.

  3. Select the radio button to define the behavior of subscription policies when new data sources are registered in Immuta:

    • None: When this option is selected, Immuta will not apply any subscription policies to data sources when they are registered. Changing the default subscription policy to none will only apply to newly created data sources. Existing data sources will retain their existing subscription policies.

    • Allow individually selected users: When a data source is created, Immuta will apply a subscription policy to it that requires users to be individually selected to access the underlying table. In most cases, users who were able to query the table before the data source was created will no longer be able to query the table in the remote data platform until they are subscribed to the data source in Immuta.

  4. Click Save and confirm your changes.

Default Subscription Merge Options

Immuta merges multiple Global Subscription policies that apply to a single data source; by default, users must meet all the conditions outlined in each policy to get access (i.e., the conditions of the policies are combined with AND). To change the default behavior to allow users to meet the condition of at least one policy that applies (i.e., the conditions of the policies are combined with OR),

  1. Click the Default Subscription Merge Options text in the left pane.

  2. Select the Default "allow shared policy responsibility" to be checked checkbox.

  3. Click Save.

Note: Even with this setting enabled, Governors can opt to have their Global Subscription policies combined with AND during policy creation.

Configure Governor and Admin Settings

These options allow you to restrict the power individual users with the GOVERNANCE and USER_ADMIN permissions have in Immuta. Click the checkboxes to enable or disable these options.

Create Custom Permissions

You can create custom permissions that can then be assigned to users and leveraged when building subscription policies. Note: You cannot configure actions users can take within the console when creating a custom permission, nor can the actions associated with existing permissions in Immuta be altered.

To add a custom permission, click the Add Permission button, and then name the permission in the Enter Permission field.

Create Custom Data Source Access Requests

To create a custom questionnaire that all users must complete when requesting access to a data source, fill in the following fields:

  1. Opt for the questionnaire to be required.

  2. Key: Any unique value that identifies the question.

  3. Header: The text that will display on reports.

  4. Label: The text that will display in the questionnaire for the user. They will be prompted to type the answer in a text box.

Create Custom Login Message

To create a custom message for the login page of Immuta, enter text in the Enter Login Message box. Note: The message can be formatted in markdown.

Opt to adjust the Message Text Color and Message Background Color by clicking in these dropdown boxes.
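
For example, a hypothetical login message written in markdown:

    ## Example Corp Data Platform

    Access is limited to **authorized personnel**.
    For login issues, contact [the platform team](mailto:data-team@example.com).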

Prevent Automatic Table Statistics

Without fingerprints, some policies will be unavailable.

Disabling the collection of statistics will cause these policies to be unavailable:

  • Masking with format-preserving masking

  • Masking with K-Anonymization

  • Masking using randomized response

To disable the automatic collection of statistics with a particular tag,

  1. Use the Select Tags dropdown to select the tag(s).

  2. Click Save.

K-anonymization

Query engine and legacy fingerprint required

K-anonymization policies require the query engine and legacy fingerprint service, which are disabled by default. If you need to use k-anonymization policies, work with your Immuta representative to enable the query engine and legacy fingerprint service when you deploy Immuta.

When a k-anonymization policy is applied to a data source, the columns targeted by the policy are queried under a fingerprinting process that generates rules enforcing k-anonymity. The results of this query, which may contain data that is subject to regulatory constraints such as GDPR or HIPAA, are stored in Immuta's metadata database.

The location of the metadata database depends on your deployment:

  • Self-managed Immuta deployment: The metadata database is located in the server where you have your external metadata database deployed.

  • SaaS Immuta deployment: The metadata database is located in the AWS global segment in which you have chosen to deploy Immuta.

To ensure this process does not violate your organization's data localization regulations, you must activate this masking policy type before it can be used in your Immuta tenant.

  1. Click Other Settings in the left panel and scroll to the K-Anonymization section.

  2. Select the Allow users to create masking policies using K-Anonymization checkbox to enable k-anonymization policies for your organization.

  3. Click Save and confirm your changes.

Advanced Settings

Preview Features

If you enable any Preview features, provide feedback on how you would like these features to evolve.

Policy Adjustments

  1. Click Advanced Settings in the left panel, and scroll to the Preview Features section.

  2. Check the Enable Policy Adjustments checkbox.

  3. Click Save.

Complex Data Types

  1. Click Advanced Settings in the left panel, and scroll to the Preview Features section.

  2. Check the Allow Complex Data Types checkbox.

  3. Click Save.

Enhanced Subscription Policy Variables

For instructions on enabling this feature, navigate to the Global Subscription Policies Advanced DSL Tutorial.

Advanced Configuration

Advanced configuration options provided by the Immuta Support team can be added in this section. The configuration must adhere to YAML syntax.
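
Since the Advanced Configuration field holds a single YAML document, multiple advanced settings should be merged under their shared top-level keys rather than pasted as separate snippets. A sketch combining two of the settings documented below (assuming both are needed at once):

    webhookIntegrationResponseTimeout: 30
    plugins:
      auditService:
        ingestionJob:
          expirationInMinutes: 300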

Update the K-Anonymity Cardinality Cutoff

To increase the default cardinality cutoff for columns compatible with k-anonymity,

  1. Expand the Advanced Settings section and add the following text to the Advanced Configuration:

    plugins:
      postgresHandler:
        maxKAnonCardinality: 10000000
      snowflakeHandler:
        maxKAnonCardinality: 10000000
  2. Click Save.

  3. To regenerate the data source's fingerprint, navigate to that data source's page.

  4. Click the Status in the upper right corner.

  5. Click Re-run in the Fingerprint section of the dropdown menu.

Note: Re-running the fingerprint is only necessary for existing data sources. New data sources will be generated using the new maximum cardinality.

Update the Webhook Request Timeout

  1. Expand the Advanced Settings section and add the following text to the Advanced Configuration to specify the number of seconds before webhook requests time out. For example, use 30 for 30 seconds. Setting it to 0 will result in no timeout.

    webhookIntegrationResponseTimeout: 30
  2. Click Save.

Update the Audit Ingestion Expiration

  1. Expand the Advanced Settings section and add the following text to the Advanced Configuration to specify the number of minutes before an audit job expires. For example, use 300 for 300 minutes.

    plugins:
      auditService:
        ingestionJob:
          expirationInMinutes: 300
  2. Click Save.

Enable Discover Features

Enable Sensitive Data Discovery for Starburst (Trino)

  1. Expand the Advanced Settings section and add the following text to the Advanced Configuration:

    featureFlags:
      nativeSddTrino: true
  2. Click Save.

Enable Frameworks API, Data Security Framework, and Risk Assessment Framework

  1. Expand the Advanced Settings section and add the following text to the Advanced Configuration:

    featureFlags:
      frameworks: true
  2. Click Save.

Enable Additional Compliance Frameworks and the Data Inventory Dashboard

Requirement: The frameworks feature flag must be enabled and the configuration saved before the dataInventoryDashboard flag can be added to the advanced configuration field and enabled.

  1. Expand the Advanced Settings section and add the following text to the Advanced Configuration:

    featureFlags:
      dataInventoryDashboard: true
  2. Click Save.
