App Settings


Navigate to the App Settings Page

  1. Click the App Settings icon in the navigation menu.

  2. Click the link in the App Settings panel to navigate to that section.

Use Existing Identity Access Manager

See the identity manager pages for a tutorial to connect a Microsoft Entra ID, Okta, or OneLogin identity manager.

To configure Immuta to use any other existing IAM,

  1. Click the Add IAM button.

  2. Complete the Display Name field and select your IAM type from the Identity Provider Type dropdown: LDAP/Active Directory, SAML, or OpenID.

Once you have selected LDAP/Active Directory from the Identity Provider Type dropdown menu,

  1. Adjust Default Permissions granted to users by selecting from the list in this dropdown menu, and then complete the required fields in the Credentials and Options sections. Note: Either User Attribute OR User Search Filter is required, not both. Completing one of these fields disables the other.

  2. Opt to have Case-insensitive user names by clicking the checkbox.

  3. Opt to Enable Debug Logging or Enable SSL by clicking the checkboxes.

  4. In the Profile Schema section, map attributes in LDAP/Active Directory to automatically fill in a user's Immuta profile. Note: Fields that you specify in this schema will not be editable by users within Immuta.

  5. Opt to Enable scheduled LDAP Sync support for LDAP/Active Directory and Enable pagination for LDAP Sync. Once enabled, confirm the sync schedule, which is written as a cron rule (see the example after these steps); the default is every hour. Confirm the LDAP page size for pagination; the default is 1,000.

  6. Opt to Sync groups from LDAP/Active Directory to Immuta. Once enabled, map attributes in LDAP/Active Directory to automatically pull information about the groups into Immuta.

  7. Opt to Sync attributes from LDAP/Active Directory to Immuta. Once enabled, add attribute mappings in the attribute schema. The desired attribute prefix should be mapped to the relevant schema URN.

  8. Opt to enable External Groups and Attributes Endpoint, Make Default IAM, or Migrate Users from another IAM by selecting the corresponding checkboxes.

  9. Then click the Test Connection button.

  10. Once the connection is successful, click the Test User Login button.

  11. Click the Test LDAP Sync button if scheduled sync has been enabled.
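
The sync schedule in step 5 uses standard five-field cron syntax (minute, hour, day of month, month, day of week). As an illustration of the syntax only — the exact default string Immuta uses is not shown here — an hourly schedule matching the stated default could be written as:

    0 * * * *

This rule fires at minute 0 of every hour.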

See the SAML protocol configuration guide.

Once you have selected OpenID from the Identity Provider Type dropdown menu,

  1. Take note of the ID. You will need this value to reference the IAM in the callback URL in your identity provider, which has the format <base url>/bim/iam/<id>/user/authenticate/callback. For example, with a hypothetical base URL of https://immuta.example.com and an ID of 3, the callback URL would be https://immuta.example.com/bim/iam/3/user/authenticate/callback.

  2. Note the SSO Callback URL shown. Navigate out of Immuta and register the client application with the OpenID provider. If prompted for client application type, choose web.

  3. Adjust Default Permissions granted to users by selecting from the list in this dropdown menu.

  4. Back in Immuta, enter the Client ID, Client Secret, and Discover URL in the form fields.

  5. Configure OpenID provider settings. There are two options:

    1. Set Discover URL to the /.well-known/openid-configuration URL provided by your OpenID provider (see the discovery document sketch after this list).

    2. If you are unable to use the Discover URL option, you can fill out Authorization Endpoint, Issuer, Token Endpoint, JWKS Uri, and Supported ID Token Signing Algorithms.

  6. If necessary, add additional Scopes.

  7. Opt to Enable SCIM support for OpenID by clicking the checkbox, which will generate a SCIM API Key.

  8. In the Profile Schema section, map attributes in OpenID to automatically fill in a user's Immuta profile. Note: Fields that you specify in this schema will not be editable by users within Immuta.

  9. Opt to Allow Identity Provider Initiated Single Sign On or Migrate Users from another IAM by selecting the checkboxes.

  10. Click the Test Connection button.

  11. Once the connection is successful, click the Test User Login button.
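
Both options in step 5 draw on the same OpenID Connect discovery metadata. As a sketch, assuming a hypothetical provider at login.example.com, fetching its discovery URL:

    curl https://login.example.com/.well-known/openid-configuration

returns a JSON document whose standard fields supply the values needed for manual configuration in option 2:

    {
      "issuer": "https://login.example.com",
      "authorization_endpoint": "https://login.example.com/oauth2/authorize",
      "token_endpoint": "https://login.example.com/oauth2/token",
      "jwks_uri": "https://login.example.com/oauth2/keys",
      "id_token_signing_alg_values_supported": ["RS256"]
    }

The endpoint paths above are placeholders; your provider's actual values will differ.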

Immuta Accounts

To set the default permissions granted to users when they log in to Immuta, click the Default Permissions dropdown menu, and then select permissions from this list.

Link External Catalogs

See the External Catalogs page.

Add a Workspace

  1. Select Add Workspace.

  2. Use the dropdown menu to select the Workspace Type and refer to the section below.

Databricks Spark

Databricks cluster configuration

Before creating a workspace, the cluster must send its configuration to Immuta. To trigger this, run a simple query on the cluster (for example, show tables); otherwise, an error message will occur when users attempt to create a workspace.
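
As a minimal sketch, any lightweight query run from a notebook attached to the cluster is enough, for example:

    # Run a trivial Spark SQL query so the cluster reports its configuration
    spark.sql("SHOW TABLES").show()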

Databricks API Token expiration

The Databricks API Token used for workspace access must be non-expiring. Using a token that expires risks losing access to projects that are created using that configuration.

Use the dropdown menu to select the Schema and refer to the corresponding tab below.

Required AWS S3 Permissions

When configuring a workspace using Databricks with S3, the following permissions need to be applied to arn:aws:s3:::immuta-workspace-bucket/workspace/base/path/* and arn:aws:s3:::immuta-workspace-bucket/workspace/base/path. Note: Two of these values are found on the App Settings page: immuta-workspace-bucket comes from the S3 Bucket field, and workspace/base/path comes from the Workspace Remote Path field.

  • s3:Get*

  • s3:Delete*

  • s3:Put*

  • s3:AbortMultipartUpload

Additionally, these permissions must be applied to arn:aws:s3:::immuta-workspace-bucket (again, immuta-workspace-bucket comes from the S3 Bucket field on the App Settings page); a combined policy sketch follows this list:

  • s3:ListBucket

  • s3:ListBucketMultipartUploads

  • s3:GetBucketLocation
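
For illustration, a minimal IAM policy sketch combining the two permission groups above, using the placeholder bucket and path from this example (substitute your own S3 Bucket and Workspace Remote Path values; the Sid labels are arbitrary):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "ImmutaWorkspaceObjects",
          "Effect": "Allow",
          "Action": [
            "s3:Get*",
            "s3:Delete*",
            "s3:Put*",
            "s3:AbortMultipartUpload"
          ],
          "Resource": [
            "arn:aws:s3:::immuta-workspace-bucket/workspace/base/path",
            "arn:aws:s3:::immuta-workspace-bucket/workspace/base/path/*"
          ]
        },
        {
          "Sid": "ImmutaWorkspaceBucket",
          "Effect": "Allow",
          "Action": [
            "s3:ListBucket",
            "s3:ListBucketMultipartUploads",
            "s3:GetBucketLocation"
          ],
          "Resource": "arn:aws:s3:::immuta-workspace-bucket"
        }
      ]
    }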

AWS

  1. Enter the Name.

  2. Click Add Workspace.

  3. Enter the Hostname.

  4. Opt to enter the Workspace ID (required with Azure Databricks).

  5. Enter the Databricks API Token.

  6. Use the dropdown menu to select the AWS Region.

  7. Enter the S3 Bucket.

  8. Opt to enter the S3 Bucket Prefix.

  9. Click Test Workspace Bucket.

  10. Once the credentials are successfully tested, click Save.

Azure

  1. Enter the Name.

  2. Click Add Workspace.

  3. Enter the Hostname, Workspace ID, Account Name, Databricks API Token, and Storage Container.

  4. Enter the Workspace Base Directory.

  5. Click Test Workspace Directory.

  6. Once the credentials are successfully tested, click Save.

Google Cloud

  1. Enter the Name.

  2. Click Add Workspace.

  3. Enter the Hostname, Workspace ID, Account Name, and Databricks API Token.

  4. Use the dropdown menu to select the Google Cloud Region.

  5. Enter the GCS Bucket.

  6. Opt to enter the GCS Object Prefix.

  7. Click Test Workspace Directory.

  8. Once the credentials are successfully tested, click Save.

Add An Integration

  1. Select Add Integration.

  2. Use the dropdown menu to select the Integration Type.

To enable Azure Synapse Analytics, see the Azure Synapse Analytics configuration page.

To enable Databricks Spark, see the Configure a Databricks Spark integration page.

To enable Databricks Unity Catalog, see the Getting started with the Databricks Unity Catalog integration page.

To enable Redshift, see the Redshift configuration page.

To enable Snowflake, see the Snowflake configuration page.

To enable Starburst, see the Starburst configuration page.

Global Integration Settings

Snowflake Audit Sync Schedule

Requirements: See the requirements for Snowflake audit on the Snowflake query audit logs page.

To configure the audit ingest frequency for Snowflake,

  1. Click the App Settings icon in the navigation menu.

  2. Navigate to the Global Integration Settings section and, within it, find the Snowflake Audit Sync Schedule setting.

  3. Enter an integer into the textbox. For example, if you enter 12, the audit sync will run once every 12 hours (twice a day).

Databricks Unity Catalog Configuration

Requirements: See the requirements for Databricks Unity Catalog audit on the Databricks Unity Catalog query audit logs page.

To configure the audit ingest frequency for Databricks Unity Catalog,

  1. Click the App Settings icon in the navigation menu.

  2. Navigate to the Global Integration Settings section and, within it, find the Databricks Unity Catalog Configuration setting.

  3. Enter an integer into the textbox. For example, if you enter 12, the audit sync will run once every 12 hours (twice a day).

Manage Data Providers

You can enable or disable the types of data sources users can create in this section. Some of these types require you to upload a driver before they can be enabled; the list of currently supported drivers is on the ODBC Drivers page.

To enable a data provider,

  1. Click the menu button in the upper right corner of the provider icon you want to enable.

  2. Select Enable from the dropdown.

If a driver needs to be uploaded,

  1. Click the menu button in the upper right corner of the provider icon, and then select Upload Driver from the dropdown.

  2. Click in the Add Files to Upload box and upload your file.

  3. Click Close.

  4. Click the menu button again, and then select Enable from the dropdown.

Enable Email

Application Admins can configure the SMTP server that Immuta will use to send emails to users. If this server is not configured, users will only be able to view notifications in the Immuta console.

To configure the SMTP server,

  1. Complete the Host and Port fields for your SMTP server.

  2. Enter the username and password Immuta will use to log in to the server in the User and Password fields, respectively.

  3. Enter the email address that will send the emails in the From Email field.

  4. Opt to Enable TLS by clicking this checkbox, and then enter a test email address in the Test Email Address field.

  5. Finally, click Send Test Email.

Once SMTP is enabled in Immuta, any Immuta user can request to receive notifications as emails; which notifications they receive varies with that user's permissions. For example, to receive email notifications about group membership changes, the receiving user needs the GOVERNANCE permission. Once a user requests email notifications, Immuta compiles notifications and distributes these compilations via email at 8-hour intervals.

Initialize Kerberos

To configure Immuta to protect data in a kerberized Hadoop cluster,

  1. Upload your Kerberos Configuration File; you can then modify the Kerberos configuration in the window that appears.

  2. Upload your Keytab File.

  3. Enter the principal Immuta will use to authenticate with your KDC in the Username field. Note: This must match a principal in the Keytab file.

  4. Adjust how often (in milliseconds) Immuta needs to re-authenticate with the KDC in the Ticket Refresh Interval field.

  5. Click Test Kerberos Initialization.

Generate System API Key

  1. Click the Generate Key button.

  2. Save this API key in a secure location.

Configure HDFS Cache Settings

To improve performance when using Immuta to secure Spark or HDFS access, a user's access level is cached momentarily. These cache settings are configurable, but decreasing the Time to Live (TTL) on any cache too low will negatively impact performance.

To configure cache settings, enter the time in milliseconds in each of the Cache TTL fields.

Set Public URLs

You can set the URL users will use to access the Immuta application. Note: Proxy configuration must be handled outside Immuta.

  1. Complete the Public Immuta URL field.

  2. Click Save and confirm your changes.

Audit Settings

Enable Exclude Query Text

By default, query text is included in query audit events from Snowflake, Databricks, and Starburst (Trino).

When query text is excluded from audit events, Immuta will retain query event metadata such as the columns and tables accessed. However, the query text used to make the query will not be included in the event. This setting is a global control for all configured integrations.

To exclude query text from audit events,

  1. Scroll to the Audit section.

  2. Check the box to Exclude query text from audit events.

  3. Click Save.

Default Subscription Merge Options

When multiple global subscription policies apply to a single data source, Immuta merges them. By default, these policies are combined with OR, meaning that users must meet the conditions outlined in at least one of the policies to get access. To change the default behavior and require users to meet the conditions of all policies applied to the data source (combine policies with AND),

  1. Click the Default Subscription Merge Options text in the left pane.

  2. Deselect the Default "allow shared policy responsibility" to be checked checkbox.

  3. Click Save.

Note: Even with this setting disabled, governors can opt to have their global subscription policies combined with OR during policy creation.

Configure Governor and Admin Settings

These options allow you to restrict the power individual users with the GOVERNANCE and USER_ADMIN permissions have in Immuta. Click the checkboxes to enable or disable these options.

Create Custom Permissions

You can create custom permissions that can then be assigned to users and leveraged when building subscription policies. Note: You cannot configure actions users can take within the console when creating a custom permission, nor can the actions associated with existing permissions in Immuta be altered.

To add a custom permission, click the Add Permission button, and then name the permission in the Enter Permission field.

Create Custom Data Source Access Requests

To create a custom questionnaire that all users must complete when requesting access to a data source, fill in the following fields:

  1. Opt for the questionnaire to be required.

  2. Key: Any unique value that identifies the question.

  3. Header: The text that will display on reports.

  4. Label: The text that will display in the questionnaire for the user. They will be prompted to type the answer in a text box.

Create Custom Login Message

To create a custom message for the login page of Immuta, enter text in the Enter Login Message box. Note: The message can be formatted in markdown.
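
For example, a hypothetical login message written in markdown:

    **Notice:** This system is for authorized users only.
    Questions? Contact [support@example.com](mailto:support@example.com).

The bold text and link here are illustrative; per the note above, any markdown formatting can be used.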

Opt to adjust the Message Text Color and Message Background Color by clicking in these dropdown boxes.

Prevent Automatic Table Statistics

Without fingerprints, some policies will be unavailable.

These policies will be unavailable until a data owner manually generates a fingerprint for a Snowflake data source:

  • Masking with format preserving masking

  • Masking using randomized response

To disable the automatic collection of statistics with a particular tag,

  1. Use the Select Tags dropdown to select the tag(s).

  2. Click Save.

Randomized response

Support limitation: This policy is only supported in Snowflake integrations.

When a randomized response policy is applied to a data source, the columns targeted by the policy are queried under a fingerprinting process. To enforce the policy, Immuta generates and stores predicates and a list of allowed replacement values that may contain data that is subject to regulatory constraints (such as GDPR or HIPAA) in Immuta's metadata database.

The location of the metadata database depends on your deployment:

  • Self-managed Immuta deployment: The metadata database is located on the server where your external metadata database is deployed.

  • SaaS Immuta deployment: The metadata database is located in the AWS global segment in which you have chosen to deploy Immuta.

To ensure this process does not violate your organization's data localization regulations, you need to first activate this masking policy type before you can use it in your Immuta tenant.

  1. Click Other Settings in the left panel and scroll to the Randomized Response section.

  2. Select the Allow users to create masking policies using Randomized Response checkbox to enable use of these policies for your organization.

  3. Click Save and confirm your changes.

Advanced Settings

Preview Features

If you enable any Preview features, provide feedback on how you would like these features to evolve.

Complex Data Types

  1. Click Advanced Settings in the left panel, and scroll to the Preview Features section.

  2. Check the Allow Complex Data Types checkbox.

  3. Click Save.

Advanced Configuration

Advanced configuration options provided by the Immuta Support team can be added in this section. The configuration must adhere to the YAML syntax.

Update the Time to Webhook Request Timeouts

  1. Expand the Advanced Settings section and add the following text to the Advanced Configuration to specify the number of seconds before webhook requests time out. For example, use 30 for 30 seconds; setting it to 0 results in no timeout.

    webhookIntegrationResponseTimeout: 30
  2. Click Save.

Update the Audit Ingestion Expiration

  1. Expand the Advanced Settings section and add the following text to the Advanced Configuration to specify the number of minutes before an audit job expires. For example, use 300 for 300 minutes.

    plugins:
      auditService:
        ingestionJob:
          expirationInMinutes: 300
  2. Click Save.
