# App Settings

## Navigate to the App Settings Page

1. Click the **App Settings** icon in the navigation menu.
2. Click the link in the **App Settings** panel to navigate to that section.

## Use Existing Identity Access Manager

See the identity manager pages for tutorials on connecting a [Microsoft Entra ID](https://documentation.immuta.com/2024.2/people/section-contents/how-to-guides/microsoft-entra-id), [Okta](https://documentation.immuta.com/2024.2/people/section-contents/how-to-guides/okta-ldap), or [OneLogin](https://documentation.immuta.com/2024.2/people/section-contents/how-to-guides/onelogin) identity manager.

To configure Immuta to use any other existing IAM,

1. Click the **Add IAM** button.
2. Complete the **Display Name** field and select your IAM type from the **Identity Provider Type** dropdown: **LDAP/Active Directory**, **SAML**, or **OpenID**.

{% tabs %}
{% tab title="Add LDAP or Active Directory" %}
See the [Okta LDAP interface configuration guide](https://documentation.immuta.com/2024.2/people/section-contents/how-to-guides/okta-ldap#2-configure-your-iam-in-immuta).
{% endtab %}

{% tab title="Add SAML" %}
See the [SAML protocol configuration guide](https://documentation.immuta.com/2024.2/people/section-contents/how-to-guides/enable-saml).
{% endtab %}

{% tab title="Add OpenID" %}
See the [Okta and OpenID Connect configuration guide](https://documentation.immuta.com/2024.2/people/section-contents/how-to-guides/okta-openid-connect#2-add-openid-connect-in-immuta).
{% endtab %}
{% endtabs %}

## Immuta Accounts

To set the default permissions granted to users when they log in to Immuta, click the **Default Permissions** dropdown menu, and then select permissions from the list.

## Link External Catalogs

See the [External Catalogs page](https://documentation.immuta.com/2024.2/data-and-integrations/catalogs/configure).

## Enable External Masking

{% hint style="warning" %}
**Deprecation notice:** Support for this feature has been deprecated.
{% endhint %}

To enable [external masking](https://documentation.immuta.com/2024.2/secure-your-data/authoring-policies-in-secure/data-policies/reference-guides/data-policies#external-masking),

1. Navigate to the **App Settings** page and click **External Masking** in the left sidebar.
2. Click **Add Configuration** and specify an external endpoint in the **External URI** field.
3. Click **Configure**, and then add at least one tag by selecting from the **Search for tags** dropdown menu. *Note: Tag hierarchies are supported, so tagging a column as `Sensitive.Customer` would drive the policy if external masking were configured with the tag `Sensitive`.*
4. **Select Authentication Method** and then complete the authentication fields (when applicable).
5. Click **Test Connection** and then **Save**.

## Add a Workspace

1. Select **Add Workspace**.
2. Use the dropdown menu to select the **Workspace Type** and refer to the section below.

### **Databricks**

{% hint style="info" %}
**Databricks cluster configuration**

Before creating a workspace, the cluster must send its configuration to Immuta; to do this, run a simple query on the cluster (e.g., `show tables`). Otherwise, users will see an error when they attempt to create a workspace.
{% endhint %}

{% hint style="info" %}
**Databricks API Token expiration**

The **Databricks API Token** used for project workspace access must be **non-expiring**. Using a token that expires risks losing access to projects that are created using that configuration.
{% endhint %}

Use the dropdown menu to select the **Schema** and refer to the corresponding tab below.

{% tabs %}
{% tab title="s3a" %}
{% hint style="info" %}
**Required AWS S3 Permissions**

When configuring a project workspace using Databricks with S3, the following permissions must be applied to `arn:aws:s3:::immuta-workspace-bucket/workspace/base/path` and `arn:aws:s3:::immuta-workspace-bucket/workspace/base/path/*`. *Note: Both values are found on the App Settings page; `immuta-workspace-bucket` comes from the **S3 Bucket** field and `workspace/base/path` from the **Workspace Remote Path** field*:

* s3:Get\*
* s3:Delete\*
* s3:Put\*
* s3:AbortMultipartUpload

Additionally, these permissions must be applied to `arn:aws:s3:::immuta-workspace-bucket`. *Note: `immuta-workspace-bucket` is the value of the **S3 Bucket** field on the App Settings page*:

* s3:ListBucket
* s3:ListBucketMultipartUploads
* s3:GetBucketLocation
  {% endhint %}
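
The permission lists above can be combined into a single IAM policy attached to the AWS identity the Databricks cluster uses. This is a sketch, not an official Immuta-provided policy; replace `immuta-workspace-bucket` and `workspace/base/path` with your own **S3 Bucket** and **Workspace Remote Path** values:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ImmutaWorkspaceObjectAccess",
      "Effect": "Allow",
      "Action": ["s3:Get*", "s3:Delete*", "s3:Put*", "s3:AbortMultipartUpload"],
      "Resource": [
        "arn:aws:s3:::immuta-workspace-bucket/workspace/base/path",
        "arn:aws:s3:::immuta-workspace-bucket/workspace/base/path/*"
      ]
    },
    {
      "Sid": "ImmutaWorkspaceBucketAccess",
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:ListBucketMultipartUploads", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::immuta-workspace-bucket"
    }
  ]
}
```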

1. Enter the **Name**.
2. Click **Add Workspace**.
3. Enter the **Hostname**.
4. Opt to enter the **Workspace ID** (required for Azure Databricks).
5. Enter the **Databricks API Token**.
6. Use the dropdown menu to select the **AWS Region**.
7. Enter the **S3 Bucket**.
8. Opt to enter the **S3 Bucket Prefix**.
9. Click **Test Workspace Bucket**.
10. Once the credentials are successfully tested, click **Save**.
    {% endtab %}

{% tab title="abfss" %}

1. Enter the **Name**.
2. Click **Add Workspace**.
3. Enter the **Hostname**, **Workspace ID**, **Account Name**, **Databricks API Token**, and **Storage Container**.
4. Enter the **Workspace Base Directory**.
5. Click **Test Workspace Directory**.
6. Once the credentials are successfully tested, click **Save**.
   {% endtab %}

{% tab title="gs" %}

1. Enter the **Name**.
2. Click **Add Workspace**.
3. Enter the **Hostname**, **Workspace ID**, **Account Name**, and **Databricks API Token**.
4. Use the dropdown menu to select the **Google Cloud Region**.
5. Enter the **GCS Bucket**.
6. Opt to enter the **GCS Object Prefix**.
7. Click **Test Workspace Directory**.
8. Once the credentials are successfully tested, click **Save**.
   {% endtab %}
   {% endtabs %}

## Add an Integration

1. Select **Add Integration**.
2. Use the dropdown menu to select the **Integration Type**.
   * To enable Azure Synapse Analytics, see the [Azure Synapse Analytics configuration page](https://documentation.immuta.com/2024.2/data-and-integrations/azure-synapse-analytics/configure-azure-synapse-analytics-integration).
   * To enable Databricks Spark, see the [Simplified Databricks configuration page](https://documentation.immuta.com/2024.2/data-and-integrations/databricks-spark/how-to-guides/configuration/simplified).
   * To enable Databricks Unity Catalog, see the [Getting started with the Databricks Unity Catalog integration page](https://documentation.immuta.com/2024.2/data-and-integrations/databricks-unity-catalog/how-to-guides/configure).
   * To enable Redshift, see the [Redshift configuration page](https://documentation.immuta.com/2024.2/data-and-integrations/redshift/how-to-guides/redshift).
   * To enable Snowflake, see the [Snowflake configuration page](https://documentation.immuta.com/2024.2/data-and-integrations/snowflake/how-to-guides/enterprise).
   * To enable Starburst, see the [Starburst configuration page](https://documentation.immuta.com/2024.2/data-and-integrations/starburst-trino/how-to-guides/configure).

### Global Integration Settings

#### Snowflake Audit Sync Schedule

**Requirements**: See the requirements for Snowflake audit on the [Snowflake query audit logs page](https://documentation.immuta.com/2024.2/detect-your-activity/audit/reference-guides/snowflake#requirements).

To configure the [audit ingest frequency for Snowflake](https://documentation.immuta.com/2024.2/detect-your-activity/audit/reference-guides/snowflake#audit-frequency),

1. Click the **App Settings** icon in the navigation menu.
2. Navigate to the **Global Integration Settings** section, and then to the **Snowflake Audit Sync Schedule** subsection.
3. Enter an integer in the textbox to set the audit sync interval in hours. For example, entering 12 runs the sync once every 12 hours (twice a day).

#### Databricks Unity Catalog Configuration

**Requirements**: See the requirements for Databricks Unity Catalog audit on the [Databricks Unity Catalog query audit logs page](https://documentation.immuta.com/2024.2/detect-your-activity/audit/reference-guides/databricks-uc#requirement).

To configure the [audit ingest frequency for Databricks Unity Catalog](https://documentation.immuta.com/2024.2/detect-your-activity/audit/reference-guides/databricks-uc#audit-frequency),

1. Click the **App Settings** icon in the navigation menu.
2. Navigate to the **Global Integration Settings** section, and then to the **Databricks Unity Catalog Configuration** subsection.
3. Enter an integer in the textbox to set the audit sync interval in hours. For example, entering 12 runs the sync once every 12 hours (twice a day).

## Manage Data Providers

You can enable or disable the types of data sources users can create in this section. Some of these types will require you to upload an ODBC driver before they can be enabled. The list of currently supported drivers is on the [ODBC Drivers page](https://documentation.immuta.com/2024.2/application-settings/how-to-guides/odbc-drivers).

To enable a data provider,

1. Click the **menu** button in the upper right corner of the provider icon you want to enable.
2. Select **Enable** from the dropdown.

If an ODBC driver needs to be uploaded,

1. Click the **menu** button in the upper right corner of the provider icon, and then select **Upload Driver** from the dropdown.
2. Click in the **Add Files to Upload** box and upload your file.
3. Click **Close**.
4. Click the **menu** button again, and then select **Enable** from the dropdown.

## Enable Email

Application Admins can configure the SMTP server that Immuta will use to send emails to users. If this server is not configured, users will only be able to view notifications in the Immuta console.

To configure the SMTP server,

1. Complete the **Host** and **Port** fields for your SMTP server.
2. Enter the username and password Immuta will use to log in to the server in the **User** and **Password** fields, respectively.
3. Enter the email address that will send the emails in the **From Email** field.
4. Opt to **Enable TLS** by clicking this checkbox, and then enter a test email address in the **Test Email Address** field.
5. Finally, click **Send Test Email**.

Once SMTP is enabled in Immuta, any Immuta user can request access to notifications as emails, which will vary depending on the permissions that user has. For example, to receive email notifications about group membership changes, the receiving user will need the `GOVERNANCE` permission. Once a user requests access to receive emails, Immuta will compile notifications and distribute these compilations via email at 8-hour intervals.

## Initialize Kerberos

To configure Immuta to protect data in a kerberized Hadoop cluster,

1. Upload your **Kerberos Configuration File**, and then modify the Kerberos configuration as needed in the window pictured below.

   <figure><img src="https://1279220422-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F0QkDA8tcaDNby4bsGLcg%2Fuploads%2Fgit-blob-78b26f678d8fda533ea676faf6570bd9cf5f19fb%2Fkdc-config.png?alt=media" alt=""><figcaption></figcaption></figure>
2. Upload your **Keytab File**.
3. Enter the principal Immuta will use to authenticate with your KDC in the **Username** field. *Note: This must match a principal in the Keytab file.*
4. Adjust how often (in milliseconds) Immuta needs to re-authenticate with the KDC in the **Ticket Refresh Interval** field.
5. Click **Test Kerberos Initialization**.

## Generate System API Key

1. Click the **Generate Key** button.
2. Save this API key in a secure location.

## Configure HDFS Cache Settings

To improve performance when using Immuta to secure Spark or HDFS access, a user's access level is cached momentarily. These cache settings are configurable, but decreasing the Time to Live (TTL) on any cache too low will negatively impact performance.

To configure cache settings, enter the time in milliseconds in each of the **Cache TTL** fields.

## Set Public URLs

You can set the URL users will use to access the Immuta application. *Note: Proxy configuration must be handled outside Immuta.*

1. Complete the **Public Immuta URL** field.
2. Click **Save** and confirm your changes.

## Enable Sensitive Data Discovery

To enable Sensitive Data Discovery and configure its settings, see the [Sensitive Data Discovery page](https://documentation.immuta.com/2024.2/discover-your-data/data-discovery/how-to-guides/enable-sdd).

## Audit Settings

### Enable Exclude Query Text

By default, query text is included in query audit events from Snowflake, Databricks, and Starburst (Trino).

When query text is excluded from audit events, Immuta will retain query event metadata such as the columns and tables accessed. However, the query text used to make the query will not be included in the event. This setting is a global control for all configured integrations.

To exclude query text from audit events,

1. Scroll to the **Audit** section.
2. Check the box to **Exclude query text from audit events**.
3. Click **Save**.

## Allow Policy Exemptions

Click the **Allow Policy Exemptions** checkbox to allow users to specify who can bypass all policies on a data source.

## Manage the Default Subscription Policy

{% hint style="warning" %}
**Deprecation notice**

The ability to configure the behavior of the default subscription policy has been deprecated. Once this configuration setting is removed from the app settings page, Immuta will not apply a subscription policy to registered data sources unless an existing global policy applies to them. To set an "Allow individually selected users" subscription policy on all data sources, [create a global subscription policy](https://documentation.immuta.com/2024.2/secure-your-data/authoring-policies-in-secure/section-contents/how-to-guides/subscription-policy-tutorial) with that condition that applies to all data sources or apply a [local subscription policy](https://documentation.immuta.com/2024.2/secure-your-data/authoring-policies-in-secure/section-contents/how-to-guides/subscription-policy-tutorial) to individual data sources.
{% endhint %}

1. Click the **App Settings** icon in the navigation menu.
2. Scroll to the **Default Subscription Policy** section.
3. Select the **radio button** to define the behavior of subscription policies when new data sources are registered in Immuta:
   * **None**: When this option is selected, Immuta will not apply any subscription policies to data sources when they are registered. Changing the default subscription policy to none will only apply to newly created data sources. Existing data sources will retain their existing subscription policies.
   * **Allow individually selected users**: When a data source is created, Immuta will apply a subscription policy to it that requires users to be individually selected to access the underlying table. In most cases, users who were able to query the table before the data source was created will no longer be able to query the table in the remote data platform until they are subscribed to the data source in Immuta.
4. Click **Save** and confirm your changes.

## Default Subscription Merge Options

Immuta merges multiple Global Subscription policies that apply to a single data source; by default, users must meet all the conditions outlined in each policy to get access (i.e., the conditions of the policies are combined with `AND`). To change the default behavior to allow users to meet the condition of at least one policy that applies (i.e., the conditions of the policies are combined with `OR`),

1. Click the **Default Subscription Merge Options** text in the left pane.
2. Select the **Default "allow shared policy responsibility" to be checked** checkbox.
3. Click **Save**.

*Note: Even with this setting enabled, Governors can opt to have their Global Subscription policies combined with `AND` during policy creation.*

## Configure Governor and Admin Settings

These options allow you to restrict the power individual users with the GOVERNANCE and USER\_ADMIN permissions have in Immuta. Click the **checkboxes** to enable or disable these options.

## Create Custom Permissions

You can create custom permissions that can then be assigned to users and leveraged when building subscription policies. *Note: You cannot configure actions users can take within the console when creating a custom permission, nor can the actions associated with existing permissions in Immuta be altered.*

To add a custom permission, click the **Add Permission** button, and then name the permission in the **Enter Permission** field.

## Create Custom Data Source Access Requests

To create a custom questionnaire that all users must complete when requesting access to a data source, fill in the following fields:

1. Opt to make the questionnaire required.
2. **Key**: Any unique value that identifies the question.
3. **Header**: The text that will display on reports.
4. **Label**: The text that will display in the questionnaire for the user. They will be prompted to type the answer in a text box.

## Create Custom Login Message

To create a custom message for the login page of Immuta, enter text in the **Enter Login Message** box. *Note: The message can be formatted in markdown.*

Opt to adjust the **Message Text Color** and **Message Background Color** by clicking in these dropdown boxes.

## Prevent Automatic Table Statistics

{% hint style="warning" %}
**Without fingerprints some policies will be unavailable.**

Disabling the collection of statistics will cause these policies to be unavailable:

* Masking with format-preserving masking
* Masking with k-anonymization
* Masking using randomized response
  {% endhint %}

To disable the automatic collection of statistics with a particular tag,

1. Use the **Select Tags** dropdown to select the tag(s).
2. Click **Save**.

## K-anonymization

{% hint style="info" %}
**Query engine and legacy fingerprint required**

K-anonymization policies require the query engine and legacy fingerprint service, which are disabled by default. If you need to use k-anonymization policies, work with your Immuta representative to enable the query engine and legacy fingerprint service when you deploy Immuta.
{% endhint %}

When a k-anonymization policy is applied to a data source, the columns targeted by the policy are queried under a fingerprinting process that generates rules enforcing k-anonymity. The results of this query, which may contain data that is subject to regulatory constraints such as GDPR or HIPAA, are stored in Immuta's metadata database.

The location of the metadata database depends on your deployment:

* Self-managed Immuta deployment: The metadata database is located in the server where you have your external metadata database deployed.
* SaaS Immuta deployment: The metadata database is located in the AWS global segment you have chosen to deploy Immuta.

To ensure this process does not violate your organization's data localization regulations, you must first activate this masking policy type before using it in your Immuta tenant.

1. Click **Other Settings** in the left panel and scroll to the **K-Anonymization** section.
2. Select the **Allow users to create masking policies using K-Anonymization** checkbox to enable k-anonymization policies for your organization.
3. Click **Save** and confirm your changes.

## Advanced Settings

### Preview Features

If you enable any Preview features, please provide feedback on how you would like these features to evolve.

#### Policy Adjustments

1. Click **Advanced Settings** in the left panel, and scroll to the **Preview Features** section.
2. Check the **Enable Policy Adjustments** checkbox.
3. Click **Save**.

#### Complex Data Types

1. Click **Advanced Settings** in the left panel, and scroll to the **Preview Features** section.
2. Check the **Allow Complex Data Types** checkbox.
3. Click **Save**.

#### Enhanced Subscription Policy Variables

For instructions on enabling this feature, navigate to the [Global Subscription Policies Advanced DSL Tutorial](https://documentation.immuta.com/2024.2/secure-your-data/authoring-policies-in-secure/section-contents/how-to-guides/advanced-dsl-policies).

### Advanced Configuration

Advanced configuration options provided by the Immuta Support team can be added in this section. The configuration must adhere to the YAML syntax.
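
All of these options share the single **Advanced Configuration** field, so multiple settings are combined into one YAML document with their top-level keys merged. A sketch combining several of the settings documented below (assuming the field accepts one merged document, as the repeated "add the following text" steps imply; confirm the final structure with Immuta Support):

```yaml
webhookIntegrationResponseTimeout: 30
plugins:
  auditService:
    ingestionJob:
      expirationInMinutes: 300
featureFlags:
  nativeSddTrino: true
```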

#### Update the K-Anonymity Cardinality Cutoff

To increase the default cardinality cutoff for columns compatible with k-anonymity,

1. Expand the **Advanced Settings** section and add the following text to the **Advanced Configuration**:

   ```yaml
   plugins:
     postgresHandler:
       maxKAnonCardinality: 10000000
     snowflakeHandler:
       maxKAnonCardinality: 10000000
   ```
2. Click **Save**.
3. To regenerate the data source's fingerprint, navigate to that data source's page.
4. Click **Status** in the upper right corner.
5. Click **Re-run** in the **Fingerprint** section of the dropdown menu.

*Note: Re-running the fingerprint is only necessary for existing data sources. New data sources will be generated using the new maximum cardinality.*

#### Update the Time to Webhook Request Timeouts

1. Expand the **Advanced Settings** section and add the following text to the **Advanced Configuration** to specify the number of seconds before webhook requests time out. *For example, use `30` for 30 seconds; setting it to `0` disables the timeout.*

   ```yaml
   webhookIntegrationResponseTimeout: 30
   ```
2. Click **Save**.

#### Update the Audit Ingestion Expiration

1. Expand the **Advanced Settings** section and add the following text to the **Advanced Configuration** to specify the number of minutes before an audit job expires. *For example, use `300` for 300 minutes.*

   ```yaml
   plugins:
     auditService:
       ingestionJob:
         expirationInMinutes: 300
   ```
2. Click **Save**.

#### Enable Discover Features

**Enable Sensitive Data Discovery for Starburst (Trino)**

1. Expand the **Advanced Settings** section and add the following text to the **Advanced Configuration**:

   ```yaml
   featureFlags:
     nativeSddTrino: true
   ```
2. Click **Save**.

**Enable Frameworks API, Data Security Framework, and Risk Assessment Framework**

1. Expand the **Advanced Settings** section and add the following text to the **Advanced Configuration**:

   ```yaml
   featureFlags:
     frameworks: true
   ```
2. Click **Save**.

**Enable Additional Compliance Frameworks, Frameworks UI, and the Data Inventory Dashboard**

**Requirement**: The `frameworks` feature flag must be enabled and the configuration saved before the `dataInventoryDashboard` can be added to the advanced configuration field and enabled.

1. Expand the **Advanced Settings** section and add the following text to the **Advanced Configuration**:

   ```yaml
   featureFlags:
     dataInventoryDashboard: true
   ```
2. Click **Save**.
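
Because the `frameworks` flag must already be enabled and saved, the **Advanced Configuration** ends up containing both flags under the same `featureFlags` key. A sketch of the resulting combined state (assuming both flags live in the one shared YAML document):

```yaml
featureFlags:
  frameworks: true
  dataInventoryDashboard: true
```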
