# App Settings

## Navigate to the App Settings Page

1. Click the <i class="fa-gear">:gear:</i> **App Settings** icon in the navigation menu.
2. Click the link in the **App Settings** panel to navigate to that section.

## Use Existing Identity Access Manager

See the identity manager pages for a tutorial to connect a [Microsoft Entra ID](https://documentation.immuta.com/latest/configuration/people/section-contents/how-to-guides/saml/microsoft-entra-id), [Okta](https://documentation.immuta.com/latest/configuration/people/section-contents/how-to-guides/okta-ldap), or [OneLogin](https://documentation.immuta.com/latest/configuration/people/section-contents/how-to-guides/openid-connect/onelogin) identity manager.

To configure Immuta to use any other existing IAM,

1. Click the **Add IAM** button.
2. Complete the **Display Name** field and select your IAM type from the **Identity Provider Type** dropdown: **LDAP/Active Directory**, **SAML**, or **OpenID**.

{% tabs %}
{% tab title="Add LDAP or Active Directory" %}
See the [Okta LDAP interface configuration guide](https://documentation.immuta.com/latest/people/section-contents/how-to-guides/okta-ldap#2-set-up-authentication-with-the-ldap-interface-in-immuta).
{% endtab %}

{% tab title="Add SAML" %}
See the [SAML protocol configuration guide](https://documentation.immuta.com/latest/configuration/people/section-contents/how-to-guides/saml/enable-saml).
{% endtab %}

{% tab title="Add OpenID" %}
See the [OpenID Connect protocol configuration guide](https://documentation.immuta.com/latest/configuration/people/section-contents/how-to-guides/openid-connect/openid-connect-protocol).
{% endtab %}
{% endtabs %}

## Immuta Accounts

To set the default permissions granted to users when they log in to Immuta, click the **Default Permissions** dropdown menu, and then select permissions from the list.

## Link External Catalogs

See the [External Catalogs page](https://documentation.immuta.com/latest/configuration/manage-data-metadata/catalogs/configure).

## Add a Workspace

1. Select **Add Workspace**.
2. Use the dropdown menu to select the **Workspace Type** and refer to the section below.

### Databricks Spark

{% hint style="info" %}
**Databricks cluster configuration**

Before creating a workspace, the cluster must send its configuration to Immuta. To trigger this, run a simple query on the cluster (e.g., `show tables`); otherwise, users will receive an error when they attempt to create a workspace.
{% endhint %}

{% hint style="info" %}
**Databricks API Token expiration**

The **Databricks API Token** used for workspace access must be **non-expiring**. Using a token that expires risks losing access to projects that are created using that configuration.
{% endhint %}

Use the dropdown menu to select the **Schema** and refer to the corresponding tab below.

{% tabs %}
{% tab title="s3a" %}
{% hint style="info" %}
**Required AWS S3 Permissions**

When configuring a workspace using Databricks with S3, the following permissions must be applied to `arn:aws:s3:::immuta-workspace-bucket/workspace/base/path/*` and `arn:aws:s3:::immuta-workspace-bucket/workspace/base/path`, where `immuta-workspace-bucket` is the value of the **S3 Bucket** field and `workspace/base/path` is the value of the **Workspace Remote Path** field on the App Settings page:

* s3:Get\*
* s3:Delete\*
* s3:Put\*
* s3:AbortMultipartUpload

Additionally, these permissions must be applied to `arn:aws:s3:::immuta-workspace-bucket`, where `immuta-workspace-bucket` is the value of the **S3 Bucket** field:

* s3:ListBucket
* s3:ListBucketMultipartUploads
* s3:GetBucketLocation
  {% endhint %}
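
As a sketch only (not taken from Immuta documentation), the permission lists above correspond to an IAM policy along these lines, shown in CloudFormation-style YAML. The `Sid` values are illustrative, and `immuta-workspace-bucket` and `workspace/base/path` are placeholders you must replace with your own **S3 Bucket** and **Workspace Remote Path** values:

```yaml
PolicyDocument:
  Version: "2012-10-17"
  Statement:
    # Object-level access under the workspace remote path
    - Sid: ImmutaWorkspaceObjectAccess
      Effect: Allow
      Action:
        - "s3:Get*"
        - "s3:Delete*"
        - "s3:Put*"
        - "s3:AbortMultipartUpload"
      Resource:
        - "arn:aws:s3:::immuta-workspace-bucket/workspace/base/path"
        - "arn:aws:s3:::immuta-workspace-bucket/workspace/base/path/*"
    # Bucket-level access for listing and location lookups
    - Sid: ImmutaWorkspaceBucketAccess
      Effect: Allow
      Action:
        - "s3:ListBucket"
        - "s3:ListBucketMultipartUploads"
        - "s3:GetBucketLocation"
      Resource:
        - "arn:aws:s3:::immuta-workspace-bucket"
```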

1. Enter the **Name**.
2. Click **Add Workspace**.
3. Enter the **Hostname**.
4. Opt to enter the **Workspace ID** (required with Azure Databricks).
5. Enter the **Databricks API Token**.
6. Use the dropdown menu to select the **AWS Region**.
7. Enter the **S3 Bucket**.
8. Opt to enter the **S3 Bucket Prefix**.
9. Click **Test Workspace Bucket**.
10. Once the credentials are successfully tested, click **Save**.
    {% endtab %}

{% tab title="abfss" %}

1. Enter the **Name**.
2. Click **Add Workspace**.
3. Enter the **Hostname**, **Workspace ID**, **Account Name**, **Databricks API Token**, and **Storage Container**.
4. Enter the **Workspace Base Directory**.
5. Click **Test Workspace Directory**.
6. Once the credentials are successfully tested, click **Save**.
   {% endtab %}

{% tab title="gs" %}

1. Enter the **Name**.
2. Click **Add Workspace**.
3. Enter the **Hostname**, **Workspace ID**, **Account Name**, and **Databricks API Token**.
4. Use the dropdown menu to select the **Google Cloud Region**.
5. Enter the **GCS Bucket**.
6. Opt to enter the **GCS Object Prefix**.
7. Click **Test Workspace Directory**.
8. Once the credentials are successfully tested, click **Save**.
   {% endtab %}
   {% endtabs %}

## Add an Integration

1. Select **Add Integration**.
2. Use the dropdown menu to select the **Integration Type**.
   * To enable Azure Synapse Analytics, see the [Azure Synapse Analytics configuration page](https://documentation.immuta.com/latest/configuration/integrations/azure-synapse-analytics/configure-azure-synapse-analytics-integration).
   * To enable Databricks Spark, see the [Configure a Databricks Spark integration page](https://documentation.immuta.com/latest/configuration/integrations/databricks/databricks-spark/how-to-guides/configuration).
   * To enable Databricks Unity Catalog, see the [Getting started with the Databricks Unity Catalog integration page](https://documentation.immuta.com/latest/configuration/integrations/databricks/databricks-unity-catalog/how-to-guides/configure).
   * To enable Redshift, see the [Redshift configuration page](https://documentation.immuta.com/latest/configuration/integrations/redshift/how-to-guides/redshift).
   * To enable Snowflake, see the [Snowflake configuration page](https://documentation.immuta.com/latest/configuration/integrations/snowflake/how-to-guides/enterprise).
   * To enable Starburst, see the [Starburst configuration page](https://documentation.immuta.com/latest/configuration/integrations/starburst-trino/how-to-guides/configure).

### Global Integration Settings

#### Snowflake Audit Sync Schedule

**Requirements**: See the requirements for Snowflake audit on the [Snowflake query audit logs page](https://documentation.immuta.com/latest/governance/detect-your-activity/audit/reference-guides/query-audit-logs/snowflake#requirements).

To configure the [audit ingest frequency for Snowflake](https://documentation.immuta.com/latest/governance/detect-your-activity/audit/reference-guides/query-audit-logs/snowflake#audit-frequency),

1. Click the <i class="fa-gear">:gear:</i> **App Settings** icon in the navigation menu.
2. Navigate to **Global Integration Settings** > **Snowflake Audit Sync Schedule**.
3. Enter an integer in the textbox to set the sync interval in hours. For example, entering 12 runs the audit sync once every 12 hours (twice a day).

#### Databricks Unity Catalog Configuration

**Audit**

**Requirements**: See the requirements for Databricks Unity Catalog audit on the [Databricks Unity Catalog query audit logs page](https://documentation.immuta.com/latest/governance/detect-your-activity/audit/reference-guides/query-audit-logs/databricks-uc#requirement).

To configure the [audit ingest frequency for Databricks Unity Catalog](https://documentation.immuta.com/latest/governance/detect-your-activity/audit/reference-guides/query-audit-logs/databricks-uc#audit-frequency),

1. Click the <i class="fa-gear">:gear:</i> **App Settings** icon in the navigation menu.
2. Navigate to **Global Integration Settings** > **Databricks Unity Catalog Configuration**.
3. Enter an integer in the textbox to set the sync interval in hours. For example, entering 12 runs the audit sync once every 12 hours (twice a day).

**Additional privileges required for access**

By default, Immuta revokes the `USE CATALOG` and `USE SCHEMA` privileges in Unity Catalog from Immuta users who do not have access to any of the resources within that catalog or schema. This includes any `USE CATALOG` or `USE SCHEMA` privileges that were granted outside of Immuta.

To disable this setting,

1. Click the <i class="fa-gear">:gear:</i> **App Settings** icon in the navigation menu.
2. Navigate to **Global Integration Settings** > **Databricks Unity Catalog Configuration**.
3. Click the **Revoke additional privileges required for access** checkbox to disable the setting.
4. Click **Save**.

See the [Databricks Unity Catalog reference guide](https://documentation.immuta.com/latest/integrations/databricks/databricks-unity-catalog/unity-catalog-overview#user-permissions-immuta-revokes) for details about this setting.

## Manage Data Providers

You can enable or disable the types of data sources users can create in this section. Some of these types require you to upload a driver before they can be enabled. The list of currently supported drivers is on the [ODBC Drivers page](https://documentation.immuta.com/latest/configuration/application-settings/how-to-guides/odbc-drivers).

To enable a data provider,

1. Click the **menu** button in the upper right corner of the provider icon you want to enable.
2. Select **Enable** from the dropdown.

If a driver needs to be uploaded,

1. Click the **menu** button in the upper right corner of the provider icon, and then select **Upload Driver** from the dropdown.
2. Click in the **Add Files to Upload** box and upload your file.
3. Click **Close**.
4. Click the **menu** button again, and then select **Enable** from the dropdown.

## Enable Email

Application Admins can configure the SMTP server that Immuta will use to send emails to users. If this server is not configured, users will only be able to view notifications in the Immuta console.

To configure the SMTP server,

1. Complete the **Host** and **Port** fields for your SMTP server.
2. Enter the username and password Immuta will use to log in to the server in the **User** and **Password** fields, respectively.
3. Enter the email address that will send the emails in the **From Email** field.
4. Opt to **Enable TLS** by clicking the checkbox, and then enter a test email address in the **Test Email Address** field.
5. Finally, click **Send Test Email**.

Once SMTP is enabled in Immuta, any Immuta user can request to receive notifications as emails; which notifications they receive depends on that user's permissions. For example, to receive email notifications about group membership changes, the receiving user needs the `GOVERNANCE` permission. Once a user requests email notifications, Immuta compiles notifications and distributes them via email at 8-hour intervals.

## Initialize Kerberos

To configure Immuta to protect data in a kerberized Hadoop cluster,

1. Upload your **Kerberos Configuration File**. You can then modify the Kerberos configuration in the window pictured below.

   <figure><img src="https://content.gitbook.com/content/X9oF3f8eD0LqDsHubfGB/blobs/cuqd5k3wEfMIER5ktFDC/kdc-config.png" alt=""><figcaption></figcaption></figure>
2. Upload your **Keytab File**.
3. Enter the principal Immuta will use to authenticate with your KDC in the **Username** field. *Note: This must match a principal in the Keytab file.*
4. Adjust how often (in milliseconds) Immuta needs to re-authenticate with the KDC in the **Ticket Refresh Interval** field.
5. Click **Test Kerberos Initialization**.

## Generate System API Key

1. Click the **Generate Key** button.
2. Save this API key in a secure location.

## Configure HDFS Cache Settings

To improve performance when using Immuta to secure Spark or HDFS access, a user's access level is cached briefly. These cache settings are configurable, but setting the Time to Live (TTL) on any cache too low will negatively impact performance.

To configure cache settings, enter the time in milliseconds in each of the **Cache TTL** fields.

## Set Public URLs

You can set the URL users will use to access the Immuta application. *Note: Proxy configuration must be handled outside Immuta.*

1. Complete the **Public Immuta URL** field.
2. Click **Save** and confirm your changes.

## Audit Settings

### Enable Exclude Query Text

By default, query text is included in query audit events from Snowflake, Databricks, and Starburst (Trino).

When query text is excluded from audit events, Immuta will retain query event metadata such as the columns and tables accessed. However, the query text used to make the query will not be included in the event. This setting is a global control for all configured integrations.

To exclude query text from audit events,

1. Scroll to the **Audit** section.
2. Check the box to **Exclude query text from audit events**.
3. Click **Save**.

## Configure Governor and Admin Settings

These options allow you to restrict the power individual users with the GOVERNANCE and USER\_ADMIN permissions have in Immuta. Click the **checkboxes** to enable or disable these options.

## Create Custom Permissions

You can create custom permissions that can then be assigned to users and leveraged when building subscription policies. *Note: You cannot configure actions users can take within the console when creating a custom permission, nor can the actions associated with existing permissions in Immuta be altered.*

To add a custom permission, click the **Add Permission** button, and then name the permission in the **Enter Permission** field.

## Create Custom Data Source Access Requests

To create a custom questionnaire that all users must complete when requesting access to a data source, fill in the following fields:

1. Opt for the questionnaire to be required.
2. **Key**: Any unique value that identifies the question.
3. **Header**: The text that will display on reports.
4. **Label**: The text that will display in the questionnaire for the user. They will be prompted to type the answer in a text box.

## Create Custom Login Message

To create a custom message for the login page of Immuta, enter text in the **Enter Login Message** box. *Note: The message can be formatted in markdown.*

Opt to adjust the **Message Text Color** and **Message Background Color** by clicking in these dropdown boxes.

## Prevent Automatic Table Statistics

{% hint style="warning" %}
**Without fingerprints, some policies will be unavailable.**

These policies will be unavailable until a data owner manually generates a fingerprint for a Snowflake data source:

* Masking with format preserving masking
* Masking using randomized response
  {% endhint %}

To disable the automatic collection of statistics with a particular tag,

1. Use the **Select Tags** dropdown to select the tag(s).
2. Click **Save**.

## Randomized response

{% hint style="info" %}
**Support limitation**: This policy is only supported in Snowflake integrations.
{% endhint %}

When a randomized response policy is applied to a data source, the columns targeted by the policy are queried under a fingerprinting process. To enforce the policy, Immuta generates and stores predicates and a list of allowed replacement values that may contain data that is subject to regulatory constraints (such as GDPR or HIPAA) in Immuta's metadata database.

The location of the metadata database depends on your deployment:

* Self-managed Immuta deployment: The metadata database is located in the server where you have your external metadata database deployed.
* SaaS Immuta deployment: The metadata database is located in the AWS global segment in which you have chosen to deploy Immuta.

To ensure this process does not violate your organization's data localization regulations, you must activate this masking policy type before you can use it in your Immuta tenant.

1. Click **Other Settings** in the left panel and scroll to the **Randomized Response** section.
2. Select the **Allow users to create masking policies using Randomized Response** checkbox to enable use of these policies for your organization.
3. Click **Save** and confirm your changes.

## Advanced Settings

### Preview Features

If you enable any Preview features, provide feedback on how you would like these features to evolve.

#### Complex Data Types

1. Click **Advanced Settings** in the left panel, and scroll to the **Preview Features** section.
2. Check the **Allow Complex Data Types** checkbox.
3. Click **Save**.

### Advanced Configuration

Advanced configuration options provided by the Immuta Support team can be added in this section. The configuration must adhere to the YAML syntax.

#### Enable the `@columnReference` function

1. Expand the **Advanced Settings** section and add the following text to the **Advanced Configuration**:

   ```yaml
   featureFlags:
     antlrFreeParsing: true
   ```
2. Click **Save**.

#### Update the Webhook Request Timeout

1. Expand the **Advanced Settings** section and add the following text to the **Advanced Configuration** to specify the number of seconds before webhook requests time out. *For example, use `30` for 30 seconds. Setting it to `0` results in no timeout.*

   ```yaml
   webhookIntegrationResponseTimeout: 30
   ```
2. Click **Save**.

#### Update the Audit Ingestion Expiration

1. Expand the **Advanced Settings** section and add the following text to the **Advanced Configuration** to specify the number of minutes before an audit job expires. *For example, use `300` for 300 minutes.*

   ```yaml
   plugins:
     auditService:
       ingestionJob:
         expirationInMinutes: 300
   ```
2. Click **Save**.
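
The advanced settings above are fragments of one YAML document. Assuming all options are entered in the single **Advanced Configuration** textbox, more than one of them can be combined by merging the top-level keys into one block (standard YAML mapping behavior) rather than pasting the snippets one after another. A combined sketch of the three examples above:

```yaml
# Enable the @columnReference function
featureFlags:
  antlrFreeParsing: true
# Webhook request timeout, in seconds
webhookIntegrationResponseTimeout: 30
# Audit ingestion job expiration, in minutes
plugins:
  auditService:
    ingestionJob:
      expirationInMinutes: 300
```

If you are unsure how a particular option should be combined, confirm the exact structure with the Immuta Support team, since these options are provided by them.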
