The Immuta Web Service uses ODBC drivers to communicate with back-end data platforms. Immuta deployments include only the few ODBC drivers that Immuta is able to redistribute. All other drivers must be obtained and deployed by a System Administrator before Data Owners can use the corresponding data source types in Immuta.
You can install ODBC drivers on the App Settings page.
This driver is included with Immuta.
HiveODBC-2.6.9.1009-1.x86_64.rpm
ImpalaODBC-2.6.8.1008-1.x86_64.rpm
msodbcsql17-17.10.2.1-1.x86_64.rpm
SimbaODBCDriverforGoogleBigQuery_2.5.0.1001-Linux.tar.gz
This driver is included with Immuta.
oracle-instantclient19.5-odbc-19.5.0.0.0-1.x86_64.rpm
This driver is included with Immuta.
You may purchase this ODBC driver from Magnitude.
This driver is included with Immuta.
The SAP HANA ODBC driver (odbc-2019.01.19.tar.gz) is available as part of your SAP HANA installation. Upload a tar.gz file that contains the ODBC driver for Linux x86_64.
This driver is included with Immuta.
Navigate to the Licenses tab on the App Settings page.
Click Add License Key in the top right corner of the page, and then paste the license key provided to you by Immuta in the dialog that appears.
Click Save, and the system will generate a universally unique identifier (UUID) for the license, which will appear at the bottom of the page.
Click delete in the Actions column of the license you would like to delete.
Click Confirm in the dialog that appears.
Note: If you delete a license key, your compliance may be affected and you will need to contact your Immuta Support Professional to receive a new license key.
When creating a data source in Power BI, specify Microsoft Account as the authentication method, if available. This setting allows you to use your enterprise SSO to connect to your compute platform.
After connecting to the compute platform and the tables to use for your data source, select DirectQuery to connect to the data source. This setting is required for Immuta to enforce policies.
After you publish the datasets to the Power BI service, force users to use their personal credentials to connect to the compute platform by following the steps below.
Enable SSO in the tenant admin portal under Settings -> Admin portal -> Integration settings.
Find the option to manage Data source credentials under Settings -> Datasets.
For most connectors you can enable OAuth2 as the authentication method to the compute platform.
Enable the option Report viewers can only access this data source with their own Power BI identities using DirectQuery. This forces end-users to use their personal credentials.
When creating the data source in Tableau, specify the authentication method as Sign in using OAuth. This setting will allow you to use your enterprise SSO to connect to your compute platform.
After connecting to the compute platform, select the tables you will use for your data source. Then, select Live connection. This setting is required for Immuta to enforce policies.
To share your dashboard to your organization, publish your data sources. During this process, set the authentication method to Prompt user. This option ensures that dashboard viewers will see the data according to their personal policies.
Immuta generates data encryption keys (on a user-defined rollover schedule) to encrypt and decrypt values. This page provides an overview of encryption key management and outlines its configuration options in Immuta.
Use an external key management service
Immuta recommends using an external Key Management Service (KMS) to encrypt or decrypt data keys as needed.
Immuta encrypts values with data encryption keys, either those that are system-generated or managed using an external key management service (KMS). Immuta recommends a KMS to encrypt or decrypt data keys and supports the AWS Key Management Service. To configure the AWS KMS, complete the steps below.
However, if no KMS is configured, Immuta will generate a data encryption key on a user-defined rollover schedule, using the most recent data key to encrypt new values while preserving old data keys to decrypt old values. To change the default rollover schedule of 1 year, follow these steps.
Before you can configure the AWS KMS, you need to set up your AWS credentials. Immuta cannot encrypt the AWS access/secret keys in the KMS configuration, so we recommend using IAM roles.
Follow AWS documentation to create an IAM policy to attach to your IAM role. An example is provided below.
Example IAM Policy:
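The sketch below is a minimal example, not the original policy from the product documentation: it grants the KMS actions typically needed for generating and decrypting data keys, scoped to a placeholder key ARN. Confirm the exact action list for your deployment with your Immuta representative.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "kms:GenerateDataKey",
            "kms:Encrypt",
            "kms:Decrypt",
            "kms:DescribeKey"
          ],
          "Resource": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
        }
      ]
    }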
Other ways of setting up AWS credentials can be found here.
Choose one of the following options to set up an IAM role:
Attach an IAM role to your AWS EC2 instance. Then, continue to step 2.
If you're running Immuta in Kubernetes (AWS EKS), work with your Immuta Support Professional to set up an IAM role. Then, continue to step 2.
Add credentials in the KMS configuration (not recommended): This option should only be used if Immuta is not running on your AWS infrastructure and you need to leverage a KMS on AWS. For all other scenarios, use one of the options above.
Add the following configuration (with your AWS region and keyId) to the Advanced Configuration section of the App Settings page.
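As a sketch only: the block might look like the following, where region and keyId are the parameters named above and the surrounding kms and type keys are assumptions. Confirm the exact schema with your Immuta representative.

    # Sketch only; the kms and type key names are assumptions
    kms:
      type: aws
      region: us-east-1                                 # your AWS region
      keyId: 1234abcd-12ab-34cd-56ef-1234567890ab       # your KMS key ID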
Immuta cannot encrypt the AWS access/secret keys in the KMS configuration, so we recommend using IAM roles.
This option should only be used if Immuta is not running on your AWS infrastructure (for example, if you are running Immuta on premises and need to leverage a KMS on AWS). For all other scenarios, use one of the two options above.
Before you begin, create a secret access key and an access key that will authenticate to Immuta.
Navigate to the App Settings page and add the following configuration (with your AWS keyId, region, and credentials) to the Advanced Configuration section:
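Again as a sketch only: keyId, region, and the credentials are the values named above, while the kms, type, and credentials key names are assumptions.

    # Sketch only; key names other than keyId and region are assumptions
    kms:
      type: aws
      region: us-east-1
      keyId: 1234abcd-12ab-34cd-56ef-1234567890ab
      credentials:
        accessKeyId: AKIAXXXXXXXXXXXXXXXX          # your access key
        secretAccessKey: your-secret-access-key     # your secret access key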
Click the App Settings icon in the left sidebar and scroll to the Advanced Configuration section.
Paste the following configuration in the text box, adjusting dataKeyRollOverLength (in days) to your desired schedule:
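A minimal sketch, assuming dataKeyRollOverLength is a top-level key in the Advanced Configuration; only the key name is taken from the text above, and 365 corresponds to the default one-year schedule.

    # Roll data encryption keys over every 365 days (the default)
    dataKeyRollOverLength: 365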
The system status tab allows administrators to export a zip file called the Immuta status bundle. This bundle includes information helpful to assess and solve issues within an Immuta tenant by providing a snapshot of Immuta, associated services, and information about the remote source backing any of the selected data sources.
Click the App Settings icon.
Select the System Status tab.
Select the checkboxes for the information you want to export.
Click Generate Status Bundle to download the file.
Immuta can enforce policies on data in your dashboards when your BI tools are connected directly to your compute layer.
This page provides recommendations for configuring the interaction between your database, BI tools, and users.
To ensure that Immuta applies access controls to your dashboards, connect your BI tools directly to the compute layer where Immuta enforces policies without using extracts. Different tools may call this feature different names (such as live connections in Tableau or DirectQuery in Power BI).
Connecting your tools directly to the compute layer without using extracts will not impact performance and provides a host of other benefits. For details, see Moving from legacy BI extracts to modern data security and engineering.
Personal credentials need to be used to query data from the BI tool so that Immuta can apply the correct policies for the user accessing the dashboard. Different authentication mechanisms are available, depending on the BI tool, connector, and compute layer. However, Immuta recommends using one of the following methods:
Use OAuth single sign-on (SSO) when available, as it offers the best user experience.
Use username and password authentication or personal access tokens as an alternative if OAuth is not supported.
Use impersonation if you cannot create and authenticate individual users in the compute layer. Native impersonation allows users to natively query data as another Immuta user. For details, see the user impersonation guide.
For configuration guidance, see Power BI configuration example and Tableau configuration example.
Immuta has verified several popular BI tool and compute platform combinations. The table below outlines these combinations and their recommended authentication methods. However, since these combinations depend on tools outside Immuta, consult the platform documentation to confirm these suggestions.
AWS Databricks + Power BI Service: The Databricks Power BI Connector does not work with OAuth or personal credentials. Use a Databricks PAT (personal access token) as an alternative.
Redshift + Tableau: Use username and password authentication or impersonation.
Starburst + Power BI Service: The Power BI connector for Starburst requires a gateway that shares credentials, so this combination is not supported.
Starburst + Tableau: Use username and password authentication or impersonation.
QuickSight: A shared service account is used to query data, so this tool is not supported.
Click the App Settings icon in the left sidebar.
Click the link in the App Settings panel to navigate to that section.
See the identity manager pages for tutorials on connecting specific identity managers.
To configure Immuta to use all other existing IAMs,
Click the Add IAM button.
Complete the Display Name field and select your IAM type from the Identity Provider Type dropdown: LDAP/Active Directory, SAML, or OpenID.
Once you have selected LDAP/Active Directory from the Identity Provider Type dropdown menu,
Adjust Default Permissions granted to users by selecting from the list in this dropdown menu, and then complete the required fields in the Credentials and Options sections. Note: Either User Attribute OR User Search Filter is required, not both. Completing one of these fields disables the other.
Opt to have Case-insensitive user names by clicking the checkbox.
Opt to Enable Debug Logging or Enable SSL by clicking the checkboxes.
In the Profile Schema section, map attributes in LDAP/Active Directory to automatically fill in a user's Immuta profile. Note: Fields that you specify in this schema will not be editable by users within Immuta.
Opt to Enable scheduled LDAP Sync support for LDAP/Active Directory and Enable pagination for LDAP Sync. Once enabled, confirm the sync schedule; the default is every hour. Confirm the LDAP page size for pagination; the default is 1,000.
Opt to Sync groups from LDAP/Active Directory to Immuta. Once enabled, map attributes in LDAP/Active Directory to automatically pull information about the groups into Immuta.
Opt to Sync attributes from LDAP/Active Directory to Immuta. Once enabled, add attribute mappings in the attribute schema. The desired attribute prefix should be mapped to the relevant schema URN.
Opt to enable External Groups and Attributes Endpoint, Make Default IAM, or Migrate Users from another IAM by selecting the checkbox.
Then click the Test Connection button.
Once the connection is successful, click the Test User Login button.
Click the Test LDAP Sync button if scheduled sync has been enabled.
For SAML, see the SAML protocol configuration guide.
Once you have selected OpenID from the Identity Provider Type dropdown menu,
Take note of the ID. You will need this value to reference the IAM in the callback URL in your identity provider with the format <base url>/bim/iam/<id>/user/authenticate/callback.
Note the SSO Callback URL shown. Navigate out of Immuta and register the client application with the OpenID provider. If prompted for client application type, choose web.
Adjust Default Permissions granted to users by selecting from the list in this dropdown menu.
Back in Immuta, enter the Client ID, Client Secret, and Discover URL in the form field.
Configure OpenID provider settings. There are two options:
Set Discover URL to the /.well-known/openid-configuration URL provided by your OpenID provider.
If you are unable to use the Discover URL option, you can fill out Authorization Endpoint, Issuer, Token Endpoint, JWKS Uri, and Supported ID Token Signing Algorithms.
If necessary, add additional Scopes.
Opt to Enable SCIM support for OpenID by clicking the checkbox, which will generate a SCIM API Key.
In the Profile Schema section, map attributes in OpenID to automatically fill in a user's Immuta profile. Note: Fields that you specify in this schema will not be editable by users within Immuta.
Opt to Allow Identity Provider Initiated Single Sign On or Migrate Users from another IAM by selecting the checkboxes.
Click the Test Connection button.
Once the connection is successful, click the Test User Login button.
To set the default permissions granted to users when they log in to Immuta, click the Default Permissions dropdown menu, and then select permissions from this list.
Deprecation notice: Support for this feature has been deprecated.
Navigate to the App Settings page and click External Masking in the left sidebar.
Click Add Configuration and specify an external endpoint in the External URI field.
Click Configure, and then add at least one tag by selecting from the Search for tags dropdown menu. Note: Tag hierarchies are supported, so tagging a column as Sensitive.Customer would drive the policy if external masking was configured with the tag Sensitive.
Select Authentication Method and then complete the authentication fields (when applicable).
Click Test Connection and then Save.
Select Add Workspace.
Use the dropdown menu to select the Workspace Type and refer to the section below.
Databricks cluster configuration
Before creating a workspace, the cluster must send its configuration to Immuta; to do this, run a simple query on the cluster (e.g., show tables). Otherwise, an error message will occur when users attempt to create a workspace.
Databricks API Token expiration
The Databricks API Token used for native workspace access must be non-expiring. Using a token that expires risks losing access to projects that are created using that configuration.
Use the dropdown menu to select the Schema and refer to the corresponding tab below.
Required AWS S3 Permissions
When configuring a native workspace using Databricks with S3, the following permissions need to be applied to arn:aws:s3:::immuta-workspace-bucket/workspace/base/path/* and arn:aws:s3:::immuta-workspace-bucket/workspace/base/path. Note: Two of these values are found on the App Settings page; immuta-workspace-bucket is from the S3 Bucket field and workspace/base/path is from the Workspace Remote Path field:
s3:Get*
s3:Delete*
s3:Put*
s3:AbortMultipartUpload
Additionally, these permissions must be applied to arn:aws:s3:::immuta-workspace-bucket. Note: One value is found on the App Settings page; immuta-workspace-bucket is from the S3 Bucket field. A combined policy sketch follows this list:
s3:ListBucket
s3:ListBucketMultipartUploads
s3:GetBucketLocation
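Putting the two permission lists above together, an IAM policy attached to the role Immuta uses might look like the sketch below. This is a minimal example, assuming immuta-workspace-bucket and workspace/base/path are replaced with the values from your S3 Bucket and Workspace Remote Path fields.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "s3:Get*",
            "s3:Delete*",
            "s3:Put*",
            "s3:AbortMultipartUpload"
          ],
          "Resource": [
            "arn:aws:s3:::immuta-workspace-bucket/workspace/base/path",
            "arn:aws:s3:::immuta-workspace-bucket/workspace/base/path/*"
          ]
        },
        {
          "Effect": "Allow",
          "Action": [
            "s3:ListBucket",
            "s3:ListBucketMultipartUploads",
            "s3:GetBucketLocation"
          ],
          "Resource": "arn:aws:s3:::immuta-workspace-bucket"
        }
      ]
    }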
Enter the Name.
Click Add Workspace.
Enter the Hostname.
Opt to enter the Workspace ID (required with Azure Databricks).
Enter the Databricks API Token.
Use the dropdown menu to select the AWS Region.
Enter the S3 Bucket.
Opt to enter the S3 Bucket Prefix.
Click Test Workspace Bucket.
Once the credentials are successfully tested, click Save.
Enter the Name.
Click Add Workspace.
Enter the Hostname, Workspace ID, Account Name, Databricks API Token, and Storage Container.
Enter the Workspace Base Directory.
Click Test Workspace Directory.
Once the credentials are successfully tested, click Save.
Enter the Name.
Click Add Workspace.
Enter the Hostname, Workspace ID, Account Name, and Databricks API Token.
Use the dropdown menu to select the Google Cloud Region.
Enter the GCS Bucket.
Opt to enter the GCS Object Prefix.
Click Test Workspace Directory.
Once the credentials are successfully tested, click Save.
Select Add Native Integration.
Use the dropdown menu to select the Integration Type.
To enable a data provider,
Click the menu button in the upper right corner of the provider icon you want to enable.
Select Enable from the dropdown.
If an ODBC driver needs to be uploaded,
Click the menu button in the upper right corner of the provider icon, and then select Upload Driver from the dropdown.
Click in the Add Files to Upload box and upload your file.
Click Close.
Click the menu button again, and then select Enable from the dropdown.
Application Admins can configure the SMTP server that Immuta will use to send emails to users. If this server is not configured, users will only be able to view notifications in the Immuta console.
To configure the SMTP server,
Complete the Host and Port fields for your SMTP server.
Enter the username and password Immuta will use to log in to the server in the User and Password fields, respectively.
Enter the email address that will send the emails in the From Email field.
Opt to Enable TLS by clicking this checkbox, and then enter a test email address in the Test Email Address field.
Finally, click Send Test Email.
Once SMTP is enabled in Immuta, any Immuta user can request access to notifications as emails, which will vary depending on the permissions that user has. For example, to receive email notifications about group membership changes, the receiving user will need the GOVERNANCE permission. Once a user requests access to receive emails, Immuta will compile notifications and distribute these compilations via email at 8-hour intervals.
To configure Immuta to protect data in a kerberized Hadoop cluster,
Upload your Kerberos Configuration File, and then add or modify the Kerberos configuration in the window that appears.
Upload your Keytab File.
Enter the principal Immuta will use to authenticate with your KDC in the Username field. Note: This must match a principal in the Keytab file.
Adjust how often (in milliseconds) Immuta needs to re-authenticate with the KDC in the Ticket Refresh Interval field.
Click Test Kerberos Initialization.
Click the Generate Key button.
Save this API key in a secure location.
To improve performance when using Immuta to secure Spark or HDFS access, a user's access level is cached momentarily. These cache settings are configurable, but decreasing the Time to Live (TTL) on any cache too low will negatively impact performance.
To configure cache settings, enter the time in milliseconds in each of the Cache TTL fields.
You can set the URL users will use to access the Immuta application. Note: Proxy configuration must be handled outside Immuta.
Complete the Public Immuta URL field.
Click Save and confirm your changes.
By default, query text is included in native query audit events from Snowflake, Databricks, and Starburst (Trino).
When query text is excluded from audit events, Immuta will retain query event metadata such as the columns and tables accessed. However, the query text used to make the query will not be included in the event. This setting is a global control for all configured integrations.
To exclude query text from audit events,
Scroll to the Audit section.
Check the box to Exclude query text from audit events.
Click Save.
Click the Allow Policy Exemptions checkbox to allow users to specify who can bypass all policies on a data source.
Deprecation notice: The ability to configure the behavior of the default subscription policy has been deprecated.
Click the App Settings icon in the navigation menu.
Scroll to the Default Subscription Policy section.
Select the radio button to define the behavior of subscription policies when new data sources are registered in Immuta:
None: When this option is selected, Immuta will not apply any subscription policies to data sources when they are registered. Changing the default subscription policy to none will only apply to newly created data sources. Existing data sources will retain their existing subscription policies.
Allow individually selected users: When a data source is created, Immuta will apply a subscription policy to it that requires users to be individually selected to access the underlying table. In most cases, users who were able to query the table before the data source was created will no longer be able to query the table in the remote data platform until they are subscribed to the data source in Immuta.
Click Save and confirm your changes.
Immuta merges multiple Global Subscription policies that apply to a single data source; by default, users must meet all the conditions outlined in each policy to get access (i.e., the conditions of the policies are combined with AND). To change the default behavior to allow users to meet the condition of at least one policy that applies (i.e., the conditions of the policies are combined with OR),
Click the Default Subscription Merge Options text in the left pane.
Select the Default "allow shared policy responsibility" to be checked checkbox.
Click Save.
Note: Even with this setting enabled, Governors can opt to have their Global Subscription policies combined with AND during policy creation.
These options allow you to restrict the power individual users with the GOVERNANCE and USER_ADMIN permissions have in Immuta. Click the checkboxes to enable or disable these options.
You can create custom permissions that can then be assigned to users and leveraged when building subscription policies. Note: You cannot configure actions users can take within the console when creating a custom permission, nor can the actions associated with existing permissions in Immuta be altered.
To add a custom permission, click the Add Permission button, and then name the permission in the Enter Permission field.
To create a custom questionnaire that all users must complete when requesting access to a data source, fill in the following fields:
Opt for the questionnaire to be required.
Key: Any unique value that identifies the question.
Header: The text that will display on reports.
Label: The text that will display in the questionnaire for the user. They will be prompted to type the answer in a text box.
To create a custom message for the login page of Immuta, enter text in the Enter Login Message box. Note: The message can be formatted in markdown.
Opt to adjust the Message Text Color and Message Background Color by clicking in these dropdown boxes.
Without fingerprints, some policies will be unavailable.
Disabling the collection of statistics will cause these policies to be unavailable:
Masking with format preserving masking
Masking with K-Anonymization
Masking using randomized response
To disable the automatic collection of statistics with a particular tag,
Use the Select Tags dropdown to select the tag(s).
Click Save.
Query engine and legacy fingerprint required
K-anonymization policies require the query engine and legacy fingerprint service, which are disabled by default. If you need to use k-anonymization policies, work with your Immuta representative to enable the query engine and legacy fingerprint service when you deploy Immuta.
When a k-anonymization policy is applied to a data source, the columns targeted by the policy are queried under a fingerprinting process that generates rules enforcing k-anonymity. The results of this query, which may contain data that is subject to regulatory constraints such as GDPR or HIPAA, are stored in Immuta's metadata database.
The location of the metadata database depends on your deployment:
Self-managed Immuta deployment: The metadata database is located in the server where you have your external metadata database deployed.
SaaS Immuta deployment: The metadata database is located in the AWS global segment you have chosen to deploy Immuta.
To ensure this process does not violate your organization's data localization regulations, you must activate this masking policy type before you can use it in your Immuta tenant.
Click Other Settings in the left panel and scroll to the K-Anonymization section.
Select the Allow users to create masking policies using K-Anonymization checkbox to enable k-anonymization policies for your organization.
Click Save and confirm your changes.
If you enable any Preview features, please provide feedback on how you would like these features to evolve.
Click Advanced Settings in the left panel, and scroll to the Preview Features section.
Check the Enable Policy Adjustments checkbox.
Click Save.
Click Advanced Settings in the left panel, and scroll to the Preview Features section.
Check the Allow Complex Data Types checkbox.
Click Save.
Advanced configuration options provided by the Immuta Support team can be added in this section. The configuration must adhere to the YAML syntax.
To increase the default cardinality cutoff for columns compatible with k-anonymity,
Expand the Advanced Settings section and add the following text to the Advanced Configuration:
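The actual configuration key for this setting is supplied by Immuta Support and is not reproduced here; the snippet below is only a placeholder showing the expected YAML shape, with a hypothetical key name and an example cutoff value.

    # Hypothetical key name; replace with the setting provided by Immuta Support
    kAnonymityCardinalityCutoff: 500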
Click Save.
To regenerate the data source's fingerprint, navigate to that data source's page.
Click the Status in the upper right corner.
Click Re-run in the Fingerprint section of the dropdown menu.
Note: Re-running the fingerprint is only necessary for existing data sources. New data sources will be generated using the new maximum cardinality.
Expand the Advanced Settings section and add the following text to the Advanced Configuration to specify the number of seconds before webhook requests time out. For example, use 30 for 30 seconds. Setting it to 0 will result in no timeout.
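As a placeholder (the real key name is supplied by Immuta Support), the setting might take a form like the following, with the value in seconds as described above.

    # Hypothetical key name; value is in seconds (0 disables the timeout)
    webhookTimeoutSeconds: 30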
Click Save.
Expand the Advanced Settings section and add the following text to the Advanced Configuration to specify the number of minutes before an audit job expires. For example, use 300 for 300 minutes.
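As a placeholder (the real key name is supplied by Immuta Support), the setting might take a form like the following, with the value in minutes as described above.

    # Hypothetical key name; value is in minutes
    auditJobExpirationMinutes: 300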
Click Save.
Enable Native Sensitive Data Discovery for Starburst (Trino)
Expand the Advanced Settings section and add the following text to the Advanced Configuration:
Click Save.
Enable Frameworks API, Data Security Framework, and Risk Assessment Framework
Expand the Advanced Settings section and add the following text to the Advanced Configuration:
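A sketch of what this might look like, assuming the flag is set under a featureFlags block; the frameworks flag name is taken from the requirement note in the next section, but the surrounding structure is an assumption.

    # Assumed structure; the frameworks flag enables the Frameworks API and related frameworks
    featureFlags:
      frameworks: true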
Click Save.
Enable Additional Compliance Frameworks and the Data Inventory Dashboard
Requirement: The frameworks feature flag must be enabled and the configuration saved before dataInventoryDashboard can be added to the advanced configuration field and enabled.
Expand the Advanced Settings section and add the following text to the Advanced Configuration:
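A sketch under the same assumption as above (a featureFlags block); both flag names appear in the requirement note, and frameworks must already be enabled and saved before dataInventoryDashboard takes effect.

    # Assumed structure; frameworks must be enabled before dataInventoryDashboard
    featureFlags:
      frameworks: true
      dataInventoryDashboard: true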
Click Save.
To enable Azure Synapse Analytics, see the Azure Synapse Analytics integration documentation.
To enable Databricks Spark, see the Databricks Spark integration documentation.
To enable Databricks Unity Catalog, see the Databricks Unity Catalog integration documentation.
To enable Redshift, see the Redshift integration documentation.
To enable Snowflake, see the Snowflake integration documentation.
To enable Starburst (Trino), see the Starburst (Trino) integration documentation.
You can enable or disable the types of data sources users can create in this section. Some of these types will require you to upload an ODBC driver before they can be enabled. The list of currently supported drivers is in the ODBC drivers section above.
To enable Sensitive Data Discovery and configure its settings, see the Sensitive Data Discovery documentation.
The ability to configure the behavior of the default subscription policy has been deprecated. Once this configuration setting is removed from the App Settings page, Immuta will not apply a subscription policy to registered data sources unless an existing global policy applies to them. To set an "Allow individually selected users" subscription policy on all data sources, create a global subscription policy with that condition that applies to all data sources, or apply a subscription policy to individual data sources.
For instructions on enabling this feature, navigate to the relevant section of the App Settings page.