
You are viewing documentation for Immuta version 2022.2.


Simplified Databricks Configuration

Audience: System Administrators

Content Summary: This guide details the simplified installation method for enabling native access to Databricks with Immuta policies enforced.

Prerequisites: Ensure your Databricks workspace, instance, and permissions meet the guidelines outlined in the Installation Introduction.

Databricks Unity Catalog

If Unity Catalog is enabled in a Databricks workspace, you must use an Immuta cluster policy when you set up the integration to create an Immuta-enabled cluster.

1 - Add the Integration on the App Settings Page

  1. Log in to Immuta and click the App Settings icon in the left sidebar.
  2. Scroll to the System API Key subsection under HDFS and click Generate Key.


  3. Click Save and then Confirm.

  4. Scroll to the Native Integrations section, and click + Add a Native Integration.
  5. Select Databricks Integration from the dropdown menu.
  6. Complete the Hostname field.


  7. Enter a Unique ID for the integration. By default, your Immuta instance URL populates this field. This ID is used to tie the set of cluster policies to your instance of Immuta and allows multiple instances of Immuta to access the same Databricks workspace without cluster policy conflicts.

  8. Select your configured Immuta IAM from the dropdown menu.

  9. Choose one of the following options for your data access model:
    • Protected until made available by policy: All tables are hidden until a user is granted access through an Immuta policy. This follows the least-privilege model used by most databases, but it also means you must register all tables with Immuta.
    • Available until protected by policy: All tables are open until explicitly registered and protected by Immuta. This suits workspaces where most tables are non-sensitive and you want to pick and choose which to protect.
  10. Select the Storage Access Type from the dropdown menu.
  11. Opt to add any Additional Hadoop Configuration Files.
  12. Click Add Native Integration.

2 - Configure Cluster Policies

Several cluster policies are available on the App Settings page when configuring this integration. Read about each of these cluster policies before continuing with the tutorial.

  1. Click Configure Cluster Policies.


  2. Select one or more cluster policies in the matrix by clicking the Select button(s).

  3. Opt to make changes to these cluster policies by clicking Additional Policy Changes and editing the text field.


  4. Use one of the two Installation Types described in the tabs below to apply the policies to your cluster:

    Automatically Push Cluster Policies

    This option allows you to automatically push the cluster policies to the configured Databricks workspace. This will overwrite any cluster policy templates previously applied to this workspace.

    1. Select the Automatically Push Cluster Policies radio button.
    2. Enter your Admin Token. This token must be for a user who can create cluster policies in Databricks.


    3. Click Apply Policies.

    Manually Push Cluster Policies

    This option allows you to push the cluster policies to the configured Databricks workspace manually: you download the generated files and upload them to the workspace yourself.

    1. Select the Manually Push Cluster Policies radio button.


    2. Click Download Init Script.

    3. Follow the steps in the Instructions to upload the init script to DBFS section.


    4. Click Download Policies, and then manually add these Cluster Policies in Databricks.

  5. Opt to click Download Benchmarking Suite to compare a regular Databricks cluster to one protected by Immuta. Detailed instructions are in the first notebook; the suite requires both an Immuta-enabled cluster and a non-Immuta cluster to generate test data and run queries.

  6. Click Close, and then click Save and Confirm.
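If you script the manual-push steps instead of using the Databricks UI, the downloaded files can be sent with the Databricks REST API (POST /api/2.0/dbfs/put for the init script, POST /api/2.0/policies/clusters/create for the policies). The sketch below is illustrative only: the workspace host, token, DBFS path, and policy fragment are placeholders, and the real init script and policy definitions are the files downloaded from the App Settings page.

```python
import base64
import json

def dbfs_put_payload(dbfs_path: str, script_text: str) -> dict:
    """Request body for POST /api/2.0/dbfs/put; contents must be base64-encoded."""
    return {
        "path": dbfs_path,
        "contents": base64.b64encode(script_text.encode()).decode(),
        "overwrite": True,
    }

def create_policy_payload(name: str, definition: dict) -> dict:
    """Request body for POST /api/2.0/policies/clusters/create; the API
    expects the policy definition as a JSON-encoded string."""
    return {"name": name, "definition": json.dumps(definition)}

# Placeholder fragment in Databricks cluster-policy syntax: pin the init
# script location so every cluster built from the policy runs it.
definition = {
    "init_scripts.0.dbfs.destination": {
        "type": "fixed",
        "value": "dbfs:/immuta/immuta_cluster_init_script.sh",  # placeholder path
    },
}

policy_body = create_policy_payload("Immuta Cluster Policy", definition)
# POST these bodies to <workspace-url>/api/2.0/dbfs/put and
# <workspace-url>/api/2.0/policies/clusters/create, authenticating with a
# token for a user who can create cluster policies.
```

Pushing via the API has the same effect as the Automatically Push Cluster Policies option, so only use it if you chose the manual path.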

3 - Add Policies to Your Cluster

  1. Create a cluster in Databricks by following the Databricks documentation.
  2. In the Policy dropdown, select the Cluster Policies you pushed or manually added from Immuta.


  3. Select a Cluster Mode: Immuta supports both High Concurrency and Standard clusters in Databricks.

  4. Opt to adjust Autopilot Options and Worker Type settings: The default values provided here may exceed what is necessary for non-production or smaller use cases. To reduce resource usage, you can enable or disable autoscaling, limit the size and number of workers, and lower the inactivity timeout.
  5. Opt to configure the Instances tab in the Advanced Options section:

    • IAM Role (AWS ONLY): Select the instance role you created for this cluster. (For access key authentication, you should instead use the environment variables listed in the AWS section.)
  6. Click Create Cluster.
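For teams that create clusters programmatically rather than through the UI, the steps above correspond to a request body for the Databricks Clusters API (POST /api/2.0/clusters/create), where the policy_id field applies the cluster policy pushed from Immuta. A minimal sketch; the cluster name, runtime version, node type, and policy ID below are placeholder values:

```python
def create_cluster_payload(policy_id: str) -> dict:
    """Request body for POST /api/2.0/clusters/create that applies an
    Immuta-pushed cluster policy via the policy_id field."""
    return {
        "cluster_name": "immuta-enabled-cluster",  # placeholder name
        "spark_version": "10.4.x-scala2.12",       # example runtime version
        "node_type_id": "i3.xlarge",               # example AWS node type
        "num_workers": 1,                          # small, non-production sizing
        "autotermination_minutes": 30,             # low inactivity timeout
        "policy_id": policy_id,                    # ties the cluster to the policy
    }

cluster_body = create_cluster_payload("ABC123DEF456")  # placeholder policy ID
```

The policy ID is returned when the policy is created (or shown on the policy's page in the Databricks UI); any attribute the policy fixes will override the value given here.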

4 - Query Immuta Data

When the Immuta-enabled Databricks cluster has been successfully started, Immuta will create an immuta database, which allows Immuta to track Immuta-managed data sources separately from remote Databricks tables so that policies and other security features can be applied. However, users can query sources with their original database or table name without referencing the immuta database. Additionally, when configuring a Databricks cluster you can hide immuta from any calls to SHOW DATABASES so that users aren't misled or confused by its presence. For more details, see the Hiding the immuta Database in Databricks page.

  1. Before users can query an Immuta data source, an administrator must give the user Can Attach To permissions on the cluster.

  2. See the Databricks Data Source Creation guide for a detailed walkthrough of creating Databricks data sources in Immuta.
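The Can Attach To grant in step 1 can also be made through the Databricks Permissions API (PATCH /api/2.0/permissions/clusters/{cluster_id}). A minimal sketch, with a placeholder user name:

```python
def can_attach_payload(user_name: str) -> dict:
    """Request body for PATCH /api/2.0/permissions/clusters/<cluster_id>
    granting a single user the Can Attach To entitlement."""
    return {
        "access_control_list": [
            {"user_name": user_name, "permission_level": "CAN_ATTACH_TO"},
        ],
    }

acl_body = can_attach_payload("analyst@example.com")  # placeholder user
```

PATCH merges this entry with the cluster's existing access control list, so other users' grants are unaffected; the caller must be a workspace admin or have Can Manage on the cluster.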

Example Queries

Below are example queries that can be run to obtain data from an Immuta-configured data source. Because Immuta supports raw tables in Databricks, you do not have to use Immuta-qualified table names in your queries like the first example. Instead, you can run queries like the second example, which does not reference the immuta database.

%sql
select * from immuta.my_data_source limit 5;

%sql
select * from my_data_source limit 5;