
Databricks SQL Pre-Configuration Details (Public Preview)

Audience: System Administrators

Content Summary: This page describes the Databricks SQL integration, its configuration options, and its features. For a tutorial on enabling this integration, see the installation guide. Databricks SQL is currently in Public Preview. Please provide feedback on any issues you encounter and on how you would like this feature to evolve.

Feature Availability

  • Project Workspaces: ❌
  • Tag Ingestion: ❌
  • User Impersonation: ❌
  • Native Query Audit: ❌
  • Multiple Integrations: ✅

Prerequisites

  • Before an administrator configures the Databricks SQL integration within Immuta, a Databricks SQL administrator must set up a Databricks SQL environment. For guidance on setting up and using a Databricks SQL environment, see the Get started with Databricks SQL guide in the Databricks documentation.

  • A Databricks SQL administrator must generate a Databricks personal access token, which will be used to configure Databricks SQL with Immuta. This token allows Immuta to authenticate to the Databricks REST API, connect to SQL endpoints, and create the Immuta database inside Databricks SQL. Databricks displays this personal access token only once, so be sure to copy and save it.

Note: If the token is not generated by a Databricks SQL administrator, it will not carry the privileges Immuta needs to create this database, and an error will be displayed in the Immuta UI. A quick way to sanity-check the token is sketched below.
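
Before configuring the integration, it can be useful to confirm that the saved token can reach the Databricks REST API at all. The following minimal sketch is not part of Immuta; it simply lists SQL warehouses (the REST API's name for SQL endpoints) using the token. The DATABRICKS_HOST and DATABRICKS_TOKEN environment variables are hypothetical placeholders for your workspace URL and the token generated above.

```python
# Minimal sketch (not part of Immuta): verify that the personal access token
# can authenticate to the Databricks REST API by listing SQL warehouses.
import os

import requests

host = os.environ["DATABRICKS_HOST"]    # hypothetical, e.g. "https://<workspace>.cloud.databricks.com"
token = os.environ["DATABRICKS_TOKEN"]  # the personal access token saved earlier

resp = requests.get(
    f"{host}/api/2.0/sql/warehouses",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()  # a 200 response confirms the token authenticates

for warehouse in resp.json().get("warehouses", []):
    print(warehouse["id"], warehouse["name"], warehouse["state"])
```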

Authentication Method

The Databricks SQL integration supports the following authentication method to install the integration and create data sources:

  • Privileged User Token: Users can authenticate with a Databricks SQL personal access token. Note: The access token should not have an expiration date. If an expiration date is set, the token will need to be replaced each time the current one expires. A sketch for creating a non-expiring token follows.
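
If you prefer to mint the token through the REST API rather than the UI, the hedged sketch below uses the Databricks Token API and omits lifetime_seconds so the token does not expire. Note that some workspaces enforce a maximum token lifetime, in which case an expiring token and periodic rotation are unavoidable. The host and admin-token environment variables are hypothetical placeholders.

```python
# Hedged sketch: mint a personal access token via the Databricks Token API.
# Omitting "lifetime_seconds" requests a token that does not expire. Run this
# while authenticated as a Databricks SQL administrator.
import os

import requests

host = os.environ["DATABRICKS_HOST"]                # hypothetical placeholder
admin_token = os.environ["DATABRICKS_ADMIN_TOKEN"]  # an existing admin token

resp = requests.post(
    f"{host}/api/2.0/token/create",
    headers={"Authorization": f"Bearer {admin_token}"},
    json={"comment": "Immuta Databricks SQL integration"},  # no lifetime_seconds
    timeout=30,
)
resp.raise_for_status()

# Databricks returns the token value only once; copy and store it securely.
print(resp.json()["token_value"])
```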

Tag Ingestion

The Immuta Databricks SQL integration cannot ingest tags from Databricks SQL, but you can connect any of these supported external catalogs to work with your integration.

Multiple Integrations

Users can configure multiple integrations of Databricks SQL with a single Immuta instance.

Databricks SQL Limitations

  • Starting a SQL Analytics endpoint in Databricks SQL can take several minutes; this startup time is inherent to the Databricks SQL product. Consequently, for the Databricks SQL and Immuta Native SQL integration to function properly (e.g., for schema changes to be detected automatically), ensure Auto Stop is set to OFF for your SQL Analytics endpoint in Databricks SQL Analytics (see the sketch after this list). Note that keeping the endpoint running has cost implications for your Databricks usage.
  • Databricks SQL does not currently support UDFs. Due to this limitation, Immuta is unable to support format preserving encryption, reversible masking, randomized response, or regex policies.
  • When subscription policies are updated frequently, a bottleneck can occur in showing and hiding view metadata. This will not affect typical use cases.
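
Auto Stop can be turned off in the Databricks SQL UI; the hedged sketch below shows the equivalent change through the Databricks REST API (which calls endpoints "warehouses"), assuming that setting auto_stop_mins to 0 disables Auto Stop, as the warehouse edit endpoint documents. The warehouse ID and host/token environment variables are hypothetical placeholders.

```python
# Hedged sketch: disable Auto Stop on a SQL endpoint through the Databricks
# REST API. Setting auto_stop_mins to 0 disables Auto Stop so the endpoint
# stays running for the Immuta integration.
import os

import requests

host = os.environ["DATABRICKS_HOST"]       # hypothetical placeholder
token = os.environ["DATABRICKS_TOKEN"]     # an admin personal access token
warehouse_id = os.environ["WAREHOUSE_ID"]  # the endpoint backing the integration

resp = requests.post(
    f"{host}/api/2.0/sql/warehouses/{warehouse_id}/edit",
    headers={"Authorization": f"Bearer {token}"},
    json={"auto_stop_mins": 0},  # 0 = never auto-stop
    timeout=30,
)
resp.raise_for_status()
print(f"Auto Stop disabled for warehouse {warehouse_id}")
```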