Databricks Pre-Configuration Details

This page describes the Databricks Spark integration, configuration options, and features. See the Databricks Spark integration page for a tutorial on enabling Databricks and these features through the App Settings page.

Feature Availability

  • Project Workspaces

  • Databricks Tag Ingestion

  • User Impersonation

  • Native Query Audit

  • Multiple Integrations

Supported Databricks Cluster Configurations

The table below outlines the integrations supported for various Databricks cluster configurations. For example, the only integration available to enforce policies on a cluster configured to run on Databricks Runtime 9.1 is the Databricks Spark integration.

| Example cluster | Databricks Runtime | Unity Catalog in Databricks | Databricks Spark integration | Databricks Unity Catalog integration |
| --- | --- | --- | --- | --- |
| Cluster 1 | 9.1 | Unavailable | ✓ | Unavailable |
| Cluster 2 | 10.4 | Unavailable | ✓ | Unavailable |
| Cluster 3 | 11.3 | ✗ | ✓ | Unavailable |
| Cluster 4 | 11.3 | | | |
| Cluster 5 | 11.3 | | | |

Legend:

  • ✓ The feature or integration is enabled.

  • ✗ The feature or integration is disabled.

Databricks-Specific Details

Prerequisites

  • Databricks instance: Premium tier workspace with cluster access control enabled

  • The Databricks instance has network-level access to your Immuta tenant

  • Access to Immuta archives

  • Permissions and access to download files (outside internet access) or transfer them to the host machine

Recommended Databricks Workspace Configurations

Note: Azure Databricks authenticates users with Microsoft Entra ID. Be sure to configure your Immuta tenant with an IAM that uses the same user ID as Microsoft Entra ID; Immuta's Spark security plugin matches this user ID between the two systems. See this Microsoft Entra ID page for details.

Supported Databricks Runtime Versions

See this page for a list of Databricks Runtimes Immuta supports.

Supported Databricks Cluster Types

Supported Access Mode and Languages

Immuta supports the Custom access mode.

  • Supported Languages:

    • Python

    • SQL

    • R (requires advanced configuration; work with your Immuta support professional to use R)

    • Scala (requires advanced configuration; work with your Immuta support professional to use Scala)

Supported Features

The Immuta Databricks Spark integration supports the following Databricks features:

  • Change Data Feed: Databricks users can see the Databricks Change Data Feed on queried tables if they are allowed to read raw data and meet specific qualifications.

  • Databricks Libraries: Users can register their Databricks Libraries with Immuta as trusted libraries, allowing Databricks cluster administrators to avoid Immuta security manager errors when using third-party libraries.

  • External Metastores: Immuta supports the use of external metastores in local or remote mode.

  • Spark Direct File Reads: In addition to supporting direct file reads through workspace and scratch paths, Immuta allows direct file reads in Spark for file paths (see the sketch after this list).
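To make the Change Data Feed and direct file read items concrete, here is a minimal notebook-style sketch on an Immuta-enabled cluster. The storage path and table name are hypothetical; the Change Data Feed read uses the standard Databricks Delta API and only succeeds for users allowed to read raw data, as noted above.

```python
# Minimal sketch (hypothetical names and paths). `spark` is the SparkSession
# provided by the Databricks notebook environment.

# Direct file read from a storage path the user is permitted to access:
events_df = spark.read.parquet("s3://example-bucket/raw/events/")  # hypothetical path

# Reading the Delta Change Data Feed on a queried table (standard Databricks API):
changes_df = (
    spark.read.format("delta")
    .option("readChangeFeed", "true")
    .option("startingVersion", 1)
    .table("sales.transactions")  # hypothetical table
)
changes_df.show()
```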

Workspaces

Users can have additional write access in their integration using project workspaces. A single workspace or multiple workspaces can be integrated with a single Immuta tenant. For more details, see the Databricks Spark Project Workspaces page.

Tag Ingestion

The Immuta Databricks Spark integration cannot ingest tags from Databricks, but you can connect any of these supported external catalogs to work with your integration.

User Impersonation

Native impersonation allows users to natively query data as another Immuta user. To enable native user impersonation, see the User Impersonation page.

Native Query Audit

Audit limitations

Immuta will audit queries that come from interactive notebooks, notebook jobs, and JDBC connections, but will not audit Scala or R submit jobs. Furthermore, Immuta only audits Spark jobs that are associated with Immuta tables. Consequently, Immuta will not audit a query in a notebook cell that does not trigger a Spark job, unless immuta.spark.audit.all.queries is set to true; for more details about this configuration and auditing all queries in Databricks, see Limited enforcement in Databricks.
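As an illustration of that boundary, the two hypothetical notebook cells below contrast code that never triggers a Spark job with code that does; only the latter produces an audit record unless immuta.spark.audit.all.queries is set to true in the cluster's Spark configuration.

```python
# Cell 1: pure Python, no Spark plan is built, so no Spark job is audited by default.
local_total = sum(range(100))
print(local_total)

# Cell 2: this query runs a Spark job against a (hypothetical) Immuta-protected table,
# so the resulting job, and the query text that triggered it, is audited.
spark.sql("SELECT count(*) FROM sales.transactions").show()
```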

Capturing the code or query that triggers the Spark plan makes audit records more useful in assessing what users are doing.

To audit the code or query that triggers the Spark plan, Immuta hooks into Databricks where notebook cells and JDBC queries execute and saves the cell or query text. Then, Immuta pulls this information into the audits of the resulting Spark jobs. Examples of a saved cell/query and the resulting audit record are provided on the Databricks query audit logs page.

Multiple Databricks Instances

A user can configure multiple integrations of Databricks to a single Immuta tenant and use them dynamically or with workspaces.

Schema monitoring for Databricks Spark

In most cases, Immuta’s schema monitoring job runs automatically from the Immuta web service. For Databricks, that automatic job is disabled because of the ephemeral nature of Databricks clusters. In this case, Immuta requires users to download a schema detection job template (a Python script) and import that into their Databricks workspace. See the Register a Databricks data source guide for details.
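Because the imported script has to run on a schedule inside Databricks, one possible approach is to register it as a recurring Databricks job. The sketch below uses the Databricks Jobs REST API (2.1); the workspace URL, token, cluster ID, notebook path, and cron schedule are all placeholders you would replace, and the notebook path assumes you imported Immuta's schema detection template there.

```python
import requests

# Hypothetical values: replace with your workspace URL, token, an Immuta-enabled
# cluster ID, and the path where you imported Immuta's schema detection script.
DATABRICKS_HOST = "https://example.cloud.databricks.com"
TOKEN = "dapiXXXXXXXXXXXXXXXX"

job_spec = {
    "name": "immuta-schema-detection",
    "tasks": [
        {
            "task_key": "run-schema-detection",
            "existing_cluster_id": "0123-456789-abcde123",
            "notebook_task": {"notebook_path": "/Shared/immuta_schema_detection"},
        }
    ],
    # Run nightly at 02:00 UTC (quartz cron syntax used by Databricks schedules).
    "schedule": {"quartz_cron_expression": "0 0 2 * * ?", "timezone_id": "UTC"},
}

resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=job_spec,
)
resp.raise_for_status()
print("Created job:", resp.json()["job_id"])
```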

Limitation

Immuta does not support Databricks clusters with Photon acceleration enabled.
