Databricks Spark Integration Configuration

The Databricks Spark integration is one of the integrations Immuta offers for Databricks.

In this integration, Immuta installs an Immuta-maintained Spark plugin on your Databricks cluster. When a user queries data that has been registered in Immuta as a data source, the plugin injects policy logic into the plan Spark builds so that the results returned to the user only include data that specific user should see.
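
To make that mechanism concrete, the sketch below shows how a generic Spark extension can rewrite a logical plan before execution, which is the general technique plan-injection plugins rely on. This is not Immuta's plugin and all names in it are hypothetical; a hard-coded filter stands in for the per-user policy logic the integration would derive from Immuta.

```scala
package com.example

// Conceptual sketch only -- NOT Immuta's plugin. It illustrates the general
// mechanism of injecting a rule that rewrites Spark's logical plan.
import org.apache.spark.sql.SparkSessionExtensions
import org.apache.spark.sql.catalyst.expressions.{EqualTo, Literal}
import org.apache.spark.sql.catalyst.plans.logical.{Filter, LogicalPlan}
import org.apache.spark.sql.catalyst.rules.Rule
import org.apache.spark.sql.execution.datasources.LogicalRelation

// Hypothetical rule: wraps every scan of a table that has a "region" column
// in a filter, standing in for policy logic tied to the querying user.
object ExamplePolicyRule extends Rule[LogicalPlan] {
  override def apply(plan: LogicalPlan): LogicalPlan = plan transform {
    case rel: LogicalRelation if rel.output.exists(_.name == "region") =>
      val region = rel.output.find(_.name == "region").get
      Filter(EqualTo(region, Literal("US")), rel)
  }
}

// Registered on the cluster with:
//   spark.sql.extensions=com.example.ExamplePolicyExtensions
class ExamplePolicyExtensions extends (SparkSessionExtensions => Unit) {
  override def apply(extensions: SparkSessionExtensions): Unit =
    // Post-hoc resolution rules run once per query, after analysis,
    // so the filter is added exactly once per table scan.
    extensions.injectPostHocResolutionRule(_ => ExamplePolicyRule)
}
```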

The reference guides in this section are written for Databricks administrators who are responsible for setting up the integration, securing Databricks clusters, and setting up users:

  • Installation and compliance: This guide describes what Immuta creates in your Databricks environment and how to secure your Databricks clusters.

  • Customizing the integration: Consult this guide for information about customizing the Databricks Spark integration settings.

  • Setting up users: Consult this guide for information about connecting data users and setting up user impersonation.

  • Spark environment variables: This guide provides a list of Spark environment variables used to configure the integration (an illustrative example of setting such variables follows this list).

  • Ephemeral overrides: This guide describes ephemeral overrides and how to configure them to reduce the risk of a user having overrides set to a cluster (or multiple clusters) that aren't currently up.
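
As one illustration of how such variables are commonly supplied, Databricks lets you set environment variables per cluster in the Environment variables field under Advanced Options, or under spark_env_vars in a cluster API spec. The variable names below are hypothetical placeholders, not the integration's actual settings; consult the Spark environment variables guide for the real list.

```
# Hypothetical placeholder names only -- see the Spark environment
# variables guide for the integration's actual variables.
# Set in the cluster's "Environment variables" field (Advanced Options),
# or under "spark_env_vars" in a cluster API/JSON spec.
IMMUTA_EXAMPLE_BASE_URL=https://your-immuta-tenant.example.com
# Databricks secret reference syntax keeps the key out of the cluster spec.
IMMUTA_EXAMPLE_API_KEY={{secrets/immuta/api-key}}
```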
