Databricks Spark Integration Configuration
Immuta offers a Spark integration for Databricks.
In this integration, Immuta installs an Immuta-maintained Spark plugin on your Databricks cluster. When a user queries data that has been registered in Immuta as a data source, the plugin injects policy logic into the plan Spark builds so that the results returned to the user only include data that specific user should see.
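The per-user filtering described above can be sketched in plain Python. This is an illustrative simplification only: the actual plugin rewrites Spark's internal logical plan, and every name below (the data, the user, the policy map) is hypothetical, not Immuta's API.

```python
# Simplified model of policy injection: before a query runs, the user's
# row-level policy is inserted as an extra filter step, so the returned
# rows are only those that user is allowed to see.

def apply_policy(rows, user, policies):
    """Inject the user's row-level policy as a filter over the result."""
    predicate = policies.get(user, lambda row: True)  # no policy: all rows
    return [row for row in rows if predicate(row)]

# Hypothetical table registered as an Immuta data source.
claims = [
    {"region": "US", "amount": 100},
    {"region": "EU", "amount": 250},
]

# Hypothetical policy: this analyst may only see US rows.
policies = {"analyst@example.com": lambda row: row["region"] == "US"}

result = apply_policy(claims, "analyst@example.com", policies)
```

In the real integration this filtering happens inside Spark's plan, not in application code, so users query data sources normally and never see the policy logic.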
The reference guides in this section are written for Databricks administrators who are responsible for setting up the integration, securing Databricks clusters, and setting up users:
: This guide describes what Immuta creates in your Databricks environment and how to secure your Databricks clusters.
: Consult this guide for information about customizing the Databricks Spark integration settings.
: Consult this guide for information about connecting data users and setting up user impersonation.
: This guide provides a list of Spark environment variables used to configure the integration.
: This guide describes ephemeral overrides and how to configure them to reduce the risk that a user has overrides set on a cluster (or multiple clusters) that aren't currently running.