Databricks Spark
This integration enforces policies on Databricks securables registered in the legacy Hive metastore. Once these securables are registered as Immuta data sources, users can query policy-enforced data on Databricks clusters.
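For instance, once a securable is registered as an Immuta data source, querying it from a notebook looks the same as querying any other table. The sketch below is a minimal, hypothetical example (the table name is a placeholder); policy enforcement happens transparently at query time.

```python
# In a notebook on an Immuta-enabled Databricks cluster, query a
# registered table as usual; the integration applies Immuta policies
# (row filters, column masks) at query time.
# "marketing.customers" is a placeholder name.
df = spark.table("marketing.customers")
df.show(5)  # results arrive already filtered and masked per policy
```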
The guides in this section outline how to integrate Databricks Spark with Immuta.
- This getting started guide outlines how to integrate Databricks with Immuta.
- Configure the Databricks Spark integration.
- Manually update your cluster to reflect changes in the Immuta init script or cluster policies.
- Register a Databricks library with Immuta as a trusted library to avoid Immuta security manager errors when using third-party libraries (sketched after this list).
- Raise the on-cluster caching and lower the cache timeouts for the Immuta web service to allow the use of project UDFs in Spark jobs (sketched after this list).
- Run R and Scala spark-submit jobs on your Databricks cluster (sketched after this list).
- Access DBFS in Databricks for non-sensitive data (sketched after this list).
- Resolve errors in the Databricks Spark configuration.
- This guide describes the design and components of the integration.
- This guide provides an overview of the Immuta features that secure your users and Databricks clusters and that allow you to prove compliance and monitor for anomalies.
- This guide provides an overview of registering Databricks securables and protecting them with Immuta policies.
- This guide provides an overview of how Databricks users access data registered in Immuta.
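For the trusted-library guide, a minimal sketch of checking the setting from a notebook. The Spark configuration key is taken from Immuta's documentation but should be treated as an assumption; the actual value is set in the cluster's Spark config per that guide.

```python
# Inspect the comma-separated list of library URIs the cluster was
# configured to trust (key name assumed; typically set in the cluster's
# Spark config, e.g. to "dbfs:/path/to/library.jar").
trusted = spark.conf.get("immuta.spark.databricks.trusted.lib.uris", "")
print(trusted.split(","))
```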
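For the project UDFs guide, a minimal sketch of switching project context in a Spark session, assuming Immuta exposes its project UDFs as SQL functions as its documentation describes; the function name and project name here are assumptions.

```python
# Set the current Immuta project for this session so subsequent queries
# are evaluated under that project's policies.
# "Fraud Investigation" is a placeholder project name; .show() forces
# the statement to execute.
spark.sql("SELECT immuta.set_current_project('Fraud Investigation')").show()
```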
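For the spark-submit guide, a sketch that submits an R script as a Databricks job through the Jobs API's spark_submit_task. The workspace URL, token, script path, and cluster sizing are placeholders, and the Immuta-specific cluster configuration covered in the guide is omitted.

```python
import requests

# Create a Databricks job that runs an R script via spark-submit.
# spark_submit_task requires a new cluster; all values in angle
# brackets are placeholders.
resp = requests.post(
    "https://<workspace>.cloud.databricks.com/api/2.1/jobs/create",
    headers={"Authorization": "Bearer <personal-access-token>"},
    json={
        "name": "r-spark-submit-example",
        "tasks": [{
            "task_key": "run_r_script",
            "spark_submit_task": {"parameters": ["/dbfs/<path>/analysis.R"]},
            "new_cluster": {
                "spark_version": "11.3.x-scala2.12",
                "node_type_id": "i3.xlarge",
                "num_workers": 1,
            },
        }],
    },
)
print(resp.json())  # returns the new job_id on success
```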
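For the DBFS guide, a short sketch of browsing and reading non-sensitive files from a notebook using Databricks' built-in dbutils; the paths are placeholders.

```python
# dbutils is available automatically in Databricks notebooks.
# List a DBFS directory and preview a file; paths are placeholders.
for f in dbutils.fs.ls("dbfs:/FileStore/reference-data/"):
    print(f.path, f.size)

print(dbutils.fs.head("dbfs:/FileStore/reference-data/countries.csv"))
```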