
Reference Guides

Databricks Spark Integration Configuration

The Databricks Spark integration is one of two integrations Immuta offers for Databricks.

In this integration, Immuta installs an Immuta-maintained Spark plugin on your Databricks cluster. When a user queries data that has been registered in Immuta as a data source, the plugin injects policy logic into the plan Spark builds so that the results returned to the user only include data that specific user should see.

The reference guides in this section are written for Databricks administrators who are responsible for setting up the integration, securing Databricks clusters, and setting up users:

  • Installation and compliance: This guide includes information about what Immuta creates in your Databricks environment and securing your Databricks clusters.

  • Customizing the integration: Consult this guide for information about customizing the Databricks Spark integration settings.

  • Setting up users: Consult this guide for information about connecting data users and setting up user impersonation.

  • Spark environment variables: This guide provides a list of Spark environment variables used to configure the integration.

  • Ephemeral overrides: This guide describes ephemeral overrides and how to configure them to reduce the risk that a user has overrides set to a cluster (or multiple clusters) that aren't currently up.

Ephemeral Overrides

In the context of the Databricks Spark integration, Immuta uses the term ephemeral to describe data sources where the associated compute resources can vary over time. This means that the compute bound to these data sources is not fixed and can change. All Databricks data sources in Immuta are ephemeral.

Ephemeral overrides are specific to each data source and user. They effectively bind cluster compute resources to a data source for a given user. Immuta uses these overrides to determine which cluster compute to use when connecting to Databricks for various maintenance operations.

The operations that use the ephemeral overrides include the following:

  • Visibility checks on the data source for a particular user. These checks assess how to apply row-level policies for specific users.

  • Stats collection triggered by a specific user.

  • Validating a custom WHERE clause policy against a data source. When owners or governors create custom WHERE clause policies, Immuta uses compute resources to validate the SQL in the policy. In this case, the ephemeral overrides for the user writing the policy are used to contact a cluster for SQL validation.

  • High cardinality column detection. Certain advanced policy types (e.g., minimization) in Immuta require a high cardinality column, and that column is computed on data source creation. It can be recomputed on demand and, if so, will use the ephemeral overrides for the user requesting computation.

Triggering an ephemeral override request

An ephemeral override request can be triggered when a user queries the securable corresponding to a data source on a Databricks cluster with the Spark plugin configured. Whether the request is actually sent depends on the cluster's configuration settings.

Ephemeral overrides can also be set for a data source in the Immuta UI by navigating to a data source page, clicking on the data source actions button, and selecting Ephemeral overrides from the dropdown menu.

Ephemeral override requests made from a cluster for data sources and users where ephemeral overrides were set in the UI will not be successful.

If ephemeral overrides are never set (either through the user interface or the cluster configuration), the system will continue to use the connection details directly associated with the data source, which are set during data source registration.

Configuring overrides in Immuta-enabled clusters

Ephemeral overrides can be problematic in environments that have a dedicated cluster to handle maintenance activities, since ephemeral overrides can cause these operations to execute on a different cluster than the dedicated one.

To reduce the risk that a user has overrides set to a cluster (or multiple clusters) that aren't currently up, complete one of the following actions:

  • Direct all clusters' HTTP paths for overrides to a cluster dedicated for metadata queries using the IMMUTA_EPHEMERAL_HOST_OVERRIDE_HTTPPATH Spark environment variable.

  • Disable ephemeral overrides completely by setting the IMMUTA_EPHEMERAL_HOST_OVERRIDE Spark environment variable to false.

Ephemeral overrides best practices

  1. Disable ephemeral overrides for clusters when using multiple workspaces and dedicate a single cluster to serve queries from Immuta in a single workspace.

  2. If you use multiple E2 workspaces without disabling ephemeral overrides, avoid applying the where user row-level policy to data sources.

Setting Up Users

When the Databricks Spark plugin is running on a Databricks cluster, all Databricks users running jobs or queries are either a privileged user or a non-privileged user:

  • Privileged users: Privileged users can effectively read from and write to any table or view in the cluster metastore, or any file path accessible by the cluster, without restriction. Privileged users are either Databricks workspace admins or users specified in IMMUTA_SPARK_ACL_ALLOWLIST. Any user writing queries or jobs impersonating another user is a non-privileged user, even if they are impersonating a privileged user.

    Privileged users have effective authority to read from and write to any securable in the cluster metastore or file path because, in almost all cases, Databricks clusters running with the Immuta Spark plugin installed have Hive metastore table access control disabled. However, if Hive metastore table access control is enabled on the cluster, privileged users have only the authority granted to them by table access control.

  • Non-privileged users: Non-privileged users are any users who are not privileged users, and all authorization for non-privileged users is determined by Immuta policies.

Whether a user is a privileged user or a non-privileged user for a given query or job is cached once it is first determined, based on the IMMUTA_SPARK_ACL_PRIVILEGED_TIMEOUT_SECONDS environment variable. This caching can be disabled entirely by setting the value of that environment variable to 0.

Mapping Databricks users to Immuta

Usernames in Databricks must match the usernames in the connected Immuta tenant. By default, the Immuta Spark plugin checks the Databricks username against the username within Immuta's internal IAM to determine access. However, you can integrate your existing IAM with Immuta and use that instead of the default internal IAM. Ideally, you should use the same identity manager for Immuta that you use for Databricks. See the Immuta support matrix page for a list of supported identity providers and protocols.

It is possible within Immuta to have multiple users share the same username if they exist within different IAMs. In this case, the cluster can be configured to look up users from a specified IAM. To do this, the value of the IMMUTA_USER_MAPPING_IAMID Spark environment variable must be updated to the ID of the targeted IAM configured within the Immuta tenant. The targeted IAM ID can be found on the App settings page. Each Databricks cluster can only be mapped to one IAM.

User impersonation

Databricks user impersonation allows a Databricks user to impersonate an Immuta user. With this feature,

  • the Immuta user who is being impersonated does not have to have a Databricks account, but they must have an Immuta account.

  • the Databricks user who is impersonating an Immuta user does not have to be associated with Immuta. For example, this could be a service account.

When acting under impersonation, the Databricks user loses their privileged access, so they can only access the tables the Immuta user has access to and only perform DDL commands when that user is acting under an allowed circumstance (such as workspaces, scratch paths, or non-Immuta reads/writes).

Use the IMMUTA_SPARK_DATABRICKS_ALLOWED_IMPERSONATION_USERS Spark environment variable to enable user impersonation.

Scala clusters

Immuta discourages use of this feature with Scala clusters, as the proper security mechanisms were not built to account for user isolation limitations in Scala clusters. Instead, this feature was developed for the BI tool use case in which service accounts connecting to the Databricks cluster need to impersonate Immuta users so that policies can be enforced.

Accessing Data

Once a Databricks securable is registered in Immuta as a data source and you are subscribed to that data source, you must access that data through SQL. The examples below show the same query in Python, Scala, SQL, and R.

Python:

df = spark.sql("select * from immuta.table")

Scala:

import org.apache.spark.sql.SparkSession

val spark = SparkSession
  .builder()
  .appName("Spark SQL basic example")
  .config("spark.some.config.option", "some-value")
  .getOrCreate()
val sqlDF = spark.sql("SELECT * FROM immuta.table")

SQL:

%sql
select * from immuta.table

R:

library(SparkR)
df <- SparkR::sql("SELECT * from immuta.table")

With R, you must load the SparkR library in a cell before accessing the data.

See the sections below for more guidance on accessing data using Delta Lake, direct file reads in Spark for file paths, and user impersonation.

Delta Lake

When using Delta Lake, the API does not go through the normal Spark execution path. This means that Immuta's Spark extensions do not provide protection for the API. To solve this issue and ensure that Immuta has control over what a user can access, the Delta Lake API is blocked.

Spark SQL can be used instead to give the same functionality with all of Immuta's data protections. See the Delta API reference guide for a list of corresponding Spark SQL calls to use.
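
For example, a row deletion that would normally go through the Delta Lake API can be issued as Spark SQL so that the query stays in the policy-enforced execution path. This is a minimal sketch; immuta.my_table and the status column are hypothetical names:

# Instead of DeltaTable.forName(spark, "my_table").delete(...), issue the equivalent Spark SQL.
spark.sql("DELETE FROM immuta.my_table WHERE status = 'inactive'")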

Spark direct file reads

In addition to supporting direct file reads through workspace and scratch paths, Immuta allows direct file reads in Spark for file paths. As a result, users who prefer to interact with their data using file paths or who have existing workflows revolving around file paths can continue to use these workflows without rewriting those queries for Immuta.

When reading from a path in Spark, the Immuta Databricks Spark plugin queries the Immuta Web Service to find Databricks data sources for the current user that are backed by data from the specified path. If found, the query plan maps to the Immuta data source and follows existing code paths for policy enforcement.

Users can read data from individual parquet files in a sub-directory and partitioned data from a sub-directory (or by using a where predicate). Expand the blocks below to view examples of reading data using these methods.

Read data from an individual parquet file

To read from an individual file, load a partition file from a sub-directory:

spark.read.format("parquet").load("s3:/my_bucket/path/to/my_parquet_table/partition_column=01/my_file.parquet")
Read partitioned data from a sub-directory

To read partitioned data from a sub-directory, load a parquet partition from a sub-directory:

spark.read.format("parquet").load("s3:/my_bucket/path/to/my_parquet_table/partition_column=01")

Alternatively, load a parquet partition using a where predicate:

spark.read.format("parquet").load("s3:/my_bucket/path/to/my_parquet_table").where("partition_column=01")Read partitioned data from a sub-directory

Limitations

  • Direct file reads for Immuta data sources only apply to data sources created from tables, not data sources created from views or queries.

  • If more than one data source has been created for a path, Immuta will use the first valid data source it finds. It is therefore not recommended to use direct file reads when more than one data source has been created for a path.

  • In Databricks, multiple input paths are supported as long as they belong to the same data source.

  • CSV-backed tables are not currently supported.

  • Loading a delta partition from a sub-directory is not recommended by Spark and is not supported in Immuta. Instead, use a where predicate:

    # Not recommended by Spark and not supported in Immuta
    spark.read.format("delta").load("s3://my_bucket/path/to/my_delta_table/partition_column=01")

    # Recommended by Spark and supported in Immuta
    spark.read.format("delta").load("s3://my_bucket/path/to/my_delta_table").where("partition_column=01")
    

User impersonation

User impersonation allows Databricks users to query data as another Immuta user. To impersonate another user, see the User impersonation page.

Delta Lake API

When using Delta Lake, the API does not go through the normal Spark execution path. This means that Immuta's Spark extensions do not provide protection for the API. To solve this issue and ensure that Immuta has control over what a user can access, the Delta Lake API is blocked.

Spark SQL can be used instead to give the same functionality with all of Immuta's data protections.

Requests

Below is a table of the Delta Lake API with the Spark SQL that may be used instead.

  • DeltaTable.convertToDelta → CONVERT TO DELTA parquet.`/path/to/parquet/`

  • DeltaTable.delete → DELETE FROM [table_identifier delta.`/path/to/delta/`] WHERE condition

  • DeltaTable.generate → GENERATE symlink_format_manifest FOR TABLE [table_identifier delta.`/path/to/delta`]

  • DeltaTable.history → DESCRIBE HISTORY [table_identifier delta.`/path/to/delta`] (LIMIT x)

  • DeltaTable.merge → MERGE INTO

  • DeltaTable.update → UPDATE [table_identifier delta.`/path/to/delta/`] SET column = value WHERE (condition)

  • DeltaTable.vacuum → VACUUM [table_identifier delta.`/path/to/delta`]

See here for a complete list of the Delta SQL Commands.
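
These Spark SQL equivalents can be run from a notebook with spark.sql. A minimal sketch, assuming a hypothetical registered table immuta.my_table:

# Review table history (DESCRIBE HISTORY in place of DeltaTable.history).
spark.sql("DESCRIBE HISTORY immuta.my_table LIMIT 5").show()

# Clean up old files (VACUUM in place of DeltaTable.vacuum).
spark.sql("VACUUM immuta.my_table")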

Merging tables in workspaces

When a table is created in a project workspace, you can merge a different Immuta data source from that workspace into the table you created (see the sketch after these steps):

  1. Create a table in the project workspace.

  2. Create a temporary view of the Immuta data source you want to merge into that table.

  3. Use that temporary view as the data source you add to the project workspace.

  4. Run the following command:

    MERGE INTO delta_native.target_native as target
    USING immuta_temp_view_data_source as source
    ON target.dr_number = source.dr_number
    WHEN MATCHED THEN
    UPDATE SET target.date_reported = source.date_reported
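
Putting steps 2 and 4 together, a minimal PySpark sketch follows; delta_native.target_native and the columns come from the example above, while immuta.claims is a hypothetical Immuta data source name:

# Step 2: create a temporary view of the Immuta data source to merge in.
spark.sql("SELECT * FROM immuta.claims").createOrReplaceTempView("immuta_temp_view_data_source")

# Step 4: merge that view into the table created in the project workspace.
spark.sql("""
    MERGE INTO delta_native.target_native AS target
    USING immuta_temp_view_data_source AS source
    ON target.dr_number = source.dr_number
    WHEN MATCHED THEN UPDATE SET target.date_reported = source.date_reported
""")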

Registering and Protecting Data

In the Databricks Spark integration, Immuta installs an Immuta-maintained Spark plugin on your Databricks cluster. When a user queries data that has been registered in Immuta as a data source, the plugin injects policy logic into the plan Spark builds so that the results returned to the user only include data that specific user should see.

The sequence diagram below breaks down this process of events when an Immuta user queries data in Databricks.

Immuta intercepts Spark calls to the Metastore. Immuta then modifies the logical plan so that policies are applied to the data for the querying user.

Registering data

When data owners register Databricks securables in Immuta, the securable metadata is registered and Immuta creates a corresponding data source for those securables. The data source metadata is stored in the Immuta Metadata Database so that it can be referenced in policy definitions.

The image below illustrates what happens when a data owner registers the Accounts, Claims, and Customers securables in Immuta.

Users who are subscribed to the data source in Immuta can then query the corresponding securable directly in their Databricks notebook or workspace.

Authentication methods

See the Installation and compliance page for details about the authentication methods supported for registering data.

Schema monitoring

When schema monitoring is enabled, Immuta monitors your servers to detect when new tables or columns are created or deleted, and automatically registers (or disables) those tables in Immuta. These newly updated data sources will then have any global policies and tags that are set in Immuta applied to them. The Immuta data dictionary will be updated with any column changes, and the Immuta environment will be in sync with your data environment.

For Databricks Spark, the automatic schema monitoring job is disabled because of the ephemeral nature of Databricks clusters. In this case, Immuta requires you to download a schema detection job template (a Python script) and import that into your Databricks workspace.

See the Databricks data source guide for instructions on enabling schema monitoring.

Ephemeral overrides

In Immuta, a Databricks data source is considered ephemeral, meaning that the compute resources associated with that data source will not always be available.

Ephemeral data sources allow the use of ephemeral overrides, user-specific connection parameter overrides that are applied to Immuta metadata operations.

When a user runs a Spark job in Databricks, the Immuta plugin automatically submits ephemeral overrides for that user to Immuta. Consequently, subsequent metadata operations for that user will use the current cluster as compute.

See the Ephemeral overrides page for more details about ephemeral overrides and how to configure or disable them.

Ephemeral override requests

The Spark plugin has the capability to send ephemeral override requests to Immuta. These requests are distinct from ephemeral overrides themselves. Ephemeral overrides cannot be turned off, but the Spark plugin can be configured to not send ephemeral override requests.

Tag ingestion

Tags can be used in Immuta in a variety of ways:

  • Use tags for global subscription or data policies that will apply to all data sources in the organization. In doing this, company-wide data security restrictions can be controlled by the administrators and governors, while the users and data owners need only to worry about tagging the data correctly.

  • Generate Immuta reports from tags for insider threat surveillance or data access monitoring.

  • Filter search results with tags in the Immuta UI.

The Databricks Spark integration cannot ingest tags from Databricks, but you can connect any of these supported external catalogs to work with your integration.

You can also manage tags in Immuta by manually adding tags to your data sources and columns. Alternatively, you can use identification to automatically tag your sensitive data.

Protecting data

Immuta allows you to author subscription and data policies to automate access controls on your Databricks data.

  • Subscription policies: After registering data sources in Immuta, you can control who has access to specific securables in Databricks through Immuta subscription policies or by manually adding users to the data source. Data users will only see the immuta database with no tables until they are granted access to those tables as Immuta data sources. See the Subscription policy access types page for a list of policy types supported.

  • Data policies: You can create data policies to apply fine-grained access controls (such as restricting rows or masking columns) to manage what users can see in each table after they are subscribed to a data source. See the Data policy types page for details about specific types of data policies supported.

The image below illustrates how Immuta enforces a subscription policy that only allows users in the Analysts group to access the yellow-table.

See the Automate data access control decisions page for details about the benefits of using Immuta subscription and data policies.

Policy enforcement in Databricks

Once a Databricks user who is subscribed to the data source in Immuta queries the corresponding securable directly in their workspace, Spark Analysis initiates and the following events take place:

  1. Spark calls down to the Metastore to get table metadata.

  2. Immuta intercepts the call to retrieve table metadata from the Metastore.

  3. Immuta modifies the Logical Plan to enforce policies that apply to that user.

  4. Immuta wraps the Physical Plan with specific Java classes to signal to the Security Manager that it is a trusted node and is allowed to scan raw data.

  5. The Physical Plan is applied and filters out and transforms raw data coming back to the user.

  6. The user sees policy-enforced data.

The image below illustrates what happens when an Immuta user who is subscribed to the Customers data source queries the securable in Databricks.

Users who can read raw tables on-cluster

Regardless of the policies on the data source, users will be able to read raw data on the cluster if they meet one of the criteria listed below:

  • The user is a Databricks administrator tied to an Immuta account.

  • The user is listed as an ignored user. (Users can be specified in the IMMUTA_SPARK_ACL_ALLOWLIST Spark environment variable to become ignored users.)

Protected and unprotected tables

Generally, Immuta prevents users from seeing data unless they are explicitly given access, which blocks access to raw sources in the underlying databases.

Databricks non-admin users will only see sources to which they are subscribed in Immuta, and this can present problems if organizations have a data lake full of non-sensitive data and Immuta removes access to all of it. To address this challenge, Immuta allows administrators to change this default setting when configuring the integration so that Immuta users can access securables that are not registered as a data source. Although this is similar to how privileged users in Databricks operate, non-privileged users cannot bypass Immuta controls.

See the Customizing the integration guide for details about this setting.

Restricting users' access to data with Immuta projects

Immuta projects combine users and data sources under a common purpose. Sometimes this purpose is simply for a single user to organize their data sources or to control an entire schema of data sources through a single projects screen; most often, however, the project is tied to an Immuta purpose for which the data has been approved to be used, which restricts access to data and streamlines team collaboration. Consequently, data owners can restrict access to data for a specified purpose through projects.

When a user is working within the context of a project, they will only see the data in that project. This helps to prevent data leaks when users collaborate. Users can switch project contexts to access various data sources while acting under the appropriate purpose.

When users change project contexts (either through the Immuta UI or with project UDFs), queries reflect users as acting under the purposes of that project, which may allow additional access to data if there are purpose restrictions on the data source(s). This process also allows organizations to track not just whether a specific data source is being used, but why.

See the Customizing the integration page for details about how to prevent users from switching project contexts in a session.

Project workspaces

Users can have additional write access in their integration using project workspaces. Users can integrate a single or multiple workspaces with a single Immuta tenant.

See the Writing to projects page for more details.

Spark Environment Variables

This page outlines configuration details for Immuta-enabled Databricks clusters. Databricks administrators should place the desired configuration in the Spark environment variables.

IMMUTA_INIT_ADDITIONAL_CONF_URI

If you add additional Hadoop configuration during the integration setup, this variable sets the path to that file.

The additional Hadoop configuration is where sensitive configuration goes for remote filesystems (if you are using a secret key pair to access S3, for example).

IMMUTA_EPHEMERAL_HOST_OVERRIDE

Default value: true

Set this to false if ephemeral overrides should not be enabled for Spark. When true, this will automatically override ephemeral data source httpPaths with the httpPath of the Databricks cluster running the user's Spark application.

IMMUTA_EPHEMERAL_HOST_OVERRIDE_HTTPPATH

This configuration item can be used if automatic detection of the Databricks httpPath should be disabled in favor of a static path to use for ephemeral overrides.

IMMUTA_EPHEMERAL_TABLE_PATH_CHECK_ENABLED

Default value: true

When querying Immuta data sources in Spark, the metadata from the Metastore is compared to the metadata for the target source in Immuta to validate that the source being queried exists and is queryable on the current cluster. This check typically validates that the target (database, table) pair exists in the Metastore and that the table’s underlying location matches what is in Immuta. This configuration can be used to disable location checking if that location is dynamic or changes over time. Note: This may lead to undefined behavior if the same table names exist in multiple workspaces but do not correspond to the same underlying data.

IMMUTA_INIT_ALLOWED_CALLING_CLASSES_URI

A URI that points to a valid calling class file, which is an Immuta artifact you download during the Databricks Spark configuration process.

IMMUTA_SPARK_ACL_ALLOWLIST

This is a comma-separated list of Databricks users who can access any table or view in the cluster metastore without restriction.

IMMUTA_SPARK_ACL_PRIVILEGED_TIMEOUT_SECONDS

Default value: 3600

The number of seconds to cache privileged user status for the Immuta ACL. A privileged Databricks user is an admin or is allowlisted in IMMUTA_SPARK_ACL_ALLOWLIST.

IMMUTA_SPARK_AUDIT_ALL_QUERIES

Default value: false

Enables auditing all queries run on a Databricks cluster, regardless of whether users touch Immuta-protected data or not.

IMMUTA_SPARK_DATABRICKS_ALLOW_NON_IMMUTA_READS

Default value: false

Allows non-privileged users to SELECT from tables that are not protected by Immuta. See the Customizing the integration guide for details about this feature.

IMMUTA_SPARK_DATABRICKS_ALLOW_NON_IMMUTA_WRITES

Default value: false

Allows non-privileged users to run DDL commands and data-modifying commands against tables or spaces that are not protected by Immuta. See the Customizing the integration guide for details about this feature.

IMMUTA_SPARK_DATABRICKS_ALLOWED_IMPERSONATION_USERS

This is a comma-separated list of Databricks users who are allowed to impersonate Immuta users, for example:

"spark_env_vars.IMMUTA_SPARK_DATABRICKS_ALLOWED_IMPERSONATION_USERS": {
  "type": "fixed",
  "value": "[email protected],[email protected]"
}

IMMUTA_SPARK_DATABRICKS_DBFS_MOUNT_ENABLED

Default value: false

Exposes the DBFS FUSE mount located at /dbfs. Granular permissions are not possible, so all users will have read/write access to all objects therein. Note: Raw, unfiltered source data should never be stored in DBFS.
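
When the mount is enabled, /dbfs behaves like a local filesystem path from Python. A minimal sketch (the file path is hypothetical), keeping in mind that everything written here is readable and writable by all users:

# /dbfs is a world-readable/writable FUSE mount; never write raw or sensitive data here.
with open("/dbfs/tmp/example_scratch.txt", "w") as f:
    f.write("non-sensitive scratch output only")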

IMMUTA_SPARK_DATABRICKS_DISABLED_UDFS

Blocks one or more Immuta user-defined functions (UDFs) from being used on an Immuta cluster. This should be a Java regular expression that matches the set of UDFs to block by name (excluding the immuta database). For example, to block all project UDFs, you may configure this to be ^.*_projects?$. For a list of functions, see the project UDFs page.

IMMUTA_SPARK_DATABRICKS_JAR_URI

Default value: file:///databricks/jars/immuta-spark-hive.jar

The location of immuta-spark-hive.jar on the filesystem for Databricks. This should not need to change unless a custom initialization script that places immuta-spark-hive in a non-standard location is necessary.

IMMUTA_SPARK_DATABRICKS_LOCAL_SCRATCH_DIR_ENABLED

Default value: true

Creates a world-readable or writable scratch directory on local disk to facilitate the use of dbutils and 3rd party libraries that may write to local disk. Its location is non-configurable and is stored in the environment variable IMMUTA_LOCAL_SCRATCH_DIR. Note: Sensitive data should not be stored at this location.
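
A minimal sketch of using the scratch directory from Python, assuming the IMMUTA_LOCAL_SCRATCH_DIR environment variable is visible to the notebook process (the file name is hypothetical):

import os

# The plugin stores the scratch directory's location in IMMUTA_LOCAL_SCRATCH_DIR.
scratch_dir = os.environ["IMMUTA_LOCAL_SCRATCH_DIR"]
with open(os.path.join(scratch_dir, "tmp_output.csv"), "w") as f:
    f.write("col_a,col_b\n1,2\n")  # do not write sensitive data to this world-accessible location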

IMMUTA_SPARK_DATABRICKS_LOG_LEVEL

Default value: INFO

The SLF4J log level to apply to Immuta's Spark plugins.

IMMUTA_SPARK_DATABRICKS_LOG_STDOUT_ENABLED

Default value: false

If true, writes logging output to stdout/the console as well as the log4j-active.txt file (default in Databricks).

IMMUTA_SPARK_DATABRICKS_SCRATCH_DATABASE

This configuration is a comma-separated list of additional databases that will appear as scratch databases when running a SHOW DATABASES query. This configuration increases performance by circumventing the Metastore lookup otherwise needed to determine which databases to display for a SHOW DATABASES query; it won't affect access to the scratch databases. Instead, use IMMUTA_SPARK_DATABRICKS_SCRATCH_PATHS to control read and write access to the underlying database paths.

Additionally, this configuration will only display the scratch databases that are configured and will not validate that the configured databases exist in the Metastore. Therefore, it is up to the Databricks administrator to properly set this value and keep it current.

IMMUTA_SPARK_DATABRICKS_SCRATCH_PATHS

Comma-separated list of remote paths that Databricks users are allowed to directly read/write. These paths amount to unprotected "scratch spaces." You can create a scratch database by configuring its specified location (or configure dbfs:/user/hive/warehouse/<db_name>.db for the default location).

To create a scratch path to a location or a database stored at that location, configure

IMMUTA_SPARK_DATABRICKS_SCRATCH_PATHS=s3://path/to/the/dir

To create a scratch path to a database created using the default location, configure

IMMUTA_SPARK_DATABRICKS_SCRATCH_PATHS=s3://path/to/the/dir,dbfs:/user/hive/warehouse/any_db_name.db

IMMUTA_SPARK_DATABRICKS_SCRATCH_PATHS_CREATE_DB_ENABLED

Default value: false

Enables non-privileged users to create or drop scratch databases.

IMMUTA_SPARK_DATABRICKS_SINGLE_IMPERSONATION_USER

Default value: false

When true, this configuration prevents users from changing their impersonation user once it has been set for a given Spark session. This configuration should be set when the BI tool or other service allows users to submit arbitrary SQL or issue SET commands.

IMMUTA_SPARK_DATABRICKS_SUBMIT_TAG_JOB

Default value: true

Denotes whether to run the Spark job that "tags" a Databricks cluster as being associated with Immuta.

IMMUTA_SPARK_DATABRICKS_TRUSTED_LIB_URIS

A comma-separated list of Databricks trusted library URIs.

IMMUTA_SPARK_NON_IMMUTA_TABLE_CACHE_SECONDS

Default value: 3600

The number of seconds Immuta caches whether a table has been exposed as a data source in Immuta. This setting only applies when IMMUTA_SPARK_DATABRICKS_ALLOW_NON_IMMUTA_WRITES or IMMUTA_SPARK_DATABRICKS_ALLOW_NON_IMMUTA_READS is enabled.

IMMUTA_SPARK_REQUIRE_EQUALIZATION

Default value: false

Requires that users act through a single, equalized project. A cluster should be equalized if users need to run Scala jobs on it, and it should be limited to Scala jobs only via spark.databricks.repl.allowedLanguages.

IMMUTA_SPARK_RESOLVE_RAW_TABLES_ENABLED

Default value: true

Enables use of the underlying database and table name in queries against a table-backed Immuta data source. Administrators or allowlisted users can set IMMUTA_SPARK_RESOLVE_RAW_TABLES_ENABLED to false to bypass resolving raw databases or tables as Immuta data sources. This is useful if an admin wants to read raw data but is also an Immuta user. By default, data policies will be applied to a table even for an administrative user if that admin is also an Immuta user.

IMMUTA_SPARK_SESSION_RESOLVE_RAW_TABLES_ENABLED

Default value: true

Same as the IMMUTA_SPARK_RESOLVE_RAW_TABLES_ENABLED variable, but this is a session property that allows users to toggle the functionality per session. If users run set immuta.spark.session.resolve.raw.tables.enabled=false, they will see raw data only (not Immuta data policy-enforced data). Note: This property is not set in immuta_conf.xml.
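
A minimal sketch of toggling the session property from a notebook, assuming the current user is permitted to bypass resolution (typically an administrator or allowlisted user); my_database.my_table is a hypothetical raw table name:

# Turn off raw-table resolution for this session only to read the raw table.
spark.sql("set immuta.spark.session.resolve.raw.tables.enabled=false")
raw_df = spark.sql("SELECT * FROM my_database.my_table")

# Re-enable resolution so subsequent queries go back through Immuta data sources.
spark.sql("set immuta.spark.session.resolve.raw.tables.enabled=true")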

IMMUTA_SPARK_SHOW_IMMUTA_DATABASE

Default value: true

This shows the immuta database in the configured Databricks cluster. When set to false, Immuta will no longer show this database when a SHOW DATABASES query is performed. However, queries can still be performed against tables in the immuta database using the Immuta-qualified table name (e.g., immuta.my_schema_my_table) regardless of whether this feature is enabled.
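
A minimal sketch of the behavior described above; immuta.my_schema_my_table follows the Immuta-qualified name format from the example (the underlying schema and table names are hypothetical):

# With IMMUTA_SPARK_SHOW_IMMUTA_DATABASE=false, the immuta database is omitted here.
spark.sql("SHOW DATABASES").show()

# Queries against the Immuta-qualified table name still work either way.
df = spark.sql("SELECT * FROM immuta.my_schema_my_table")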

IMMUTA_SPARK_VERSION_VALIDATE_ENABLED

Default value: true

Immuta checks the versions of its artifacts to verify that they are compatible with each other. When set to true, if versions are incompatible, that information will be logged to the Databricks driver logs and the cluster will not be usable. If a configuration file or the jar artifacts have been patched with a new version (and the artifacts are known to be compatible), this check can be set to false so that the versions don't get logged as incompatible and make the cluster unusable.

IMMUTA_USER_MAPPING_IAMID

Default value: bim

Denotes which IAM in Immuta should be used when mapping the current Spark user's username to a userid in Immuta. This defaults to Immuta's internal IAM (bim) but should be updated to reflect an actual production IAM.

"spark_env_vars.IMMUTA_SPARK_DATABRICKS_ALLOWED_IMPERSONATION_USERS": {
  "type": "fixed",
  "value": "[email protected],[email protected]"
}
IMMUTA_SPARK_DATABRICKS_SCRATCH_PATHS=s3://path/to/the/dir
IMMUTA_SPARK_DATABRICKS_SCRATCH_PATHS=s3://path/to/the/dir,dbfs:/user/hive/warehouse/any_db_name.db</value>
Databricks Spark configuration
Customizing the integration guide
Customizing the integration guide
user-defined functions (UDFs)
project UDFs page
IMMUTA_SPARK_DATABRICKS_SCRATCH_PATHS
Databricks trusted library
IMMUTA_SPARK_RESOLVE_RAW_TABLES_ENABLED

Security and Compliance

Immuta offers several features to provide security for your users and Databricks clusters and to prove compliance and monitor for anomalies.

Authentication

Configuring the integration and registering data

Immuta supports the following authentication methods to configure the Databricks Spark integration and register data sources:

  • OAuth machine-to-machine (M2M): Immuta uses the Client Credentials Flow to integrate with Databricks OAuth machine-to-machine authentication, which allows Immuta to authenticate with Databricks using a client secret. Once Databricks verifies the Immuta service principal’s identity using the client secret, Immuta is granted a temporary OAuth token to perform token-based authentication in subsequent requests. When that token expires (after one hour), Immuta requests a new temporary token. See the Databricks OAuth machine-to-machine (M2M) authentication page for more details.

  • Personal access token (PAT): This token gives Immuta temporary permission to push the cluster policies to the configured Databricks workspace and overwrite any cluster policy templates previously applied to the workspace when configuring the integration or to register securables as Immuta data sources.

User authentication

The built-in Immuta IAM can be used as a complete solution for authentication and fine-grained user entitlement. However, you can connect your existing identity management provider to Immuta to use that system for authentication and fine-grained user entitlement instead.

Each of the supported identity providers includes a specific set of configuration options that enable Immuta to communicate with the IAM system and map the users, permissions, groups, and attributes into Immuta.

See the Identity managers guide for a list of supported providers and details.

See the Setting up users guide for details and instructions on mapping Databricks user accounts to Immuta.

Cluster security

Data processing and encryption

See the Data processing and the Encryption and masking practices guides for more information about transmission of policy decision data, encryption of data in transit and at rest, and encryption key management.

Protecting the Immuta configuration

Non-administrator users on an Immuta-enabled Databricks cluster must not have access to view or modify Immuta configuration, as this poses a security loophole around Immuta policy enforcement. Databricks secrets allow you to securely apply environment variables to Immuta-enabled clusters.

Databricks secrets can be used in the environment variables configuration section for a cluster by referencing the secret path instead of the actual value of the environment variable.

See the Installation and compliance guide for details and instructions on using Databricks secrets.

Scala cluster security

There are limitations to isolation among users in Scala jobs on a Databricks cluster. When data is broadcast, cached (spilled to disk), or otherwise saved to SPARK_LOCAL_DIR, it is impossible to distinguish which user's data is contained in each file/block. To address this vulnerability, Immuta suggests that you

  • limit Scala clusters to Scala jobs only and

  • require equalized projects, which will force all users to act under the same set of attributes, groups, and purposes with respect to their data access. This requirement guarantees that data being dropped into SPARK_LOCAL_DIR will have policies enforced and that those policies will be homogeneous for all users on the cluster. Since each user will have access to the same data, if they attempt to manually access other users' cached/spilled data, they will only see what they have access to via equalized permissions on the cluster. If project equalization is not turned on, users could dig through that directory and find data from another user with heightened access, which would result in a data leak.

See the Installation and compliance guide for more details and configuration instructions.

Auditing and compliance

Immuta provides auditing features and governance reports so that data owners and governors can monitor users' access to data and detect anomalies in behavior.

You can view the information in these audit logs on dashboards or export the full audit logs to S3 and ADLS for long-term backup and processing with log data processors and tools. This capability fosters convenient integrations with log monitoring services and data pipelines.

See the Audit documentation for details about these capabilities and how they work with the Databricks Spark integration.

Databricks query audit

Immuta captures the code or query that triggers the Spark plan in Databricks, making audit records more useful in assessing what users are doing.

To audit what triggers the Spark plan, Immuta hooks into Databricks where notebook cells and JDBC queries execute and saves the cell or query text. Then, Immuta pulls this information into the audits of the resulting Spark jobs.

Immuta will audit queries that come from interactive notebooks, notebook jobs, and JDBC connections, but will not audit Scala or R submit jobs. Furthermore, Immuta only audits Spark jobs that are associated with Immuta tables. Consequently, Immuta will not audit a query in a notebook cell that does not trigger a Spark job, unless IMMUTA_SPARK_AUDIT_ALL_QUERIES is set to true.

See the Databricks Spark query audit logs page for examples of saved queries and the resulting audit records. To exclude query text from audit events, see the App settings page.

Auditing all queries

Immuta supports auditing all queries run on a Databricks cluster, regardless of whether users touch Immuta-protected data or not.

See the Installation and compliance guide for details and instructions.

Auditing queries run while impersonating another user

When a query is run by a user impersonating another user, the extra.impersonationUser field in the audit log payload is populated with the Databricks username of the user impersonating another user. The userId field will return the Immuta username of the user being impersonated:

{
  "id": "query-a20e-493e-id-c1ada0a23a26",
  [...]
  "userId": "<immuta_username>",
  [...]
  "extra": {
    [...]
    "impersonationUser": "<databricks_username>"
  }
  [...]
}

See the Setting up users guide for details about user impersonation.

Governance reports

Immuta governance reports allow users with the GOVERNANCE Immuta permission to use a natural language builder to instantly create reports that delineate user activity across Immuta. These reports can be based on various entity types, including users, groups, projects, data sources, purposes, policy types, or connection types.

See the Governance report types page for a list of report types and guidance.

Customizing the Integration

You can customize the Databricks Spark integration settings using these components Immuta provides:

  • Cluster policies

  • Spark environment variables

  • Hadoop configuration file

Cluster policies

Immuta provides cluster policies that set the Spark environment variables and configuration on your Databricks cluster once you apply that policy to your cluster. These policies generated by Immuta must be applied to your cluster manually. The Configure a Databricks Spark integration guide includes instructions for generating and applying these cluster policies. Each cluster policy is described below.

Python and SQL

This is the most performant policy configuration.

In this configuration, Immuta is able to rely on Databricks-native security controls, reducing overhead. The key security control here is the enablement of process isolation. This prevents users from obtaining unintentional access to the queries of other users. In other words, masked and filtered data is consistently made accessible to users in accordance with their assigned attributes. This Immuta cluster configuration relies on Py4J security being enabled. Consequently, the following Databricks features are unsupported:

  • Many Python ML classes (such as LogisticRegression, StringIndexer, and DecisionTreeClassifier)

  • dbutils.fs

  • Databricks Connect client library

For full details on Databricks’ best practices in configuring clusters, read their governance documentation.

Python, SQL, and R

Additional overhead: Compared to the Python and SQL cluster policy, this configuration trades some additional overhead for added support of the R language.

In this configuration, you are able to rely on the Databricks-native security controls. The key security control here is the enablement of process isolation. This prevents users from obtaining unintentional access to the queries of other users. In other words, masked and filtered data is consistently made accessible to users in accordance with their assigned attributes.

Like the Python & SQL configuration, Py4j security is enabled for the Python & SQL & R configuration. However, because R has been added Immuta enables the Security Manager, in addition to Py4J security, to provide more security guarantees. For example, by default all actions in R execute as the root user; among other things, this permits access to the entire filesystem (including sensitive configuration data), and, without iptable restrictions, a user may freely access the cluster’s cloud storage credentials. To address these security issues, Immuta’s initialization script wraps the R and Rscript binaries to launch each command as a temporary, non-privileged user with limited filesystem and network access and installs the Immuta Security Manager, which prevents users from bypassing policies and protects against the above vulnerabilities from within the JVM.

Consequently, the cost of introducing R is that the Security Manager incurs a small increase in performance overhead; however, average latency will vary depending on whether the cluster is homogeneous or heterogeneous. (In homogeneous clusters, all users are at the same level of groups/authorizations; this is enforced externally, rather than directly by Immuta.)

When users install third-party Java/Scala libraries, they will be denied access to sensitive resources by default. However, cluster administrators can specify which of the installed Databricks libraries should be trusted by Immuta.

The following Databricks features are unsupported when this cluster policy is applied:

  • Many Python ML classes (such as LogisticRegression, StringIndexer, and DecisionTreeClassifier)

  • dbutils.fs

  • Databricks Connect client library

For full details on Databricks’ best practices in configuring clusters, read their governance documentation.

Python, SQL, and R with library support

Py4J security disabled: In addition to support for Python, SQL, and R, this configuration adds support for additional Python libraries and utilities by disabling Databricks-native Py4J security.

This configuration does not rely on Databricks-native Py4J security to secure the cluster, while process isolation is still enabled to secure filesystem and network access from within Python processes. On an Immuta-enabled cluster, once Py4J security is disabled the Immuta Security Manager is installed to prevent nefarious actions from Python in the JVM. Disabling Py4J security also allows for expanded Python library support, including many Python ML classes (such as LogisticRegression, StringIndexer, and DecisionTreeClassifier) and dbutils.fs.

By default, all actions in R will execute as the root user. Among other things, this permits access to the entire filesystem (including sensitive configuration data). And without iptable restrictions, a user may freely access the cluster’s cloud storage credentials. To properly support the use of the R language, Immuta’s initialization script wraps the R and Rscript binaries to launch each command as a temporary, non-privileged user. This user has limited filesystem and network access. The Immuta Security Manager is also installed to prevent users from bypassing policies and protects against the above vulnerabilities from within the JVM.

The Security Manager will incur a small increase in performance overhead; average latency will vary depending on whether the cluster is homogeneous or heterogeneous. (In homogeneous clusters, all users are at the same level of groups/authorizations; this is enforced externally, rather than directly by Immuta.)

When users install third-party Java/Scala libraries, they will be denied access to sensitive resources by default. However, cluster administrators can specify which of the installed Databricks libraries should be trusted by Immuta.

A homogeneous cluster is recommended for configurations where Py4J security is disabled. If all users have the same level of authorization, there would not be any data leakage, even if a nefarious action was taken.

For full details on Databricks’ best practices in configuring clusters, read their governance documentation.

Scala

Scala clusters: This configuration is for Scala-only clusters.

Where Scala language support is needed, this configuration can be used in the Custom access mode.

According to Databricks’ cluster type support documentation, Scala clusters are intended for single users only. However, nothing inherently prevents a Scala cluster from being configured for multiple users. Even with the Immuta Security Manager enabled, there are limitations to user isolation within a Scala job.

For a secure configuration, it is recommended that clusters intended for Scala workloads are limited to Scala jobs only and are made homogeneous through the use of project equalization or externally via convention/cluster ACLs. (In homogeneous clusters, all users are at the same level of groups/authorizations; this is enforced externally, rather than directly by Immuta.)

For full details on Databricks’ best practices in configuring clusters, read their governance documentation.

Sparklyr

Single-user clusters recommended: Like Databricks, Immuta recommends single-user clusters for sparklyr when user isolation is required. A single-user cluster can either be a job cluster or a cluster with credential passthrough enabled. Note: spark-submit jobs are not currently supported.

Two cluster types can be configured with sparklyr: Single-User Clusters (recommended) and Multi-User Clusters (discouraged).

  • Single-User Clusters: Credential Passthrough (required on Databricks) allows a single-user cluster to be created. This setting automatically configures the cluster to assume the role of the attached user when reading from storage. Because Immuta requires that raw data is readable by the cluster, the instance profile associated with the cluster should be used rather than a role assigned to the attached user.

  • Multi-User Clusters: Because Immuta cannot guarantee user isolation in a multi-user sparklyr cluster, it is not recommended to deploy a multi-user cluster. To force all users to act under the same set of attributes, groups, and purposes with respect to their data access and eliminate the risk of a data leak, all sparklyr multi-user clusters must be equalized either by convention (all users able to attach to the cluster have the same level of data access in Immuta) or by configuration (detailed below).

Single-user cluster configuration

1 - Enable sparklyr

In addition to the configuration for an Immuta cluster with R, add this environment variable to the Environment Variables section of the cluster:

IMMUTA_DATABRICKS_SPARKLYR_SUPPORT_ENABLED=true

This configuration makes changes to the iptables rules on the cluster to allow the sparklyr client to connect to the required ports on the JVM used by the sparklyr backend service.

2 - Set up a sparklyr connection in Databricks

  1. Install and load libraries into a notebook. Databricks includes the stable version of sparklyr, so library(sparklyr) in an R notebook is sufficient, but you may opt to install the latest version of sparklyr from CRAN. Additionally, loading library(DBI) will allow you to execute SQL queries.

  2. Set up a sparklyr connection:

    sc <- spark_connect(method = "databricks")
  3. Pass the connection object to execute queries:

    dbGetQuery(sc, "show tables in immuta")

3 - Configure a single-user cluster

Add the following items to the Spark Config section of the cluster:

spark.databricks.passthrough.enabled true

spark.databricks.pyspark.trustedFilesystems com.databricks.s3a.S3AFileSystem,shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.SecureAzureBlobFileSystem,shaded.databricks.v20180920_b33d810.org.apache.hadoop.fs.azurebfs.SecureAzureBlobFileSystem,com.databricks.adl.AdlFileSystem,shaded.databricks.V2_1_4.com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem,shaded.databricks.org.apache.hadoop.fs.azure.NativeAzureFileSystem,shaded.databricks.org.apache.hadoop.fs.s3a.S3AFileSystem,org.apache.hadoop.fs.ImmutaSecureFileSystemWrapper

spark.hadoop.fs.s3a.aws.credentials.provider com.amazonaws.auth.InstanceProfileCredentialsProvider

The trustedFileSystems setting is required to allow Immuta’s wrapper FileSystem (used in conjunction with the Security Manager for data security purposes) to be used with credential passthrough. Additionally, the InstanceProfileCredentialsProvider must be configured to continue using the cluster’s instance profile for data access, rather than a role associated with the attached user.

Multi-user cluster configuration

Avoid deploying multi-user clusters with sparklyr configuration

It is possible, but not recommended, to deploy a multi-user cluster sparklyr configuration. Immuta cannot guarantee user isolation in a multi-user sparklyr configuration.

The configurations in this section enable sparklyr, require project equalization, map sparklyr sessions to the correct Immuta user, and prevent users from accessing Immuta native workspaces.

  1. Add the following environment variables to the Environment Variables section of your cluster configuration:

    IMMUTA_DATABRICKS_SPARKLYR_SUPPORT_ENABLED=true
    
    IMMUTA_SPARK_REQUIRE_EQUALIZATION=true
    
    IMMUTA_SPARK_CURRENT_USER_SCIM_FALLBACK=false
  2. Add the following items to the Spark Config section:

    immuta.spark.acl.assume.not.privileged true
    
    immuta.api.key=<user’s API key>

Limitations

Immuta’s integration with sparklyr does not currently support

  • spark-submit jobs

  • UDFs

Spark environment variables

The Spark environment variables reference guide lists the various possible settings controlled by these variables that you can set in your cluster policy before attaching it to your cluster.

Additional Hadoop configuration file (optional)

In some cases it is necessary to add sensitive configuration to SparkSession.sparkContext.hadoopConfiguration to allow Spark to read data.

For example, when accessing external tables stored in Azure Data Lake Gen2, Spark must have credentials to access the target containers or filesystems in Azure Data Lake Gen2, but users must not have access to those credentials. In this case, an additional configuration file may be provided with a storage account key that the cluster may use to access Azure Data Lake Gen2.

To use an additional Hadoop configuration file, set the IMMUTA_INIT_ADDITIONAL_CONF_URI Spark environment variable to be the full URI to this file.

Configurable settings

Data source settings

Protected and unprotected tables

Databricks non-privileged users will only see sources to which they are subscribed in Immuta, and this can present problems if organizations have a data lake full of non-sensitive data and Immuta removes access to all of it. Immuta addresses this challenge by allowing Immuta users to access any tables that are not protected by Immuta (i.e., not registered as a data source or a table in a native workspace). Although this is similar to how privileged users in Databricks operate, non-privileged users cannot bypass Immuta controls.

  • Protected until made available by policy: This setting means that users can only see tables that Immuta has explicitly subscribed them to.

Behavior change

If a table is registered in Immuta and does not have a subscription policy applied to it, that data will be visible to users, even if the Protected until made available by policy setting is enabled.

If you have enabled this setting, author an "Allow individually selected users" global subscription policy that applies to all data sources.

  • Available until protected by policy: This setting means all tables are open until explicitly registered and protected by Immuta. This setting allows both non-Immuta reads and non-Immuta writes:

    • IMMUTA_SPARK_DATABRICKS_ALLOW_NON_IMMUTA_READS: Immuta users with regular (non-privileged) Databricks roles may SELECT from tables that are not registered in Immuta. This setting does not allow reading data directly with commands like spark.read.format("x"). Users are still required to read data and query tables using Spark SQL. When non-Immuta reads are enabled through the cluster policy, Immuta users will see all databases and tables when they run show databases or show tables. However, this does not mean they will be able to query all of them.

    • IMMUTA_SPARK_DATABRICKS_ALLOW_NON_IMMUTA_WRITES: Immuta users with regular (non-privileged) Databricks roles can run DDL commands and data-modifying commands against tables or spaces that are not registered in Immuta. With non-Immuta writes enabled through the cluster policy, users on the cluster can mix any policy-enforced data they may have access to via any registered data sources in Immuta with non-Immuta data and write the ensuing result to a non-Immuta write space where it would be visible to others. If this is not a desired possibility, the cluster should instead be configured to only use Immuta’s project workspaces.

The Configure a Databricks Spark integration guide includes instructions for applying these settings to your cluster.

Ephemeral overrides

In Immuta, a Databricks data source is considered ephemeral, meaning that the compute resources associated with that data source will not always be available.

Ephemeral data sources allow the use of ephemeral overrides, user-specific connection parameter overrides that are applied to Immuta metadata operations.

When a user runs a Spark job in Databricks, the Immuta plugin automatically submits ephemeral overrides for that user to Immuta for all applicable data sources to use the current cluster as compute for all subsequent metadata operations for that user against the applicable data sources.

For more details about ephemeral overrides and how to configure or disable them, see the Ephemeral overrides page.

Restricting users' access with Immuta projects

Immuta projects combine users and data sources under a common purpose. Sometimes this purpose is simply for a single user to organize their data sources or to control an entire schema of data sources through a single projects screen; most often, however, the project is tied to an Immuta purpose for which the data has been approved to be used, which restricts access to data and streamlines team collaboration. Consequently, data owners can restrict access to data for a specified purpose through projects.

When a user is working within the context of a project, data users will only see the data in that project. This helps to prevent data leaks when users collaborate. Users can switch project contexts to access various data sources while acting under the appropriate purpose. Consider adjusting the following project settings to suit your organization's needs:

  • Project UDFs (web service and on-cluster caches): Immuta caches a mapping of user accounts to their current projects both in the Immuta Web Service and on-cluster. When users change their project with UDFs instead of the Immuta UI, Immuta invalidates all of the on-cluster caches (so that the change takes effect immediately), and the cluster submits a request to change the project context to a web worker. Immediately after that request, another call is made to a web worker to refresh the current project. To allow the use of project UDFs in Spark jobs, raise the caching on-cluster and lower the cache timeouts for the Immuta Web Service; otherwise, caching could cause inconsistencies among the requests and calls handled by different web workers when users try to change their project contexts.

  • Preventing users from changing projects within a session: If your compliance requirements restrict users from changing projects within a session, you can block the use of Immuta's project UDFs on a Databricks Spark cluster. To do so, configure the IMMUTA_SPARK_DATABRICKS_DISABLED_UDFS Spark environment variable.
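As a rough sketch, blocking the project UDFs might look like the following in the cluster's environment variables (the UDF names shown are placeholders, not confirmed values; the exact names and list format are documented in the Spark environment variables guide):

# Sketch only: the UDF names below are placeholders.
IMMUTA_SPARK_DATABRICKS_DISABLED_UDFS=<project_udf_1>,<project_udf_2>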

Databricks features

This section describes how Immuta interacts with common Databricks features.

Change data feed

Databricks users can read the Databricks change data feed (CDF) on queried tables if they are allowed to read the raw data and meet specific qualifications. Immuta does not support applying policies to the changed data; the CDF cannot be read for data source tables if the user does not have access to the raw data in Databricks, and it cannot be read for streaming queries.

The CDF can be read if the querying user is allowed to read the raw data and ONE of the following statements is true:

  • the table is in the current workspace

  • the table is in a scratch path

  • non-Immuta reads are enabled AND the table does not intersect with a workspace under which the current user is not acting

  • non-Immuta reads are enabled AND the table is not part of an Immuta data source

Databricks trusted libraries

Security vulnerability

Using this feature could create a security vulnerability, depending on the third-party library. For example, if a library exposes a public method named readProtectedFile that displays the contents of a sensitive file, then trusting that library would allow end users access to that file. Work with your Immuta support professional to determine whether this risk applies to your environment or use case.

The trusted libraries feature allows Databricks cluster administrators to avoid Immuta Security Manager errors when using third-party libraries. An administrator can specify an installed library as trusted, which will enable that library's code to bypass the Immuta security manager. This feature does not impact Immuta's ability to apply policies; trusting a library only allows code through that otherwise would have been blocked by the Security Manager.

The following types of libraries are supported when installing a third-party library using the Databricks UI or the Databricks Libraries API:

  • Library source is Upload, DBFS or DBFS/S3 and the Library Type is Jar.

  • Library source is Maven.

When users install third-party libraries, those libraries will be denied access to sensitive resources by default. However, cluster administrators can specify which of the installed Databricks libraries should be trusted by Immuta. See the Install a trusted library guide to add a trusted library to your configuration.

Limitations

  • Installing trusted libraries outside of the Databricks Libraries API (e.g., ADD JAR ...) is not supported.

  • Databricks installs libraries right after a cluster has started, but there is no guarantee that library installation will complete before a user's code is executed. If a user executes code before a trusted library installation has completed, Immuta will not be able to identify the library as trusted. This can be solved by either

    • waiting for library installation to complete before running any third-party library commands or

    • executing a Spark query, which forces Immuta to wait for any trusted libraries to finish installing before proceeding.

  • When installing a library using Maven as a library source, Databricks will also install any transitive dependencies for the library. However, those transitive dependencies are installed behind the scenes and will not appear as installed libraries in either the Databricks UI or using the Databricks Libraries API. Only libraries specifically listed in the IMMUTA_SPARK_DATABRICKS_TRUSTED_LIB_URIS environment variable will be trusted by Immuta, which does not include installed transitive dependencies. This effectively means that any code paths that include a class from a transitive dependency but do not include a class from a trusted third-party library can still be blocked by the Immuta security manager. For example, if a user installs a trusted third-party library that has a transitive dependency of a file-util library, the user will not be able to directly use the file-util library to read a sensitive file that is normally protected by the Immuta security manager.

    In many cases, it is not a problem if dependent libraries aren't trusted because code paths where the trusted library calls down into dependent libraries will still be trusted. However, if the dependent library needs to be trusted, there is a workaround:

    1. Add the transitive dependency jar paths to the IMMUTA_SPARK_DATABRICKS_TRUSTED_LIB_URIS Spark environment variable. In the driver log4j logs, Databricks outputs the source jar locations when it installs transitive dependencies. In the cluster driver logs, look for a log message similar to the following:

      INFO LibraryDownloadManager: Downloaded library dbfs:/FileStore/jars/maven/org/slf4j/slf4j-api-1.7.25.jar as
      local file /local_disk0/tmp/addedFile8569165920223626894slf4j_api_1_7_25-784af.jar

    2. In the above example, where slf4j is the transitive dependency, add the path dbfs:/FileStore/jars/maven/org/slf4j/slf4j-api-1.7.25.jar to the IMMUTA_SPARK_DATABRICKS_TRUSTED_LIB_URIS environment variable (as sketched below) and restart your cluster.
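    A minimal sketch of that workaround (the first jar path is a placeholder for your own trusted library, and the comma separator is an assumption; check the Spark environment variables guide for the exact list format):

      # Sketch only: placeholder library path and assumed comma separator.
      IMMUTA_SPARK_DATABRICKS_TRUSTED_LIB_URIS=dbfs:/path/to/your-trusted-library.jar,dbfs:/FileStore/jars/maven/org/slf4j/slf4j-api-1.7.25.jar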

External catalogs

Connect a supported external catalog to your Databricks Spark integration so that data owners can tag their data.

External metastores

Immuta supports the use of external metastores in local or remote mode:

  • Local mode: The metastore client running inside a cluster connects to the underlying metastore database directly via JDBC.

  • Remote mode: Instead of connecting to the underlying database directly, the metastore client connects to a separate metastore service via the Thrift protocol. The metastore service connects to the underlying database. When running a metastore in remote mode, DBFS is not supported.

For more details about these deployment modes, see how to set up Databricks clusters to connect to an existing external Apache Hive metastore.

Configure external Hive metastore

Download the metastore jars and point to them as specified in Databricks documentation. Metastore jars must end up on the cluster's local disk at this explicit path: /databricks/hive_metastore_jars.

If using DBR 7.x with Hive 2.3.x, either

  • Set spark.sql.hive.metastore.version to 2.3.7 and spark.sql.hive.metastore.jars to builtin or

  • Download the metastore jars and set spark.sql.hive.metastore.jars to /databricks/hive_metastore_jars/* as before.
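A minimal sketch of the corresponding cluster Spark configuration, showing the first option with the second as a comment:

spark.sql.hive.metastore.version 2.3.7
spark.sql.hive.metastore.jars builtin
# Or, if the metastore jars were downloaded to the cluster's local disk instead:
# spark.sql.hive.metastore.jars /databricks/hive_metastore_jars/*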

Configure AWS Glue Data Catalog

To use AWS Glue Data Catalog as the metastore for Databricks, see the Databricks documentation.

Notebook-scoped libraries on machine learning clusters

Users on Databricks Runtimes 8+ can manage notebook-scoped libraries with %pip commands.

However, this functionality differs from the support for Databricks trusted libraries, and Python libraries are not supported as trusted libraries. The Immuta Security Manager denies code from libraries installed with %pip access to sensitive resources.

Scratch paths

Scratch paths are cluster-specific remote file paths that Databricks users are allowed to read from and write to directly, without restriction. The cluster creator designates the set of remote file paths that serve as scratch paths when configuring the Databricks cluster. Scratch paths are useful when non-sensitive data needs to be written out to a specific location from a Databricks cluster protected by Immuta.

To configure a scratch path, use the IMMUTA_SPARK_DATABRICKS_SCRATCH_PATHS Spark environment variable.
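A minimal sketch (the bucket and path are placeholders, and the format for listing multiple paths is an assumption; see the Spark environment variables guide for the exact syntax):

# Sketch only: placeholder remote path.
IMMUTA_SPARK_DATABRICKS_SCRATCH_PATHS=s3://your-bucket/scratch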

Installation and Compliance

In the Databricks Spark integration, Immuta installs an Immuta-maintained Spark plugin on your Databricks cluster. When a user queries data that has been registered in Immuta as a data source, the plugin injects policy logic into the plan Spark builds so that the results returned to the user only include data that specific user should see.

The sequence diagram below breaks down this process of events when an Immuta user queries data in Databricks.

Immuta intercepts Spark calls to the Metastore. Immuta then modifies the logical plan so that policies are applied to the data for the querying user.

System requirements

  • A Databricks workspace with the Premium tier, which includes cluster policies (required to configure the Spark integration)

  • A cluster that uses one of these supported Databricks Runtimes:

    • 11.3 LTS

    • 14.3 LTS

    For a comparison of features supported for both Databricks Runtimes, see the Databricks Runtime 14.3 section.

  • Supported languages

    • Python

    • R (not supported for Databricks Runtime 14.3 LTS)

    • Scala (not supported for Databricks Runtime 14.3 LTS)

    • SQL

  • A Databricks cluster that is one of these supported compute types:

    • All-purpose compute

    • Job compute

  • A cluster that uses the custom access mode

  • A Databricks workspace and cluster with the ability to directly make HTTP calls to the Immuta web service. The Immuta web service also must be able to connect to and perform queries on the Databricks cluster, and to call Databricks workspace APIs.

What does Immuta do in my Databricks environment?

When an administrator configures the Databricks Spark integration, Immuta generates a cluster policy that the administrator then applies to the Databricks cluster. When the cluster starts after the cluster policy has been applied, the Immuta-provided cluster init script downloads the Spark plugin artifacts onto the cluster and places them in the appropriate locations on local disk for use by Spark.

Once the init script runs, the Spark application running on the Databricks cluster will have the appropriate artifacts on its CLASSPATH to use Immuta for authorization and policy enforcement.

Immuta adds the following artifacts to your Databricks environment:

Immuta-maintained Spark plugin

The Databricks Spark integration injects this Immuta-maintained Spark plugin into the SparkSQL stack at cluster startup time. Policy determinations are obtained from the connected Immuta tenant and applied before returning results to the user. The plugin includes wrappers and Immuta analysis hook plan rewrites to enforce policies.

Immuta Security Manager

Note: The Security Manager is disabled for Databricks Runtime 14.3.

The Immuta Security Manager ensures users can't perform unauthorized actions when using Scala and R, since those languages have features that allow users to circumvent policies without the Security Manager enabled. The Immuta Security Manager blocks users from executing code that could allow them to gain access to sensitive data by only allowing select code paths to access sensitive files and methods. These select code paths provide Immuta's code access to sensitive resources while blocking end users from these sensitive resources directly.

Performance

The Security Manager must inspect the call stack every time a permission check is triggered, which adds overhead to queries. To improve Immuta's query performance on Databricks, Immuta disables the Security Manager when Scala and R are not being used.

The cluster init script checks the cluster’s configuration and automatically removes the Security Manager configuration when

  • spark.databricks.repl.allowedLanguages is a subset of {python, sql}

  • IMMUTA_SPARK_DATABRICKS_PY4J_STRICT_ENABLED is true

When the cluster is configured this way, Immuta can rely on Databricks' process isolation and Py4J security to prevent user code from performing unauthorized actions.

Note: Immuta still expects the spark.driver.extraJavaOptions and spark.executor.extraJavaOptions to be set and pointing at the Security Manager.
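A minimal sketch of a cluster configuration that satisfies the two conditions above (Spark configuration and environment variable, respectively):

spark.databricks.repl.allowedLanguages python,sql
IMMUTA_SPARK_DATABRICKS_PY4J_STRICT_ENABLED=true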

Beyond disabling the Security Manager, Immuta also skips several startup tasks that are only required to secure the cluster when Scala and R are configured, and fewer permission checks occur on the driver and executors in the Databricks cluster, reducing overhead and improving performance.

Caveats

  • There are still cases that require the Security Manager; in those instances, Immuta creates a fallback Security Manager to check the code path, so the IMMUTA_INIT_ALLOWED_CALLING_CLASSES_URI environment variable must always point to a valid calling class file.

  • Databricks’ dbutils is blocked by their Py4J security; therefore, it can’t be used to access scratch paths.

immuta database

When a table is registered in Immuta as a data source, users can see that table both in its native Databricks database and in the immuta database. This gives users the option of using a single database (immuta) for all tables.

The immuta database on Immuta-enabled clusters allows Immuta to track Immuta-managed data sources separately from remote Databricks tables so that policies and other security features can be applied. However, Immuta supports raw tables in Databricks, so table-backed queries do not need to reference this database.

When configuring a Databricks cluster, you can hide immuta from any calls to SHOW DATABASES so that users are not confused or misled by that database. Hiding the database does not disable access to it. Queries can still be performed against tables in the immuta database using the Immuta-qualified table name (e.g., immuta.my_schema_my_table) regardless of whether or not this database is hidden.

To hide the immuta database, use the following environment variable in the Spark cluster configuration when configuring your integration:

IMMUTA_SPARK_SHOW_IMMUTA_DATABASE=false

Then, Immuta will not show this database when a SHOW DATABASES query is performed.
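For example, hiding the database does not block a query that uses the Immuta-qualified name from above:

SELECT * FROM immuta.my_schema_my_table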

Once the Immuta-enabled cluster is running, the following user actions trigger various processes in Immuta. The list below provides an overview of each process:

  • Data source is registered: When a data owner registers a Databricks securable as a data source, data source metadata (column type, securable name, column names, etc.) is retrieved from the Metastore and stored in the Immuta Metadata Database. If tags are then applied to the data source, Immuta stores this metadata in the Metadata Database as well.

  • Data source is deleted: When a data source is deleted, the data source metadata is deleted from the Metadata Database. Depending on the settings configured for the integration, users will either be able to query that data now that it is no longer registered in Immuta, or access to the securable will be revoked for all users. See the Protected and unprotected tables section for details about this setting.

  • Policy is created or edited on a data source: Information about the policy and the columns or securables it applies to is stored in the Metadata Database. When a user queries the data in Databricks, the Spark plugin retrieves the policy information, the user metadata, and the data source metadata from the Metadata Database and injects this information as policy logic into the Spark logical plan. Immuta caches policy information and data source definitions in memory on the Spark application to reduce calls to the Metadata Database and boost performance.

  • A policy is deleted: When a policy is deleted, the policy information is deleted from the Metadata Database. If users were granted access to the data source by that policy, their access is revoked.

  • Databricks user is mapped to Immuta: When a Databricks user is mapped to Immuta, their metadata is stored in the Metadata Database.

  • Databricks user queries data: When a user queries the data in Databricks, Immuta intercepts the call from Spark down to the Metastore. Then, the Immuta-maintained Spark plugin retrieves the policy information, the user metadata, and the data source metadata from the Metadata Database and injects this information as policy logic into the Spark logical plan. Once the physical plan is applied, Databricks returns policy-enforced data to the user.

The image below illustrates these processes and how they interact.

Supported policies

The Databricks Spark integration allows users to author subscription and data policies to enforce access controls. See the corresponding pages for details about specific types of policies supported:

  • Subscription policy access types

  • Data policy types

Databricks Runtime 14.3

Immuta supports clusters on Databricks Runtime 14.3. The integration for this Databricks Runtime differs from the integration for Databricks Runtime 11.3 in the following ways:

  • Security Manager is disabled: The Security Manager is disabled for Databricks Runtime 14.3. Because the Security Manager is used to prevent users from circumventing access controls when using R and Scala, those languages are unsupported. Only Python and SQL clusters are supported.

  • Py4J security and process isolation automatically enabled: Immuta relies on Databricks process isolation and Py4J security to prevent user code from performing unauthorized actions. After selecting Runtime 14.3 during configuration, Immuta will automatically enable process isolation and Py4J security.

  • dbutils is unsupported: Because Immuta relies on Databricks process isolation and Py4J security (as noted above), dbutils is not supported for Databricks Spark integrations using Databricks Runtime 14.3 LTS.

  • Databricks Connect is unsupported: Databricks Connect is unsupported because Py4J security must be enabled to use it.

The table below compares the features supported for clusters on Databricks Runtime 11.3 and Databricks Runtime 14.3.

| Feature | Databricks Runtime 11.3 | Databricks Runtime 14.3 |
| --- | --- | --- |
| Subscription policies | βœ… | βœ… |
| Data policies | βœ… | βœ… |
| Non-Immuta reads and writes | βœ… | βœ… |
| Python | βœ… | βœ… |
| SQL | βœ… | βœ… |
| R | βœ… | ❌ |
| Scala | βœ… | ❌ |
| Immuta project workspaces | βœ… | ❌ |
| Smart mask ordering | βœ… | ❌ |
| Masking and tagging complex columns (STRUCT, ARRAY, MAP) | βœ… | ❌ |
| Photon support | βœ… | ❌ |
| dbutils | βœ… | ❌ |
| Databricks Connect | βœ… | ❌ |
| Write policies | ❌ | ❌ |
| Support for allowlisting networks or local filesystem paths | ❌ | βœ… |

Cluster security and compliance

Authentication methods

The Databricks Spark integration supports the following authentication methods to configure the integration:

  • OAuth machine-to-machine (M2M): Immuta uses the Client Credentials Flow to integrate with Databricks OAuth machine-to-machine authentication, which allows Immuta to authenticate with Databricks using a client secret. Once Databricks verifies the Immuta service principal’s identity using the client secret, Immuta is granted a temporary OAuth token to perform token-based authentication in subsequent requests. When that token expires (after one hour), Immuta requests a new temporary token. See the Databricks OAuth machine-to-machine (M2M) authentication page for more details.

  • Personal access token (PAT): This token gives Immuta temporary permission to push cluster policies to the configured Databricks workspace (overwriting any cluster policy templates previously applied to the workspace) when configuring the integration, and to register securables as Immuta data sources.

Audit

Immuta captures the code or query that triggers the Spark plan in Databricks, making audit records more useful in assessing what users are doing. To audit what triggers the Spark plan, Immuta hooks into Databricks where notebook cells and JDBC queries execute and saves the cell or query text. Then, Immuta pulls this information into the audits of the resulting Spark jobs.

Immuta supports auditing all queries run on a Databricks cluster, regardless of whether users touch Immuta-protected data or not. To configure Immuta to do so, set the IMMUTA_SPARK_AUDIT_ALL_QUERIES environment variable in the Spark cluster configuration when configuring your integration.
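A minimal sketch of that setting (assuming a value of true turns the behavior on):

# Sketch only: audit every query on the cluster, not just queries that touch Immuta-protected data.
IMMUTA_SPARK_AUDIT_ALL_QUERIES=true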

See the Security and compliance guide for more details about the audit capabilities in the Databricks Spark integration.

Protecting the Immuta configuration

Non-administrator users on an Immuta-enabled Databricks cluster must not be able to view or modify the Immuta configuration or the immuta-spark-hive.jar file, as this would create a loophole around Immuta policy enforcement. Databricks secrets allow you to securely apply environment variables to Immuta-enabled clusters.

Databricks secrets can be used in the environment variables configuration section for a cluster by referencing the secret path instead of the actual value of the environment variable. For example, if a user wanted to make the MY_SECRET_ENV_VAR=abcd_1234 value secret, they could instead create a Databricks secret and reference it as the value of that variable by following these steps:

  1. Create the secret scope my_secrets and add a secret with the key my_secret_env_var containing the sensitive environment variable.

  2. Reference the secret in the environment variables section as MY_SECRET_ENV_VAR={{secrets/my_secrets/my_secret_env_var}}.

At runtime, {{secrets/my_secrets/my_secret_env_var}} would be replaced with the actual value of the secret if the owner of the cluster has access to that secret.
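A minimal sketch of those steps using the legacy Databricks CLI (the newer Databricks CLI uses different secrets subcommand syntax; the secret value is the placeholder from above):

databricks secrets create-scope --scope my_secrets
databricks secrets put --scope my_secrets --key my_secret_env_var --string-value "abcd_1234"
# Then reference the secret in the cluster's environment variables section:
MY_SECRET_ENV_VAR={{secrets/my_secrets/my_secret_env_var}}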

Scala clusters

There are limitations to isolation among users in Scala jobs on a Databricks cluster, even when using Immuta's Security Manager. When data is broadcast, cached (spilled to disk), or otherwise saved to SPARK_LOCAL_DIR, it is impossible to distinguish which user's data is contained in each file or block. If you are concerned about this vulnerability, Immuta suggests that you

  • limit Scala clusters to Scala jobs only and

  • require equalized projects, which force all users to act under the same set of attributes, groups, and purposes with respect to their data access. To require that Scala clusters be used in equalized projects and avoid the risk described above, set the IMMUTA_SPARK_REQUIRE_EQUALIZATION Spark environment variable to true (as sketched below). Once this configuration is complete, users on the cluster will need to switch to an Immuta equalized project before running a job. After the first job runs under that equalized project, all subsequent jobs, regardless of user, must run under the same equalized project. If you need to change a cluster's project, you must restart the cluster.
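A minimal sketch of that setting in the cluster's environment variables:

IMMUTA_SPARK_REQUIRE_EQUALIZATION=true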

When data is read in Spark using an Immuta policy-enforced plan, the masking and redaction of rows is performed at the leaf level of the physical Spark plan, so a policy such as "Mask using hashing the column social_security_number for everyone" would be implemented as an expression on a project node right above the FileSourceScanExec/LeafExec node at the bottom of the plan. This process prevents raw data from being shuffled in a Spark application and, consequently, from ending up in SPARK_LOCAL_DIR.

This policy implementation coupled with an equalized project guarantees that data being dropped into SPARK_LOCAL_DIR will have policies enforced and that those policies will be homogeneous for all users on the cluster. Since each user will have access to the same data, if they attempt to manually access other users' cached data, they will only see what they have access to via equalized permissions on the cluster. If project equalization is not turned on, users could dig through that directory and find data from another user with heightened access, which would result in a data leak.

Troubleshooting the installation

The Troubleshooting page has guidance for resolving issues with your installation.
