Databricks Unity Catalog Integration Reference

Databricks Unity Catalog allows you to manage and access data across all of the workspaces in your Databricks account and introduces fine-grained access controls in Databricks.

Immuta’s integration with Unity Catalog allows you to manage multiple Databricks workspaces through Unity Catalog while protecting your data with Immuta policies. Instead of manually creating UDFs or granting access to each table in Databricks, you can author your policies in Immuta and have Immuta manage and enforce Unity Catalog access-control policies on your data in Databricks clusters or SQL warehouses:

  • Subscription policies: Immuta subscription policies automatically grant and revoke access to Databricks tables.
  • Data policies: Immuta data policies enforce row- and column-level security without creating views, so users can query tables as they always have without their workflows being disrupted.
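
For context, the Unity Catalog controls that Immuta automates correspond to Databricks SQL statements like the following. This is a minimal sketch with hypothetical names (analyst@example.com, prod_catalog.sales.orders); Immuta issues the equivalent operations automatically as policies and user entitlements change.

    -- Table-level access that a subscription policy would grant or revoke
    GRANT SELECT ON TABLE prod_catalog.sales.orders TO `analyst@example.com`;
    REVOKE SELECT ON TABLE prod_catalog.sales.orders FROM `analyst@example.com`;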

Unity Catalog object model

Unity Catalog uses the following hierarchy of data objects:

  • Metastore: Created at the account level and attached to one or more Databricks workspaces. The metastore contains metadata for all the catalogs, schemas, and tables available to query. All clusters in a workspace use the configured metastore, and all workspaces configured to use the same metastore share those objects.
  • Catalog: A catalog sits on top of schemas (also called databases) and tables to manage permissions across a set of schemas.
  • Schema: Organizes tables and views.
  • Table: Tables can be managed or external tables.

For details about the Unity Catalog object model, see the Databricks Unity Catalog documentation.
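
As a minimal Databricks SQL sketch of this hierarchy (hypothetical names throughout), note that every table is addressed by its three-level catalog.schema.table name:

    -- Create one object at each level of the hierarchy
    CREATE CATALOG IF NOT EXISTS prod_catalog;
    CREATE SCHEMA IF NOT EXISTS prod_catalog.sales;
    CREATE TABLE IF NOT EXISTS prod_catalog.sales.orders (id INT, amount DECIMAL(10, 2));

    -- Query the table by its full three-level name
    SELECT id, amount FROM prod_catalog.sales.orders;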


Unity Catalog supports managing permissions at the Databricks account level through controls applied directly to objects in the metastore. To interact with the metastore and apply controls to any table, Immuta requires a personal access token (PAT) for an Immuta system account user with permissions to manage all data protected by Immuta. See the permissions requirements section for a list of specific Databricks privileges.
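
Purely as an illustration of the kind of grants involved (the principal name and catalog below are hypothetical; consult the permissions requirements section for the authoritative list), metastore- and catalog-level grants look like this in Databricks SQL:

    -- Illustration only; see the permissions requirements section for the actual privileges
    GRANT CREATE CATALOG ON METASTORE TO `immuta-system@example.com`;
    GRANT USE CATALOG, USE SCHEMA, SELECT, MODIFY
      ON CATALOG prod_catalog TO `immuta-system@example.com`;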

Immuta uses this Immuta system account user to run queries that set up all the tables, user-defined functions (UDFs), and other data necessary for policy enforcement. Upon enabling the native integration, Immuta will create a catalog named after your provided workspaceName that contains two schemas:

  • immuta_system: Contains internal Immuta data.
  • immuta_policies: Contains policy UDFs.

When policies require changes to be pushed to Unity Catalog, Immuta updates the internal tables in the immuta_system schema with the updated policy information. If necessary, new UDFs are pushed to replace any out-of-date policies in the immuta_policies schema and any row filters or column masks are updated to point at the new policies. Many of these operations require compute on the configured Databricks cluster or SQL endpoint, so compute must be available for these policies to succeed.
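
The sketch below shows what such policy objects look like in Databricks SQL. The function, group, and table names are hypothetical, and the actual UDFs Immuta generates are managed for you inside the immuta_policies schema:

    -- Hypothetical row filter UDF: members of 'admins' see every row, others only region = 'US'
    CREATE OR REPLACE FUNCTION my_workspace.immuta_policies.us_only(region STRING)
      RETURNS BOOLEAN
      RETURN IS_ACCOUNT_GROUP_MEMBER('admins') OR region = 'US';

    -- Hypothetical column mask UDF: hash the value for everyone outside 'admins'
    CREATE OR REPLACE FUNCTION my_workspace.immuta_policies.mask_ssn(ssn STRING)
      RETURNS STRING
      RETURN CASE WHEN IS_ACCOUNT_GROUP_MEMBER('admins') THEN ssn ELSE sha2(ssn, 256) END;

    -- Point the table's row filter and column mask at the new policy functions
    ALTER TABLE prod_catalog.sales.customers
      SET ROW FILTER my_workspace.immuta_policies.us_only ON (region);
    ALTER TABLE prod_catalog.sales.customers
      ALTER COLUMN ssn SET MASK my_workspace.immuta_policies.mask_ssn;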

Policy enforcement

Immuta’s Unity Catalog integration applies Databricks table-, row-, and column-level security controls that are enforced natively within Databricks. Immuta's management of these Databricks security controls is automated and ensures that they synchronize with Immuta policy or user entitlement changes.

  • Table-level security: Immuta manages REVOKE and GRANT privileges on securable objects in Databricks through subscription policies. When you create a subscription policy in Immuta, Immuta uses the Unity Catalog API to issue GRANTS or REVOKES against the catalog, schema, or table in Databricks for every user affected by that subscription policy.
  • Row-level security: Immuta applies SQL UDFs to restrict access to rows for querying users.
  • Column-level security: Immuta applies column-mask SQL UDFs to tables for querying users. These column-mask UDFs run for any column that requires masking.

The Unity Catalog integration supports the subscription and data policy types listed in the feature support matrix below.

Policy exemption groups

Some users may need to be exempt from masking and row-level policy enforcement. When you add user accounts to the configured exemption group in Databricks, Immuta will not enforce policies for those users. Exemption groups are created when the Unity Catalog integration is configured, and no policies will apply to these users' queries, regardless of the policies enforced on the tables they query.

The principal used to register a data source in Immuta is automatically added to the exemption group for that Databricks table. Consequently, the principals used to register data sources in Immuta should be limited to service accounts.

Policy support with hive_metastore

When you enable Unity Catalog support in Immuta, the catalog for all Databricks data sources is updated to point at the default hive_metastore catalog. Internally, Databricks exposes this catalog as a proxy to the workspace-level Hive metastore that held schemas and tables before Unity Catalog. Because this catalog is not a true Unity Catalog catalog, it does not support Unity Catalog policies. Therefore, Immuta ignores data sources in hive_metastore in any Databricks Unity Catalog integration, and policies are not applied to tables there.

However, with Databricks metastore magic you can use hive_metastore and enforce subscription and data policies with the Databricks Spark integration.
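
For example, a table kept in the legacy workspace Hive metastore remains addressable through the hive_metastore proxy catalog (the table name below is hypothetical), but no Unity Catalog grant, row filter, or column mask can attach to it:

    -- Queryable through the proxy catalog, but outside Unity Catalog policy enforcement
    SELECT * FROM hive_metastore.default.legacy_orders;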

Immuta data sources in Unity Catalog

The Unity Catalog data object model introduces a 3-tiered namespace, as outlined above. Consequently, your Databricks tables registered as data sources in Immuta will reference the catalog, schema (also called a database), and table.

External data connectors and query-federated tables

External data connectors and query-federated tables are preview features in Databricks. See the Databricks documentation for details about the support and limitations of these features before registering them as data sources in the Unity Catalog integration.

Unity Catalog audit

The Databricks Unity Catalog integration audits user queries run in clusters or SQL warehouses for deployments configured with either the Databricks Spark integration with Unity Catalog support or the Databricks Unity Catalog integration. Unity Catalog native audit does not support deployments that use the Databricks Spark integration and the Databricks Unity Catalog integration simultaneously. See the Unity Catalog native audit page for details.

Configuration requirements

See the Enable Unity Catalog guide for a list of requirements.

Supported Databricks cluster configurations

The table below outlines the integrations supported for various Databricks cluster configurations. For example, the only integration available to enforce policies on a cluster configured to run on Databricks Runtime 9.1 is the Databricks Spark integration.

Example cluster | Databricks Runtime | Unity Catalog in Databricks | Databricks Spark integration | Databricks Spark with Unity Catalog support | Databricks Unity Catalog integration
--------------- | ------------------ | --------------------------- | ---------------------------- | ------------------------------------------- | ------------------------------------
Cluster 1       | 9.1                | Unavailable                 | ✅                           | ⛔                                          | Unavailable
Cluster 2       | 10.4               | Unavailable                 | ✅                           | ⛔                                          | Unavailable
Cluster 3       | 11.3               | ⛔                          | ✅ / ⛔                      | ⛔ / ✅                                     | Unavailable
Cluster 4       | 11.3               | ✅                          | ⛔                           | ✅                                          | ⛔
Cluster 5       | 11.3               | ✅                          | ✅                           | ⛔                                          | ✅


  • ✅ The feature or integration is enabled.
  • ⛔ The feature or integration is disabled.

Feature support matrix

The table below outlines which Databricks and Immuta features are supported by the Databricks Spark integration with Unity Catalog support and the Unity Catalog integration.

Before migrating to Unity Catalog, understanding how the Databricks Spark and Unity Catalog integrations compare and impact your workflows in Databricks can help you configure the integration that suits your organization's needs. For example, if you use Immuta’s Databricks Spark integration for R or Scala multitenancy, you will likely select the Databricks Spark integration.

Use this table to identify what your Databricks workflows rely on to determine which integration is best for you.

Feature | Databricks Spark integration with Unity Catalog support | Unity Catalog integration
------- | -------------------------------------------------------- | -------------------------
Audit Immuta users | ✅ | ✅
Audit non-Immuta users | ❌ | ✅
Change Data Feed | ✅ | ❌
Databricks access control can function with a non-connected Immuta | ❌ | ✅
Equalized entitlements with projects | ✅ | ❌
File types | Avro, CSV, Delta, ORC, Parquet | Delta, Parquet
Multiple IAMs on a single cluster | ❌ | ❌
Non-Immuta reads and writes | ❌ | ✅
Objects housed in Hive Metastore | ✅ | ❌
Photon support | ❌ | ✅
Project equalization | ✅ | ❌
Column masking policies on tables | ✅ | ✅
Column masking policies on views | ✅ | ❌
Direct file-to-SQL reads | ✅ | ❌
Mixing masking policies on same column | ✅ | ❌
R and Scala for multitenancy | ✅ | ❌
Row-redaction policies on tables | ✅ | ✅
Row-redaction policies on views | ✅ | ❌
Smart order masking (to boost performance) | ✅ | ❌
Subscription policies on tables | ✅ | ✅
Subscription policies on views | ✅ | ✅
Supported masking policies | conditional masking, constant, custom masking, format preserving masking, hashing, k-anonymization, null, randomized response, regex, reversibility, rounding (date and numeric), struct data types | conditional masking, constant, custom masking, hashing, null, regex, rounding (numeric)
Scratch paths | ✅ | ❌
Write controls with projects | ✅ | ❌
User impersonation | ✅ | ❌
Policy enforcement on raw Spark reads | ✅ | ❌
Python UDFs for advanced masking functions | ✅ | ❌

Unity Catalog limitations

  • Row access policies with more than 1023 columns are unsupported. This is an underlying limitation of UDFs in Databricks. Immuta creates row access policies with the minimum number of referenced columns, so the limit applies to the number of columns referenced in the policy, not the total number of columns in the table.
  • Policy enforcement on views is not fully supported. Views registered as data sources will only have subscription policies applied to them; row- and column-level policies cannot be applied to views.
  • Native workspaces are not supported. Creating a native workspace on a Unity Catalog-enabled host is undefined behavior and may cause data loss or crashes.
  • User impersonation is not supported.
  • If you disable table grants, Immuta revokes the grants. Therefore, users who had access to a table before enabling Immuta will lose that access.
  • Immuta projects are unsupported.
  • When creating a regex masking policy in this integration, you must use the global flag (g), and you cannot use the case-insensitive flag (i). See the examples below and the sketch that follows:

      • regex with a global flag (supported): /^ssn|social ?security$/g
      • regex without a global flag (unsupported): /^ssn|social ?security$/
      • regex with a case-insensitive flag (unsupported): /^ssn|social ?security$/gi
      • regex without a case-insensitive flag (supported): /^ssn|social ?security$/g
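
As an assumption about how a supported policy could translate to Databricks SQL (this is not Immuta's documented implementation), the global flag expresses all-occurrence replacement, which is regexp_replace's default behavior:

    -- Hypothetical translation of the supported policy above;
    -- regexp_replace replaces every match of the pattern
    SELECT regexp_replace(note, '^ssn|social ?security$', 'REDACTED') AS note
    FROM prod_catalog.sales.customers;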

Known issues

  • Date/time and struct columns may cause errors in policy application and should be avoided in column masking policies.
  • Snippets for Databricks data sources may be empty in the Immuta UI.
  • Some custom WHERE clause policies will cause policy enforcement errors in Unity Catalog. These policies should be avoided if errors occur during policy application.

Configure the Databricks Unity Catalog integration.