Databricks Data Source
This page details how to register Databricks data sources using the existing workflow. To register data sources using connections, see this how-to guide.
Requirements
Databricks Spark integration
When exposing a table or view from an Immuta-enabled Databricks cluster, be sure that at least one of these traits is true:
The user exposing the tables has READ_METADATA and SELECT permissions on the target views/tables (specifically if Table ACLs are enabled).
The user exposing the tables is listed in the `immuta.spark.acl.whitelist` configuration on the target cluster.
The user exposing the tables is a Databricks workspace administrator.
Databricks Unity Catalog integration
When exposing a table from Databricks Unity Catalog, be sure the credentials used to register the data sources have the Databricks privileges listed below.
The following privileges on the parent catalogs and schemas of those tables:
SELECT
USE CATALOG
USE SCHEMA
USE SCHEMA on `system.information_schema`
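For reference, granting these privileges in Databricks SQL might look like the sketch below. This is illustrative only: the catalog name, schema name, and `immuta-system-account` principal are placeholders for your own values, and you may prefer to grant SELECT on individual tables rather than the whole schema.

```sql
-- Illustrative only: substitute your own catalog, schema, and registering principal.
GRANT USE CATALOG ON CATALOG my_catalog TO `immuta-system-account`;
GRANT USE SCHEMA ON SCHEMA my_catalog.my_schema TO `immuta-system-account`;
GRANT SELECT ON SCHEMA my_catalog.my_schema TO `immuta-system-account`;
GRANT USE SCHEMA ON SCHEMA system.information_schema TO `immuta-system-account`;
```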
Azure Databricks Unity Catalog limitation
Set all table-level ownership on your Unity Catalog data sources to an individual user or service principal instead of a Databricks group before proceeding. Otherwise, Immuta cannot apply data policies to the table in Unity Catalog. See the Azure Databricks Unity Catalog limitation for details.
Enter connection information
Use SSL
Although not required, it is recommended that all connections use SSL. Additional connection string arguments may also be provided.
Note: Only Immuta uses the connection you provide and injects all policy controls when users query the system. In other words, users always connect through Immuta with policies enforced and have no direct association with this connection.
Navigate to the My Data Sources page.
Click New Data Source.
Select the Databricks tile in the Data Platform section. When exposing a table or view from an Immuta-enabled Databricks cluster, confirm that the requirements listed above are met.
Complete the first four fields in the Connection Information box:
Server: hostname or IP address
Port: port configured for Databricks, typically port 443
SSL: when enabled, ensures communication between Immuta and the remote database is encrypted
Database: the remote database
Select your authentication method from the dropdown:
Access Token:
Enter your Databricks API Token. Use a non-expiring token so that access to the data source is not lost unexpectedly.
Enter the HTTP Path of your Databricks cluster or SQL warehouse.
OAuth machine-to-machine (M2M):
Enter the HTTP Path of your Databricks cluster or SQL warehouse.
Fill out the Token Endpoint with the full URL of the identity provider. This is where the generated token is sent. The default value is `https://<your workspace name>.cloud.databricks.com/oidc/v1/token`.
Fill out the Client ID. This is a combination of letters, numbers, or symbols, used as a public identifier, and is the same as the service principal's application ID.
Enter the Scope (string). The scope limits the operations and roles allowed in Databricks by the access token. See the OAuth 2.0 documentation for details about scopes.
Enter the Client Secret. Immuta uses this secret to authenticate with the authorization server when it requests a token.
If you are using a proxy server with Databricks, specify it in the Additional Connection String Options:
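As an illustration only, proxy settings in the Additional Connection String Options might look like the following. The exact option names depend on the ODBC driver version in use, and the host, port, and credentials shown here are placeholders:

```
UseProxy=1;ProxyHost=proxy.example.com;ProxyPort=8080;ProxyUID=proxy_user;ProxyPWD=proxy_password
```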
Click Test Connection.
Further considerations
Immuta pushes down joins to be processed on the native database when possible. To ensure this happens, make sure the connection information matches between data sources, including host, port, ssl, username, and password. You will see performance degradation on joins against the same database if this information doesn't match.
If a client certificate is required to connect to the source database, you can add it in the Upload Certificates section at the bottom of the form.
Select virtual population
Decide how to virtually populate the data source by selecting one of the options:
Create sources for all tables in this database: This option will create data sources and keep them in sync for every table in the database. New tables will be automatically detected and new Immuta views will be created.
Schema / Table: This option will allow you to specify tables or datasets that you want Immuta to register.
Opt to Edit in the table selection box that appears.
By default, all schemas and tables are selected. Select and deselect by clicking the checkbox to the left of the name in the Import Schemas/Tables menu. You can create multiple data sources at one time by selecting an entire schema or multiple tables.
After making your selection(s), click Apply.
Enter basic information
Enter the SQL Schema Name Format to be the SQL name that the data source exists under in Immuta. It must include a schema macro, but you may personalize the format using lowercase letters, numbers, and underscores. It may have up to 255 characters.
Enter the Schema Project Name Format to be the name of the schema project in the Immuta UI. If you enter a name that already exists, the name will automatically be incremented. For example, if the schema project Customer table already exists and you enter that name in this field, the name for this second schema project will automatically become Customer table 2 when you create it.
When selecting Create sources for all tables in this database and monitor for changes, you may personalize this field as you wish, but it must include a schema macro.
When selecting Schema/Table, this field is prepopulated with the recommended project name and you can edit freely.
Select the Data Source Name Format, which will be the format of the name of the data source in the Immuta UI.
<Tablename>: The data source name will be the name of the remote table, and the case of the data source name will match the case of the macro.
<Schema><Tablename>: The data source name will be the name of the remote schema followed by the name of the remote table, and the case of the data source name will match the cases of the macros.
Custom: Enter a custom template for the Data Source Name. You may personalize this field as you wish, but it must include a tablename macro. The case of the macro will apply to the data source name (i.e., <Tablename> will result in "Data Source Name," <tablename> will result in "data source name," and <TABLENAME> will result in "DATA SOURCE NAME").
Enter the SQL Table Name Format, which will be the format of the name of the table in Immuta. It must include a table name macro, but you may personalize the format using lowercase letters, numbers, and underscores. It may have up to 255 characters.
Enable or disable schema monitoring
Schema monitoring best practices
Schema monitoring is a powerful tool that ensures tables are all governed by Immuta.
Consider using schema monitoring later in your onboarding process, not during your initial setup and configuration when tables are not in a stable state.
Consider using Immuta’s API either to run the schema monitoring job when your ETL process adds new tables or to add the new tables directly.
Activate the new column added templated global policy to protect potentially sensitive data. This policy nulls new columns until a data owner reviews them, preventing data leaks from newly added columns that have not yet been reviewed.
When selecting the Schema/Table option, you can opt to enable Schema Monitoring by selecting the checkbox in this section.
Note: This step will only appear if all tables within a server have been selected for creation.
Create a Schema Detection Job in Databricks
Generate your Immuta API key
Before you can run the script referenced in this tutorial, generate your Immuta API Key from your user profile page. The Immuta API key used in the Databricks notebook job for schema detection must either belong to an Immuta Admin or the user who owns the schema detection groups that are being targeted.
Enable Schema Monitoring or Detect Column Changes on the Data Source creation page.
Click Download Schema Job Detection Template.
Click the Click Here To Download text.
Before you can run the script, create the correct scope and secret by running these commands in the CLI using the Immuta API Key generated on your user profile page:
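For example, with the legacy Databricks CLI the commands might look like the following sketch. The scope name, key name, and secret value are placeholders and must match what the downloaded job template expects:

```bash
# Illustrative only: use the scope and key names the job template expects.
databricks secrets create-scope --scope immuta
databricks secrets put --scope immuta --key apikey --string-value "<your Immuta API key>"
```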
Import the Python script you downloaded into a Databricks workspace as a notebook. Note: The job template has commented-out lines for specifying a particular database or table. With those two lines commented out, the schema detection job will run against ALL databases and tables in Databricks. Additionally, if you need to add proxy configuration to the job template, the template uses the Python requests library, which has a simple mechanism for configuring proxies for a request (see the sketch after these steps).
Schedule the script as part of a notebook job to run as often as required. Each time the job runs, it will make an API call to Immuta to trigger schema detection queries, and these queries will run on the cluster from which the request was made. Note: Use the `api_immuta` cluster for this job. The job in Databricks must use an Existing All-Purpose Cluster so that Immuta can connect to it over ODBC. Job clusters do not support ODBC connections.
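If the job's API calls must go through a proxy, the requests library accepts a proxies mapping per request. A minimal sketch follows; the URL, header, and proxy address are placeholders, so take the real endpoint, headers, and payload from the downloaded job template:

```python
import requests

# Placeholder values -- use the real URL, headers, and payload from the downloaded job template.
IMMUTA_URL = "https://<your-immuta-hostname>/<endpoint-from-template>"
HEADERS = {"Authorization": "<your Immuta API key>"}

# Route the request through an HTTP(S) proxy.
proxies = {
    "http": "http://proxy.example.com:8080",
    "https": "http://proxy.example.com:8080",
}

response = requests.post(IMMUTA_URL, headers=HEADERS, proxies=proxies, timeout=60)
response.raise_for_status()
```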
Opt to configure advanced settings
Although not required, completing these steps will help maximize the utility of your data source. Otherwise, skip to the next step.
Column detection
This setting monitors when remote tables' columns have been changed, updates the corresponding data sources in Immuta, and notifies Data Owners of these changes.
To enable, select the checkbox in this section.
See the Schema projects overview page to learn more about column detection.
Event time
An Event Time column denotes the time associated with records returned from this data source. For example, if your data source contains news articles, the time that the article was published would be an appropriate Event Time column.
Click the Edit button in the Event Time section.
Select the column(s).
Click Apply.
Selecting an Event Time column will enable
more statistics to be calculated for this data source, including the most recent record time, which is used for determining the freshness of the data source.
the creation of time-based restrictions in the policy builder.
Latency
Click Edit in the Latency section.
Complete the Set Time field, and then select MINUTES, HOURS, or DAYS from the subsequent dropdown menu.
Click Apply.
This setting impacts how often Immuta checks for new values in a column that is driving row-level redaction policies. For example, if you are redacting rows based on a country column in the data, and you add a new country, it will not be seen by the Immuta policy until this period expires.
Sensitive data discovery
Data owners can disable sensitive data discovery for their data sources in this section.
Click Edit in this section.
Select Enabled or Disabled in the window that appears, and then click Apply.
Data source tags
Adding tags to your data source allows users to search for the data source using the tags and allows Governors to apply Global policies to the data source. Note that if Schema Detection is enabled, any tags added now will also be added to the tables that are detected.
To add tags,
Click the Edit button in the Data Source Tags section.
Begin typing in the Search by Tag Name box to select your tag, and then click Add.
Tags can also be added after you create your data source from the data source details page on the overview tab or the data dictionary tab.
Create the data source
Click Create to save the data source(s).