Create a Databricks Data Source

Enter connection information

Best Practice: Connections Use SSL

Although not required, it is recommended that all connections use SSL. Additional connection string arguments may also be provided.

Note: The connection you provide is used only by Immuta, which injects all policy controls when users query the system. In other words, users always connect through Immuta with policies enforced and are never directly associated with this connection.

  1. Navigate to the My Data Sources page.
  2. Click New Data Source.
  3. Select the Databricks tile in the Data Platform section.

    User Requirements to Expose a Table or View

    When exposing a table or view from an Immuta-enabled Databricks cluster, be sure that at least one of these traits is true:

    • The user exposing the tables has READ_METADATA and SELECT permissions on the target views/tables (specifically if Table ACLs are enabled).
    • The user exposing the tables is listed in the immuta.spark.acl.whitelist configuration on the target cluster.
    • The user exposing the tables is a Databricks workspace administrator.
  4. Complete the first four fields in the Connection Information box:

    • Server: hostname or IP address
    • Port: port configured for Databricks, typically port 443
    • SSL: when enabled, ensures communication between Immuta and the remote database is encrypted
    • Database: the remote database
  5. Enter your Databricks API Token. Use a non-expiring token so that access to the data source is not lost unexpectedly.

  6. If you are using a proxy server with Databricks, specify it in the Additional Connection String Options:

    UseProxy=1;ProxyHost=my.host.com;ProxyPort=6789
    
  7. Click Test Connection.
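If Test Connection fails, it can help to verify the server, port, and API token independently before retrying. The sketch below builds a request against the Databricks REST API (`/api/2.0/clusters/list`) purely as a reachability and credentials check; the workspace hostname and token shown are placeholders you would replace with your own values.

```python
import requests  # needed once the live check below is uncommented

def build_token_check(host, token, port=443):
    """Return the URL and headers for a simple reachability/credentials check."""
    url = f"https://{host}:{port}/api/2.0/clusters/list"
    headers = {"Authorization": f"Bearer {token}"}
    return url, headers

# Placeholder values -- substitute your workspace hostname and API token:
url, headers = build_token_check("my-workspace.cloud.databricks.com", "dapiXXXX")
# Uncomment to run the live check; HTTP 200 means host, port, and token are valid:
# print(requests.get(url, headers=headers, timeout=10).status_code)
```

A 403 response here usually points to an invalid or expired token, while a connection timeout points to the host or port.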

Further Considerations

  • Immuta pushes down joins to be processed on the native database when possible. To ensure this happens, make sure the connection information matches between data sources, including host, port, SSL, username, and password. Joins against the same database will suffer degraded performance if this information doesn't match.
  • If a client certificate is required to connect to the source database, you can add it in the Upload Certificates section at the bottom of the form.

Select virtual population

  1. Decide how to virtually populate the data source by selecting Create sources for all tables in this database and monitor for changes or Schema/Table.

  2. Complete the workflow for Create sources for all tables in this database and monitor for changes or Schema/Table selection, which are outlined on the tabs below:

    Create sources for all tables in this database and monitor for changes

    Selecting this option will create and keep in sync all data sources within this database. New schemas will be automatically detected and the corresponding data sources and schema projects will be created.

    Schema/Table

    Selecting this option will create and keep in sync all tables within the schema(s) selected. No new schemas will be detected.

    1. If you choose Schema/Table, click Edit in the table selection box that appears.
    2. By default, all schemas and tables are selected. Select and deselect by clicking the checkbox to the left of the name in the Import Schemas/Tables menu. You can create multiple data sources at one time by selecting an entire schema or multiple tables.

    3. After making your selection(s), click Apply.

Enter basic information

Provide information about your source to make it discoverable to users.

  1. Enter the SQL Schema Name Format, which will be the SQL name that the data source exists under in Immuta. It must include a schema macro, but you may personalize the format using lowercase letters, numbers, and underscores. It may be up to 255 characters.
  2. Enter the Schema Project Name Format to be the name of the schema project in the Immuta UI. If you enter a name that already exists, the name will automatically be incremented. For example, if the schema project Customer table already exists and you enter that name in this field, the name for this second schema project will automatically become Customer table 2 when you create it.

    1. When selecting Create sources for all tables in this database and monitor for changes you may personalize this field as you wish, but it must include a schema macro.
    2. When selecting Schema/Table this field is prepopulated with the recommended project name and you can edit freely.
  3. Select the Data Source Name Format, which will be the format of the name of the data source in the Immuta UI.

    <Tablename>

    The data source name will be the name of the remote table, and the case of the data source name will match the case of the macro.

    <Schema><Tablename>

    The data source name will be the name of the remote schema followed by the name of the remote table, and the case of the data source name will match the cases of the macros.

    Custom

    Enter a custom template for the Data Source Name. You may personalize this field as you wish, but it must include a tablename macro. The case of the macro will apply to the data source name (i.e., <Tablename> will result in "Data Source Name," <tablename> will result in "data source name," and <TABLENAME> will result in "DATA SOURCE NAME").

  4. Enter the SQL Table Name Format, which will be the format of the name of the table in Immuta. It must include a table name macro, but you may personalize the format using lowercase letters, numbers, and underscores. It may have up to 255 characters.

Enable or disable schema monitoring

Schema monitoring best practices

Schema monitoring is a powerful tool that ensures tables are all governed by Immuta.

  • Consider using schema monitoring later in your onboarding process, not during your initial setup and configuration when tables are not in a stable state.
  • Consider using Immuta’s API to either run the schema monitoring job when your ETL process adds new tables or to add new tables.
  • Activate the new column added templated global policy to protect potentially sensitive data. This policy nulls new columns until a data owner reviews them, preventing data leaks from newly added columns that have not yet been reviewed.
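As a sketch of the API-driven approach above, an ETL pipeline can trigger the schema monitoring job immediately after it creates new tables, rather than waiting for a scheduled run. The hostname, endpoint path, and header shown below are illustrative placeholders — confirm the exact schema detection route and authentication header in the Immuta API reference for your version.

```python
import requests  # needed once the POST below is uncommented

def trigger_schema_detection(immuta_url, api_key):
    """Build the request that asks Immuta to re-scan for new or changed tables.

    The endpoint path is a placeholder -- confirm the exact schema detection
    route in the Immuta API reference for your version.
    """
    headers = {"Authorization": api_key, "Content-Type": "application/json"}
    endpoint = f"{immuta_url}/dataSource/detectRemoteChanges"  # placeholder path
    return endpoint, headers

# Call this from your ETL pipeline right after new tables are created,
# using the API key generated from your user profile page:
endpoint, headers = trigger_schema_detection("https://my-immuta.example.com",
                                             "<your-immuta-api-key>")
# requests.post(endpoint, headers=headers, timeout=30)  # uncomment to run
```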

When selecting the Schema/Table option, you can opt to enable Schema Monitoring by selecting the checkbox in this section.

Note: This step will only appear if all tables within a server have been selected for creation.

Create a Schema Detection Job in Databricks

Generate Your Immuta API Key

Before you can run the script referenced in this tutorial, generate your Immuta API Key from your user profile page. The Immuta API key used in the Databricks notebook job for schema detection must either belong to an Immuta Admin or the user who owns the schema detection groups that are being targeted.

  1. Enable Schema Monitoring or Detect Column Changes on the Data Source creation page.

  2. Click Download Schema Job Detection Template.

  3. Click the Click Here To Download text.

  4. Before you can run the script, create the correct scope and secret by running these commands in the CLI using the Immuta API Key generated on your user profile page:

        databricks secrets create-scope --scope auth
        databricks secrets put --scope auth --key apikey
    
  5. Import the Python script you downloaded into a Databricks workspace as a notebook. Note: The job template includes commented-out lines for targeting a particular database or table; with those two lines left commented out, the schema detection job runs against all databases and tables in Databricks. If you need to add proxy configuration to the job template, the template uses the Python requests library, which provides a simple mechanism for configuring proxies for a request.

  6. Schedule the script as part of a notebook job to run as often as required. Each time the job runs, it will make an API call to Immuta to trigger schema detection queries, and these queries will run on the cluster from which the request was made. Note: Use the api_immuta cluster for this job. The job in Databricks must use an Existing All-Purpose Cluster so that Immuta can connect to it over ODBC. Job clusters do not support ODBC connections.
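Inside the notebook, the template can read the API key from the secret scope created above and, if needed, route its calls through a proxy using the requests library's proxies mechanism. The helper below sketches that proxy configuration; the host and port match the `UseProxy=1;ProxyHost=my.host.com;ProxyPort=6789` placeholder used earlier, and `dbutils.secrets.get` is only available inside a Databricks notebook.

```python
# Inside a Databricks notebook, read the API key from the scope/key created earlier:
# api_key = dbutils.secrets.get(scope="auth", key="apikey")

def proxy_config(proxy_host, proxy_port):
    """Build a proxies mapping in the form the requests library expects."""
    proxy = f"http://{proxy_host}:{proxy_port}"
    return {"http": proxy, "https": proxy}

# Matches the ProxyHost/ProxyPort placeholders from the connection string example:
proxies = proxy_config("my.host.com", 6789)
# Pass this to any requests call the template makes, e.g.:
# requests.post(url, headers=headers, proxies=proxies, timeout=30)
```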

Opt to configure advanced settings

Although not required, completing these steps will help maximize the utility of your data source. Otherwise, skip to the next step.

Column Detection

This setting monitors when remote tables' columns have been changed, updates the corresponding data sources in Immuta, and notifies Data Owners of these changes.

To enable, select the checkbox in this section.

See Schema Projects Overview to learn more about Column Detection.

Event Time

An Event Time column denotes the time associated with records returned from this data source. For example, if your data source contains news articles, the time that the article was published would be an appropriate Event Time column.

  1. Click the Edit button in the Event Time section.
  2. Select the column(s).
  3. Click Apply.

Selecting an Event Time column enables time-based operations on the data source, such as the Latency setting described below.

Latency

  1. Click Edit in the Latency section.
  2. Complete the Set Time field, and then select MINUTES, HOURS, or DAYS from the subsequent dropdown menu.
  3. Click Apply.

This setting impacts how often Immuta checks for new values in a column that is driving row-level redaction policies. For example, if you are redacting rows based on a country column in the data, and you add a new country, it will not be seen by the Immuta policy until this period expires.

Sensitive Data Discovery

Data Owners can disable Sensitive Data Discovery for their data sources in this section.

  1. Click Edit in this section.
  2. Select Enabled or Disabled in the window that appears, and then click Apply.

Data Source Tags

Adding tags to your data source allows users to search for the data source using those tags and allows Governors to apply Global policies to it. Note that if Schema Detection is enabled, any tags added now will also be added to the tables that are detected.

To add tags:

  1. Click the Edit button in the Data Source Tags section.
  2. Begin typing in the Search by Tag Name box to select your tag, and then click Add.

Tags can also be added after you create your data source from the Data Source details page on the Overview tab or the Data Dictionary tab.

Create the data source

Click Create to save the data source(s).