
Create a Query-Backed Data Source

Audience: Data Owners

Content Summary: Query-backed data sources are accessible to subscribed Data Users through the Immuta Query Engine and appear as though they are PostgreSQL tables. Although these tables look local, querying one executes a query against the remote storage system or the Immuta caching layer. Immuta policies are enforced either through the query issued to the remote database or by filtering the results automatically.

This guide outlines the process of creating query-backed data sources, such as Amazon Athena, Amazon Redshift, Azure SQL Data Warehouse, BigQuery, Databricks, ElasticSearch, Greenplum, HIVE, IBM DB2, IBM Netezza, Impala, MariaDB, MemSQL, MongoDB, MS SQL Server, MySQL, Oracle, PostgreSQL, PrestoSQL, SAP HANA, Snowflake, Sybase ASE, Teradata, and Vertica.

If your storage technology is not listed above, see the Create an Object-backed Data Source guide.

1 - Create a New Data Source

  1. Click the plus button in the top left of the Immuta console.
  2. Select the Data Source icon.

Alternatively,

  1. Navigate to the My Data Sources page.
  2. Click the New Data Source button in the top right corner.

2 - Select Your Storage Technology

Select the storage technology containing the data you wish to expose by clicking a tile. The list of enabled technologies is configurable and may differ from the image below.

Data Source Creation Storage Technology

3 - Enter Connection Information

Best Practice: Connections Use SSL

Although not required, it is recommended that all connections use SSL. Additional connection string arguments may also be provided.

Note: The connection you provide here is used only by Immuta, which injects all policy controls when users query the system. In other words, users always connect through Immuta with policies enforced and never interact with this connection directly.

  1. Input the connection parameters to the database you're exposing. Click the tabs below for guidance on selected storage technologies.

    Azure SQL

    Immuta supports Azure SQL data sources via the Microsoft SQL Server data source handler.

    Refer to the screenshot below for an example of how to fill out the Connection Information step when creating an Azure SQL data source in Immuta.

    Azure SQL Connection Information

    Connect to Read-Only Replicas

    If your Azure SQL Database supports read-only replicas (such as Azure SQL Hyperscale), you can specify ApplicationIntent=ReadOnly in your connection string to ensure that all Immuta user queries are directed to read-only replicas. Although not strictly required, read-only connections are considered a best practice when creating an Immuta data source. An example configuration is shown below.

    Azure SQL Connection Information Read Only
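    In text form, the key detail of that example is the read-only argument added to the connection (typically entered in the Additional Connection String Options field); the remaining connection fields are filled in as usual:

        ApplicationIntent=ReadOnly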

    BigQuery

    1. Enter your BigQuery service account credentials in the following fields:

      • Account Email Address: email associated with your Google BigQuery account
      • Project: name of the project containing the dataset
      • Dataset: name of the remote dataset
    2. You can choose to enter Additional Connection String Options.

    3. Click Select a File and upload your BigQuery Key File.

      BigQuery Connection Information

    Databricks

    User Requirements to Expose a Table or View

    When exposing a table or view from an Immuta-enabled Databricks cluster, make sure that at least one of the following is true:

    • The user exposing the tables has READ_METADATA and SELECT permissions on the target views/tables (specifically if Table ACLs are enabled).
    • The user exposing the tables is listed in the immuta.spark.acl.whitelist configuration on the target cluster.
    • The user exposing the tables is a Databricks workspace administrator.
    1. Complete the first four fields in the Connection Information box:

      • Server: hostname or IP address
      • Port: port configured for Databricks, typically port 443
      • SSL: when enabled, ensures communication between Immuta and the remote database is encrypted
      • Database: the remote database
    2. Retrieve your Personal Access Token:

      • Navigate to Databricks in your browser, and then click the user account icon in the top right corner of the page.
      • Select User Settings from the dropdown menu.
      • Navigate to the Access Tokens tab, and then click the Generate New Token button.

      Databricks Personal Access Token

      • Enter this token in the Personal Access Token field in the Immuta console.
    3. Retrieve the HTTP Path.

      • Navigate to your cluster in Databricks.
      • Click Advanced at the bottom of the page, and then select the JDBC/ODBC tab.

      Databricks HTTP Path

      • Enter the unique HTTP Path from this page in the HTTP Path field in the Immuta console.
    4. You can then choose to enter Additional Connection String Options or Upload Certificates to connect to the database.

  2. Click the Test Connection button.

    If the connection is successful, a check mark will appear and you will be able to proceed. If an error occurs when attempting to connect, the error will be displayed in the UI. In order to proceed to the next step of data source creation, you MUST be able to connect to this data source using the connection information that you just entered.

Further Considerations

  • Immuta pushes down joins to be processed on the native database when possible. To ensure this happens, make sure the connection information matches between data sources, including host, port, SSL, username, and password. You will see performance degradation on joins against the same database if this information doesn't match.
  • Some storage technologies require different connection information than pictured in this section. Please refer to the tool-tips in the Immuta UI for this step if you need additional guidance.
  • If you are creating an Impala data source against a Kerberized instance of Impala, the username field locks down to your Immuta username unless you possess the IMPERSONATE_HDFS_USER permission.
  • If a client certificate is required to connect to the source database, you can add it in the Upload Certificates section at the bottom of the form.

4 - Select Virtual Population

  1. Decide how to virtually populate the data source by selecting Table or SQL Statement.

    Query-backed Virtual Population

    Note: When creating Hive or Impala data sources, the SQL Statement option is disabled.

  2. Complete the workflow for Table or SQL Statement selection, which are outlined on the tabs below:

    Table

    1. If you choose Table, click Edit in the table selection box that appears.
    2. By default, all tables are selected. Select and deselect tables by clicking the checkbox to the left of the table name in the Import Tables menu. You can create multiple data sources at one time by selecting multiple tables.

    3. After making your selection(s), click Apply.

      Multiple Data Sources

    SQL Statement

    Backing a data source with a SQL statement lets the statement define the data source while the complexity of the statement and the specifics of the database remain hidden. Users simply see a data source in Immuta and a PostgreSQL table in the Immuta Query Engine. An illustrative statement is shown after the steps below.

    1. Before entering a SQL statement, test the statement to verify that it works.
    2. Enter your SQL statement in the text box.
    3. Click Validate Statement.
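    For illustration only, a data source could be backed by a statement along these lines; the schema, table, and column names here are hypothetical:

        SELECT c.customer_id,
               c.country,
               o.order_total
        FROM sales.customers c
        JOIN sales.orders o ON o.customer_id = c.customer_id

    Subscribed users would simply see the resulting columns as a single table, without needing to know about the underlying join.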

5 - Enter Basic Information

Provide information about your source to make it discoverable to users.

  1. Enter the SQL Schema Name Format, the SQL schema name that the data source exists under in the Immuta Query Engine. It must include a schema macro, but you may personalize the format using lowercase letters, numbers, and underscores. It may contain up to 255 characters.
  2. Enter the Schema Project Name Format to be the name of the schema project in the Immuta UI. This field is disabled if the schema project already exists within Immuta.

    1. When selecting Create sources for all tables in this database and monitor for changes, you may personalize this field as you wish, but it must include a schema macro.
    2. When selecting Schema/Table, this field is prepopulated with the recommended project name, and you can edit it freely.
  3. Select the Data Source Name Format, which determines the name shown in the Immuta UI.

    <Tablename>

    The data source name will be the name of the remote table, and the case of the data source name will match the case of the macro.

    <Schema><Tablename>

    The data source name will be the name of the remote schema followed by the name of the remote table, and the case of the data source name will match the case of the macros.

    Custom

    Enter a custom template for the Data Source Name. You may personalize this field as you wish, but it must include a tablename macro. The casing of the macro will apply to the Data Source name (i.e., <Tablename> will result in "Data Source Name", <tablename> will result in "data source name", and <TABLENAME> will result in "DATA SOURCE NAME").

    Custom Schema Name

  4. Enter the SQL Table Name Format, the name of the table in the Immuta Query Engine. It must include a table name macro, but you may personalize the format using lowercase letters, numbers, and underscores. It may contain up to 255 characters.

Data Source Creation Basic Information

6 - Enable or Disable Schema Monitoring

To enable Schema Monitoring, select the checkbox in this section.

Schema Monitoring

Note: This step will only appear if all tables within a server have been selected for creation.

Schema Monitoring with Databricks

In most cases, Immuta’s schema detection job runs automatically from the Immuta web service. For Databricks, that automatic job is disabled because of the ephemeral nature of Databricks clusters. In this case, Immuta requires users to download a schema detection job template (a Python script) and import that into their Databricks workspace. See the tutorial at the bottom of this page for details.

7 - Create the Data Source

Opt to configure settings in the Advanced Options section (outlined below), and then click Create to save the data source(s).

Advanced Options

None of the following options are required. However, completing these steps will help maximize the utility of your data source.

Column Detection

This setting monitors when remote tables' columns have been changed, updates the corresponding data sources in Immuta, and notifies Data Owners of these changes.

Column Detection

To enable, select the checkbox in this section.

See Schema Monitoring Overview to learn more about Table Evolution Detection.

Columns

This section allows you to decide which columns to include in the data source.

  1. Click the Edit button in the Columns section.
  2. By default, all columns are selected. Deselect a column by clicking the checkbox in the top left corner of that column.
  3. When necessary, convert column types by clicking the Type dropdown menu. Note that incorrectly converting a column type will break your data source at query time.
  4. Click Apply.

    Query-backed Select Columns

Further Considerations: Not Null Constraints

These constraints help Business Intelligence (BI) tools send better queries.

  • For SQL query-backed data sources, select Not NULL in the Nullable dropdown menu for any columns that have this constraint in the source database.

  • For table-backed data sources, this constraint is automatically detected.

Event Time

An Event Time column denotes the time associated with records returned from this data source. For example, if your data source contains news articles, the time that the article was published would be an appropriate Event Time column.

  1. Click the Edit button in the Event Time section.
  2. Select the column(s).
  3. Click Apply.

    Query-backed Event Time

Selecting an Event Time column enables time-based statistics for the data source and time-based restrictions in policies.

Latency

  1. Click Edit in the Latency section.
  2. Complete the Set Time field, and then select MINUTES, HOURS, or DAYS from the subsequent dropdown menu.
  3. Click Apply.

This setting impacts the following behaviors:

  • How long Immuta waits before refreshing cached data by querying the native data source. For example, if you only load data once a day in the native source, this setting should be greater than 24 hours. If data is constantly loaded in the native source, you need to decide how much data latency is tolerable versus how much load you want on your data source; however, this is only relevant to Immuta S3, since SQL always interactively queries the native database.
  • How often Immuta checks for new values in a column that is driving row-level redaction policies. For example, if you are redacting rows based on a country column in the data, and you add a new country, it will not be seen by the Immuta policy until this period expires.
  • How long noisy results are cached for a Differential Privacy policy.

    Query-backed Latency

Sensitive Data Detection

Data Owners can enable or disable Sensitive Data Detection for their data sources in this section.

  1. Click Edit in this section.
  2. Select Enabled or Disabled in the window that appears, and then click Apply.

Data Source Tags

Adding tags to your data source allows users to search for the data source by tag and allows Governors to apply Global policies to it.

To add tags,

  1. Click the Edit button in the Data Source Tags section.
  2. Begin typing in the Search by Tag Name box to select your tag, and then click Add.

Add Tag

Tags can also be added after you create your data source from the Data Source details page on the Overview tab or the Data Dictionary tab.

Additional Tutorial

Create a Schema Detection Job in Databricks

Generate Your Immuta API Key

Before you can run the script referenced in this tutorial, generate your Immuta API Key from your user profile page. The Immuta API key used in the Databricks notebook job for schema detection must belong either to an Immuta Admin or to the user who owns the schema detection groups being targeted.

  1. Enable Schema Monitoring or Detect Column Changes on the Data Source creation page.

  2. Click Download Schema Detection Job Template.

    Download Schema Detection Job

  3. Click the Click Here To Download text.

    Schema Detection Template Modal

  4. Before you can run the script, create the correct scope and secret by running these commands in the CLI using the Immuta API Key generated on your user profile page:

        # Create a secret scope named "auth" for the schema detection job
        databricks secrets create-scope --scope auth
        # Store your Immuta API key under the key "apikey"; you will be prompted for the secret value
        databricks secrets put --scope auth --key apikey
    
  5. Import the Python script you downloaded into a Databricks workspace as a notebook. Note: The job template has commented out lines for specifying a particular database or table. With those two lines commented out, the schema detection job will run against ALL databases and tables in Databricks. Additionally, if you need to add proxy configuration to the job template, the template uses the Python requests library, which has a simple mechanism for configuring proxies for a request.
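     If you do need a proxy, the sketch below shows only the general pattern, not the actual template contents: reading the API key from the secret scope created in step 4 with dbutils.secrets.get and passing a proxies mapping to the requests library. The proxy URL is hypothetical, and the template's own variable names and calls may differ.

        # Hypothetical sketch for a Databricks notebook; dbutils is available in the notebook environment.
        import requests

        # Read the Immuta API key stored in step 4 (how the key is attached to requests is handled by the template).
        api_key = dbutils.secrets.get(scope="auth", key="apikey")

        # Route outbound HTTP calls through a proxy (hypothetical address).
        proxies = {
            "http": "http://proxy.example.com:3128",
            "https": "http://proxy.example.com:3128",
        }

        # Any request the job makes can accept the proxies mapping, for example:
        requests.get("https://example.com", proxies=proxies, timeout=30)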

  6. Schedule the script as part of a notebook job to run as often as required. Each time the job runs, it will make an API call to Immuta to trigger schema detection queries, and these queries will run on the cluster from which the request was made. Note: Use the api_immuta cluster for this job. The job in Databricks must use an Existing All-Purpose Cluster so that Immuta can connect to it over ODBC. Job clusters do not support ODBC connections.

What's Next

Now that you've created a data source, you can choose to continue to the next page or to one of these tutorials:

  • Manage Data Sources
  • Write a Local Policy