Query Data

Querying Snowflake Data

Prerequisites:

  • Snowflake integration configured with Immuta

  • Snowflake tables registered as Immuta data sources

Query data with Snowflake table grants

  1. Execute the USE SECONDARY ROLES ALL command or change your role to the table grants role.

  2. Query the data as you normally would in Snowflake.

    SELECT * FROM emp_basic LIMIT 100;
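The two steps above can run as a single Snowflake session. The role name below is a placeholder for the table grants role Immuta manages in your account:

```sql
-- Option A: enable all secondary roles so the table grants role applies
USE SECONDARY ROLES ALL;

-- Option B: switch directly to the table grants role
-- (placeholder name -- substitute the role Immuta created)
-- USE ROLE IMMUTA_TABLE_GRANTS_ROLE;

-- Then query as usual
SELECT * FROM emp_basic LIMIT 100;
```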

Query data without Snowflake table grants

Prerequisite: Users have been granted SELECT privileges on all relevant Snowflake tables

Query the data as you normally would in Snowflake:

SELECT * FROM emp_basic LIMIT 100;

Querying Databricks Data

Prerequisites:

  • Databricks Unity Catalog integration configured with Immuta

  • Databricks tables registered as Immuta data sources

Query data with Python

  1. Create a new notebook in your Databricks workspace.

  2. Query the Immuta-protected data as you normally would:

    df = spark.sql('select * from database.table_name')
    df.show()

Query data with SQL

  1. Create a new notebook in your Databricks workspace.

  2. Query the Immuta-protected data as you normally would:

    select * from database.table_name;

Querying Starburst (Trino) Data

Prerequisites:

  • Starburst integration configured with Immuta

  • Starburst tables registered as Immuta data sources

Query data

  1. Use your tool of choice to connect to Starburst.

  2. Query the Immuta-protected data as you normally would:

    select * from "tpch"."sf1000"."customer" limit 100

Querying Databricks SQL Data

Prerequisites:

  • Databricks Unity Catalog integration configured with Immuta

  • Databricks SQL tables registered as Immuta data sources

Query data

  1. Select SQL from the navigation menu in Databricks.

  2. Click Create → Query.

  3. Run your query as you normally would:

    SELECT
      concat(pickup_zip, '-', dropoff_zip) as route,
      AVG(fare_amount) as average_fare
    FROM
      `samples`.`nyctaxi`.`trips`
    GROUP BY
      1
    ORDER BY
      2 DESC
    LIMIT 1000

Querying Redshift Data

Prerequisites:

  • Redshift integration configured with Immuta

  • Redshift tables registered as Immuta data sources

  • REVOKE users' access to raw tables

Query data

  1. Use your tool of choice to connect to Redshift.

  2. Select the Immuta database name used when configuring the integration.

  3. Query the Immuta-protected data, which takes the form of immuta_database.backing_schema.table_name:

    1. Immuta Database: The Immuta database name used when configuring the integration.

    2. Backing Schema: The schema that houses the backing tables of your Immuta data sources.

    3. Table Name: The name of the table backing your Immuta data sources.

  4. Run your query (it is recommended that you include the full three-part path, starting with the Immuta database). It should look something like this:

    select * from immuta_database.backing_schema.table_name limit 100
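As a concrete illustration of the three-part naming, suppose the Immuta database was named `immuta` when the integration was configured, and a data source is backed by the table `customer` in the schema `analytics` (all three names here are hypothetical):

```sql
-- immuta     -> Immuta database name set when configuring the integration
-- analytics  -> schema housing the backing table
-- customer   -> table backing the Immuta data source
select * from immuta.analytics.customer limit 100;
```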

Querying Azure Synapse Analytics Data

Prerequisites:

  • Azure Synapse Analytics integration configured with Immuta

  • Azure Synapse Analytics tables registered as Immuta data sources

  • REVOKE users' access to raw tables

  • GRANT users' access to the Immuta schema

Query data

  1. Click the Data menu in Synapse Studio.

  2. Click the Workspace tab.

  3. Expand the databases, and you should see the dedicated pool you specified when configuring the integration.

  4. Expand the dedicated pool, and you should see the Immuta schema you created when configuring the integration.

  5. Select that schema.

  6. Select New SQL script and then Empty script.

  7. Run your query (note that Synapse does not support LIMIT and the SQL is case sensitive). It should look something like this:

    SELECT TOP 100 * FROM immuta_schema.backing_database_backing_table;
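Because dedicated SQL pools do not support LIMIT, row restriction uses TOP; pairing it with ORDER BY makes the returned 100 rows deterministic. The schema and table names below are placeholders:

```sql
-- TOP replaces LIMIT in Synapse dedicated SQL pools;
-- ORDER BY makes the selected 100 rows deterministic.
-- immuta_schema and the table name are placeholders.
SELECT TOP 100 *
FROM immuta_schema.backing_database_backing_table
ORDER BY 1;
```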