Query Data
Querying Snowflake Data

Prerequisites:

  • Snowflake integration configured with Immuta

  • Snowflake tables registered as Immuta data sources

Query data with Snowflake table grants

  1. Execute the USE SECONDARY ROLES ALL command or change your role to the table grants role.

  2. Query the data as you normally would in Snowflake:

SELECT * FROM emp_basic LIMIT 100;

Query data without Snowflake table grants

Prerequisite: Users have been granted SELECT privileges on all relevant Snowflake tables.

Query the data as you normally would in Snowflake:

SELECT * FROM emp_basic LIMIT 100;
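If you connect programmatically, the same two steps apply. Below is a minimal sketch using the snowflake-connector-python package; the account, credentials, warehouse, and role name are placeholders for your own configuration, and the actual table grants role name depends on how your integration was set up:

# A minimal sketch using snowflake-connector-python; every connection value
# below is a placeholder, not something Immuta provides.
import snowflake.connector

conn = snowflake.connector.connect(
    account="MY_ACCOUNT",         # hypothetical account identifier
    user="analyst@example.com",   # hypothetical user
    password="...",
    warehouse="MY_WAREHOUSE",     # hypothetical warehouse
)
cur = conn.cursor()

# Step 1: enable secondary roles (or run USE ROLE <your table grants role>)
cur.execute("USE SECONDARY ROLES ALL")

# Step 2: query as you normally would
cur.execute("SELECT * FROM emp_basic LIMIT 100")
for row in cur.fetchall():
    print(row)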


Querying Databricks Data

Prerequisites:

  • Databricks Unity Catalog integration configured with Immuta

  • Databricks tables registered as Immuta data sources

Query data with Python

  1. Create a new notebook.

  2. Query the Immuta-protected data as you normally would:

df = spark.sql('select * from database.table_name')
df.show()

Query data with SQL

  1. Create a new notebook.

  2. Query the Immuta-protected data as you normally would:

select * from database.table_name;
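The DataFrame API works the same way as spark.sql. A minimal sketch, assuming the same hypothetical table database.table_name and a Databricks notebook where the spark session is predefined:

# A minimal sketch using the DataFrame API in a Databricks notebook;
# database.table_name is the hypothetical table from the example above.
df = spark.table("database.table_name")

# Immuta enforces its policies transparently, so ordinary DataFrame
# transformations work as usual on the protected table.
df.limit(100).show()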

Querying Starburst (Trino) Data

Prerequisites:

  • Starburst integration configured with Immuta

  • Starburst tables registered as Immuta data sources

Query data

  1. Use your tool of choice to connect to Starburst.

  2. Query the Immuta-protected data as you normally would (it is recommended that you use the catalog in the query). It should look something like this:

select * from "tpch"."sf1000"."customer" limit 100
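Your tool of choice can be any Trino-compatible client. As one example, here is a minimal sketch using the trino Python package; the host, port, and user are placeholders for your own Starburst cluster, and your cluster's authentication method may differ:

# A minimal sketch using the trino Python client; host and user are
# placeholders, and authentication depends on your cluster setup.
import trino

conn = trino.dbapi.connect(
    host="starburst.example.com",   # hypothetical host
    port=443,
    user="analyst@example.com",     # hypothetical user
    http_scheme="https",
    catalog="tpch",
    schema="sf1000",
)
cur = conn.cursor()
cur.execute('select * from "tpch"."sf1000"."customer" limit 100')
rows = cur.fetchall()
print(len(rows), "rows returned")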

Querying Redshift Data

This guide is specific to querying data sources registered in the Amazon Redshift view-based integration. For instructions on how to query data registered in the Amazon Redshift viewless integration, see the Accessing data page.

Prerequisites:

  • Redshift view-based integration configured with Immuta

  • Redshift tables registered as Immuta data sources

  • REVOKE users' access to raw tables

Query data

  1. Use your tool of choice to connect to Redshift.

  2. Select the Immuta database name used when configuring the integration.

  3. Query the Immuta-protected data, which takes the form of immuta_database.backing_schema.table_name:

  • Immuta Database: The Immuta database name used when configuring the integration.

  • Backing Schema: The schema that houses the backing tables of your Immuta data sources.

  • Table Name: The name of the table backing your Immuta data sources.

select * from immuta_database.backing_schema.table_name limit 100
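As one example of a client, here is a minimal sketch using the redshift_connector package; the host, database, and credentials are placeholders for your own cluster and integration settings:

# A minimal sketch using redshift_connector; host, database, and
# credentials are placeholders for your own configuration.
import redshift_connector

conn = redshift_connector.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",  # hypothetical
    database="immuta_database",   # the Immuta database from your integration
    user="analyst",               # hypothetical user
    password="...",
    port=5439,
)
cur = conn.cursor()
cur.execute("select * from immuta_database.backing_schema.table_name limit 100")
rows = cur.fetchall()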

Querying Databricks SQL Data

Prerequisites:

  • Databricks Unity Catalog integration configured with Immuta

  • Databricks SQL tables registered as Immuta data sources

Query data

  1. Select SQL from the navigation menu in Databricks.

  2. Click Create → Query.

  3. Run your query as you normally would:

SELECT
  concat(pickup_zip, '-', dropoff_zip) as route,
  AVG(fare_amount) as average_fare
FROM
  `samples`.`nyctaxi`.`trips`
GROUP BY
  1
ORDER BY
  2 DESC
LIMIT 1000
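The same query can also run outside the SQL editor. A minimal sketch using the databricks-sql-connector package; the server hostname, HTTP path, and access token are placeholders for your own SQL warehouse:

# A minimal sketch using databricks-sql-connector; server_hostname,
# http_path, and access_token are placeholders for your own warehouse.
from databricks import sql

with sql.connect(
    server_hostname="adb-1234567890.12.azuredatabricks.net",  # hypothetical
    http_path="/sql/1.0/warehouses/abc123",                   # hypothetical
    access_token="...",
) as conn:
    with conn.cursor() as cursor:
        cursor.execute(
            "SELECT concat(pickup_zip, '-', dropoff_zip) AS route, "
            "AVG(fare_amount) AS average_fare "
            "FROM `samples`.`nyctaxi`.`trips` "
            "GROUP BY 1 ORDER BY 2 DESC LIMIT 1000"
        )
        for route, average_fare in cursor.fetchall():
            print(route, average_fare)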

Querying Azure Synapse Analytics Data

Prerequisites:

  • Azure Synapse Analytics integration configured with Immuta

  • Azure Synapse Analytics tables registered as Immuta data sources

  • REVOKE users' access to raw tables

  • GRANT users' access to the Immuta schema

Query data

  1. Click the Data menu in Synapse Studio.

  2. Click the Workspace tab.

  3. Expand the databases, and you should see the dedicated pool you specified when configuring the integration.

  4. Expand the dedicated pool, and you should see the Immuta schema you created when configuring the integration. Select that schema.

  5. Select New SQL script and then Empty script.

  6. Run your query (note that Synapse does not support LIMIT and the SQL is case sensitive). It should look something like this:

SELECT TOP 100 * FROM immuta_schema.backing_database_backing_table;
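Outside Synapse Studio, any SQL Server-compatible client can run the same query. A minimal sketch using pyodbc; the server, database, and credentials are placeholders for your own workspace and dedicated pool:

# A minimal sketch using pyodbc against a Synapse dedicated SQL pool;
# server, database, and credentials are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=my-workspace.sql.azuresynapse.net;"  # hypothetical server
    "DATABASE=dedicated_pool;"                   # hypothetical dedicated pool
    "UID=analyst;PWD=..."
)
cur = conn.cursor()
# Note: use TOP rather than LIMIT, and match the case used at registration.
cur.execute("SELECT TOP 100 * FROM immuta_schema.backing_database_backing_table;")
rows = cur.fetchall()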
