Immuta Data Access Patterns
Audience: Data Owners, Data Users, and System Administrators
Content Summary: The Immuta data control plane does not require users to learn a new API or language to access the data exposed there. Immuta plugs into existing tools and ongoing work while remaining invisible to downstream consumers by exposing data through these foundational access patterns: Azure Synapse Analytics, Databricks, Databricks SQL, HDFS, the Immuta Query Engine, Redshift, S3, Snowflake, SparkSQL, and Trino.
The Dynamic Azure Synapse Analytics access pattern allows Immuta to apply policies directly in Azure Synapse Analytics dedicated SQL pools without users going through the Immuta Query Engine. Users can work within their existing Synapse Studio and have per-user policies dynamically applied at query time.
Users can configure multiple native Azure Synapse Analytics integrations, which give them direct access to views in a dedicated SQL pool in Synapse Studio.
The native Databricks integration makes Databricks data sources exposed in Immuta available as tables in a Databricks cluster, and users can then query these data sources through their notebooks. Like other integrations, policies are applied to the plan that Spark builds for a user's query, and all data access is native.
Databricks SQL provides a simple experience for SQL users who want to run quick ad hoc queries on their data lake, create multiple visualization types to explore query results from different perspectives, and build and share dashboards.
Immuta's native Databricks SQL integration creates policy-enforced views in users' Databricks SQL environment that they can access.
Unlike the other access patterns, the Immuta HDFS access pattern is not virtual. The value in HDFS processing is bringing the code to the data, which requires Immuta policies to be enforced in place on the data in the HDFS data nodes. Because of this, the Immuta HDFS layer can only act on data stored in HDFS. However, you can still build complex subscription and granular access policies on objects stored in HDFS and retain all the rich audit capabilities provided by the other Immuta virtual layers.
Users are provided a basic Immuta PostgreSQL connection. The tables within this connection represent all the connected data across your organization. Those tables, however, are virtual tables, completely empty until a query is run. At query time the SQL is proxied through the virtual Immuta table down to the native database while enforcing the policy automatically. The Immuta SQL connection can be used within any Business Intelligence (BI) tool or integrated directly into code for interactive analysis.
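Because this connection behaves like a standard PostgreSQL database, any PostgreSQL client or driver can reach it. As a minimal sketch, the helper below builds a libpq-style connection string; the host, user, and database names are illustrative assumptions, so substitute the connection details provided by your Immuta deployment:

```python
def immuta_dsn(host, user, database="immuta", port=5432):
    """Build a libpq-style connection string for the Immuta Query Engine.

    All values here are illustrative; use the host, port, and credentials
    shown in your own Immuta deployment.
    """
    return f"host={host} port={port} dbname={database} user={user} sslmode=require"

dsn = immuta_dsn("immuta.example.com", "alice")
# A PostgreSQL driver such as psycopg2 (not imported here) could then use it:
#   conn = psycopg2.connect(dsn)
#   cur = conn.cursor()
#   cur.execute("SELECT * FROM my_exposed_table LIMIT 10")  # policies applied at query time
```

The same connection string works from BI tools that accept PostgreSQL data sources, since the query is proxied to the native database with policies enforced automatically.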
With the Native Redshift access pattern, Immuta applies policies directly in Redshift, allowing data analysts to query their data natively in Redshift instead of going through the Immuta Query Engine.
Immuta supports an S3-style REST API, which allows users to communicate with Immuta the same way they would with S3. Consequently, Immuta easily integrates with tools users may already be using to work with S3.
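As a hedged sketch of what S3-style addressing could look like, the helper below composes an object URL against an Immuta endpoint. The host and path layout are illustrative assumptions for this example, not Immuta's documented API; existing S3 tooling (for example, boto3 or the AWS CLI) is typically pointed at such an endpoint through its endpoint-override option:

```python
from urllib.parse import quote

def object_url(immuta_host, data_source, object_key):
    """Compose an S3-style object URL (hypothetical path layout)."""
    # Percent-encode each component; keep "/" in the key so prefixes survive.
    return f"https://{immuta_host}/{quote(data_source)}/{quote(object_key, safe='/')}"

url = object_url("immuta.example.com", "sales-data", "2021/q3/report.csv")
# -> "https://immuta.example.com/sales-data/2021/q3/report.csv"
```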
Immuta supports accessing object-backed data sources using an s3a file system with Spark and Databricks.
Native Dynamic Snowflake
Through the Native Dynamic Snowflake access pattern, Immuta applies policies directly in Snowflake, allowing data analysts to use the Snowflake Web UI and their existing BI tools and have per-user policies dynamically applied at query time.
Users can configure multiple native Snowflake integrations in a single instance of Immuta.
Snowflake workspaces allow users to access protected data directly in Snowflake without having to go through the Immuta Query Engine. Within these workspaces, users can interact directly with Snowflake secure views, create derived data sources, and collaborate with other project members at a common access level. Because these derived data sources will inherit all appropriate policies, that data can then be shared outside the project. Additionally, derived data sources use the credentials of the Immuta system Snowflake account, which will allow them to persist after a workspace is disconnected.
For more details about Snowflake workspaces, see the Projects Overview.
Users can access subscribed data sources within their Spark jobs by using Spark SQL with the ImmutaContext class. All tables are virtual and are not populated until a query is materialized. When a query is materialized, data from metastore-backed data sources, such as Hive and Impala, is accessed using standard Spark libraries against the underlying files stored in HDFS. All other data source types access data through the Query Engine, which proxies the query to the native database technology. Policies for each data source are enforced automatically.
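A minimal sketch of that flow is below. The Spark-side calls are shown only as comments, since they require a live cluster with Immuta installed, and the table name is an illustrative assumption based on this description:

```python
# Sketch: querying an Immuta data source via Spark SQL and ImmutaContext.
# The table "immuta.sales" is a hypothetical example, not a real data source.
query = "SELECT customer_id, region FROM immuta.sales WHERE region = 'EMEA'"

# Inside an actual Spark job (not executed here):
#   ic = ImmutaContext(sc)   # wraps the SparkContext
#   df = ic.sql(query)       # table stays virtual until the query materializes
#   df.show()                # policies for the data source enforced automatically
```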
The Trino (previously PrestoSQL) access pattern applies policies directly in Trino without users going through the Immuta Query Engine. This means users can use their existing Trino tooling (querying, reporting, etc.) and have per-user policies dynamically applied at query time.
While not itself a data access pattern, Immuta's dbt Cloud integration allows users to connect data sources from various access patterns to Immuta using dbt Cloud. Integrating your data sources through dbt Cloud keeps them in sync with Immuta while populating data source details through jobs run in dbt Cloud.