HDFS Project Workspaces

Overview

HDFS project workspaces allow native access to data on the cluster without going through the Immuta SparkSession or the Immuta Query Engine. Within a project, the project owner can enable an HDFS Native Workspace, which creates a workspace directory in HDFS (and a corresponding database in the Hive metastore) where project members can write files.

After a project owner creates a workspace, users can access this HDFS directory and database only while acting under the project, and they should use the Immuta SparkSQL session to copy data into the workspace. The Immuta SparkSQL session applies policies to the data, so anything written to the workspace is already compliant with the restrictions of the equalized project, where all members see data at the same level of access.
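
As an illustration, a project member acting under the project might copy policy-filtered rows into the workspace from the Immuta SparkSQL session. This is only a minimal sketch: the source table (immuta.claims), the columns, and the workspace path are hypothetical placeholders; the real workspace URI is shown on the project page once the workspace is enabled.

    // Run from the Immuta SparkSQL session while acting under the project.
    // The source table, columns, and workspace path below are hypothetical;
    // the real workspace URI is shown on the project page.
    val filtered = spark.sql(
      """SELECT claim_id, state, amount
        |FROM immuta.claims
        |WHERE claim_year = 2024""".stripMargin)

    // The Immuta session has already applied policies, so the files written
    // below comply with the equalized project's restrictions.
    filtered.write
      .mode("overwrite")
      .parquet("/user/immuta/workspace/my_project/claims_2024_summary")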

Once derived data is ready to be shared outside the workspace, it can be exposed as a derived data source in Immuta. The derived data source inherits the appropriate policies, becomes available through Immuta outside the project, and can be used in future project workspaces by other teams in a compliant way.

Administrators

  • Administrators can opt to configure where all Immuta project workspaces are kept in HDFS (the default is /user/immuta/workspace). Note: If an administrator changes the default directory, the Immuta user must have full access to that directory (see the sketch after this list). Once any workspace has been created, this directory can no longer be modified.

  • Administrators can place a configuration value in the cluster configuration (core-site.xml) to mark that cluster as unavailable for use as a workspace.
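
Before switching to a non-default directory, an administrator might want to confirm that the Immuta user really does have full access to it. The sketch below is not an Immuta tool; it is a plain Hadoop FileSystem check against a hypothetical path, assumed to be run as the Immuta service user.

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}
    import org.apache.hadoop.fs.permission.FsAction

    // Hypothetical non-default workspace root; run this as the Immuta service
    // user before pointing Immuta at it.
    val workspaceRoot = new Path("/data/immuta/workspace")

    val fs = FileSystem.get(new Configuration())

    // FileSystem.access throws AccessControlException if the current user lacks
    // the requested permission on the path.
    fs.access(workspaceRoot, FsAction.ALL)
    println(s"Full access to $workspaceRoot confirmed")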

Project Owners

  • Once a project is equalized, project owners can enable a workspace for the project.

    • If more than one cluster is configured, Immuta will prompt the project owner to choose which one to use.

    • Once enabled, the full URI of the workspace location will be displayed on the project page.

    • Project owners can also add connection information for Hive and/or Impala to allow Hive or Impala workspace sources to be created. The connection information provided and the Kerberos credentials configured for Immuta will be used for each derived Hive or Impala data source. The connection string for Hive or Impala will be displayed on the project page with the full URI (a connection sketch follows this list).

  • Project owners can disable the workspace at any time.

    • When the workspace is disabled, project members will no longer be able to read from or write to it.

    • Data sources backed by this directory will still exist, and access to them will not change. (Subscribed users will still have access as usual.)

    • All data in this directory will still exist, regardless of whether it belongs to a data source or not.

    • Project owners can purge all data in the workspace after it has been disabled. They can choose to:

      • Purge only data that does not belong to a data source.

      • Purge all data (including data source data).

        • When purging all data source data, sources can either be disabled or fully deleted.
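
To illustrate the connection information mentioned above, the sketch below opens a JDBC connection to a workspace Hive database and lists its tables. The URL, database name, and Kerberos principal are placeholders for the values shown on the project page, and the sketch assumes the Hive JDBC driver is on the classpath and a valid Kerberos ticket has already been obtained.

    import java.sql.DriverManager

    // Placeholder connection string; the real one is displayed on the project
    // page alongside the workspace URI.
    val url = "jdbc:hive2://hive.example.com:10000/project_workspace;principal=hive/_HOST@EXAMPLE.COM"

    val conn = DriverManager.getConnection(url)
    try {
      // List the tables in the workspace database.
      val rs = conn.createStatement().executeQuery("SHOW TABLES")
      while (rs.next()) println(rs.getString(1))
    } finally {
      conn.close()
    }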

Project Members

  • When a user is acting under the project context, Immuta will grant them read/write access to the project's HDFS directory (using HDFS ACLs). If Immuta data sources are already exposed in that directory, a user acting under the project will bypass the NameNode plugin for the data in that directory.

  • Once a user is no longer acting under the project, all access to that directory will be revoked, and they can only access data in that project through official Immuta data sources, if any exist.

  • When users with the CREATE_DATA_SOURCE_IN_PROJECT permission create a derived data source in a project with a workspace enabled, they will be presented with a modified create data source workflow:

    • The user will select the directory containing the data they are exposing (starting from the project root directory).

    • If the directory contains Parquet or ORC files, then Hive, Impala, and HDFS will all be options for the data source type; otherwise, only HDFS will be available.

    • Users will not be asked for connection information because the Immuta user's connection will be used to create the data source; this ensures join pushdown and that the data source will continue to work even when the user isn't acting under the project. Note: Hive or Impala workspace sources are only available if the project owner added Hive or Impala connection information to the workspace.

    • If Hive or Impala is selected as the data source type, Immuta will infer the schema and partitions from the files and generate the CREATE TABLE statements for Hive (a sketch of this inference follows this section).

    • Once the data source is created, policy inheritance will take effect.

Note: To avoid data source collisions, Immuta will not allow HDFS and Hive/Impala data sources to be backed by the same location in HDFS.
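
The schema inference mentioned in the workflow above can be pictured with a short sketch: read the Parquet files in a workspace directory, take the inferred schema, and produce a CREATE TABLE statement over that location. Immuta performs the equivalent step automatically; the directory, database, and table names below are hypothetical, and the sketch ignores partition columns for brevity.

    // Hypothetical workspace directory containing Parquet files written earlier.
    val dir = "/user/immuta/workspace/my_project/claims_2024_summary"

    // Let Spark infer the schema from the Parquet footers.
    val schema = spark.read.parquet(dir).schema

    // Turn the inferred fields into column definitions.
    val columns = schema.fields
      .map(f => s"  `${f.name}` ${f.dataType.simpleString}")
      .mkString(",\n")

    // Build an external-table DDL over the workspace location. A real statement
    // would also declare any inferred partition columns.
    val ddl =
      s"""CREATE EXTERNAL TABLE project_workspace.claims_2024_summary (
         |$columns
         |)
         |STORED AS PARQUET
         |LOCATION '$dir'""".stripMargin

    println(ddl)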
