Delta Lake API

Delta Lake API reference guide

When using Delta Lake, the API does not go through the normal Spark execution path, which means Immuta's Spark extensions cannot protect API calls. To ensure that Immuta retains control over what a user can access, the Delta Lake API is blocked.

Spark SQL can be used instead to provide the same functionality with all of Immuta's data protections.
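For example, a delete that would normally go through the DeltaTable API can be expressed as a Spark SQL statement instead. The following is a minimal sketch; the table name events and its column event_date are assumed examples, not Immuta defaults.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Blocked under Immuta: the DeltaTable API bypasses the normal Spark
    # execution path, so Immuta's Spark extensions cannot apply policies.
    # from delta.tables import DeltaTable
    # DeltaTable.forName(spark, "events").delete("event_date < '2020-01-01'")

    # Equivalent Spark SQL, which runs through the protected execution path:
    spark.sql("DELETE FROM events WHERE event_date < '2020-01-01'")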

Requests

The table below lists Delta Lake API methods alongside the Spark SQL that may be used instead.

Delta Lake API               Spark SQL
-------------------------    ----------------------------------------------------------------------
DeltaTable.convertToDelta    CONVERT TO DELTA parquet.`/path/to/parquet/`
DeltaTable.delete            DELETE FROM [table_identifier | delta.`/path/to/delta/`] WHERE condition
DeltaTable.generate          GENERATE symlink_format_manifest FOR TABLE [table_identifier | delta.`/path/to/delta`]
DeltaTable.history           DESCRIBE HISTORY [table_identifier | delta.`/path/to/delta`] (LIMIT x)
DeltaTable.merge             MERGE INTO
DeltaTable.update            UPDATE [table_identifier | delta.`/path/to/delta/`] SET column = value WHERE (condition)
DeltaTable.vacuum            VACUUM [table_identifier | delta.`/path/to/delta`]

See the Delta Lake documentation for a complete list of the Delta SQL commands.
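As a concrete example, the DESCRIBE HISTORY equivalent of DeltaTable.history can be issued through a SparkSession so that Immuta's protections apply. The table name sales below is an assumed example.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Instead of the blocked DeltaTable.forName(spark, "sales").history(10),
    # run the equivalent Spark SQL statement:
    history = spark.sql("DESCRIBE HISTORY sales LIMIT 10")
    history.show(truncate=False)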

Merging tables in native workspaces

After you create a table in a native workspace, you can merge a different Immuta data source from that workspace into that table.

  1. Create a table in the native workspace.

  2. Create a temporary view of the Immuta data source you want to merge into that table (see the sketch after step 4).

  3. Use that temporary view as the data source you add to the project workspace.

  4. Run the following command:

    MERGE INTO delta_native.target_native AS target
    USING immuta_temp_view_data_source AS source
    ON target.dr_number = source.dr_number
    WHEN MATCHED THEN
      UPDATE SET target.date_reported = source.date_reported
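
For context, steps 1 through 3 might look like the following PySpark sketch. The two-column schema of the target table and the source data source name immuta.los_angeles_crime are assumptions for illustration; delta_native.target_native and immuta_temp_view_data_source match the MERGE INTO command above.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Step 1: create a table in the native workspace
    # (the schema here is an assumed example).
    spark.sql("""
        CREATE TABLE IF NOT EXISTS delta_native.target_native (
            dr_number STRING,
            date_reported DATE
        ) USING DELTA
    """)

    # Step 2: create a temporary view of the Immuta data source to merge in
    # ("immuta.los_angeles_crime" is a hypothetical data source name).
    spark.table("immuta.los_angeles_crime") \
        .createOrReplaceTempView("immuta_temp_view_data_source")

    # Step 3 is performed in Immuta: add the temporary view to the project
    # workspace as a data source, then run the MERGE INTO command above.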
