Databricks Spark Query Audit Logs
For every audited Spark job, Immuta captures the executed Spark plan, the tables involved, and the tables' underlying paths; in addition, it captures the code or query that triggered the Spark plan. Immuta audits the activity of Immuta users on Immuta data sources.
Requirements
Databricks users registered as Immuta users: The users' Databricks usernames must be mapped to their Immuta accounts. Without this mapping, Immuta cannot recognize the users as Immuta users and will not collect audit events for their data access activity.
Store audit logs
By default, Immuta audit logs expire after 7 days. To retain audit logs long-term, export the universal audit model (UAM) logs to S3 or ADLS Gen 2 and store them outside of Immuta.
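As a minimal sketch, the snippet below shows one way exported UAM logs might be read back and archived from Spark. The S3 bucket, prefix, and archive path are illustrative placeholders, not Immuta defaults, and the export is assumed to be newline-delimited JSON.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Assuming the export is newline-delimited JSON (each audit message is a
# one-line JSON object), the files can be read directly. Path is a placeholder.
audit_df = spark.read.json("s3://example-audit-bucket/immuta/uam/")

# Keep a long-term copy outside of Immuta (here as Parquet) so records
# survive the default 7-day expiration.
audit_df.write.mode("append").parquet("s3://example-audit-bucket/immuta/uam-archive/")
```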
Audit schema
Each audit message from the Immuta platform will be a one-line JSON object containing the properties listed below.
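Because each message is a single JSON object on its own line, a local export can be inspected with a few lines of Python; the file name below is a hypothetical placeholder.

```python
import json

# Inspect which properties each audit message carries. The file name is a
# placeholder for a local export of UAM logs, one JSON object per line.
with open("immuta_uam_export.json") as f:
    for line in f:
        if not line.strip():
            continue                  # skip blank lines
        record = json.loads(line)     # each line is a complete JSON object
        print(sorted(record.keys()))  # list the properties present
```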
Example queryText
Below is an example of the queryText, which contains the full notebook cell (since the query was the result of a notebook). If the query had been from a JDBC connection, the queryText would contain the full SQL query.
This notebook cell had multiple audit records associated with it.
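For illustration only, a hypothetical notebook cell like the one below (table names are placeholders) would be captured in full as the queryText; as noted above, a single cell can be associated with multiple audit records.

```python
# Hypothetical notebook cell; table names are placeholders. In Databricks
# notebooks the `spark` session is provided automatically. The entire cell
# text would appear as the queryText in the resulting audit record(s).
orders = spark.table("sales.orders")
customers = spark.table("sales.customers")

summary = (
    orders.join(customers, "customer_id")
          .groupBy("customer_id")
          .agg({"amount": "sum"})
)
summary.show()
```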
Example audit record
Enriched Databricks audit logs
Beyond raw audit events (such as "John Doe queried Table X in Databricks"), the Databricks audit records include the policy information enforced during query execution, even if the query was denied.
Queries will be denied if at least one of the conditions below is true:
User does not meet policy conditions.
User is not subscribed to the data source.
Data source is not in the user's current project.
Data source is in the user's current project, but the user is not subscribed to the data source.
Data source is not registered in Immuta.
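When working with exported logs, records for denied queries can be isolated by filtering on the relevant audit property. The sketch below assumes a hypothetical boolean field named queryDenied and a placeholder export path; consult the audit schema for the actual property names.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Path and field name are illustrative placeholders, not documented Immuta values.
audit_df = spark.read.json("s3://example-audit-bucket/immuta/uam/")

# Keep only records where the (hypothetical) denial flag is set.
denied = audit_df.filter(F.col("queryDenied") == True)
denied.show(truncate=False)
```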
User entitlements
The user's entitlements represent the state at the time of the query. This includes the following fields:
Policy information
The policySet includes the following fields: