Immuta Detect continually monitors your data environment to help answer questions about your most active data users, the most accessed data, and the events happening within your data environment. This understanding can help drive prioritization of where to place access control policies in Immuta Secure.
This section provides use cases to guide you through implementing Immuta Detect.
This reference guide discusses the benefits of Immuta Detect and how it works.
Immuta provides robust audit logging on actions within the application and on queries in native technologies like Snowflake, Databricks, and Unity Catalog. This section includes how-to guides for exporting Immuta audit logs for long-term backup and reference guides that describe Immuta's audit model.
Immuta Detect monitors your data environment and provides analytic dashboards in the Immuta UI based on your data use. This section includes how-to and reference guides about dashboards that offer rich visualizations of audit events and the sensitivity of queries, data sources, and columns.
Monitors allow you to gain awareness of when your users' behavior changes, maintain data availability through data platform policy changes, ensure access patterns remain consistent for your data controls to remain effective, and be notified about anomalies.
Select your use case
Detect is one of the Immuta flagship modules. Immuta Detect continually monitors your data environment to help answer questions about your most active data users, the most accessed data, and the events happening within your data environment. This understanding can help drive prioritization of where to place access control policies in Immuta’s other flagship module, Immuta Secure, which is why you should start here.
Below are two use cases to get started with Detect. They contain the basic Immuta configuration steps that you need to complete before using Immuta Secure. It is highly recommended that you follow one of these use cases, regardless of Detect support.
Currently, Immuta Detect supports tag filtering and sensitivity only with Snowflake, so you are presented with two paths to choose from:
If you use Snowflake: Monitor and secure sensitive data platform query activity. This will guide you through the basics of Immuta configuration and culminate with findings in Detect.
If you don't use Snowflake: General Immuta configuration. This will guide you through the required Immuta configurations so you can move on to Immuta Secure.
In order to take advantage of all the capabilities of Immuta, you must make Immuta aware of your data metadata. This is done by registering your data with Immuta as data sources. It’s important to remember that Immuta is not reading your actual data at all; it is simply discovering your information schemas and pulling that information back as the foundation for everything else.
This section offers the best practices when onboarding data sources into Immuta.
If you have an external data catalog, configure the catalog integration first; then register your data in Immuta. This process will automatically tag your data with the external catalog tags as you register it.
Find more on this topic in the Automate entity and sensitivity discovery guide.
Use Immuta's no default subscription policy setting to onboard metadata without affecting your users' access. This means you onboard all metadata in Immuta without any impact on current accesses, which gives you time to fully convert your operations to Immuta without causing unnecessary data downtime. Immuta will only take control when the first policies are applied. Because of this, register all tables.
While it can be tempting to start small and register only the pieces of data that you intend to protect, you must remember that Immuta is not just about access control. It’s important to register your data metadata so that Immuta can also track activity and understand where that sensitive data lies (with Immuta Detect). In other words, Immuta can’t tell you where you have problems unless you first tell it to look at your metadata.
Without the no default subscription policy, Immuta will set each data source's subscription policy to the most restrictive option which automatically locks data down during onboarding. To unlock the data and give your users access again, new subscription policies must be set.
If you are delegating the registration and control of data, then read our Data mesh use case for more information.
Use the /api/v2/data endpoint to register a schema; then use schema monitoring to find new data sources and automatically register them.
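As a rough sketch of this tip, the snippet below builds a registration payload for a Snowflake schema with schema monitoring enabled and shows how it would be POSTed to the /api/v2/data endpoint. The field names in the payload are illustrative assumptions, not the exact schema; consult the Immuta API reference for the shape your version expects.

```python
# Hypothetical sketch of registering a Snowflake schema via /api/v2/data.
# Field names below are assumptions -- check the Immuta API reference.
import json

def build_registration_payload(connection_key, database, schema):
    """Build a v2 data registration payload with schema monitoring enabled."""
    return {
        "connectionKey": connection_key,      # assumed field name
        "connection": {
            "handler": "Snowflake",
            "database": database,
            "schema": schema,
        },
        "options": {
            # Let Immuta find and register new tables automatically.
            "enableSchemaMonitoring": True,   # assumed field name
        },
    }

payload = build_registration_payload("analytics", "ANALYTICS_DB", "PUBLIC")
print(json.dumps(payload, indent=2))

# Sending it is then a single authenticated POST, e.g. with requests:
#   requests.post(f"{immuta_url}/api/v2/data",
#                 headers={"Authorization": f"Bearer {api_key}"},
#                 json=payload)
```

The point of routing registration through the API is repeatability: the same payload can be applied per schema, and schema monitoring then keeps the registration current without further calls.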
One of the greatest benefits of a modern data platform is that you can manage all your data transformations at the data tier. This means that data is constantly changing in the data platform, which may result in the need for access control changes as well. This is why it is critical that you enable schema monitoring and column detection when registering metadata with Immuta. This will allow Immuta to constantly monitor and update for these changes.
It’s also important to understand that many data engineering tools make changes by destructively recreating tables and views, which results in all policies being dropped in the data platform. This is actually a good thing, because this gives Immuta a chance to update the access as the changes are found (policy uptime) while the only user that can see the data being recreated is the creator of that change (data downtime for all other users). This is why schema monitoring and column detection are so critical.
This guide is for users who wish to understand their data estate and where there may be security gaps or non-compliant user query activity that needs to be addressed. It also contains details for configuring Immuta that must be accomplished before moving on to the Secure your data use cases.
This use case is tailored to quickly get you monitoring queries in your data platform and understanding where you may have security gaps using Immuta Discover and Immuta Detect. If you are not using Snowflake, instead move to the General Immuta configuration use case because filtering by tags and sensitivity in Immuta Detect is currently only available on Snowflake.
As part of this use case, you will learn special considerations and configurations for setting up Immuta for Immuta Detect. Upon completion, you will understand existing security gaps and it will help guide your Immuta Secure journey.
Follow these steps to configure Immuta and start using Detect:
Ensure you have the Immuta software available to you. For the best experience, follow the steps below on Immuta SaaS because of the many SaaS benefits.
Configure your users in Immuta, using the user identity best practices in order to review and summarize user activity and plan your first policy.
Read the native integration architecture overview and connect Immuta to your database. Consider the Snowflake roles best practices.
Register data sources in order to review and summarize data activity and plan your first policy.
Start Using Immuta Detect. To get the most out of it, consider populating sensitivity using Automate entity and sensitivity discovery (SDD) and then configure Detect with SDD.
Requirement:
Snowflake Enterprise Edition or higher
Users and Data Sources have been registered in Immuta:
Snowflake tables registered as Immuta data sources
Snowflake users registered in Immuta
Currently, Detect only supports filtering by tag and showing sensitivity of audit records for Snowflake.
This onboarding process is recommended for organizations that have not tagged any sensitive data yet. Immuta will identify, classify, and tag your data. After you are fully onboarded, you will see Detect dashboards with information on your organization's data use and data sensitivity, and the Discover data inventory dashboard will show details about the data that was scanned.
After you are happy with the Detect dashboards on the select data sources you enabled, you can integrate Detect with more of your data environment.
Speed is an inherent ability in SaaS offerings because they are meant to be turnkey operations – you should just need to plug and play to leverage SaaS software to solve a specific problem. Immuta’s SaaS offering helps you to get to value fast without worrying about the IT department finding the time and energy to stand the software up. This helps organizations focus more on productive work and increases the organizations’ overall efficiency.
Self-managed solutions require long-term planning to scale operations and are often not the best option for growing businesses, as the IT staff has to constantly struggle in the upgrade loop. This could lead to significant restructuring costs as performance and functional demands increase. Additionally, upgrading or modifying existing systems can become costly due to potential downtime or other expenses associated with transitioning from one platform to another.
By contrast, Immuta’s SaaS platform software offers a more convenient way of optimizing operations across large corporate structures with minimal lead time needed for additional licenses or functionality additions. This allows you to easily scale according to business needs, so you don’t have to worry about how your internal IT team will keep up with future growth.
Organizations today are extremely cautious about avoiding data leaks and data or privacy breaches, and, therefore, need to invest in a powerful data security platform and trustworthy security solutions provider. Immuta’s processes, policies, and management system have been certified under the ISO 27001 and 27701 standards and SOC 2 Type 2 attestation, demonstrating that data security and privacy are important to Immuta.
The Immuta SaaS platform is deployed using industry-leading technologies, security controls, and deployment methodologies. The platform undergoes continuous vulnerability scanning and is penetration tested at least twice annually. Additionally, Immuta’s deploy-as-code methodology ensures that every system meets our baseline requirements before a system can be moved to production. Immuta’s SDLC requires that container images are continually hardened and tested several times a year in addition to our comprehensive penetration tests, ensuring the Immuta SaaS platform is continually evolving to face emerging threats.
In addition to our technical security controls, Immuta strives to minimize the data needed to deliver our services to clients. Only metadata is stored in Immuta’s SaaS platform to make policy enforcement decisions, meaning Immuta is not in the data path between your users and the data sources it protects, nor does it pull any of your data back at all. For the metadata Immuta does need, it is encrypted in transit using TLS 1.2 or newer and at rest using AES256 encryption.
Immuta offers automated backups with encryption at rest, eliminating the need for someone to own or set up the backups manually. This helps us guarantee 99.9% uptime for Immuta’s SaaS service.
One of the biggest benefits SaaS solutions offer is cost savings. Self-managed software, on the other hand, requires a cloud engineer or IT person to ensure that software is running well and upgraded on time. Immuta’s SaaS platform can provide notable savings in several different ways, including eliminating the cost of IT resources for installation. With Immuta, organizations can benefit from both short-term savings (i.e., installation costs) and long-term savings (i.e., operational costs).
New features developed by Immuta’s Engineering team are deployed and available on SaaS first. This allows customers to get access to features faster, without worrying about planning major version upgrades and dealing with the hassle of downtime. With Immuta’s SaaS platform, customers can also access private preview features. SaaS also enables you to have security vulnerability patches as quickly as possible, without requiring self-managed manual upgrades.
Immuta’s SaaS platform eliminates the need to spend time and money on updating software by allowing customers to log on to already upgraded services. Upgrades and maintenance are done off hours with typically no downtime to ensure that Immuta’s capabilities are always available. The ability to monitor the dynamic threat landscape and to deliver patches directly through Immuta allows organizations to use data to focus on business.
Intermingling your pre-existing roles in Snowflake with Immuta can be confusing at first. This guide outlines some best practices on how to think about roles in each platform.
Roles are crucial in Snowflake for organizing and controlling access to data, platform permissions, and data warehouses. Immuta also leverages Snowflake roles to grant users permission to read data based on subscription policies.
Users who consume data (directly in Snowflake or through other applications) need roles to access objects. But roles are also used to control write access, Snowflake warehouse usage, and Snowflake permissions through system-defined roles.
To manage this at scale, Immuta recommends taking a 4-layer approach, where you separate the different permissions into different roles:
Roles for read access (Immuta managed)
Roles for write access (optional, soon supported by Immuta)
Roles for warehouse, internal billing
Roles for Snowflake permissions (optional)
Since Immuta leverages Snowflake roles, you can still use existing roles in Snowflake. This means you can gradually migrate to an Immuta-protected Snowflake.
Warehouses are granted to users to give them access to computing resources. Since this is directly tied to Snowflake’s consumption model, warehouses are typically linked to cost centers for (internal) billing purposes. Immuta recommends creating a role/warehouse per team/domain/cost center and granting this warehouse role to users using identity manager groups.
Snowflake permissions are granted through system-defined roles like ACCOUNTADMIN or SECURITYADMIN. These are high-privilege roles that are only granted to administrators. This can be done manually or using AD groups.
Snowflake allows users to select a specific role, but users can also enable all of their roles simultaneously. Immuta recommends using all roles, since that preserves the separation between the different role layers.
Alternatively, you could create personal roles and grant the warehouse role and Immuta read role (and possibly the Snowflake permission role and write role) to each personal role.
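The four-layer approach above can be sketched as a small helper that produces the grants a new user needs. The role naming convention here is hypothetical; note that the read-access layer is granted by Immuta itself through subscription policies, so it does not appear in the manual grants.

```python
# Illustrative sketch of the four-layer role approach. Role names are
# hypothetical conventions, not Immuta- or Snowflake-defined names.

def grants_for_user(user, team):
    """Return the manual GRANT statements a new user would need, by layer."""
    warehouse_role = f"WH_{team.upper()}"   # layer: warehouse / internal billing
    write_role = f"WRITE_{team.upper()}"    # layer: write access (optional)
    return [
        f"GRANT ROLE {warehouse_role} TO USER {user};",
        f"GRANT ROLE {write_role} TO USER {user};",
        # Read access is granted by Immuta based on subscription policies.
        # Snowflake permission roles (e.g. SECURITYADMIN) go to admins only.
    ]

for stmt in grants_for_user("alice", "finance"):
    print(stmt)
```

In practice these grants would come from identity manager group mappings rather than ad hoc statements, which is why the guide recommends granting warehouse roles via identity manager groups.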
Immuta has two types of service accounts to connect to Snowflake:
Data ownership role: This role is used to register data sources. A service account/role is recommended so that when the user moves or leaves the organization, Immuta will still have the proper credentials to connect to Snowflake. You can follow one of the two best practices:
A central role for registration (recommended): It is recommended that you create a service role/user with USAGE permissions for all objects in Snowflake. This allows Immuta to register all the objects from Snowflake, populate the Immuta catalog, and scan the objects for sensitive data using Immuta Discover. Immuta will not apply policy directly by default, so no existing access will be impacted.
Immuta lives in the middle control plane. To do this, Immuta knows details about the subjects and enterprise resources, acts as the policy decision point through policies administered by policy administrators, and makes real-time policy decisions using the internal Immuta policy engine.
Lastly, and of importance to how Immuta Secure functions, Immuta also enables the policy enforcement point by administering the policies natively in your data platform in a way that can react to policy changes and live queries.
This guide outlines best practices for managing user identities in Immuta with your identity manager, such as Active Directory and Okta.
Reusing information you have already captured today is a good practice. A lot of information about users is in your identity manager platform and can be used in Immuta for user onboarding and policies.
All users protected by Immuta must be registered in Immuta, even though people might not log in to Immuta.
SAML is commonly used as a single sign-on mechanism for users to log in to Immuta. This means you can use your organization's SSO, which complies with your security standards.
Every user that will be protected by Immuta needs to have a user on the platform to enforce policy, regardless of whether they log in to Immuta. SCIM should be used to provision users from your identity manager platforms to Immuta automatically. The advantage here is that not all end-users need to log in to Immuta to create their accounts, and updates in your identity manager will be automatically reflected in Immuta, which in turn updates access in your platforms.
Details on how to configure your individual identity manager's protocols can be found here:
Within the Immuta product ecosystem, Immuta Detect is responsible for surfacing and indexing a wide range of security-related events, making it a rich source of data security posture insights.
In a typical deployment, Immuta Detect efficiently surfaces and processes a vast number of data security events. While these events all have security relevance, it may be challenging to understand their potential impacts without manual investigation. At the same time, the sheer volume of events typically greatly exceeds what a team can manually explore.
Enter Immuta Discover: Immuta’s data discovery and security analysis engine can identify, categorize, and classify data. Immuta Discover analyzes data available within the operational context of an event in conjunction with applicable legal, regulatory, compliance, and security frameworks to make deep inferences about the status of the data. For example, in a medical context, Immuta Discover can understand the difference between anonymized and identified medical data.
With additional classification metadata powered by Immuta Discover, Immuta Detect analyzes data security events for sensitivity, ensuring that highly significant events remain highly visible. In the context of the previous example, Immuta Detect can detect and flag the accidental identification of anonymized medical data.
With Discover, Immuta Detect can provide insightful oversight of who accesses sensitive data, where it is stored, and how it is used, enabling
Rapid and exact compliance monitoring and assessment
Insights into data usage patterns for setting data access policy
Simplified and expedient audit responses
Context-aware analysis of data flows as seen through the lens of security or regulatory compliance frameworks
Immuta Discover works in three phases: identification, categorization, and classification.
In the first phase, data is identified by its kind – for example, a name or an age. This identification can be manually performed, externally provided, or automatically determined by Immuta Discover through column-level analysis. This is commonly termed entity identification.
In the second phase, data is categorized in the context where it appears, subject to any active data compliance or security frameworks. For example, a record occurring in a clinical context containing both a name and individual health entities is Protected Health Information under HIPAA.
Though entirely customizable, for this purpose Immuta provides a default framework known as the Immuta Data Security Framework (Immuta DSF). The Immuta DSF gives a fine-grained categorization into a consistent set of security and compliance concepts, including whether or not a record pertains to an individual, the composition and kinds of any identifiers present, the subject matter of the data, whether it belongs to any commonly controlled data categories, and so on. The rules of the framework use the entities found in the table in phase 1 to drive how the data is categorized.
The categorization provided by the Immuta DSF may be used directly. Still, it is best leveraged as a starting point for purpose-built compliance frameworks implementing organization-specific compliance categories or other relevant high-level regulatory and compliance frameworks, such as those for categorizing data into categories defined under CCPA, GDPR, GLBA, HIPAA, etc.
Bottom line, think of categorization as a way to apply higher level categories to the fine-grained entities discovered in phase 1 through rules you can customize. These categories are presented as tags in Immuta, just like the entities in phase 1, and thus, can be used for Immuta Secure policies.
In the third and final phase, data is classified according to its sensitivity level (e.g., Customer Financial Data is Highly Sensitive). Again, Immuta supplies sensitivity classification defaults based on standard industry practice. Just like how categories are built from phase 1 entities, classification builds on the phase 2 categories. Customers are free to customize this classification under their respective views. These classifications are key to surfacing sensitive queries in Detect based on your definition of sensitive.
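The three phases compose naturally: entities feed categories, and categories feed sensitivity. The toy sketch below makes that pipeline concrete for the medical example used earlier; the tag names and rules are illustrative stand-ins, not Immuta's actual framework definitions.

```python
# Toy sketch of Discover's three phases. Tag names and rules are
# illustrative only.

def identify(column_name):
    """Phase 1: entity identification by column-level analysis (stubbed)."""
    entities = {"patient_name": "Discovered.Person Name",
                "diagnosis": "Discovered.Health Condition"}
    return entities.get(column_name)

def categorize(entity_tags):
    """Phase 2: framework rules map phase-1 entities to categories."""
    if {"Discovered.Person Name", "Discovered.Health Condition"} <= set(entity_tags):
        return ["HIPAA.PHI"]   # name + health data in one record => PHI
    return []

def classify(category_tags):
    """Phase 3: categories map to a sensitivity level."""
    return "Highly Sensitive" if "HIPAA.PHI" in category_tags else "Internal"

cols = ["patient_name", "diagnosis"]
entities = [identify(c) for c in cols]
print(classify(categorize(entities)))
```

The layering is the important design point: customizing phase-2 rules (or phase-3 sensitivity mappings) does not require redoing entity identification, which is why categorization is the recommended customization point.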
There are good reasons to automate data discovery and analysis with Immuta Discover:
It formalizes the entire process, producing a coherent set of classification rules.
It makes it possible to automatically and uniformly scale compliance to new data sources.
It enables Immuta Detect to automatically detect additional threats, such as unauthorized or attempted access to sensitive data, and enables soft enforcement of organizational data access policies (for example, that access to personal information, direct identifiers, or login credentials be masked).
Speed. Automating data discovery and analysis with Immuta Discover enables faster access to data by removing the manual effort of tagging and classifying new tables and columns.
Be aware that:
Some customization may be necessary. Although Immuta's sensitive data discovery discovers over 60 types of sensitive data, only some data elements may be relevant. Further, unique sensitive data elements may not be covered out of the box. In these cases, it is possible to create new sensitive data discovery identifiers to ensure data is properly discovered and tagged.
New global templates should be created to find only entities that are relevant to the organization. This will ensure extraneous tags are not added to data elements.
Some customers may already have an existing data catalog tagging data; Immuta’s sensitive data discovery can work in combination with the data catalog.
Because data environments are not static, it is imperative that data tagging is automatically performed with new or changed data so that policies can be enabled in real-time, lowering the risk of data leaks.
Many organizations have invested in an enterprise data catalog as part of their data governance programs. Entity tags from the data catalogs will be pulled into Immuta in a one-way sync because the catalog is the system of record for entity tags. The tags pulled in from the data catalog can later be mapped to categories in the same way that entities automatically discovered in phase 1 are mapped to categories. This in turn will associate the appropriate sensitivity via classification to the external tags.
For a concrete example, consider a scenario where the Collibra catalog has tags for Longitude and Latitude. The following rule assigns Immuta DSF.Longitude to any column tagged Collibra.Location.Longitude. These rules appear in the rules array in the framework definition.
Incorporating tags from external catalogs into rules is fairly straightforward. External tags are referenced in rules the same way as internal tags, except that the source field identifies the external catalog. The source field's value varies depending on the external catalog system; the correct value may be identified by examining the tag objects listed by the tags API, which include the source field.
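The Collibra rule described above might look roughly like the structure below. The field names ("classificationTag", "columnTags", "source") and the "collibra" source value are assumptions for illustration; verify the exact shape against your framework definition and the source values returned by the tags API.

```python
# Illustrative shape of a framework rule mapping an external Collibra tag
# to an Immuta DSF tag. Field names and source values are assumptions.
rule = {
    # The tag the rule assigns when it matches.
    "classificationTag": {"name": "DSF.Longitude", "source": "curated"},
    # The external tag(s) that trigger the rule; "source" names the catalog.
    "columnTags": [
        {"name": "Collibra.Location.Longitude", "source": "collibra"},
    ],
}
print(rule["columnTags"][0]["name"])
```

Once a rule like this is in place, the external tag participates in categorization exactly as a phase-1 discovered entity would, so the downstream sensitivity classification applies to it automatically.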
Immuta Detect provides value from the moment the dashboards are visible, which can be enabled for organizations with Snowflake, Databricks Spark, and Databricks Unity Catalog integrations. Currently, organizations with Snowflake integrations can get even more value with data sensitivity and tagging. To determine and surface the sensitivity of your data access, enable and tune classification.
Completing all the steps below will fully onboard you with Detect and Discover:
Prerequisites:
The onboarding process assumes that these prerequisites have already been set up, but here are the Immuta features and configuration required to enable Detect. Each integration can be used alone or a Snowflake integration can be used with either Databricks Spark or Databricks Unity Catalog. Databricks Spark and Databricks Unity Catalog are not supported together with Detect:
For Snowflake integrations:
Snowflake tables and users registered in Immuta: Detect only audits events by users registered in Immuta on tables registered in Immuta. If you do not register the tables and users, their actions will not appear in the audit records or on the Detect dashboards.
Benefits and limitations of enabling table grants
Unauthorized query events will be audited and present in the Detect dashboards.
Table grants will manage the privileges in Snowflake for Immuta tables, making privilege management more efficient.
Without table grants:
Unauthorized events are unavailable because users' queries will succeed but return zero rows, even if they do not have access to the table.
You can use project workspaces. Table grants are not compatible with project workspaces, so if your organization depends on that capability, table grants are not recommended.
For Databricks Spark integrations:
For Databricks Unity Catalog integrations:
Recommended:
This setting is not required for Detect, but can be used for better functionality:
Requirement:
Immuta permission USER_ADMIN
Actions:
To see sensitivity information using a Snowflake integration, proceed with the steps below.
Only available with Snowflake integrations: Discover classification is supported with Databricks and Snowflake integrations; however, the sensitivity can only be visualized in Detect dashboards with Snowflake integrations.
There are two options to tag data and activate classification frameworks to determine the sensitivity of your data:
After completing either of the tutorials above, data sources are tagged with entity tags and classification tags. Once users start querying data, and after the Snowflake audit data latency period, the Detect dashboards will show audit information with sensitivity information and the Discover data inventory dashboard will show details about the data that was scanned.
If you notice some sensitivity types are not appearing as you expect, proceed with the step below.
Only available with Snowflake integrations: Discover classification is supported with Databricks and Snowflake integrations; however, the sensitivity can only be visualized in Detect dashboards with Snowflake integrations.
Requirement:
Immuta permissions AUDIT and GOVERNANCE
Actions:
After Discover has run SDD and the classification frameworks, it may be necessary to adjust the resulting tags based on your organization's data, security, and compliance needs:
After completing the tutorials above, all data appears as the appropriate sensitivity type on the Detect dashboards with Snowflake data sources.
Detect supports the following integration for activity pages with dynamic query sensitivity that will determine and visualize the sensitivity of user queries:
Detect supports the following integrations for activity pages, but will not visualize any sensitivity:
Databricks Spark
Check your data source tags
Navigate to the data source dictionary page.
Confirm one of the following tags is applied to one of the queried data columns:
RAF.Confidentiality.Very High
RAF.Confidentiality.High
RAF.Confidentiality.Medium
Detect uses the sensitivity scores associated with these tags to classify a query's sensitivity. When the queried columns have these tags and the associated classification rules in the Risk Assessment Framework (RAF) or Data Security Framework (DSF) are enabled at the time of audit query processing, the query event will indicate the proper classification.
If there are no RAF tags applied, check if there are any DSF or Discovered tags applied. These tags are necessary for RAF tags to be applied.
If you see Discovered tags but no RAF or DSF, activate the frameworks.
If you do not see any Discovered, DSF, or RAF tags, run SDD.
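The check-tags troubleshooting flow above can be summarized as a small decision function over the tags seen on queried columns. The tag prefixes follow the convention used in this guide; the returned action strings are just shorthand for the steps described above.

```python
# The troubleshooting flow, sketched as a decision over a column's tags.
# Tag prefixes ("RAF.", "DSF.", "Discovered.") follow this guide's naming.

def next_step(tags):
    """Return the recommended action given the tags on queried columns."""
    if any(t.startswith("RAF.Confidentiality.") for t in tags):
        return "ok: sensitivity should appear at audit query processing"
    if any(t.startswith(("DSF.", "Discovered.")) for t in tags):
        return "activate the frameworks"
    return "run sensitive data discovery"

print(next_step(["Discovered.Entity.Longitude"]))
```

Reading it top to bottom mirrors the dependency chain: RAF tags require DSF or Discovered tags, which in turn require sensitive data discovery to have run.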
Activate the frameworks
If you do not see any RAF tags, ensure the Data Security Framework and Risk Assessment Framework are active:
Navigate to the classification frameworks page.
Check the status of the Data Security Framework and the Risk Assessment Framework.
Run sensitive data discovery
Speed is an inherent ability in SaaS offerings because they are meant to be turnkey operations – you should just need to plug and play to leverage SaaS software to solve a specific problem. Immuta’s SaaS offering helps you to get to value fast without worrying about the IT department finding the time and energy to stand the software up. This helps organizations focus more on productive work and increases the organizations’ overall efficiency.
Self-managed solutions require long-term planning to scale operations and are often not the best option for growing businesses, as the IT staff has to constantly struggle in the upgrade loop. This could lead to significant restructuring costs as performance and functional demands increase. Additionally, upgrading or modifying existing systems can become costly due to potential downtime or other expenses associated with transitioning from one platform to another.
By contrast, Immuta’s SaaS platform software offers a more convenient way of optimizing operations across large corporate structures with minimal lead time needed for additional licenses or functionality additions. This allows you to easily scale according to business needs, so you don’t have to worry about how your internal IT team will keep up with future growth.
Organizations today are extremely cautious about avoiding data leaks and data or privacy breaches, and, therefore, need to invest in a powerful data security platform and trustworthy security solutions provider. Immuta’s processes, policies, and management system have been certified under the ISO 27001 and 27701 standards and SOC 2 Type 2 attestation, demonstrating that data security and privacy are important to Immuta.
The Immuta SaaS platform is deployed using industry-leading technologies, security controls, and deployment methodologies. The platform undergoes continuous vulnerability scanning and is penetration tested at least twice annually. Additionally, Immuta’s deploy-as-code methodology ensures that every system meets our baseline requirements before a system can be moved to production. Immuta’s SDLC requires that container images are continually hardened and tested several times a year in addition to our comprehensive penetration tests, ensuring the Immuta SaaS platform is continually evolving to face emerging threats.
In addition to our technical security controls, Immuta strives to minimize the data needed to deliver our services to clients. Only metadata is stored in Immuta’s SaaS platform to make policy enforcement decisions, meaning Immuta is not in the data path between your users and the data sources it protects, nor does it pull any of your data back at all. For the metadata Immuta does need, it is encrypted in transit using TLS 1.2 or newer and at rest using AES256 encryption.
Immuta offers automated backups with encryption at rest, eliminating the need for someone to own or set up the backups manually. This helps us guarantee 99.9% uptime for Immuta’s SaaS service.
One of the biggest benefits SaaS solutions offer is cost savings. Self-managed software, by contrast, requires a cloud engineer or IT staff member to keep the software running well and upgraded on time. Immuta’s SaaS platform can provide notable savings in several different ways, including eliminating the cost of IT resources for installation. With Immuta, organizations can benefit from both short-term savings (i.e., installation costs) and long-term savings (i.e., operational costs).
New features developed by Immuta’s Engineering team are deployed and available on SaaS first. This allows customers to get access to features faster, without worrying about planning major version upgrades and dealing with the hassle of downtime. With Immuta’s SaaS platform, customers can also access private preview features. SaaS also enables you to have security vulnerability patches as quickly as possible, without requiring self-managed manual upgrades.
Immuta’s SaaS platform eliminates the need to spend time and money on updating software by allowing customers to log on to already upgraded services. Upgrades and maintenance are done off hours with typically no downtime to ensure that Immuta’s capabilities are always available. The ability to monitor the dynamic threat landscape and to deliver patches directly through Immuta allows organizations to use data to focus on business.
This guide outlines best practices for managing user identities in Immuta with your identity manager, such as Active Directory or Okta.
Reusing information you have already captured today is a good practice. A lot of information about users is in your identity manager platform and can be used in Immuta for user onboarding and policies.
All users protected by Immuta must be registered in Immuta, even if they never log in to Immuta.
SAML is commonly used as a single sign-on mechanism for users to log in to Immuta. This means you can use your organization's SSO, which complies with your security standards.
Every user that will be protected by Immuta needs to have a user on the platform to enforce policy, regardless of whether they log in to Immuta. SCIM should be used to automatically provision users from your identity manager platforms to Immuta. The advantage here is that not all end users need to log in to Immuta to create their accounts, and updates in your identity manager will be automatically reflected in Immuta, which in turn updates access in your platforms.
Details on how to configure your individual identity manager's protocols can be found here:
Requirements:
Immuta permission AUDIT
Use the following how-to to configure a periodic export of your Immuta audit logs to an S3 bucket. This export configuration requires access to your S3 bucket to add objects using one of the following authentication methods:
Configure your Immuta audit logs to export to your S3 bucket and allow Immuta to authenticate using your AWS access key ID and secret access key.
Before Immuta can export audit events to your S3 bucket, you need to create a bucket policy that allows the Immuta audit service to add objects to your specified S3 bucket. The following Amazon S3 action will be granted to the audit service in the bucket policy:
To create the policy for the bucket, you must be the bucket owner.
Save your changes.
Configure the audit export to S3 using the Immuta CLI or GraphQL API with the following fields:
interval: The interval at which audit logs will be exported to your S3 bucket. They can be sent at 2-, 4-, 6-, 12-, or 24-hour intervals.
bucket name: Name of the bucket your audit logs will be sent to and that you added the policy to above.
bucket path: The name of the folder within the bucket to put the audit logs in. This field is optional.
region: AWS region (such as "us-east-1").
secretAccessKey: AWS secret access key for authentication.
Run the following command with the above fields in a JSON file:
Example ./exportConfig.json
file
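The configuration file itself is not reproduced here, but based on the fields listed above, a sketch of what it might look like follows. The exact key names and accepted interval values are assumptions — confirm them against the audit CLI reference guide:

```json
{
  "interval": "EVERY_24_HOURS",
  "bucketName": "<your-bucket-name>",
  "bucketPath": "immuta-audit-logs",
  "region": "us-east-1",
  "accessKeyId": "<your-access-key-id>",
  "secretAccessKey": "<your-secret-access-key>"
}
```

Replace the content in angle brackets with your own values.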
Run the following mutation against https://your-immuta.com/api/audit/graphql, with the above fields passed directly:
Example response
Immuta requires a role with the following allowed action to the S3 bucket you want the audit logs exported to:
Note: If you use this example, replace the content in angle brackets with your bucket name.
Response error
When creating the export configuration, this step will return an error. Take the returned export configuration ID and continue with steps 3 and 4 to create a trust relationship and verify the connection between Immuta and S3.
Configure the audit export to S3 using the Immuta CLI or GraphQL API with the following fields:
interval: The interval at which audit logs will be exported to your S3 bucket. They can be sent at 2-, 4-, 6-, 12-, or 24-hour intervals.
bucket name: Name of the bucket your audit logs will be sent to.
bucket path: The name of the folder within the bucket to put the audit logs in. This field is optional.
region: AWS region (such as "us-east-1").
roleArn: AWS role ARN for authentication that you added the policies to above. Immuta will assume this role when exporting audit logs to S3.
Run the following command with the above fields in a JSON file:
Example ./exportConfig.json
file
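The configuration file is not reproduced here, but based on the fields listed above, a sketch of what it might look like follows. The exact key names and accepted interval values are assumptions — confirm them against the audit CLI reference guide:

```json
{
  "interval": "EVERY_24_HOURS",
  "bucketName": "<your-bucket-name>",
  "bucketPath": "immuta-audit-logs",
  "region": "us-east-1",
  "roleArn": "arn:aws:iam::<your-account-id>:role/<your-immuta-audit-export-role>"
}
```

Replace the content in angle brackets with your own values.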
Example response:
Run the following mutation against https://your-immuta.com/api/audit/graphql, with the above fields passed directly:
Example response
Fill in the content in angle brackets with the following:
Immuta AWS Account ID: Contact your Immuta representative for this ID.
Export Configuration ID: Insert the ID from step 2's response.
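A cross-account trust policy of roughly the following shape is typical. Treating the export configuration ID as the sts:ExternalId condition is an assumption here — verify the exact structure with your Immuta representative:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<Immuta AWS Account ID>:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "<Export Configuration ID>"
        }
      }
    }
  ]
}
```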
Now that the configuration and the trust relationship have been created, test the connection from Immuta to S3 to ensure your audit logs are exported to your S3 bucket.
If connectionStatus returns SUCCESS, your export configuration has been successfully set up.
Run the following command:
Run the following mutation against https://your-immuta.com/api/audit/graphql:
Native SDD and classification frameworks enabled in Immuta. If you do not know whether they are enabled, work with your Immuta representative to turn them on in your Immuta tenant.
: SDD will sample and tag your data based on the sensitive data detected. These tags are necessary for the classification framework tags in step 2 to be applied.
: Once you activate the classification frameworks, they will tag your data with classification tags. These tags contain the metadata required to assign sensitivity levels to your data.
: After SDD and classification frameworks have been enabled and run, it may be necessary to adjust the output tags based on your organization's data, security, and compliance needs.
: Grant the appropriate users the AUDIT
permission to view Immuta Detect dashboards.
: Once all tags are correctly applied, the Detect dashboards will reflect accurate audit information. Navigate through Immuta Detect and explore the dashboards that visualize the sensitive data in your data environment.
: If you already had SDD enabled before starting Detect onboarding, skip this step. Once you are satisfied with the SDD tags and classification tags applied to your selected data sources, enable SDD for all data sources. This will add entity and classification tags to the rest of the data sources within your environment. You can run SDD on all data sources at once, or run additional payloads on a select few at a time to gradually onboard the rest of your tables.
are becoming an increasingly popular way to protect data at the speed of the cloud. In this guide, we'll explore seven ways the provides a reliable and versatile solution to data security complexity. You can skip this overview if you are already happily using Immuta SaaS, but if curious, read on!
SaaS also helps to maximize the return on investment (ROI) in the first year post-purchase by reducing the overall overhead and reaching value faster. To see for yourself how this works, check out our .
Leveraging the Immuta SaaS platform solution will improve availability and reliability by providing always-on data access while complying with and sovereignty regulations. With a guaranteed uptime SLA of 99.9% and 24/7 monitoring by a world-class Site Reliability Engineering team, you’ll get the benefit of verified security and compliance capabilities, along with cost savings. Immuta also provides round-the-clock case support for enterprise customers, ensuring that experts are available to answer any product-related questions or educate users about features and use cases.
Read access is managed by Immuta. By using , data access can be controlled to the table level. These policies help you scale compared to RBAC, where access control is typically done on a schema or database level.
Write access is typically granted on a schema or database. This makes it easy to manage in Snowflake through manual grants. We recommend creating roles that give insert, update, and delete permissions to a specific schema or database and attach this role to a user. This attachment can be done manually or using your identity manager groups. (See the for details.) Note that Immuta is working towards supporting write policies, so this will not need to be separately managed for long.
This feature is called ‘’ and can be enabled using the following command in Snowflake: USE SECONDARY ROLES ALL
Policy role: This role gives Immuta the power to create and apply policy. Immuta can , or you can to create the policy role.
A role per team/domain (alternative): If you cannot create a role with USAGE
permissions for all objects, you can allow the different domains or teams in the organization to use a service user/role scoped to their data to register data sources. This delegates metadata registration, aligns well with type use cases, and means every team is responsible for registering its data sets in Immuta.
Immuta is not just a location to define your policy logic; Immuta also enforces that logic in your data platform. How that occurs varies based on each data platform, but the overall architecture remains consistent and follows the . The diagram below describes the recommended architecture from NIST:
To use Immuta, you must configure the Immuta native integration, which will require some level of privileged access to administer policies in your data platform, depending on your data platform and how the Immuta integration works. Refer to for Snowflake before configuring the native integration.
There are several different combinations of , so consider those as you plan your user synchronization.
In Immuta, permissions control what actions a user is allowed to take through the API and UI. The different permissions can be found in the .
We recommend using identity manager groups to manage permissions. When you configure the , you can enable group permissions. This allows you to control the permissions via identity manager groups and use the group assignment and approval process currently in place.
This guide is for anyone ready to begin their journey with Immuta who isn't using Snowflake. Specifically, it provides details for configuring Immuta that must be accomplished before moving on to the use cases.
If you are using Snowflake with Immuta, see the use case.
Ensure you have the Immuta software available to you. For the best experience, follow the steps below on Immuta SaaS because of the many .
Configure your users in Immuta, using the .
Read the and connect Immuta to your databases. Consider the if using Databricks.
in order to start using Immuta features on the data sources.
:
: This feature can be enabled when first configuring the integration or when editing the integration.
: While not required, it is recommended to enable this feature to properly audit unauthorized query events. Without it, unauthorized events will still show as successful. Project workspaces cannot be used with table grants, so if your organization relies on them, leave this feature disabled.
With enabled:
with . Note that it is enabled by default when configuring the integration.
: This feature sets the subscription policy of all new data sources to none when they are registered. Using this feature allows organizations to register all Snowflake tables in Immuta. The audit information for these tables will appear in the Detect dashboards, but users' access to them will not be impacted by Immuta until a subscription policy is set.
permission to see the Detect dashboards.
Navigate through Immuta Detect and explore the dashboards that visualize user and query audit information for your data environment.
These actions will result in users seeing the dashboards containing information on the audit events in your data environment. These dashboards will not contain any information on the sensitivity of your data.
: This option is the smoothest onboarding experience because it is the most automated process. You will not need to manually tag your data, and the framework to determine sensitivity is already set to use the SDD tags.
: This option requires more manual configuration, but is best for organizations that have already configured tags for their tables. Contact your Immuta representative for guidance.
Detect activity pages will display active charts once a supported integration is configured correctly and audit logs have been ingested. The user viewing must have the .
Snowflake with
Databricks Unity Catalog with
See the for more information on the required configuration for each integration.
Query event sensitivity is determined by the tags with sensitivity metadata on the columns queried from Snowflake data sources. Immuta comes with a built-in framework with sensitivity tags, the . Ensure you have completed the .
If you have completed the above steps and still see query events as "Indeterminate" or "Nonsensitive", check that the right tags were applied in the data dictionary:
If the frameworks are inactive, . Once activated, allow time for the frameworks to run on your data sources. Then check the data dictionary again for RAF and DSF tags.
If both frameworks are activated but there are no RAF tags and no Discovered tags, .
If you will use the Immuta CLI instead of the GraphQL API, install and configure the Immuta CLI. Must be CLI v1.4.0 or newer.
s3:PutObject: Adds an object to a bucket.
Follow for adding a bucket policy in the Amazon S3 console. To create the policy for the bucket, you must be the bucket owner.
Edit the JSON in the Policy section to include a bucket policy like the example below. In this example, the policy allows immuta-audit-service (the ) to add objects to customer-bucket-name (and the contents within that bucket).
Note: If you use this example, replace the content in angle brackets with your and bucket name.
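A bucket policy of roughly this shape grants the Immuta audit service permission to add objects. The principal value is a placeholder — use the exact principal ARN provided by Immuta:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowImmutaAuditExport",
      "Effect": "Allow",
      "Principal": {
        "AWS": "<immuta-audit-service principal ARN>"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<customer-bucket-name>/*"
    }
  ]
}
```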
accessKeyId: AWS access key ID for authentication. See the for information about using an access key ID and secret access key.
For additional CLI commands, see the .
For additional GraphQL API commands, see the.
Configure your Immuta audit logs to export to your S3 bucket and allow Immuta to authenticate using an . With this option, you provide Immuta with an IAM role from your AWS account that is granted a trust relationship with Immuta’s IAM role for adding objects to your S3 bucket. Immuta will assume this IAM role from Immuta’s AWS account in order to perform operations in your AWS account.
which allows the role to add an object to a bucket.
Follow to create a new role for Immuta to assume and add objects to your S3 bucket.
Follow for creating IAM policies in the Amazon S3 console for the new role. Use the example JSON below to allow the provided role to add objects to the specified buckets. Ensure the buckets provided here are the ones used when configuring the export.
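As a sketch, the identity-based policy attached to the new role only needs to allow s3:PutObject on the export bucket (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<bucket-name>/*"
    }
  ]
}
```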
For additional CLI commands, see the
For additional GraphQL API commands, see the .
Follow for creating IAM policies in the Amazon S3 console. Use the example JSON below to create a trust policy between Immuta and your AWS bucket.
Immuta provides robust audit logging on actions within the application and on queries in native technologies like Snowflake, Databricks, and Unity Catalog. Users with the audit permission can view the audit page in Immuta and export audit logs to S3 or ADLS Gen2.
Export audit logs to S3: Use the CLI or GraphQL to export Immuta audit logs to S3. These logs can then be stored long-term, used for compliance, or viewed in analytic platforms.
Export audit logs to ADLS Gen2: Use the CLI or GraphQL to export Immuta audit logs to ADLS Gen2. These logs can then be stored long-term, used for compliance, or viewed in analytic platforms.
Run governance reports: Create a governance report in the Immuta UI to understand the state of your Immuta environment.
Audit overview: This reference guide describes Immuta's universal audit model, the events available in this model, and the recommended audit workflow.
UAM schema reference guide: This reference guide lists the UAM events and examples of the logs.
Query audit logs: These reference guides describe the audit available for the specific integration, details about enabling and configuring audit, and an example schema.
Audit export GraphQL reference guide: This reference guide describes the commands available in the GraphQL API for exporting audit logs.
Governance reports: This reference guide describes the different reports available in Immuta.
Unknown users in audit logs: Unity Catalog native query audit brings in audit information for all tables and data sources, so some audit logs are created from activity by users not registered in Immuta. These audit records will appear in Immuta with the username Unknown, providing valuable information about activity. This guide illustrates how to determine the username of these Unknown users and register them in Immuta.
Download audit logs: Download Immuta legacy audit logs through the API.
Legacy to UAM Migration: Understand the audit events from UAM that map to legacy audit events.
In order to take advantage of all the capabilities of Immuta, you must make Immuta aware of your data metadata. This is done by registering your data with Immuta as data sources. It’s important to remember that Immuta is not reading your actual data at all; it is simply discovering your information schemas and pulling that information back as the foundation for everything else.
This section offers the best practices when onboarding data sources into Immuta.
If you have an external data catalog, configure the catalog integration first; then register your data in Immuta. This process will automatically tag your data with the external catalog tags as you register it.
Use Immuta's no default subscription policy setting to onboard metadata without affecting your users' access. This means you onboard all metadata in Immuta without any impact on current accesses, which gives you time to fully convert your operations to Immuta without causing unnecessary data downtime. Immuta will only take control when the first policies are applied. Because of this, register all tables.
While it can be tempting to start small and register only the pieces of data that you intend to protect, you must remember that Immuta is not just about access control. It’s important to register your data metadata so that Immuta can also track activity and understand where that sensitive data lies (with Immuta Detect). In other words, Immuta can’t tell you where you have problems unless you first tell it to look at your metadata.
Without the no default subscription policy, Immuta will set each data source's subscription policy to the most restrictive option which automatically locks data down during onboarding. To unlock the data and give your users access again, new subscription policies must be set.
If you are delegating the registration and control of data, then read our Data mesh use case for more information.
Use the /api/v2/data endpoint to register a schema; then use schema monitoring to find new data sources and automatically register them.
One of the greatest benefits of a modern data platform is that you can manage all your data transformations at the data tier. This means that data is constantly changing in the data platform, which may result in the need for access control changes as well. This is why it is critical that you enable schema monitoring and column detection when registering metadata with Immuta. This will allow Immuta to constantly monitor and update for these changes.
It’s also important to understand that many data engineering tools make changes by destructively recreating tables and views, which results in all policies being dropped in the data platform. This is actually a good thing, because this gives Immuta a chance to update the access as the changes are found (policy uptime) while the only user that can see the data being recreated is the creator of that change (data downtime for all other users). This is why schema monitoring and column detection are so critical.
Requirements:
Immuta permission AUDIT
If you will use the Immuta CLI instead of GraphQL API, install and configure the Immuta CLI. Must be CLI v1.4.0 or newer.
Before Immuta can export audit events to your Azure Data Lake Storage (ADLS) Gen2 storage account, you need to create a shared access signature (SAS) token that allows the Immuta audit service to add audit logs to your specified ADLS storage account and file system.
Follow the Azure documentation to create the following in Azure:
An ADLS Gen2 storage account with the following settings required for audit export:
Enable hierarchical namespace
Standard performance is adequate, but premium may be used
A shared access signature (SAS) for your dedicated container with at least the following permissions at the storage account or container level:
Create
Write
Save the SAS token to use in the next steps. Do not navigate away from the SAS page unless you have saved the token.
Configure the audit export to ADLS using the Immuta CLI or GraphQL API with the following fields:
interval: The interval at which audit logs will be exported to your ADLS storage. They can be sent at 2-, 4-, 6-, 12-, or 24-hour intervals.
storage account: The name of the storage account you created that your audit logs will be sent to.
file system: The name of the file system (or container) you created that your audit logs will be written to.
path: The name of the path in the file system. This will be a new folder or directory in the container where Immuta will send your audit logs for storage.
SAS token: The previously generated SAS token.
Run the following command with the above fields in a JSON file:
Example ./your-exportConfig.json
file
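The configuration file is not reproduced here, but based on the fields listed above, a sketch of what it might look like follows. The exact key names and accepted interval values are assumptions — confirm them against the audit CLI reference guide:

```json
{
  "interval": "EVERY_24_HOURS",
  "storageAccount": "<your-storage-account-name>",
  "fileSystem": "<your-container-name>",
  "path": "immuta-audit-logs",
  "sasToken": "<your-sas-token>"
}
```

Replace the content in angle brackets with your own values.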
For additional CLI commands, see the audit CLI reference guide.
Run the following mutation against https://your-immuta.com/api/audit/graphql, with the above fields passed directly:
Example response
For additional GraphQL API commands, see the GraphQL API reference guide.
Detect is one of the Immuta flagship modules. Immuta Detect continually monitors your data environment to help answer questions about your most active data users, the most accessed data, and the events happening within your data environment. This understanding can help drive prioritization of where to place access control policies in Immuta’s other flagship module: Immuta Secure, which is why it is recommended that you start with Detect.
Data use has become ubiquitous across every industry and, with it, so have threats to data security. In response, organizations are undertaking a number of enterprise security initiatives, including those aimed at continuously detecting and managing internal data security risks, managing data security posture, and identifying privacy risks.
A key requirement of these initiatives is the ability to inventory and continuously monitor user access behavior and risk across modern cloud data platforms like Snowflake and Databricks. Yet, most existing solutions fall short of giving a complete view of what’s happening with regard to data access in those systems at any given time. To fill the gap, security and data platform teams looking to protect data and to manage and remediate data access risk should seek solutions that make it easy to identify and track sensitive data and monitor data access risk across cloud data platforms.
Immuta Detect provides these capabilities, in addition to Immuta’s established discovery and security capabilities, as part of a comprehensive data security platform. This page explains how Immuta Detect can help you achieve full-spectrum data security, so you can rest assured data is protected and risks are kept at bay.
Immuta Detect provides security and platform teams with granular insights into data activity. With detailed user and data activity views that summarize data source activity by time frame, data access event categorization, most active data sources, and sensitive data indicators, teams receive actionable insights and are able to drill down to specific data sources.
Detect also shows detailed data access behavior analytics like person activity, queries over time, and sensitive data indicators.
Each data source column is assigned a sensitivity level based on its classification under the organization’s respective data security framework, as well as the mitigations applied to a user querying that column.
Incident alerts can be set up so that security and data teams are always aware of risks and anomalies and can be proactive in countermeasures.
Ultimately, Immuta Detect enables data security and platform teams to easily and quickly answer questions such as:
What data access activity took place in the last 24 hours?
Who accessed sensitive data, and what sensitive data was accessed?
What are the most trafficked data sources containing sensitive data?
What users were most active in accessing sensitive data?
How do I quantify, assess, and show my organization’s data security posture?
How can I stay aware of data security posture changes?
Use these audit export configuration commands to manage exporting your audit logs to S3 and ADLS Gen2. To configure an audit export, see the Export to S3 or Export to ADLS guides.
To disable a configuration, use the disableExportConfiguration mutation:
To enable a disabled configuration, use the enableExportConfiguration mutation:
To delete a configuration, use the deleteExportConfiguration mutation:
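Each of these mutations follows the same basic shape. As a sketch, one might look like the following — the argument name (id) and the selected return fields are assumptions, so check the audit export GraphQL reference guide for the exact schema:

```graphql
# Argument and return field names below are assumptions, not the verified schema.
mutation {
  disableExportConfiguration(id: "<export-configuration-id>") {
    id
    enabled
  }
}
```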
Immuta reports allow data governors to use a natural language builder to instantly create reports that detail user activity across Immuta.
Click select entity and choose the option you would like the report to be based on from the dropdown menu. Your options include User, Group, Project, Data Source, Purpose, Policy Type, Connection, or Tag.
After making your selection, type your entity name in the enter name field.
Select the name from the dropdown menu that appears. Once the entity name has been selected, a number of reports will populate the center window.
Click a tile with the description of the report to run that report.
Once you've run the report, you can click the Export to CSV button in the top right of the page to download the report.
Note: If you would like to switch reports from this page, you can make changes by clicking the dropdown menu and then Refresh to run a new report. Otherwise, click Back to Report Builder in the top right of the page to return to the full report builder.
Requirement: Immuta permission AUDIT, Audit Activity, or data owner
The Detect dashboards are visualizations of the data being queried in your data environment, how sensitive it is, and who the active users are. Use the guides below to adjust a dashboard to show the information you're most interested in.
You can filter dashboards in several ways and use multiple filters at once. To filter a dashboard,
Click Filters.
Select the filter you want, and select the type.
Repeat this process as necessary for any filters you want to apply to the dashboard.
To remove the filters, click the delete (X) icon.
Note: For a more responsive experience, Immuta limits the number of auto-suggested filter values to 100 of the most active values. The total item count for each filter type still reflects the number of events in the dashboard time range.
By default, the time range for all dashboards is 24 hours. To select a different time range,
Click the date range.
Select the time range you want from the options, choose to enter a custom date range, or choose to enter a custom time range in hours.
Note that this will revert to the default when you log out.
By default, all sensitivity types and indeterminate will appear on the graph if you have set up classification.
To remove specific sensitivity types, select in the chart legend the name of the sensitivity type you do not want on the graph. The icon color will change to dark gray to signal it is not represented on the graph.
To add specific sensitivity types, select the name of the sensitivity type you want on the graph. The icon color will change from dark gray to the color it is represented by on the graph (blue or light gray).
Note that this only affects the dashboard you are viewing and will revert to the default when you navigate away from the page.
Audit events from Snowflake and Databricks Unity Catalog are ingested on a configurable schedule; however, users can manually pull in audit events from these integrations at any time by completing the following steps.
Navigate to the Audit page.
Click Load Audit Events.
The ingestion job may take time to finish, but will complete in the background. Once it is complete, the new audit events will populate on the events page.
Immuta Detect monitors your data environment and provides analytic dashboards in the Immuta UI based on your data use.
This guide illustrates how to adjust a dashboard to show the information you're most interested in, such as the data being queried in your data environment, how sensitive it is, and who the active users are.
This reference guide details the components, architecture, and dashboards available with Detect.
Immuta reports allow data governors to use a natural language builder to instantly create reports that delineate user activity across Immuta. These reports can be based on various entity types, including users, groups, projects, data sources, purposes, policy types, or connection types.
User reports can be run for all users or for individual users who have been registered in Immuta. Non-registered users' activity will not appear in reports.
Data sources subscribed to. This report lists data sources each user is subscribed to and includes user roles, subscription types, when users last subscribed, who approved the users' subscriptions to the data sources, when the subscriptions expire, what attributes the users possess, and the groups the users belong to.
Status of all users. This report lists account information of all users in the system, including the users' full names, usernames, IAMs, HDFS principals, and last login dates.
Groups the user belongs to. This report lists the names of the groups the user belongs to and the dates that groups were joined.
Data sources the user subscribes to. This report details the data source names, the user's roles, when the user last subscribed, who approved the subscriptions, when the subscriptions expire (if applicable), and the reasons for subscribing (if applicable).
Projects the user is currently a member of. This report lists the project names, whether the projects are public or private, the user's roles in the projects, the creator of the projects, when the projects were created, and when the user joined the projects.
All data sources ever accessed by the user. This report lists the data source names, when the data sources were first accessed by the user (or "read date"), and when the data sources were last accessed by the user. By default, this report only displays the last month of results. (You can download the full report by clicking Export to CSV.) The time period can be configured in the date field at the top of the report's page.
Attributes the user has. This report lists the current attributes a user has and the values assigned to each attribute.
Purposes for accessing data. This report lists all purposes under which the user has accessed data sources. By default, this report only displays the last month of results. (The full report can be downloaded by clicking Export to CSV.) The time period can be configured in the date field at the top of the report's page.
Group reports can be run for all groups or for individual groups.
Data sources that members of this group are subscribed to. This report lists the data source names, the group's role, when the group last subscribed to the data sources, who approved the subscriptions, and the expiration dates (if applicable), and reasons (if applicable) for the subscriptions.
Users who belong to the group. This report lists the names of users and the dates the users joined the group.
Projects that users in this group are members of. This report includes the names of the projects, whether the projects are public or private, the group's role in the projects, the names of the project creators, when the projects were created, and when the group joined the projects.
Attributes of the group. This report includes the names of the attributes assigned to this group.
Users and groups who are members of the project. This report includes usernames, email addresses, user roles in the project, when the users joined, and the subscription types. The subscription types may be "Individual User," indicating that the user joined the project directly, or it might be "Group," in which case the name of the group will be stated. Group subscriptions occur when an entire group is added to a project.
Data sources that are part of the project. This report lists the data source names, the reasons given when added to the project (if applicable), the users who added the data sources, and when the data sources were added to the project.
Purpose of the project. This report includes the purpose name, the user who added the purpose, and when the purpose was added to the project.
Data source reports can be run for all data sources or for individual data sources that are registered in Immuta. Activity on non-registered tables will not appear in the reports.
Users and groups subscribed to data sources. This report lists all users and groups subscribed to every data source and includes usernames, email addresses, subscription types, user roles, subscription dates, who approved the subscriptions, expiration dates, and user attributes.
Users and groups subscribed to the data source. This report lists the names of users, reasons for accessing the data sources (if applicable), user roles, email addresses, when users last subscribed, who approved the subscriptions, when the subscriptions expire (if applicable), and the subscription types. A subscription type may be "Individual User," indicating that the user subscribed to the data source directly, or it may be "Group," in which case the name of the group will be stated. Group subscriptions occur when an entire group is added to a data source.
Projects that contain the data source. This report lists the project names, the users who added the data source to projects, when the data source was added to projects, the reasons for adding the data sources (if applicable), whether the projects are public or private, who created the projects, and when the projects were created.
Purposes of all projects that contain the data source. This report states the purpose names, the users who assigned the purposes to the projects, the dates the purposes were assigned, the names of the projects, the reasons the purposes were added (if applicable), whether the projects are public or private, who created the projects, and when the projects were created.
All users who have accessed the data source. This report lists usernames, email addresses, each user's latest query, and the date of the last access. By default, this report only displays the last month of results. (The full report can be downloaded by clicking Export to CSV.) The time period can be configured in the date field at the top of the report's page.
All purposes for data source access. This report lists users who have accessed the data source and the purposes under which they were working. By default, this report only displays the last month of results. (The full report can be downloaded by clicking Export to CSV.) The time period can be configured in the date field at the top of the report's page.
All users who have subscribed to the data source. This report lists users or groups, email addresses, when users subscribed, reasons for subscriptions (if applicable), who approved the subscriptions, when the subscriptions expire, and the dates and reasons users unsubscribed (if applicable). By default, this report only displays the last month of results. (The full report can be downloaded by clicking Export to CSV.)
All identifiers for the columns of the data source. This report lists all the identifiers that matched to a column of the data source through sensitive data discovery. It includes information about the column name, the hit percentage, and the number of rows sampled.
Users who are members of projects with this purpose. This report lists usernames, email addresses, their roles in the project, the names of the projects, whether the projects are public or private, the creators of the projects, when the projects were created, when users joined, and their subscription types (individual or group).
Data sources that are part of projects with this purpose. This report lists the names of the data sources, who created the data sources, the project names, whether the projects are public or private, the creators of the projects, whether the projects have other purposes, and when the projects were created. Note that whether projects have other purposes will be assigned as "True" or "False."
Whether any other purposes have been combined with this purpose. This report lists the names of the other purposes combined with the purpose you select, the project name where they are combined, the users who added each purpose, the project creator, whether the project is public or private, and the date the project was created.
Projects that have this purpose. This report lists the names of the projects, the users who added the purpose, whether the projects are public or private, creators of the projects, whether the projects have other purposes, and when the projects were created.
Data sources that have been accessed for this purpose. This report lists the names of the data sources, the users who accessed data sources for this purpose, the project names, and whether projects have other purposes. By default, this report only displays the last month of results, but the time period can be configured in the date field at the top of this report's page.
Data sources with this policy type. Immuta supports a range of policy types, such as masking, WHERE clauses, purpose restrictions, and more. This report lists every data source with this policy type, including when they were created, who created the data sources, who created the policy, and when the policy was created.
Global policy reports can be run for all global policies or for individual global policies.
Global policies that have been disabled. This report details the names of the policies, the policies themselves, the policy types, the data sources from which the policies were disabled, who disabled the policies, when they were disabled, the justifications the users provided for disabling the policies, who created the policies, when the policies were created, and how the policies were associated with the data sources.
Global policies that cannot currently be applied. This report details the names of the policies, the policies themselves, the policy types, the names of the data sources the policies cannot be applied to, when the data sources were created, when the policies were created, the reasons the policies cannot be applied, who created the policies, and how the policies are associated with the data sources.
Data sources impacted by the policy. This report lists the data sources, when the data sources were created, and whether or not the policy is fully applied to the data sources.
Data sources impacted by the policy that have not been certified. This report lists the data sources that have not been certified, when the global policy was applied, and the data owner.
Data sources impacted by the policy that have been certified. This report lists the data sources that have been certified, the user that certified it, when the global policy was applied, and when it was certified.
Data sources with this connection type. This report lists the data sources, each data source's creator, the creation date, and the tables or queries used by the connection selected.
Tag reports can be run for all tags or for individual tags.
Data sources this tag has been assigned to. This report generates a list of data sources associated with that tag and includes the columns tagged, the value types of the data tagged, who tagged the data sources, when the data sources were tagged, and when the data sources were created.
Purposes associated with data sources containing this tag. This report generates a list of purposes under which users have accessed data sources containing this tag. By default, this report only displays the last month of results. (The full report can be downloaded by clicking Export to CSV.) The time period can be configured in the date field at the top of the report's page.
Users who have accessed data sources containing this tag. This report lists users who have accessed data sources with this tag, their email addresses, when they queried the data, and when the data sources were created.
Projects that contain data with this tag. This report details the projects associated with this tag, whether or not the projects are public or private, when the projects were created, the data sources in the projects, and when the data sources were created.
Users that have subscribed to data sources with any tag. This report lists users, their subscription types, and all of the tags in Immuta, indicating for each tag whether the user is subscribed to at least one data source where that tag is applied.
Data sources any tag has been applied to. This report lists data sources with the tags applied to them and the columns they are applied to.
Projects that contain a data source with any tag. This report lists projects and the data sources assigned to them with the tag they have applied.
Columns with SDD tags applied. This report generates a list of all Discovered tags that have been applied to data sources by sensitive data discovery. It includes information about the column it is applied to within each data source and active policies that use the tag.
Columns with legacy SDD tags. This report generates a list of all Discovered tags applied by legacy SDD and provides context if native SDD also found those tags. It includes information about the data sources, columns, and active policies that use the tag.
Unity Catalog native query audit brings in audit information for all tables and data sources, so some audit logs are created from activity by users who are not registered in Immuta. These audit records appear in Immuta with the username Unknown, still providing valuable activity information. They can be seen on the audit page or in the user and data activity dashboards.
While the Immuta user is unknown, the user's Databricks Unity Catalog username can be found within the audit log. To view the user's data platform username:
Navigate to the event page.
Select View JSON.
The username can be found in the auditPayload.technologyContext.account.username field.
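As a sketch, pulling that field out of an event's JSON looks like the following. The payload here is a hypothetical stand-in containing only the relevant path, not a full audit record:

```python
import json

# Hypothetical audit event containing only the path described above:
# auditPayload.technologyContext.account.username holds the data
# platform (Databricks Unity Catalog) username.
event_json = """
{
  "auditPayload": {
    "technologyContext": {
      "account": {"username": "analyst@example.com"}
    }
  }
}
"""

event = json.loads(event_json)
username = event["auditPayload"]["technologyContext"]["account"]["username"]
print(username)  # → analyst@example.com
```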
To improve your future audit records, ensure these users are properly registered and can be named in the logs:
If you have not registered any users, pull in users from your IAM.
If you have registered users but this user was missed, manually create the Immuta user.
If this user is in Immuta but not appearing in the audit record, map the user's Databricks username into Immuta.
Deprecation notice: The /audit endpoint has been deprecated and replaced by Immuta Detect.
Generate your API key on the API Keys tab on your profile page and save the API key somewhere secure.
You will pass this API key in the authorization header when you make a request, as illustrated in the example below:
Download your audit logs using the GET /audit endpoint. To filter or sort the audit logs, use the query parameters on the /audit endpoint API reference page. For example, the request below saves 50 audit logs for https://your-immuta-url.immuta.com in the file audit-logs-file.json, with the audit records sorted by time in descending order.
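A minimal Python sketch of building such a request with the standard library. The query parameter names (size, sortField, sortOrder) are assumptions for illustration and should be checked against the /audit endpoint API reference; the URL and API key are placeholders:

```python
import urllib.parse
import urllib.request

IMMUTA_URL = "https://your-immuta-url.immuta.com"  # your deployment URL
API_KEY = "<your-api-key>"                          # from the API Keys tab

# Query parameter names are assumptions for illustration; confirm them
# against the /audit endpoint API reference page.
params = urllib.parse.urlencode(
    {"size": 50, "sortField": "dateTime", "sortOrder": "desc"}
)
request = urllib.request.Request(
    f"{IMMUTA_URL}/audit?{params}",
    headers={"Authorization": API_KEY},  # API key in the authorization header
)

# Executing the request and saving the response would look like:
# with urllib.request.urlopen(request) as resp, \
#         open("audit-logs-file.json", "wb") as f:
#     f.write(resp.read())
```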
Public preview
This feature is available to all accounts. Contact your Immuta representative to enable this feature.
Immuta Detect monitors automate the otherwise manual aggregation and calculation of user activity metrics based on query events, and can notify you when those metrics exceed your intended operating thresholds. When a user's query activity metric exceeds a configured threshold within a timeframe, Immuta creates an observation that links the user's query audit events that contributed to the breached metric. You can specify monitor conditions that work with the information in the query event context to compute each user's activity metrics. Monitors can be configured with multiple thresholds that assign an observation severity, letting you control which observation severities trigger a webhook notification from Immuta Detect.
Create monitors to gain awareness of when your users' behavior changes, maintain data availability through data platform policy changes, ensure access patterns remain consistent for your data controls to remain effective, and be notified about anomalies. See the following example use cases:
Change management: Configure a monitor to notify you when unauthorized or failed queries to production data sources exceed a baseline threshold, indicating potential data outage from a recent subscription policy change.
Know when a new user has gained access to sensitive data: Configure a monitor to notify you the first time a user accesses any data source that is classified sensitive or highly sensitive and tagged specifically within a controlled set of data sources.
Find high-volume data activity: Use Detect to review typical query count over a supported timeframe, and configure a monitor for when a user has issued more than the typical number of queries.
Surface forbidden data combinations: For data sources or columns that must never be joined in a query, configure a monitor with query tag conditions to notify you when forbidden queries may have occurred.
Track data platform cost and experience deviations: Configure a monitor to watch for long-running queries that dominate resources or need optimization.
Immuta permission AUDIT
Detect monitors can be created for data sources from the following integrations:
The query event context includes a set of metadata that is used by Detect monitors to compute each user's query activity metrics. Query audit events in Immuta can include metadata such as the sensitivity classification, query execution outcome, queried column tags, queried table tags, data source name, and schema. The sensitivity classification is set if Immuta Discover was configured at the time of query audit processing. For example, the sensitivity measures emitted by Discover's Data Security Framework rules that inspect adjacent columns are used by Detect to help you assess the sensitivity of each query.
You can review the full context of a query audit event in the Query detail page in Detect.
Monitors are organization-defined thresholds for access to your organization's data by users. Within monitors, an organization can define what user actions should trigger an observation. If a user activity metric exceeds the configured severity thresholds in the specified timeframe, Immuta creates an observation.
Monitors can be scoped to individual data sources, all the data sources within a schema, or all registered data sources in Immuta.
Additionally, monitors can selectively assess user activity based on matching conditions in the query event context. You can create monitors for
when a user accesses more than a set number of rows of data with specific tags or sensitivity.
when a user has made more than a set number of queries of data with specific tags or sensitivity.
when a user has made more than a set number of queries, each over a specific query duration.
when a user has more than a set number of unauthorized or failed queries against any data source in Immuta.
All specified conditions must be satisfied for the query event to contribute to the user activity metric. For example, you can specify both high sensitivity and a specific tag for the event to count toward the selected user activity metric.
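As an illustration, a query event context can be thought of as a bundle of metadata that every monitor condition is checked against. The field names and values below are illustrative stand-ins, not the exact audit schema:

```python
# Hypothetical query event context; field names are illustrative only.
event_context = {
    "dataSource": "analytics.public.claims",
    "schema": "public",
    "outcome": "success",
    "sensitivity": "highly sensitive",
    "columnTags": ["Discovered.PII"],
    "tableTags": ["Medical Claims"],
}

def matches(event, *, sensitivity=None, tags=()):
    """True only when the event satisfies every specified condition."""
    if sensitivity is not None and event["sensitivity"] != sensitivity:
        return False
    event_tags = set(event["columnTags"]) | set(event["tableTags"])
    return all(t in event_tags for t in tags)

# Both the sensitivity and tag conditions must hold for this event
# to count toward the user activity metric.
print(matches(event_context,
              sensitivity="highly sensitive",
              tags=["Medical Claims"]))  # → True
```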
Query duration details
The query duration option for monitors provides notifications for long-running queries:
When monitoring the user query count metric, each query must meet the query duration condition to be included in a user's query count. When the user query count metric crosses the monitor thresholds, an observation will be created.
When monitoring rows accessed, each query must meet the query duration condition for that query row count to be included in the user's rows accessed metric. When the user's rows accessed metric crosses the monitor thresholds, an observation will be created.
Best practice: When using monitors, set the audit frequency for each integration to the lowest value possible for your organization. For Snowflake and Databricks Unity Catalog integrations, frequent audit record ingestion keeps audit records highly current; however, the frequent jobs could have performance and cost impacts.
Monitors create observations when a user's query activity metrics exceed the specified threshold within the configured timeframe. Evaluating whether the user activity metrics have exceeded monitor thresholds is performed using a sliding window algorithm in increments of 10 minutes aligned to query events' timestamps.
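A toy sketch of this kind of sliding-window evaluation follows. The 10-minute increment matches the description above; the metric, timeframe, and threshold values are illustrative:

```python
from datetime import datetime, timedelta

# The window slides in 10-minute increments, per the description above.
BUCKET = timedelta(minutes=10)

def exceeds_threshold(completion_times, timeframe, threshold):
    """True if any sliding window of length `timeframe`, stepped in
    10-minute increments, contains more than `threshold` queries."""
    if not completion_times:
        return False
    times = sorted(completion_times)
    start = times[0]
    while start <= times[-1]:
        window_count = sum(1 for t in times if start <= t < start + timeframe)
        if window_count > threshold:
            return True
        start += BUCKET
    return False

# Five completed queries within one hour, against a threshold of 4:
queries = [datetime(2024, 1, 1, 9, m) for m in (0, 5, 12, 18, 25)]
print(exceeds_threshold(queries, timedelta(hours=1), threshold=4))  # → True
```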
Only completed queries are evaluated by monitors. Once audited, a query falls into the timeframe window corresponding to the time it completed.
An observation is based on the results of monitors and a user's query activity metric. Each observation can be marked open or acknowledged for review tracking purposes.
Observations surface the user query activity being monitored once the acceptable threshold has been exceeded. When creating a monitor, you can choose to be notified about the observations in one of three ways:
Never: Every time the monitor threshold is exceeded, an observation will be created in Immuta, but you will not receive any webhook events for it.
Notify each time an Observation is generated: Every time the monitor is exceeded, it will create an observation, and a webhook event will notify you of that creation.
Notify the first time an Observation is generated for each user: Every time the monitor is exceeded, it will create an observation, and a webhook event will notify you when the first observation is created for a user. You will not receive notifications for observations from that monitor again for that specific user.
Observations contain the following information:
Open or acknowledged label: A quick visual of the state of the observation.
Severity label: This label aligns with the severity specified by the threshold within the monitor.
Username: The user who exceeded the severity thresholds of the monitor.
Message: An Immuta-generated message describing what created the observation.
Monitor type: The type of monitor that created the observation.
Related events: A list of audit events that created the observation with links to the event details page for each.
Users can use the webhooks configured when creating a monitor to send notifications to outside applications. Applications like Slack and Teams can receive these webhooks and surface them as alerts in the app. The notification contents include the Immuta profile ID for the user that caused the observation, the severity of the observation, a short message about what prompted the observation, and a link to look at the observation in Immuta for additional details. Note that the notification does not show any personally identifiable information about your users. The Immuta profile ID is an integer that represents a user profile in Immuta, but any additional information about the user must be accessed in the Immuta UI.
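A hypothetical notification body reflecting the contents described above. The field names are illustrative, not Immuta's actual webhook schema:

```python
import json

# Illustrative (not actual) shape of a monitor webhook notification.
# Note there is no PII: only the integer Immuta profile ID identifies
# the user, and details live behind the observation link.
notification = {
    "profileId": 42,       # integer Immuta profile ID
    "severity": "High",    # severity assigned by the breached threshold
    "message": "User query count exceeded the configured threshold",
    "observationUrl": (
        "https://your-immuta-url.immuta.com/detect/observations/123"
    ),
}
payload = json.dumps(notification, indent=2)
print(payload)
```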
Due to a Snowflake limitation, a monitor for unauthorized query outcome cannot be combined with specific tags or sensitivities. It must be created for all data sources and cannot be scoped to a schema or specific data source. Successful query outcomes can be combined with specific tags or sensitivities and scoped to a specific schema or data source.
Public preview
This feature is available to all accounts. Contact your Immuta representative to enable this feature.
Requirements:
Immuta permission AUDIT
Monitors feature enabled
Navigate to Detect in the navigation menu.
Click Create Monitor.
Enter a Name for the monitor.
Choose what to monitor in the dropdown menu:
When User Accessed Any Data Source: This monitors user activity for all data sources in Immuta.
When User Accessed Data Source in Schema: This monitors user activity for just the data sources within the schema you enter.
When User Accessed Specific Data Source: This monitors user activity for the specific data source you enter.
Create conditions for the monitor to further scope user activities:
Query Duration: This scopes the monitor to consider the duration of the query. Enter the Query Duration in seconds.
Tag: This scopes the monitor to consider queries whose event contexts include all of the selected tags. The query must be associated with all specified tags in any combination of queried column tags, queried classification tags, and queried table tags. For more information, see the query event context concept.
Query Outcome: This scopes the monitor to consider the queries' results as successful, unauthorized, or failure. You can select Unauthorized or Failed to create a monitor that can notify you when a registered Immuta user has exceeded the configurable threshold for unauthorized or failed queries. This condition only works with the User Query Count metric scoped to When User Accessed Any Data Source.
Sensitivity: This scopes the monitor to only consider queries that are classified as sensitive or highly sensitive. This condition should only be used if classification has been configured.
All conditions must be satisfied for the query to be considered by the monitor.
Select Next to configure rules.
Select the Timeframe from the dropdown menu to specify the time range within which the threshold must not be exceeded.
Choose what kind of user activity metric to monitor in the metric dropdown menu:
Number of Rows Accessed: This monitors for the quantity of rows the user accessed and can be combined with additional conditions on tags and sensitivity. The exact number of rows is configured in the severity thresholds.
User Query Count: This monitors the number of queries the user made and can be combined with additional conditions on tags, sensitivity, and query outcome. The exact number of queries is configured in the severity thresholds.
Select at least one of the Severity Thresholds to set thresholds for the configured user activity metric. An observation will be created and assigned the matching severity when the metric exceeds the threshold.
Click Next to show the notifications configuration.
Choose the frequency of the notifications to webhooks when an observation is created:
Never: You can review observations in the Immuta application, and Immuta will not send webhook notifications when observations are made.
Notify each time an Observation is generated: Every time the monitor creates an observation, a webhook notification will be sent.
Notify the first time an Observation is generated for each user: A webhook notification will be sent for the first observation the monitor creates about a user. You will not receive further notifications from that monitor for the same user; subsequent observations about previously notified users can be reviewed in the Immuta UI.
Select a webhook from the dropdown menu or opt to create a new webhook.
Choose the severity you want notifications for. This will send out webhook notifications only for the severity threshold that you select.
Click Next and review the monitor selections.
Click Create Monitor.
Intermingling your pre-existing roles in Databricks with Immuta can be confusing at first. The sections below outline best practices for how to think about roles in each platform.
Access to data, platform permissions, and the ability to use clusters and data warehouses are controlled in Databricks Unity Catalog with permissions granted to individual users or groups. Immuta can manage those permissions to grant users read access based on subscription policies.
This section discusses best practices for Databricks Unity Catalog permissions for end-users.
Users who consume data (directly in your Databricks workspace or through other applications) need permission to access objects. But permissions are also used to control write access, Databricks clusters and warehouses, and other object types that can be registered in Databricks Unity Catalog.
To manage this at scale, Immuta recommends taking a three-layer approach, where you separate the different permissions into different privileges:
Privileges for read access (Immuta managed)
Privileges for write access (optional, soon supported by Immuta)
Privileges for warehouse and clusters, internal billing
Read access is managed by Immuta. By using subscription policies, data access can be controlled to the table level. Attribute-based table GRANTS help you scale compared to RBAC, where access control is typically done on a schema or catalog level.
Since Immuta leverages native Databricks Unity Catalog GRANTs, you can combine Immuta’s grants with grants done manually in Databricks Unity Catalog. This means you can gradually migrate to an Immuta-protected Databricks workspace.
Write access is typically granted on a schema, catalog, or volume level, which makes it easy to manage in Databricks Unity Catalog through manual grants. We recommend creating groups that give INSERT, UPDATE, or DELETE permissions to a specific schema or catalog and attaching these groups to users. This attachment can be done manually or using your identity manager groups. (See the Databricks documentation for details.) Note that Immuta is working toward supporting write policies, so this will not need to be separately managed for long.
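As a sketch, the per-group write grants described above amount to SQL statements like those generated below. The group and schema names are hypothetical, and the exact privilege names available at schema level should be confirmed against the Databricks Unity Catalog documentation:

```python
# Sketch: generating the per-group write grants described above.
# Privilege names follow the text; verify them against the Databricks
# Unity Catalog privilege model before use.
WRITE_PRIVILEGES = ("INSERT", "UPDATE", "DELETE")

def write_grants(group: str, schema: str) -> list[str]:
    """Build one GRANT statement per write privilege for a group."""
    return [
        f"GRANT {priv} ON SCHEMA {schema} TO `{group}`"
        for priv in WRITE_PRIVILEGES
    ]

# Hypothetical group and schema names:
for stmt in write_grants("finance-writers", "main.finance"):
    print(stmt)
```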
Warehouses and clusters are granted to users to give them access to computing resources. Since this is directly tied to Databricks’ consumption model, warehouses and clusters are typically linked to cost centers for (internal) billing purposes. Immuta recommends creating a group per team/domain/cost center, applying this group for cluster/warehouse privileges, and granting this group to users using identity manager groups.
Immuta has two types of service accounts to connect to Databricks:
Policy role: Immuta needs to use a service principal to be able to push policies to Databricks Unity Catalog and to pull audit records into Immuta (optional). This principal needs USE CATALOG and USE SCHEMA on all catalogs and schemas, and SELECT and MODIFY on all tables in the metastore managed by Immuta.
Data ownership role: You will also need a user/principal for the data source registration. A service account/principal is recommended so that when the user moves or leaves the organization, Immuta still has the proper credentials to connect to Databricks Unity Catalog. You can follow one of two best practices:
A central role for registration (recommended): It is recommended that you create a service role/user with SELECT permissions for all objects in your metastore. Immuta can register all the tables and views from Databricks, populate the Immuta catalog, and scan the objects for sensitive data using Immuta Discover. Immuta will not apply policy directly by default, so no existing access will be impacted.
A service principal per domain (alternative): Alternatively, if you cannot create a service principal with SELECT permissions for all objects, you can allow the different domains or teams in the organization to use a service user/principal scoped to their data. This delegates metadata registration, aligns well with data mesh use cases, and makes every team responsible for registering its data sets in Immuta.
Immuta’s universal audit model (UAM) provides audit logs with a consistent structure for query, authentication, policy, project, and tag events from your Immuta users and data sources. You can view the information in these UAM audit logs on the Detect dashboards or export the full audit logs to S3 and ADLS for long-term backup and processing with log data processors and tools. This capability fosters convenient integrations with log monitoring services and data pipelines.
When using S3, you can specify a bucket destination where Immuta will periodically export audit logs; when using ADLS, you can specify the container destination. If desired, you can configure both export options to send your audit logs to S3 and ADLS simultaneously.
The events captured are events relevant to user and system actions that affect Immuta or the integrated data platforms, such as creating policies or data sources and running queries.
See a list of the events captured and example schemas on the UAM schema reference guide.
The Immuta audit service is an independent microservice that captures audit events from Immuta and queries run against your Snowflake, Databricks, or Unity Catalog integration.
Immuta stores the export endpoints you provide during configuration, retrieves the audit records pushed to the audit service by your integration, and manages the audit exports based on an export schedule you define. These audit records are also stored to support future reporting and user interface enhancements that will allow you to search based on keywords and facets easily across the entire body of audit events.
After the integration endpoint has been configured, the export scheduler will run on the schedule you defined in your configuration.
When users query data and the event is audited, the audit service receives events from your Snowflake, Databricks Spark, Databricks Unity Catalog, or Starburst (Trino) integration.
Immuta exports the audit logs to your configured S3 bucket or ADLS container.
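Because each exported audit record is a one-line JSON object, the exported files can be processed with standard newline-delimited JSON (NDJSON) tooling. The sketch below is illustrative only: the records are hypothetical sample data shaped like the UAM examples in this guide, not output from a real export.

```python
import json

# Hypothetical sample of an exported UAM audit file (NDJSON: one JSON object per line).
ndjson_export = "\n".join([
    json.dumps({"action": "QUERY", "actor": {"id": "taylor@starburst.com"},
                "actionStatus": "SUCCESS"}),
    json.dumps({"action": "QUERY", "actor": {"id": "alex@starburst.com"},
                "actionStatus": "FAILURE"}),
    json.dumps({"action": "ApiKeyCreated", "actor": {"id": "admin@example.com"},
                "actionStatus": "SUCCESS"}),
])

def query_events(ndjson_text):
    """Parse an NDJSON audit export and keep only the query events."""
    records = [json.loads(line) for line in ndjson_text.splitlines() if line.strip()]
    return [r for r in records if r.get("action") == "QUERY"]

events = query_events(ndjson_export)
print(len(events))                # 2 query events
print(events[1]["actionStatus"])  # FAILURE
```

The same pattern applies whether the file was fetched from an S3 bucket or an ADLS container; only the download step differs.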
The table below outlines what information is included in the query audit logs for each integration where query audit is supported.
Legend:
The audit service does not capture system-level logging and debugging information, such as 404 errors.
Snowflake query audit events from a query using cached results will show 0 for the rowsProduced field.
Enrichment of audit logs with Immuta entitlements information is not supported. While you will see these entitlements in the Databricks Spark audit logs, the following will not be in the Databricks Unity Catalog audit logs:
Immuta policies information
User attributes
Groups
Immuta determines unauthorized events based on error messages within Unity Catalog records. When the error messages contain expected language, unauthorized events will be available for Databricks Unity Catalog audit logs; in other cases, it is not possible to determine the cause of an error.
Cluster queries are never marked as unauthorized; a denied cluster query is always recorded as a failure.
Data source information will be provided when available:
For some queries, Databricks Unity Catalog does not report the target data source for the data access operation. In these cases the activity is audited, yet the audit record in Immuta will not include the target data source information.
The target data source information is not available for unauthorized queries and events.
Auditing the columns affected by the query is not currently supported.
The cluster for the Unity Catalog integration must always be running for Immuta to audit activity and present audit logs.
Audit for the columns accessed in the query is not currently supported for Starburst, but is coming soon.
Audit for unauthorized access is not currently supported.
Audit including the user’s entitlements is not currently supported.
Immuta is not just a location to define your policy logic; Immuta also enforces that logic in your data platform. How that occurs varies based on each data platform, but the overall architecture remains consistent and follows the NIST Zero Trust framework. The below diagram describes the recommended architecture from NIST:
Immuta lives in the middle control plane. To do this, Immuta knows details about the subjects and enterprise resources, acts as the policy decision point through policies administered by policy administrators, and makes real-time policy decisions using the internal Immuta policy engine.
Lastly, and of importance to how Immuta Secure functions, Immuta also enables the policy enforcement point by administering the policies natively in your data platform in a way that can react to policy changes and live queries.
To use Immuta, you must configure the Immuta native integration, which requires some level of privileged access to administer policies in your data platform, depending on your data platform and how the Immuta integration works. If using Databricks, refer to the Databricks roles best practices before configuring the native integration.
Starburst (Trino) query audit logs is a feature that audits queries that users run natively in Starburst (Trino) and presents them in a universal format as Immuta audit logs. Users can view audit records for queries made in Starburst (Trino) against Immuta data sources on the audit page. Immuta audits the activity of Immuta users on Immuta data sources.
Starburst (Trino) integration with the Starburst or Trino plugin version 443 or newer, or Trino 435 with the Immuta Trino 435.1 plugin
Starburst (Trino) users registered as Immuta users: Note that the users' Starburst (Trino) usernames must be mapped to Immuta. Without this, Immuta will not know the users are Immuta users and will not collect audit events for their data access activity.
Store audit logs
By default, Starburst (Trino) audit logs expire after 90 days. To retain audit logs long-term, export the universal audit model (UAM) logs to S3 or ADLS Gen2 and store them outside of Immuta.
Each audit message from the Immuta platform will be a one-line JSON object containing the properties listed below.
objectsAccessed is not available with Hive or Iceberg views.
columnsAccessed will include columns related to the query that were not actually accessed in some cases:
For row access policies that rely on a column in the queried table, that column will be included in columnsAccessed even if it was not part of the query.
For conditional masking, if the policy protects a column that was accessed, the conditional column will also be included in columnsAccessed.
In addition to the executed Spark plan, the tables, and the tables' underlying paths for every audited Spark job, Immuta captures the code or query that triggers the Spark plan. Immuta audits the activity of Immuta users on Immuta data sources.
Databricks users registered as Immuta users: Note that the users' Databricks usernames must be mapped to Immuta. Without this, Immuta will not know the users are Immuta users and will not collect audit events for their data access activity.
Store audit logs
By default, Databricks audit logs expire after 90 days. To retain audit logs long-term, export the universal audit model (UAM) logs to S3 or ADLS Gen2 and store them outside of Immuta.
Each audit message from the Immuta platform will be a one-line JSON object containing the properties listed below.
Below is an example of the queryText, which contains the full notebook cell (since the query was the result of a notebook). If the query had been from a JDBC connection, the queryText would contain the full SQL query.
This notebook cell had multiple audit records associated with it.
Beyond raw audit events (such as "John Doe queried Table X in Databricks"), the Databricks audit records include the policy information enforced during the query execution, even if the query was denied.
Queries will be denied if at least one of the conditions below is true:
User does not meet policy conditions.
User is not subscribed to the data source.
Data source is not in the user's current project.
Data source is in the user's current project, but the user is not subscribed to the data source.
Data source is not registered in Immuta.
The user's entitlements represent the state at the time of the query. This includes the following fields:

The policySet includes the following fields:
Native query audit for Databricks Unity Catalog captures user data access within Unity Catalog and presents it in a universal format as Immuta audit logs. Multiple access options are supported for audit:
Cluster queries with the following supported languages: SQL, Scala, Python, and R.
SQL warehouse queries
Immuta audits the activity of all Unity Catalog users and tables regardless of whether they are registered in Immuta.
A Databricks deployment with system tables capabilities
Store audit logs
By default, Databricks Unity Catalog audit logs expire after 90 days. To retain audit logs long-term, export the universal audit model (UAM) logs to S3 or ADLS Gen2 and store them outside of Immuta.
Immuta collects audit records at the frequency configured when enabling the integration, between 1 and 24 hours. The frequency is a global setting based on integration type, so organizations with multiple Unity Catalog integrations will have the same audit frequency for all of them. The more frequently audit records are ingested, the more current they are; however, frequent jobs can have performance and cost impacts. Immuta will start a Databricks cluster to complete the audit ingest job if one is not already running.
To manually request native query audit ingestion, click Load Audit Events on the Immuta audit page.
Immuta audits all data sources and users in Unity Catalog. An administrator can configure the integration to ingest only specific workspaces when enabling the integration.
Each audit message from the Immuta platform will be a one-line JSON object containing the properties listed below.
Enrichment of audit logs with Immuta entitlements information is not supported. While you will see these entitlements in the Databricks Spark audit logs, the following will not be in the native query audit for Unity Catalog:
Immuta policies information
User attributes
Groups
Immuta determines unauthorized events based on error messages within Unity Catalog records. When the error messages contain expected language, unauthorized events will be available for native query audit for Unity Catalog. In other cases, it is not possible to determine the cause of an error.
Audit for cluster queries does not support the UNAUTHORIZED status. If a cluster query is unauthorized, it will show FAILURE.
Data source information will be provided when available:
For some queries, Databricks Unity Catalog does not report the target data source for the data access operation. In these cases the activity is audited, yet the audit record in Immuta will not include the target data source information.
Data source information is not available for unauthorized queries and events.
Column information from the query is not currently supported.
Immuta audit records include unregistered data sources and users; however, activity from them will not appear in any governance reports.
Immuta Detect is a tool that monitors your data environment and provides analytic dashboards in the Immuta UI based on your data use. These dashboards offer visualizations of audit events, including user queries and, when Discover classification is enabled, the sensitivity of those queries, data sources, and columns. Detect works within your existing Immuta integrations.
Immuta Detect continually monitors your data environment to help answer questions about your most active data users, the most accessed data, and the events happening within your data environment. Detect can provide even more value with Discover classification enabled to answer questions about the sensitive data accessed by your users and the tables that contain sensitive data. Because of this information, your organization can do the following:
Meet compliance requirements more effectively
Quickly decide what data access is allowed for what purposes
Reduce the effort and time to respond to auditors about data access in your company
Reduce the effort of classifying data within the scope of security or regulatory compliance frameworks
Use Discover classification when using a Snowflake integration
You have the option to use Immuta Detect on its own or, if you are using a Snowflake integration, to enable Discover to classify your data. Both approaches have benefits, but for the fullest functionality, greatest value, and best experience, it is recommended to enable and tune classification.
Only available with Snowflake integrations
Dashboards with data activity patterns for data sources and users
Dynamic query sensitivity on joined tables calculates sensitivity based on the columns queried and their toxicity when joined
Dashboards to help users find the most recently accessed data sources and active columns
Immuta Detect uses several features of the Immuta platform to create user-friendly dashboards that are always available in the UI and do not need to be generated like Immuta reports. These dashboards are created by combining Snowflake audit events from registered users with the sensitivity of your data. Audit information and events are gathered from the Snowflake ACCOUNT_USAGE views into Immuta Detect. Additionally, Immuta Discover calculates the sensitivity of your data using Immuta's built-in frameworks, the Data Security Framework and Risk Assessment Framework, which find sensitive data on a column-by-column basis using tags applied by sensitive data discovery (SDD). Once Immuta does this work behind the scenes, users with the AUDIT permission will see dashboards that show the sensitive data within your organization's data environment and which users are accessing that data.
With Discover classification enabled, Immuta qualifies both columns and queries as the following sensitivity types in the dashboards:
Highly sensitive: Includes data that can cause severe harm or loss with inappropriate access or misuse.
Sensitive: Includes personal data and data that could cause harm or loss with inappropriate access or misuse.
Non-sensitive: Includes publicly available information or data that would not typically cause harm or loss if disclosed.
Indeterminate: The sensitivity of the data is unknown. Immuta deems sensitivity indeterminate because of an error in the query or because the sensitive data discovery (SDD) or classification has not completed processing at the time the query was run.
How does Immuta determine column sensitivity?
Column sensitivity is determined by the classification tags applied to the columns by the frameworks. The classification tags contain sensitivity metadata.
How does Immuta determine query sensitivity?
For queries that read from a single table, query sensitivity is determined by the column with the highest sensitivity in the query.
For a query that joins tables, Immuta uses the same classification rules applied to tables and applies those rules to columns of the query. Immuta applies a new set of classification tags to the query columns and calculates sensitivity for the query event in the audit record. These query classification tags are not included on the tables' data dictionary.
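The single-table rule above (query sensitivity is the highest sensitivity among the queried columns) can be sketched as follows. This is an illustrative model only: the numeric ranking of the sensitivity types is an assumption based on the definitions above, not Immuta's actual implementation.

```python
# Illustrative sketch only: ranks the Detect sensitivity types and picks the
# highest one among the queried columns, per the single-table rule above.
# The numeric ranking is an assumption, not Immuta's implementation.
RANK = {"NON_SENSITIVE": 0, "SENSITIVE": 1, "HIGHLY_SENSITIVE": 2}

def query_sensitivity(column_sensitivities):
    """Return the query-level sensitivity for a single-table query."""
    if not column_sensitivities:
        return "INDETERMINATE"  # no classification available yet
    if any(s not in RANK for s in column_sensitivities):
        return "INDETERMINATE"  # e.g., SDD/classification still processing
    return max(column_sensitivities, key=RANK.get)

print(query_sensitivity(["NON_SENSITIVE", "SENSITIVE"]))  # SENSITIVE
```

For joined tables, as described above, Immuta instead re-applies the classification rules to the combined columns of the query, so the result can be more sensitive than any single input table.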
Quicker and easier onboarding experience
Dashboards with data activity patterns for data sources and users
Dashboards to help users find the most recently accessed data sources and active columns
Immuta Detect uses several features of the Immuta platform to create user-friendly dashboards that are always available in the UI and do not need to be generated like Immuta reports. These dashboards are created from audit information and events gathered from Snowflake, Databricks Spark, and Databricks Unity Catalog into Immuta Detect. Immuta pulls audit information from Snowflake and Databricks Spark for data sources and users registered in Immuta; for Databricks Unity Catalog, Immuta pulls in audit information for all users and tables. Users with the AUDIT permission will see dashboards that show the data events within your organization's data environment and which users are accessing that data.
Several dashboards are available to help you find the information you need, and each can be filtered or set to a specific date range by the viewer. Users with the AUDIT permission or the Audit Activity permission, as well as data owners, can see the dashboards.
Data-centric views: These dashboards provide information on how your data sources are being queried.
Activity summary of all data sources found on the main Data Overview tab.
Activity summary by data source found on each data source's Data Overview tab.
Audit views: These dashboards present your audit logs in an organized table.
Immuta activity audit found on the Audit tab.
Detailed audit event found by selecting an event ID from the Audit page.
User-centric views: These dashboards provide information about your Immuta users.
Activity summary of all users found on the main People tab.
Activity summary by user found by selecting a user's name from the People page.
The Detect dashboard shows near real-time Immuta events, such as logins, policy changes, and data platform policy changes. Query events are ingested from Snowflake and Databricks once a day, but you can trigger an immediate query retrieval with the ↻ Native Query Audit or Load Audit Events button on the Audit page. To change the automatic query retrieval schedule, edit your integration.
The most recent query history available to Immuta Detect depends on the underlying data platform's latency. For example, Snowflake can take up to three hours to record an executed query on its side.
Detect with Databricks Spark and Databricks Unity Catalog does not support using Discover classification to determine query sensitivity at this time.
Available: the information is included in audit logs.
Not available: the information is not included in audit logs.
| Property | Description | Example |
|---|---|---|
| `action` | The action associated with the audit log. | `QUERY` |
| `actor.type` | The Immuta user type of the actor who made the query. | `USER_ACTOR` |
| `actor.id` | The Immuta user ID of the actor who made the query. | `taylor@starburst.com` |
| `actor.name` | The Immuta name of the user who made the query. | `Taylor` |
| `actor.identityProvider` | The IAM the user is registered in. `bim` is the built-in Immuta IAM. | `bim` |
| `actor.profileId` | The profile ID of the user who made the query. | `10` |
| `actionStatus` | Indicates whether or not the user was granted access to the data. Possible values are `FAILURE` or `SUCCESS`. Unauthorized access is not audited for Starburst (Trino). | `SUCCESS` |
| `eventTimestamp` | The time the query occurred. | `2023-06-27T11:03:59.000Z` |
| `id` | The unique Immuta ID of the audit record. This will match the Trino query ID. | `20240221_200952_00200_qhadw` |
| `tenantId` | The Immuta SaaS tenant ID. | `your-immuta.com` |
| `targetType` | The type of targets affected by the query; this value will always be `DATASOURCE`. | `DATASOURCE` |
| `targets` | A list of the targets affected by the query. | See the example below |
| `auditPayload.type` | The type of audit record; this value will always be `QueryAuditPayload`. | `QueryAuditPayload` |
| `auditPayload.queryId` | The unique Starburst (Trino) ID of the query. | `20240221_200952_00200_qhadw` |
| `auditPayload.query` | The command text of the query that was run in the integration. Immuta truncates the query text to the first 2048 characters. | `select * from lineitem l join orders o on l.orderkey = o.orderkey limit 10` |
| `auditPayload.startTime` | The date and time the query started in UTC. | `2023-06-27T11:03:59.000Z` |
| `auditPayload.duration` | The time the query took in seconds. | `0.557` |
| `auditPayload.objectsAccessed` | An array of the data sources accessed in the query. | See the example below |
| `auditPayload.objectsAccessed.name` | The name of the data source accessed in the query. | `\"tpch\".\"tiny\".\"customer\"` |
| `auditPayload.objectsAccessed.datasourceId` | The Immuta data source ID. | `17` |
| `auditPayload.objectsAccessed.databaseName` | The name of the Starburst (Trino) catalog. | `tpch` |
| `auditPayload.objectsAccessed.schemaName` | The name of the Starburst (Trino) schema. | `tiny` |
| `auditPayload.objectsAccessed.type` | Specifies if the queried data source is a table or view. Starburst (Trino) queries are always `LOGICAL_TABLE`, which could be either. | `LOGICAL_TABLE` |
| `auditPayload.objectsAccessed.columns` | An array of the columns accessed in the query. | See the example below |
| `auditPayload.objectsAccessed.columns.name` | The name of the column. | `custkey` |
| `auditPayload.objectsAccessed.columns.tags` | An array of the tags on the column. | See the example below |
| `auditPayload.objectsAccessed.columns.securityProfile` | Details about the sensitivity of the column. Available when classification frameworks are configured. | See the example below |
| `auditPayload.objectsAccessed.columns.inferred` | If `true`, the column accessed has been determined by Immuta based on the available audit information from Starburst (Trino) and query parsing. It was not explicitly provided. | `true` |
| `auditPayload.objectsAccessed.securityProfile` | A classification for all the columns accessed together. Available when classification frameworks are configured. | See the example below |
| `auditPayload.technologyContext.type` | The technology the query was made in. | `TrinoContext` |
| `auditPayload.technologyContext.trinoUsername` | The Starburst (Trino) user ID for the user who made the query. | `taylor@starburst.com` |
| `auditPayload.technologyContext.immutaPluginVersion` | The version of the Immuta plugin in Starburst (Trino). | `437-SNAPSHOT` |
| `auditPayload.technologyContext.rowsProduced` | The number of rows returned in the query. | `3` |
| `auditPayload.version` | The version of the audit event schema. | `1` |
| `receivedTimestamp` | The timestamp of when the audit event was received and stored by Immuta. | `2023-06-27T15:18:22.314Z` |
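Putting the Starburst (Trino) properties together, a minimal audit record assembled purely from the example values in the table above might look like the following. This is illustrative and abridged; a real record contains more fields.

```python
import json

# Abridged Starburst (Trino) UAM audit record, assembled from the example
# values in the table above (illustrative only; real records have more fields).
record = {
    "action": "QUERY",
    "actor": {"type": "USER_ACTOR", "id": "taylor@starburst.com", "name": "Taylor",
              "identityProvider": "bim", "profileId": 10},
    "actionStatus": "SUCCESS",
    "eventTimestamp": "2023-06-27T11:03:59.000Z",
    "id": "20240221_200952_00200_qhadw",
    "targetType": "DATASOURCE",
    "auditPayload": {
        "type": "QueryAuditPayload",
        "queryId": "20240221_200952_00200_qhadw",
        "query": "select * from lineitem l join orders o on l.orderkey = o.orderkey limit 10",
        "objectsAccessed": [{"name": '"tpch"."tiny"."customer"', "datasourceId": 17,
                             "databaseName": "tpch", "schemaName": "tiny",
                             "type": "LOGICAL_TABLE",
                             "columns": [{"name": "custkey", "inferred": True}]}],
        "version": 1,
    },
}

# Exported records are one-line JSON objects:
line = json.dumps(record)
print(json.loads(line)["auditPayload"]["queryId"])  # 20240221_200952_00200_qhadw
```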
| Property | Description | Example |
|---|---|---|
| `action` | The action associated with the audit log. | `QUERY` |
| `actor.type` | The Immuta user type of the actor who made the query. | `USER_ACTOR` |
| `actor.id` | The Immuta user ID of the actor who made the query. | `taylor@databricks.com` |
| `actor.name` | The Immuta name of the user who made the query. | `Taylor` |
| `actor.identityProvider` | The IAM the user is registered in. `bim` is the built-in Immuta IAM. | `bim` |
| `sessionId` | The session ID of the user who performed the action. | `01ee14d9-cab3-1ef6-9cc4-f0c315a53788` |
| `actionStatus` | Indicates whether or not the user was granted access to the data. Possible values are `UNAUTHORIZED`, `FAILURE`, or `SUCCESS`. | `SUCCESS` |
| `actionStatusReason` | When a user's query is denied, this property explains why. When a query is successful, this value is `null`. | - |
| `eventTimestamp` | The time the query occurred. | `2023-06-27T11:03:59.000Z` |
| `id` | The unique ID of the audit record. | `9f542dfd-5099-4362-a72d-8377306db3b8` |
| `customerId` | The unique Databricks customer ID. | `9f542dfd-5099-4362-a72d-8377306db3b8` |
| `targetType` | The type of targets affected by the query; this value will always be `DATASOURCE`. | `DATASOURCE` |
| `targets` | A list of the targets affected by the query. | See the example below |
| `auditPayload.type` | The type of audit record; this value will always be `QueryAuditPayload`. | `QueryAuditPayload` |
| `auditPayload.queryId` | The unique ID of the query. If the query joins multiple tables, each table will appear as a separate log, but all will have the same query ID. | `01ee14da-517a-1670-afce-0c3e0fdcf7d4` |
| `auditPayload.query` | The query that was run in the integration. Immuta truncates the query text to the first 2048 characters. | See the example below |
| `auditPayload.startTime` | The date and time the query started in UTC. | `2023-06-27T11:03:59.000Z` |
| `auditPayload.duration` | Not available for Databricks Spark audit events. | `null` |
| `auditPayload.accessControls` | Includes the user's groups, attributes, and current project at the time of the query. | - |
| `auditPayload.policySet` | Provides policy details. | - |
| `auditPayload.technologyContext.type` | The technology the query was made in. | `DatabricksContext` |
| `auditPayload.technologyContext.clusterId` | The Databricks cluster ID. | `null` |
| `auditPayload.technologyContext.clusterName` | The Databricks cluster name. | `databricks-cluster-name` |
| `auditPayload.technologyContext.workspaceId` | The Databricks workspace ID. | `8765531160949612` |
| `auditPayload.technologyContext.pathUris` | The Databricks URI scheme for the storage type. | `["dbfs:/user/hive/warehouse/your_database.db/movies"]` |
| `auditPayload.technologyContext.metastoreTables` | The Databricks metastore tables. | `["your_database.movies"]` |
| `auditPayload.technologyContext.queryLanguage` | The programming language used: SQL, Python, Scala, or R. Audited JDBC queries will indicate that they came from JDBC here. | `python` |
| `auditPayload.technologyContext.queryText` | Contains either the full notebook cell (when the query is the result of a notebook) or the full SQL query (when it is a query from a JDBC connection). | See the example below |
| `auditPayload.technologyContext.immutaPluginVersion` | The Immuta plugin version for the Databricks integration. | `2022.3.0-spark-3.1.1` |
| `receivedTimestamp` | The timestamp of when the audit event was received and stored by Immuta. | `2023-06-27T15:18:22.314Z` |
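Because a multi-table query produces one Databricks Spark audit record per table with a shared auditPayload.queryId (as noted in the table above), downstream consumers often regroup records by that ID. A minimal sketch, using hypothetical abridged records:

```python
from collections import defaultdict

# Hypothetical, abridged Databricks Spark audit records: one per table, sharing
# a queryId when they came from the same multi-table query (per the schema above).
records = [
    {"auditPayload": {"queryId": "q-1"}, "targets": [{"name": "db.movies"}]},
    {"auditPayload": {"queryId": "q-1"}, "targets": [{"name": "db.ratings"}]},
    {"auditPayload": {"queryId": "q-2"}, "targets": [{"name": "db.users"}]},
]

def group_by_query(audit_records):
    """Regroup per-table audit records into one entry per query."""
    grouped = defaultdict(list)
    for record in audit_records:
        grouped[record["auditPayload"]["queryId"]].append(record)
    return dict(grouped)

grouped = group_by_query(records)
print(len(grouped["q-1"]))  # 2 tables touched by query q-1
```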
| Property | Description |
|---|---|
| `project` | The user's current project. |
| `attributes` | The user's attributes. |
| `groups` | The user's groups. |
| `impersonatedUsers` | The user that the current user is impersonating. |
| Property | Description | Possible values |
|---|---|---|
| `subscriptionPolicyType` | The type of subscription policy. | `MANUAL`, `ADVANCED`, or `ENTITLEMENTS` |
| `type` | Indicates whether the policy is a subscription or data policy. Query-denied records will always be a subscription policy `type`. | `SUBSCRIPTION` or `DATA` |
| `ruleAppliedForUser` | True if the policy was applied for the user. If `false`, the user was an exception to the policy. | `true` or `false` |
| `rationale` | The policy rationale written by the policy creator. | - |
| `global` | True if the policy was a global policy. If `false`, the policy is local. | `true` or `false` |
| `mergedPolicies` | Shows the policy information for each of the merged global subscription policies, if available. | - |
| Property | Description | Example |
|---|---|---|
| `action` | The action associated with the audit log. | `QUERY` |
| `actor.type` | The Immuta user type of the actor who made the query. When the actor is not registered with Immuta, the `type`, `id`, and `name` fields will be `unknown`. | `USER_ACTOR` |
| `actor.id` | The Immuta user ID of the actor who made the query. When the actor is not registered with Immuta, the `type`, `id`, and `name` fields will be `unknown`. | `taylor@databricks.com` |
| `actor.name` | The Immuta name of the user who made the query. When the user is not registered with Immuta, the `type`, `id`, and `name` fields will be `unknown`. | `Taylor` |
| `actor.identityProvider` | The IAM the user is registered in. `bim` is the built-in Immuta IAM. When the user is not registered with Immuta, this field will be omitted. | `bim` |
| `actor.profileId` | The profile ID of the user who made the query. When the user is not registered with Immuta, this field will be omitted. | `10` |
| `sessionId` | The session ID of the user who performed the action. | `01ee14d9-cab3-1ef6-9cc4-f0c315a53788` |
| `requestId` | The API request ID that triggered the action, if applicable. | `504b8fd9-38c1-4a90-966e-7445a6675f79` |
| `actionStatus` | Indicates whether or not the user was granted access to the data. Possible values are `UNAUTHORIZED`, `FAILURE`, or `SUCCESS`. | `SUCCESS` |
| `actionStatusReason` | When available, the reason from Unity Catalog that the user's query was denied. | `null` if actionStatus is `SUCCESS` |
| `eventTimestamp` | The time the query occurred. | `2023-06-27T11:03:59.000Z` |
| `id` | The unique ID of the audit record. | `9f542dfd-5099-4362-a72d-8377306db3b8` |
| `tenantId` | The Immuta SaaS tenant ID. | `your-immuta.com` |
| `userAgent` | Client information of the user who made the query. | - |
| `targetType` | The type of targets affected by the query; this value will always be `DATASOURCE`. | `DATASOURCE` |
| `targets` | A list of the targets affected by the query. | See the example below |
| `auditPayload.type` | The type of audit record; this value will always be `QueryAuditPayload`. | `QueryAuditPayload` |
| `auditPayload.queryId` | The unique ID of the query. If the query joins multiple tables, each table will appear as a separate log, but all will have the same query ID. | `01ee14da-517a-1670-afce-0c3e0fdcf7d4` |
| `auditPayload.query` | The command text of the query that was run in the integration. Immuta truncates the query text to the first 2048 characters. | `SELECT VERSION AS 'version' FROM 'sample-data'.'__immuta_version'` |
| `auditPayload.startTime` | The date and time the query started in UTC. | `2023-06-27T11:03:59.000Z` |
| `auditPayload.duration` | The time the query took in seconds. | `0.557` |
| `auditPayload.errorCode` | The `errorCode` for the denied query. | `null` if actionStatus is `SUCCESS` |
| `auditPayload.technologyContext.type` | The technology the query was made in. | `DatabricksContext` |
| `auditPayload.technologyContext.clusterId` | The Unity Catalog cluster ID. | `null` |
| `auditPayload.technologyContext.workspaceId` | The Unity Catalog workspace ID. | `8765531160949612` |
| `auditPayload.technologyContext.service` | Where in Unity Catalog the query was made. Possible values are `SQL` for SQL warehouses and `NOTEBOOK` for notebooks. | `SQL` |
| `auditPayload.technologyContext.warehouseId` | The Unity Catalog warehouse ID. | `559483c6eac0359f` |
| `auditPayload.technologyContext.notebookId` | The Unity Catalog notebook ID. | `869500255746458` |
| `auditPayload.technologyContext.account.id` | The actor's Unity Catalog account ID. | `52e863bc-ea7f-46a9-8e17-6aed7541832d` |
| `auditPayload.technologyContext.account.username` | The actor's Unity Catalog username. | `taylor@databricks.com` |
| `auditPayload.technologyContext.host` | The Unity Catalog host. | `deployment-name.cloud.databricks.com` |
| `auditPayload.technologyContext.clientIp` | The IP address of the Spark cluster the request is coming from. | `0.0.0.0` |
| `auditPayload.objectsAccessed` | The Unity Catalog objects accessed. | `[]` |
| `auditPayload.securityProfile.sensitivity.score` | The sensitivity score of the query. Classification must be configured for this field. | `INDETERMINATE` |
| `auditPayload.version` | The version of the audit event schema. | `1` |
| `receivedTimestamp` | The timestamp of when the audit event was received and stored by Immuta. | `2023-06-27T15:18:22.314Z` |
| | Snowflake | Databricks Spark | Databricks Unity Catalog | Starburst (Trino) |
|---|---|---|---|---|
| Table and user coverage | Registered data sources and users | Registered data sources and users | All tables and users | Registered data sources and users |
| Object queried | | | | |
| Columns returned | | | | |
| Query text | | | | |
| Unauthorized information | | | | |
| Policy details | | | | |
| User's entitlements | | | | |
| Column tags | | | | |
| Table tags | | | | |
Public preview
This feature is available to all accounts. Contact your Immuta representative to enable this feature.
Monitors allow you to gain awareness of when your users' behavior changes, maintain data availability through data platform policy changes, ensure access patterns remain consistent for your data controls to remain effective, and be notified about anomalies.
This guide illustrates how to create a monitor to generate user activity metrics based on query events.
This guide explains the design and requirements for monitors and the contexts in which to use them.
Snowflake Enterprise Edition or higher
Store audit logs
To manually request native query audit ingestion, click Load Audit Events on the Immuta audit page.
Each audit message from the Immuta platform will be a one-line JSON object containing the properties listed below.
Universal audit model (UAM) is Immuta's consistent structure for all Immuta system and user query audit logs. This reference guide maps the legacy audit events to the new UAM events and provides example schemas of all the UAM events available in Immuta.
Event: ApiKeyCreated
Legacy event: apiKey
Description: An audit event for when an API key is created on the Immuta app settings page or from an Immuta user's profile page.
Event: ApiKeyDeleted
Legacy event: apiKey
Description: An audit event for when an API key is deleted on the Immuta app settings page or from an Immuta user's profile page.
Event: AttributeApplied
Legacy events: accessUser
and accessGroup
Description: An audit event for an attribute applied to a group or user.
Additional parameter details: targetType
will specify whether the attribute was added to a USER
or GROUP
.
Event: AttributeRemoved
Legacy events: accessUser
and accessGroup
Description: An audit event for an attribute removed from a group or user.
Additional parameter details: targetType
will specify whether the attribute was removed from a USER
or GROUP
.
Event: ConfigurationUpdated
Legacy event: configurationUpdate
Description: An audit event for updates to the configuration on the Immuta app settings page.
Event: DatasourceAppliedToProject
Legacy event: addToProject
Description: An audit event for adding a data source to an Immuta project.
Event: DatasourceCatalogSynced
Legacy event: catalogUpdate
Description: An audit event for syncing an external catalog to tag Immuta data sources.
Event: DatasourceCreated
Legacy event: dataSourceCreate
Description: An audit event for registering a table as an Immuta data source.
Event: DatasourceDeleted
Legacy event: dataSourceDelete
Description: An audit event for deleting a data source in Immuta.
Event: DatasourceDisabled
Legacy event: None
Description: An audit event for disabling a data source in Immuta.
Event: DatasourceGlobalPolicyApplied
Legacy event: globalPolicyApplied
Description: An audit event for applying a global policy to a data source.
Event: DatasourceGlobalPolicyConflictResolved
Legacy event: globalPolicyConflictResolved
Description: An audit event for a global policy conflict being resolved on a data source.
Event: DatasourceGlobalPolicyDisabled
Legacy event: globalPolicyDisabled
Description: An audit event for a data owner disabling a global policy from their data source.
Event: DatasourceGlobalPolicyRemoved
Legacy event: globalPolicyRemoved
Description: An audit event for a data owner removing a global policy from their data source.
Event: DatasourcePolicyCertificationExpired
Legacy event: policyCertificationExpired
Description: An audit event for a global policy certification expiring on a data source.
Event: DatasourcePolicyCertified
Legacy event: globalPolicyCertify
Description: An audit event for a global policy being certified by a data owner for their data source.
Event: DatasourcePolicyDecertified
Legacy events: None
Description: An audit event for a global policy being decertified on a data source.
Event: DatasourceRemovedFromProject
Legacy event: removeFromProject
Description: An audit event for removing a data source from a project.
Event: DatasourceUpdated
Legacy events: dataSourceUpdate and dataSourceSave
Description: An audit event for updating a data source with the new data source details.
Event: DomainCreated
Legacy event: collectionCreated
Description: An audit event for creating a domain.
Event: DomainDataSourcesUpdated
Legacy events: collectionDataSourceAdded, collectionDataSourceRemoved, and collectionDataSourceUpdated
Description: An audit event for updating a domain's data sources.
Additional parameter details: auditPayload.updateType will specify whether the data source was added to or removed from the domain.
Event: DomainDeleted
Legacy event: collectionDeleted
Description: An audit event for deleting a domain.
Event: DomainPermissionsUpdated
Legacy events: collectionPermissionGranted and collectionPermissionRevoked
Description: An audit event for granting or revoking a user's domain-related permissions.
Additional parameter details: auditPayload.updateType will specify whether the permission was granted to or revoked from a user.
Event: DomainUpdated
Legacy event: collectionUpdated
Description: An audit event for updating an Immuta domain.
Event: GlobalPolicyApprovalRescinded
Legacy event: globalPolicyApprovalRescinded
Description: An audit event for a global policy approval rescinded in the approve to promote workflow.
Event: GlobalPolicyApproved
Legacy event: globalPolicyApproved
Description: An audit event for a global policy approved in the approve to promote workflow.
Event: GlobalPolicyChangeRequested
Legacy event: globalPolicyChangeRequested
Description: An audit event for requested edits on a global policy in the approve to promote workflow.
Event: GlobalPolicyCreated
Legacy event: globalPolicyCreate
Description: An audit event for creating a global policy.
Event: GlobalPolicyDeleted
Legacy event: globalPolicyDelete
Description: An audit event for deleting a global policy.
Event: GlobalPolicyPromoted
Legacy event: globalPolicyPromoted
Description: An audit event for when a global policy is fully approved and promoted to production in the approve to promote workflow.
Event: GlobalPolicyReviewRequested
Legacy event: globalPolicyReviewRequested
Description: An audit event for when a global policy is ready and requests a review in the approve to promote workflow.
Event: GlobalPolicyUpdated
Legacy event: globalPolicyUpdate
Description: An audit event for a global policy being updated with details about the policy.
Event: GroupCreated
Legacy event: accessGroup
Description: An audit event for a group created in Immuta.
Event: GroupDeleted
Legacy event: accessGroup
Description: An audit event for a group deleted in Immuta.
Event: GroupMemberAdded
Legacy event: accessGroup
Description: An audit event for a member added to a group in Immuta.
Event: GroupMemberRemoved
Legacy event: accessGroup
Description: An audit event for a group member removed from the group in Immuta.
Event: GroupUpdated
Legacy event: accessGroup
Description: An audit event for a group updated in Immuta.
Event: LicenseCreated
Legacy event: licenseCreate
Description: An audit event for creating an Immuta license.
Event: LicenseDeleted
Legacy event: licenseDelete
Description: An audit event for deleting an Immuta license.
Event: LocalPolicyCreated
Legacy event: policyHandlerCreate
Description: An audit event for creating a local policy for an Immuta data source.
Event: LocalPolicyUpdated
Legacy event: policyHandlerUpdate
Description: An audit event for updating a local policy on an Immuta data source.
Event: PermissionApplied
Legacy event: accessUser
Description: An audit event for a permission applied to an Immuta user.
Event: PermissionRemoved
Legacy event: accessUser
Description: An audit event for a permission removed from an Immuta user.
Event: PolicyAdjustmentCreated
Legacy event: policyAdjustmentCreate
Description: An audit event for creating a policy adjustment in an Immuta project.
Event: PolicyAdjustmentDeleted
Legacy event: policyAdjustmentDelete
Description: An audit event for deleting a policy adjustment in an Immuta project.
Event: ProjectCreated
Legacy event: projectCreate
Description: An audit event for creating a project in Immuta.
Event: ProjectDeleted
Legacy event: projectDelete
Description: An audit event for deleting a project in Immuta.
Event: ProjectDisabled
Legacy events: None
Description: An audit event for disabling a project in Immuta.
Event: ProjectPurposeApproved
Legacy event: projectPurposeApprove
Description: An audit event for approving a purpose for a project in Immuta.
Event: ProjectPurposeDenied
Legacy event: projectPurposeDeny
Description: An audit event for denying a purpose for a project in Immuta.
Event: ProjectPurposesAcknowledged
Legacy event: acknowledgePurposes
Description: An audit event for acknowledging a purpose for a project in Immuta.
Event: ProjectUpdated
Legacy event: projectPurposeDeny
Description: An audit event for updating a project in Immuta.
Event: PurposeDeleted
Legacy event: purposeDelete
Description: An audit event for deleting a purpose in Immuta.
Event: PurposeUpdated
Legacy event: purposeUpdate
Description: An audit event for updating a purpose in Immuta.
Event: PurposeUpserted
Legacy event: purposeCreate
Description: An audit event for creating a purpose in Immuta.
Event: SDDClassifierCreated
Legacy event: sddClassifierCreated
Description: An audit event for creating a sensitive data discovery (SDD) column name regex, regex, or dictionary identifier.
Additional parameter details:
auditPayload.config.columnNameRegex: For column name regex identifiers, the regex to match against column names.
auditPayload.config.values: For dictionary identifiers, the values within the dictionary to match against column values.
auditPayload.config.regex: For regex identifiers, the regex to match against column values.
Event: SDDClassifierDeleted
Legacy event: sddClassifierDeleted
Description: An audit event for deleting a sensitive data discovery (SDD) identifier.
Event: SDDClassifierUpdated
Legacy event: sddClassifierUpdated
Description: An audit event for updating a sensitive data discovery (SDD) column name regex, regex, or dictionary identifier.
Additional parameter details:
auditPayload.config.columnNameRegex: For column name regex identifiers, the regex to match against column names.
auditPayload.config.values: For dictionary identifiers, the values within the dictionary to match against column values.
auditPayload.config.regex: For regex identifiers, the regex to match against column values.
Event: SDDDatasourceTagUpdated
Legacy event: sddDatasourceTagUpdate
Description: An audit event for the results from a sensitive data discovery (SDD) run that updates the tags on Immuta data sources.
Event: SDDTemplateApplied
Legacy event: sddTemplateApplied
Description: An audit event for applying an identification framework to data sources.
Event: SDDTemplateCloned
Legacy event: sddTemplateCreated
Description: An audit event for cloning an identification framework from another framework.
Event: SDDTemplateCreated
Legacy event: sddTemplateCreated
Description: An audit event for creating an identification framework.
Event: SDDTemplateDeleted
Legacy event: sddTemplateDeleted
Description: An audit event for deleting an identification framework.
Event: SDDTemplateUpdated
Legacy event: sddTemplateUpdated
Description: An audit event for updating an identification framework.
Event: SubscriptionCreated
Legacy events: dataSourceSubscription and projectSubscription
Description: An audit event for subscribing a user to a data source or project.
Additional parameter details: auditPayload.modelType will specify whether the user was subscribed to a DATASOURCE or a PROJECT.
Event: SubscriptionUpdated
Legacy events: dataSourceSubscription and projectSubscription
Description: An audit event for removing a user's subscription to a data source or project.
Additional parameter details: auditPayload.modelType will specify whether the user's subscription was removed from a DATASOURCE or a PROJECT.
Event: SubscriptionUpdated
Legacy events: dataSourceSubscription and projectSubscription
Description: An audit event for approving a user's request to subscribe to a data source or project.
Additional parameter details: targets.model.type will specify whether the subscription was approved for a DATASOURCE or a PROJECT.
Event: SubscriptionUpdated
Legacy events: dataSourceSubscription and projectSubscription
Description: An audit event for denying a user's request to subscribe to a data source or project.
Additional parameter details: auditPayload.modelType will specify whether the user's subscription was denied for a DATASOURCE or a PROJECT.
Event: SubscriptionRequested
Legacy events: dataSourceSubscription and projectSubscription
Description: An audit event for a user requesting to subscribe to a data source or project.
Additional parameter details: auditPayload.modelType will specify whether the user requested to subscribe to a DATASOURCE or a PROJECT.
Event: SubscriptionUpdated
Legacy events: dataSourceSubscription and projectSubscription
Description: An audit event for a user subscribing to a data source or project.
Additional parameter details: targets.model.type will specify whether the subscription was updated on a DATASOURCE or a PROJECT.
Event: TagApplied
Legacy event: tagAdded
Description: An audit event for applying a tag to an object in Immuta.
Event: TagCreated
Legacy event: tagCreated
Description: An audit event for creating a tag in Immuta.
Event: TagDeleted
Legacy event: tagDeleted
Description: An audit event for deleting a tag in Immuta.
Event: TagRemoved
Legacy event: tagRemoved
Description: An audit event for removing a tag from an object in Immuta.
Event: TagUpdated
Legacy event: tagUpdated
Description: An audit event for updating a tag in Immuta.
Event: UserAuthenticated
Legacy event: authenticate
Description: An audit event for a user authenticating in Immuta.
Additional parameter details: authenticationMethod possible values include
OAuth: The user authenticated using the third-party authentication method OAuth.
OpenId: The user authenticated using the third-party authentication method OpenId.
SAML: The user authenticated using the third-party authentication method SAML.
apiKey: The user authenticated or impersonated using an API key.
password: The user authenticated with a username and password.
Event: UserCloned
Legacy event: accessUser
Description: An audit event for creating a new user in Immuta by cloning an existing user.
Event: UserCreated
Legacy event: accessUser
Description: An audit event for creating a new user in Immuta.
Event: UserDeleted
Legacy event: accessUser
Description: An audit event for deleting a user in Immuta.
Event: UserLogout
Legacy events: None
Description: An audit event for a user logging out of Immuta.
Additional parameter details:
authenticationMethod possible values include
OAuth: The user authenticated using the third-party authentication method OAuth.
OpenId: The user authenticated using the third-party authentication method OpenId.
SAML: The user authenticated using the third-party authentication method SAML.
apiKey: The user authenticated or impersonated using an API key.
password: The user authenticated with a username and password.
logoutReason possible values include
EXPIRATION: The user was logged out because the token expired.
IDP_INITIATED: The IdP initiated the logout.
USER_LOGOUT_TRIGGERED: The user manually logged out.
Event: UserOneTimeTokenCreated
Legacy event: accessUser
Description: An audit event for creating a single use login token for a user.
Event: UserPasswordUpdated
Legacy event: accessUser
Description: An audit event for updating a user's Immuta password.
Event: UserUpdated
Legacy event: externalUserIdChanged
Description: An audit event for updating user details in Immuta.
Event: WebhookCreated
Legacy event: webhookCreate
Description: An audit event for creating an Immuta webhook.
Event: WebhookDeleted
Legacy event: webhookDelete
Description: An audit event for deleting an Immuta webhook.
The following legacy audit events have no corresponding UAM event:
blobDelete
blobFetch
blobIndex
blobUpdateFeatures
blobUpdateTags
blobVisibility
checkPendingRequest
dataSourceExpired
dataSourceTestQuery
dictionaryCreate
dictionaryDelete
dictionaryUpdate
driverUpload
featureList
governanceUpdate
policyExemption
policyExport
policyImport
queryDebugRequest
sqlAccess
sqlCreateUser
sqlDeleteUser
sqlResetPassword
sqlQuery
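Given the event names above, a simple monitoring script can tally exported audit records by UAM event. The sketch below is illustrative only; the `recordType` field name is an assumption about where an export stores the event name, so adjust it to match your actual export schema.

```python
from collections import Counter

def count_by_event(events, name_field="recordType"):
    """Tally audit events by their UAM event name.

    `name_field` is an assumed field name holding values such as
    "DatasourceCreated"; change it to match your export.
    """
    return Counter(e.get(name_field, "unknown") for e in events)

# Hand-written sample events illustrating the idea.
events = [
    {"recordType": "DatasourceCreated"},
    {"recordType": "DatasourceCreated"},
    {"recordType": "TagApplied"},
]
print(count_by_event(events))
```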
Each UAM event includes event-specific parameter details, but two parameters are common to every event:
targetType: the Immuta object that is the target of the audited action. This specifies whether a user, project, policy, or other object was affected by the action.
action: the base action performed on the target. This specifies whether something was created, deleted, updated, and so on.
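These two fields make it straightforward to slice an exported log, for example to isolate every delete that targeted a data source. A minimal sketch, under the assumption that the export exposes top-level `targetType` and `action` keys with values like those shown (the sample values are hypothetical):

```python
def filter_events(events, target_type=None, action=None):
    """Return events matching the given targetType and/or action."""
    matched = []
    for e in events:
        if target_type is not None and e.get("targetType") != target_type:
            continue
        if action is not None and e.get("action") != action:
            continue
        matched.append(e)
    return matched

# Hand-written sample events; field values are assumptions.
events = [
    {"targetType": "DATASOURCE", "action": "DELETE"},
    {"targetType": "USER", "action": "CREATE"},
]
deleted_sources = filter_events(events, target_type="DATASOURCE", action="DELETE")
```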
Snowflake query audit logs is a feature that audits the queries users run natively in Snowflake and presents them in a universal format as Immuta audit logs. Immuta reads the Snowflake QUERY_HISTORY and ACCESS_HISTORY views and translates them into audit logs that can be viewed as query events in the Immuta UI or exported. Immuta audits the activity of Immuta users on Immuta data sources.
Note that users' Snowflake usernames must be mapped to their Immuta accounts. Without this, Immuta will not know the users are Immuta users and will not collect audit events for their data access activity.
By default, Snowflake audit logs expire after 90 days. Export and store audit logs outside of Immuta in order to retain them long-term.
Immuta collects audit records at a configured frequency, which is between 1 and 24 hours. The frequency is a global setting based on integration type, so organizations with multiple Snowflake integrations share the same audit frequency across all of them. The more frequently audit records are ingested, the more current they are; however, frequent ingestion jobs can have performance and cost impacts.
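Once query audit records are ingested and exported, the same logs can answer Detect-style questions such as which users query most often. The sketch below is illustrative; it assumes each query event carries the nested `actor.name` field from the query event schema, and the sample events are hand-written.

```python
from collections import Counter

def most_active_users(query_events, top=5):
    """Rank users by the number of query audit events they generated."""
    counts = Counter(
        e.get("actor", {}).get("name", "unknown") for e in query_events
    )
    return counts.most_common(top)

# Hand-written sample query events.
events = [
    {"actor": {"name": "alice"}},
    {"actor": {"name": "alice"}},
    {"actor": {"name": "bob"}},
]
print(most_active_users(events))
```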
To learn more about Immuta's audit, see the audit reference guides.

| Property | Description | Example |
|---|---|---|
| Property | Description | Example |
|---|---|---|
| action | The action associated with the audit log. | |
| actor.type | The Immuta user type of the actor who made the query. | |
| actor.id | The Immuta user ID of the actor who made the query. | |
| actor.name | The Immuta name of the user who made the query. | |
| actor.identityProvider | The IAM the user is registered in. | |
| sessionId | The session ID of the user who performed the action. | |
| actionStatus | Indicates whether or not the user was granted access to the data. | |
| actionStatusReason | When available, the reason from Unity Catalog that the user's query was denied. | |
| eventTimestamp | The time the query occurred. | |
| id | The unique ID of the audit record. | |
| userAgent | Client information of the user who made the query. | |
| tenantId | The Immuta SaaS tenant ID. | |
| targetType | The type of targets affected by the query. | |
| targets | A list of the targets affected by the query. | See the example below. |
| auditPayload.type | The type of audit record. | |
| auditPayload.queryId | The unique ID of the query. If the query joins multiple tables, each table will appear as a separate log, but all will have the same query ID. | |
| auditPayload.query | The command text of the query that was run in the integration. Immuta truncates the query text to the first 2048 characters. | |
| auditPayload.startTime | The date and time the query started in UTC. | |
| auditPayload.duration | The time the query took in seconds. | |
| auditPayload.errorCode | The error code returned for the query, if any. | |
| auditPayload.technologyContext.type | The technology the query was made in. | |
| auditPayload.technologyContext.host | The host that the integration is connected to. | |
| auditPayload.technologyContext.snowflakeUsername | The user's Snowflake username. | |
| auditPayload.technologyContext.rowsProduced | The number of rows returned in the query. | |
| auditPayload.technologyContext.roleName | The Snowflake role the user used to make the query. | |
| auditPayload.technologyContext.warehouseId | The ID of the warehouse where the query was made. | |
| auditPayload.technologyContext.warehouseName | The name of the warehouse where the query was made. | |
| auditPayload.technologyContext.clusterNumber | The number of the cluster where the query was made. | |
| auditPayload.objectsAccessed | An array of the data sources accessed in the query. | See the example below. |
| auditPayload.objectsAccessed.name | The name of the data source accessed in the query. | |
| auditPayload.objectsAccessed.datasourceId | The Immuta data source ID. | |
| auditPayload.objectsAccessed.databaseName | The name of the Snowflake database. | |
| auditPayload.objectsAccessed.schemaName | The name of the Snowflake schema. | |
| auditPayload.objectsAccessed.type | Specifies if the queried data source is a table or view. | |
| auditPayload.objectsAccessed.columns | An array of the columns accessed in the query. | See the example below. |
| auditPayload.objectsAccessed.columns.name | The name of the column. | |
| auditPayload.objectsAccessed.columns.tags | An array of the tags on the column. | See the example below. |
| auditPayload.objectsAccessed.columns.securityProfile | Details about the sensitivity of the column. Available when classification is configured. | See the example below. |
| auditPayload.objectsAccessed.columns.inferred | Indicates whether the column access was inferred. | |
| auditPayload.objectsAccessed.securityProfile | A classification for all the columns accessed together. Available when classification is configured. | See the example below. |
| auditPayload.securityProfile.sensitivity.score | The sensitivity score of the query. Classification must be configured for this field. | |
| receivedTimestamp | The timestamp of when the audit event was received and stored by Immuta. | |
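The schema above can be illustrated with an abbreviated, hypothetical audit record. Every value here is invented for illustration, and fields whose fixed constants are not shown in the table are given placeholder strings; only the field names follow the table.

```python
# Abbreviated, hypothetical audit record following the field names in the
# table above. All values are invented for illustration only.
record = {
    "action": "QUERY",  # placeholder action value
    "actor": {
        "type": "USER_ACTOR",  # placeholder actor type
        "id": "1",
        "name": "alice",
        "identityProvider": "okta",
    },
    "eventTimestamp": "2024-01-01T12:00:00.000Z",
    "targetType": "DATASOURCE",  # placeholder constant
    "auditPayload": {
        "queryId": "01a2b3c4",
        "query": "SELECT name FROM analytics.public.customers",
        "duration": 0.5,
        "technologyContext": {"type": "snowflake", "roleName": "ANALYST"},
        "objectsAccessed": [{
            "name": "customers",
            "databaseName": "analytics",
            "schemaName": "public",
            "type": "TABLE",
            "columns": [{"name": "name", "tags": []}],
        }],
    },
}

# Reconstruct fully qualified names of the data sources the query touched
# from databaseName, schemaName, and name.
tables = [
    f'{o["databaseName"]}.{o["schemaName"]}.{o["name"]}'
    for o in record["auditPayload"]["objectsAccessed"]
]
print(tables)  # ['analytics.public.customers']
```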
| Legacy event | UAM event | Description |
|---|---|---|
| | | An audit event for managing a group. |
| | | An audit event for managing a user. |
| | | An audit event for acknowledging a purpose for a project in Immuta. |
| | | An audit event for adding a data source to an Immuta project. |
| | | An audit event for when an API key is created or deleted on the Immuta app settings page or from an Immuta user's profile page. |
| | | An audit event for a user authenticating in Immuta. |
| - | | An audit event for a user logging out of Immuta. |
| | | An audit event for syncing an external catalog to tag Immuta data sources. |
| | | An audit event for updates to the configuration on the Immuta app settings page. |
| | | An audit event for creating a domain. |
| | | An audit event for updating a domain's data sources. |
| | | An audit event for updating a domain's data sources. |
| | | An audit event for updating a domain's data sources. |
| | | An audit event for deleting a domain. |
| | | An audit event for granting or revoking a user's domain-related permissions. |
| | | An audit event for granting or revoking a user's domain-related permissions. |
| | | An audit event for updating an Immuta domain. |
| | | An audit event for registering a table as an Immuta data source. |
| | | An audit event for deleting a data source in Immuta. |
| - | | An audit event for disabling a data source in Immuta. |
| | | An audit event for updating a data source with the new data source details. |
| | | The events for data source and project subscriptions. |
| | | An audit event for updating a data source with the new data source details. |
| | | An audit event for updating user details in Immuta. |
| | | An audit event for applying a global policy to a data source. |
| | | An audit event for a global policy approval rescinded in the approve to promote workflow. |
| | | An audit event for a global policy approved in the approve to promote workflow. |
| | | An audit event for a global policy being certified by a data owner for their data source. |
| - | | An audit event for a global policy being decertified on a data source. |
| | | An audit event for requested edits on a global policy in the approve to promote workflow. |
| | | An audit event for a global policy conflict being resolved on a data source. |
| | | An audit event for creating a global policy. |
| | | An audit event for deleting a global policy. |
| | | An audit event for a data owner disabling a global policy from their data source. |
| | | An audit event for when a global policy is fully approved and promoted to production in the approve to promote workflow. |
| | | An audit event for a data owner removing a global policy from their data source. |
| | | An audit event for when a global policy is ready and requests a review in the approve to promote workflow. |
| | | An audit event for updating a global policy with the new global policy details. |
| | | An audit event for creating an Immuta license. |
| | | An audit event for deleting an Immuta license. |
| | | An audit event for creating a policy adjustment in an Immuta project. |
| | | An audit event for deleting a policy adjustment in an Immuta project. |
| | | An audit event for a global policy certification expiring on a data source. |
| | | An audit event for creating a local policy for an Immuta data source. |
| | | An audit event for updating a local policy on an Immuta data source. |
| | TrinoQuery | |
| | | An audit event for creating a project in Immuta. |
| | | An audit event for deleting a project in Immuta. |
| - | | An audit event for disabling a project in Immuta. |
| | | An audit event for approving a purpose for a project in Immuta. |
| | | An audit event for denying a purpose for a project in Immuta. |
| | | The events for data source and project subscriptions. |
| | | An audit event for updating a project in Immuta. |
| | | An audit event for deleting a purpose in Immuta. |
| | | An audit event for updating a purpose in Immuta. |
| | | An audit event for creating a purpose in Immuta. |
| | | An audit event for removing a data source from a project. |
| | | An audit event for creating a sensitive data discovery (SDD) column name regex, regex, or dictionary identifier. |
| | | An audit event for deleting a sensitive data discovery (SDD) identifier. |
| | | An audit event for updating a sensitive data discovery (SDD) column name regex, regex, or dictionary identifier. |
| | | An audit event for the results from a sensitive data discovery (SDD) run that updates the tags on Immuta data sources. |
| | | An audit event for applying an identification framework to data sources. |
| | | An audit event for creating an identification framework. |
| | | An audit event for deleting an identification framework. |
| | | An audit event for updating an identification framework. |
| | DatabricksQuery | |
| | | An audit event for applying a tag to an object in Immuta. |
| | | An audit event for creating a tag in Immuta. |
| | | An audit event for deleting a tag in Immuta. |
| | | An audit event for removing a tag from an object in Immuta. |
| | | An audit event for updating a tag in Immuta. |
| | | An audit event for creating an Immuta webhook. |
| | | An audit event for deleting an Immuta webhook. |
| Immuta object | Events | Descriptions |
|---|---|---|
| API keys | | Audit events for managing API keys. |
| Attributes | | Audit events for managing attributes. |
| Configuration | | An audit event for Immuta configuration changes. |
| Data sources | | Audit events for actions on data sources and their policies. |
| Domains | | Audit events for managing domains, domain policies, and domain permissions. |
| Global policies | | Audit events for managing global policies. |
| Groups | | Audit events for managing Immuta groups and group members. |
| License | | Audit events for managing Immuta licenses. |
| Local policies | | Audit events for managing local policies. |
| Permissions | | Audit events for managing user permissions. |
| Policy adjustments | | Audit events for managing policy adjustments in a project. |
| Projects | | Audit events for managing projects and their purposes. |
| Purposes | | Audit events for managing purposes. |
| Queries | | Audit events for user queries within data platforms. |
| Sensitive data discovery (SDD) | | Audit events for managing and running SDD. |
| Tags | | Audit events for managing tags and their application. |
| Users | | Audit events for user actions, managing users, and managing the objects users are subscribed to in Immuta. |
| Webhooks | | Audit events for managing webhooks. |
Details about the sensitivity of the column. Available when classification is configured.
A classification for all the columns accessed together. Available when classification is configured.
An audit event for a user's query in Snowflake or Databricks Unity Catalog. See the integration-specific audit documentation for additional details about the audit event schema.
An audit event for a user's query in Starburst (Trino). See the integration's audit documentation for additional details about the audit event schema.
An audit event for a user's query in Databricks. See the integration's audit documentation for additional details about the audit event schema.
DatabricksQuery: Available for certain Databricks integration types.
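As the schema notes, a query that joins multiple tables emits one audit log per table, all sharing the same `auditPayload.queryId`. A minimal sketch of reassembling those per-table records into whole queries follows; the records and query IDs are hypothetical.

```python
from collections import defaultdict

# Hypothetical per-table query audit events: a join of two tables yields two
# records that share the same auditPayload.queryId.
events = [
    {"auditPayload": {"queryId": "q-1", "objectsAccessed": [{"name": "orders"}]}},
    {"auditPayload": {"queryId": "q-1", "objectsAccessed": [{"name": "customers"}]}},
    {"auditPayload": {"queryId": "q-2", "objectsAccessed": [{"name": "events"}]}},
]

# Group the per-table records back into whole queries by queryId.
tables_by_query = defaultdict(list)
for event in events:
    payload = event["auditPayload"]
    for obj in payload["objectsAccessed"]:
        tables_by_query[payload["queryId"]].append(obj["name"])

print(dict(tables_by_query))  # {'q-1': ['orders', 'customers'], 'q-2': ['events']}
```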