Immuta AI FAQ
What is Immuta AI, and how is it used within the product?
Immuta AI is a suite of AI capabilities integrated within the Immuta platform. It is designed to address the challenges of data access and provisioning at the speed and scale required for AI initiatives and to integrate with Immuta's existing data security and governance workflows.
Immuta AI eliminates manual workflows and helps governance teams scale their role by leveraging AI for simplification and automation. Its primary functions, which use only metadata (no customer data) together with foundational models hosted in AWS Bedrock, such as Anthropic Claude, include:
Policy authoring (Immuta copilot):
Simplifies policy management by allowing users to provide a natural language description of the desired access control policy
Translates the description into a logical policy expression for human review and implementation (see the sketch after this list)
Data access decision-making (Review assist):
Automates and streamlines the process of reviewing data access requests
Classifies each request as low, medium, or high risk based on historical patterns, user metadata, and answers to request form questions
Provides an AI-generated rationale for recommended action (approve or deny)
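The exact interface behind Immuta copilot is not documented in this FAQ, so the following is only a conceptual sketch of the natural-language-to-policy flow described above. The function name, prompt wording, model ID, and output schema are illustrative assumptions, not Immuta's implementation; it uses the AWS Bedrock Converse API with an Anthropic Claude model, since the FAQ notes the foundational models are hosted in AWS Bedrock.

```python
# Conceptual sketch only: translate a natural-language policy description into a
# structured policy expression for human review, using an Anthropic Claude model
# hosted in AWS Bedrock. Names, prompts, and the output schema are hypothetical
# and do not reflect Immuta's internal implementation.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

SYSTEM_PROMPT = (
    "You translate plain-English access control requirements into a JSON policy "
    "expression with fields: subjects, resources, conditions, action. "
    "Return only JSON. A human will review the result before it is applied."
)

def draft_policy_expression(description: str) -> dict:
    """Send only the policy description (no customer data) to the model and
    return a draft logical policy expression for a human to review."""
    response = bedrock.converse(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example model ID
        system=[{"text": SYSTEM_PROMPT}],
        messages=[{"role": "user", "content": [{"text": description}]}],
        inferenceConfig={"maxTokens": 512, "temperature": 0},
    )
    draft = response["output"]["message"]["content"][0]["text"]
    return json.loads(draft)  # draft only; a human must approve before implementation

# Example: a governance user describes the desired policy in plain English.
# draft = draft_policy_expression(
#     "Mask columns tagged PII for everyone except members of the HR group."
# )
```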
What specific personal or system data elements will the AI system utilize?
Review assist and Immuta copilot utilize metadata (facts about data users, including attributes and groups) and tags (facts about the data, such as classifications and sensitivity levels). For access request recommendations, review assist compares the current request's form answers, attributes, and groups with those of users approved in historical decisions and bases its recommendation on those commonalities.
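As a rough illustration of that comparison, the sketch below scores a new request's overlap with historically approved requests and buckets it as low, medium, or high risk. The data structures, similarity measure, and thresholds are hypothetical assumptions; the FAQ does not publish Immuta's actual scoring logic.

```python
# Hypothetical illustration of commonality-based risk bucketing; the similarity
# measure and thresholds below are assumptions, not Immuta's actual logic.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    """Metadata only: the requester's attributes, groups, and form answers."""
    attributes: set[str]
    groups: set[str]
    form_answers: set[str]

def overlap(a: set[str], b: set[str]) -> float:
    """Jaccard similarity between two metadata sets (0.0 if both are empty)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def classify_risk(request: AccessRequest, approved_history: list[AccessRequest]) -> str:
    """Bucket a request as low/medium/high risk by how closely its metadata
    matches requests that were historically approved."""
    if not approved_history:
        return "high"  # no precedent to lean on
    best = max(
        (overlap(request.attributes, past.attributes)
         + overlap(request.groups, past.groups)
         + overlap(request.form_answers, past.form_answers)) / 3
        for past in approved_history
    )
    if best >= 0.75:
        return "low"      # closely resembles previously approved requests
    if best >= 0.40:
        return "medium"
    return "high"
```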
Will the Immuta data or prompts be used to train the AI model?
No. Immuta's AI features do not use your actual customer data to train or fine-tune the foundational AI model. Instead, these AI features leverage customer metadata (such as group names and attribute names) in conjunction with an external foundational model, like Anthropic Claude, to provide insights and recommendations for managing Immuta itself. The AWS Bedrock user guide indicates that AWS and external parties, including Anthropic, cannot access the prompts or completions, do not use them for model training, and do not store them.
How long are Immuta prompts, data, and results stored in the AI system?
Immuta logs the metadata sent to the model and the responses returned from it for debugging purposes. Because this is metadata Immuta already knows about, no new information is generated or stored.
Can user prompts and AI results be audited?
Immuta's AI only makes recommendations; when a recommendation is accepted, the resulting action is what is audited.
Are users of the AI system aware that they are interacting with an AI system?
Yes. The AI features are explicitly named, such as review assist and Immuta copilot, and the functionality is described as "infusing AI across the Immuta platform" and offering AI-driven recommendations. Users interacting with features like Immuta copilot, which generates policies from natural language, are aware that they are using an AI-powered tool and act as the human in the loop: they must accept the recommendation before any action is taken.
Does Immuta take any steps to mitigate bias, discrimination, and hallucination, particularly with generative AI?
Immuta's AI does not make recommendations that negatively impact a human; it only makes access control recommendations.
Are humans involved in the AI-augmented decisions?
Yes. Human involvement is a key part of the workflow. Immuta AI is described as an "augmentation tool, not a full replacement for humans." While the AI provides policy recommendations and automates access decisions, a "bulk-approval or policy recommendation is green-lighted by the governance team," and humans will still play a crucial role in "setting strategy, defining ethical guidelines, and making high-stakes decisions." The platform provides a "human-in-the-loop" approval process for access requests, as the human reviewer must always accept or deny the recommendation; it does not happen automatically.
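A minimal sketch of that human-in-the-loop gate is below, under the assumption stated above that nothing is applied or audited until a reviewer explicitly accepts the recommendation. The function and field names are hypothetical, not Immuta's API.

```python
# Minimal, hypothetical sketch of the human-in-the-loop gate: the AI only
# recommends, and only a reviewer-accepted decision is applied and audited.
audit_log: list[dict] = []

def apply_recommendation(recommendation: dict, reviewer_decision: str) -> dict | None:
    """Apply an AI recommendation only if the human reviewer accepts it."""
    if reviewer_decision != "accept":
        return None  # nothing happens automatically; the recommendation is discarded
    decision = {
        "action": recommendation["recommended_action"],  # e.g. "approve" or "deny"
        "rationale": recommendation.get("rationale"),
        "decided_by": "human reviewer",
    }
    audit_log.append(decision)  # the accepted decision is what gets audited
    return decision
```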
Does Immuta already have similar features or tools that could complete these actions without the need for AI?
Yes. Immuta's core data governance and access control features—such as the metadata registry, request/approve workflows, and policy entitlement engine—already address the problems of unified policy-setting and approvals. The AI features, such as review assist, are designed to accelerate and scale these existing governance workflows, transforming manual tasks (like ticket-based approvals and complex policy writing) into automated or natural-language-driven processes.
Immuta's AI features leverage foundational models to assist with administrative functions, such as creating access policies, without sharing sensitive customer data with model providers. The option to disable these features is available for organizations that want to maintain strict control over all aspects of their data governance workflows.
For details about the specific features Immuta offers, see the Immuta AI features reference page.