

Copyright © 2014-2025 Immuta Inc. All rights reserved.


User Impersonation


Impersonation allows users to query data as another Immuta user.

User impersonation is supported with the following data platforms: Amazon Redshift, Azure Synapse Analytics, Databricks Spark, Starburst (Trino), and Snowflake.

User Impersonation with Amazon Redshift

Impersonating users in projects

If you are impersonating a user who is currently in a project, you will only see data sources within that project. For details about this behavior, see the description of project contexts.

1 - Enable User Impersonation

Select Enable Impersonation when configuring the Redshift integration on the App Settings page.

2 - Grant Users the IMPERSONATE_USER Permission

After enabling user impersonation with your Amazon Redshift integration, there are two ways to give a user permission to use the feature: in the Immuta UI or in Amazon Redshift. Use the tabs below to select one method.

As an Immuta user with the permission USER_ADMIN,

  1. Click Identities in the navigation menu and select Users.

  2. Select the user you want to edit and select the Settings tab.

  3. Click Add Permissions.

  4. Click the Select Permission dropdown, and select the IMPERSONATE_USER permission.

As a Redshift superuser,

  1. Navigate to your Redshift instance.

  2. Run ALTER GROUP <Impersonation Group> ADD USER <Redshift User>.

3 - Impersonate a User

To impersonate another user in Redshift,

  1. Run the following in Redshift: CALL immuta_procedures.impersonate_user(<Immuta username of the user to impersonate>).

  2. Run queries.

4 - End User Impersonation

To end user impersonation in Redshift, run CALL immuta_procedures.impersonate_user(NULL).
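Put together, a typical Redshift impersonation session looks like the following sketch. The username and table are illustrative, and passing a literal NULL to clear the impersonated user is an assumption based on the command above:

```sql
-- Begin impersonating the Immuta user (use their Immuta username)
CALL immuta_procedures.impersonate_user('taylor@example.com');

-- Queries in this session now run with that user's policies applied
SELECT * FROM analytics.hr_data LIMIT 10;  -- illustrative table

-- Clear the impersonated user to end impersonation
CALL immuta_procedures.impersonate_user(NULL);
```
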

5 - Revoke Users' IMPERSONATE_USER Permission

There are two ways to revoke permission to impersonate users: in the Immuta UI or in Amazon Redshift. Use the tabs below to select one method.

As an Immuta user with the permission USER_ADMIN,

  1. Click Identities in the navigation menu and select Users.

  2. Select the user you want to edit and select the Settings tab.

  3. Click Remove for the IMPERSONATE_USER permission.

As a Redshift superuser,

  1. Navigate to your Redshift instance.

  2. Run the following in Redshift: ALTER GROUP <Impersonation Group> DROP USER <Redshift User>

Redshift-Specific Caveats

User impersonation is specific to the script and session in which it was set. Using a new script or running a subset of script queries without setting the context will result in the queries being run as the regular user.

User Impersonation with Azure Synapse Analytics

1 - Enable User Impersonation

Select Enable Impersonation when configuring the Azure Synapse Analytics integration on the App Settings page.

2 - Grant Users the IMPERSONATE_USER Permission

After enabling user impersonation with your Azure Synapse Analytics integration, there are two ways to give a user permission to use the feature: in the Immuta UI or in Azure Synapse Analytics. Use the tabs below to select one method.

As an Immuta user with the permission USER_ADMIN,

  1. Click Identities in the navigation menu and select Users.

  2. Select the user you want to edit and select the Settings tab.

  3. Click Add Permissions.

  4. Click the Select Permission dropdown, and select the IMPERSONATE_USER permission.

As a Synapse user,

  1. Navigate to your Synapse instance.

  2. Run the following in Synapse: EXEC sp_addrolemember N'<Impersonation Role>', N'<Synapse User>'

3 - Impersonate a User

To impersonate another user in Synapse,

  1. Run the following command:

    EXEC sys.sp_set_session_context @key = N'immuta_user',
    @value = '<Synapse username linked to the Immuta user you want to impersonate>';
  2. Run queries.

4 - End User Impersonation

To end user impersonation in Synapse, run EXEC sys.sp_set_session_context @key = N'immuta_user', @value = NULL.
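Put together, a typical Synapse impersonation session looks like the following sketch. The username and table are illustrative, and clearing the session key with a literal NULL is an assumption based on the command above:

```sql
-- Impersonate the Immuta user mapped to this Synapse username
EXEC sys.sp_set_session_context @key = N'immuta_user',
    @value = 'taylor@example.com';

-- Queries in this session now run with that user's policies applied
SELECT TOP 10 * FROM dbo.hr_data;  -- illustrative table

-- Clear the session key to end impersonation
EXEC sys.sp_set_session_context @key = N'immuta_user', @value = NULL;
```
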

5 - Revoke Users' IMPERSONATE_USER Permission

There are two ways to revoke permission to impersonate users: in the Immuta UI or in Azure Synapse Analytics. Use the tabs below to select one method.

As an Immuta user with the permission USER_ADMIN,

  1. Click Identities in the navigation menu and select Users.

  2. Select the user you want to edit and select the Settings tab.

  3. Click Remove for the IMPERSONATE_USER permission.

As a Synapse user,

  1. Navigate to your Synapse instance.

  2. Run the following in Synapse: EXEC sp_droprolemember N'<Impersonation Role>', N'<Synapse User>'

Synapse-Specific Caveats

User impersonation is specific to the script and session in which it was set. Opening a new script will revert the user back to themselves.

User Impersonation with Databricks Spark

Databricks user impersonation allows a Databricks user to impersonate an Immuta user. With this feature,

  • the Immuta user who is being impersonated does not need a Databricks account, but they must have an Immuta account.

  • the Databricks user who is impersonating an Immuta user does not have to be associated with Immuta. For example, this could be a service account.

When acting under impersonation, the Databricks user loses their privileged access, so they can only access the tables the Immuta user has access to and only perform DDL commands when that user is acting under an allowed circumstance (such as workspaces, scratch paths, or non-Immuta reads/writes).

1 - Configure User Impersonation

In the cluster policy JSON in the Immuta UI, add a comma-separated list of Databricks users who are allowed to impersonate Immuta users for the IMMUTA_SPARK_DATABRICKS_ALLOWED_IMPERSONATION_USERS Spark environment variable:

"spark_env_vars.IMMUTA_SPARK_DATABRICKS_ALLOWED_IMPERSONATION_USERS": {
  "type": "fixed",
  "value": "edixon@example.com,dakota@example.com"
}

Prevent users from changing impersonation user in a given session

If your BI tool or other service allows users to submit arbitrary SQL or issue SET commands, set IMMUTA_SPARK_DATABRICKS_SINGLE_IMPERSONATION_USER to true to prevent users from changing their impersonation user once it has been set for a given Spark session.
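Assuming the same cluster policy JSON pattern shown above applies to this variable as well, pinning it might look like this sketch:

```json
"spark_env_vars.IMMUTA_SPARK_DATABRICKS_SINGLE_IMPERSONATION_USER": {
  "type": "fixed",
  "value": "true"
}
```
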

2 - Impersonate a User

Once the cluster is configured with a list of Databricks users who are allowed to impersonate Immuta users, run the following SQL command to set the user you want to impersonate:

%sql
set immuta.impersonate.user=smwilliams@example.com

This command generates an API token for the specified user that queries Immuta for metadata pertinent to that user. When generating the token, the impersonated username is matched with the corresponding IAM user. The IAM used by default is the built-in IAM in Immuta, but it can be set using the IMMUTA_USER_MAPPING_IAMID environment variable.

3 - Query Data

Run queries as the impersonated Immuta user:

%sql
set immuta.impersonate.user=smwilliams@example.com
select * from demo.hr_data limit 10;

Once impersonation is active, any query issued in the session will have the appropriate data and subscription policies applied for the impersonated user.

Audited Queries

Audited queries include an impersonationUser field, which identifies the Databricks user impersonating the Immuta user:

{
  "id": "query-a20e-493e-id-c1ada0a23a26",
  "dateTime": "1639684812845",
  "month": 1463,
  "profileId": 4,
  "userId": "smwilliams@example.com",
  "dataSourceId": 1,
  "dataSourceName": "Hr Data",
  "count": 1,
  "recordType": "spark",
  "success": true,
  "component": "dataSource",
  "accessType": "query",
  "query": "Relation[id#2644,first_name#2645,last_name#2646,email#2647,gender#2648,race#2649,ssn#2650,dept#2651,job#2652,skills#2653,salary#2654,type#2655] parquet\n",
  "extra": {
    "databricksWorkspaceID": "0",
    "maskedColumns": {},
    "metastoreTables": [
      "demo.hr_data"
    ],
    "clusterName": "your-cluster-name",
    "pathUris": [
      "dbfs:/user/hive/warehouse/demo.db/hr_data"
    ],
    "queryText": "select * from demo.hr_data limit 10;",
    "queryLanguage": "sql",
    "clusterID": "your-171358-cluster-id",
    "impersonationUser": "edixon@example.com"
  },
  "dataSourceTableName": "demo_hr_data",
  "createdAt": "2021-12-16T20:00:12.850Z",
  "updatedAt": "2021-12-16T20:00:12.850Z"
}

4 - End User Impersonation

To end user impersonation for the session, run

%sql
set immuta.impersonate.user=

Databricks-Specific Caveat

The only way to enable this feature is through cluster configuration. The IMPERSONATE_USER permission in Immuta will not allow a user to perform impersonation in Databricks.

User Impersonation with Starburst (Trino)

User impersonation is automatically enabled with your Starburst (Trino) integration, but the authenticated user must be given the IMPERSONATE_USER permission in Immuta or match the Starburst (Trino) immuta.user.admin regex configuration property.

1 - Grant Users the IMPERSONATE_USER Permission

To grant a user the IMPERSONATE_USER permission, as an Immuta user with the permission USER_ADMIN,

  1. Click Identities in the navigation menu and select Users.

  2. Select the user you want to edit and select the Settings tab.

  3. Click Add Permissions.

  4. Click the Select Permission dropdown, and select the IMPERSONATE_USER permission.

2 - Impersonate a User

The Starburst (Trino) integration supports the native Starburst and Trino impersonation approaches:

  • JDBC method: In your JDBC connection driver properties, set the sessionUser property to the Immuta user you want to impersonate. See the Starburst JDBC driver documentation for details.

  • Trino CLI method: Set the --session-user property to specify the session user as the Immuta user you want to impersonate when invoking the Trino CLI. See the Starburst release notes for details.

3 - Check User Impersonation

To view the user you are impersonating, run SHOW SESSION LIKE 'immuta.immuta_user'.

4 - End User Impersonation

To end user impersonation, run RESET SESSION immuta.immuta_user.
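For example, to verify and then clear the impersonated user within a session, using the two commands above:

```sql
-- Show the currently impersonated Immuta user
SHOW SESSION LIKE 'immuta.immuta_user';

-- End impersonation by resetting the session property
RESET SESSION immuta.immuta_user;
```
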

5 - Revoke Users' IMPERSONATE_USER Permission

To revoke permission to impersonate users, as an Immuta user with the permission USER_ADMIN,

  1. Click Identities in the navigation menu and select Users.

  2. Select the user you want to edit and select the Settings tab.

  3. Click Remove for the IMPERSONATE_USER permission.

Caveat

The user's permissions to impersonate users are not checked until the query is run. If the user does not have the IMPERSONATE_USER permission in Immuta, they will be able to run the command to impersonate a role, but will not be able to query as that role.

User Impersonation with Snowflake

1 - Enable User Impersonation

Select Enable Impersonation when configuring the Snowflake integration on the App Settings page.

2 - Grant Users the IMPERSONATE_USER Permission

After enabling user impersonation with your Snowflake integration, there are two ways to give a user permission to use the feature: in the Immuta UI or in Snowflake. Use the tabs below to select one method.

As an Immuta user with the permission USER_ADMIN,

  1. Click Identities in the navigation menu and select Users.

  2. Select the user you want to edit and select the Settings tab.

  3. Click Add Permissions.

  4. Click the Select Permission dropdown, and select the IMPERSONATE_USER permission.

As a Snowflake user with the ACCOUNTADMIN role,

  1. Navigate to your Snowflake instance.

  2. In a worksheet run GRANT ROLE <<Impersonation Role>> TO USER "<<Snowflake User>>".

    In this example, the Impersonation Role is the name entered on the Immuta App Settings page when the feature was enabled. The default is IMMUTA_IMPERSONATION, but the admin may have customized it. The Snowflake User is the username of the Snowflake user that will now have permission to impersonate other users.

3 - Impersonate a User

To impersonate another user in Snowflake,

  1. Open a New Worksheet and set your role to the impersonation role specific to your organization.

  2. Run SET immuta_user = '<<Immuta username of the user to impersonate>>'.

  3. Run queries within that worksheet.
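Put together in a worksheet, a session might look like the following sketch. IMMUTA_IMPERSONATION is the default impersonation role name (your organization may have customized it), and the username and table are illustrative:

```sql
-- Switch to your organization's impersonation role
USE ROLE IMMUTA_IMPERSONATION;

-- Set the Immuta username of the user to impersonate
SET immuta_user = 'taylor@example.com';

-- Queries in this worksheet now run with that user's policies applied
SELECT * FROM analytics.hr_data LIMIT 10;  -- illustrative table
```
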

4 - Revoke Users' IMPERSONATE_USER Permission

There are two ways to revoke permission to impersonate users: in the Immuta UI or in Snowflake. Use the tabs below to select one method.

As an Immuta user with the permission USER_ADMIN,

  1. Click Identities in the navigation menu and select Users.

  2. Select the user you want to edit and select the Settings tab.

  3. Click Remove for the IMPERSONATE_USER permission.

As a Snowflake user with the ACCOUNTADMIN role,

  1. Navigate to your Snowflake instance.

  2. In a worksheet run the following: REVOKE ROLE <<Impersonation Role>> FROM USER "<<Snowflake User>>"

    In this example, the Impersonation Role is the name entered on the Immuta App Settings page when the feature was enabled. The default is IMMUTA_IMPERSONATION, but the admin may have customized it. The Snowflake User is the username of the Snowflake user that will no longer have permission to impersonate other users.

Snowflake-Specific Caveats

  • Impersonation is specific to the workspace and session in which it was set. Opening a new worksheet will revert the user back to themselves.

  • Snowflake auditing will show the user running the queries as the user logged in to Snowflake, not as the user they are impersonating.
