
AWS PrivateLink for API Gateway


Private preview: This feature is available to select accounts. Contact your Immuta representative for details.

AWS PrivateLink provides private connectivity from the Immuta SaaS platform to API gateway endpoints hosted on AWS. It ensures that all traffic to the configured endpoints only traverses private networks.

This feature is supported in all regions across Immuta's global segments (NA, EU, and AP); contact your Immuta representative if you have questions about availability.

Requirements

  • You have an Immuta SaaS tenant.

  • You have an Amazon API gateway private API.

  • Your private API must exist in one of the regions in our global segments.

Configuring API gateway with AWS PrivateLink

  1. Update your API gateway resource policy to allow for access from the Immuta VPC endpoint in the applicable AWS region. The Immuta VPC endpoint IDs are listed in the table below.

AWS region                                  VPC endpoint ID
ca-central-1 Canada (Central)               vpce-07dfc91c761a8f2f9
eu-central-1 Europe (Frankfurt)             vpce-04bc9a3cd6020a865
eu-west-1 Europe (Ireland)                  vpce-079feae086b944dad
eu-west-2 Europe (London)                   vpce-091d282f539081cf5
us-east-1 US East (Virginia)                vpce-0421446f7bf694e56
us-east-2 US East (Ohio)                    vpce-071ef6403fa277210
us-west-2 US West (Oregon)                  vpce-01f8edfbf6da1095d
ap-northeast-1 Asia Pacific (Tokyo)         vpce-09b3a20743b64ecc9
ap-south-1 Asia Pacific (Mumbai)            vpce-00620d5f59239fa03
ap-southeast-1 Asia Pacific (Singapore)     vpce-0b470f0df2b0e03f3
ap-southeast-2 Asia Pacific (Sydney)        vpce-0afc6a24f0959847c

Here is an example resource policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": [
                "execute-api:/*"
            ]
        },
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": [
                "execute-api:/*"
            ],
            "Condition" : {
                "StringNotEquals": {
                    "aws:SourceVpce": [
                        "vpce-1a2b3c4d5e6f7g8h9", # customer internal VPC endpoint
                        "vpce-0421446f7bf694e56"  # Immuta VPC endpoint added to the list
                    ]
                }
            }
        }
    ]
}

Once you have made changes to your resource policy, you must deploy your API for the updates to take effect.

  2. You should now be able to connect to your private API from your Immuta SaaS tenant using your API endpoint, i.e. <api-gateway-id>.execute-api.<region>.amazonaws.com/<stage>/<endpoint>.

Troubleshooting

Issue: I received a permissions error when trying to invoke my private API from Immuta

If you get an error similar to the following:

{"Message":"User: anonymous is not authorized to perform: execute-api:Invoke on resource: arn:aws:execute-api:us-east-1:****************/foo/GET/bar with an explicit deny"}

Check to make sure that the following is true:

  • You have authorized the correct VPC endpoint for the region you are targeting in your resource policy.

  • Your resource policy allows for execute-api:Invoke privileges on the endpoint you are making requests to from Immuta.

  • You have deployed your API after making changes to your resource policy.
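The first two troubleshooting checks can be automated: given a resource policy shaped like the example in this guide, verify that a given VPC endpoint would not be caught by the explicit deny. A minimal sketch, assuming the allow-then-deny policy pattern shown above (the helper name and the second endpoint ID are our own, for illustration):

```python
import json

def vpce_is_authorized(policy_json: str, vpce_id: str) -> bool:
    """Return True if the given VPC endpoint ID would pass the
    Deny + StringNotEquals aws:SourceVpce pattern shown above."""
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Deny":
            continue
        cond = stmt.get("Condition", {}).get("StringNotEquals", {})
        allowed = cond.get("aws:SourceVpce", [])
        if isinstance(allowed, str):
            allowed = [allowed]
        # The explicit deny fires when the source VPCE is NOT in the list.
        if allowed and vpce_id not in allowed:
            return False
    return True

policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Principal": "*",
         "Action": "execute-api:Invoke", "Resource": ["execute-api:/*"]},
        {"Effect": "Deny", "Principal": "*",
         "Action": "execute-api:Invoke", "Resource": ["execute-api:/*"],
         "Condition": {"StringNotEquals": {
             "aws:SourceVpce": ["vpce-0421446f7bf694e56"]}}},
    ],
})

print(vpce_is_authorized(policy, "vpce-0421446f7bf694e56"))  # True
print(vpce_is_authorized(policy, "vpce-000000000000000ff"))  # False
```

Remember that passing this check is not sufficient on its own: the API must also have been deployed after the policy change.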

AWS PrivateLink for Databricks

AWS PrivateLink provides private connectivity from the Immuta SaaS platform to Databricks accounts hosted on AWS. It ensures that all traffic to the configured endpoints only traverses private networks.

This front-end PrivateLink connection allows users to connect to the Databricks web application, REST API, and Databricks Connect API over a VPC interface endpoint. For details about AWS PrivateLink in Databricks and the network flow in a typical implementation, explore the Databricks documentation.

This feature is supported in most regions across Immuta's global segments (NA, EU, and AP); contact your Immuta representative if you have questions about availability.


Immuta SaaS Private Networking

This section contains information about private connectivity options for Immuta SaaS applications.

Overview

Immuta SaaS applications support private connectivity from customer networks. This allows organizations to meet security and compliance controls by ensuring that no users can access these endpoints outside of their internal networks.

Support for AWS PrivateLink is available in most regions across Immuta's global segments (NA, EU, and AP); contact your Immuta representative if you have questions about availability.

Configuration guide

AWS PrivateLink

Requirements

Databricks

Ensure that your accounts meet the following requirements:

  • Your Databricks account is on the E2 version of the platform.

  • Your Databricks account is on the Enterprise pricing tier.

  • You have your Databricks account ID from the account console.

  • You have an Immuta SaaS tenant.

  • AWS PrivateLink for Databricks has been enabled.

hashtag
Databricks workspace

Ensure that your workspace meets the following requirements:

  • Your workspace must be in an AWS region that supports the E2 version of the platform.

  • Your Databricks workspace must use a customer-managed VPC to add any PrivateLink connection.

  • Your workspaces must be configured with private_access_settings objects.


You cannot configure a connection to your workspace over the public internet if PrivateLink is enabled.

If you have PrivateLink configured on your workspace, Databricks will update the DNS records for that workspace URL to resolve to <region>.privatelink.cloud.databricks.com. Immuta SaaS uses these publicly-resolvable records to direct traffic to a PrivateLink endpoint on our network.

This means that if you have PrivateLink enabled on your workspace, you must follow these instructions to configure your integration. Even if your workspace is also publicly-routable, Databricks's DNS resolution forces the traffic over PrivateLink.

The two supported configurations are:

  • A workspace with no PrivateLink configuration, which resolves to public IP addresses.

  • A workspace with PrivateLink configuration, which allows access from the Immuta SaaS regional endpoint (listed below).
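One way to tell which of the two configurations applies is to look at what the workspace URL's DNS CNAME points at (for example, with dig CNAME <workspace>.cloud.databricks.com). A small sketch of that check, using the <region>.privatelink.cloud.databricks.com convention described above (the classifier function is our own):

```python
def workspace_configuration(cname_target: str) -> str:
    """Classify a Databricks workspace by the CNAME target of its URL.

    Per the note above, Databricks points PrivateLink-enabled workspace
    URLs at <region>.privatelink.cloud.databricks.com, so the CNAME
    target is enough to tell the two supported configurations apart.
    """
    if cname_target.endswith(".privatelink.cloud.databricks.com"):
        return "privatelink"   # must use the Immuta regional endpoint
    return "public"            # resolves to public IP addresses

print(workspace_configuration("us-east-1.privatelink.cloud.databricks.com"))  # privatelink
print(workspace_configuration("us-east-1.cloud.databricks.com"))              # public
```

In practice you would feed this the result of an actual DNS lookup from inside your network rather than a hard-coded string.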

Enablement

Contact your Databricks representative to enable AWS PrivateLink on your account.

Configure Databricks with AWS PrivateLink

  1. Register the Immuta VPC endpoint for the applicable AWS region with your Databricks workspaces. The Immuta VPC endpoint IDs are listed in the table below.

AWS region                                  VPC endpoint ID
ap-northeast-1 Asia Pacific (Tokyo)         vpce-08cadda15f0f70462
ap-northeast-2 Asia Pacific (Seoul)         vpce-0e45ce26a7f8d00af
ap-south-1 Asia Pacific (Mumbai)            vpce-0efef886a4fbd9532
ap-southeast-1 Asia Pacific (Singapore)     vpce-07e9890053f5084b2
ap-southeast-2 Asia Pacific (Sydney)        vpce-0d363d9ea82658bec
ca-central-1 Canada (Central)               vpce-01933bcf30ac4ed19
eu-central-1 Europe (Frankfurt)             vpce-0048e36edfb27d0aa
eu-west-1 Europe (Ireland)                  vpce-0783d9412b046df1f
eu-west-2 Europe (London)                   vpce-0f546cc413bf70baa
us-east-1 US East (Virginia)                vpce-0c6e8f337e0753aa9
us-east-2 US East (Ohio)                    vpce-00ba42c4e2be20721
us-west-2 US West (Oregon)                  vpce-029306c6a510f7b79

  2. Identify your private access level (either ACCOUNT or ENDPOINT) and configure your Databricks workspace accordingly.

    1. If the private_access_level on your private_access_settings object is set to ACCOUNT, no additional configuration is required.

    2. If the private_access_level on your private_access_settings object is set to ENDPOINT, add the Immuta VPC endpoint ID for your region (from the table above) to the allowed_vpc_endpoint_ids list inside your private_access_settings object in Databricks. For example:

       "private_access_settings_name": "immuta-access",
       "region": "us-east-1",
       "public_access_enabled": false,
       "private_access_level": "ENDPOINT",
       "allowed_vpc_endpoint_ids": [
               "vpce-0fe5b17a0707d6fa5"
       ]

  3. Configure Databricks depending on your integration type:

    1. Configure the Databricks Unity Catalog integration using your standard cloud.databricks.com workspace URL as the Host.

    2. Configure the Databricks Spark integration using your standard cloud.databricks.com URL, and use cloud.databricks.com as the Server when you register your tables as Immuta data sources.

GCP BigQuery Private Service Connect

The Immuta SaaS platform uses Private Service Connect for all BigQuery connections through the Immuta Private Cloud Exchange. This means that all connectivity to Google APIs, including BigQuery, is over a private network connection. This functionality is enabled by default, and no customer configuration is required for the private connectivity (unless you have VPC Service Controls enabled). This feature is supported in all regions across Immuta's global segments (NA, EU, and AP).

VPC Service Controls

If you have VPC Service Controls (VPC-SCs) enabled for BigQuery, contact your Immuta representative for the VPC information required to configure resource access restrictions. If you have VPC-SCs configured without Immuta's VPC information, Immuta SaaS will be unable to connect to your Google BigQuery instances.

Azure Private Link for Databricks

Azure Private Link provides private connectivity from the Immuta SaaS platform, hosted on AWS, to Azure Databricks accounts. It ensures that all traffic to the configured endpoints only traverses private networks over the Immuta Private Cloud Exchange.

This front-end Private Link connection allows users to connect to the Databricks web application, REST API, and Databricks Connect API over an Azure Private Endpoint. For details about Azure Private Link for Databricks and the network flow in a typical implementation, explore the Databricks documentation.

Support for Azure Private Link is available in all Databricks-supported Azure regions.

Requirements

Ensure that your accounts meet the following requirements:

  • You have an Immuta SaaS tenant.

  • Your Azure Databricks workspace must be on the Premium or Enterprise pricing tier.

  • Azure Private Link for Databricks has been configured and enabled.

  • You have your Databricks account ID from the account console.

Configure Databricks with Azure Private Link

  1. Contact your Immuta representative, and provide the following information for each Azure Databricks Workspace you wish to connect to:

    • Azure region

    • Azure Databricks hostname

    • Azure Databricks resource ID or alias

  2. Your representative will inform you when the two Azure Private Link connections have been made available. Accept them in your Azure Databricks workspace configuration.

  3. Configure Databricks depending on your integration type:

    1. Configure the Databricks Unity Catalog integration using your standard cloud.databricks.com workspace URL as the Host.

    2. Configure the Databricks Spark integration using your standard azuredatabricks.net URL, and use the privatelink-account-url from the JSON object in step one as the Server when registering data sources.


Azure Private Link for Azure Synapse Dedicated SQL Pools


Public preview: This feature is available to all accounts that request to enable it for their tenant. Contact your Immuta representative to enable it.

Azure Private Link provides private connectivity from the Immuta SaaS platform, hosted on AWS, to Azure Synapse workspaces. It ensures that all traffic to the configured endpoints only traverses private networks over the Immuta private cloud exchange.

This uses the SQL Private Link connection to allow users to connect to the Azure Synapse dedicated SQL pools over an Azure Private Endpoint. For details about Azure Private Link for Azure Synapse and the network flow in a typical implementation, see the Azure Synapse documentation.

Support for Azure Private Link is available in all Synapse-supported Azure regions.

Requirements

Ensure that your accounts meet the following requirements:

  • You have an Immuta SaaS tenant.

  • You have an Azure Synapse workspace with a dedicated SQL pool.
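Dedicated SQL pools are reached through the workspace's SQL endpoint, which follows a fixed naming pattern under sql.azuresynapse.net. A trivial sketch of that hostname format (the helper function and workspace name are our own, for illustration):

```python
def synapse_sql_hostname(workspace_name: str) -> str:
    """Build the dedicated SQL pool endpoint for an Azure Synapse
    workspace; dedicated pools are served from sql.azuresynapse.net."""
    return f"{workspace_name}.sql.azuresynapse.net"

print(synapse_sql_hostname("example-workspace"))
# example-workspace.sql.azuresynapse.net
```

This is the hostname you will continue to use after Private Link is configured; the private endpoint changes how it resolves, not what it is called.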

Configure Synapse workspace with Azure Private Link

  1. Contact your Immuta representative and provide the following information for each Azure Synapse workspace you want to connect to:

    • Azure region

    • Azure Synapse workspace SQL hostname

    • Azure Synapse workspace resource ID or alias (can be found in the console by going to the Synapse workspace -> Settings -> Properties and getting the Resource ID)

  2. Your representative will inform you when the two Azure Private Link connections have been made available. Accept them in your Azure Synapse workspace configuration.

  3. Configure the Azure Synapse data source using your standard sql.azuresynapse.net URL for the dedicated SQL pool.

AWS PrivateLink for Redshift

AWS PrivateLink provides private connectivity from the Immuta SaaS platform to Redshift clusters hosted on AWS. It ensures that all traffic to the configured endpoints only traverses private networks.

This feature is supported in most regions across Immuta's global segments (NA, EU, and AP); contact your Immuta representative if you have questions about availability.

Requirements

  • You have an Immuta SaaS tenant.

  • You have set up an AWS PrivateLink Service for your Redshift cluster endpoints.

    • When creating the service, make sure that the Require Acceptance option is checked (this ensures that no one else can connect; all connections are blocked until the Immuta service principal is added).

Configure Redshift with AWS PrivateLink

  1. Open a support ticket with Immuta Support with the following information:

    • AWS region

    • AWS subnet availability zone IDs (e.g., use1-az3; these are not the account-specific identifiers like us-east-1a or eu-west-2c)

    • VPC endpoint service ID (e.g., vpce-0a02f54c1d339e98a)

    • Ports used

  2. Authorize the service principal provided by your representative so that Immuta can complete the VPC endpoint configuration.

  3. Configure the Redshift integration.

  4. Register your tables as Immuta data sources.

AWS PrivateLink for PostgreSQL

AWS PrivateLink provides private connectivity from the Immuta SaaS platform to PostgreSQL database engines hosted on Amazon Aurora and Amazon RDS. It ensures that all traffic to the configured endpoints only traverses private networks.

This feature is supported in most regions across Immuta's global segments (NA, EU, and AP); contact your Immuta representative if you have questions about availability.

Requirements

  • You have an Immuta SaaS tenant.

  • You have set up an AWS PrivateLink Service for your Amazon Aurora/Amazon RDS endpoints.

    • When creating the service, make sure that the Require Acceptance option is checked (this ensures that no one else can connect; all connections are blocked until the Immuta service principal is added).

Like all data connection private networking integrations that Immuta offers for AWS, the Amazon Aurora/Amazon RDS PrivateLink integration relies on a Network Load Balancer (NLB) that targets a private IP address - in this case, the private IP address of the Aurora/RDS instance. In a Multi-AZ configuration, the primary instance's private IP address changes during a failover event.

The NLB will not automatically detect this new IP address. Consequently, after an RDS failover, Immuta will be unable to connect to the database until the NLB's target group is updated with the new primary instance's private IP address.

To avoid the need for manual updates to your NLB configuration, you must implement an automated solution. A common approach is to use an AWS Lambda function that listens for RDS failover events and automatically updates the NLB target group with the new IP address. For a detailed guide on this automation, refer to the AWS blog post: Access Amazon RDS across VPCs using AWS PrivateLink and Network Load Balancer.

Configure PostgreSQL with AWS PrivateLink

  1. Open a support ticket with Immuta Support with the following information:

    • AWS region

    • AWS subnet availability zone IDs (e.g., use1-az3; these are not the account-specific identifiers like us-east-1a or eu-west-2c)

    • VPC endpoint service ID (e.g., vpce-0a02f54c1d339e98a)

    • Ports used

  2. Authorize the service principal provided by your representative so that Immuta can complete the VPC endpoint configuration.

  3. Register the PostgreSQL connection.
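The failover automation described above can be sketched as follows. This is a minimal illustration under our own assumptions, not the AWS blog post's implementation: it assumes a Lambda subscribed to RDS event notifications, the pure helper computes the target-group changes, and the actual registration calls (a boto3 elbv2 client's register_targets/deregister_targets) are left as comments because they require AWS credentials. All IPs are placeholders.

```python
def plan_target_update(current_targets, new_primary_ip):
    """Given the NLB target group's current IP targets and the new
    primary instance's private IP, return (to_deregister, to_register)."""
    to_deregister = [ip for ip in current_targets if ip != new_primary_ip]
    to_register = [] if new_primary_ip in current_targets else [new_primary_ip]
    return to_deregister, to_register

def lambda_handler(event, context):
    """Skeleton handler for an RDS failover event notification."""
    # 1. Resolve the instance endpoint to its new private IP, e.g. with
    #    socket.gethostbyname(<instance endpoint>) from inside the VPC.
    # 2. Fetch the target group's current targets (boto3 "elbv2" client,
    #    describe_target_health).
    # 3. Compute the changes (placeholder values shown here):
    old, new = plan_target_update(["10.0.1.25"], "10.0.2.44")
    # 4. Apply them with elbv2.deregister_targets(...) and
    #    elbv2.register_targets(...).
    return {"deregister": old, "register": new}

print(lambda_handler({}, None))
```

When the primary's IP has not changed, plan_target_update returns two empty lists, so the handler is safe to run on every event.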

    Immuta SaaS Private Networking Over AWS PrivateLink

    Immuta SaaS hosts AWS PrivateLink services that organizations can configure Amazon VPC endpoint connections to, which ensures that all traffic to Immuta SaaS only traverses private networks.

    This feature is supported in most regions across Immuta's global segments (NA, EU, and AP); contact your Immuta representative if you have questions about availability.


    This documentation is for configuring access to an Immuta SaaS tenant from an organization's network, not for configuring access from a tenant to an organization's data sources or APIs. For that, please see the documentation on Data connection private networking.

    Overview of Immuta SaaS Private Networking over AWS PrivateLink

    Configuring AWS PrivateLink connections to the Govern app

    Requirements

    • You have an Immuta SaaS tenant.

    • You have an Amazon VPC in one of the supported regions listed in the global segment tables below.

    • Clients (users or services) can access the Amazon VPC network where the AWS PrivateLink endpoint will be created.

    Create PrivateLink endpoint

    You will need to create an AWS PrivateLink endpoint to connect directly to your tenant over the Immuta SaaS network. Please refer to the AWS PrivateLink documentation for instructions on creating an endpoint.


    Please note that the documentation uses connecting to an AWS service as an example, but you will want to configure your endpoint to connect to one of the PrivateLink service endpoints in the tables below.

    Immuta has a set of PrivateLink services that you can connect to in different global segments. When creating your endpoint, please choose the service in the same region as your tenant. If you do not know what region your tenant is in, please contact your Immuta representative.

    NA global segment

    Region                                Endpoint service name                                           Availability zones
    us-east-1 US East (Virginia)          com.amazonaws.vpce.us-east-1.vpce-svc-0c33df1aaf78a8955         use1-az2, use1-az4, use1-az6
    us-west-2 US West (Oregon)            com.amazonaws.vpce.us-west-2.vpce-svc-0e35fa96fd264e0a6         usw2-az1, usw2-az2, usw2-az3

    EU global segment

    Region                                Endpoint service name                                           Availability zones
    eu-central-1 Europe (Frankfurt)       com.amazonaws.vpce.eu-central-1.vpce-svc-027e6fd0c1cf62c68      euc1-az1, euc1-az2, euc1-az3
    eu-west-1 Europe (Ireland)            com.amazonaws.vpce.eu-west-1.vpce-svc-0bd003f6352dc5e58         euw1-az1, euw1-az2, euw1-az3
    eu-west-2 Europe (London)             com.amazonaws.vpce.eu-west-2.vpce-svc-0cb6dcde93257e082         euw2-az1, euw2-az2, euw2-az3

    AP global segment

    Region                                Endpoint service name                                           Availability zones
    ap-northeast-1 Asia Pacific (Tokyo)   com.amazonaws.vpce.ap-northeast-1.vpce-svc-056d170f71688f5f9    apne1-az1, apne1-az2, apne1-az4
    ap-southeast-2 Asia Pacific (Sydney)  com.amazonaws.vpce.ap-southeast-2.vpce-svc-0f1fad760b7efc4d7    apse2-az1, apse2-az2, apse2-az3

    Configuring security group access

    VPC endpoints must be associated with at least one security group upon creation. Please ensure that traffic from your clients to port 443 is allowed.

    Configure privatelink.immutacloud.com DNS

    In order to direct traffic to your PrivateLink endpoint for your tenant hostname, you will need to set up DNS resolution in your network for the privatelink.immutacloud.com domain. For instructions on how to do this, please refer to your internal DNS provider's documentation.

    Once you have resolution for the domain configured, you will need to create a CNAME DNS record that resolves <tenant name>.privatelink.immutacloud.com to your newly-created VPC endpoint's DNS name.

    For example, if your tenant's hostname is example.hosted.immutacloud.com and your VPC endpoint DNS name is vpce-0d363d9ea82658bec-e4wo04x9.vpce-svc-0d12345ddd89101112.us-east-1.vpce.amazonaws.com, you should create a CNAME record that resolves example.privatelink.immutacloud.com to your VPC endpoint DNS name.

    The end result should be that, inside your network, DNS resolution for your tenant hostname will direct traffic to your VPC endpoint.

    Have your connection request accepted

    Once you have configured DNS, you will need to contact your Immuta representative with the following information in order to have your VPC endpoint connection request accepted and PrivateLink enabled for your tenant:

    • Tenant name

    • AWS region

    • VPC endpoint ID

    After the request is completed, please continue to use your standard hostname (e.g. example.hosted.immutacloud.com) to access your tenant. An Immuta-managed CNAME record will direct that traffic to your PrivateLink hostname (e.g. example.privatelink.immutacloud.com).

    When Immuta completes this request, your tenant will no longer be publicly accessible. Traffic bound for your tenant hostname (e.g. example.hosted.immutacloud.com) will be directed to your PrivateLink hostname (e.g. example.privatelink.immutacloud.com).

    Any services or data platforms that make requests to the Govern app API will need to route their traffic over your VPC endpoint as well. The integrations that require this connectivity are:

    • Starburst (Trino)

    • Databricks Spark

    • All SCIM integrations

      • If your Identity Provider only supports public SCIM endpoints, please see the section below on configuring SCIM integrations that require public endpoints.

    In order to prevent these integrations from becoming degraded, please ensure that they can send traffic to your PrivateLink endpoint.

    You have successfully configured Immuta SaaS PrivateLink for the Govern app.

    Configuring SCIM integrations that require public endpoints

    Identity providers that support SCIM often require that the endpoints that they interact with are publicly accessible. In order to accommodate this traffic, Immuta can configure a separate, SCIM-only public traffic ingress for your tenant.

    If needed, request a public SCIM ingress when you contact your Immuta representative to have Immuta SaaS private networking enabled.

    Configuring private networking for multiple tenants

    If you have multiple Immuta SaaS tenants that you need to enable Immuta SaaS private networking for, you only need to configure one endpoint per global segment. For example, if all your tenants are in the EU global segment, you only need to create a VPC endpoint in one of the EU regions from the table above.

    While having at least one is required, it is possible to configure multiple endpoints, either for redundancy or to support traffic to your tenants from distinct, isolated networks. If you do create multiple VPC endpoints, please provide all the VPC endpoint IDs from your connection requests to your Immuta representative.

    You will still need to create a privatelink.immutacloud.com CNAME record for each tenant. Please ensure that, if you've created separate VPC endpoints per tenant, the CNAME record references the correct VPC endpoint DNS name.

    Configuring AWS PrivateLink connections to the Request app

    Requirements

    • You have an Immuta SaaS tenant.

    • You have an Amazon VPC in one of the supported regions listed in the global segment tables above.

    • Clients (users or services) can access the Amazon VPC network where the AWS PrivateLink endpoint will be created.

    PrivateLink for the Request app is only supported if all your tenants reside in a single global segment. Having tenants in multiple global segments is very uncommon, so you are unlikely to be affected by this limitation.

    Configure Request app DNS

    Immuta SaaS private networking for the Request app will use the same VPC endpoint as the one created for the Govern app, so the only required additional configuration is DNS-related.

    In order to direct traffic to your PrivateLink endpoint for the Request app, you will need to set up DNS resolution in your network for the following domains:

    • app.immutacloud.com

    • marketplace-fe.immutacloud.com

    For instructions on how to do this, refer to your internal DNS provider's documentation.

    Once you have resolution for the domain configured, you will need to create CNAME DNS records that resolve those domains to your PrivateLink VPC endpoint's DNS name (the same endpoint used for the Govern app PrivateLink).

    For example, if your VPC endpoint DNS name is vpce-0d363d9ea82658bec-e4wo04x9.vpce-svc-0d12345ddd89101112.us-east-1.vpce.amazonaws.com, you should create CNAME records that resolve app.immutacloud.com and marketplace-fe.immutacloud.com to your VPC endpoint DNS name.

    The end result should be that, inside your network, DNS resolution to app.immutacloud.com and marketplace-fe.immutacloud.com will direct traffic to your VPC endpoint, and the Request app should be accessible along with your tenant.
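The DNS work in the sections above amounts to a handful of CNAME records. A small sketch that generates them, using the example hostnames from this guide (the helper function is our own):

```python
def privatelink_cnames(tenant, vpce_dns_name, include_request_app=False):
    """Return the CNAME records (name -> target) needed to route an
    Immuta SaaS tenant, and optionally the Request app, over a VPC endpoint."""
    records = {f"{tenant}.privatelink.immutacloud.com": vpce_dns_name}
    if include_request_app:
        # The Request app reuses the same VPC endpoint as the Govern app.
        records["app.immutacloud.com"] = vpce_dns_name
        records["marketplace-fe.immutacloud.com"] = vpce_dns_name
    return records

vpce = ("vpce-0d363d9ea82658bec-e4wo04x9"
        ".vpce-svc-0d12345ddd89101112.us-east-1.vpce.amazonaws.com")
for name, target in privatelink_cnames("example", vpce,
                                       include_request_app=True).items():
    print(f"{name} CNAME {target}")
```

If you run separate VPC endpoints per tenant, call this once per tenant with the matching endpoint DNS name, as the multiple-tenants section above requires.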

    Databricks Private Connectivity

    This section contains information about private connectivity options for Databricks integrations.

    Overview

    The Immuta SaaS platform supports private connectivity to Databricks on AWS and the Azure Databricks Service. This allows organizations to meet security and compliance controls by ensuring that traffic to data sources from Immuta SaaS only traverses private networks, never the public internet.

    • Support for AWS PrivateLink is available in most regions across Immuta's global segments (NA, EU, and AP); contact your Immuta representative if you have questions about availability.

    • Support for Azure Private Link is available in all Databricks-supported Azure regions.

    Configuration guides

    • AWS PrivateLink

    • Azure Private Link

    Private Networking Support

    Immuta SaaS supports two different kinds of private networking:

    • Data connection private networking: Enables private connectivity between the Immuta SaaS network and a user's data platforms or private APIs over either AWS PrivateLink or Azure Private Link. This is useful for organizations with security and compliance policies that require that their data platforms and APIs are not routable over the public internet (even with a firewall in place).

    • Immuta SaaS private networking: Enables private connectivity from a user's network to an Immuta SaaS tenant. It will require Immuta SaaS users to connect over a private endpoint. This is useful for organizations with security and compliance policies that require that the SaaS applications they use are not publicly accessible.

    A simple way to understand the difference between these two features: Data connection private networking is outbound from Immuta to an organization's data sources, where the organization creates the private service endpoint for Immuta SaaS to connect to. Immuta SaaS private networking is inbound to Immuta from an organization's network, where Immuta creates the private service endpoint for users to connect to.


    Having one type of private networking enabled does not mean that the other is configured automatically. The two features perform different operations and use different networking interfaces that are configured independently.

    Data connection private networking

    Although AWS PrivateLink and Azure Private Link differ in their implementation details, they are fundamentally similar offerings. Organizations can expose private services on AWS or Azure networks that Immuta SaaS can establish a connection to. How this is done can vary significantly by both data platform and hosting cloud provider, which is why this documentation has been broken down into specific instructions for each combination in the support matrix below.

    AWS
    Azure
    GCP

    Over time, the breadth and depth of private networking support will continue to grow. If there are specific data platforms and/or cloud providers that you require, which are either not listed or not yet supported, please contact your Immuta representative.

    hashtag
    Private networking across regions and global segments

    Immuta SaaS's global network is divided into large geographic regions called . All Immuta SaaS tenants are deployed into an AWS region inside their chosen segment.

    Occasionally, it may be required to connect to data sources outside of a specific region. To meet those needs, Immuta SaaS supports both cross-region and cross-global-segment connectivity.

    hashtag
    Cross-region private networking

    This involves connecting to data sources in a different region within a given global segment.

    Examples

    • a tenant in us-east-1 needs to connect to a Snowflake account in AWS'sus-east-2 region.

    • a tenant in us-west-2 needs to connect to an Azure Databricks workspace in the westus2 region.

    hashtag
    Cross-global-segment private networking

    This involves connecting to data sources in a region outside of the tenant's global segment.

    Examples

    • a tenant in the EU Global Segment needs to connect to a Snowflake account in us-east-2.

    • a tenant in the AP Global Segment needs to connect to a Starburst instance hosted in Azure's eastus2 region.

    hashtag
    Immuta SaaS private networking

    circle-info

    Public preview: This feature is available to select accounts. Contact your Immuta representative for details.

    The fundamental mechanism that underlies Immuta SaaS private networking is an Immuta SaaS private endpoint service (e.g. an ) which organizations can establish a connection to via a private endpoint (e.g. an ).

    Once the endpoint-service connection is established, organizations then configure DNS resolution in their network to resolve their governance private FQDN (e.g.<tenant>.privatelink.immutacloud.com) to their private endpoint. Organizations can continue to access their Immuta SaaS tenants using their standard governance FQDN (e.g. <tenant>.hosted.immutacloud.com), which will now automatically resolve to their private FQDN.

    As with data connection private networking, the specifics of configuring Immuta SaaS private networking can vary by the cloud provider that hosts the source network. Consult the support matrix below for specific instructions.

    Cloud provider
    Support level

    Starburst (Trino)

    N/A

    Amazon Redshift

    N/A

    N/A

    Amazon RDS/Aurora PostgreSQL

    N/A

    N/A

    Amazon S3

    Generally available

    N/A

    N/A

    AWS Lake Formation

    Private preview

    N/A

    N/A

    Amazon API gateway

    N/A

    N/A

    Azure Synapse Analytics

    N/A

    N/A

    GCP BigQuery

    N/A

    N/A

    Snowflake

    Generally available

    Generally available

    N/A

    Databricks

    Generally available

    AWS PrivateLink

    Generally available

    Azure Private Link

    Not yet supported



    BI Tool Configuration Recommendations

    Immuta can enforce policies on data in your dashboards when your BI tools are connected directly to your compute layer.

    This page provides recommendations for configuring the interaction between your database, BI tools, and users.

    hashtag
    Connect directly to the database instead of extracts or imports

    To ensure that Immuta applies access controls to your dashboards, connect your BI tools directly to the compute layer where Immuta enforces policies without using extracts. Different tools may call this feature different names (such as live connections in Tableau or DirectQuery in Power BI).

    Connecting your tools directly to the compute layer without using extracts will not impact performance and provides a host of other benefits. For details, see Moving from legacy BI extracts to modern data security and engineering.

    hashtag
    Use personal credentials to authenticate and query data

    Personal credentials must be used to query data from the BI tool so that Immuta can apply the correct policies for the user accessing the dashboard. Different authentication mechanisms are available, depending on the BI tool, connector, and compute layer. However, Immuta recommends using one of the following methods:

    • Use OAuth single sign-on (SSO) when available, as it offers the best user experience.

    • Use username and password authentication or personal access tokens as an alternative if OAuth is not supported.

    • Use impersonation if you cannot create and authenticate individual users in the compute layer. Impersonation allows users to query data as another Immuta user. For details, see the reference guide for your data platform integration.

    For configuration guidance, see the Power BI configuration example and the Tableau configuration example.

    hashtag
    Authentication method matrix

    Immuta has verified several popular BI tool and compute platform combinations. The table below outlines these combinations and their recommended authentication methods. However, since these combinations depend on tools outside Immuta, consult the platform documentation to confirm these suggestions.

    | BI tool | Amazon Redshift | Azure Synapse Analytics | AWS Databricks | Azure Databricks | Google BigQuery | Snowflake | Starburst |
    | --- | --- | --- | --- | --- | --- | --- | --- |
    | Power BI service | OAuth/SSO | Not tested | Databricks personal access token (PAT) | OAuth/SSO | Not tested | OAuth/SSO | ❌ |
    | Power BI client | OAuth/SSO | Not tested | OAuth/SSO | OAuth/SSO | OAuth/SSO | Not tested | OAuth/SSO |
    | Tableau Desktop | Username and password | OAuth/SSO | OAuth/SSO | OAuth/SSO | OAuth/SSO | OAuth/SSO | Username and password |
    | Tableau Server | Username and password | OAuth/SSO | OAuth/SSO | OAuth/SSO | OAuth/SSO | OAuth/SSO | Username and password |
    | QuickSight | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |

    Azure Private Link for Snowflake

    Azure Private Link provides private connectivity from the Immuta SaaS platform, hosted on AWS, to Snowflake accounts on Azure. It ensures that all traffic to the configured endpoints only traverses private networks over the Immuta Private Cloud Exchange.

    Support for Azure Private Link is available in all Snowflake-supported Azure regions.

    hashtag
    Requirements

  • You have an Immuta SaaS tenant.

  • Your Snowflake account is hosted on Azure.

  • Your Snowflake account is on the Business Critical Edition.

  • You have the ACCOUNTADMIN role on your Snowflake account to configure the Private Link connection.

    hashtag
    Configure Snowflake with Azure Private Link

    circle-exclamation

    Snowflake requires that an Azure temporary access token be used when configuring the Azure Private Link connection. Because the token expires after one hour, your Immuta representative will ask for a time window in which you can accept the connection in your Snowflake account. During this window, the token will be generated by Immuta and provided to you when you're ready to run the following SQL query.

    1. In your Snowflake environment, run the following SQL query, which will return a JSON object with the connection information you will need to include in your support ticket:

    2. Copy the returned JSON object into a support ticket with Immuta Support to request that the feature be enabled on your Immuta SaaS tenant.

    3. Your Immuta representative will work with you to schedule a time to accept the connection in your Snowflake account. They will provide you with a SQL query to run using the ACCOUNTADMIN role. The SQL query will be in this format:

      The query should return the following response: Private link access authorized.

    4. Configure the Snowflake integration using the privatelink-account-url from the JSON object in step one as the Host.

    select SYSTEM$GET_PRIVATELINK_CONFIG()
    SELECT SYSTEM$AUTHORIZE_PRIVATELINK (
    '/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Network/privateEndpoints/abc12345.east-us-2.azure.snowflakecomputing.com-eus2',
      '<ACCESS_TOKEN>'
    );

    Power BI Configuration Example

    hashtag
    Specify authentication method

    When creating a data source in Power BI, specify Microsoft Account as the authentication method, if available. This setting allows you to use your enterprise SSO to connect to your compute platform.

    hashtag
    Set up DirectQuery

    After connecting to the compute platform and selecting the tables to use for your data source, choose DirectQuery to connect to the data source. This setting is required for Immuta to enforce policies.

    hashtag
    Publish data

    After you publish the datasets to the Power BI service, force users to use their personal credentials to connect to the compute platform by following the steps below.

    1. Enable SSO in the tenant admin portal under Settings -> Admin portal -> Integration settings.

    2. Find the option to manage Data source credentials under Settings -> Datasets.

    3. For most connectors you can enable OAuth2 as the authentication method to the compute platform.

    hashtag
    Resources


    System Status Bundle

    The system status tab allows administrators to export a zip file called the Immuta status bundle. This bundle includes information helpful to assess and solve issues within an Immuta tenant by providing a snapshot of Immuta, associated services, and information about the remote source backing any of the selected data sources.

    1. Click the App Settings icon.

    2. Select the System Status tab.

    3. Select the checkboxes for the information you want to export.

    4. Click Generate Status Bundle to download the file.

  • Enable the option Report viewers can only access this data source with their own Power BI identities using DirectQuery. This forces end-users to use their personal credentials.

  • Snowflake guides: https://www.snowflake.com/blog/using-sso-between-power-bi-and-snowflake and https://docs.snowflake.com/en/user-guide/oauth-powerbi

  • Databricks guide: https://learn.microsoft.com/en-us/azure/databricks/partners/bi/power-bi#access-azure-databricks-data-source-using-the-power-bi-service

  • Redshift guide: https://aws.amazon.com/blogs/big-data/integrate-amazon-redshift-native-idp-federation-with-microsoft-azure-ad-and-power-bi

    How-to Guides

    AWS PrivateLink for Starburst (Trino)

    AWS PrivateLink provides private connectivity from the Immuta SaaS platform to Starburst (Trino) Clusters hosted on AWS. It ensures that all traffic to the configured endpoints only traverses private networks.

    This feature is supported in most regions across Immuta's global segments (NA, EU, and AP); contact your Immuta account manager if you have questions about availability.

    hashtag
    Requirements

    • You have an Immuta SaaS tenant.

    • Your Starburst (Trino) Cluster is hosted on AWS.

    • You have set up an AWS PrivateLink service for your Starburst cluster endpoints.

      • When creating the service, make sure that the Require Acceptance option is checked (this ensures that no one else can connect; all connections are blocked until the Immuta service principal is added).

      • Only TCP connections over IPv4 are supported.

    • Your Starburst (Trino) cluster endpoints are secured with TLS certificates for encryption in-transit.

      • The presented certificate must have the Fully-Qualified Domain Name (FQDN) of your cluster hostname as a Subject Alternative Name (SAN). This can either be a wildcard (e.g. *.example.com) or specific to the host (e.g. starburst.example.com).
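    One way to sanity-check this requirement before opening a ticket is to inspect the SANs on the certificate your cluster presents. This is a hypothetical sketch; starburst.example.com is a placeholder hostname, and the `-ext` option requires OpenSSL 1.1.1 or later:

```shell
# Inspect the SANs presented by the cluster endpoint.
# starburst.example.com is a placeholder; substitute your cluster FQDN.
openssl s_client -connect starburst.example.com:443 \
    -servername starburst.example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -ext subjectAltName
# The output should list your FQDN or a covering wildcard,
# e.g. "DNS:*.example.com".
```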

    hashtag
    Why are private top-level domains (TLDs) not supported?

    Private TLDs are not allowed for two reasons:

    1. A private TLD cannot have a publicly-trusted certificate authority (CA), which means that Immuta SaaS systems could not verify the certificate and would refuse to send traffic to the PrivateLink endpoint.

    2. Immuta cannot prevent DNS overlap between different tenants' private domain names, so there would be no way to determine which endpoint a private hostname should route to.

    hashtag
    Configure Starburst (Trino) with AWS PrivateLink

    1. Open a support ticket with Immuta Support that includes the following information:

      • AWS region

      • AWS subnet availability zone IDs (e.g., use1-az3; these are not the account-specific identifiers like us-east-1a or eu-west-2c)

    The presented certificate must be issued from a publicly-trusted certificate authority (CA). Private CAs and/or internal top-level domains (TLDs) are not supported.
    Immuta SaaS cannot accept private CAs for private TLDs because there is no means of verifying domain ownership or preventing overlap between organizations' private DNS. For example, if Company A uses a .local TLD, Immuta cannot verify that they own it because it's impossible to have exclusive ownership over a private TLD. If Company B also uses a .local TLD, it becomes impossible to determine the canonical trust chain for either company's private CA.

    Immuta also cannot prevent DNS overlap between different tenants. If both companies use starburst-prod.local, only one endpoint can be routable from that DNS hostname.

  • VPC endpoint service ID (e.g., com.amazonaws.vpce.us-east-1.vpce-svc-0471015b106ad1d47)

  • DNS hostname

    • This is the hostname of the cluster that will be used to ingest data sources. It must match the Common Name (CN) or a Subject Alternative Name (SAN) on your cluster's TLS certificate.

  • Ports used

  • Authorize the service principal provided by your representative so that Immuta can complete the VPC endpoint configuration.

  • Configure the Starburst (Trino) integration.

  • Register your tables as Immuta data sources.


    Data Connection Private Networking

    These sections contain information about private connectivity options from Immuta to various data platforms and APIs.

    hashtag
    Data platform connectivity guides

    • AWS PrivateLink for API gateway

    • AWS PrivateLink for Redshift

    • AWS PrivateLink for PostgreSQL

    • AWS PrivateLink for S3

    • Databricks private connectivity

    • GCP BigQuery Private Service Connect

    • Snowflake private connectivity

    • Starburst (Trino) private connectivity

    • Azure Private Link for Azure Synapse dedicated SQL pools

    App Settings

    hashtag
    Use Existing Identity Access Manager

    See the Identity managers documentation for how-to guides for your IAM protocol.

    hashtag
    Set Default Permissions

    To set the default permissions granted to users when they log in to Immuta, click the Default Permissions dropdown menu, and then select permissions from this list.

    hashtag
    Link External Catalogs

    See the External Catalogs page.

    hashtag
    Add a Project Workspace

    1. Click the App Settings icon in the navigation menu.

    2. Select Add Workspace.

    3. Use the dropdown menu to select the Databricks Workspace Type.

    1. Enter the Name.

    2. Click Add Workspace.

    3. Enter the Hostname, Workspace ID, Account Name, Databricks API Token, and Storage Container.

    circle-info

    Databricks API Token Expiration

    The Databricks API Token used for project workspace access must be non-expiring. Using a token that expires risks losing access to projects that are created using that configuration.

    hashtag
    Add An Integration

    hashtag
    Integration settings

    Follow the Configure a Databricks Spark integration guide to set up the integration.

    hashtag
    Global Integration Settings

    hashtag
    Snowflake Audit Sync Schedule

    Requirements: See the requirements for Snowflake audit on the Snowflake query audit logs page.

    To configure the audit ingest frequency for Snowflake,

    1. Click the App Settings icon in the navigation menu.

    2. Navigate to the Global Integration Settings section and within that the Snowflake Audit Sync Schedule.

    3. Enter an integer into the textbox. For example, if you enter 12, the audit sync will happen once every 12 hours (twice a day).

    hashtag
    Databricks Unity Catalog Configuration

    Audit

    Requirements: See the requirements for Databricks Unity Catalog audit on the Databricks Unity Catalog query audit logs page.

    To configure the audit ingest frequency for Databricks Unity Catalog,

    1. Click the App Settings icon in the navigation menu.

    2. Navigate to the Global Integration Settings section and within that the Databricks Unity Catalog Configuration.

    3. Enter an integer into the textbox. For example, if you enter 12, the audit sync will happen once every 12 hours (twice a day).

    Additional privileges required for access

    By default, Immuta will revoke Immuta users' USE CATALOG and USE SCHEMA privileges in Unity Catalog for users that do not have access to any of the resources within that catalog/schema. This includes any USE CATALOG or USE SCHEMA privileges that were granted outside of Immuta.
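    For context, these are the Unity Catalog container privileges in question; a hypothetical pair of grants in Databricks SQL (the catalog, schema, and principal names are placeholders) looks like:

```sql
-- Hypothetical Databricks SQL; catalog, schema, and principal are placeholders.
GRANT USE CATALOG ON CATALOG analytics TO `user@example.com`;
GRANT USE SCHEMA ON SCHEMA analytics.sales TO `user@example.com`;
```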

    To disable this setting,

    1. Click the App Settings icon in the navigation menu.

    2. Navigate to Global Integration Settings > Databricks Unity Catalog Configuration.

    3. Click the Revoke additional privileges required for access checkbox to disable the setting.

    See the Databricks Unity Catalog reference guide for details about this setting.

    hashtag
    Initialize Kerberos

    To configure Immuta to protect data in a kerberized Hadoop cluster,

    1. Click the App Settings icon in the navigation menu.

    2. Upload your Kerberos Configuration File, and then you can modify the Kerberos configuration in the edit window.

    3. Upload your Keytab File.

    hashtag
    Generate System API Key

    1. Click the App Settings icon in the navigation menu.

    2. Click the Generate Key button.

    3. Save this API key in a secure location.

    hashtag
    Audit Settings

    hashtag
    Enable Exclude Query Text

    By default, query text is included in query audit events from Snowflake, Databricks, and Starburst (Trino).

    When query text is excluded from audit events, Immuta will retain query event metadata such as the columns and tables accessed. However, the query text used to make the query will not be included in the event. This setting is a global control for all configured integrations.

    To exclude query text from audit events,

    1. Click the App Settings icon in the navigation menu.

    2. Scroll to the Audit section.

    3. Check the box to Exclude query text from audit events.

    hashtag
    Configure Governor and Admin Settings

    These options allow you to restrict the power individual users with the GOVERNANCE and USER_ADMIN permissions have in Immuta. Click the checkboxes to enable or disable these options.

    hashtag
    Create Custom Permissions

    You can create custom permissions that can then be assigned to users and leveraged when building subscription policies. Note: You cannot configure actions users can take within the console when creating a custom permission, nor can the actions associated with existing permissions in Immuta be altered.

    To add a custom permission, click the Add Permission button, and then name the permission in the Enter Permission field.

    hashtag
    Create Custom Data Source Access Requests

    To create a custom questionnaire that all users must complete when requesting access to a data source, fill in the following fields:

    1. Click the App Settings icon in the navigation menu.

    2. Opt for the questionnaire to be required.

    3. Key: Any unique value that identifies the question.

    hashtag
    Create Custom Login Message

    To create a custom message for the login page of Immuta,

    1. Click the App Settings icon in the navigation menu.

    2. Enter text in the Enter Login Message box. Note: The message can be formatted in markdown.

    3. Opt to adjust the Message Text Color and Message Background Color by clicking in these dropdown boxes.

    hashtag
    Prevent Automatic Table Statistics

    circle-exclamation

    Without fingerprints, some policies will be unavailable

    These policies will be unavailable until a data owner manually generates a fingerprint for a Snowflake data source:

    • Masking with format preserving masking

    To disable the automatic collection of statistics with a particular tag,

    1. Click the App Settings icon in the navigation menu.

    2. Use the Select Tags dropdown to select the tag(s).

    3. Click Save.

    hashtag
    Randomized response

    circle-info

    Support limitation: This policy is only supported in Snowflake integrations.

    When a randomized response policy is applied to a data source, the columns targeted by the policy are queried under a fingerprinting process. To enforce the policy, Immuta generates and stores predicates and a list of allowed replacement values that may contain data that is subject to regulatory constraints (such as GDPR or HIPAA) in Immuta's metadata database.

    The location of the metadata database depends on your deployment:

    • Self-managed Immuta deployment: The metadata database is located in the server where you have your external metadata database deployed.

    • SaaS Immuta deployment: The metadata database is located in the AWS global segment you have chosen to deploy Immuta.

    To ensure this process does not violate your organization's data localization regulations, you need to first activate this masking policy type before you can use it in your Immuta tenant.

    1. Click the App Settings icon in the navigation menu.

    2. Navigate to the Other Settings section and scroll to the Randomized Response feature.

    3. Select the Allow users to create masking policies using Randomized Response checkbox to enable use of these policies for your organization.

    hashtag
    Advanced Settings

    hashtag
    Preview Features

    If you enable any Preview features, provide feedback on how you would like these features to evolve.

    hashtag
    Complex Data Types

    1. Click the App Settings icon in the navigation menu.

    2. Navigate to the Advanced Settings section, and scroll to the Preview Features.

    3. Check the Allow Complex Data Types checkbox.

    Before creating a workspace, the cluster must send its configuration to Immuta; to do this, run a simple query on the cluster (e.g., SHOW TABLES). Otherwise, an error message will occur when users attempt to create a workspace.


  • Use the dropdown menu to select the Schema and refer to the corresponding tab below.

  • Enter the Workspace Base Directory.

  • Click Test Workspace Directory.

  • Once the credentials are successfully tested, click Save.

    1. Enter the Name.

    2. Click Add Workspace.

    3. Enter the Hostname, Workspace ID, Account Name, and Databricks API Token.

    4. Use the dropdown menu to select the Google Cloud Region.

    5. Enter the GCS Bucket.

    6. Opt to enter the GCS Object Prefix.

    7. Click Test Workspace Directory.

    8. Once the credentials are successfully tested, click Save.

    Click Save.

    Enter the principal Immuta will use to authenticate with your KDC in the Username field. Note: This must match a principal in the Keytab file.

  • Adjust how often (in milliseconds) Immuta needs to re-authenticate with the KDC in the Ticket Refresh Interval field.

  • Click Test Kerberos Initialization.

  • Click Save.
    Header: The text that will display on reports.
  • Label: The text that will display in the questionnaire for the user. They will be prompted to type the answer in a text box.

  • Masking using randomized response

    Click Save and confirm your changes.

    Click Save.

    AWS PrivateLink for Snowflake

    AWS PrivateLink provides private connectivity from the Immuta SaaS platform to Snowflake accounts hosted on AWS. It ensures that all traffic to the configured endpoints only traverses private networks.

    This feature is supported in most regions across Immuta's global segments (NA, EU, and AP); contact your Immuta representative if you have questions about availability.

    hashtag
    Requirements

    • You have an Immuta SaaS tenant.

    • Your Snowflake account is hosted on AWS.

    • Your Snowflake account is on the Business Critical Edition.

    • You have the ACCOUNTADMIN role on your Snowflake account to configure the Private Link connection.

    • You have enabled AWS PrivateLink for Snowflake.

    hashtag
    Using Snowflake network policies with AWS PrivateLink

    Snowflake network policies allow you to limit access to your Snowflake service endpoints. Network rules can be used with those network policies to define the specific IP CIDR blocks or AWS VPC endpoints that are allowed. Immuta supports both, but we highly recommend that you configure your network rules to reference our VPC endpoints rather than our CIDR block.

    hashtag
    VPC endpoint network rule

    With a network rule type of AWSVPCEID, you can use the following table of Immuta's VPC endpoints by AWS region to configure access from Immuta SaaS to your Snowflake service:

    AWS region
    VPC endpoint ID

    hashtag
    IPv4 network rule

    With a network rule type of IPV4, you must configure an IP address block of 10.0.0.0/8.

    This size of block is required because traffic could come from anywhere in Immuta's network. Immuta has globally distributed compute and does not assign static IP addresses to any workloads. This is why you should use VPC endpoint network rules instead.
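    As a sketch of the recommended VPC-endpoint approach, the rule and policy names below are hypothetical; the endpoint ID shown is Immuta's us-east-1 endpoint from the table in this section, so substitute the one for your region:

```sql
-- Hypothetical names; the VPC endpoint ID is Immuta's us-east-1 endpoint.
CREATE NETWORK RULE immuta_vpce_rule
  TYPE = AWSVPCEID
  MODE = INGRESS
  VALUE_LIST = ('vpce-03b3bf4334aa34d88');

CREATE NETWORK POLICY immuta_network_policy
  ALLOWED_NETWORK_RULE_LIST = ('immuta_vpce_rule');
```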

    hashtag
    Configure Snowflake with AWS PrivateLink

    1. In your Snowflake environment, run the following SQL query, which will return a JSON object with the connection information you will need to include in your support ticket:

    2. Copy the returned JSON object into a support ticket with Immuta Support to request that the feature be enabled on your Immuta SaaS tenant.

    3. Configure the Snowflake integration using the privatelink-account-url from the JSON object in step one as the Host.

    | AWS region | VPC endpoint ID |
    | --- | --- |
    | ap-northeast-1 Asia Pacific (Tokyo) | vpce-0c738d241aa0bfde7 |
    | ap-northeast-2 Asia Pacific (Seoul) | vpce-00daddfa7477666eb |
    | ap-south-1 Asia Pacific (Mumbai) | vpce-08a6d075ddd92df58 |
    | ap-southeast-1 Asia Pacific (Singapore) | vpce-030933ffc228d94ac |
    | ap-southeast-2 Asia Pacific (Sydney) | vpce-0803dc2285d0d695f |
    | ca-central-1 Canada (Central) | vpce-0ebff3192617126c9 |
    | eu-central-1 Europe (Frankfurt) | vpce-07f633ac50bc430c2 |
    | eu-north-1 Europe (Stockholm) | vpce-05c586fedca0a4112 |
    | eu-west-1 Europe (Ireland) | vpce-0ac01be5c06a919b0 |
    | eu-west-2 Europe (London) | vpce-0dd3c340c3dd64a5b |
    | us-east-1 US East (Virginia) | vpce-03b3bf4334aa34d88 |
    | us-east-2 US East (Ohio) | vpce-04fdafe0ed07caace |
    | us-west-2 US West (Oregon) | vpce-06624165eaa569250 |

    select SYSTEM$GET_PRIVATELINK_CONFIG()

    AWS PrivateLink for S3

    The Immuta SaaS platform uses AWS PrivateLink for S3 to provide private connectivity for all S3 services in AWS. Immuta is utilizing S3 Gateway VPC endpoints for private connectivity from all SaaS networks, so connections to S3 connect over PrivateLink by default without any extra configuration. This feature is supported in all regions across Immuta's global segments (NA, EU, and AP).

    circle-exclamation

    If you have bucket policies limiting access to specific VPCs, or if you have any questions about availability, contact your Immuta representative for the configuration details (e.g., VPC ID) needed to configure your bucket policy.
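    For illustration, a bucket policy that restricts access to a single VPC endpoint typically uses the aws:SourceVpce condition key in a deny-unless pattern. Everything below is a hypothetical sketch: the bucket name and endpoint ID are placeholders, and the Immuta-side identifiers would come from your representative.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnlessFromApprovedVpce",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ],
      "Condition": {
        "StringNotEquals": { "aws:SourceVpce": "vpce-EXAMPLE0123456789" }
      }
    }
  ]
}
```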

    Azure Private Link for Starburst (Trino)

    Azure Private Link provides private connectivity from the Immuta SaaS platform, hosted on AWS, to Starburst (Trino) clusters on Azure. It ensures that all traffic to the configured endpoints only traverses private networks over the Immuta Private Cloud Exchange.

    Support for Azure Private Link is available in all Azure regions.

    hashtag
    Prerequisites

    • You have an Immuta SaaS tenant.

    • Your Starburst (Trino) cluster is hosted on Azure.

    • You have set up an Azure Private Link Service for your Starburst cluster.

      • The Private Link Service's Access Security setting should be set to Restricted by Subscription.

    hashtag
    Configure Starburst (Trino) with Azure Private Link

    1. Open a support ticket with Immuta Support that includes the following information:

      • Azure region

      • Azure Private Link service resource ID or alias

      • DNS hostname

  • Your Immuta representative will provide you with the Immuta subscription ID that needs to be authorized to consume the service.

  • Once the Immuta Azure subscription is authorized, inform your representative so that Immuta can complete Private Link endpoint configuration.

  • Your representative will inform you when the two Azure Private Link connections have been made available. Accept them in the Private Link Center of your Azure Portal.

  • Configure the Starburst (Trino) integration.

  • Register your tables as Immuta data sources.


    Tableau Configuration Example

    hashtag
    Specify authentication method

    When creating the data source in Tableau, specify the authentication method as Sign in using OAuth. This setting will allow you to use your enterprise SSO to connect to your compute platform.

    hashtag
    Set up a live connection

    After connecting to the compute platform, select the tables you will use for your data source. Then, select Live connection. This setting is required for Immuta to enforce policies.

    hashtag
    Credential prompt

    To share your dashboard to your organization, publish your data sources. During this process, set the authentication method to Prompt user. This option ensures that dashboard viewers will see the data according to their personal policies.

    hashtag
    Resources

    • Snowflake guide: https://help.tableau.com/current/pro/desktop/en-us/snowflake_oauth.htm

    • Databricks guides:

      https://help.tableau.com/current/pro/desktop/en-us/examples_databricks.htm

      https://docs.databricks.com/partners/bi/tableau.html

      https://learn.microsoft.com/en-us/azure/databricks/partners/bi/tableau

      https://docs.databricks.com/integrations/configure-oauth-tableau.html

    • Redshift guide: https://help.tableau.com/current/pro/desktop/en-us/examples_amazonredshift.htm

    Snowflake Private Connectivity

    This section contains information about private connectivity options for Snowflake integrations.

    hashtag
    Overview

    The Immuta SaaS platform supports private connectivity to Snowflake accounts hosted in both AWS and Azure. This allows organizations to meet security and compliance controls by ensuring that traffic to data sources from Immuta SaaS only traverses private networks, never the public internet.

    • Support for AWS PrivateLink is available in most regions across Immuta's global segments (NA, EU, and AP); contact your Immuta representative if you have questions about availability.

    • Support for Azure Private Link is available in all Snowflake-supported Azure regions.

    hashtag
    Configuration guides

    • AWS PrivateLink

    • Azure Private Link

    GCP Private Service Connect for Databricks

    circle-info

    Private preview: This feature is available to select accounts. Contact your Immuta representative for details.

    GCP Private Service Connect provides private connectivity from the Immuta SaaS platform to Databricks accounts hosted on Google Cloud Platform (GCP). It ensures that all traffic to the configured endpoints only traverses private networks over the Immuta private cloud exchange. This front-end Private Service Connect connection allows users to connect to the Databricks web application, REST API, and Databricks Connect API over a VPC endpoint.

    hashtag
    Requirements

    Ensure that your accounts meet the following requirements:

    • You have an Immuta SaaS tenant.

    • Your Databricks workspace is hosted on Google Cloud Platform (GCP) and has been created with private access configured.

    • You have your Databricks Enterprise account ID.

    This process requires configuring a service account in GCP and administrative access to the Databricks account in GCP. For details about Databricks authentication with Google Identity, see the Databricks documentation on Google ID authentication.

    hashtag
    Configure Databricks with GCP Private Service Connect

    Follow these steps to establish private connectivity between Immuta and your Databricks environment:

    1. Create a service account in GCP. Ensure that a principal (either a user or a different service account) has the roles/iam.serviceAccountTokenCreator role attached for this newly created service account. For more information, refer to the GCP documentation on service account impersonation.

    2. Add the newly created service account email to your Databricks account with admin rights so that it can add network endpoints. For guidance, see the Databricks documentation on adding user accounts.

    3. Open an Immuta support ticketarrow-up-right and provide the following information:

       • Service account email

       • Databricks account ID (see Locate your Databricks account IDarrow-up-right)

       • GCP region(s) and workspace URLs in each region

    4. Immuta will create the Private Service Connect (PSC) endpoints in the regions that contain your workspaces and attach a role to the provided service account that allows it to view the created VPC endpoints. Immuta will then provide you with the following details:

       • VPC endpoint ID and region

       • Immuta project ID

    5. Run the script below (or manually make the necessary API calls to Databricksarrow-up-right) to connect the Immuta-created PSC endpoints to your Databricks account, using the information provided by the Immuta support team.

       To run the script, you will need gcloud, curl, and jq installed, and you must be logged in with a principal that can impersonate the service account that was provided to Immuta.

    6. Validate that any private access settings attached to workspaces that need connectivity include the newly created endpoints (either by accepting them at the account level or by adding the specific endpoints to the private access settingarrow-up-right).

    After these steps, you should be able to connect your Immuta tenant to Databricks workspaces in GCP under the connected account.
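    Before running the script, you can confirm the impersonation prerequisite with a quick preflight check (the service-account email below is a hypothetical placeholder):

```shell
# Log in, then verify that the current principal can mint tokens for the
# service account that was provided to Immuta (email is a placeholder).
gcloud auth login
gcloud auth print-access-token \
    --impersonate-service-account="psc-sa@my-gcp-project.iam.gserviceaccount.com" \
    > /dev/null && echo "impersonation OK"
```

    If the second command fails, verify that your principal holds roles/iam.serviceAccountTokenCreator on the service account.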

    #!/bin/bash
    
    # Script to connect a GCP PSC endpoint to a Databricks account by
    # impersonating a service account and retrieving tokens.
    #
    # Usage:
    #   ./accept-databricks-psc.sh -s SERVICE_ACCOUNT_EMAIL \
    #       -d DATABRICKS_ACCOUNT_ID \
    #       -e ENDPOINT_NAME \
    #       -r ENDPOINT_REGION \
    #       -p PROJECT_ID \
    #       [OPTIONS]
    #
    # Example:
    #   ./accept-databricks-psc.sh -s psc-sa@my-gcp-project.iam.gserviceaccount.com \
    #       -d 12345678-90ab-cdef-1234-567890abcdef \
    #       -e dbx-company-project-region \
    #       -r us-east4 \
    #       -p my-gcp-project \
    #       -j # (with JSON logging)
    
    set -euo pipefail
    
    # Default values
    LOG_FORMAT="text"
    
    # Function to display usage
    usage() {
        cat << EOF
    Usage: $0 -s SERVICE_ACCOUNT_EMAIL [OPTIONS]
    
    Verify gcloud setup and service account impersonation, retrieve access and ID tokens, and attach the Immuta-created PSC endpoint to your Databricks account.
    
    Required:
        -s SERVICE_ACCOUNT        Service account email to impersonate
        -d DATABRICKS_ACCOUNT_ID  Databricks account ID (UUID format)
        -e ENDPOINT_NAME          Name of the VPC endpoint to attach (e.g., "dbx-company-project-region")
        -r ENDPOINT_REGION        Region where the VPC endpoint is located (e.g., "us-east4")
        -p PROJECT_ID             GCP project ID where the VPC endpoint is located (Immuta's SaaS project ID)
    Options:
        -h                              Show this help message
        -j                              Enable JSON logging format
        -D DATABRICKS_ENDPOINT_NAME     Databricks Endpoint Name (defaults to "immuta-<region>-psc-endpoint")
    
    Example:
        # Basic usage
        $0 -s psc-sa@my-gcp-project.iam.gserviceaccount.com \
           -d 12345678-90ab-cdef-1234-567890abcdef \
           -e dbx-company-project-region \
           -r us-east4 \
           -p immuta-gcp-project
    
    EOF
    }
    
    DATABRICKS_ENDPOINT_NAME=""
    
    # Parse command line arguments
    while getopts "s:d:e:r:p:D:jh" opt; do
      case ${opt} in
        s)
          SERVICE_ACCOUNT="${OPTARG}"
          ;;
        d)
          DATABRICKS_ACCOUNT_ID="${OPTARG}"
          ;;
        e)
          ENDPOINT_NAME="${OPTARG}"
          ;;
        r)
          ENDPOINT_REGION="${OPTARG}"
          ;;
        p)
          PROJECT_ID="${OPTARG}"
          ;;
        D)
          DATABRICKS_ENDPOINT_NAME="${OPTARG}"
          ;;
        j)
          LOG_FORMAT="json"
          ;;
        h)
          usage
          exit 0
          ;;
        \?)
          echo "Error: Invalid option -${OPTARG}" >&2
          echo "Run '$0 -h' for usage information"
          exit 1
          ;;
        :)
          echo "Error: Option -${OPTARG} requires an argument" >&2
          exit 1
          ;;
      esac
    done
    
    # Check if service account was provided
    if [[ -z "${SERVICE_ACCOUNT:-}" ]]; then
      echo "Error: Service account email is required"
      usage
      exit 1
    fi
    
    # Check if Databricks account ID was provided
    if [[ -z "${DATABRICKS_ACCOUNT_ID:-}" ]]; then
      echo "Error: Databricks account ID is required"
      usage
      exit 1
    fi
    
    # Check if VPC endpoint name was provided
    if [[ -z "${ENDPOINT_NAME:-}" ]]; then
      echo "Error: VPC endpoint name is required"
      usage
      exit 1
    fi
    
    # Check if VPC endpoint region was provided
    if [[ -z "${ENDPOINT_REGION:-}" ]]; then
      echo "Error: VPC endpoint region is required"
      usage
      exit 1
    fi
    
    # Check if GCP project ID was provided
    if [[ -z "${PROJECT_ID:-}" ]]; then
      echo "Error: GCP project ID is required"
      usage
      exit 1
    fi
    
    # Check if Databricks endpoint name was provided, if not set default name
    if [[ -z "${DATABRICKS_ENDPOINT_NAME:-}" ]]; then
      DATABRICKS_ENDPOINT_NAME="immuta-${ENDPOINT_REGION}-psc-endpoint"
    fi
    
    # Function to log with simplified or JSON format
    log() {
      local level="${1:-INFO}"
      # Shift the first argument (log level) so that $* does not include the level
      shift || true
      local message="$*"
      local timestamp
      # Use portable date format (macOS date doesn't support %N)
      if date --version &>/dev/null; then
        # GNU date
        timestamp=$(date -u +"%Y-%m-%dT%H:%M:%S.%3NZ")
      else
        # BSD date (macOS)
        timestamp=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
      fi
    
      if [[ "${LOG_FORMAT}" == "json" ]]; then
        # JSON format: timestamp, level, and message (properly escaped)
        jq -nc --arg ts "${timestamp}" --arg lvl "${level}" --arg msg "${message}" \
          '{timestamp: $ts, level: $lvl, message: $msg}'
      else
        # Color codes for different log levels
        local color_reset='\033[0m'
        local color_level=""
    
        case "${level}" in
          INFO)
            color_level='\033[0;36m'  # Cyan
            ;;
          WARN)
            color_level='\033[0;33m'  # Yellow
            ;;
          ERROR)
            color_level='\033[0;31m'  # Red
            ;;
          DEBUG)
            color_level='\033[0;90m'  # Gray
            ;;
          *)
            color_level='\033[0m'     # No color
            ;;
        esac
    
        # Simple format with color: [timestamp] [level] message
        printf "[%s] [${color_level}%s${color_reset}] %s\n" "${timestamp}" "${level}" "${message}"
      fi
    }
    
    if [[ "${LOG_FORMAT}" == "json" ]]; then
      log INFO "Starting Immuta Databricks GCP PSC Acceptance Script"
    else
      log INFO "================================================"
      log INFO "Immuta Databricks GCP PSC Acceptance Script"
      log INFO "================================================"
    fi
    log INFO "Service Account: ${SERVICE_ACCOUNT}"
    log INFO "Databricks Account ID: ${DATABRICKS_ACCOUNT_ID}"
    log INFO "VPC Endpoint Name: ${ENDPOINT_NAME}"
    log INFO "VPC Endpoint Region: ${ENDPOINT_REGION}"
    log INFO "Databricks Endpoint Name: ${DATABRICKS_ENDPOINT_NAME}"
    
    # Step 1: Check if gcloud, curl, and jq are installed
    log INFO "[1/6] Checking if dependencies are installed..."
    if ! command -v gcloud &> /dev/null; then
      log ERROR "❌ ERROR: gcloud CLI is not installed"
      log ERROR "   Please install it from: https://cloud.google.com/sdk/docs/install"
      exit 1
    fi
    if ! command -v curl &> /dev/null; then
      log ERROR "❌ ERROR: curl is not installed"
      log ERROR "   Please install it using your package manager (e.g., apt, yum, brew)"
      exit 1
    fi
    if ! command -v jq &> /dev/null; then
      log ERROR "❌ ERROR: jq is not installed"
      log ERROR "   Please install it using your package manager (e.g., apt, yum, brew)"
      exit 1
    fi
    GCLOUD_VERSION=$(gcloud version --format="value(core)" 2>/dev/null || echo "unknown")
    log INFO "βœ… gcloud is installed (version: ${GCLOUD_VERSION})"
    if [[ $LOG_FORMAT != "json" ]]; then log INFO ""; fi
    
    # Step 2: Check if user is logged in
    log INFO "[2/6] Checking if you are logged in to gcloud..."
    CURRENT_ACCOUNT=$(gcloud config get-value account 2>/dev/null || echo "")
    if [[ -z "${CURRENT_ACCOUNT}" ]]; then
      log ERROR "❌ ERROR: Not logged in to gcloud"
      log ERROR "   Run: gcloud auth login"
      exit 1
    fi
    log INFO "βœ… Logged in as: ${CURRENT_ACCOUNT}"
    if [[ $LOG_FORMAT != "json" ]]; then log INFO ""; fi
    
    # Step 3: Verify impersonation by getting an access token
    log INFO "[3/6] Verifying service account impersonation (access token)..."
    set +e  # Temporarily disable exit on error
    GCLOUD_OUTPUT=$(gcloud auth print-access-token \
      --impersonate-service-account="${SERVICE_ACCOUNT}" \
      --verbosity=error 2>&1)
    GCLOUD_EXIT=$?
    set -e  # Re-enable exit on error
    
    if [[ ${GCLOUD_EXIT} -ne 0 ]]; then
      log ERROR "Failed to impersonate service account for access token"
      log ERROR "${GCLOUD_OUTPUT}"
      exit 1
    fi
    ACCESS_TOKEN="${GCLOUD_OUTPUT}"
    log INFO "βœ… Successfully obtained access token"
    if [[ $LOG_FORMAT != "json" ]]; then log INFO ""; fi
    
    # Step 4: Get an ID token
    log INFO "[4/6] Getting ID token for service account..."
    set +e  # Temporarily disable exit on error
    GCLOUD_OUTPUT=$(gcloud auth print-identity-token \
      --impersonate-service-account="${SERVICE_ACCOUNT}" \
      --include-email \
      --verbosity=error \
      --audiences="https://accounts.gcp.databricks.com" 2>&1)
    GCLOUD_EXIT=$?
    set -e  # Re-enable exit on error
    
    if [[ ${GCLOUD_EXIT} -ne 0 ]]; then
      log ERROR "Failed to get ID token"
      while IFS= read -r line; do
        [[ -n "${line}" ]] && log ERROR "${line}"
      done <<< "${GCLOUD_OUTPUT}"
      exit 1
    fi
    ID_TOKEN="${GCLOUD_OUTPUT}"
    log INFO "βœ… Successfully obtained ID token"
    if [[ $LOG_FORMAT != "json" ]]; then log INFO ""; fi
    
    # Step 5: List existing VPC endpoints
    log INFO "[5/6] Listing existing VPC endpoints..."
    if ! RESULT=$(curl -s -XGET \
      --header "Authorization: Bearer ${ID_TOKEN}" \
      "https://accounts.gcp.databricks.com/api/2.0/accounts/${DATABRICKS_ACCOUNT_ID}/vpc-endpoints"); then
      log ERROR "❌ ERROR: Failed to call Databricks API to list VPC endpoints"
      log ERROR "   ${RESULT}"
      exit 1
    fi
    echo "${RESULT}" | jq -er '.[] | select(.gcp_vpc_endpoint_info.psc_endpoint_name == "'"${ENDPOINT_NAME}"'")' > /dev/null && {
      log ERROR "❌ Existing VPC endpoint found with name '${ENDPOINT_NAME}'"
      log ERROR "    If this is unexpected, please delete the existing endpoint in the Databricks console and try again"
      exit 1
    }
    log INFO "βœ… No existing VPC endpoint with name '${ENDPOINT_NAME}' found"
    if [[ $LOG_FORMAT != "json" ]]; then log INFO ""; fi
    
    log INFO "[6/6] Creating VPC endpoint attachment..."
    REQUEST=$(cat <<EOF
    {
      "gcp_vpc_endpoint_info": {
        "endpoint_region": "${ENDPOINT_REGION}",
        "project_id": "${PROJECT_ID}",
        "psc_endpoint_name": "${ENDPOINT_NAME}"
      },
      "vpc_endpoint_name": "${DATABRICKS_ENDPOINT_NAME}"
    }
    EOF
    )
    RESULT=$(curl -s -XPOST \
      -d "${REQUEST}" \
      --header "Content-Type: application/json" \
      --header "Authorization: Bearer ${ID_TOKEN}" \
      --header "X-Databricks-GCP-SA-Access-Token: ${ACCESS_TOKEN}" \
      "https://accounts.gcp.databricks.com/api/2.0/accounts/${DATABRICKS_ACCOUNT_ID}/vpc-endpoints")
    if [[ "$(echo "${RESULT}" | jq -r '.error_code // empty')" != "" ]]; then
      log ERROR "❌ ERROR: Failed to create VPC endpoint attachment"
      log ERROR "   ${RESULT}"
      exit 1
    fi
    
    
    # Output based on mode
    if [[ "${LOG_FORMAT}" == "json" ]]; then
      log INFO "βœ… VPC Endpoint Attachment Created"
    else
      # Display token information
      log INFO "================================================"
      log INFO "βœ… SUCCESS - VPC Endpoint Attachment Created"
      log INFO "================================================"
      log INFO "The Endpoint has been attached to your account in Databricks."
    fi
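    As an aside, the duplicate-endpoint check in step 5 of the script hinges on jq's -e flag, which sets the exit status based on whether the filter produced a result. A minimal illustration against a mocked API response (the endpoint name is invented):

```shell
# Mocked vpc-endpoints list response (endpoint name is invented).
RESPONSE='[{"gcp_vpc_endpoint_info":{"psc_endpoint_name":"dbx-acme-proj-us-east4"}}]'

# jq -e exits 0 when the select filter matches, non-zero when it does not.
if echo "${RESPONSE}" | jq -e \
    '.[] | select(.gcp_vpc_endpoint_info.psc_endpoint_name == "dbx-acme-proj-us-east4")' \
    > /dev/null; then
  echo "endpoint already registered"
else
  echo "endpoint not found"
fi
```

    The script treats a match as an error and asks you to delete the existing endpoint in the Databricks console before retrying.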

    BI Tools

    Starburst (Trino) Private Connectivity

    This section contains information about private connectivity options for Starburst (Trino) integrations.

    hashtag
    Overview

    The Immuta SaaS platform supports private connectivity to Starburst (Trino) clusters hosted in both AWS and Azure. This allows organizations to meet security and compliance controls by ensuring that traffic to data sources from Immuta SaaS only traverses private networks, never the public internet.

    • Support for AWS PrivateLink is available in most regions across Immuta's global segments (NA, EU, and AP); contact your Immuta representative if you have questions about availability.

    • Support for Azure Private Link is available in all Azure regionsarrow-up-right.

    hashtag
    Configuration guides

    AWS PrivateLink
    Azure Private Link