Google BigQuery Integration

Private preview: This integration is available to select accounts. Contact your Immuta representative for details.

The Google BigQuery integration allows users to query policy-protected data directly in BigQuery as secure views within an Immuta-created dataset. Immuta controls who can see what within the views, allowing data governors to create complex ABAC policies and data users to query the right data within the BigQuery console.

Configuration

Google BigQuery is configured through the Immuta console and a script provided by Immuta. While you can complete some steps within the BigQuery console, it is easiest to install using gcloud and the Immuta script. At a high level, configuration involves two steps:

  1. Create a custom role and assign that role to a custom user to use as the Immuta system account.

  2. Enable the integration in the Immuta console.

Protect your data

Once Google BigQuery has been configured, BigQuery admins can start creating subscription and data policies to meet compliance requirements, and users can start querying policy-protected data directly in BigQuery.

  1. Create a global subscription policy or a supported data policy.

  2. Revoke user access to the original datasets and grant users access to the Immuta-created datasets in BigQuery.

  3. Users query data from the Immuta-created datasets directly in BigQuery.

FAQs

  1. What permissions will Immuta have in my BigQuery environment?

    • You can find a list of the permissions the custom Immuta role has in the permissions list later in this guide.

  2. What integration features will Immuta support for BigQuery?

    • For private preview, Immuta supports a basic version of the BigQuery integration where Immuta can enforce specific policies on data in a single BigQuery project. At this time, workspaces, tag ingestion, user impersonation, query audit, and multiple integrations are not supported.

Google BigQuery integration conceptual overview

In this policy push integration, Immuta creates views that contain all policy logic. Each view has a 1-to-1 relationship with the original table. Access controls are applied in the view, allowing users to leverage Immuta’s powerful set of attribute-based policies and query data directly in BigQuery.

BigQuery is organized by projects (which can be thought of as databases), datasets (which can be compared to schemas), tables, and views. When you enable the integration, an Immuta dataset is created in BigQuery that contains the Immuta-required user entitlements information. These objects within the Immuta dataset are intended to only be used and altered by the Immuta application.

After data sources are registered, Immuta uses the custom user and role, created before the integration is enabled, to push the Immuta data sources as views into a dataset that mirrors the original dataset. Immuta manages grants on the created views to ensure only users subscribed to the Immuta data source will see the data.

Secure views

The Immuta integration uses a mirrored dataset approach. That is, if the source dataset is named mydataset, Immuta will create a dataset named mydataset_secure, assuming that _secure is the specified Immuta dataset suffix. This mirrored dataset is an authorized dataset, allowing it to access the data of the original dataset. It contains the Immuta-managed views, which have identical names to the original tables they are based on.
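
As a quick sketch of the resulting layout (the project and dataset names here are hypothetical, with _secure as the configured suffix), you can compare the two datasets side by side with the bq CLI:

    bq ls my-project:mydataset          # original tables
    bq ls my-project:mydataset_secure   # Immuta-managed views with identical names
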
Managing access

Following the principle of least privilege, Immuta does not have permission to manage Google Cloud Platform users, specifically in granting or denying access to a project and its datasets. This means that data governors should limit user access to the original datasets to ensure data users access the data through the Immuta-created views rather than the backing tables. The only account that needs access to the backing tables is the one whose credentials were used to register the tables in Immuta.

Additionally, a data governor must grant users access to the mirrored datasets that Immuta will create and populate with views. Immuta and BigQuery’s best practice recommendation is to grant access via groups in Google Cloud Platform. Because users still must be registered in Immuta and subscribed to an Immuta data source to be able to query Immuta views, all Immuta users can be granted access to the mirrored datasets that Immuta creates.
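
As a sketch of the group-based approach (the group analysts@example.com and project my-project are hypothetical), you might let a group run queries while scoping read access appropriately:

    # Allow the group to run BigQuery jobs (queries) in the project
    gcloud projects add-iam-policy-binding my-project \
        --member="group:analysts@example.com" \
        --role="roles/bigquery.jobUser"

    # Read access is shown project-wide here only for brevity; in practice,
    # grant dataset-level access on the Immuta-created (mirrored) datasets
    # and revoke access to the original datasets.
    gcloud projects add-iam-policy-binding my-project \
        --member="group:analysts@example.com" \
        --role="roles/bigquery.dataViewer"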

Integration health status

The status of the integration is visible on the integrations tab of the Immuta application settings page. If errors occur in the integration, a banner will appear in the Immuta UI with guidance for remediating the error. The definitions for each status and the state of configured data platform integrations are available in the response schema of the integrations API; however, the UI consolidates these error statuses and provides detail in the error messages.
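
As a rough sketch of checking this programmatically (the tenant URL is hypothetical, and the endpoint path and authentication header are assumptions; confirm both in the integrations API reference and the API authentication guide):

    # List configured integrations, including their status
    curl -s \
        -H "Authorization: <your-immuta-api-key-or-token>" \
        https://your-tenant.immuta.com/integrations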

Limitations

  • This integration can only be enabled through a manual bootstrap using the Immuta API.

  • This integration can only be enabled to work in a single region.

Supported policies

This integration supports the following policy types:

  • Column masking

    • Mask using hashing (SHA256())

    • Mask by making NULL

    • Mask using constant

    • Mask using a regular expression

    • Mask by date rounding

    • Mask by numeric rounding

    • Mask using custom functions

  • Row-level policies

    • Row visibility based on user attributes and/or object attributes

    • Only show rows that fall within a given time window

    • Minimize rows

    • Filter rows using custom WHERE clause

    • Always hide rows
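
For example, with a hash-masking policy on an email column, querying the secure view (names hypothetical) returns SHA-256 digests instead of raw values:

    bq query --use_legacy_sql=false \
        'SELECT email FROM `my-project.mydataset_secure.customers` LIMIT 5'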

Additional resources

See the resources below to start implementing and using the BigQuery integration:

  • Configuring the Google BigQuery integration

  • Creating BigQuery data sources

  • Building global subscription and data policies to govern data

  • Creating projects to collaborate

Configure the Google BigQuery integration

Follow this guide to connect your Google BigQuery data warehouse to Immuta.

Prerequisites

  • Immuta SaaS or Immuta v2023.1 or newer with the Google BigQuery integration (PrPr) enabled.

  • An Immuta role with SYSTEM_ADMIN permissions and an API key.

Google Cloud service account and role used by Immuta to connect to Google BigQuery

The Google BigQuery integration requires you to create a Google Cloud service account and role that will be used by Immuta to

  • create a Google BigQuery dataset that will be used to store a table of user entitlements, UDFs for policy enforcement, etc.

  • manage the table of user entitlements via updates when entitlements change in Immuta.

  • create datasets and secure views with access control policies enforced, which mirror tables inside of datasets you ingest as Immuta data sources.

You have two options to create the required Google Cloud service account and role; you will need the objects created in these steps to enable the Google BigQuery integration:

  • Run the script provided by Immuta

  • Use the Google Cloud Console

The Immuta script

The bootstrap.sh script is a shell script provided by Immuta that creates the prerequisite Google Cloud IAM objects for the integration to connect. When you run this script from your command line, it will create the following items:

  • A new Google Cloud IAM role

  • A new Google Cloud service account, which will be granted the newly-created role

  • A JSON keyfile for the newly-created service account

Google Cloud IAM roles required to run the script

To execute bootstrap.sh from your command line, you must be authenticated to the gcloud CLI utility as a user with all of the following roles:

  • roles/iam.roleAdmin

  • roles/iam.serviceAccountAdmin

  • roles/serviceusage.serviceUsageAdmin

Having these three roles is the least-privilege set of Google Cloud IAM roles required to successfully run the bootstrap.sh script from your command line. However, having either of the following Google Cloud IAM roles will also allow you to run the script successfully:

  • roles/editor

  • roles/owner
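
To verify which roles your authenticated account currently holds on the project (a sketch; the project and user are hypothetical), you can inspect the project's IAM policy:

    gcloud projects get-iam-policy my-project \
        --flatten="bindings[].members" \
        --format="table(bindings.role)" \
        --filter="bindings.members:user:you@example.com"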

Create a service account and role by running the script provided by Immuta

  1. Install the gcloud CLI, then set the account property in the core section to the account gcloud should use for authentication. (You can run gcloud auth list to see your currently available accounts):

    gcloud config set account ACCOUNT
  2. In Immuta, navigate to the App Settings page and click the Integrations tab.

  3. Click Add Integration and select Google BigQuery from the dropdown menu.

  4. Click Select Authentication Method and select Key File.

  5. Click Download Script(s).

  6. Before you run the script, update your permissions to execute it:

    chmod 755 <path to downloaded script>
  7. Run the script, where

    • PROJECT_ID is the Google Cloud Platform project to operate on.

    • ROLE_ID is the name of the custom role to create.

    • NAME is the name of the service account that will be created.

    • OUTPUT_FILE is the path where the resulting private key should be written. File system write permission will be checked on the specified path prior to the key creation.

    • undelete-role (optional) will undelete the custom role from the project. Roles that have been deleted for a long time can't be undeleted. This option can fail for the following reasons:

      • The role specified does not exist.

      • The active user does not have permission to access the given role.

    • enable-api (optional) will enable the Google BigQuery API service, provided you have been granted access to enable it.

    $ bootstrap.sh \
        --project PROJECT_ID \
        --role ROLE_ID \
        --service_account NAME \
        --keyfile OUTPUT_FILE \
        [--undelete-role] \
        [--enable-api]
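
    For example, a hypothetical invocation for a project named my-project might look like the following (all values are placeholders):

    $ bootstrap.sh \
        --project my-project \
        --role immuta_bq \
        --service_account immuta-system-account \
        --keyfile ./immuta-bq-key.json \
        --enable-api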

Create a service account and role by using Google Cloud console

Alternatively, you may use the Google Cloud Console to create the prerequisite role, service account, and private key file for the integration to connect to Google BigQuery:

  1. Create a custom role using the console with the following privileges:

    • bigquery.datasets.create

    • bigquery.datasets.delete

    • bigquery.datasets.get

    • bigquery.datasets.update

    • bigquery.jobs.create

    • bigquery.jobs.get

    • bigquery.jobs.list

    • bigquery.jobs.listAll

    • bigquery.routines.create

    • bigquery.routines.delete

    • bigquery.routines.get

    • bigquery.routines.list

    • bigquery.routines.update

    • bigquery.tables.create

    • bigquery.tables.delete

    • bigquery.tables.export

    • bigquery.tables.get

    • bigquery.tables.getData

    • bigquery.tables.list

    • bigquery.tables.setCategory

    • bigquery.tables.update

    • bigquery.tables.updateData

    • bigquery.tables.updateTag

  2. Create a service account and grant it the custom role you just created.

  3. Enable the Google BigQuery API.

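If you prefer the command line for these manual steps, a rough gcloud sketch follows (the role ID, account name, and key file path are hypothetical; the permission list is abbreviated for readability, so include every permission listed above):

    # 1. Create the custom role (permissions abbreviated; include all of the above)
    gcloud iam roles create immuta_bq --project=my-project \
        --title="Immuta BigQuery" \
        --permissions=bigquery.datasets.create,bigquery.jobs.create,bigquery.tables.get

    # 2. Create the service account and grant it the custom role
    gcloud iam service-accounts create immuta-system-account
    gcloud projects add-iam-policy-binding my-project \
        --member="serviceAccount:immuta-system-account@my-project.iam.gserviceaccount.com" \
        --role="projects/my-project/roles/immuta_bq"

    # 3. Enable the BigQuery API, then generate the private key file for Immuta
    gcloud services enable bigquery.googleapis.com
    gcloud iam service-accounts keys create ./immuta-bq-key.json \
        --iam-account=immuta-system-account@my-project.iam.gserviceaccount.com
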
Enable the Google BigQuery integration

Once the Google Cloud IAM custom role and service account are created, you can enable the Google BigQuery integration. This section illustrates how to enable the integration on the Immuta app settings page. To configure this integration via the Immuta API, see the Configure a Google BigQuery integration API guide.

  1. In Immuta, navigate to the App Settings page and click the Integrations tab.

  2. Click Add Integration and select Google BigQuery from the dropdown menu.

  3. Click Select Authentication Method and select Key File.

  4. Upload your GCP Service Account Key File. This is the private key file generated when you created the Google Cloud service account and role above. Uploading this file will auto-populate the following fields:

    • Project Id: The Google Cloud Platform project to operate on, where your Google BigQuery data warehouse is located. A new dataset will be provisioned in this Google BigQuery project to store the integration configuration.

    • Service Account: The service account you created above.

    • Immuta Role: The custom role you created above.

  5. Complete the following fields:

    • Immuta Dataset: The name of the Google BigQuery dataset to provision inside of the project. Important: if you are using multiple environments in the same Google BigQuery project, this dataset name must be unique across environments.

    • Dataset Suffix: The suffix that will be appended to the name of each dataset created to store secure views, one per dataset containing a table you register as an Immuta data source. Important: if you are using multiple environments in the same Google BigQuery project, this suffix must be unique across environments.

    • GCP Location: The dataset's location. After a dataset is created, the location can't be changed. Note that if you choose EU for the dataset location, your Core BigQuery Customer Data resides in the EU.

  6. Click Test Google BigQuery Integration.

  7. Click Save.

GCP location must match dataset region

The region set for the GCP location must match the region of your datasets. Set GCP location to a general region (for example, US) to include child regions.
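
To confirm an existing dataset's location before you configure the integration (a sketch; project and dataset names are hypothetical):

    bq show --format=prettyjson my-project:mydataset | grep '"location"'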

Disable the Google BigQuery integration

You can disable the Google BigQuery integration automatically or manually.

Automatically disable integration

  1. Click the App Settings icon, and then click the Integrations tab.

  2. Select the Google BigQuery integration you would like to disable, and select the Disable Integration checkbox.

  3. Click Save.

Manually disable integration

The privileges required to run the cleanup script are the same as the Google Cloud IAM roles required to run the bootstrap.sh script.

  1. Click the App Settings icon, and then click the Integrations tab.

  2. Select the Google BigQuery integration you would like to disable, and click Download Scripts.

  3. Click Save. Wait until Immuta has finished saving your configuration changes before proceeding.

  4. Before you run the script, update your permissions to execute it:

    chmod 755 <path to downloaded script>
  5. Run the cleanup script.

Next steps

  1. Register your BigQuery tables and views in Immuta as data sources.

  2. Recommended: Organize your data sources into domains and assign domain permissions to accountable teams.

  3. Build subscription and data policies to govern data.

  4. Create projects to securely collaborate on analytical workloads.
