Create an Azure Blob Storage Data Source

Azure Blob Storage data source API reference guide


The azureblob endpoint allows you to connect and manage Azure Blob Storage data sources in Immuta.

Additional fields may be included in some responses you receive; however, these attributes are for internal purposes and are therefore undocumented.

Azure Blob workflow

  1. Create a data source.

  2. Get information about a data source.

  3. Manage data sources.
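
Taken together, the workflow might look like the following end-to-end shell sketch. It chains the endpoints documented below; the tenant URL, bearer token, and payload file are placeholders, and jq is assumed to be available for parsing JSON responses.

# Create the data source and capture the handler ID from the response.
HANDLER_ID=$(curl -s \
    --request POST \
    --header "Content-Type: application/json" \
    --header "Authorization: Bearer dea464c07bd07300095caa8" \
    --data @example-payload.json \
    https://your-immuta-url.com/azureblob/handler | jq '.id')

# Retrieve the handler metadata for the new data source.
curl -s \
    --request GET \
    --header "Authorization: Bearer dea464c07bd07300095caa8" \
    https://your-immuta-url.com/azureblob/handler/$HANDLER_ID

# Re-crawl the data source to refresh its metadata.
curl -s \
    --request PUT \
    --header "Authorization: Bearer dea464c07bd07300095caa8" \
    https://your-immuta-url.com/azureblob/handler/$HANDLER_ID/crawl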

Create a data source

POST /azureblob/handler

Save the provided connection for an Azure Blob Storage data source.

Payload parameters

Attribute
Description
Required

private

boolean When false, the data source will be publicly available in the Immuta UI.

Yes

blobHandler

array[object] A list of full URLs providing the locations of all blob store handlers to use with this data source.

Yes

blobHandlerType

string Describes the type of underlying blob handler that will be used with this data source (e.g., Azure Blob Storage).

Yes

recordFormat

string The data format of blobs in the data source, such as json, xml, html, or jpeg.

Yes

type

string The type of data source: ingested (metadata will exist in Immuta) or queryable (metadata is dynamically queried).

Yes

name

string The name of the data source. It must be unique within the Immuta instance.

Yes

sqlTableName

string A string that represents this data source's table in Immuta.

Yes

organization

string The organization that owns the data source.

Yes

category

string The category of the data source.

No

description

string The description of the data source.

No

hasExamples

boolean When true, the data source contains examples.

No

Response parameters

Attribute
Description

id

integer The handler ID.

dataSourceId

integer The ID of the data source.

warnings

string This message describes issues with the created data source, such as the data source being unhealthy.

connectionString

string The connection string used to connect the data source to Immuta.

Request example

The following request saves the provided connection information (in example-payload.json) as a data source.

curl \
    --request POST \
    --header "Content-Type: application/json" \
    --header "Authorization: Bearer dea464c07bd07300095caa8" \
    --data @example-payload.json \
    https://your-immuta-url.com/azureblob/handler

Request payload example

{
  "handler": {
    "metadata": {
      "tagAttributes": [],
      "eventTimeAttribute": "",
      "useDirectoryForTags": false,
      "sasToken": "?sv=your=sas?token",
      "sasTokenUrl": "https://your.blob.example.windows.net/sastoken-url",
      "container": "demodata"
    }
  },
  "dataSource": {
    "blobHandler": {
      "scheme": "https",
      "url": ""
    },
    "blobHandlerType": "Azure Blob Storage",
    "recordFormat": "",
    "type": "ingested",
    "name": "dev",
    "sqlTableName": "dev"
  }
}

Response example

{
  "id": 18,
  "dataSourceId": 18
}

Get information about a data source

GET /azureblob/handler/{handlerId}

Return the handler metadata associated with the provided handler ID.

Query parameters

Attribute
Description
Required

handlerId

integer The specific handler ID.

Yes

skipCache

boolean If true, the handler cache will be skipped when retrieving the handler data.

No
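
For example, to bypass the handler cache when retrieving metadata, skipCache can be appended to the URL (a hedged sketch; the guide documents the parameter itself, while the query-string form shown here is an assumption):

curl \
    --request GET \
    --header "Content-Type: application/json" \
    --header "Authorization: Bearer dea464c07bd07300095caa8" \
    "https://your-immuta-url.com/azureblob/handler/67?skipCache=true"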

Response parameters

Attribute
Description

dataSourceId

integer The data source ID.

value

array Details regarding the handler, including container, accountName, sasTokenUrl, ingestUserId, tagAttributes, dataSourceName, refreshInterval, eventTimeAttribute, and useDirectoryForTags.

Request example

The following request returns the handler metadata associated with the provided handler ID.

curl \
    --request GET \
    --header "Content-Type: application/json" \
    --header "Authorization: Bearer dea464c07bd07300095caa8" \
    https://your-immuta-url.com/azureblob/handler/67

Response example

{
  "dataSourceId": 427,
  "metadata": {
    "container": "integration",
    "accountName": "integration-tests",
    "sasTokenUrl": "https://your.blob.example.windows.net/",
    "ingestUserId": "azure blob storage_indexer_example",
    "tagAttributes": [],
    "dataSourceName": "Test",
    "refreshInterval": 0,
    "eventTimeAttribute": "",
    "useDirectoryForTags": false
  },
  "type": "azureBlobStorageHandler",
  "connectionString": "integration-tests/integration",
  "id": 427,
  "createdAt": "2021-09-22T18:45:47.744Z",
  "updatedAt": "2021-09-22T18:45:47.969Z"
}

Manage data sources

Method
Path
Purpose

PUT

/azureblob/handler/{handlerId}

Update the provided information for an Azure Blob Storage data source.

PUT

/azureblob/bulk

Update the handler metadata associated with the provided connection string.

PUT

/azureblob/handler/{handlerId}/crawl

Re-crawl the data source and update the metadata.

Update a specific data source

PUT /azureblob/handler/{handlerId}

Update the provided information for an Azure Blob Storage data source.

Query parameters

Attribute
Description
Required

handlerId

integer The specific handler ID.

Yes

skipCache

boolean When true, the handler cache will be skipped when retrieving metadata.

No

Response parameters

Attribute
Description

id

integer The ID of the handler.

dataSourceId

integer The data source ID.

metadata

object Details regarding the updated information.

Request example

The following request, with the payload below, updates the metadata for the data source with the handler ID 18.

curl \
    --request PUT \
    --header "Content-Type: application/json" \
    --header "Authorization: Bearer dea464c07bd07300095caa8" \
    --data @example-payload.json \
    https://your-immuta-url.com/azureblob/handler/18

Payload example

{
  "dataSourceId": 18,
  "metadata": {
    "container": "testdata",
    "accountName": "integration-tests",
    "sasTokenUrl": "https://your.blob.example.windows.net/",
    "ingestUserId": "azure blob storage_indexer_example",
    "tagAttributes": [],
    "dataSourceName": "dev",
    "refreshInterval": 0,
    "eventTimeAttribute": "",
    "useDirectoryForTags": false
  },
  "type": "azureBlobStorageHandler",
  "connectionString": "your/testdata",
  "id": 18,
  "createdAt": "2021-09-23T18:47:52.976Z",
  "updatedAt": "2021-09-23T18:47:53.194Z"
}

Response example

{
  "id": 18,
  "dataSourceId": 18,
  "metadata": {
    "sasToken": "2:your?sastoken==",
    "container": "testdata",
    "accountName": "your-account-name",
    "sasTokenUrl": "2:your?sastokenurlTS",
    "ingestAPIKey": "996samplee89c1apia7ckey9",
    "ingestUserId": "azure blob storage_indexer_example",
    "tagAttributes": [],
    "dataSourceName": "dev",
    "refreshInterval": 0,
    "eventTimeAttribute": "",
    "useDirectoryForTags": false
  }
}

Update multiple data sources

PUT /azureblob/bulk

Update the data source metadata associated with the provided connection string.

Payload parameters

Attribute
Description
Required

handler

metadata Includes metadata about the handler, such as sasToken, sasTokenUrl, container, accountName, and autoIngest.

Yes

connectionString

string The connection string used to connect to the data sources.

Yes

Response parameters

Attribute
Description

bulkId

string The ID of the bulk data source update.

connectionString

string The connection string shared by the data sources bulk updated.

jobsCreated

integer The number of jobs that ran to update the data sources; this number corresponds to the number of data sources updated.

Request example

The following request updates the autoIngest value to true for data sources with the connection string specified in the payload below.

curl \
    --request PUT \
    --header "Content-Type: application/json" \
    --header "Authorization: Bearer dea464c07bd07300095caa8" \
    --data @example-payload.json \
    https://your-immuta-url.com/azureblob/bulk

Payload example

{
  "ids": [
    5, 6
  ],
  "connectionString": "integration-tests/integration",
  "handler": {
    "metadata": {
      "autoIngest": true
    }
  }
}

Response example

{
  "bulkId": "bulk_ds_update_dd2600809bf8418dbea2706d6f456636",
  "connectionString": "integration-tests/integration",
  "jobsCreated": 0
}

Re-crawl the data source

PUT /azureblob/handler/{handlerId}/crawl

Re-crawl the data source and update the metadata.

Query parameters

Attribute
Description
Required

handlerId

integer The specific handler ID.

Yes

Response parameters

The response returns a string that identifies the job run.

Request example

The following request re-crawls the data source.

curl \
    --request PUT \
    --header "Content-Type: application/json" \
    --header "Authorization: Bearer dea464c07bd07300095caa8" \
    https://your-immuta-url.com/azureblob/handler/427/crawl

Response example

a4de5af0-1be1-11ec-8131-6fe77107bfa9
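
Because the response body is the bare job ID, it can be captured directly for later reference (a minimal sketch using the same placeholder credentials as above):

JOB_ID=$(curl -s \
    --request PUT \
    --header "Content-Type: application/json" \
    --header "Authorization: Bearer dea464c07bd07300095caa8" \
    https://your-immuta-url.com/azureblob/handler/427/crawl)
echo "Crawl job started: $JOB_ID"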
