Custom REST Catalog Interface Endpoints

Architectural Overview

Compared with the catalog integrations Immuta provides out of the box, this Custom REST Catalog interface gives the customer tremendous control over the metadata being provided to Immuta.

The custom-developed service must be built to receive and handle calls to the REST endpoints specified below. Immuta calls these endpoints when certain events occur and at various intervals, and the responses required to complete the connection are detailed for each endpoint.

General Concepts

Tags in Immuta

Tags are attributes applied to data, either at the top (data source) level or at the level of an individual column.

Tags in Immuta take the form of a nested tree structure, with "parents", "children", "grandchildren", and so on:

| Parent (root)
|\_ Child1
|   \_ Grandchild1 (leaf)
 \_ Child2
    \_ Grandchild1 (leaf)

The REST Catalog interface derives a tag's position in this tree from a string in standard "dot" (.) notation, such as:

"Parent.Child1.Grandchild1"

Tags returned must meet the following constraints:

  • They must be no longer than 500 characters. Longer tags will not throw an error but will be truncated silently at 500 characters.

  • They must be composed of letters, digits, underscores, dashes, and whitespace characters. A period (.) is used as a separator as described above. Other special characters are not supported.
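
These constraints are easy to check in the catalog service before returning tags. Below is a minimal sketch in Python; the normalize_tag helper is purely illustrative and not part of the interface.

import re

# Each dot-separated segment may contain letters, digits, underscores,
# dashes, and whitespace; anything else is unsupported.
SEGMENT = re.compile(r"^[\w\s-]+$")

def normalize_tag(tag: str) -> str:
    # Immuta silently truncates tags at 500 characters, so do it up front.
    tag = tag[:500]
    for segment in tag.split("."):
        if not SEGMENT.match(segment):
            raise ValueError(f"unsupported characters in tag segment: {segment!r}")
    return tag

print(normalize_tag("REST_Catalog_Root.Child2.Grandchild1"))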

A tag object has a single id property, which is used to uniquely identify the tag within the catalog. This id may be of either a string or integer type, and its value is completely up to the designer of the REST Catalog service. Common examples include: a standard integer value, a UUID, or perhaps a hash of the tag's string value (if it is unique within the system).

For this Custom REST Catalog interface, tags are represented in JSON like:

"<string>[.<string>[.<string>...]]": {
    "id": "<unique identifier, string or int>"
},

For example, the object below specifies three different tags:

"REST_Catalog_Root": {
    "id": "id_is_set_by_catalog_and_can_be_int_or_string"
},
"REST_Catalog_Root.Child1": {
    "id": "d3e859da-40e9-43d2-a302-294458e79a64"
},
"REST_Catalog_Root.Child2.Grandchild1": {
    "id": 10
}

For more information on tags and how they are created, managed, and displayed within Immuta, see our tag documentation.

Descriptions in Immuta

Descriptions are strings that, like tags, can be applied to either a data source or an individual column. These strings support UTF-8, including special and various language characters.

Authentication

Immuta can make requests to your REST Catalog service using any of the following authentication methods:

  • Username and password: Immuta can send requests with a username and password in the Authorization HTTP header. In this case, the custom REST service must be able to parse a Basic Authorization header and validate the credentials sent with it.

  • PKI Certificate: Immuta can also send requests using a CA certificate, a certificate, and a key.

  • No authentication: Immuta can make unauthenticated requests to your REST Catalog service. However, this should only be used if you have other security measures in place (e.g., if the service is on an isolated network that is reachable only by your Immuta environment).

Authentication and specific endpoints

When accessing the /dataSource and /tags endpoints, Immuta will use the configured username and password. If you choose to also protect the human-readable pages with authentication, users will be prompted to authenticate when they first visit those pages.
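
If the service opts for username-and-password protection, it needs to decode and validate the Basic Authorization header on each request. Below is a minimal sketch of that check using Python and Flask; the framework choice and the EXPECTED_USER and EXPECTED_PASSWORD names are illustrative assumptions, not part of the interface.

from flask import Flask, abort, request

app = Flask(__name__)

# Hypothetical credentials; a real service would load these from secure configuration.
EXPECTED_USER = "immuta"
EXPECTED_PASSWORD = "change-me"

@app.before_request
def check_basic_auth():
    # Flask parses a Basic Authorization header into request.authorization.
    auth = request.authorization
    if auth is None or auth.username != EXPECTED_USER or auth.password != EXPECTED_PASSWORD:
        abort(401)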

Endpoint Specification

GET /tags

Overview

The /tags endpoint is used to collect all of the tags the catalog can provide. Immuta uses it to populate the tags list in the Governance section. These tags can then be used for policy creation before the data sources that use them have been registered, which enables policies to apply immediately once those data sources are registered with Immuta.

As with all external catalogs, tags ingested by Immuta through the REST catalog interface cannot be modified locally within Immuta, since the catalog becomes the "source of truth" for them. As a result, these tags appear in Immuta either with a lock icon next to them or without the delete button that would otherwise allow a user to manually remove them from a data source or column.

Request

The /tags endpoint receives a simple GET request from Immuta. No payload or query parameters are required.

Example request:

curl http://<your_custom_rest_catalog>/tags \
     --header 'Authorization: Basic <base64 of username:password>'

Response

The Custom REST service must respond with an object that maps all tag name strings to associated ids. The tag name string fully qualifies the location of the tag in the tree structure, as detailed previously, and the id is a globally unique identifier assigned by the REST catalog to that tag.

The JSON format for this response object is:

{
  "Parent": {"id": <tag_id1>},
  "Parent.Child1": {"id": <tag_id2>},
  "Parent.Child1.Grandchild1": {"id": <tag_id3>}
}

Example response:

{
  "REST_Catalog_Root": {
      "id": "id_is_set_by_catalog_and_can_be_int_or_string"
  },
  "REST_Catalog_Root.Child1": {
      "id": "d3e859da-40e9-43d2-a302-294458e79a64"
  },
  "REST_Catalog_Root.Child2.Grandchild1": {
      "id": 10
  }
}
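
As a concrete sketch, a handler for this endpoint only needs to serialize the catalog's full tag map. The example below uses Python and Flask with a hypothetical in-memory TAGS store; a real service would query its own backend instead.

from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical in-memory tag store keyed by dotted tag name.
TAGS = {
    "REST_Catalog_Root": {"id": "id_is_set_by_catalog_and_can_be_int_or_string"},
    "REST_Catalog_Root.Child1": {"id": "d3e859da-40e9-43d2-a302-294458e79a64"},
    "REST_Catalog_Root.Child2.Grandchild1": {"id": 10},
}

@app.route("/tags", methods=["GET"])
def list_tags():
    # Return every tag the catalog can provide.
    return jsonify(TAGS)

if __name__ == "__main__":
    app.run(port=8080)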

POST /dataSource

Overview

The /dataSource endpoint does the vast majority of the work. It receives a POST request from Immuta, and returns the mapping of a data source and its columns to the applied tags and descriptions.

Immuta will try to fetch metadata for a data source in the system at various times:

  1. During data source creation. Immuta will send metadata to the REST Catalog service, most notably the connection details of the data source, which include the schema and table name. The Custom REST service must be able to parse this information, search its records for a matching entry, and return an ID unique to this data source in its catalogMetadata object.

  2. When a user manually links the data source. Data sources that either fail to auto-link, or that were created prior to the Custom REST catalog being configured, can still be manually linked. To do so, a data source owner can provide the ID of the asset as defined by the Custom REST Catalog via the Immuta UI. In order for this to work, the Custom REST Catalog service must support matching data source assets by unique ID.

  3. During various refreshes. Once linked, Immuta will periodically call the /dataSource endpoint to ensure information is up to date.

Request

Immuta's POST requests to the /dataSource endpoint will consist of a payload containing many of the elements outlined below:

  • catalogMetadata (dictionary): Object holding the data source's catalog metadata.

  • catalogMetadata.id (string or integer): The unique identifier of the data source in the catalog.

  • catalogMetadata.name (string): The name of the data source in the catalog.

  • handlerInfo (dictionary): Object holding the data source's connection details.

  • handlerInfo.schema (string): The data source's schema name in the source system.

  • handlerInfo.table (string): The data source's table name in the source system.

  • handlerInfo.hostname (string): The data source's connection hostname in the source storage system.

  • handlerInfo.port (integer): The data source's connection port in the source storage system.

  • handlerInfo.query (string): The data source's connection query in the source storage system, if applicable.

  • dataSource (dictionary): Object holding general data source information from Immuta. This can be useful when debugging, but is not usually required for catalog purposes.

This object must be parsed by the Custom REST Catalog in order to determine the specific data source metadata being requested.

For the most part, Immuta will provide the id of the data source as part of the catalogMetadata. This should be used as the primary metadata lookup value.

{
  "catalogMetadata": {
    "id": <unique integer or string value>
  }
}

When a data source is being created, such an id will not yet be known to Immuta. Immuta will instead send handlerInfo information as part of the request.

{
  "handlerInfo": {
    "schema": "schema_name",
    "table": "table_name"
  }
}

When an id is not specified, the schema and table name elements should be parsed in an attempt to identify the desired catalog entry and provide an appropriate id. If the lookup succeeds and an id is returned to Immuta in the catalogMetadata section, Immuta will establish an automatic link between the new data source and the catalog entry, and future references will use that id.
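
The id-or-handlerInfo resolution described above is straightforward to implement. Below is a minimal sketch in Python; the BY_TABLE lookup table and resolve_catalog_id helper are hypothetical names used for illustration, and a real service would query its catalog backend instead.

# Hypothetical lookup table mapping (schema, table) pairs to the catalog's own ids.
BY_TABLE = {("schema_name", "table_name"): "this_is_1_unique_id"}

def resolve_catalog_id(payload: dict):
    # Prefer the catalogMetadata.id sent for already-linked data sources.
    catalog_id = (payload.get("catalogMetadata") or {}).get("id")
    if catalog_id is not None:
        return catalog_id
    # During data source creation no id exists yet, so match on schema and
    # table name; returning an id here lets Immuta auto-link the data source.
    handler = payload.get("handlerInfo") or {}
    return BY_TABLE.get((handler.get("schema"), handler.get("table")))

print(resolve_catalog_id({"handlerInfo": {"schema": "schema_name", "table": "table_name"}}))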

Example request:

curl -X POST 'http://<your_custom_rest_catalog>/dataSource' \
     --header 'Authorization: Basic <base64 of username:password>' \
     --header 'Content-Type: application/json' \
     --data '{"catalogMetadata": { "id": "this_is_1_unique_id"}}'

Response

The schema for the /dataSource response uses the same tag object structure as the /tags response, along with the following set of metadata keys for both data sources and columns.

  • catalogMetadata (dictionary): Object holding the data source's catalog metadata.

  • catalogMetadata.id (string or integer): The unique identifier of the data source in the catalog.

  • catalogMetadata.name (string): The name of the data source in the catalog.

  • description (string): A description of the data source.

  • tags (tags object): Object containing the data source-level tags.

  • dictionary (dictionary): Object containing the column names of the data source as its keys.

  • dictionary.<column> (dictionary): Object containing a single column's metadata.

  • dictionary.<column>.catalogMetadata.id (string or integer): The unique identifier of the column in the catalog.

  • dictionary.<column>.description (string): A description of the column.

  • dictionary.<column>.tags (tags object): Object containing the column-level tags as keys.

The returned JSON object should have a format very similar to the following:

{
  "catalogMetadata": {
    "id": <unique integer or string>
  },
  "description": <string>,
  "tags": {
    "Parent": {
      "id": <tag_id1>
    }
  },
  "dictionary": {
    "some_column_name": {
      "catalogMetadata": {
        "id": <col_id1>
      },
      "description": "This column has example data in it",
      "tags": {
        "Parent.Child1": {
          "id": <tag_id2>
        },
        "Parent.Child1.Grandchild1": {
          "id": <tag_id3>
        }
      }
    }
  }
}

Example response:

{
    "catalogMetadata": {
        "name": "Example Data Source",
        "id": "this_is_1_unique_id"
    },
    "description": "This description gets applied to the whole data source",
    "tags": {
        "Root": {
            "id": "id_is_set_by_catalog_and_can_be_int_or_string"
        }
    },
    "dictionary": {
        "ColName1": {
            "catalogMetadata": {
                "id": 502342
            },
            "description": "This description gets applied to just this column",
            "tags": {
                "REST_Catalog_Root.Child1": {
                    "id": "d3e859da-40e9-43d2-a302-294458e79a64"
                },
                "REST_Catalog_Root.Child2": {
                    "id": 49294
                },
                "REST_Catalog_Root.Child2.Grandchild1": {
                    "id": "grand-kid-2"
                }
            }
        }
    }
}
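
Putting these pieces together, a sketch of a full /dataSource handler in Python and Flask follows. The DATA_SOURCES store is a hypothetical stand-in for the catalog's backend, and the not-found behavior shown is a design choice left to the service.

from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical metadata store keyed by the catalog's own data source id.
DATA_SOURCES = {
    "this_is_1_unique_id": {
        "catalogMetadata": {"name": "Example Data Source", "id": "this_is_1_unique_id"},
        "description": "This description gets applied to the whole data source",
        "tags": {"REST_Catalog_Root": {"id": "id_is_set_by_catalog_and_can_be_int_or_string"}},
        "dictionary": {
            "ColName1": {
                "catalogMetadata": {"id": 502342},
                "description": "This description gets applied to just this column",
                "tags": {"REST_Catalog_Root.Child1": {"id": "d3e859da-40e9-43d2-a302-294458e79a64"}},
            }
        },
    }
}

@app.route("/dataSource", methods=["POST"])
def data_source():
    payload = request.get_json(force=True) or {}
    # An already-linked data source arrives with catalogMetadata.id set.
    catalog_id = (payload.get("catalogMetadata") or {}).get("id")
    record = DATA_SOURCES.get(catalog_id)
    if record is None:
        # How to signal "no matching catalog entry" is up to the service designer.
        return jsonify({}), 404
    return jsonify(record)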

GET /dataSource/page/{id}

Overview

This endpoint returns a human-readable information page from the REST catalog for the data source associated with {id}. Immuta provides this as a mechanism for allowing the REST catalog to provide additional information about the data source that may not be directly ingested by or visible within Immuta. This link is accessed in the Immuta UI when a user clicks the catalog logo associated with the data source.

Request

Immuta will send a GET request to the /dataSource/page/{id} endpoint, where {id} will be:

  • id (URL parameter, integer or string): The unique identifier of the data source in the remote catalog system.

Example request:

curl http://<your_custom_rest_catalog>/dataSource/page/123

Response

The Custom REST Catalog can either serve such a page directly or redirect the user to a resource that provides the appropriate page, such as a backing full-service catalog like Collibra if this Custom REST Catalog is simply being used to support a custom data model.

Example response:

<html>
  <head>
    <title>data source 123</title>
  </head>
  <body>
    data source 123 is a data source that was created just for documentation.
  </body>
</html>
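
If the catalog delegates these pages to a backing service, the handler can simply redirect. Here is a minimal sketch in Python and Flask; the target URL is hypothetical.

from flask import Flask, redirect

app = Flask(__name__)

@app.route("/dataSource/page/<asset_id>")
def data_source_page(asset_id):
    # Redirect to the backing catalog's page for this asset instead of
    # rendering HTML locally.
    return redirect(f"https://catalog.example.com/assets/{asset_id}")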

GET /column/{id}

Overview

This endpoint returns the catalog's human-readable information page for the column associated with {id}. Immuta provides this as a mechanism for allowing the REST catalog to provide additional information about the specific column that may not be directly ingested by or visible within Immuta.

Request

Immuta will send a GET request to the /column/{id} endpoint, where {id} will be:

  • id (URL parameter, integer or string): The unique identifier of the column in the remote catalog system.

Example request:

curl http://<your_custom_rest_catalog>/column/10

Response

The Custom REST Catalog can either serve such a page directly or redirect the user to a resource that provides the appropriate page, such as a backing full-service catalog like Collibra if this Custom REST Catalog is simply being used to support a custom data model.

Example response:

<html>
  <head>
    <title>data source 123 Column 10</title>
  </head>
  <body>
    Column 10 is full of example data for documentation reasons.
  </body>
</html>
