# Red Hat OpenShift

This guide describes how to deploy Immuta on Red Hat OpenShift using the Immuta Enterprise Helm chart.

## Prerequisites

The following managed services must be provisioned and running before proceeding. For further assistance, consult the [recommendations table](https://documentation.immuta.com/latest/configuration/deployment-requirements#infrastructure-recommendations) for your cloud provider.

{% hint style="warning" %}
**Feature availability**

If deployed without Elasticsearch/OpenSearch, several core services and features will be unavailable. See the [deployment requirements](https://documentation.immuta.com/latest/configuration/self-managed-deployment/deployment-requirements) for details.
{% endhint %}

* PostgreSQL
* (Optional) Elasticsearch/OpenSearch Service

### Checklist

This checklist outlines the necessary prerequisites for successfully deploying Immuta.

#### Credentials

* [ ] You have the credentials needed to access the ocir.immuta.com OCI registry. These can be viewed in your user profile at [support.immuta.com](https://support.immuta.com).

#### PostgreSQL

* [ ] The PostgreSQL instance's hostname/FQDN is [resolvable from within the Kubernetes cluster](https://documentation.immuta.com/latest/configuration/troubleshooting#common).
* [ ] The PostgreSQL instance is [accepting connections](https://documentation.immuta.com/latest/configuration/troubleshooting#postgresql).
* [ ] You have a PostgreSQL user account with the necessary privileges to create databases, extensions, and roles.
* [ ] The Helm chart supports only username/password authentication for PostgreSQL; other authentication mechanisms are not supported at this time.

#### Elasticsearch

* [ ] The Elasticsearch instance's hostname/FQDN is [resolvable from within the Kubernetes cluster](https://documentation.immuta.com/latest/configuration/troubleshooting#common).
* [ ] The Elasticsearch instance is [accepting connections](https://documentation.immuta.com/latest/configuration/troubleshooting#elasticsearch).
* [ ] The Elasticsearch user has the [required permissions](https://documentation.immuta.com/latest/configuration/deployment-requirements#opensearch-user).
* [ ] The Helm chart supports only the following authentication methods for Elasticsearch; other mechanisms are not supported at this time:
  * [ ] [Username and password](https://documentation.immuta.com/latest/configuration/self-managed-deployment/configure/opensearch-authentication/setting-up-opensearch-user-permissions-for-username-and-password-authentication)
  * [ ] [AWS assumed role for OpenSearch](https://documentation.immuta.com/latest/configuration/self-managed-deployment/configure/opensearch-authentication/setting-up-opensearch-user-permissions-for-an-aws-role)

## Setup

### Helm

#### Authenticate with OCI registry

```bash
echo <token> | helm registry login --password-stdin --username <username> ocir.immuta.com
```

### Kubernetes

{% hint style="info" %}
Creating a dedicated namespace ensures a logically isolated environment for your Immuta deployment, preventing resource conflicts with other applications.
{% endhint %}

#### Create project

1. Create an OpenShift project named `immuta`.

   ```bash
   oc new-project immuta
   ```
2. Get the UID range allocated to the project. Each running container's UID must fall within this range. This value will be referenced later on.

   ```bash
   oc get project immuta --output template='{{index .metadata.annotations "openshift.io/sa.scc.uid-range"}}{{"\n"}}'
   ```
3. Get the GID range allocated to the project. Each running container's GID must fall within this range. This value will be referenced later on.

   ```bash
   oc get project immuta --output template='{{index .metadata.annotations "openshift.io/sa.scc.supplemental-groups"}}{{"\n"}}'
   ```
4. Switch to project `immuta`.

   ```bash
   oc project immuta
   ```
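The two SCC annotations above typically take the form `<start>/<size>` (for example, `1000650000/10000`). If you script the Helm values later on, a minimal sketch of parsing such a range and checking an ID against it (the example annotation value is illustrative):

```python
def parse_scc_range(annotation: str) -> range:
    """Parse an OpenShift SCC range annotation of the form '<start>/<size>'."""
    start, size = annotation.split("/")
    return range(int(start), int(start) + int(size))

# Example: a value like the one printed by `oc get project immuta --output template=...`
uid_range = parse_scc_range("1000650000/10000")

print(uid_range.start)          # the first valid UID for runAsUser
print(1000650000 in uid_range)  # True: the start of the range is itself valid
```

Any `runAsUser`/`runAsGroup` value you place in the Helm values must fall within the respective range.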

#### Create registry secret

Create a container registry pull secret. Your credentials to authenticate with ocir.immuta.com can be viewed in your user profile at [support.immuta.com](https://support.immuta.com).

```bash
oc create secret docker-registry immuta-oci-registry \
    --docker-server=https://ocir.immuta.com \
    --docker-username="<username>" \
    --docker-password="<token>" \
    --docker-email=support@immuta.com
```
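Behind the scenes, `oc create secret docker-registry` stores a base64-encoded `.dockerconfigjson` document. If you ever need to debug the pull secret, this sketch reproduces its structure (the credentials shown are placeholders):

```python
import base64
import json

def docker_config_json(server: str, username: str, password: str, email: str) -> str:
    """Build the .dockerconfigjson payload that a docker-registry secret stores."""
    auth = base64.b64encode(f"{username}:{password}".encode()).decode()
    return json.dumps({
        "auths": {
            server: {
                "username": username,
                "password": password,
                "email": email,
                "auth": auth,  # base64 of "username:password"
            }
        }
    })

config = docker_config_json("https://ocir.immuta.com", "<username>", "<token>", "support@immuta.com")
print(json.loads(config)["auths"].keys())  # the registry server is the single key under auths
```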

### PostgreSQL

{% hint style="info" %}
**Connecting a client**

There are numerous ways to connect to a PostgreSQL database. This step demonstrates how to connect with psql by creating an ephemeral Kubernetes pod.
{% endhint %}

#### Connect to the database

Connect to the database as an admin (e.g., `postgres`) by creating an ephemeral container inside the Kubernetes cluster. A shell prompt will not be displayed after executing the `oc run` command outlined below. Wait 5 seconds, and then proceed by entering a password.

```bash
oc run pgclient \
    --stdin \
    --tty \
    --rm \
    --image docker.io/bitnami/postgresql -- \
    psql --host <postgres-fqdn> --username <postgres-admin> --dbname postgres --port 5432 --password
```

#### Create role

{% hint style="info" %}
Temporal's upgrade mechanism uses the SQL command `CREATE EXTENSION` when managing database schema changes. In cloud-managed PostgreSQL offerings, however, this command is typically restricted to roles with elevated privileges to protect the database and maintain the stability of the cloud environment.

To ensure Temporal can successfully manage its schema, grant a pre-defined administrator role to the `immuta` role. The role name varies depending on the cloud-managed service:

* Amazon RDS: `rds_superuser`
* Azure Database: `azure_pg_admin`
* Google Cloud SQL: `cloudsqlsuperuser`
  {% endhint %}

1. Create the `immuta` role.

   ```plsql
   CREATE ROLE immuta WITH LOGIN ENCRYPTED PASSWORD '<postgres-password>';
   ALTER ROLE immuta SET search_path TO bometadata,public;
   ```
2. Grant administrator privileges to the `immuta` role. Upon successfully completing this installation guide, you can optionally revoke this role grant.

   ```plsql
   GRANT <admin-role> TO immuta;
   ```
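Before moving on, you can optionally confirm the membership from the same `psql` session. The query below is a sketch; it should list both `immuta` itself and the administrator role you just granted:

```plsql
SELECT rolname FROM pg_roles WHERE pg_has_role('immuta', oid, 'member');
```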

#### Create databases

1. Create databases.

   ```plsql
   CREATE DATABASE immuta OWNER immuta;
   CREATE DATABASE temporal OWNER immuta;
   CREATE DATABASE temporal_visibility OWNER immuta;
   ```
2. Grant role `immuta` additional privileges. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/current/static/ddl-priv.html) for further details on database roles and privileges.

   ```plsql
   GRANT ALL ON DATABASE immuta TO immuta;
   GRANT ALL ON DATABASE temporal TO immuta;
   GRANT ALL ON DATABASE temporal_visibility TO immuta;
   ```
3. Configure the `immuta` database.

   ```plsql
   \c immuta
   CREATE EXTENSION pgcrypto;
   ```
4. Configure the `temporal` database.

   ```plsql
   \c temporal
   GRANT CREATE ON SCHEMA public TO immuta;
   ```
5. Configure the `temporal_visibility` database.

   ```plsql
   \c temporal_visibility
   GRANT CREATE ON SCHEMA public TO immuta;
   CREATE EXTENSION btree_gin;
   ```
6. Exit the interactive prompt. Type `\q`, and then press `Enter`.

## Install Immuta

This section demonstrates how to deploy Immuta using the Immuta Enterprise Helm chart once the prerequisite cloud-managed services are configured.

{% hint style="info" %}
**Why disable Ingress?**

In OpenShift, Ingress resources are managed by OpenShift Routes. These routes provide a more integrated and streamlined way to handle external access to your applications. To avoid conflicts and ensure proper functionality, it's necessary to disable the pre-defined Ingress resource in the Helm chart.
{% endhint %}

{% hint style="warning" %}
**Feature availability**

If deployed without Elasticsearch/OpenSearch, several core services and features will be unavailable. See the [deployment requirements](https://documentation.immuta.com/latest/configuration/self-managed-deployment/deployment-requirements) for details.
{% endhint %}

{% tabs %}
{% tab title="Default" %}

<pre class="language-yaml" data-title="immuta-values.yaml" data-line-numbers><code class="lang-yaml">global:
  imageRegistry: ocir.immuta.com
  imagePullSecrets:
    - name: immuta-oci-registry
  postgresql:
    host: &#x3C;postgres-fqdn>
    port: 5432
    username: immuta
    password: &#x3C;postgres-password>

audit:
  config:
    elasticsearchEndpoint: &#x3C;elasticsearch-endpoint>
    searchAuthenticationType: &#x3C;'<a data-footnote-ref href="#user-content-fn-1">UsernamePassword</a>' or '<a data-footnote-ref href="#user-content-fn-2">AWS</a>'>
  # If you use OpenSearch and authenticate with username and password, uncomment the lines below by deleting the hash symbols
    #elasticsearchUsername: &#x3C;elasticsearch-username>
    #elasticsearchPassword: &#x3C;elasticsearch-password>
  # If you use OpenSearch and authenticate with AWS role, uncomment the lines below by deleting the hash symbols. When using AWS role authentication, elasticsearchUsername (above) must be set to ''.
    #searchAwsRegion: '&#x3C;deployment-OS-region>'
  # If Immuta is deployed in an AWS account that is different than OpenSearch, then you must configure a trust relationship between the Immuta role and an OpenSearch role 
    #searchAwsRoleArn: '&#x3C;assumed-role-arn>'
  postgresql:
    database: immuta 

  deployment:
    podSecurityContext:
      # A number that is within the project range:
      #   oc get project &#x3C;project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.uid-range"}}{{"\n"}}'
      runAsUser: &#x3C;user-id>
      # A number that is within the project range:
      #   oc get project &#x3C;project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.supplemental-groups"}}{{"\n"}}'
      runAsGroup: &#x3C;group-id>
      seccompProfile:
        type: RuntimeDefault
      
    containerSecurityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
  #init:
    #extraEnvVars:
      # <a data-footnote-ref href="#user-content-fn-3">Audit retention</a> defaults to 7 days. To change the retention period, uncomment the lines below by deleting the hash symbols and update the value
      #- name: AUDIT_RETENTION_POLICY_IN_DAYS
        #value: "90"
        
  worker:
    podSecurityContext:  
      # A number that is within the project range:
      #   oc get project &#x3C;project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.uid-range"}}{{"\n"}}'
      runAsUser: &#x3C;user-id>
      # A number that is within the project range:
      #   oc get project &#x3C;project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.supplemental-groups"}}{{"\n"}}'
      runAsGroup: &#x3C;group-id>
      seccompProfile:
        type: RuntimeDefault
      
    containerSecurityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL 

discover:
  deployment:
    podSecurityContext:
      # A number that is within the project range:
      #   oc get project &#x3C;project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.uid-range"}}{{"\n"}}'
      runAsUser: &#x3C;user-id>
      # A number that is within the project range:
      #   oc get project &#x3C;project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.supplemental-groups"}}{{"\n"}}'
      runAsGroup: &#x3C;group-id>
      seccompProfile:
        type: RuntimeDefault
      
    containerSecurityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL

secure:
  ingress:
    enabled: false

  postgresql:
    database: immuta
    ssl: false

  web:
    podSecurityContext:
      # A number that is within the project range:
      #   oc get project &#x3C;project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.uid-range"}}{{"\n"}}'
      runAsUser: &#x3C;user-id>
      # A number that is within the project range:
      #   oc get project &#x3C;project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.supplemental-groups"}}{{"\n"}}'
      runAsGroup: &#x3C;group-id>
      seccompProfile:
        type: RuntimeDefault
  
    containerSecurityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL

  backgroundWorker:
    podSecurityContext:
      # A number that is within the project range:
      #   oc get project &#x3C;project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.uid-range"}}{{"\n"}}'
      runAsUser: &#x3C;user-id>
      # A number that is within the project range:
      #   oc get project &#x3C;project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.supplemental-groups"}}{{"\n"}}'
      runAsGroup: &#x3C;group-id>
      seccompProfile:
        type: RuntimeDefault
      
    containerSecurityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
  
temporal:
  enabled: true
  schema:
    createDatabase:
      enabled: false
  server:
    podSecurityContext:
        # A number that is within the project range:
        #   oc get project &#x3C;project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.uid-range"}}{{"\n"}}'
        runAsUser: &#x3C;user-id>
        # A number that is within the project range:
        #   oc get project &#x3C;project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.supplemental-groups"}}{{"\n"}}'
        runAsGroup: &#x3C;group-id>
        seccompProfile:
          type: RuntimeDefault
    config:
      persistence:
        default:
          sql:
            database: temporal
            tls:
              enabled: true
        visibility:
          sql:
            database: temporal_visibility
            tls:
              enabled: true
    frontend:
      containerSecurityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
    history:
      containerSecurityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
    matching:
      containerSecurityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
    worker:
      containerSecurityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
  schema:
    podSecurityContext:
        # A number that is within the project range:
        #   oc get project &#x3C;project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.uid-range"}}{{"\n"}}'
        runAsUser: &#x3C;user-id>
        # A number that is within the project range:
        #   oc get project &#x3C;project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.supplemental-groups"}}{{"\n"}}'
        runAsGroup: &#x3C;group-id>
        seccompProfile:
          type: RuntimeDefault
    containerSecurityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
  proxy:
    deployment:
      podSecurityContext:
        # A number that is within the project range:
        #   oc get project &#x3C;project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.uid-range"}}{{"\n"}}'
        runAsUser: &#x3C;user-id>
        # A number that is within the project range:
        #   oc get project &#x3C;project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.supplemental-groups"}}{{"\n"}}'
        runAsGroup: &#x3C;group-id>
        seccompProfile:
          type: RuntimeDefault
      containerSecurityContext:
        enabled: true
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
</code></pre>

{% endtab %}

{% tab title="Without Elasticsearch" %}
{% code title="immuta-values.yaml" lineNumbers="true" %}

```yaml
global:
  imageRegistry: ocir.immuta.com
  imagePullSecrets:
    - name: immuta-oci-registry
  postgresql:
    host: <postgres-fqdn>
    port: 5432
    username: immuta
    password: <postgres-password>

audit:
  enabled: false

discover:
  deployment:
    podSecurityContext:
      # A number that is within the project range:
      #   oc get project <project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.uid-range"}}{{"\n"}}'
      runAsUser: <user-id>
      # A number that is within the project range:
      #   oc get project <project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.supplemental-groups"}}{{"\n"}}'
      runAsGroup: <group-id>
      seccompProfile:
        type: RuntimeDefault

    containerSecurityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL

secure:
  ingress:
    enabled: false

  extraEnvVars:
    - name: FeatureFlag_AuditService
      value: "false"

  postgresql:
    database: immuta
    ssl: true

  web:
    podSecurityContext:
      # A number that is within the project range:
      #   oc get project <project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.uid-range"}}{{"\n"}}'
      runAsUser: <user-id>
      # A number that is within the project range:
      #   oc get project <project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.supplemental-groups"}}{{"\n"}}'
      runAsGroup: <group-id>
      seccompProfile:
        type: RuntimeDefault

    containerSecurityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL

  backgroundWorker:
    podSecurityContext:
      # A number that is within the project range:
      #   oc get project <project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.uid-range"}}{{"\n"}}'
      runAsUser: <user-id>
      # A number that is within the project range:
      #   oc get project <project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.supplemental-groups"}}{{"\n"}}'
      runAsGroup: <group-id>
      seccompProfile:
        type: RuntimeDefault

    containerSecurityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL

temporal:
  enabled: true
  schema:
    createDatabase:
      enabled: false
  server:
    podSecurityContext:
        # A number that is within the project range:
        #   oc get project <project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.uid-range"}}{{"\n"}}'
        runAsUser: <user-id>
        # A number that is within the project range:
        #   oc get project <project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.supplemental-groups"}}{{"\n"}}'
        runAsGroup: <group-id>
        seccompProfile:
          type: RuntimeDefault
    config:
      persistence:
        default:
          sql:
            database: temporal
            tls:
              enabled: true
        visibility:
          sql:
            database: temporal_visibility
            tls:
              enabled: true
    frontend:
      containerSecurityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
    history:
      containerSecurityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
    matching:
      containerSecurityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
    worker:
      containerSecurityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
  schema:
    podSecurityContext:
        # A number that is within the project range:
        #   oc get project <project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.uid-range"}}{{"\n"}}'
        runAsUser: <user-id>
        # A number that is within the project range:
        #   oc get project <project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.supplemental-groups"}}{{"\n"}}'
        runAsGroup: <group-id>
        seccompProfile:
          type: RuntimeDefault
    containerSecurityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
  proxy:
    deployment:
      podSecurityContext:
        # A number that is within the project range:
        #   oc get project <project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.uid-range"}}{{"\n"}}'
        runAsUser: <user-id>
        # A number that is within the project range:
        #   oc get project <project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.supplemental-groups"}}{{"\n"}}'
        runAsGroup: <group-id>
        seccompProfile:
          type: RuntimeDefault
      containerSecurityContext:
        enabled: true
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
```

{% endcode %}
{% endtab %}
{% endtabs %}
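A leftover `<placeholder>` in `immuta-values.yaml` is an easy mistake to miss and leads to a failed install. A minimal sanity check that flags any remaining angle-bracket placeholders, ignoring comment lines (the regex and comment handling are deliberate simplifications):

```python
import re

PLACEHOLDER = re.compile(r"<[^<>]+>")

def find_placeholders(yaml_text: str) -> list[str]:
    """Return lines that still contain <placeholder> tokens, ignoring comments."""
    hits = []
    for lineno, line in enumerate(yaml_text.splitlines(), start=1):
        code = line.split("#", 1)[0]  # crude comment strip; adequate for this values file
        if PLACEHOLDER.search(code):
            hits.append(f"line {lineno}: {line.strip()}")
    return hits

sample = """\
global:
  postgresql:
    host: <postgres-fqdn>
    port: 5432
"""
for hit in find_placeholders(sample):
    print(hit)  # flags the <postgres-fqdn> line
```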

1. Create a file named `immuta-values.yaml` with the above content, making sure to update all [placeholder values](https://documentation.immuta.com/latest/configuration/self-managed-deployment/conventions).

{% hint style="warning" %}
**Avoid these special characters in generated passwords**

whitespace, `$`, `&`, `:`, `\`, `/`, `'`
{% endhint %}
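If you generate the PostgreSQL password programmatically, one way to respect the restrictions above is to draw only from an explicit allowed alphabet. A sketch (the 24-character default length is an arbitrary choice):

```python
import secrets
import string

# Characters the hint above says to avoid in generated passwords.
FORBIDDEN = set(" \t$&:\\/'")
ALPHABET = [c for c in string.ascii_letters + string.digits + string.punctuation
            if c not in FORBIDDEN]

def generate_password(length: int = 24) -> str:
    """Generate a random password avoiding the restricted characters."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

password = generate_password()
assert not FORBIDDEN & set(password)  # no restricted characters present
print(password)
```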

2. Deploy Immuta.

   ```bash
   helm install immuta oci://ocir.immuta.com/stable/immuta-enterprise \
       --values immuta-values.yaml \
       --version 2026.1.3
   ```
3. Wait for all pods to become ready.

   ```bash
   oc wait --for=condition=Ready pods --all
   ```

## Validation

{% hint style="info" %}
This section helps you validate your Immuta installation by temporarily accessing the application locally. This access is limited to your own computer; to enable access for other devices, configure Ingress as outlined in the [Next steps](#next-steps) section.
{% endhint %}

1. Determine the name of the Secure service.

   ```bash
   oc get service --selector "app.kubernetes.io/component=secure" --output name
   ```
2. Listen on local port `8080`, forwarding TCP traffic to the Secure service's port named `http`.

   ```bash
   oc port-forward <service-name> 8080:http
   ```
3. In a web browser, navigate to [localhost:8080](http://localhost:8080) to ensure the Immuta application loads.
4. Press `Control+C` to stop port forwarding.
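While the port-forward from step 2 is running, you can also check readiness from a script rather than a browser. A sketch using only the standard library (the URL assumes the port-forward above):

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def is_responding(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with a non-5xx HTTP status."""
    try:
        with urlopen(url, timeout=timeout) as response:
            return response.status < 500
    except HTTPError as err:
        return err.code < 500  # the server answered, even if with an error page
    except (URLError, OSError):
        return False

if __name__ == "__main__":
    print(is_responding("http://localhost:8080"))
```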

## Next steps

* [Configure Ingress for OpenShift (required)](https://documentation.immuta.com/latest/configuration/configure/ingress-configuration#openshift-ingress-operator).
* [Learn about best practices for running Immuta in production](https://documentation.immuta.com/latest/configuration/self-managed-deployment/configure/immuta-in-production).

[^1]: If using `UsernamePassword`, you must include the `elasticsearchUsername` and `elasticsearchPassword` values.

[^2]: If using `AWS` role authentication, `elasticsearchUsername` must be set to `''` and you must include the `searchAwsRegion` value.

[^3]: For more details about Immuta's audit retention, see the [Audit best practices page](https://documentation.immuta.com/latest/configuration/configure/audit-best-practices#retention-period).
