Red Hat OpenShift

This is an OpenShift-specific guide on how to deploy Immuta with the following cloud-managed services:

  • Cloud-managed PostgreSQL
  • Cloud-managed Redis
  • Cloud-managed Elasticsearch

Prerequisites

Review the following criteria before deploying Immuta.

PostgreSQL

  1. The PostgreSQL instance has been provisioned and is actively running.
  2. The PostgreSQL instance's hostname/FQDN is resolvable from within the Kubernetes cluster.
  3. The PostgreSQL instance is accepting connections.

Redis

  1. The Redis instance has been provisioned and is actively running.
  2. The Redis instance's hostname/FQDN is resolvable from within the Kubernetes cluster.
  3. The Redis instance is accepting connections.

Elasticsearch

  1. The Elasticsearch instance has been provisioned and is actively running.
  2. The Elasticsearch instance's hostname/FQDN is resolvable from within the Kubernetes cluster.
  3. The Elasticsearch instance is accepting connections.

Pull the Immuta Enterprise Helm chart

  1. Navigate to the Immuta releases page to obtain the Kubernetes Helm Installation Credentials, which are used to authenticate with Immuta's Helm registry.
  2. Copy the snippet below and replace the placeholder text with the credentials you obtained in the previous step to add the Helm repository:

    echo <token> | helm repo add --username <username> --password-stdin immuta https://archives.immuta.com/charts
    

    --pass-credentials flag

    If you encounter an unauthorized error when adding the Immuta Enterprise Helm chart (IEHC), rerun the helm repo add command above with the --pass-credentials flag.

    By default, Helm passes the username and password only to the URL of the Helm repository itself; the credentials are scoped to the repository's scheme, host, and port. To pass them to other domains Helm may encounter when retrieving a chart, use the --pass-credentials flag, which restores the old behavior for a single repository on an opt-in basis.

    If you use a username and password for a Helm repository, you can audit the repository to check whether another domain could receive the credentials: in that repository's index.yaml file, look for other domains in the urls list of each chart version. If another domain is found and that chart version is pulled or installed, the credentials will be passed on to it.
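    As a sketch of that audit, the snippet below extracts every hostname referenced in a repository's index.yaml so you can spot domains other than the repository's own. The index.yaml contents here are illustrative, not Immuta's actual index; in practice you would download the file from the repository first.

```shell
# Illustrative index.yaml fragment; a real one is served by the Helm repository.
cat > index.yaml <<'EOF'
apiVersion: v1
entries:
  immuta-enterprise:
    - version: 2024.2.2
      urls:
        - https://archives.immuta.com/charts/immuta-enterprise-2024.2.2.tgz
    - version: 2024.2.1
      urls:
        - https://cdn.example.com/charts/immuta-enterprise-2024.2.1.tgz
EOF

# List each distinct scheme://host referenced by chart URLs. Any host other
# than the repository's own would receive credentials when that chart version
# is pulled with --pass-credentials set.
grep -Eo 'https?://[^/ ]+' index.yaml | sort -u
```

    In this illustrative case the listing would surface cdn.example.com alongside archives.immuta.com, flagging a second domain that could receive the credentials.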

  3. Run the commands below to pull the latest Immuta Enterprise Helm chart or a specific version of the Immuta Enterprise Helm chart:

    • Latest chart:

      helm pull immuta/immuta-enterprise
      
    • Specific version:

      helm pull immuta/immuta-enterprise --version 2024.2.2
      

Setup

  1. Create a new OpenShift project named immuta.

    oc new-project immuta
    
  2. Get the UID range allocated to the project. Each running container's UID must fall within this range. This value will be referenced later on.

    oc get project immuta --output template='{{index .metadata.annotations "openshift.io/sa.scc.uid-range"}}{{"\n"}}'
    
  3. Get the GID range allocated to the project. Each running container's GID must fall within this range. This value will be referenced later on.

    oc get project immuta --output template='{{index .metadata.annotations "openshift.io/sa.scc.supplemental-groups"}}{{"\n"}}'
    
  4. Switch to project immuta.

    oc project immuta
    
  5. Create a container registry pull secret.

    Registry credentials

    Navigate to download.immuta.com to obtain credentials used to authenticate with Immuta's container registry.

    oc create secret docker-registry immuta-registry \
        --docker-server=https://registry.immuta.com \
        --docker-username="<username>" \
        --docker-password="<token>" \
        --email=support@immuta.com
    
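The two annotation commands above return a range, typically formatted as <start>/<size>; any ID from start through start+size-1 satisfies the project's constraint. As a minimal sketch (the range value below is illustrative; substitute the annotation output from your project), a valid runAsUser can be derived like this:

```shell
# Illustrative annotation value; replace with the output of the
# `oc get project ... sa.scc.uid-range` command above.
UID_RANGE="1000740000/10000"

START="${UID_RANGE%/*}"    # first UID in the range
SIZE="${UID_RANGE#*/}"     # number of UIDs in the range
END=$((START + SIZE - 1))  # last UID in the range

# Any value in [START, END] is valid for runAsUser; the range start
# is a common choice. The same logic applies to the GID range.
echo "runAsUser candidate: ${START} (valid range: ${START}-${END})"
```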

Cloud-managed PostgreSQL

  1. Connect to the database as superuser (postgres) by creating an ephemeral container inside the Kubernetes cluster.

    Connecting to the database

    There are numerous ways to connect to a PostgreSQL database. This step demonstrates how to connect by creating an ephemeral Kubernetes pod.

    Interactive shell

    A shell prompt will not be displayed after executing the oc run command outlined below. Wait 5 seconds, and then proceed by entering a password.

    oc run pgclient \
        --stdin \
        --tty \
        --rm \
        --image docker.io/bitnami/postgresql -- \
        psql --host <postgres-fqdn> --username postgres --port 5432 --password
    
  2. Create an immuta role and database.

    CREATE ROLE immuta WITH LOGIN ENCRYPTED PASSWORD '<postgres-password>';
    
    GRANT immuta TO CURRENT_USER;
    
    CREATE DATABASE immuta OWNER immuta;
    
    GRANT ALL ON DATABASE immuta TO immuta;
    ALTER ROLE immuta SET search_path TO bometadata,public;
    
  3. Revoke privileges from CURRENT_USER as they're no longer required.

    REVOKE immuta FROM CURRENT_USER;
    
  4. Enable the pgcrypto extension.

    \c immuta
    CREATE EXTENSION pgcrypto;
    
  5. Type \q, and then press Enter to exit.
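The interactive SQL above can also be collected into a single script and fed to psql non-interactively, which is convenient for repeatable environments. The sketch below only generates the script file; the password placeholder must be replaced before use, and the script would then be run with something like psql --host <postgres-fqdn> --username postgres --file immuta-setup.sql.

```shell
# Write the role/database bootstrap statements to a file. <postgres-password>
# is a placeholder and must be replaced with a real value before running.
cat > immuta-setup.sql <<'EOF'
CREATE ROLE immuta WITH LOGIN ENCRYPTED PASSWORD '<postgres-password>';
GRANT immuta TO CURRENT_USER;
CREATE DATABASE immuta OWNER immuta;
GRANT ALL ON DATABASE immuta TO immuta;
ALTER ROLE immuta SET search_path TO bometadata,public;
REVOKE immuta FROM CURRENT_USER;
\c immuta
CREATE EXTENSION pgcrypto;
EOF

# Sanity-check that the generated script contains the expected statements.
grep -c 'CREATE' immuta-setup.sql
```

Note that \c immuta is a psql meta-command, so it switches the connection before CREATE EXTENSION runs, mirroring step 4 above.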

Install Immuta

This section demonstrates how to deploy Immuta using the Immuta Enterprise Helm chart once the prerequisite cloud-managed services are configured.

  1. Create a Helm values file named immuta-values.yaml with the content below. The ingress value is set to false because the Ingress resource will be managed by an OpenShift route that you will create when configuring Ingress, not by the Immuta Enterprise Helm chart. Because TLS comes pre-configured with OpenShift, tls is also set to false.

    immuta-values.yaml
    global:
      imageRegistry: registry.immuta.com
      imagePullSecrets:
        - name: immuta-registry
    
    audit:
      config:
        databaseConnectionString: postgres://immuta:<postgres-password>@<postgres-fqdn>:5432/immuta?schema=audit
        elasticsearchEndpoint: http://<elasticsearch-fqdn>:9200
        elasticsearchUsername: <elasticsearch-username>
        elasticsearchPassword: <elasticsearch-password>
    
      deployment:
        podSecurityContext:
          # A number that is within the project range:
          #   oc get project <project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.uid-range"}}{{"\n"}}'
          runAsUser: <user-id>
          # A number that is within the project range:
          #   oc get project <project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.supplemental-groups"}}{{"\n"}}'
          runAsGroup: <group-id>
          seccompProfile:
            type: RuntimeDefault
    
        containerSecurityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
              - ALL
    
    discover:
      deployment:
        podSecurityContext:
          # A number that is within the project range:
          #   oc get project <project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.uid-range"}}{{"\n"}}'
          runAsUser: <user-id>
          # A number that is within the project range:
          #   oc get project <project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.supplemental-groups"}}{{"\n"}}'
          runAsGroup: <group-id>
          seccompProfile:
            type: RuntimeDefault
    
        containerSecurityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
              - ALL
    
    secure:
      extraEnvVars:
        - name: FeatureFlag_AuditService
          value: "true"
        - name: FeatureFlag_detect
          value: "true"
        - name: FeatureFlag_auditLegacyViewHide
          value: "true"
    
      ingress:
        enabled: false
        tls: false
    
      postgresql:
        host: <postgres-fqdn>
        port: 5432
        database: immuta
        username: immuta
        password: <postgres-password>
        ssl: false
    
      web:
        podSecurityContext:
          # A number that is within the project range:
          #   oc get project <project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.uid-range"}}{{"\n"}}'
          runAsUser: <user-id>
          # A number that is within the project range:
          #   oc get project <project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.supplemental-groups"}}{{"\n"}}'
          runAsGroup: <group-id>
          seccompProfile:
            type: RuntimeDefault
    
        containerSecurityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
              - ALL
    
      backgroundWorker:
        podSecurityContext:
          # A number that is within the project range:
          #   oc get project <project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.uid-range"}}{{"\n"}}'
          runAsUser: <user-id>
          # A number that is within the project range:
          #   oc get project <project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.supplemental-groups"}}{{"\n"}}'
          runAsGroup: <group-id>
          seccompProfile:
            type: RuntimeDefault
    
        containerSecurityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
              - ALL
    
  2. Update all placeholder values in the immuta-values.yaml file.

  3. Deploy Immuta.

    helm install immuta immuta/immuta-enterprise \
        --values immuta-values.yaml
    

Validation

  1. Wait for all pods in the namespace to become ready.

    oc wait --for=condition=Ready pods --all
    
  2. Determine the name of the Secure service.

    oc get service --selector "app.kubernetes.io/component=secure" --output template='{{ range .items }}{{ .metadata.name }}{{ "\n" }}{{ end }}'
    
  3. Listen on local port 8080, forwarding TCP traffic to the Secure service's port named http.

    oc port-forward service/<name> 8080:http
    

Next steps