Red Hat OpenShift
This is an OpenShift-specific guide on how to deploy Immuta with the following managed services:
- Cloud-managed PostgreSQL
- Cloud-managed Redis
- Cloud-managed Elasticsearch
Prerequisites
Review the following criteria before proceeding with deploying Immuta.
PostgreSQL
- The PostgreSQL instance has been provisioned and is actively running.
- The PostgreSQL instance's hostname/FQDN is resolvable from within the Kubernetes cluster.
- The PostgreSQL instance is accepting connections.
Redis
- The Redis instance has been provisioned and is actively running.
- The Redis instance's hostname/FQDN is resolvable from within the Kubernetes cluster.
- The Redis instance is accepting connections.
Elasticsearch
- The Elasticsearch instance has been provisioned and is actively running.
- The Elasticsearch instance's hostname/FQDN is resolvable from within the Kubernetes cluster.
- The Elasticsearch instance is accepting connections.
Pull the Immuta Enterprise Helm chart
- Navigate to the Immuta releases page to obtain the Kubernetes Helm Installation Credentials used to authenticate with Immuta's Helm registry.
- Copy the snippet below, replacing the placeholder text with the credentials obtained in the previous step, to add the Helm repository:

  ```shell
  echo <token> | helm repo add --username <username> --password-stdin immuta https://archives.immuta.com/charts
  ```
`--pass-credentials` flag

If you encounter an unauthorized error when adding the Immuta Enterprise Helm chart (IEHC), rerun the command with the `--pass-credentials` flag (`helm repo add --pass-credentials`). By default, the username and password are only passed to the URL location of the Helm repository and are scoped to its scheme, host, and port. To pass them to other domains Helm may encounter when it goes to retrieve a chart, use the `--pass-credentials` flag, which restores the old behavior for a single repository as an opt-in. If you use a username and password for a Helm repository, you can audit the repository's `index.yaml` file to check whether another domain could receive the credentials: look for another domain in the URL list for the chart versions. If another domain is found and that chart version is pulled or installed, the credentials will be passed on to it.
- Run one of the commands below to pull either the latest Immuta Enterprise Helm chart or a specific version:

  Latest chart:

  ```shell
  helm pull immuta/immuta-enterprise
  ```

  Specific version:

  ```shell
  helm pull immuta/immuta-enterprise --version 2024.2.2
  ```
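`helm pull` saves the chart as a versioned tarball in the current working directory, named `<chart>-<version>.tgz`. A small shell sketch of the expected file name, assuming the `2024.2.2` version shown above:

```shell
# Sketch: derive the tarball name helm pull writes to the working directory.
chart="immuta-enterprise"
version="2024.2.2"
tarball="${chart}-${version}.tgz"
echo "$tarball"   # prints immuta-enterprise-2024.2.2.tgz

# Inspect the pulled chart without extracting it:
#   tar -tzf "$tarball" | head
```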
Setup
- Create a new OpenShift project named `immuta` for Immuta.

  ```shell
  oc new-project immuta
  ```

- Get the UID range allocated to the project. Each running container's UID must fall within this range. This value will be referenced later on.

  ```shell
  oc get project immuta --output template='{{index .metadata.annotations "openshift.io/sa.scc.uid-range"}}{{"\n"}}'
  ```

- Get the GID range allocated to the project. Each running container's GID must fall within this range. This value will be referenced later on.

  ```shell
  oc get project immuta --output template='{{index .metadata.annotations "openshift.io/sa.scc.supplemental-groups"}}{{"\n"}}'
  ```

- Switch to the `immuta` project.

  ```shell
  oc project immuta
  ```

- Create a container registry pull secret. Navigate to download.immuta.com to obtain the registry credentials used to authenticate with Immuta's container registry.

  ```shell
  oc create secret docker-registry immuta-registry \
    --docker-server=https://registry.immuta.com \
    --docker-username="<username>" \
    --docker-password="<token>" \
    --email=support@immuta.com
  ```
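The UID and GID range annotations fetched above come back in `<start>/<size>` form (for example, `1000700000/10000`). The first value in the range is a valid choice for the `runAsUser` and `runAsGroup` fields used later in the Helm values. A small shell sketch, assuming that example annotation value:

```shell
# Sketch: extract the first UID from the project's uid-range annotation.
uid_range="1000700000/10000"    # replace with your project's actual value

run_as_user="${uid_range%%/*}"  # strip "/<size>" to get the range start
echo "$run_as_user"             # prints 1000700000
```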
Cloud-managed PostgreSQL
- Connect to the database as the superuser (`postgres`) by creating an ephemeral container inside the Kubernetes cluster.

  There are numerous ways to connect to a PostgreSQL database; this step demonstrates how to connect by creating an ephemeral Kubernetes pod. Note that a shell prompt will not be displayed after executing the `oc run` command below. Wait 5 seconds, and then proceed by entering a password.

  ```shell
  oc run pgclient \
    --stdin \
    --tty \
    --rm \
    --image docker.io/bitnami/postgresql -- \
    psql --host <postgres-fqdn> --username postgres --port 5432 --password
  ```

- Create an `immuta` role and database.

  ```sql
  CREATE ROLE immuta WITH LOGIN ENCRYPTED PASSWORD '<postgres-password>';
  GRANT immuta TO CURRENT_USER;
  CREATE DATABASE immuta OWNER immuta;
  GRANT ALL ON DATABASE immuta TO immuta;
  ALTER ROLE immuta SET search_path TO bometadata,public;
  ```

- Revoke privileges from `CURRENT_USER`, as they are no longer required.

  ```sql
  REVOKE immuta FROM CURRENT_USER;
  ```

- Enable the `pgcrypto` extension.

  ```sql
  \c immuta
  CREATE EXTENSION pgcrypto;
  ```

- Type `\q`, and then press `Enter` to exit.
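The `immuta` role's password later appears inside a PostgreSQL connection URL in the Helm values (`postgres://immuta:<postgres-password>@...`). If the password contains URL-reserved characters such as `@`, `/`, or `:`, it must be percent-encoded there. A minimal bash sketch; the `urlencode` helper is hypothetical, not part of any Immuta or PostgreSQL tooling:

```shell
# Sketch (bash): percent-encode a password so it can be embedded safely in
# a connection URL such as postgres://immuta:<encoded>@<postgres-fqdn>:5432/immuta
urlencode() {
  local s="$1" out="" c i
  for (( i = 0; i < ${#s}; i++ )); do
    c="${s:i:1}"
    case "$c" in
      [a-zA-Z0-9.~_-]) out+="$c" ;;                  # unreserved: keep as-is
      *) printf -v c '%%%02X' "'$c"; out+="$c" ;;    # everything else: %XX
    esac
  done
  printf '%s\n' "$out"
}

urlencode 'p@ss/word'   # prints p%40ss%2Fword
```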
Install Immuta
This section demonstrates how to deploy Immuta using the Immuta Enterprise Helm chart once the prerequisite cloud-managed services are configured.
- Create a Helm values file named `immuta-values.yaml` with the content below. Because the Ingress resource will be managed by an OpenShift route you will create when configuring Ingress, not by the Immuta Enterprise Helm chart, `ingress.enabled` is set to `false` below. TLS comes pre-configured with OpenShift, so `tls` is also set to `false`.

  ```yaml
  global:
    imageRegistry: registry.immuta.com
    imagePullSecrets:
      - name: immuta-registry
  audit:
    config:
      databaseConnectionString: postgres://immuta:<postgres-password>@pg-db-postgresql.immuta.svc.cluster.local:5432/immuta?schema=audit
      elasticsearchEndpoint: http://es-db-elasticsearch.immuta.svc.cluster.local:9200
      elasticsearchUsername: <elasticsearch-username>
      elasticsearchPassword: <elasticsearch-password>
    deployment:
      podSecurityContext:
        # A number that is within the project range:
        # oc get project <project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.uid-range"}}{{"\n"}}'
        runAsUser: <user-id>
        # A number that is within the project range:
        # oc get project <project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.supplemental-groups"}}{{"\n"}}'
        runAsGroup: <group-id>
        seccompProfile:
          type: RuntimeDefault
      containerSecurityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
  discover:
    deployment:
      podSecurityContext:
        # A number that is within the project range:
        # oc get project <project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.uid-range"}}{{"\n"}}'
        runAsUser: <user-id>
        # A number that is within the project range:
        # oc get project <project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.supplemental-groups"}}{{"\n"}}'
        runAsGroup: <group-id>
        seccompProfile:
          type: RuntimeDefault
      containerSecurityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
  secure:
    extraEnvVars:
      - name: FeatureFlag_AuditService
        value: "true"
      - name: FeatureFlag_detect
        value: "true"
      - name: FeatureFlag_auditLegacyViewHide
        value: "true"
    ingress:
      enabled: false
    tls: false
    postgresql:
      host: <postgres-fqdn>
      port: 5432
      database: immuta
      username: immuta
      password: <postgres-password>
      ssl: false
    web:
      podSecurityContext:
        # A number that is within the project range:
        # oc get project <project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.uid-range"}}{{"\n"}}'
        runAsUser: <user-id>
        # A number that is within the project range:
        # oc get project <project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.supplemental-groups"}}{{"\n"}}'
        runAsGroup: <group-id>
        seccompProfile:
          type: RuntimeDefault
      containerSecurityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
    backgroundWorker:
      podSecurityContext:
        # A number that is within the project range:
        # oc get project <project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.uid-range"}}{{"\n"}}'
        runAsUser: <user-id>
        # A number that is within the project range:
        # oc get project <project-name> --output template='{{index .metadata.annotations "openshift.io/sa.scc.supplemental-groups"}}{{"\n"}}'
        runAsGroup: <group-id>
        seccompProfile:
          type: RuntimeDefault
      containerSecurityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
  ```
- Update all placeholder values in the `immuta-values.yaml` file.
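Before deploying, it can help to confirm that no placeholder tokens survived the edit. A hedged sketch of such a pre-deploy check; `check_placeholders` is a hypothetical helper, not part of the Immuta chart:

```shell
# Sketch: fail when unreplaced <placeholder> tokens, such as
# <postgres-password> or <user-id>, remain in the values file.
check_placeholders() {
  # succeeds only when no <placeholder> tokens are found; offending
  # lines (with line numbers) are printed otherwise
  ! grep -nE '<[a-z][a-z-]*>' "$1"
}

# Usage:
#   check_placeholders immuta-values.yaml || { echo "fix placeholders" >&2; exit 1; }
```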
- Deploy Immuta.

  ```shell
  helm install immuta immuta/immuta-enterprise \
    --values immuta-values.yaml
  ```
Validation
- Wait for all pods in the namespace to become ready.

  ```shell
  oc wait --for=condition=Ready pods --all
  ```

- Determine the name of the Secure service.

  ```shell
  oc get service --selector "app.kubernetes.io/component=secure" --output template='{{ .metadata.name }}'
  ```

- Listen on local port `8080`, forwarding TCP traffic to the Secure service's port named `http`.

  ```shell
  oc port-forward service/<name> 8080:http
  ```
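Once the port-forward is running, Immuta should accept TCP connections on `localhost:8080`. A hedged bash sketch that polls the forwarded port before you open the UI; `wait_for_port` is a hypothetical helper built on bash's `/dev/tcp`, not an `oc` feature:

```shell
# Sketch (bash): poll a TCP port until it accepts connections, or give up.
wait_for_port() {
  local host="$1" port="$2" tries="${3:-30}" i
  for (( i = 0; i < tries; i++ )); do
    # the subshell opens (and implicitly closes) fd 3 on success
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# Usage, after starting the port-forward in another terminal:
#   wait_for_port localhost 8080 && echo "Immuta Secure is reachable"
```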
Next steps
- Configure Ingress to complete your installation and access your Immuta application.
- Learn more about best practices for Immuta in Production.