Immuta Helm Chart: Release Notes
Audience: System Administrators
Content Summary: This page contains release notes for the Immuta Helm Chart.
Immuta Helm 4.9.3 includes bug fixes and small database initialization configuration updates.
Support passing database initialization variables via a new Helm setting: a key/value dictionary of variables passed to the database initialization process.
- Fix `backup.restore.allowMissingArchive` value reference typo.
- Fix validation logic for external database passwords when an existing secret is provided.
Immuta Helm 4.9.2 adds functionality to improve security context granularity.
- Default to Immuta 2022.1.0.
- Remove service token auto mounting.
- Deprecate `<component>.securityContext` in favor of the new granular security context settings.
Granular security context support

New settings allow security contexts to be set at multiple levels:

- Pod level security features on all pods.
- Container level security features on all containers.
- Pod level security features for individual components.
- Container level security features for individual components.
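As an illustration of what a pod-level override might look like, here is a sketch; the `global.podSecurityContext` key name is an assumption (the exact names of the new settings are not shown in these notes), and the field values follow the standard Kubernetes securityContext schema:

```yaml
# Hypothetical illustration only: verify the key names against the
# chart's values reference before using.
global:
  podSecurityContext:
    runAsNonRoot: true
    runAsUser: 31234
```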
Immuta Helm 4.9.0 includes bug fixes and updates to various components.
- Update immuta-deploy-tools image to v2.1.4
- Container registry runs in Kind
- Do not create or mount volume to `/var/lib/immuta` for older versions of Immuta (< 2022.1.0).
Immuta Helm 4.8.1 includes bug fixes and enables some use cases that were previously blocked.
- Add volume to support uploading Immuta plugins for restricted OpenShift security contexts.
- Add support for custom Azure blob domain for backups.
- Use credentials from existing secret when creating backups.
- Remove validation requirement for external database passwords when an existing secret is provided.
- Set Patroni config environment variable for interactive shell sessions.
- Don't create an ODBC volume (`/var/lib/immuta/odbc`) for older versions of Immuta (< 2021.4.0).
This release includes changes to support deploying Immuta on OpenShift and updates the built-in Ingress-Nginx controller to support Kubernetes 1.22+.
Support for OpenShift
It is now possible to deploy the Immuta Helm Chart on OpenShift Kubernetes clusters. For more details see Deploy Immuta on OpenShift.
Nginx Ingress Controller v1
The Ingress-Nginx controller has been updated from v0.x to v1.x in order to
support Kubernetes 1.22+. Due to breaking changes, the v1 release of
Ingress-Nginx does not support Kubernetes versions older than 1.19. Users who
have existing deployments on older Kubernetes clusters can set
nginxIngress.controller.image.tag=v0.49.3 in the Immuta Helm values.
```yaml
nginxIngress:
  controller:
    image:
      tag: v0.49.3
```
- Upgrade Ingress-Nginx controller to v1.1.0.
- Creation of Ingress resources can now be disabled.
- Support configuring Patroni to use ConfigMaps for leader election (required for deployment on OpenShift).
- Added pod anti-affinity rules for Redis cache Pods.
- Allow annotations to be set on the Query Engine client Service.
- Add Helm value `cache.updateStrategy` to allow update strategy to be set for cache workloads.
- Fix for database restores from blob storage into an external database.
- Fix for backup SecurityContext overrides not taking effect.
This release includes new features and other changes intended to improve the usability of the Immuta Helm Chart.
New features include the ability to use an external PostgreSQL database for
Immuta metadata and the option to use Redis as a cache. Other notable
improvements are the ability to set a global image registry override, support
for restrictive PodSecurityPolicy configurations, configurable
`runAsUser` settings for most components, and support for
Kubernetes API resources that have moved out of beta.
External PostgreSQL Database Support
It is now possible to use an external PostgreSQL instance as the Immuta Metadata
Database when running Immuta in Kubernetes. When enabled, this functionality
replaces the built-in PostgreSQL metadata database that runs in Kubernetes. This
functionality is enabled by setting `database.enabled=false` and passing the
connection information for the PostgreSQL instance under the `externalDatabase` key.

```yaml
database:
  enabled: false
externalDatabase:
  hostname: external-postgres.database.hostname
  password: bometauserpassword
  superuser:
    username: postgres
    password: postgrespassword
```
For existing deployments it is possible to migrate from the built-in database
to an external database. In order to migrate, backups must be configured, and a
backup should be taken immediately prior to migrating. The process of migrating
can be done by running `helm upgrade` with the external database Helm values set.
This functionality is compatible with Immuta 2021.3+.
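The migration described above can be sketched as follows; the release name and values file name are placeholders:

```shell
# Take a backup immediately before running this, as described above.
# Substitute your release name and your own values file containing the
# externalDatabase settings.
helm upgrade <release-name> immuta/immuta -f external-database-values.yaml
```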
There is now the option to use Redis as the cache implementation for Immuta. The primary motivation for offering this option is to enable the use of TLS for network traffic between the Immuta web service and cache pods.
In order to select Redis as the cache implementation for Immuta, set the Helm
value `cache.type=redis` when installing the Immuta Helm Chart.
```yaml
cache:
  type: redis
```
TLS is enabled for internal network traffic by default: when Redis is used,
unless `tls.enabledInternal=false` is set, TLS will be used for network
traffic between Immuta and Redis.
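For example, to opt out of TLS between Immuta and Redis (not generally recommended), set the value referenced above:

```yaml
tls:
  enabledInternal: false
```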
When Redis is selected as the cache for Immuta, the Immuta Web Service pods
will contain an additional container that runs
envoy as an ambassador sidecar; the envoy
container is named "cache-proxy-sidecar." In this configuration,
`kubectl logs` commands must include the flag
`--container=service` in order to access logs
for the Immuta Web Service.
Global Image Registry Override
It is now possible to configure the container image registry globally for all images referenced by the Immuta Helm Chart.
```yaml
global:
  imageRegistry: registry.mycorp.com
```
Immuta images are referenced relative to the configured image registry by adding the
`immuta/` prefix. Third-party images are referenced without the
`immuta/` prefix.
If these images are not available in the custom registry under the root prefix,
it is possible to configure the image repository. This
may be the case if you have pulled these images and pushed them to your
registry under the `immuta/` prefix.
```yaml
cache:
  redis:
    image:
      repository: immuta/redis
  memcached:
    image:
      repository: immuta/memcached
  proxySidecar:
    image:
      repository: immuta/envoyproxy-envoy-alpine
nginxIngress:
  controller:
    image:
      repository: immuta/ingress-nginx-controller
```
Init containers for initializing database and Query Engine persistent volumes no
longer run as root. In addition to increasing overall security by running
init containers as an unprivileged user, this also means that Immuta is now
compatible with restrictive PodSecurityPolicy configurations and
other similar policies.
Configurable runAsUser for Most Components
The Pod SecurityContext is now configurable for all components except for Nginx
Ingress. This means that the user ID that the Pods run as can now be customized
by setting `securityContext.runAsUser` for the components that support it. The
following example shows the components that are configurable.
```yaml
backup:
  securityContext:
    runAsUser: 31234
cache:
  securityContext:
    runAsUser: 31234
database:
  securityContext:
    runAsUser: 31234
fingerprint:
  securityContext:
    runAsUser: 31234
queryEngine:
  securityContext:
    runAsUser: 31234
web:
  securityContext:
    runAsUser: 31234
```
Kubernetes API Version Updates
The resources created by the Immuta Helm Chart will now use stable versions of the following resources when they are available on the Kubernetes cluster.
- `networking.k8s.io/v1` for Ingress resources.
- `batch/v1` for CronJob resources.
- `policy/v1` for PodDisruptionBudget resources.
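For reference, a minimal Ingress using the stable `networking.k8s.io/v1` API looks like this; the host and Service names below are made-up examples, not values the chart generates:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: immuta-example  # hypothetical name
spec:
  rules:
    - host: immuta.mycorp.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: immuta-web  # hypothetical Service name
                port:
                  number: 443
```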
Upgrading to 4.7.0 from 4.6 should be possible with minimal or no changes to the Helm values. The following section identifies any caveats or recommendations that should be taken into account when upgrading.
helm upgrade --reuse-values not supported for upgrade from 4.6 to 4.7
Due to changes in the Chart's default values,
`--reuse-values` does not work. If you have a saved values file, use that when
running `helm upgrade`. If you don't have a saved values file, you can use
`helm get values` to save the Helm values locally before upgrading.

```shell
helm get values <release-name> > release-values.yaml
helm upgrade <release-name> immuta/immuta -f release-values.yaml
```
Deprecated Helm Values
Some values have been deprecated in this release. The following table indicates the deprecated values and the migration path for each one. Update any custom Helm values referencing the deprecated values at your earliest convenience to avoid issues when the deprecated values are removed.
|Deprecated Value|Replacement Value|
|---|---|
- Update Kubernetes Batch API versions.
- Update Kubernetes Networking API versions.
- Update Kubernetes Policy API versions.
- Startup init container checks now have a timeout.
- Support for disabling built-in database and using an external database.
- Database and Query Engine `initContainers` no longer run as root.
- Runtime user ID is now configurable for all Pods except Nginx Ingress.
- Added shared memory volume for Database and Query Engine by default.
- It is now possible to set a Docker image registry globally in Helm values.
- The environment variable `noproxy` will now be configured such that internal network connections do not use the proxy if `extraEnv` values are used to set proxy settings.
- Added option to deploy Redis with TLS instead of Memcached for the cache.
- Database and Query Engine Pods now crash and restart when the backup restore process fails.
- The TLS generation hook will now regenerate TLS certs when necessary.
This release makes it possible to increase the size of `/dev/shm` in the
Database and Query Engine Pods. This functionality makes use of memory-backed
volumes.
```yaml
database:
  sharedMemoryVolume:
    enabled: true
queryEngine:
  sharedMemoryVolume:
    enabled: true
```
- Add support for memory-backed volumes to increase the size of `/dev/shm`.
- Update Helm chart
This release includes an update to the default image tag for the bundled Nginx Ingress Controller.
The ingress-nginx project has released a few updates since
v0.47.0. This release updates the default version to
v0.49.3, which includes an update to Alpine Linux
v3.14.2. This update
addresses CVE-2021-3711, which
v0.47.0 was vulnerable to.
To upgrade the Nginx Ingress Controller without upgrading the Immuta Helm Chart, you can set the following Helm values.
```yaml
nginxIngress:
  controller:
    imageTag: v0.49.3
```
- Update Helm chart
- Update ingress-nginx controller to v0.49.3.
This release updates the Immuta Helm Chart
appVersion to 2021.2.2 and adds
support for using an S3-compatible server for backup and restore. It also
contains a fix for the metrics export CronJob in Immuta 2021.2+.
Using a Custom S3 Endpoint
In order to use a custom S3 endpoint for backup and restore, new Helm values have been added. The example below shows how to configure an S3 compatible endpoint.
```yaml
backup:
  s3:
    # The endpoint URL of an s3-compatible server
    endpoint: s3-compatible.mycorp.com
    # The CA bundle in PEM format used to verify the TLS certificates of the endpoint
    caBundle: |
      -----BEGIN CERTIFICATE-----
      MIIC5zCCAc+gAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl
      cm5ldGVzMB4XDTIxMDYzMDE1MTg0M1oXDTMxMDYyODE1MTg0M1owFTETMBEGA1UE
      AxMKa3ViZXJuZXRlczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMMR
      sFzDeAuCVy5JpkrCNp+A0zOrHX6BnTbDrzRV81Pfm2H6y8Pc3pASDgItiwqUBbeu
      fZ+5SAlZ2JY2O27mB1WM5Ajd5kSs7FausrHniSkKM+NlclPXXrBxcoli9UOEbo9T
      k+CrpV/I+EYYiMDKLT/tMX5AJSUavRHdb2n59bWEc2C1HTeBsr0jotn6zOHmhIAG
      O66SeCKjzR6MZ2TO8IWzGpqfY0a+QIqr4Z2Ihy+i6HvU3nXt2PVW5lQp5EN9Gmig
      g5x4re68paAmbU6FfeP9ruPoKjsQq5w70J/miPJb7TS59fdaWAM9yUS3ENG1hLZ7
      ALGIGJoUgy0QR/2DaV0CAwEAAaNCMEAwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB
      /wQFMAMBAf8wHQYDVR0OBBYEFLSraVnEIKrTomzE15ga4jSLdPuFMA0GCSqGSIb3
      DQEBCwUAA4IBAQBb5IcC45qcil48uSip7uBkfKPUFvQeTrq0Zg4zzYuNvdH+uDnH
      05Tz3k8dYBNyvIbh+TahCmmrUUyKgtvHeKrXfrqHoJwM7YTSIJTf7aVSEjBMtvjs
      4dnP3HoGOjJcXIadoZ5gvNUEepTQREu6/5j/Mq4F07UDhrYNZNMwdP3pXpadB9q3
      8fxjl88quJ4wWhq82hrwrGBv+z6oAoUbskvticWuu5eB8QkA3+tDQ5q1ZGjKGSqf
      G5xnXhHnQYy9J+JZ09JoySL3R5+959hJBC03Dwb38rLt/+Vkzdz8ILPT+PUyp5m4
      5KqYJfOVUzRqK0FJY67CkpFmi+4kuqgdGaGs
      -----END CERTIFICATE-----
    # Set to "true" to force the use of path-style addressing as opposed to hostname-style addressing
    forcePathStyle: true
    # Set to "true" to disable making an SSL connection to the endpoint
    #disableSSL:
```
- Update Helm chart appVersion to 2021.2.2.
- Add blob storage endpoint support for backup storage.
- Fix Metrics CronJob errors for Immuta 2021.2+
This release includes usability and stability improvements and an update to the bundled Nginx Ingress Controller. Usability improvements include the option to use an existing Kubernetes Secret for Immuta database passwords. Stability improvements include a change in configuration for the Immuta database and Query Engine StatefulSets.
Existing Kubernetes Secret
This release adds the option to use an existing Kubernetes Secret for Immuta database passwords used in the Helm installation. Using an existing Secret can be useful when you want more control over the creation of the Secret or do not want to include the sensitive values in the Immuta Helm values file.
To use an existing Kubernetes Secret, you must first create a Secret containing the required data keys.
Note: This is an example showing the required data keys and some sample values. For more details on creating Secrets in Kubernetes, see the Kubernetes Documentation.
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: existing-immuta-secret
type: Opaque
data:
  databasePassword: VUc5NWEySkxObWRXU0Rkbk4zQTNUQQ==
  databasePatroniApiPassword: b0hYY0RwWDNURUhQZ250VQ==
  databaseReplicationPassword: dzZyem9ndVgzYTZoRkVuMw==
  databaseSuperuserPassword: REZBUmg2VTdianBrVVRmZQ==
  queryEnginePassword: YTQyUHRoS3RvYUhOTDZFYg==
  queryEnginePatroniApiPassword: ZDNOZTNFb1FZaFU0UDNWTQ==
  queryEngineReplicationPassword: RWRXMlJEdmV6UjdFSkxVNA==
  queryEngineSuperuserPassword: N0tNelF3WEgzeFhHRHBwRw==
```
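Each value under `data` is the base64 encoding of the actual password. A minimal sketch of producing such a value (the password below is a made-up example):

```python
import base64

# Kubernetes Secret "data" fields hold base64-encoded bytes.
password = "s3cr3t-db-password"  # hypothetical example value
encoded = base64.b64encode(password.encode()).decode()
print(encoded)  # → czNjcjN0LWRiLXBhc3N3b3Jk

# Decoding recovers the original value; this is what is presented
# to the pods at runtime.
assert base64.b64decode(encoded).decode() == password
```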
Reference the Secret by name in the Immuta Helm values.
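For example, using the `existingSecret` value mentioned later in this section and the Secret name from the example above (the top-level placement of the key is an assumption; verify against the chart's values reference):

```yaml
existingSecret: existing-immuta-secret
```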
If you have an existing release, you can migrate to an existing Kubernetes Secret by creating the Secret with the values previously defined in your Helm values by using the following mappings.
|Helm Value|Secret Data Key|
|---|---|

Complete the migration by setting the `existingSecret` value and removing the password values from your
Immuta Helm values file. Apply the change by running `helm upgrade` for your
release, referencing the values file.
Database StatefulSet Configuration Update
This release includes a change in the default settings for Patroni, which is used in the database and Query Engine StatefulSets. This setting has been shown to improve the stability of the StatefulSets when the pods are restarted frequently due to cluster operations, such as node upgrades.
For new installations, no action is necessary to make use of this configuration change. For existing installations, the change must be applied manually to the StatefulSet pods.
```shell
kubectl exec -it RELEASE_NAME-immuta-database-0 -- patronictl edit-config --set postgresql.use_pg_rewind=true
kubectl exec -it RELEASE_NAME-immuta-query-engine-0 -- patronictl edit-config --set postgresql.use_pg_rewind=true
```
Note: Replace "RELEASE_NAME" with the name of your Immuta Helm release.
Nginx Ingress Controller Update
- Add support for using an existing Kubernetes Secret for passwords.
- Enable `use_pg_rewind` for Patroni by default.
- Update ingress-nginx controller to v0.47.0 to mitigate CVE-2021-23017.
This release updates versions for various components.
- Update Helm chart app version to 2021.2.0.
- Update immuta-deploy-tools image to 1.1.2.
- Update default ingress-nginx controller image to v0.46.0.
This release adds support for Immuta 2021.2 and allows scaling Immuta web service pods to zero.
- Update container uid and gid in
- Remove dependency on
- Allow web pod replicas to be set to zero.
This release updates the default Immuta version to 2021.1.3 and the default memcached image tag to 1.6-alpine.
- Update memcached image to 1.6-alpine.
This release contains a fix for an issue where setting resource requests and limits was not possible for some containers created by the Immuta Helm Chart.
- Set configurable resource limits for all containers and init containers
This release contains updates that enable the configuration of options that were previously not exposed in the Helm Chart.
The Immuta Fingerprint service configuration can now be set in Helm values. For
example, to set the
`worker_timeout` option to a value of 300, set the following Helm values:

```yaml
fingerprint:
  extraConfig:
    worker_timeout: 300
```
The log level for the Immuta Fingerprint service can also be configured:

```yaml
fingerprint:
  logLevel: DEBUG
```
- Add ability to pass in configuration for Immuta Fingerprint service
- Add port 8008 to Database and Query Engine Endpoints
When upgrading to 4.6.2 with
nginxIngress.enabled set to
true, a new ConfigMap was introduced to be used by the
NGINX Ingress Controller for leader election. This will cause the old ConfigMap, named
ingress-controller-leader-immuta-<RELEASE>-ingress, to be orphaned. This will not affect the operation of Immuta and
may be removed by running
```shell
kubectl delete configmap ingress-controller-leader-immuta-<RELEASE>-ingress
```
- Fix templating error thrown by the `extraEnv` value when using `--reuse-values` coming from 4.5.x charts.
- Use Immuta config item `publicPostgres.ssl` starting with Immuta version 2020.3.
- Adopt nginx-ingress leader election ConfigMap into the chart when using Optional Ingress Controller component.
- Set `log_error_verbosity = TERSE` in Postgres configuration.
- Update default version to 2020.4.0.
- Add port 8008 to primary database and query-engine services.
- Add ability to set annotations on all service accounts.
- Set `.spec.terminationGracePeriodSeconds` for all Immuta pods.
- Update Query Engine readiness and liveness probes to use Patroni REST API endpoints.
- Update Database readiness and liveness probes to use Patroni REST API endpoints.
- Database password is no longer used in Query Engine init-container.
- Add support for LDAPS-backed Query Engine Authentication.
- Web Service pods now wait for database migrations to execute before startup.
This release contains a fix for a race condition introduced in 4.5.3 when performing a new install.
- Removed Endpoint resource for Database and Query Engine components to prevent race condition with Service. Endpoints are now created by Database and Query Engine Service resource.
This release contains a few fixes and updates the default Immuta application version to 2020.2.8.
- Support EndpointSlices for Database and Immuta Query Engine in Kubernetes 1.17+.
This release contains a few fixes and updates the default Immuta application version to 2020.2.7.
- Update default values for Immuta Query Engine to support Elastic.
- Helm hooks fail to run with helm older than 3.2.0.
- TLS Ciphers used in NGINX Ingress Controller incompatibility with default TLS cipher suites set in Databricks.
This release contains a fix for a bug introduced with the last release that
caused `helm install` to time out and fail.
If you are using Argo CD to deploy the Immuta Helm Chart, then you may notice that the TLS secret will be marked OutOfSync (requires pruning). The TLS secret (which is used for the encryption of inter-pod network traffic) should not be pruned. Update the Helm values with the following so that the TLS secret can be monitored as a resource without the risk of unwanted pruning.
```yaml
tls:
  manageGeneratedSecret: true
```
- Fix issue with TLS generation during `helm install`.
Argo CD Support
The Immuta Helm Chart now supports deployment using Argo CD. Prior to this, the TLS generation hook would create a Secret that was not tracked by Helm. In Argo CD this resource would appear to need pruning. There was also an issue with the database and Query Engine endpoints, in which they would appear to be out of sync, but syncing them would remove the runtime changes that were being applied by Patroni. These issues have been resolved, and Argo CD is now supported.
For Argo CD versions older than 1.7.0 you must use the following Helm values in order for the TLS generation hook to run successfully.
```yaml
hooks:
  tlsGeneration:
    hookAnnotations:
      helm.sh/hook-delete-policy: "before-hook-creation"
```
Starting with Argo CD version 1.7.0 the default Immuta Helm Chart values can be used.
Bundled Ingress Nginx Upgraded
The bundled version of ingress-nginx has been upgraded to 0.34.1. In addition to upgrading the default version, the cluster scoped resources that used to be created (ClusterRole and ClusterRoleBinding) are no longer required and have been removed.
Built-in Support for Azure LoadBalancer Annotations
Setting the Helm value
nginxIngress.controller.service.isInternal will now
cause an internal Azure load balancer to be created for nginx ingress.
Support Setting the externalTrafficPolicy
A new value was added to set the
externalTrafficPolicy on the nginx ingress
Service. Setting this value to "Local" can be useful for preserving client IP
addresses. See the Kubernetes documentation
for more information on preserving the client source IP.
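For example, the value can be set like this; the exact key path is an assumption based on the other nginx ingress Service settings in this chart:

```yaml
nginxIngress:
  controller:
    service:
      externalTrafficPolicy: Local
```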
When upgrading an existing Helm release from chart version
<=4.4 that was
using the default values
tls.create=true, you must
first annotate and label the Immuta TLS Secret so that it can be adopted by Helm.
You will need to complete these steps if you encounter either of the following
errors when running the
helm upgrade command.
```
Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: Secret "immuta-tls" in namespace "default" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "immuta"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "default"
```

```
Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: Secret "immuta-tls" in namespace "default" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"
```
To resolve these upgrade errors, run the following commands, being sure to substitute the proper Helm release name and namespace.
```shell
kubectl annotate secret \
  -l app=immuta,component=generated-tls,release=<RELEASE_NAME> \
  meta.helm.sh/release-name=<RELEASE_NAME> \
  meta.helm.sh/release-namespace=<RELEASE_NAMESPACE>

kubectl label secret \
  -l app=immuta,component=generated-tls,release=<RELEASE_NAME> \
  app.kubernetes.io/managed-by=Helm
```
After this, you can proceed to run `helm upgrade`.
- Refactored Helm Chart hooks to work with Argo CD.
- Updated Helm Chart description.
- Update web pod annotations so that password changes cause a rolling restart.
- Upgrade ingress-nginx to 0.34.1.
- Support setting `externalTrafficPolicy` on the nginx ingress Service.
- Update nginx ingress Service to include Azure annotations.
- Fix issue with release names that contain periods.
- Fix fingerprint configuration when TLS is disabled.
Setting Global Pod Annotations and Labels
It is now possible to set pod annotations and labels at a global level. When
set, these labels and annotations will be used for all pods that the Immuta
Helm Chart creates. Pod labels and annotations can be set using the Helm values
`global.podAnnotations` and `global.podLabels`, each a map of string to string.
```yaml
global:
  # annotations to be added to every pod
  podAnnotations:
    example.org/latest-configuration: 3d0726f97faa2e4482d7bd31114a26c3976ed96dba5804d951bf480a6af8810c
  # labels to be added to every pod
  podLabels:
    example.org/team: "alpha"
```
Labels and annotations can also be set individually for each component in the
Immuta Helm Chart. To set labels and annotations for an individual component,
set the Helm values `<componentName>.podAnnotations` and
`<componentName>.podLabels`, each a map of string to string.
```yaml
web:
  podAnnotations:
    example.org/latest-configuration: 3d0726f97faa2e4482d7bd31114a26c3976ed96dba5804d951bf480a6af8810c
  podLabels:
    example.org/team: "alpha"
queryEngine:
  podAnnotations:
    example.org/latest-configuration: 7c6f707ce995b34b9a09a4df6f0b20e8580914f65b5117b10318a35a465a3aa8
  podLabels:
    example.org/team: "beta"
```
- Support labels/annotations on all pods.
- Fix for Query Engine pod referencing image repository from database values.
- Remove the option to configure Data Source CA certificates using a ConfigMap.
Support for Custom nodeSelector and tolerations

It is now possible to set custom `nodeSelector` and `tolerations` for each
component in the Immuta Helm Chart.
To set a custom `nodeSelector`, set the Helm value for
`<componentName>.nodeSelector` to a valid `nodeSelector`. See the
Kubernetes documentation for more details.
```yaml
web:
  nodeSelector:
    lifecycle: spot
database:
  nodeSelector:
    lifecycle: on-demand
```
To set custom `tolerations`, set the Helm value for
`<componentName>.tolerations` to a valid `tolerations` list. See the
Kubernetes documentation for more details.
```yaml
web:
  tolerations:
    - key: lifecycle
      operator: Equal
      value: spot
      effect: NoSchedule
database:
  tolerations:
    - key: lifecycle
      operator: Equal
      value: on-demand
      effect: NoSchedule
```
- Support setting `tolerations` on all pods.