Immuta Helm Chart: Release Notes
This release contains a few fixes and updates the default Immuta application version to 2020.2.7.
- Update default values for Immuta Query Engine to support Elastic.
- Helm hooks fail to run with Helm older than 3.2.0.
- TLS Ciphers used in NGINX Ingress Controller are incompatible with default TLS cipher suites set in Databricks.
- Rolling updates fail to complete for Deployments when the number of replicas equals the number of nodes and
This release contains a fix for a bug introduced in the last release that caused `helm install` to time out and fail when used with the
If you are using Argo CD to deploy the Immuta Helm Chart, then you may notice that the TLS secret will be marked OutOfSync (requires pruning). The TLS secret (which is used for the encryption of inter-pod network traffic) should not be pruned. Update the Helm values with the following so that the TLS secret can be monitored as a resource without the risk of unwanted pruning.
```yaml
tls:
  manageGeneratedSecret: true
```
- Fix issue with TLS generation and the `helm install` command.
Argo CD Support
The Immuta Helm Chart now supports deployment using Argo CD. Prior to this, the TLS generation hook would create a Secret that was not tracked by Helm. In Argo CD this resource would appear to need pruning. There was also an issue with the database and Query Engine endpoints, in which they would appear to be out of sync, but syncing them would remove the runtime changes that were being applied by Patroni. These issues have been resolved, and Argo CD is now supported.
For Argo CD versions older than 1.7.0 you must use the following Helm values in order for the TLS generation hook to run successfully.
```yaml
hooks:
  tlsGeneration:
    hookAnnotations:
      helm.sh/hook-delete-policy: "before-hook-creation"
```
Starting with Argo CD version 1.7.0 the default Immuta Helm Chart values can be used.
Bundled Ingress Nginx Upgraded
The bundled version of ingress-nginx has been upgraded to 0.34.1. In addition to upgrading the default version, the cluster-scoped resources that were previously created (ClusterRole and ClusterRoleBinding) are no longer required and have been removed.
Built-in Support for Azure LoadBalancer Annotations
Setting the Helm value `nginxIngress.controller.service.isInternal` will now cause an internal Azure load balancer to be created for nginx ingress.
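A minimal values sketch of how this might look, assuming `isInternal` is a boolean toggle under the `nginxIngress.controller.service` path named above:

```yaml
nginxIngress:
  controller:
    service:
      # assumed boolean toggle; when true, an internal Azure
      # load balancer is created for nginx ingress
      isInternal: true
```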
Support Setting the `externalTrafficPolicy`
A new value was added to set the `externalTrafficPolicy` on the nginx ingress Service. Setting this value to "Local" can be useful for preserving client IP addresses. See the Kubernetes documentation for more information on preserving the client source IP.
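A minimal values sketch, assuming the new value lives under the same `nginxIngress.controller.service` path used for the other Service settings in this chart:

```yaml
nginxIngress:
  controller:
    service:
      # "Local" preserves the client source IP; traffic is only routed
      # to nodes that have a ready ingress controller pod
      externalTrafficPolicy: "Local"
```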
When upgrading an existing Helm release from chart version <=4.4 that was using the default value `tls.create=true`, you must first annotate and label the Immuta TLS Secret so that it can be adopted by Helm.
You will need to complete these steps if you encounter either of the following errors when running the `helm upgrade` command.
```
Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: Secret "immuta-tls" in namespace "default" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "immuta"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "default"
```

```
Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: Secret "immuta-tls" in namespace "default" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"
```
To resolve these upgrade errors, run the following commands, being sure to substitute the proper Helm release name and namespace.
```shell
kubectl annotate secret \
  -l app=immuta,component=generated-tls,release=<RELEASE_NAME> \
  meta.helm.sh/release-name=<RELEASE_NAME> \
  meta.helm.sh/release-namespace=<RELEASE_NAMESPACE>

kubectl label secret \
  -l app=immuta,component=generated-tls,release=<RELEASE_NAME> \
  app.kubernetes.io/managed-by=Helm
```
After this, you can proceed to run `helm upgrade`.
- Refactored Helm Chart hooks to work with Argo CD.
- Updated Helm Chart description.
- Update web pod annotations so that password changes cause a rolling restart.
- Upgrade ingress-nginx to 0.34.1.
- Support setting `externalTrafficPolicy` on the nginx ingress Service.
- Update nginx ingress Service to include Azure annotations.
- Fix issue with release names that contain periods.
- Fix fingerprint configuration when TLS is disabled.
Setting Global Pod Annotations and Labels
It is now possible to set pod annotations and labels at a global level. When set, these labels and annotations will be used for all pods that the Immuta Helm Chart creates. Pod labels and annotations can be set using the Helm values `global.podAnnotations` and `global.podLabels`, each a map of string to string.
```yaml
global:
  # annotations to be added to every pod
  podAnnotations:
    example.org/latest-configuration: 3d0726f97faa2e4482d7bd31114a26c3976ed96dba5804d951bf480a6af8810c
  # labels to be added to every pod
  podLabels:
    example.org/team: "alpha"
```
Labels and annotations can also be set individually for each component in the Immuta Helm Chart. To set labels and annotations for an individual component, set the Helm values `<componentName>.podAnnotations` and `<componentName>.podLabels`, each a map of string to string.
```yaml
web:
  podAnnotations:
    example.org/latest-configuration: 3d0726f97faa2e4482d7bd31114a26c3976ed96dba5804d951bf480a6af8810c
  podLabels:
    example.org/team: "alpha"
queryEngine:
  podAnnotations:
    example.org/latest-configuration: 7c6f707ce995b34b9a09a4df6f0b20e8580914f65b5117b10318a35a465a3aa8
  podLabels:
    example.org/team: "beta"
```
- Support labels/annotations on all pods.
- Fix for Query Engine pod referencing image repository from database values.
- Remove the option to configure Data Source CA certificates using a ConfigMap.
Support for Custom `nodeSelector` and `tolerations`
It is now possible to set a custom `nodeSelector` and custom `tolerations` for each component in the Immuta Helm Chart.
To set a custom `nodeSelector`, set the Helm value `<componentName>.nodeSelector` to a valid `nodeSelector`. See the Kubernetes documentation for more details.
```yaml
web:
  nodeSelector:
    lifecycle: spot
database:
  nodeSelector:
    lifecycle: on-demand
```
To set custom `tolerations`, set the Helm value `<componentName>.tolerations` to a valid list of `tolerations`. See the Kubernetes documentation for more details.
```yaml
web:
  tolerations:
    - key: lifecycle
      operator: Equal
      value: spot
      effect: NoSchedule
database:
  tolerations:
    - key: lifecycle
      operator: Equal
      value: on-demand
      effect: NoSchedule
```
- Support setting `tolerations` on all pods.