Immuta Helm Chart: Release Notes
Audience: System Administrators
Content Summary: This page contains release notes for the Immuta Helm Chart.
This release updates the Immuta Helm Chart appVersion to 2021.2.2 and adds support for using an S3-compatible server for backup and restore. It also contains a fix for the metrics export CronJob in Immuta 2021.2+.
Using a Custom S3 Endpoint
To use a custom S3 endpoint for backup and restore, new Helm values have been added. The example below shows how to configure an S3-compatible endpoint.
```yaml
backup:
  s3:
    # The endpoint URL of an S3-compatible server
    endpoint: s3-compatible.mycorp.com
    # The CA bundle in PEM format used to verify the TLS certificates of the endpoint
    caBundle: |
      -----BEGIN CERTIFICATE-----
      MIIC5zCCAc+gAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl
      cm5ldGVzMB4XDTIxMDYzMDE1MTg0M1oXDTMxMDYyODE1MTg0M1owFTETMBEGA1UE
      AxMKa3ViZXJuZXRlczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMMR
      sFzDeAuCVy5JpkrCNp+A0zOrHX6BnTbDrzRV81Pfm2H6y8Pc3pASDgItiwqUBbeu
      fZ+5SAlZ2JY2O27mB1WM5Ajd5kSs7FausrHniSkKM+NlclPXXrBxcoli9UOEbo9T
      k+CrpV/I+EYYiMDKLT/tMX5AJSUavRHdb2n59bWEc2C1HTeBsr0jotn6zOHmhIAG
      O66SeCKjzR6MZ2TO8IWzGpqfY0a+QIqr4Z2Ihy+i6HvU3nXt2PVW5lQp5EN9Gmig
      g5x4re68paAmbU6FfeP9ruPoKjsQq5w70J/miPJb7TS59fdaWAM9yUS3ENG1hLZ7
      ALGIGJoUgy0QR/2DaV0CAwEAAaNCMEAwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB
      /wQFMAMBAf8wHQYDVR0OBBYEFLSraVnEIKrTomzE15ga4jSLdPuFMA0GCSqGSIb3
      DQEBCwUAA4IBAQBb5IcC45qcil48uSip7uBkfKPUFvQeTrq0Zg4zzYuNvdH+uDnH
      05Tz3k8dYBNyvIbh+TahCmmrUUyKgtvHeKrXfrqHoJwM7YTSIJTf7aVSEjBMtvjs
      4dnP3HoGOjJcXIadoZ5gvNUEepTQREu6/5j/Mq4F07UDhrYNZNMwdP3pXpadB9q3
      8fxjl88quJ4wWhq82hrwrGBv+z6oAoUbskvticWuu5eB8QkA3+tDQ5q1ZGjKGSqf
      G5xnXhHnQYy9J+JZ09JoySL3R5+959hJBC03Dwb38rLt/+Vkzdz8ILPT+PUyp5m4
      5KqYJfOVUzRqK0FJY67CkpFmi+4kuqgdGaGs
      -----END CERTIFICATE-----
    # Set to "true" to force the use of path-style addressing as opposed to hostname-style addressing
    forcePathStyle: true
    # Set to "true" to disable making an SSL connection to the endpoint
    #disableSSL:
```
- Update Helm chart appVersion to 2021.2.2.
- Add blob storage endpoint support for backup storage.
- Fix metrics CronJob errors for Immuta 2021.2+.
This release includes usability and stability improvements and an update to the bundled Nginx Ingress Controller. Usability improvements include the option to use an existing Kubernetes Secret for Immuta database passwords. Stability improvements include a change in configuration for the Immuta database and Query Engine StatefulSets.
Existing Kubernetes Secret
This release adds the option to use an existing Kubernetes Secret for Immuta database passwords used in the Helm installation. Using an existing Secret can be useful when you want more control over the creation of the Secret or do not want to include the sensitive values in the Immuta Helm values file.
To use an existing Kubernetes Secret, you must first create a Secret containing the required data keys.
Note: This is an example showing the required data keys and some sample values. For more details on creating Secrets in Kubernetes, see the Kubernetes Documentation.
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: existing-immuta-secret
type: Opaque
data:
  databasePassword: VUc5NWEySkxObWRXU0Rkbk4zQTNUQQ==
  databasePatroniApiPassword: b0hYY0RwWDNURUhQZ250VQ==
  databaseReplicationPassword: dzZyem9ndVgzYTZoRkVuMw==
  databaseSuperuserPassword: REZBUmg2VTdianBrVVRmZQ==
  queryEnginePassword: YTQyUHRoS3RvYUhOTDZFYg==
  queryEnginePatroniApiPassword: ZDNOZTNFb1FZaFU0UDNWTQ==
  queryEngineReplicationPassword: RWRXMlJEdmV6UjdFSkxVNA==
  queryEngineSuperuserPassword: N0tNelF3WEgzeFhHRHBwRw==
```
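The values under `data` must be base64-encoded. A quick way to produce them from plain-text passwords (the password below is a placeholder):

```shell
# Encode a plain-text password for use in the Secret's data map.
# printf avoids the trailing newline that echo would add.
printf '%s' 'my-database-password' | base64
```

Alternatively, `kubectl create secret generic` accepts plain values via `--from-literal` and performs the encoding for you.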
Reference the Secret by name in the Immuta Helm values.
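For example, assuming the Secret above and that the chart exposes the reference as a top-level `existingSecret` value (verify the key name against your chart version):

```yaml
# Name of a pre-created Kubernetes Secret holding the password data keys
existingSecret: existing-immuta-secret
```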
If you have an existing release, you can migrate to an existing Kubernetes Secret by creating the Secret with the values previously defined in your Helm values, using the following mappings.

|Helm Value|Secret Data Key|
|---|---|

Then update your Immuta Helm values by setting the `existingSecret` value and removing the password values from your values file. Apply the change by running `helm upgrade` for your release, referencing the values file.
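A sketch of the upgrade command, where the release name, chart reference, and values file name are placeholders for your own:

```shell
# Apply the updated values file to the existing release.
# "immuta" (release), "immuta/immuta" (chart), and
# "immuta-values.yaml" are placeholders.
helm upgrade immuta immuta/immuta -f immuta-values.yaml
```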
Database StatefulSet Configuration Update
This release includes a change to the default Patroni settings used in the database and Query Engine StatefulSets. The new setting has been shown to improve the stability of the StatefulSets when pods are restarted frequently due to cluster operations, such as node upgrades.
For new installations, no action is necessary to make use of this configuration change. For existing installations, the change must be applied manually to the StatefulSet pods.
```shell
kubectl exec -it RELEASE_NAME-immuta-database-0 -- \
  patronictl edit-config --set postgresql.use_pg_rewind=true
kubectl exec -it RELEASE_NAME-immuta-query-engine-0 -- \
  patronictl edit-config --set postgresql.use_pg_rewind=true
```
Note: Replace "RELEASE_NAME" with the name of your Immuta Helm release.
Nginx Ingress Controller Update
- Add support for using an existing Kubernetes Secret for passwords.
- Enable `use_pg_rewind` for Patroni by default.
- Update ingress-nginx controller to v0.47.0 to mitigate CVE-2021-23017.
This release updates versions for various components.
- Update Helm chart app version to 2021.2.0.
- Update immuta-deploy-tools image to 1.1.2.
- Update default ingress-nginx controller image to v0.46.0.
This release adds support for Immuta 2021.2 and allows scaling Immuta web service pods to zero.
- Update container uid and gid in
- Remove dependency on
- Allow web pod replicas to be set to zero.
This release updates the default Immuta version to 2021.1.3 and the default memcached image tag to 1.6-alpine.
- Update memcached image to 1.6-alpine.
This release contains a fix for an issue where setting resource requests and limits was not possible for some containers created by the Immuta Helm Chart.
- Set configurable resource limits for all containers and init containers
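A minimal sketch of what such values might look like, assuming the chart follows the standard Kubernetes resources schema under each component (the `web` key and exact field names are assumptions to verify against your chart version):

```yaml
web:
  resources:
    # Standard Kubernetes resource requests/limits schema (assumed)
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: "1"
      memory: 2Gi
```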
This release contains updates that enable the configuration of options that were previously not exposed in the Helm Chart.
The Immuta Fingerprint service configuration can now be set in Helm values. For example, to set the `worker_timeout` option to a value of 300, set the following Helm values:

```yaml
fingerprint:
  extraConfig:
    worker_timeout: 300
```
The log level for the Immuta Fingerprint service can also be configured:

```yaml
fingerprint:
  logLevel: DEBUG
```
- Add ability to pass in configuration for Immuta Fingerprint service
- Add port 8008 to Database and Query Engine Endpoints
When upgrading to 4.6.2 with `nginxIngress.enabled` set to `true`, a new ConfigMap is introduced for NGINX Ingress Controller leader election. This orphans the old ConfigMap, named `ingress-controller-leader-immuta-<RELEASE>-ingress`. The orphaned ConfigMap does not affect the operation of Immuta and may be removed by running:

```shell
kubectl delete configmap ingress-controller-leader-immuta-<RELEASE>-ingress
```
- When using `--reuse-values` coming from 4.5.x charts, the `extraEnv` value throws a templating error.
- Use Immuta config item `publicPostgres.ssl` starting with Immuta Version 2020.3.
- Adopt nginx-ingress leader election ConfigMap into the chart when using the optional Ingress Controller component.
- Set `log_error_verbosity = TERSE` in Postgres configuration.
- Update default version to 2020.4.0.
- Add port 8008 to primary database and query-engine services.
- Add ability to set annotations on all service accounts.
- Set `.spec.terminationGracePeriodSeconds` for all Immuta pods.
- Update Query Engine readiness and liveness probes to use Patroni REST API endpoints.
- Update Database readiness and liveness probes to use Patroni REST API endpoints.
- Database password is no longer used in Query Engine init-container.
- Add support for LDAPS-backed Query Engine Authentication.
- Web Service pods now wait for database migrations to execute before startup.
This release contains a fix for a race condition introduced in 4.5.3 when performing a new install.
- Removed Endpoint resource for Database and Query Engine components to prevent a race condition with the Service. Endpoints are now created by the Database and Query Engine Service resources.
This release contains a few fixes and updates the default Immuta application version to 2020.2.8.
- Support EndpointSlices for Database and Immuta Query Engine in Kubernetes 1.17+.
This release contains a few fixes and updates the default Immuta application version to 2020.2.7.
- Update default values for Immuta Query Engine to support Elastic and Solr.
- Helm hooks fail to run with helm older than 3.2.0.
- TLS Ciphers used in NGINX Ingress Controller incompatibility with default TLS cipher suites set in Databricks.
This release contains a fix for a bug introduced with the last release that caused `helm install` to time out and fail.
If you are using Argo CD to deploy the Immuta Helm Chart, then you may notice that the TLS secret will be marked OutOfSync (requires pruning). The TLS secret (which is used for the encryption of inter-pod network traffic) should not be pruned. Update the Helm values with the following so that the TLS secret can be monitored as a resource without the risk of unwanted pruning.
```yaml
tls:
  manageGeneratedSecret: true
```
- Fix issue with TLS generation and the `helm install` command.
Argo CD Support
The Immuta Helm Chart now supports deployment using Argo CD. Prior to this, the TLS generation hook would create a Secret that was not tracked by Helm. In Argo CD this resource would appear to need pruning. There was also an issue with the database and Query Engine endpoints, in which they would appear to be out of sync, but syncing them would remove the runtime changes that were being applied by Patroni. These issues have been resolved, and Argo CD is now supported.
For Argo CD versions older than 1.7.0, you must use the following Helm values for the TLS generation hook to run successfully.
```yaml
hooks:
  tlsGeneration:
    hookAnnotations:
      helm.sh/hook-delete-policy: "before-hook-creation"
```
Starting with Argo CD version 1.7.0 the default Immuta Helm Chart values can be used.
Bundled Ingress Nginx Upgraded
The bundled version of ingress-nginx has been upgraded to 0.34.1. In addition to upgrading the default version, the cluster scoped resources that used to be created (ClusterRole and ClusterRoleBinding) are no longer required and have been removed.
Built-in Support for Azure LoadBalancer Annotations
Setting the Helm value `nginxIngress.controller.service.isInternal` will now cause an internal Azure load balancer to be created for nginx ingress.
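A minimal sketch of the values, assuming the flag is a boolean:

```yaml
nginxIngress:
  controller:
    service:
      # Provision an internal (private) Azure load balancer
      isInternal: true
```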
Support Setting the externalTrafficPolicy
A new value was added to set the `externalTrafficPolicy` on the nginx ingress Service. Setting this value to "Local" can be useful for preserving client IP addresses. See the Kubernetes documentation for more information on preserving the client source IP.
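A sketch of the values, assuming the key sits alongside `isInternal` under the nginx ingress Service settings (verify the path against your chart version):

```yaml
nginxIngress:
  controller:
    service:
      # "Local" preserves the client source IP; traffic is routed
      # only to nodes running a ready ingress controller pod
      externalTrafficPolicy: Local
```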
When upgrading an existing Helm release from chart version <=4.4 that was using the default value `tls.create=true`, you must first annotate and label the Immuta TLS Secret so that it can be adopted by Helm. You will need to complete these steps if you encounter either of the following errors when running the `helm upgrade` command.
```
Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: Secret "immuta-tls" in namespace "default" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "immuta"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "default"
```

```
Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: Secret "immuta-tls" in namespace "default" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"
```
To resolve these upgrade errors, run the following commands, being sure to substitute the proper Helm release name and namespace.
```shell
kubectl annotate secret \
  -l app=immuta,component=generated-tls,release=<RELEASE_NAME> \
  meta.helm.sh/release-name=<RELEASE_NAME> \
  meta.helm.sh/release-namespace=<RELEASE_NAMESPACE>
kubectl label secret \
  -l app=immuta,component=generated-tls,release=<RELEASE_NAME> \
  app.kubernetes.io/managed-by=Helm
```
After this, you can proceed to run `helm upgrade`.
- Refactored Helm Chart hooks to work with Argo CD.
- Updated Helm Chart description.
- Update web pod annotations so that password changes cause a rolling restart.
- Upgrade ingress-nginx to 0.34.1.
- Support setting `externalTrafficPolicy` on the nginx ingress Service.
- Update nginx ingress Service to include Azure annotations.
- Fix issue with release names that contain periods.
- Fix fingerprint configuration when TLS is disabled.
Setting Global Pod Annotations and Labels
It is now possible to set pod annotations and labels at a global level. When set, these labels and annotations will be used for all pods that the Immuta Helm Chart creates. Pod labels and annotations can be set using the Helm values `global.podAnnotations` and `global.podLabels`, each a map of string to string.
```yaml
global:
  # annotations to be added to every pod
  podAnnotations:
    example.org/latest-configuration: 3d0726f97faa2e4482d7bd31114a26c3976ed96dba5804d951bf480a6af8810c
  # labels to be added to every pod
  podLabels:
    example.org/team: "alpha"
```
Labels and annotations can also be set individually for each component in the Immuta Helm Chart. To set labels and annotations for an individual component, set the Helm values `<componentName>.podAnnotations` and `<componentName>.podLabels`, each a map of string to string.
```yaml
web:
  podAnnotations:
    example.org/latest-configuration: 3d0726f97faa2e4482d7bd31114a26c3976ed96dba5804d951bf480a6af8810c
  podLabels:
    example.org/team: "alpha"
queryEngine:
  podAnnotations:
    example.org/latest-configuration: 7c6f707ce995b34b9a09a4df6f0b20e8580914f65b5117b10318a35a465a3aa8
  podLabels:
    example.org/team: "beta"
```
- Support labels/annotations on all pods.
- Fix for Query Engine pod referencing image repository from database values.
- Remove the option to configure Data Source CA certificates using a ConfigMap.
Support for Custom nodeSelector and tolerations
It is now possible to set custom `nodeSelector` and `tolerations` values for each component in the Immuta Helm Chart.
To set a custom `nodeSelector`, set the Helm value `<componentName>.nodeSelector` to a valid nodeSelector. See the Kubernetes documentation for more details.
```yaml
web:
  nodeSelector:
    lifecycle: spot
database:
  nodeSelector:
    lifecycle: on-demand
```
To set custom `tolerations`, set the Helm value `<componentName>.tolerations` to a valid tolerations list. See the Kubernetes documentation for more details.
```yaml
web:
  tolerations:
    - key: lifecycle
      operator: Equal
      value: spot
      effect: NoSchedule
database:
  tolerations:
    - key: lifecycle
      operator: Equal
      value: on-demand
      effect: NoSchedule
```
- Support setting `tolerations` on all pods.