Immuta Helm Chart: Release Notes

Audience: System Administrators

Content Summary: This page contains release notes for the Immuta Helm Chart.

4.7.0

This release includes new features and other changes intended to improve the usability of the Immuta Helm Chart.

New features include the ability to use an external PostgreSQL database for Immuta metadata and the option to use Redis as a cache. Other notable improvements are the ability to set a global image registry override, support for the PodSecurityPolicy RunAsUser rule MustRunAsNonRoot, configurable SecurityContext runAsUser settings for most components, and support for Kubernetes API resources that have moved out of beta.

External PostgreSQL Database Support

It is now possible to use an external PostgreSQL instance as the Immuta Metadata Database when running Immuta in Kubernetes. When enabled, this functionality replaces the built-in PostgreSQL metadata database that runs in Kubernetes. This functionality is enabled by setting database.enabled=false and passing the connection information for the PostgreSQL instance under the key externalDatabase.

database:
  enabled: false
externalDatabase:
  hostname: external-postgres.database.hostname
  password: bometauserpassword
  superuser:
    username: postgres
    password: postgrespassword

For existing deployments it is possible to migrate from the built-in database to an external database. In order to migrate, backups must be configured, and a backup should be taken immediately prior to migrating. The process of migrating can be done by running helm upgrade with the external database Helm values and backup.restore.enabled=true.
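A sketch of that migration command, assuming a values file that contains the externalDatabase settings shown above (the release name and values file name are illustrative):

helm upgrade <RELEASE_NAME> immuta/immuta \
  -f values-with-external-database.yaml \
  --set backup.restore.enabled=true

Because backup.restore.enabled=true triggers a restore, make sure the pre-migration backup completed successfully before running this command.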

This functionality is compatible with Immuta 2021.3+.

Redis Cache

There is now the option to use Redis as the cache implementation for Immuta. The primary motivation for offering this option is to enable the use of TLS for network traffic between the Immuta web service and cache pods.

In order to select Redis as the cache implementation for Immuta, set the Helm value cache.type=redis when installing the Immuta Helm Chart.

cache:
  type: redis

TLS is enabled for internal network traffic by default when Redis is used, so unless tls.enabledInternal=false is set, TLS will be used for network traffic between Immuta and Redis.

When Redis is selected as the cache for Immuta, the Immuta Web Service pods will contain an additional container that runs envoy as an ambassador sidecar; the envoy container is named "cache-proxy-sidecar." In this configuration, kubectl logs commands must include the flag --container=service in order to access logs for the Immuta Web Service.
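For example, to view Web Service logs in this configuration (the pod name shown is illustrative; use kubectl get pods to find the actual name):

kubectl logs <RELEASE_NAME>-immuta-web-<POD_SUFFIX> --container=service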

Global Image Registry Override

It is now possible to configure the container image registry globally for all images referenced by the Immuta Helm Chart.

global:
  imageRegistry: registry.mycorp.com

The default value for global.imageRegistry is registry.immuta.com.

Immuta images are referenced relative to the configured image registry by adding the repository prefix immuta/, for example registry.mycorp.com/immuta/immuta-service.

Third-party images are referenced without the immuta/ prefix. The images that this applies to are:

  • ingress-nginx-controller
  • memcached
  • redis
  • envoyproxy-envoy-alpine

If these images are not available in the custom registry under the root prefix, it is possible to configure the image repository. This may be the case if you have pulled these images and pushed them to your registry under the immuta/ prefix.

cache:
  redis:
    image:
      repository: immuta/redis
  memcached:
    image:
      repository: immuta/memcached
  proxySidecar:
    image:
      repository: immuta/envoyproxy-envoy-alpine
nginxIngress:
  controller:
    image:
      repository: immuta/ingress-nginx-controller

Support for MustRunAsNonRoot Policies

Init containers for initializing database and Query Engine persistent volumes no longer run as root. In addition to increasing overall security by running init containers as an unprivileged user, this also means that Immuta is now compatible with the PodSecurityPolicy RunAsUser rule MustRunAsNonRoot or other similar policies.
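For reference, a PodSecurityPolicy enforcing this rule might look like the following minimal sketch (the policy name and the permissive settings for the other rules are illustrative, not a recommendation):

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: immuta-nonroot
spec:
  privileged: false
  runAsUser:
    # Reject any container that attempts to run as UID 0
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'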

Configurable SecurityContext runAsUser for Most Components

The Pod SecurityContext is now configurable for all components except for Nginx Ingress. This means that the user ID that the Pods run as can now be customized by setting securityContext.runAsUser for the components that support it. The following table shows which components are configurable for various Immuta versions.

Immuta Version Components
Any backup, cache
2021.2+ backup, cache, fingerprint
2021.4+ backup, cache, database, fingerprint, queryEngine, web

backup:
  securityContext:
    runAsUser: 31234
cache:
  securityContext:
    runAsUser: 31234
database:
  securityContext:
    runAsUser: 31234
fingerprint:
  securityContext:
    runAsUser: 31234
queryEngine:
  securityContext:
    runAsUser: 31234
web:
  securityContext:
    runAsUser: 31234

Kubernetes API Version Updates

The resources created by the Immuta Helm Chart will now use the following stable API versions when they are available on the Kubernetes cluster.

  • networking.k8s.io/v1 for Ingress resources.
  • batch/v1 for CronJob resources.
  • policy/v1 for PodDisruptionBudget resources.
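You can check which of these API versions your cluster serves with kubectl, for example:

kubectl api-versions | grep -E 'networking.k8s.io/v1$|batch/v1$|policy/v1$'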

Migration Notes

Upgrading to 4.7.0 from 4.6 should be possible with minimal or no changes to the Helm values. The following sections identify caveats and recommendations to take into account when upgrading.

helm upgrade --reuse-values not supported for upgrade from 4.6 to 4.7

Due to changes in the Chart's default values.yaml file, helm upgrade --reuse-values does not work. If you have a saved values file, use that when calling helm upgrade. If you don't have a saved values file, you can use helm get values to save the Helm values locally before calling helm upgrade.

helm get values <release-name> > release-values.yaml
helm upgrade <release-name> immuta/immuta -f release-values.yaml

Deprecated Helm Values

Some values have been deprecated in this release. The following table lists each deprecated value and its migration path. Update any custom Helm values that reference deprecated values at your earliest convenience to avoid issues when they are removed.

Deprecated Value Replacement Value
configHook.imageRepository deployTools.image.registry and deployTools.image.repository
configHook.imageTag deployTools.image.tag
configHook.imagePullPolicy deployTools.imagePullPolicy
database.imageRepository database.image.registry and database.image.repository
database.imageTag database.image.tag
fingerprint.imageRepository fingerprint.image.registry and fingerprint.image.repository
fingerprint.imageTag fingerprint.image.tag
memcached.imagePullPolicy cache.memcached.imagePullPolicy
memcached.imageRepository cache.memcached.image.registry and cache.memcached.image.repository
memcached.imageTag cache.memcached.image.tag
memcached.nodeSelector cache.nodeSelector
memcached.podAnnotations cache.podAnnotations
memcached.podLabels cache.podLabels
memcached.replicas cache.replicas
memcached.resources cache.resources
memcached.tolerations cache.tolerations
memcached.maxItemMemory cache.memcached.maxItemMemory
queryEngine.imageRepository queryEngine.image.registry and queryEngine.image.repository
queryEngine.imageTag queryEngine.image.tag
web.imageRepository web.image.registry and web.image.repository
web.imageTag web.image.tag
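For example, a values file using the deprecated web image keys would migrate to the nested image keys as follows (the registry, repository, and tag values shown are illustrative):

# Deprecated
# web:
#   imageRepository: registry.mycorp.com/immuta/immuta-service
#   imageTag: 2021.3.5

# Replacement
web:
  image:
    registry: registry.mycorp.com
    repository: immuta/immuta-service
    tag: 2021.3.5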

Changes

  • Update Kubernetes Batch API versions.
  • Update Kubernetes Networking API versions.
  • Update Kubernetes Policy API versions.
  • Startup init container checks now have a timeout.
  • Support for disabling built-in database and using an external database.
  • Database and Query Engine initContainers no longer run as root.
  • Runtime user ID is now configurable for all Pods except Nginx Ingress.
  • Added shared memory volume for Database and Query Engine by default.
  • It is now possible to set a Docker image registry globally in Helm values.
  • The NO_PROXY environment variable will now be configured so that internal network connections bypass the proxy when extraEnv values are used to set HTTP_PROXY or HTTPS_PROXY.
  • Added option to deploy Redis with TLS instead of Memcached for the cache.

Fix:

  • Database and Query Engine Pods now crash and restart when the backup restore process fails.
  • The TLS generation hook will now regenerate TLS certs if the externalHostname value changes.

4.6.12

This release makes it possible to increase the size of /dev/shm in the Database and Query Engine Pods. This functionality makes use of memory-backed emptyDir Volumes.

database:
  sharedMemoryVolume:
    enabled: true
queryEngine:
  sharedMemoryVolume:
    enabled: true

Changes

Update:

  • Add support for memory-backed volumes to increase the size of /dev/shm.
  • Update Helm chart appVersion to 2021.3.5.

4.6.11

This release includes an update to the default image tag for the bundled Nginx Ingress Controller.

The ingress-nginx project has released a few updates since v0.47.0. This release updates the default version to v0.49.3, which includes an update to Alpine Linux v3.14.2. This update addresses CVE-2021-3711, which v0.47.0 was vulnerable to.

To upgrade the Nginx Ingress Controller without upgrading the Immuta Helm Chart, you can set the following Helm values.

nginxIngress:
  controller:
    imageTag: v0.49.3

Changes

Update:

  • Update Helm chart appVersion to 2021.3.4.
  • Update ingress-nginx controller to v0.49.3.

4.6.10

This release updates the Immuta Helm Chart appVersion to 2021.2.2 and adds support for using an S3-compatible server for backup and restore. It also contains a fix for the metrics export CronJob in Immuta 2021.2+.

Using a Custom S3 Endpoint

In order to use a custom S3 endpoint for backup and restore, new Helm values have been added. The example below shows how to configure an S3 compatible endpoint.

backup:
  s3:
    # The endpoint URL of an s3-compatible server
    endpoint: s3-compatible.mycorp.com
    # The CA bundle in PEM format used to verify the TLS certificates of the endpoint
    caBundle: |
      -----BEGIN CERTIFICATE-----
      MIIC5zCCAc+gAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl
      cm5ldGVzMB4XDTIxMDYzMDE1MTg0M1oXDTMxMDYyODE1MTg0M1owFTETMBEGA1UE
      AxMKa3ViZXJuZXRlczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMMR
      sFzDeAuCVy5JpkrCNp+A0zOrHX6BnTbDrzRV81Pfm2H6y8Pc3pASDgItiwqUBbeu
      fZ+5SAlZ2JY2O27mB1WM5Ajd5kSs7FausrHniSkKM+NlclPXXrBxcoli9UOEbo9T
      k+CrpV/I+EYYiMDKLT/tMX5AJSUavRHdb2n59bWEc2C1HTeBsr0jotn6zOHmhIAG
      O66SeCKjzR6MZ2TO8IWzGpqfY0a+QIqr4Z2Ihy+i6HvU3nXt2PVW5lQp5EN9Gmig
      g5x4re68paAmbU6FfeP9ruPoKjsQq5w70J/miPJb7TS59fdaWAM9yUS3ENG1hLZ7
      ALGIGJoUgy0QR/2DaV0CAwEAAaNCMEAwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB
      /wQFMAMBAf8wHQYDVR0OBBYEFLSraVnEIKrTomzE15ga4jSLdPuFMA0GCSqGSIb3
      DQEBCwUAA4IBAQBb5IcC45qcil48uSip7uBkfKPUFvQeTrq0Zg4zzYuNvdH+uDnH
      05Tz3k8dYBNyvIbh+TahCmmrUUyKgtvHeKrXfrqHoJwM7YTSIJTf7aVSEjBMtvjs
      4dnP3HoGOjJcXIadoZ5gvNUEepTQREu6/5j/Mq4F07UDhrYNZNMwdP3pXpadB9q3
      8fxjl88quJ4wWhq82hrwrGBv+z6oAoUbskvticWuu5eB8QkA3+tDQ5q1ZGjKGSqf
      G5xnXhHnQYy9J+JZ09JoySL3R5+959hJBC03Dwb38rLt/+Vkzdz8ILPT+PUyp5m4
      5KqYJfOVUzRqK0FJY67CkpFmi+4kuqgdGaGs
      -----END CERTIFICATE-----
    # Set to "true" to force the use of path-style addressing as opposed to hostname-style addressing
    forcePathStyle: true
    # Set to "true" to disable making an SSL connection to the endpoint
    #disableSSL:

Changes

Update:

  • Update Helm chart appVersion to 2021.2.2.
  • Add blob storage endpoint support for backup storage.

Fix:

  • Fix Metrics CronJob errors for Immuta 2021.2+

4.6.9

This release includes usability and stability improvements and an update to the bundled Nginx Ingress Controller. Usability improvements include the option to use an existing Kubernetes Secret for Immuta database passwords. Stability improvements include a change in configuration for the Immuta database and Query Engine StatefulSets.

Existing Kubernetes Secret

This release adds the option to use an existing Kubernetes Secret for Immuta database passwords used in the Helm installation. Using an existing Secret can be useful when you want more control over the creation of the Secret or do not want to include the sensitive values in the Immuta Helm values file.

To use an existing Kubernetes Secret, you must first create a Secret containing the required data keys.

Note: This is an example showing the required data keys and some sample values. For more details on creating Secrets in Kubernetes, see the Kubernetes Documentation.

apiVersion: v1
kind: Secret
metadata:
  name: existing-immuta-secret
type: Opaque
data:
  databasePassword: VUc5NWEySkxObWRXU0Rkbk4zQTNUQQ==
  databasePatroniApiPassword: b0hYY0RwWDNURUhQZ250VQ==
  databaseReplicationPassword: dzZyem9ndVgzYTZoRkVuMw==
  databaseSuperuserPassword: REZBUmg2VTdianBrVVRmZQ==
  queryEnginePassword: YTQyUHRoS3RvYUhOTDZFYg==
  queryEnginePatroniApiPassword: ZDNOZTNFb1FZaFU0UDNWTQ==
  queryEngineReplicationPassword: RWRXMlJEdmV6UjdFSkxVNA==
  queryEngineSuperuserPassword: N0tNelF3WEgzeFhHRHBwRw==

Reference the Secret by name in the Immuta Helm values.

existingSecret: existing-immuta-secret

If you have an existing release, you can migrate to an existing Kubernetes Secret by creating the Secret with the values previously defined in your Helm values by using the following mappings.

Helm Value Secret Data Key
database.password databasePassword
database.patroniApiPassword databasePatroniApiPassword
database.replicationPassword databaseReplicationPassword
database.superuserPassword databaseSuperuserPassword
queryEngine.password queryEnginePassword
queryEngine.patroniApiPassword queryEnginePatroniApiPassword
queryEngine.replicationPassword queryEngineReplicationPassword
queryEngine.superuserPassword queryEngineSuperuserPassword

Set the existingSecret value and remove the password values from your Immuta Helm values file. Apply the change by running helm upgrade for your release, referencing the values file.
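When migrating, one way to create the Secret from your existing plaintext Helm values is kubectl create secret generic, which handles the base64 encoding for you. Substitute each placeholder with the corresponding value from your Helm values file, per the mapping table above:

kubectl create secret generic existing-immuta-secret \
  --from-literal=databasePassword=<database.password> \
  --from-literal=databasePatroniApiPassword=<database.patroniApiPassword> \
  --from-literal=databaseReplicationPassword=<database.replicationPassword> \
  --from-literal=databaseSuperuserPassword=<database.superuserPassword> \
  --from-literal=queryEnginePassword=<queryEngine.password> \
  --from-literal=queryEnginePatroniApiPassword=<queryEngine.patroniApiPassword> \
  --from-literal=queryEngineReplicationPassword=<queryEngine.replicationPassword> \
  --from-literal=queryEngineSuperuserPassword=<queryEngine.superuserPassword>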

Database StatefulSet Configuration Update

This release includes a change in the default settings for Patroni, which is used in the database and Query Engine StatefulSets. This setting has been shown to improve the stability of the StatefulSets when the pods are restarted frequently due to cluster operations, such as node upgrades.

For new installations, no action is necessary to make use of this configuration change. For existing installations, the change must be applied manually to the StatefulSet pods.

kubectl exec -it RELEASE_NAME-immuta-database-0 -- patronictl edit-config --set postgresql.use_pg_rewind=true
kubectl exec -it RELEASE_NAME-immuta-query-engine-0 -- patronictl edit-config --set postgresql.use_pg_rewind=true

Note: Replace "RELEASE_NAME" with the name of your Immuta Helm release.

Nginx Ingress Controller Update

The ingress-nginx project released an update to mitigate a vulnerability in Nginx. We've updated the bundled ingress controller to this new release. More details are available in this GitHub issue.

Changes

Update:

  • Add support for using an existing Kubernetes Secret for passwords.
  • Enable use_pg_rewind for Patroni by default.
  • Update ingress-nginx controller to v0.47.0 to mitigate CVE-2021-23017.

4.6.8

This release updates versions for various components.

Changes

Update:

  • Update Helm chart app version to 2021.2.0.
  • Update immuta-deploy-tools image to 1.1.2.
  • Update default ingress-nginx controller image to v0.46.0.

4.6.7

This release adds support for Immuta 2021.2 and allows scaling Immuta Web Service pods to zero.

Changes

Update:

  • Update container uid and gid in securityContext for 2021.2.
  • Remove dependency on pgrep and getopt from scripts.
  • Allow web pod replicas to be set to zero.

4.6.6

This release updates the default Immuta version to 2021.1.3 and the default memcached image tag to 1.6-alpine.

Changes

Update:

  • Update memcached image to 1.6-alpine.

4.6.5

This release contains a fix for an issue where setting resource requests and limits was not possible for some containers created by the Immuta Helm Chart.

Changes

Fix:

  • Set configurable resource limits for all containers and init containers

4.6.4

This release contains updates that enable the configuration of options that were previously not exposed in the Helm Chart.

New Features

The Immuta Fingerprint service configuration can now be set in Helm values. For example, to set the worker_timeout option to a value of 300, set the following values.

fingerprint:
  extraConfig:
    worker_timeout: 300

The log level for the Immuta Fingerprint service can also be configured. Valid values include DEBUG, INFO, WARNING, ERROR, and CRITICAL.

fingerprint:
  logLevel: DEBUG

Changes

Feature:

  • Add ability to pass in configuration for Immuta Fingerprint service

4.6.3

Fix:

  • Add port 8008 to Database and Query Engine Endpoints

4.6.2

Upgrade Notes

In 4.6.2, when nginxIngress.enabled is set to true, a new ConfigMap is introduced for the NGINX Ingress Controller to use for leader election. Upgrading orphans the old ConfigMap, named ingress-controller-leader-immuta-<RELEASE>-ingress. The orphaned ConfigMap does not affect the operation of Immuta and may be removed by running

kubectl delete configmap ingress-controller-leader-immuta-<RELEASE>-ingress

Changes

Fix:

  • Fix templating error thrown by the extraEnv value when using --reuse-values with values from 4.5.x charts.
  • Use Immuta config item publicPostgres.useSSL instead of publicPostgres.ssl starting with Immuta Version 2020.3.
  • Adopt nginx-ingress leader election ConfigMap into the chart when using Optional Ingress Controller component.

Feature:

  • Set log_error_verbosity = TERSE in Postgres configuration.
  • Update default version to 2020.4.0.
  • Add port 8008 to primary database and query-engine services.
  • Add ability to set annotations on all service accounts.
  • Set .spec.terminationGracePeriodSeconds for all Immuta pods.
  • Update Query Engine readiness and liveness probes to use Patroni REST API endpoints.
  • Update Database readiness and liveness probes to use Patroni REST API endpoints.

4.6.1

Changes

Fix:

  • Database password is no longer used in Query Engine init-container.

4.6.0

Changes

Feature:

  • Add support for LDAPS-backed Query Engine Authentication.

Fix:

  • Web Service pods now wait for database migrations to complete before starting.

4.5.4

This release contains a fix for a race condition introduced in 4.5.3 when performing a new install.

Changes

Fix:

  • Removed the Endpoints resource for the Database and Query Engine components to prevent a race condition with the Service. Endpoints are now created by the Database and Query Engine Service resources.

4.5.3

This release contains a few fixes and updates the default Immuta application version to 2020.2.8.

Changes

Feature:

  • Update appVersion to 2020.2.8.

Fix:

  • Support EndpointSlices for Database and Immuta Query Engine in Kubernetes 1.17+.

4.5.2

This release contains a few fixes and updates the default Immuta application version to 2020.2.7.

Changes

Feature:

  • Update appVersion to 2020.2.7.
  • Update default values for Immuta Query Engine to support Elastic and Solr.

Fix:

  • Helm hooks fail to run with helm older than 3.2.0.
  • Fix TLS ciphers used in the NGINX Ingress Controller being incompatible with the default TLS cipher suites set in Databricks.

4.5.1

This release contains a fix for a bug introduced with the last release that caused helm install to time out and fail when used with the --wait flag.

Upgrade Notes

If you are using Argo CD to deploy the Immuta Helm Chart, then you may notice that the TLS secret will be marked OutOfSync (requires pruning). The TLS secret (which is used for the encryption of inter-pod network traffic) should not be pruned. Update the Helm values with the following so that the TLS secret can be monitored as a resource without the risk of unwanted pruning.

tls:
  manageGeneratedSecret: true

Changes

Fixes:

  • Fix issue with TLS generation and the helm install --wait flag.

4.5.0

New Features

Argo CD Support

The Immuta Helm Chart now supports deployment using Argo CD. Prior to this, the TLS generation hook would create a Secret that was not tracked by Helm. In Argo CD this resource would appear to need pruning. There was also an issue with the database and Query Engine endpoints, in which they would appear to be out of sync, but syncing them would remove the runtime changes that were being applied by Patroni. These issues have been resolved, and Argo CD is now supported.

For Argo CD versions older than 1.7.0 you must use the following Helm values in order for the TLS generation hook to run successfully.

hooks:
  tlsGeneration:
    hookAnnotations:
      helm.sh/hook-delete-policy: "before-hook-creation"

Starting with Argo CD version 1.7.0 the default Immuta Helm Chart values can be used.

Bundled Ingress Nginx Upgraded

The bundled version of ingress-nginx has been upgraded to 0.34.1. In addition to upgrading the default version, the cluster scoped resources that used to be created (ClusterRole and ClusterRoleBinding) are no longer required and have been removed.

Built-in Support for Azure LoadBalancer Annotations

Setting the Helm value nginxIngress.controller.service.isInternal to true will now cause an internal Azure load balancer to be created for nginx ingress.
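A sketch of the corresponding Helm values (assuming the value is a boolean, as the name suggests):

nginxIngress:
  controller:
    service:
      isInternal: true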

Support Setting the externalTrafficPolicy

A new value was added to set the externalTrafficPolicy on the nginx ingress Service. Setting this value to "Local" can be useful for preserving client IP addresses. See the Kubernetes Documentation for more information on preserving the client source IP.
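The corresponding values might look like the following sketch (assuming externalTrafficPolicy sits under the nginxIngress.controller.service key alongside the other Service settings):

nginxIngress:
  controller:
    service:
      # "Local" preserves the client source IP; "Cluster" is the Kubernetes default
      externalTrafficPolicy: Local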

Upgrade Notes

When upgrading an existing Helm release from chart version <=4.4 that was using the default values tls.enabled=true and tls.create=true, you must first annotate and label the Immuta TLS Secret so that it can be adopted by Helm.

You will need to complete these steps if you encounter either of the following errors when running the helm upgrade command.

Error: UPGRADE FAILED: rendered manifests contain a resource that already exists.
Unable to continue with update: Secret "immuta-tls" in namespace "default" exists
and cannot be imported into the current release: invalid ownership metadata;
label validation error: missing key "app.kubernetes.io/managed-by": must be
set to "Helm"; annotation validation error: missing key
"meta.helm.sh/release-name": must be set to "immuta";
annotation validation error: missing key "meta.helm.sh/release-namespace":
must be set to "default"

Error: UPGRADE FAILED: rendered manifests contain a resource that already
exists. Unable to continue with update: Secret "immuta-tls" in namespace
"default" exists and cannot be imported into the current release: invalid
ownership metadata; label validation error: missing key
"app.kubernetes.io/managed-by": must be set to "Helm"

To resolve these upgrade errors, run the following commands, being sure to substitute the proper Helm release name and namespace.

kubectl annotate secret \
  -l app=immuta,component=generated-tls,release=<RELEASE_NAME> meta.helm.sh/release-name=<RELEASE_NAME> meta.helm.sh/release-namespace=<RELEASE_NAMESPACE>

kubectl label secret \
  -l app=immuta,component=generated-tls,release=immuta app.kubernetes.io/managed-by=Helm

After this, you can proceed to run helm upgrade.

Changes

Feature:

  • Refactored Helm Chart hooks to work with Argo CD.
  • Updated Helm Chart description.
  • Update web pod annotations so that password changes cause a rolling restart.
  • Upgrade ingress-nginx to 0.34.1.
  • Support setting externalTrafficPolicy on the nginx ingress Service.
  • Update nginx ingress Service to include Azure annotations.

Bug:

  • Fix issue with release names that contain periods.

4.4.3

Changes

Bug:

  • Fix fingerprint configuration when TLS is disabled.

4.4.2

New Features

Setting Global Pod Annotations and Labels

It is now possible to set pod annotations and labels at a global level. When set, these labels and annotations will be used for all pods that the Immuta Helm Chart creates. Pod labels and annotations can be set using the Helm values global.podAnnotations and global.podLabels to a map of string to string.

global:
  # annotations to be added to every pod
  podAnnotations:
    example.org/latest-configuration: 3d0726f97faa2e4482d7bd31114a26c3976ed96dba5804d951bf480a6af8810c
  # labels to be added to every pod
  podLabels:
    example.org/team: "alpha"

Labels and annotations can also be set individually for each component in the Immuta Helm Chart. To set labels and annotations for an individual component, set the Helm values <componentName>.podAnnotations and <componentName>.podLabels to a map of string to string.

web:
  podAnnotations:
    example.org/latest-configuration: 3d0726f97faa2e4482d7bd31114a26c3976ed96dba5804d951bf480a6af8810c
  podLabels:
    example.org/team: "alpha"
queryEngine:
  podAnnotations:
    example.org/latest-configuration: 7c6f707ce995b34b9a09a4df6f0b20e8580914f65b5117b10318a35a465a3aa8
  podLabels:
    example.org/team: "beta"

Changes

Feature:

  • Support labels/annotations on all pods.

Bug:

  • Fix for Query Engine pod referencing image repository from database values.

4.4.1

Changes

Cleanup:

  • Remove the option to configure Data Source CA certificates using a ConfigMap.

4.4.0

New Features

Support for Custom nodeSelector and tolerations

It is now possible to set custom nodeSelector and tolerations for each component in the Immuta Helm Chart.

To set a custom nodeSelector, set the Helm value for <componentName>.nodeSelector to a valid nodeSelector. See the Kubernetes documentation for more details.

web:
  nodeSelector:
    lifecycle: spot
database:
  nodeSelector:
    lifecycle: on-demand

To set custom tolerations, set the Helm value <componentName>.tolerations to a valid tolerations list. See the Kubernetes documentation for more details.

web:
  tolerations:
  - key: lifecycle
    operator: Equal
    value: spot
    effect: NoSchedule
database:
  tolerations:
  - key: lifecycle
    operator: Equal
    value: on-demand
    effect: NoSchedule

Changes

Feature:

  • Support setting nodeSelector and tolerations on all pods.