

Kubernetes Helm Upgrade

Audience: System Administrators

Content Summary: This guide provides two methods for upgrading Helm installs of Immuta in Kubernetes.

  • Procedure A is for basic upgrades and is typically acceptable when moving between minor point releases of Immuta (such as 2.7.0 to 2.7.1).
  • Procedure B is a more involved method that leverages a full backup and restore procedure. This approach is needed for more significant upgrades, such as major releases (for example, 2.6.x to 2.7.x), large jumps in version (for example, 2.5.x to 2.7.x), or whenever advised by your Immuta professional.

Please note that the backup and restore procedure can be used in all cases, so if in doubt, use that procedure.

If using a Kubernetes namespace...

If deploying Immuta into a Kubernetes namespace other than the default, you must include the --namespace option in all helm and kubectl commands provided throughout this section.
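For example, with a hypothetical namespace named immuta, the commands in this guide would take the form:

helm list --namespace immuta
kubectl get pods --namespace immuta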

Prerequisites

In order to conduct any upgrades of Immuta in a Kubernetes environment, you will need Helm access to the Kubernetes cluster running Immuta.

Step 1.) Environment Check

a.) Check Helm version

Immuta's Helm Chart requires Helm version 2.16+ or 3+.

  • Helm 2.16+ is only supported for existing Immuta installations.
  • New installations of Immuta must use the latest version of Helm 3 and Immuta's latest Chart.

Run helm version to verify the version of Helm you are using:

helm version
Example Output (Helm 3)
version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"}
Example Output (Helm 2)
Client: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
If using Helm 2, ensure your Helm version matches Tiller in the cluster

Keeping your Helm client version matched to your Tiller version is best practice and avoids incompatibility issues. Run helm version to compare the two and confirm they are in sync.

helm version
Client: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
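If the client and Tiller versions have drifted apart, one common way to bring Tiller in line with your client is to upgrade it in place (a sketch, assuming your cluster permissions allow upgrading Tiller):

helm init --upgrade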

b.) Configure Immuta's Helm Chart repo

In order to deploy Immuta to your Kubernetes cluster, you must be able to access the Immuta Helm Chart Repository and the Immuta Docker Registry. You can obtain credentials and instructions to set up both of these by accessing the Immuta Release Portal.

Run helm repo list to ensure Immuta's Helm Chart repository has been successfully added:

helm repo list
Example Output
NAME            URL
stable          https://kubernetes-charts.storage.googleapis.com
local           http://127.0.0.1:8879/charts
immuta          https://archives.immuta.com/charts
Don't forget the image pull secret!

As detailed in Immuta Release Portal, you must create a Kubernetes Image Pull Secret in the namespace that you are deploying Immuta in, or the installation will fail.

Run kubectl get secrets to confirm your Kubernetes image pull secret is in place:

kubectl get secrets
Example Output
NAME                  TYPE                                  DATA   AGE
immuta-registry       kubernetes.io/dockerconfigjson        1      5s
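If the secret is missing, it can be created with kubectl create secret docker-registry. The exact registry address and credentials come from the Immuta Release Portal, so the command below is only a generic sketch with placeholder values:

kubectl create secret docker-registry immuta-registry \
    --docker-server=<IMMUTA DOCKER REGISTRY> \
    --docker-username=<YOUR USERNAME> \
    --docker-password=<YOUR PASSWORD OR TOKEN> \
    --docker-email=<YOUR EMAIL>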

c.) Check/Update your local Immuta Helm Chart version

Run helm search repo immuta to check the version of your local copy of Immuta's Helm Chart:

helm search repo immuta
Example Output
NAME          CHART VERSION APP VERSION DESCRIPTION
immuta/immuta 4.4.1         2.7.0       The Immuta

Update your local Chart by running helm repo update.

To perform an upgrade without upgrading to the latest version of the Chart, run helm list to determine the Chart version of the installed release, and then specify that version using the --version argument of helm upgrade.
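For example, if helm list reports that the release was deployed with Chart immuta-4.2.3, a pinned upgrade would look like the following (the version number is illustrative):

helm upgrade <YOUR RELEASE NAME> immuta/immuta \
    --values immuta-values.yaml \
    --version 4.2.3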

d.) Confirm connectivity with your current Immuta Helm installation

Run helm list to confirm Helm connectivity and verify the current Immuta installation:

helm list
Example Output
NAME  REVISION  UPDATED                   STATUS    CHART         APP VERSION NAMESPACE
test  1         Tue Dec 17 01:04:36 2019  DEPLOYED  immuta-4.2.3  2.6.0       ns

Make note of:

  • NAME - This is the '<YOUR RELEASE NAME>' that will be used in the remainder of these instructions.
  • CHART - This is the version of Immuta's Helm Chart that your instance was deployed under.

e.) Confirm access to the Helm values used in your current Immuta installation

You will need the Helm values associated with your installation, which are typically stored in an immuta-values.yaml file. If you do not possess the original values file, these can be extracted from the existing installation using:

helm get values <YOUR RELEASE NAME> > immuta-values.yaml
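By default, helm get values returns only the values you supplied; if you also want the computed defaults for reference, the --all flag can be added (output filename is illustrative):

helm get values <YOUR RELEASE NAME> --all > immuta-values-all.yaml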

Determine Your Upgrade Path

Use the following bullets to determine whether Procedure A can be used for your upgrade, or whether you need to elect to use the more thorough Procedure B:

  • If your current Immuta version is < 2.6.0, follow Procedure B.
  • If your Immuta Helm Chart version is < 4.3, you can upgrade using Procedure A so long as you pin your Chart version to the version of the Chart noted in Step 1.d above. However, if time is available for the more involved Procedure B to upgrade both the Chart and Immuta, that is the recommended approach.
  • For major revisions of Immuta (e.g., 2.7.0 -> 2.8.0), Procedure B is typically recommended, if not required. Your Immuta representative can help guide you if in doubt.
  • For minor revisions of Immuta (e.g., 2.8.0 -> 2.8.2), use Procedure A.

PROCEDURE A: Basic Helm Upgrade

Step 1.) Run the Upgrade

After you make any desired changes in your immuta-values.yaml file, you can apply these changes by running helm upgrade:

helm upgrade <YOUR RELEASE NAME> immuta/immuta \
    --values immuta-values.yaml \
    --version <YOUR DESIRED CHART VERSION>

The release name is passed positionally to helm upgrade in both Helm 2 and Helm 3.

Note: Errors can occur when upgrading the Chart version of an installation. These are typically easy to resolve with slight modifications to your values to accommodate changes in the Chart. Downloading an updated copy of immuta-values.yaml and comparing it to your existing values is often a good way to debug such occurrences.

Download immuta-values.yaml
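Once downloaded, a quick way to spot Chart changes is to compare the fresh defaults against your current values file (filenames here are illustrative):

diff immuta-values-default.yaml immuta-values.yaml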

PROCEDURE B: Complete Backup and Restore Upgrade

This procedure executes a complete backup and restore of Immuta.

The main steps involved are:

  1. Capture a complete, current backup of your existing Immuta installation.
  2. Delete your existing Immuta install.
  3. Update the immuta-values.yaml to include all desired changes and reinstall Immuta with the "restore" option set.

Step 1.) Capture a Complete Current Backup

To use this procedure, Immuta's built-in backup/restore mechanism must be enabled in your Immuta install; if it is not already in place, enable it as described below.

a.) Enable Immuta's built-in backup capabilities

Immuta's current Helm Chart provides a built-in backup/restore mechanism based on a PersistentVolumeClaim with an access mode of ReadWriteMany. If such a resource is available, Immuta's Helm Chart will set everything up for you if you enable backups and comment out the claimName.

backup:
  # set to true to enable backups. requires RWX persistent volume support
  enabled: true
  # if claimName is set to an existing PVC no PV/PVC will be created
  # claimName: <YOUR ReadWriteMany PersistentVolumeClaim NAME>
  restore:
    # set to true to enable restoring from backups. This should be enabled whenever backups are enabled
    enabled: false

If you need to make modifications to these values to enable backups, please do so and then follow Procedure A to commit them into your cluster before proceeding here.
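As a minimal sketch, committing those values with Procedure A while staying on your currently installed Chart version (taken from helm list) looks like:

helm upgrade <YOUR RELEASE NAME> immuta/immuta \
    --values immuta-values.yaml \
    --version <YOUR CURRENT CHART VERSION>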

Caution

If your Kubernetes cluster doesn't support PersistentVolumes with an access mode of ReadWriteMany, consult the subsections in this section of the documentation that are specific to your cloud provider for assistance in configuring a compatible resource. If your Kubernetes environment is not represented there, or a workable solution does not appear available, please contact your Immuta representative to discuss options.

b.) Check existing backups

After checking/enabling the backup/restore features, you can check whether any previous backups have run, and if so when, by running:

kubectl get pods | grep backup
Example Output
test-immuta-backup-1576368000-bzmcd                     0/2     Completed          0          2d
test-immuta-backup-1576454400-gtlrv                     0/2     Completed          0          1d
test-immuta-backup-1576540800-5tcg2                     0/2     Completed          0          6h

c.) Manually trigger a backup to ensure the latest state of your current Immuta installation is captured

Even if you have a fairly recent backup, it's advised to take a current, manual one to ensure nothing is lost between that backup and the present time. To do so, execute an ad-hoc backup with the following commands.

Begin by searching for the default backup Cron job:

kubectl get cronjobs
Example Output
NAME                 SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
test-immuta-backup   0 0 * * *   False     0        <none>          14m

Now use that Cron job name to create your own ad-hoc job:

kubectl create job adhoc-backup --from=cronjob/test-immuta-backup

Once that job is created, you should see it running in your cluster when you get pods:

kubectl get pods
Example Output
NAME                                       READY   STATUS      RESTARTS   AGE
adhoc-backup-tj8k7                         0/2     Completed   0          8h
test-immuta-database-0                     1/1     Running     0          9h
test-immuta-database-1                     1/1     Running     0          9h
test-immuta-fingerprint-57b5c99f4c-bt5vk   1/1     Running     0          9h
test-immuta-fingerprint-57b5c99f4c-lr969   1/1     Running     0          9h
test-immuta-memcached-0                    1/1     Running     0          9h
test-immuta-memcached-1                    1/1     Running     0          9h
test-immuta-query-engine-0                 1/1     Running     0          9h
test-immuta-query-engine-1                 1/1     Running     0          9h
test-immuta-web-b5c7654fb-2brwb            1/1     Running     0          9h
test-immuta-web-b5c7654fb-csm7n            1/1     Running     0          9h
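Rather than repeatedly polling kubectl get pods, you can also wait for the job to finish (an optional convenience; the timeout value is illustrative):

kubectl wait --for=condition=complete job/adhoc-backup --timeout=600s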

d.) Confirm backup completed successfully

Once that job has completed, you can confirm that everything ran successfully by checking the logs of that pod. There are two containers in the pod: one backs up Immuta's metadata database and one backs up the query engine. If all went as expected, the outputs of the following commands should be similar to the examples below.

For the metadata database:

kubectl logs adhoc-backup-<RANDOM> -c database-backup
Example Output
2019-12-17 07:29:55 UTC LOG: Driver staging directory: /var/lib/immuta/odbc/install/
2019-12-17 07:29:55 UTC LOG: Driver install directory: /var/lib/immuta/odbc/drivers/
==> Backing up database roles
==> Backing up role bometadata
==> Creating backup archive
==> Finished creating backup. Backup can be found at /var/lib/immuta/postgresql/backups/immuta-20191217072955.tar.gz

For the query engine database:

kubectl logs adhoc-backup-<RANDOM> -c query-engine-backup
Example Output
2019-12-17 07:29:55 UTC LOG: Driver staging directory: /var/lib/immuta/odbc/install/
2019-12-17 07:29:55 UTC LOG: Driver install directory: /var/lib/immuta/odbc/drivers/
==> Backing up database roles
==> Backing up role immuta
==> Creating backup archive
==> Finished creating backup. Backup can be found at /var/lib/immuta/postgresql/backups/immuta-20191217072955.tar.gz

By default, Immuta's backups are stored in a persistent volume that is typically mounted at /var/lib/immuta/postgresql/backups/ in the Cron job and /var/lib/immuta/postgresql/restore/ in the database pods. In the normal case, this volume persists even across deletion of the Immuta installation and is picked back up during re-installation.
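To see which backup archives are present, you can list the contents of that mount in a database pod (pod name taken from the earlier example output):

kubectl exec test-immuta-database-0 -- ls /var/lib/immuta/postgresql/restore/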

e.) (Optional) Create a local backup of the backup

While optional, out of an abundance of caution, Immuta recommends a manual copy of this backup be made and held for safe-keeping until the upgrade has successfully completed:

kubectl cp test-immuta-database-0:/var/lib/immuta/postgresql/restore/immuta-<LATEST DATE/TIME STAMP>.tar.gz db.tar.gz
kubectl cp test-immuta-query-engine-0:/var/lib/immuta/postgresql/restore/immuta-<LATEST DATE/TIME STAMP>.tar.gz qe.tar.gz

Step 2.) Delete the Current Immuta Installation

a.) Helm delete your existing Immuta installation

Once you have the backups secured, you are ready to delete the current Immuta installation and prepare to reinstall the upgraded version.

Caution!

This command is destructive! Please ensure you have a backup (Step 1) prior to running this.

Deleting the Immuta installation can be done by executing (Helm 3):

helm delete <YOUR RELEASE NAME>

Or, with Helm 2:

helm delete <YOUR RELEASE NAME> --purge
Example Output

These resources were kept due to the resource policy:
[PersistentVolumeClaim] test-immuta-backup

release "test" deleted
You should see the message above indicating that the persistent volume claim holding the backup was kept. Assuming you see it, the claim is available to be connected back into the new installation for restoration. If you do not see a similar message, please see Troubleshooting.
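You can also confirm directly that the backup claim survived the deletion; the claim noted in the output above (test-immuta-backup in this example) should still be listed:

kubectl get pvc | grep backup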

b.) Confirm deletion and resource clean-up

When the helm delete command finishes, confirm that all associated resources have been torn down by running kubectl get pods and verifying that all pods associated with the Immuta installation have been removed.

Step 3.) Update Your Values & Re-Install Immuta

Please see the Kubernetes Helm Installation Instructions for detailed procedures to complete the re-install.
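Before reinstalling, make sure the backup/restore options from Step 1.a are enabled in your immuta-values.yaml so the new installation picks the backup up from the preserved volume. A minimal sketch of the relevant values and the Helm 3 install command (Helm 2 users pass the release name with --name instead):

backup:
  enabled: true
  restore:
    enabled: true

helm install <YOUR RELEASE NAME> immuta/immuta --values immuta-values.yaml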

Occasionally, Kubernetes resources are not completely freed by the previous helm delete. If you encounter errors about conflicting resources upon reinstall, please see Troubleshooting.