
Method B: Complete Backup and Restore Upgrade

Audience: System Administrators

Content Summary: This guide outlines the complete backup and restore method for upgrading Helm installations of Immuta in Kubernetes. This approach is needed when conducting more significant upgrades such as major releases (e.g., 2.6.x to 2.7.x), when making large jumps in versions (for example going from 2.5.x to 2.7.x), or simply whenever advised by your Immuta professional.

Method A is for basic upgrades and is typically acceptable when moving between minor point releases of Immuta (such as 2.7.0 to 2.7.1). However, Method B can be used in all cases.

Prerequisite

Ensure that you have updated your local Helm Chart before you complete the steps in this guide.
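If your local chart repositories are already configured, refreshing the chart cache typically looks like the following sketch (the repository keyword immuta in the search commands is an assumption; substitute whatever name you registered):

helm repo update
# Verify the chart version now available
helm search repo immuta    # Helm 3
helm search immuta         # Helm 2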

This procedure executes a complete backup and restore of Immuta.

The main steps involved are:

  1. Capture a complete, current backup of your existing Immuta installation.
  2. Delete your existing Immuta install.
  3. Update the immuta-values.yaml to include all desired changes and re-install Immuta with the "restore" option set.

1 - Capture a Complete Current Backup

This procedure relies on Immuta's built-in backup/restore mechanism, so you will need it enabled in your Immuta install if it is not already in place.

1.1 - Enable Immuta's Built-in Backup Capabilities

Immuta's current Helm Chart provides a built-in backup/restore mechanism based on a PersistentVolumeClaim with an access mode of ReadWriteMany. If such a resource is available, Immuta's Helm Chart will set everything up for you if you enable backups and comment out the claimName.

backup:
  # set to true to enable backups. requires RWX persistent volume support
  enabled: true
  # if claimName is set to an existing PVC no PV/PVC will be created
  # claimName: <YOUR ReadWriteMany PersistentVolumeClaim NAME>
  restore:
    # set to true to enable restoring from backups. This should be enabled whenever backups are enabled
    enabled: false

If you need to modify these values to enable backups, please do so and then follow Method A to apply them to your cluster before proceeding here.

Caution

If your Kubernetes cluster doesn't support PersistentVolumes with an access mode of ReadWriteMany, consult the subsections in this section of the documentation that are specific to your cloud provider for assistance in configuring a compatible resource. If your Kubernetes environment is not represented there, or a workable solution does not appear available, please contact your Immuta representative to discuss options.
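As a quick sketch for checking what your cluster offers (output will vary by environment):

# List available storage classes and their provisioners
kubectl get storageclass
# List existing PersistentVolumeClaims you might reuse for claimName
kubectl get pvc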

1.2 - Check Existing Backups

After confirming the backup/restore features are enabled, you can see whether any previous backups have run, and if so when, by running kubectl get pods | grep backup:

kubectl get pods | grep backup
test-immuta-backup-1576368000-bzmcd                     0/2     Completed          0          2d
test-immuta-backup-1576454400-gtlrv                     0/2     Completed          0          1d
test-immuta-backup-1576540800-5tcg2                     0/2     Completed          0          6h

1.3 - Manually Trigger a Backup

Even if you have a fairly recent backup, it is advised to take a current, manual one to ensure nothing is lost between that backup and the present time. To do so, execute an ad-hoc backup with the following commands.

Begin by searching for the default backup Cron job:

kubectl get cronjobs
NAME                 SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
test-immuta-backup   0 0 * * *   False     0        <none>          14m

Now use that Cron job name to create your own ad-hoc job:

kubectl create job adhoc-backup --from=cronjob/test-immuta-backup

Once that job is created, you should see it in your cluster when you run kubectl get pods:

kubectl get pods
NAME                                       READY   STATUS      RESTARTS   AGE
adhoc-backup-tj8k7                         0/2     Completed   0          8h
test-immuta-database-0                     1/1     Running     0          9h
test-immuta-database-1                     1/1     Running     0          9h
test-immuta-fingerprint-57b5c99f4c-bt5vk   1/1     Running     0          9h
test-immuta-fingerprint-57b5c99f4c-lr969   1/1     Running     0          9h
test-immuta-memcached-0                    1/1     Running     0          9h
test-immuta-memcached-1                    1/1     Running     0          9h
test-immuta-query-engine-0                 1/1     Running     0          9h
test-immuta-query-engine-1                 1/1     Running     0          9h
test-immuta-web-b5c7654fb-2brwb            1/1     Running     0          9h
test-immuta-web-b5c7654fb-csm7n            1/1     Running     0          9h

1.4 - Confirm Backup Completed Successfully

Once that job has completed, you can confirm that everything ran successfully by checking the logs of that pod. There are two containers in that pod: one that backs up Immuta's Metadata Database and one that backs up the Query Engine. If all went as expected, the output of the following commands should be similar to the examples below.
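If you would rather wait for the job to finish than poll kubectl get pods, something like the following sketch works (the job name adhoc-backup matches the ad-hoc job created above; the timeout value is an arbitrary assumption):

kubectl wait --for=condition=complete job/adhoc-backup --timeout=300s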

Metadata Database

kubectl logs adhoc-backup-<RANDOM> -c database-backup
2019-12-17 07:29:55 UTC LOG: Driver staging directory: /var/lib/immuta/odbc/install/
2019-12-17 07:29:55 UTC LOG: Driver install directory: /var/lib/immuta/odbc/drivers/
==> Backing up database roles
==> Backing up role bometadata
==> Creating backup archive
==> Finished creating backup. Backup can be found at /var/lib/immuta/postgresql/backups/immuta-20191217072955.tar.gz

Query Engine Database

kubectl logs adhoc-backup-<RANDOM> -c query-engine-backup
2019-12-17 07:29:55 UTC LOG: Driver staging directory: /var/lib/immuta/odbc/install/
2019-12-17 07:29:55 UTC LOG: Driver install directory: /var/lib/immuta/odbc/drivers/
==> Backing up database roles
==> Backing up role immuta
==> Creating backup archive
==> Finished creating backup. Backup can be found at /var/lib/immuta/postgresql/backups/immuta-20191217072955.tar.gz

By default, Immuta's backups are stored in a persistent volume that is typically mounted at /var/lib/immuta/postgresql/backups/ in the Cron job and /var/lib/immuta/postgresql/restore/ in the database pods. Under normal circumstances, this volume persists even when the Immuta installation is deleted and is picked back up during re-installation.
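To see exactly which backup archives are present on that volume, you can list the directory from one of the database pods, for example (the pod name test-immuta-database-0 is taken from the example output above and will differ in your environment):

kubectl exec test-immuta-database-0 -- ls -lh /var/lib/immuta/postgresql/restore/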

1.5 - (Optional) Create a Local Backup of the Backup

Best Practice: Keep Manual Copy of Backup

Immuta recommends that a manual copy of this backup be made and kept for safekeeping until the upgrade has successfully completed:

kubectl cp test-immuta-database-0:/var/lib/immuta/postgresql/restore/immuta-<LATEST DATE/TIME STAMP>.tar.gz db.tar.gz
kubectl cp test-immuta-query-engine-0:/var/lib/immuta/postgresql/restore/immuta-<LATEST DATE/TIME STAMP>.tar.gz qe.tar.gz
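As a quick sanity check on the local copies, you can confirm the archives are intact before proceeding (the file names db.tar.gz and qe.tar.gz match the commands above):

ls -lh db.tar.gz qe.tar.gz
# Listing the contents without extracting verifies each archive is readable
tar -tzf db.tar.gz > /dev/null && echo "db.tar.gz OK"
tar -tzf qe.tar.gz > /dev/null && echo "qe.tar.gz OK"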

2 - Delete the Current Immuta Installation

2.1 - Helm Delete Your Existing Immuta Installation

Once you have the backups secured, you are ready to delete the current Immuta installation and prepare to re-install the upgraded version.

Caution!

This command is destructive! Please ensure you have a backup (Step 1) prior to running this.

Deleting the Immuta installation can be done by executing the command that matches your Helm version:

helm delete <YOUR RELEASE NAME>            # Helm 3
helm delete <YOUR RELEASE NAME> --purge    # Helm 2
Example Output

These resources were kept due to the resource policy:
[PersistentVolumeClaim] test-immuta-backup

release "test" deleted

You should see output like the above indicating that the PersistentVolumeClaim holding the backup was preserved. Assuming you see that, the claim is available to be connected back into the new installation for restoration. If you do not see a similar message, please see Troubleshooting.
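You can also confirm directly that the backup claim survived the delete, as in this sketch (the claim name test-immuta-backup comes from the example output above):

kubectl get pvc | grep backup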

2.2 - Confirm Deletion and Resource Clean-Up

When the helm delete command finishes, confirm that all associated resources have been torn down by running kubectl get pods and verifying that all pods associated with the Immuta installation have been removed.
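For example, assuming the release name test used throughout this guide, the following should return nothing once clean-up is complete:

kubectl get pods | grep test-immuta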

3 - Update Your Values and Re-Install Immuta

Please see the Kubernetes Helm Installation Instructions for detailed procedures to complete the re-install.
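As a rough sketch of what the re-install looks like, assuming the release name test and values file immuta-values.yaml used in this guide (the chart reference immuta/immuta is an assumption; use the chart name from the installation instructions):

# In immuta-values.yaml, enable restore so the new install picks up the backup:
# backup:
#   enabled: true
#   restore:
#     enabled: true

helm install test immuta/immuta --values immuta-values.yaml           # Helm 3
helm install immuta/immuta --name test --values immuta-values.yaml    # Helm 2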

Occasionally, Kubernetes resources are not completely freed by the previous helm delete. If you encounter errors about conflicting resources during the re-install, please see Troubleshooting.