Method B: Complete Backup and Restore Upgrade
This guide outlines the complete backup and restore method for upgrading Helm installations of
Immuta in Kubernetes. This approach is needed when conducting more significant upgrades, such as major
version changes (for example, to 2.7.x), when making large jumps in versions,
or simply whenever advised by your Immuta professional.
Method A is for basic upgrades and is typically acceptable when moving
between minor point releases of Immuta; however, Method B can be used in all cases.
This procedure executes a complete backup and restore of Immuta.
The main steps involved are:
- Capture a complete, current backup of your existing Immuta installation.
- Delete your existing Immuta installation.
- Update the `immuta-values.yaml` to include all desired changes and re-install Immuta with the "restore" option set.
Review the potential impacts of Immuta's CentOS 9 upgrade on your environment before proceeding.
Update your local Helm Chart before you complete the steps in this guide.
1 - Capture a Complete Current Backup
To leverage this procedure, you need Immuta's built-in backup/restore mechanism enabled in your Immuta installation, if it's not already in place.
1.1 - Enable Immuta's Built-in Backup Capabilities
Immuta's current Helm Chart provides a built-in backup/restore mechanism based on a `PersistentVolume` with
an access mode of `ReadWriteMany`. If such a resource is available, Immuta's Helm Chart will set everything up for you
if you enable backups and comment out the `claimName` value:

```yaml
backup:
  # set to true to enable backups. requires RWX persistent volume support
  enabled: true
  # if claimName is set to an existing PVC no PV/PVC will be created
  # claimName: <YOUR ReadWriteMany PersistentVolumeClaim NAME>
  restore:
    # set to true to enable restoring from backups. This should be enabled whenever backups are enabled
    enabled: false
```
If you need to make modifications to these values to enable backups, please do so and then follow Method A to commit them into your cluster before proceeding here.
If your Kubernetes cluster doesn't support `PersistentVolumes` with an access mode of `ReadWriteMany`,
consult the subsections in this section of the documentation that are specific to your cloud provider for assistance
in configuring a compatible resource. If your Kubernetes environment is not represented there, or a workable solution
does not appear available, please contact your Immuta representative to discuss options.
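For illustration only, a compatible claim on a cluster that offers an RWX-capable storage class might look like the sketch below; the claim name, storage class (`efs-sc`), and size are all assumptions that must be adapted to your provider:

```bash
# Hypothetical example: create a ReadWriteMany PVC. The storage class
# "efs-sc" is an assumption; substitute one your cluster actually provides.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: immuta-backup
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 10Gi
EOF
```

The resulting claim could then be referenced through the `claimName` value shown earlier.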
1.2 - Check Existing Backups
After checking/enabling the backup/restore features, you can check to see if any previous backups have run, and if so
when, by running `kubectl get pods | grep backup`:

```bash
kubectl get pods | grep backup
```

```
test-immuta-backup-1576368000-bzmcd   0/2   Completed   0   2d
test-immuta-backup-1576454400-gtlrv   0/2   Completed   0   1d
test-immuta-backup-1576540800-5tcg2   0/2   Completed   0   6h
```
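If no backup pods are listed, another way to check (a minimal sketch, assuming the CronJob is named `test-immuta-backup` as in the examples that follow) is to ask the CronJob itself when it last fired:

```bash
# Prints the last time the backup CronJob was scheduled, if ever.
kubectl get cronjob test-immuta-backup -o jsonpath='{.status.lastScheduleTime}'
```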
1.3 - Manually Trigger a Backup
Even if you have a fairly recent backup, it is advised to take a current, manual one to ensure nothing is lost between that backup and the present time. To do so, execute an ad-hoc backup with the following commands.
Begin by searching for the default backup Cron job:
```bash
kubectl get cronjobs
```

```
NAME                 SCHEDULE    SUSPEND   ACTIVE   LAST SCHEDULE   AGE
test-immuta-backup   0 0 * * *   False     0        <none>          14m
```
Now use that Cron job name to create your own ad-hoc job:
```bash
kubectl create job adhoc-backup --from cronjob/test-immuta-backup
```
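If you'd rather block until the ad-hoc job finishes than poll for pods, `kubectl wait` can do so (the 30-minute timeout is an arbitrary assumption; size it to your data volume):

```bash
# Wait for the ad-hoc backup job created above to report completion.
kubectl wait --for=condition=complete job/adhoc-backup --timeout=30m
```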
Once that job is created, you should see it running in your cluster when you get pods:
```bash
kubectl get pods
```

```
NAME                                       READY   STATUS      RESTARTS   AGE
adhoc-backup-tj8k7                         0/2     Completed   0          8h
test-immuta-database-0                     1/1     Running     0          9h
test-immuta-database-1                     1/1     Running     0          9h
test-immuta-fingerprint-57b5c99f4c-bt5vk   1/1     Running     0          9h
test-immuta-fingerprint-57b5c99f4c-lr969   1/1     Running     0          9h
test-immuta-memcached-0                    1/1     Running     0          9h
test-immuta-memcached-1                    1/1     Running     0          9h
test-immuta-query-engine-0                 1/1     Running     0          9h
test-immuta-query-engine-1                 1/1     Running     0          9h
test-immuta-web-b5c7654fb-2brwb            1/1     Running     0          9h
test-immuta-web-b5c7654fb-csm7n            1/1     Running     0          9h
```
1.4 - Confirm Backup Completed Successfully
Once that job has completed, you can confirm that everything ran successfully by checking the logs of that pod. There are actually two containers in that pod: one that backs up Immuta's Metadata Database and one that backs up the Query Engine. If all went as expected, your outputs to the following commands should be similar to the examples below.
Metadata Database

```bash
kubectl logs adhoc-backup-<RANDOM> -c database-backup
```

```
2019-12-17 07:29:55 UTC LOG: Driver staging directory: /var/lib/immuta/odbc/install/
2019-12-17 07:29:55 UTC LOG: Driver install directory: /var/lib/immuta/odbc/drivers/
==> Backing up database roles
==> Backing up role bometadata
==> Creating backup archive
==> Finished creating backup. Backup can be found at /var/lib/immuta/postgresql/backups/immuta-20191217072955.tar.gz
```
Query Engine Database

```bash
kubectl logs adhoc-backup-<RANDOM> -c query-engine-backup
```

```
2019-12-17 07:29:55 UTC LOG: Driver staging directory: /var/lib/immuta/odbc/install/
2019-12-17 07:29:55 UTC LOG: Driver install directory: /var/lib/immuta/odbc/drivers/
==> Backing up database roles
==> Backing up role immuta
==> Creating backup archive
==> Finished creating backup. Backup can be found at /var/lib/immuta/postgresql/backups/immuta-20191217072955.tar.gz
```
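As a convenience, the final log line of each container can be checked in one pass (a sketch; `<RANDOM>` is the pod suffix placeholder used above):

```bash
# Print the final log line from each backup container; both should read
# "==> Finished creating backup. ..." on success.
for c in database-backup query-engine-backup; do
  kubectl logs adhoc-backup-<RANDOM> -c "$c" | tail -n 1
done
```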
By default, Immuta's backups will be stored in a persistent volume that's typically mounted at
`/var/lib/immuta/postgresql/backups/` in the Cron job and
`/var/lib/immuta/postgresql/restore/` in the database pods.
In the normal case, this volume will persist even across the deletion of the Immuta installation and be picked back
up during re-installation.
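To see which archives are actually present in that volume, a minimal sketch (the pod name follows the `test` release naming used in the earlier examples):

```bash
# List the backup archives currently held in the backup volume.
kubectl exec test-immuta-database-0 -- ls -l /var/lib/immuta/postgresql/backups/
```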
1.5 - (Optional) Create a Local Backup of the Backup
Best Practice: Keep Manual Copy of Backup
Immuta recommends that a manual copy of this backup be made and held for safekeeping until the upgrade has successfully completed:
```bash
kubectl cp test-immuta-database-0:/var/lib/immuta/postgresql/backups/immuta-<LATEST DATE/TIMESTAMP>.tar.gz db.tar.gz
kubectl cp test-immuta-query-engine-0:/var/lib/immuta/postgresql/backups/immuta-<LATEST DATE/TIMESTAMP>.tar.gz qe.tar.gz
```
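If you'd rather not fill in the timestamp by hand, a small sketch like the following resolves the newest archive first (pod names again assume the `test` release):

```bash
# Find the newest backup archive in each pod, then copy it locally.
DB_LATEST=$(kubectl exec test-immuta-database-0 -- sh -c 'ls -t /var/lib/immuta/postgresql/backups/ | head -n 1')
QE_LATEST=$(kubectl exec test-immuta-query-engine-0 -- sh -c 'ls -t /var/lib/immuta/postgresql/backups/ | head -n 1')
kubectl cp "test-immuta-database-0:/var/lib/immuta/postgresql/backups/${DB_LATEST}" db.tar.gz
kubectl cp "test-immuta-query-engine-0:/var/lib/immuta/postgresql/backups/${QE_LATEST}" qe.tar.gz
```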
2 - Delete the Current Immuta Installation
2.1 - Helm Delete Your Existing Immuta Installation
Once you have the backups secured, you are ready to delete the current Immuta installation and prepare to re-install the upgraded version.
This command is destructive! Please ensure you have a backup (Step 1) prior to running this.
Deleting the Immuta installation can be done by executing `helm delete`. On Helm v3:

```bash
helm delete <YOUR RELEASE NAME>
```

On Helm v2, the `--purge` flag is also needed to fully remove the release:

```bash
helm delete <YOUR RELEASE NAME> --purge
```
If using cloud storage (S3, etc.) for your backups, you should see something like:

```
release "test" deleted
```

If using a persistent volume for your backups, you should see something like:

```
These resources were kept due to the resource policy:
[PersistentVolumeClaim] test-immuta-backup

release "test" deleted
```
If using a PVC to store the backup
If you are using a PVC to store your backup data (as opposed to cloud storage), you should see output indicating that the persistent volume claim holding the backup was "kept," as shown in the example output immediately above. This notification means the PVC and underlying PV will remain available for connecting back into the new installation for restoration. This is critical if this is where your backups are stored! If you are using a persistent volume to back up your instance and you do not see such a message, please see Troubleshooting.
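To double-check that the backup claim survived the delete, a one-liner sketch (assuming the claim naming from the output above):

```bash
# The backup PVC should still be listed after helm delete completes.
kubectl get pvc | grep backup
```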
2.2 - If Using `persistence`, Delete Database and Query Engine PVCs
As noted with the backup PVC (if used), `helm delete` will not delete PVCs by default. When using
`persistence` in your Helm deployments (which should be used for all but POV and/or testing deployments),
the database and Query Engine pods will be backed by PVCs in order to preserve their data. When doing a
full backup and restore, it is necessary to manually delete these database and Query Engine PVCs before
proceeding with the reinstall. If you do not, the existing PVCs will be picked up by the new installation,
and the old data will interfere with the restore process.
To delete these PVCs:
2.2.1 - List Relevant PVCs
Once the `helm delete` command finishes, list the PVCs that remain from the Immuta installation:

```bash
kubectl get pvc
```

```
pg-data-test-immuta-database-0       Bound   pvc-36ef1f8f-f07f-4f32-9631-f1bada8f70ed   6Gi   RWO   gp2   3h32m
...
pg-data-test-immuta-query-engine-0   Bound   pvc-19b4a097-eae3-4d70-9373-bf74a937c6d9   2Gi   RWO   gp2   3h32m
...
```
You should see a PVC listed for each database and query-engine pod you have. All of these will
need to be deleted.
2.2.2 - Manually Delete Database and Query-Engine PVCs
To delete these volumes, issue a `kubectl delete` command for each:

```bash
kubectl delete pvc/pg-data-test-immuta-database-0
```

```
persistentvolumeclaim "pg-data-test-immuta-database-0" deleted
```
Delete all PVCs for both the database and the Query Engine, but leave any other PVCs that may exist!
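As a sketch only (the filter assumes the `pg-data-...` naming shown above; review the matched names before deleting anything):

```bash
# Delete only the database and query-engine data PVCs; anything else
# (e.g., the backup PVC) is intentionally left untouched.
for pvc in $(kubectl get pvc -o name | grep -E 'database|query-engine'); do
  kubectl delete "$pvc"
done
```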
2.3 - Confirm Deletion and Resource Clean-Up
Confirm that all associated resources have been torn down by issuing the following and confirming that all pods and PVCs associated with the Immuta installation have been removed.
2.3.1 - Confirm Deletion of All Pods
```bash
kubectl get pods
```

```
No resources found
```
2.3.2 - Confirm Deletion of All Database PVCs
```bash
kubectl get pvc
```

```
No resources found
```

(If you are using a persistent volume for your backups, the kept backup PVC will still be listed; only the database and Query Engine PVCs should be gone.)
3 - Update Your Values and Re-Install Immuta
Please see the Kubernetes Helm Installation Instructions for detailed procedures to complete the
re-install. As clarified in Method A, it is recommended that you remove the values noted there if you are
upgrading from Kubernetes 1.22+; otherwise, your ingress service may not start after the upgrade.
See the Helm Chart release notes for details.
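As a sketch of the re-install itself (the chart reference `immuta/immuta` is an assumption; use the chart source and release name from your original installation), enable the restore option in `immuta-values.yaml` and install:

```bash
# Assumes immuta-values.yaml now contains, under backup:
#   restore:
#     enabled: true
helm install <YOUR RELEASE NAME> immuta/immuta --values immuta-values.yaml
```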
Occasionally, Kubernetes resources are not completely freed by the previous `helm delete`. If, upon re-install, you
have errors relating to conflicting resources, please see Troubleshooting.
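A quick sketch for spotting such leftovers before re-installing (the `test-immuta` prefix assumes the release naming used throughout this guide):

```bash
# Look for anything left over from the old release that could conflict
# with the new installation.
kubectl get all,pvc,configmaps,secrets | grep test-immuta
```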