# Migrating to the New Helm Chart

This guide demonstrates how to upgrade an existing Immuta deployment installed with the older Immuta Helm chart (IHC) to v2024.2 LTS using the Immuta Enterprise Helm chart (IEHC).

{% hint style="warning" %}
**Helm chart deprecation notice**

As of Immuta version 2024.2, the IHC has been deprecated in favor of the IEHC. Their respective `immuta-values.yaml` Helm values files are not compatible.
{% endhint %}

## Prerequisites

### Create a PostgreSQL database

Verify the following before proceeding:

1. The PostgreSQL instance has been provisioned and is actively running.
2. The PostgreSQL instance's hostname/FQDN is [resolvable from within the Kubernetes cluster](https://documentation.immuta.com/2024.3/troubleshooting#common).
3. The PostgreSQL instance is [accepting connections](https://documentation.immuta.com/2024.3/troubleshooting#postgresql).

For additional information, consult the Deployment requirements.
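The resolvability and connectivity checks above can be sketched as a small helper that launches a throwaway pod and attempts a TCP connection to the database port. This is a minimal illustration, not part of the official tooling; the `busybox` image, the 5-second timeout, and the function name are assumptions, and `<postgres-fqdn>` is a placeholder for your instance's hostname.

```shell
# Sketch: verify the PostgreSQL endpoint resolves and accepts TCP connections
# from inside the Kubernetes cluster. Image and timeout are illustrative.
check_pg_reachable() {
  kubectl run pg-connectivity-check --rm --stdin --restart=Never \
    --image docker.io/busybox:latest -- \
    nc -z -w 5 "$1" 5432
}

# Usage: check_pg_reachable <postgres-fqdn>
```

A successful run exits with status 0; a DNS or connection failure surfaces in the pod's output.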

### Validate the Helm release

1. Fetch the metadata for the Helm release associated with Immuta.

   ```shell
   helm get metadata --output yaml <helm-release-name>
   ```
2. Review the output from the previous step and verify the following:
   * The Immuta version (`appVersion`) is either
     * the last LTS (2022.5.x), **or**
     * 2024.1 or newer, but less than 2024.2
   * The Immuta Helm chart (`version`) is greater than or equal to 4.13.5
   * The Immuta Helm chart name (`chart`) is `immuta`
3. If any of these criteria are not met, first perform a Helm upgrade using the IHC. Contact your Immuta representative for guidance.
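The version criteria above can be expressed as a small shell helper for scripting the check. This is a sketch; the function name and output strings are illustrative, not part of the Immuta tooling.

```shell
# Sketch: decide whether an appVersion reported by `helm get metadata`
# satisfies the migration criteria (2022.5.x LTS, or 2024.1.x but below 2024.2).
version_eligible() {
  case "$1" in
    2022.5|2022.5.*|2024.1|2024.1.*) echo "eligible" ;;
    *)                               echo "upgrade with the IHC first" ;;
  esac
}
```

For example, `version_eligible 2024.1.7` prints `eligible`, while `version_eligible 2024.2.0` does not.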

## Metadata database

The IEHC no longer supports deploying a Metadata database (PostgreSQL) inside the Kubernetes cluster. Before transitioning to the IEHC, you must externalize the Metadata database.

### Built-in

The following demonstrates how to take a database backup and import the data into your cloud provider's managed PostgreSQL service.

#### Create a backup of the old database

1. Get the metadata database pod name.

   ```shell
   kubectl get pod --selector "app.kubernetes.io/component=database" --output name
   ```
2. Spawn a shell inside the running metadata database pod.

   ```shell
   kubectl exec --stdin --tty <metadata-database-pod-name> -- sh
   ```
3. Perform a database backup.

   ```shell
   pg_dump --dbname=bometadata --file=/tmp/bometadata.dump --format=custom --no-owner --no-privileges
   ```
4. Type `exit`, and then press `Enter` to exit the shell prompt.
5. Copy file `bometadata.dump` from the pod to the host's working directory.

   ```shell
   kubectl cp <metadata-database-pod-name>:/tmp/bometadata.dump .
   ```
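Steps 1 through 5 above can be combined into a single sketch function. The function name and the `sed` cleanup are illustrative; it assumes `kubectl` is pointed at the namespace and cluster running the IHC deployment.

```shell
# Sketch: back up the in-cluster metadata database and copy the dump into the
# current working directory. Mirrors steps 1-5 above.
backup_metadata_db() {
  # Resolve the metadata database pod name and strip the "pod/" prefix.
  pod="$(kubectl get pod --selector "app.kubernetes.io/component=database" \
    --output name | sed 's|^pod/||')"
  # Run pg_dump inside the pod, then copy the dump file out.
  kubectl exec "$pod" -- pg_dump --dbname=bometadata \
    --file=/tmp/bometadata.dump --format=custom --no-owner --no-privileges &&
  kubectl cp "$pod:/tmp/bometadata.dump" bometadata.dump
}
```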

#### Set up the new database

1. Create a pod named `immuta-setup-db` and spawn a shell.

   ```shell
   kubectl run immuta-setup-db --stdin --tty --rm --image docker.io/bitnami/postgresql:latest -- sh
   ```
2. Connect to the new PostgreSQL database as a superuser. Depending on the cloud provider, the default superuser name (`postgres`) might differ.

   ```shell
   psql --host <postgres-fqdn> --username postgres --port 5432 --password
   ```
3. Create `immuta`, `temporal`, and `temporal_visibility` databases and an `immuta` role.

   ```sql
   CREATE ROLE immuta WITH LOGIN ENCRYPTED PASSWORD '<postgres-password>';
   GRANT immuta TO CURRENT_USER;

   CREATE DATABASE immuta OWNER immuta;
   CREATE DATABASE temporal OWNER immuta;
   CREATE DATABASE temporal_visibility OWNER immuta;

   GRANT ALL ON DATABASE immuta TO immuta;
   GRANT ALL ON DATABASE temporal TO immuta;
   GRANT ALL ON DATABASE temporal_visibility TO immuta;
   ALTER ROLE immuta SET search_path TO bometadata,public;
   REVOKE immuta FROM CURRENT_USER;

   \c immuta
   CREATE EXTENSION pgcrypto;

   \c temporal
   GRANT CREATE ON SCHEMA public TO immuta;

   \c temporal_visibility
   GRANT CREATE ON SCHEMA public TO immuta;
   CREATE EXTENSION btree_gin;
   ```
4. Type `\q`, and then press `Enter` to exit the psql prompt.
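For a repeatable, non-interactive run, the statements above can be saved to a file and applied with `psql --file`. This is a sketch; it assumes the SQL is saved as `setup.sql` in the working directory and that the superuser is named `postgres` (as noted above, the name may differ by cloud provider).

```shell
# Sketch: apply the setup SQL non-interactively. Assumes the statements above
# are saved as setup.sql and the default superuser is named postgres.
run_setup_sql() {
  PGPASSWORD="$2" psql --host "$1" --port 5432 --username postgres \
    --file setup.sql
}

# Usage: run_setup_sql <postgres-fqdn> <superuser-password>
```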

#### Restore the backup to the new database

1. Create a long-running pod named `immuta-restore-db`.

   ```shell
   kubectl run immuta-restore-db --image docker.io/bitnami/postgresql:latest -- sleep infinity
   ```
2. Copy file `bometadata.dump` from the host's working directory to pod `immuta-restore-db`.

   ```shell
   kubectl cp bometadata.dump immuta-restore-db:/tmp
   ```
3. Spawn a shell inside pod `immuta-restore-db`.

   ```shell
   kubectl exec immuta-restore-db --stdin --tty -- sh
   ```
4. Perform a database restore while authenticated as role `immuta`. When prompted for a password, enter the value previously substituted for `<postgres-password>`.

   ```shell
   pg_restore --host=<postgres-fqdn> --port=5432 --username=immuta --password --dbname=immuta --no-owner --role=immuta < /tmp/bometadata.dump
   ```
5. Type `exit`, and then press `Enter` to exit the shell prompt.
6. Delete pod `immuta-restore-db` that was previously created.

   ```shell
   kubectl delete pod/immuta-restore-db
   ```
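After the restore completes, it can be worth sanity-checking that the data landed in the new database. A minimal sketch, assuming `psql` is available where you run it, that the same `<postgres-fqdn>` and `<postgres-password>` values apply, and that the restored objects live in the `bometadata` schema (the `search_path` configured earlier suggests this, but verify against your dump):

```shell
# Sketch: count the tables restored into the bometadata schema as a quick
# post-restore check. A zero count suggests the restore did not succeed.
verify_restore() {
  PGPASSWORD="$2" psql --host "$1" --port 5432 --username immuta \
    --dbname immuta --tuples-only --no-align --command \
    "SELECT count(*) FROM information_schema.tables
     WHERE table_schema = 'bometadata';"
}

# Usage: verify_restore <postgres-fqdn> <postgres-password>
```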

### External

No additional work is required. The existing database can be reused with the new IEHC.

## Helm values

{% hint style="info" %}
**Helm values file compatibility**

The `immuta-values.yaml` Helm values file used by the IHC is not compatible with the new IEHC.
{% endhint %}

1. Rename the existing `immuta-values.yaml` Helm values file used by the IHC.

   ```shell
   mv immuta-values.yaml immuta-values.ihc.yaml
   ```
2. Follow the [installation guide](https://documentation.immuta.com/2024.3/self-managed-deployment/install) for your Kubernetes distribution of choice.
