Install Helm
Prerequisites
Kubernetes 1.16 or greater
Helm 3.2 or greater
Rocky Linux 9
Review the potential impacts of Immuta's Rocky Linux 9 upgrade to your environment before proceeding:
ODBC Drivers
Your ODBC drivers must be compatible with Enterprise Linux 9 or Red Hat Enterprise Linux 9.
Container Runtimes
You must run a supported version of Kubernetes.
Use at least Docker v20.10.10 if using Docker as the container runtime.
Use at least containerd 1.4.10 if using containerd as the container runtime.
OpenSSL 3.0
Rocky Linux 9 uses OpenSSL 3.0, which has deprecated support for older insecure hashes and TLS versions, such as TLS 1.0 and TLS 1.1. This shouldn't impact you unless you are using an old, insecure certificate; in that case, the certificate will no longer work. See the OpenSSL migration guide for more information.
FIPS Environments
If you run Immuta 2022.5.x containers in a FIPS-enabled environment, they will now fail. Helm Chart 4.11 contains a feature that lets you override the openssl.cnf file, which can be used to allow Immuta to run in your environment, mimicking the CentOS 7 behavior.
Using a Kubernetes namespace
If deploying Immuta into a Kubernetes namespace other than the default, you must include the --namespace option in all helm and kubectl commands provided throughout this section.
1 - Configure the Environment
1.1 - Check Helm Version
Immuta's Helm Chart requires Helm version 3+.
Run helm version to verify the version of Helm you are using:
Helm 3 Example Output
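Helm 3 reports its version in the version.BuildInfo format; for example (exact values will differ by release):

```shell
helm version
# version.BuildInfo{Version:"v3.12.3", GitCommit:"...", GitTreeState:"clean", GoVersion:"go1.20.7"}
```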
1.2 - Configure Immuta's Helm Chart Repo
In order to deploy Immuta to your Kubernetes cluster, you must be able to access the Immuta Helm Chart Repository and the Immuta Docker Registry. You can obtain credentials from your Immuta support professional.
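Adding the repository typically looks like the following sketch; the repository URL and credentials here are placeholders, so substitute the values provided by your Immuta support professional:

```shell
# Placeholder URL and credentials -- use the values Immuta provides
helm repo add immuta https://<IMMUTA_HELM_REPOSITORY_URL> \
  --username <USERNAME> \
  --password <PASSWORD>
```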
--pass-credentials Flag
If you encounter an unauthorized error when adding Immuta's Helm Chart, you can retry with helm repo add --pass-credentials.
By default, usernames and passwords are only passed to the URL location of the Helm repository; they are scoped to the scheme, host, and port of the Helm repository. To pass the username and password to other domains Helm may encounter when it retrieves a chart, use the --pass-credentials flag. This flag restores the old behavior for a single repository as an opt-in behavior.
If you use a username and password for a Helm repository, you can audit the repository to check for another domain that could have received the credentials. In the index.yaml file for that repository, look for another domain in the URLs list for the chart versions. If another domain is found and that chart version is pulled or installed, the credentials will be passed on.
Run helm repo list to ensure Immuta's Helm Chart repository has been successfully added:
Example Output
Don't forget the image pull secret!
You must create a Kubernetes image pull secret in the namespace that you are deploying Immuta in, or the Pods will fail to start due to ErrImagePull.
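A sketch of creating that secret with kubectl; the secret name, registry hostname, and credentials are placeholders (use the registry credentials supplied by Immuta, and add --namespace if deploying outside the default namespace):

```shell
kubectl create secret docker-registry immuta-registry \
  --docker-server=<IMMUTA_DOCKER_REGISTRY_HOSTNAME> \
  --docker-username=<USERNAME> \
  --docker-password=<PASSWORD> \
  --docker-email=<EMAIL>
```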
Run kubectl get secrets to confirm your Kubernetes image pull secret is in place:
Example Output
1.3 - Check/Update Your Local Immuta Helm Chart Version
Run helm search repo immuta to check the version of your local copy of Immuta's Helm Chart:
Example Output
Update your local Chart by running helm repo update.
To perform an upgrade without upgrading to the latest version of the Chart, run helm list to determine the Chart version of the installed release, and then specify that version using the --version argument of helm upgrade.
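For example, assuming the release and chart are both named immuta and the chart version below is a placeholder (adjust to what helm list reports):

```shell
helm repo update
helm list          # note the CHART column, e.g. immuta-4.11.2
helm upgrade immuta immuta/immuta --version 4.11.2 --values immuta-values.yaml
```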
2 - Configure Immuta Helm Values
Once you have the Immuta Docker Registry and Helm Chart Repository configured, download the immuta-values.yaml file. This file is a recommended starting point for your installation.
Modify immuta-values.yaml based on the determined configuration for your Kubernetes cluster and the desired Immuta installation. You can change a number of settings in this file; guidance for configuring persistence, backups, and resource limits is provided below. See Immuta Helm Chart Options for a comprehensive list of configuration options.
Replace the placeholder password value "<SPECIFY_PASSWORD_THAT_MEETS_YOUR_ORGANIZATIONS_POLICIES>" with a secure password that meets your organization's password policies.
Avoid these special characters in generated passwords: whitespace, $, &, :, \, /, '
Default Helm Values
Modifying any file bundled inside the Helm Chart could cause unforeseen issues and as such is not supported by Immuta. This includes but is not limited to the values.yaml file that contains default configurations for the Helm deployment. Make any custom configurations in the immuta-values.yaml file, which can then be passed to helm install immuta using the --values flag as described in Deploy Immuta.
3 - Configure Persistence
If you would like to disable persistence to disk for the database and query-engine components, configure database.persistence.enabled=false and/or queryEngine.persistence.enabled=false in immuta-values.yaml. Disabling persistence is acceptable for test environments; however, we strongly recommend against disabling persistence in production environments, as this leaves your database in ephemeral storage.
By default, database.persistence.enabled and queryEngine.persistence.enabled are set to true and request 120Gi of storage for each component. Recommendations for the Immuta Metadata Database storage size for POV, staging, and production deployments are provided in immuta-values.yaml. However, the actual size needed is a function of the number of data sources you intend to create and the amount of logging/auditing (and its retention) that will be used in your system.
Provide Room for Growth
Provide plenty of room for growth here, as Immuta's operation will be severely impacted should database storage reach capacity.
While the Immuta query engine persistent storage size is configurable as well, the default size of 20Gi should be sufficient for operations in nearly all environments.
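Putting these options together, a persistence excerpt of immuta-values.yaml might look like the following sketch; the size key name is an assumption here, so confirm the exact option names against Immuta Helm Chart Options:

```yaml
database:
  persistence:
    enabled: true   # set to false only in test environments
    size: 120Gi     # leave generous room for growth
queryEngine:
  persistence:
    enabled: true
    size: 20Gi      # default is sufficient for nearly all environments
```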
Limitations on modifying database and query-engine persistence
Once persistence is set to either true or false for database or query-engine, it cannot be changed for the deployment. Modifying persistence will require a fresh installation or a full backup and restore procedure as per 4.2 - Restore From Backup - Immuta Kubernetes Re-installation.
4 - Configure Backup and Restoration Values
At this point, the procedure forks depending on whether you are installing with the intent of restoring from a backup. Use the bullets below to determine which step to follow.
If this is a new install with no restoration needed, follow Step 4.1.
If you are upgrading a previous installation using the full backup and restore (Method B), follow Step 4.2.
4.1 - Initial Immuta Kubernetes Installation -- No Backup Restoration
Immuta's Helm Chart supports taking backups and storing them in a PersistentVolume or copying them directly to cloud provider blob storage, including AWS S3, Azure Blob Storage, and Google Cloud Storage.
To configure backups with blob storage, reference the backup section in immuta-values.yaml and consult the subsections of this section of the documentation that are specific to your cloud provider for assistance in configuring a compatible resource. If your Kubernetes environment is not represented there, or a workable solution does not appear available, please contact your Immuta representative to discuss options.
If using volumes, the Kubernetes cluster Immuta is being installed into must support PersistentVolumes with an access mode of ReadWriteMany. If such a resource is available, Immuta's Helm Chart will set everything up for you if you enable backups and comment out the volume and claimName fields.
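As a hypothetical sketch of the volume-backed configuration just described (the exact key layout is an assumption; follow the backup section in your copy of immuta-values.yaml):

```yaml
backup:
  enabled: true
  # With volume and claimName commented out, the chart creates a new
  # ReadWriteMany PersistentVolume for backups automatically:
  # volume:
  #   claimName: <EXISTING_PVC_NAME>
```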
4.2 - Restore From Backup - Immuta Kubernetes Re-Installation
If you are upgrading a previous installation using the full backup and restore procedure (Method B), a valid backup configuration must be available in the Helm values. Enable the functionality to restore from backups by setting the restore.enabled option to true in immuta-values.yaml.
If using the volume backup type, an existing PersistentVolumeClaim name needs to be configured in your immuta-values.yaml because the persistentVolumeClaimSpec is only used to create a new, empty volume.
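A hypothetical restore excerpt, assuming the key layout in the shipped immuta-values.yaml (the claimName nesting is an assumption; confirm against your copy of the file):

```yaml
restore:
  enabled: true
backup:
  volume:
    claimName: <YOUR ReadWriteMany PersistentVolumeClaim NAME>
```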
If you are unsure of the value for <YOUR ReadWriteMany PersistentVolumeClaim NAME>, run kubectl get pvc to list it:
Example Output
5 - Set Replicas and Resource Limits
Adhering to the guidelines and best practices for replicas and resource limits outlined below is essential for optimizing performance, ensuring cluster stability, controlling costs, and maintaining a secure and manageable environment. These settings help strike a balance between providing sufficient resources to function optimally and making efficient use of the underlying infrastructure.
5.1 - Configure Replicas
Set the following replica parameters in your Helm Chart to the values listed below:
5.2 - Set Resource Limits
The Immuta Helm Chart supports resource limits for all components. Set resource requests and limits for the database and query engine in the Helm values. Without these limits, the pods will be the first targets for eviction, which can cause issues during backup and restore, since that process consumes a lot of memory.
Add this YAML snippet to your Helm values file:
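A sketch of such a snippet, assuming the chart exposes standard Kubernetes resources blocks per component (confirm key names against Immuta Helm Chart Options, and adjust values to your environment):

```yaml
database:
  resources:
    requests:
      memory: 2Gi
    limits:
      memory: 2Gi
queryEngine:
  resources:
    requests:
      memory: 2Gi
    limits:
      memory: 2Gi
```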
The database is the only component that needs significant resources, especially if you don't use the query engine. For a small installation, you can set the database memory resources to 2Gi; if you see slower performance over time, you can increase this number to improve performance.
Setting CPU resources and limits is optional. Resource contention over CPU is not a common occurrence for Immuta, so setting a CPU resource and limit won't have a significant effect.
6 - Deploy Immuta
Run the following command to deploy Immuta:
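Assuming the repository was added under the name immuta and the chart is named immuta (adjust both to your setup), the deployment command looks like:

```shell
helm install immuta immuta/immuta \
  --values immuta-values.yaml \
  --namespace <NAMESPACE>
```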
Troubleshooting
If you encounter errors while deploying the Immuta Helm Chart, see Troubleshooting.
Advanced Installations
Manage TLS
HTTP communication using TLS certificates is enabled by default in Immuta's Helm Chart for both internal (inside the Kubernetes cluster) and external (between the Kubernetes ingress and the outside world) communications. This is accomplished through the generation of a local certificate authority (CA) that signs certificates for each service, all handled automatically by the Immuta installation. While not recommended, if TLS must be disabled for some reason, you can do so by setting tls.enabled to false in the values file.
Use Your Own TLS Certificate(s)
Best Practice: TLS Certification
Immuta recommends using your own TLS certificate for external (outside the Kubernetes cluster) communications in Immuta production deployments.
Using your own certificates requires you to create a Kubernetes Secret containing the private key, certificate, and certificate authority certificate. This can be easily done using kubectl:
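A sketch of that command; the file paths and secret key names are placeholders, and the secret is named immuta-external-tls to match the value referenced in this section:

```shell
kubectl create secret generic immuta-external-tls \
  --from-file=tls.crt=path/to/certificate.pem \
  --from-file=tls.key=path/to/private-key.pem \
  --from-file=ca.crt=path/to/ca-certificate.pem
```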
Make sure your certificates are correct
Make sure the certificate's Common Name (CN) and/or Subject Alternative Name (SAN) matches the specified externalHostname or contains an appropriate wildcard.
After creating the Kubernetes Secret, specify its use in the external ingress by setting tls.externalSecretName to immuta-external-tls in your immuta-values.yaml file:
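For example:

```yaml
tls:
  externalSecretName: immuta-external-tls
```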
Using Argo CD
The Immuta Helm Chart (version 4.5.0+) can be deployed using Argo CD.
For Argo CD versions older than 1.7.0, you must use the following Helm values in order for the TLS generation hook to run successfully. Starting with Argo CD version 1.7.0, the default TLS generation hook values can be used.
tls.manageGeneratedSecret must be set to true when using Argo CD to deploy Immuta; otherwise, the generated TLS secret will be shown as OutOfSync (requires pruning) in Argo CD. Pruning the Secret would break TLS for the deployment, so it is important to set this value to prevent that from happening.
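For example, in your immuta-values.yaml:

```yaml
tls:
  manageGeneratedSecret: true
```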
Troubleshoot
For detailed assistance in troubleshooting your installation, contact your Immuta representative or see Helm Troubleshooting.