The following procedure walks through the process of changing passwords for the database users in the Immuta Database.
The commands outlined here will need to be altered depending on your Helm release name and chosen passwords. Depending on your environment, there may be other changes required for the commands to complete successfully, including, but not limited to, Kubernetes namespace, kubectl context, and Helm values file name.
This process results in downtime.
Scale the database StatefulSet to 1 replica:
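A minimal sketch of this step, assuming the StatefulSet is named `immuta-database` (the name referenced later in this procedure; confirm with `kubectl get statefulsets`):

```shell
# Scale the database StatefulSet down to a single replica before changing passwords.
kubectl scale statefulset immuta-database --replicas=1
```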
Change `database.superuserPassword`:
Alter the postgres user password:
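A hedged sketch of the password change, assuming you can exec into the database pod; the pod name and psql invocation may differ in your deployment, and the same pattern applies to the replicator and bometa steps below:

```shell
# Open a psql session in the primary database pod and change the postgres password.
kubectl exec -it immuta-database-0 -- psql -U postgres \
  -c "ALTER USER postgres WITH PASSWORD '<new-password>';"
```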
Update `database.superuserPassword` with `<new-password>` in `immuta-values.yaml`.
Change `database.replicationPassword`:
Alter the replicator user password:
Update `database.replicationPassword` with `<new-password>` in `immuta-values.yaml`.
Change `database.password`:
Alter the `bometa` user password:
Update `database.password` with `<new-password>` in `immuta-values.yaml`.
Update `database.patroniApiPassword` with `<new-password>` in `immuta-values.yaml`.
Run `helm upgrade` to persist the changes and scale the database StatefulSet back up:
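A sketch of this step, using the release name placeholder from these instructions and an illustrative previous replica count of 2:

```shell
# Apply the updated values, then scale the database StatefulSet back up.
helm upgrade <YOUR RELEASE NAME> immuta/immuta --values immuta-values.yaml
kubectl scale statefulset immuta-database --replicas=2
```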
Restart web pods:
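A minimal sketch, assuming the web component runs as a Deployment named `immuta-web` (confirm with `kubectl get deployments`):

```shell
# Trigger a rolling restart of the web pods.
kubectl rollout restart deployment immuta-web
```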
Users have the option to use an existing Kubernetes secret for Immuta database passwords used in Helm installations.
Update your `existingSecret` values in your Kubernetes environment.
Get the current replica counts:
Scale the database StatefulSet to 1 replica:
Change the value corresponding to `database.superuserPassword` in the existing Kubernetes Secret.
Alter Postgres user password:
Change the value corresponding to `database.replicationPassword` in the existing Kubernetes Secret.
Alter replicator user password:
Change the value corresponding to `database.password` in the existing Kubernetes Secret.
Alter the `bometa` user password:
Scale the `immuta-database` StatefulSet back up to the replica count determined earlier:
Restart web pods:
Using a Kubernetes namespace
If deploying Immuta into a Kubernetes namespace other than the default, you must include the `--namespace` option in all `helm` and `kubectl` commands provided throughout this section.
Immuta's Helm Chart requires Helm version 3+.
New installations of Immuta must use the latest version of Helm 3 and Immuta's latest Chart.
Run `helm version` to verify the version of Helm you are using:
In order to deploy Immuta to your Kubernetes cluster, you must be able to access the Immuta Helm Chart Repository and the Immuta Docker Registry. You can obtain credentials from your Immuta support professional.
Run `helm repo list` to ensure Immuta's Helm Chart repository has been successfully added:
Example Output
Don't forget the image pull secret!
You must create a Kubernetes Image Pull Secret in the namespace that you are deploying Immuta in, or the installation will fail.
Run `kubectl get secrets` to confirm your Kubernetes image pull secret is in place:
Example Output
No Rollback
Immuta's migrations to your database are one way; there is no way to revert to an earlier version of the software. If you must roll back, you will need to back up and delete your current installation, then restore from the backup to the appropriate version of the software.
No Modifying Persistence
Once persistence is set to either `true` or `false` for the `database` or `query-engine`, it cannot be changed for the deployment. Modifying persistence will require a fresh installation or a full backup and restore procedure as per Method B: Complete Backup and Restore Upgrade.
Run `helm search repo immuta` to check the version of your local copy of Immuta's Helm Chart:
Example Output
Update your local Chart by running `helm repo update`.
To perform an upgrade without upgrading to the latest version of the Chart, run `helm list` to determine the Chart version of the installed release, and then specify that version using the `--version` argument of `helm upgrade`.
Run `helm list` to confirm Helm connectivity and verify the current Immuta installation:
Example Output
Make note of:
NAME - This is the `<YOUR RELEASE NAME>` that will be used in the remainder of these instructions.
CHART - This is the version of Immuta's Helm Chart that your instance was deployed under.
You will need the Helm values associated with your installation, which are typically stored in an `immuta-values.yaml` file. If you do not possess the original values file, these can be extracted from the existing installation using:
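For example, with Helm 3:

```shell
# Write the release's user-supplied values to a local file.
helm get values <YOUR RELEASE NAME> -o yaml > immuta-values.yaml
```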
Select your method:
Method B - Backup and Restore: This method is intended primarily for recovery scenarios and is only to be used if you have been advised to by an Immuta representative. Reach out to your Immuta representative for instructions.
Rocky Linux 9
Review the potential impacts of Immuta's Rocky Linux 9 upgrade to your environment before proceeding:
ODBC Drivers
Your ODBC drivers should be compatible with Enterprise Linux 9 or Red Hat Enterprise Linux 9.
Container Runtimes
You must run a supported version of Kubernetes.
Use at least Docker v20.10.10 if using Docker as the container runtime.
Use at least containerd 1.4.10 if using containerd as the container runtime.
OpenSSL 3.0
CentOS Stream 9 uses OpenSSL 3.0, which has deprecated support for older insecure hashes and TLS versions, such as TLS 1.0 and TLS 1.1. This shouldn't impact you unless you are using an old, insecure certificate. In that case, the certificate will no longer work. See the OpenSSL migration guide for more information.
FIPS Environments
If you run Immuta 2022.5.x containers in a FIPS-enabled environment, they will now fail. Helm Chart 4.11 contains a feature for you to override the `openssl.cnf` file, which can be used to allow Immuta to run in your environment, mimicking the CentOS 7 behavior.
After you make any desired changes in your `immuta-values.yaml` file, you can apply these changes by running `helm upgrade`:
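For example:

```shell
helm upgrade <YOUR RELEASE NAME> immuta/immuta --values immuta-values.yaml
```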
Note: Errors can occur when upgrading Chart versions; these are typically resolved by making slight modifications to your values to accommodate the changes in the Chart. Downloading an updated copy of the `immuta-values.yaml` and comparing it to your existing values is often a great way to debug such occurrences.
If you are on Kubernetes 1.22+, remove `nginxIngress.controller.image.tag=v0.49.3` when upgrading; otherwise, your ingress service may not start after the upgrade.
Helm 3.2 or greater
Kubernetes: See the Install Immuta page for a list of versions Immuta supports.
Immuta uses Helm to manage and orchestrate Kubernetes deployments. Check the Helm Release Notes to ensure you are using the correct Helm Chart with your version of Immuta.
Database backups for the metadata database and Query Engine may be stored in either cloud-based blob storage or a Persistent Volume in Kubernetes.
Backups may be stored using one of the following cloud-based blob storage services:
AWS S3
Supports authentication via AWS Access Key ID / Secret Key, IAM Roles via kube2iam or kiam, or IAM Roles in EKS.
Azure Blob Storage
Supports authentication via Azure Storage Key, Azure SAS Token, or Azure Managed Identities.
Google Cloud Storage
Supports authentication via Google Service Account Key.
When database persistence is enabled, Immuta requires access to `PersistentVolumes` through the use of a persistent volume claim template. These volumes should normally be provided by a block device, such as AWS EBS, Azure Disk, or GCE Persistent Disk.
Additionally, when database persistence is enabled, Immuta requires the ability to run an `initContainer` as root. When `PodSecurityPolicies` are in place, service accounts must be granted access to use a `PodSecurityPolicy` with the ability to `RunAsAny` user.
The Immuta Helm Chart supports RBAC and will try to create all needed RBAC roles by default.
Best Practice: Use Nginx Ingress Controller
Immuta recommends that you use the Nginx Ingress Controller because it supports both HTTP and TCP ingress.
Immuta needs Ingress for two services:
Immuta Web Service (HTTP)
Immuta Query Engine (TCP)
The Immuta Helm Chart creates Ingress resources for HTTP services (the Immuta Web Service), but because of limitations with Kubernetes Ingress resources, TCP ingress must be configured separately. The configuration for TCP ingress depends on the Ingress Controller you are using in your cluster. The configuration for TCP ingress is optional if you will only use integrations, and it can be disabled.
To simplify the configuration for cluster Ingress, the Immuta Helm Chart contains an optional Nginx Ingress component that may be used to configure an Nginx Ingress Controller specifically for Immuta. Contact your Immuta Support Professional for more information.
Immuta’s suggested minimum node size has 4 CPUs and 16GB RAM. The default Immuta Helm deployment requires at least 3 nodes.
All Immuta services use TLS certificates to enable communication over HTTPS. In order to support many configurations, the Immuta Helm Chart can configure internal and external communication independently. If TLS is enabled, by default a certificate authority will be generated and then used to sign a certificate for both internal and external communications. See Enabling TLS for instructions on configuring TLS.
Internal HTTPS communication refers to all communication between Immuta services. External HTTPS communication refers to communication between clients and the Immuta Query Engine and Web Service, which is configured using a Kubernetes Ingress resource.
Audience: System Administrators
Content Summary: The Immuta Helm installation integrates well with Kubernetes on AWS. This guide walks through the various components that can be set up.
Prerequisite: An Amazon EKS cluster with a recommended minimum of 3 m5.xlarge worker nodes.
Using a Kubernetes namespace
If deploying Immuta into a Kubernetes namespace other than the default, you must include the `--namespace` option in all `helm` and `kubectl` commands provided throughout this section.
As of Kubernetes 1.23+ on EKS, you must configure the EBS CSI driver in order for the Immuta Helm deployment to be able to request volumes for storage. Follow these instructions:
Upon cluster creation, create an IAM policy and role and associate it with the cluster. See AWS documentation for details: Creating the Amazon EBS CSI driver IAM role for service accounts.
Upon cluster creation, add the EBS CSI driver as an add-on to the cluster. See AWS documentation for details: Managing the Amazon EBS CSI driver as an Amazon EKS add-on.
For deploying Immuta on a Kubernetes cluster using the AWS cloud provider, you can mostly follow the Kubernetes Helm installation guide.
The only deviations from that guide are in the custom values file(s) you create. You will want to incorporate any changes referenced throughout this guide, particularly in the Backups and Load Balancing sections below.
Best Practice: Use S3 for Shared Storage
On AWS, Immuta recommends that you use S3 for shared storage.
AWS IAM Best Practices
When using AWS IAM, make sure that you are using the best practices outlined here: AWS IAM Best Practices.
Best Practice: Production and Persistence
If deploying Immuta to a production environment using the built-in metadata database, it is recommended to resize the `/` partition on each node to at least 50 GB. The default size for many cloud providers is 20 GB.
To begin, you will need an IAM role that Immuta can use to access the S3 bucket from your Kubernetes cluster. There are four options for role assumption:
IAM Roles for Service Accounts: recommended for EKS.
Kube2iam or kiam: recommended if you have other workloads running in the cluster.
Instance profile: recommended if only Immuta is running in the cluster.
AWS secret access keys: simplest set-up if access keys and secrets are allowed in your environment.
The role you choose above must have at least the following IAM permissions:
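The original permission list is not preserved here. As a hedged sketch, a minimal policy for writing, listing, and pruning backups in a single bucket might look like the following; confirm the exact required actions with your Immuta representative:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::<YOUR BACKUP BUCKET>"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::<YOUR BACKUP BUCKET>/*"
    }
  ]
}
```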
The easiest way to expose your Immuta deployment running on Kubernetes with the AWS cloud provider is to set up nginx ingress as `serviceType: LoadBalancer` and let the chart handle creation of an ELB.
Best Practices: ELB Listeners Configured to Use SSL
For best performance and to avoid any issues with web sockets, the ELB listeners need to be configured to use SSL instead of HTTPS.
If you are using the included ingress controller, it will create a Kubernetes LoadBalancer Service to expose Immuta outside of your cluster. The following options are available for configuring the LoadBalancer Service:
`nginxIngress.controller.service.annotations`: Useful for setting options such as creating an internal load balancer or configuring TLS termination at the load balancer.
`nginxIngress.controller.service.loadBalancerSourceRanges`: Used to limit which client IP addresses can access the load balancer.
`nginxIngress.controller.service.externalTrafficPolicy`: Useful when working with Network Load Balancers on AWS. It can be set to "Local" to allow the client IP address to be propagated to the Pods.
Possible values for these various settings can be found in the Kubernetes Documentation.
If you would like to use automatic ELB provisioning, you can use the following values:
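The original values snippet is not preserved here; a minimal sketch, with the key layout inferred from the nginx ingress options listed above (verify against the chart's values):

```yaml
nginxIngress:
  controller:
    serviceType: LoadBalancer
    service:
      annotations:
        # Illustrative: have the ELB speak SSL to the backend, per the
        # best practice above.
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
```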
You can then manually edit the ELB configuration in the AWS console to use ACM TLS certificates to ensure your HTTPS traffic is secured by a trusted certificate. For instructions on doing this, please see Amazon's guide on how to Configure an HTTPS Listener for Your Classic Load Balancer.
Another option is to set up nginx ingress with `serviceType: NodePort` and configure load balancers outside of the cluster.
For example,
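A minimal sketch of the NodePort configuration, assuming the same key layout as above:

```yaml
nginxIngress:
  controller:
    serviceType: NodePort
```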
To determine the ports to configure the load balancer for, examine the Service configuration; this will print out the port name and port. For example,
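A hedged sketch, assuming an ingress controller Service named `immuta-nginx-ingress-controller` (find yours with `kubectl get svc`):

```shell
# List the port names and assigned NodePorts on the ingress controller Service.
kubectl get svc immuta-nginx-ingress-controller \
  -o jsonpath='{range .spec.ports[*]}{.name}{": "}{.nodePort}{"\n"}{end}'
# Example output (values will differ):
#   http: 30080
#   https: 30443
#   postgres: 30432
```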
The Immuta deployment to EKS has a very low maintenance burden.
Best Practices: Installation Maintenance
Immuta recommends the following basic procedures for monitoring and periodic maintenance of your installation:
Periodically examine the contents of S3 to ensure database backups exist for the expected time range.
Ensure your Immuta installation is current and update it if it is not per the update instructions.
Be aware of the solutions to common management tasks for Kubernetes deployments.
If `kubectl` does not meet your monitoring needs, we recommend installing the Kubernetes Dashboard using the AWS provided instructions.
Ensure that your Immuta deployment is taking regular backups to AWS S3.
Your Immuta deployment is highly available and resilient to failure. For some catastrophic failures, recovery from backup may be required. Below is a list of failure conditions and the steps necessary to ensure Immuta is operational.
Internal Immuta Service Failure: Because Immuta is running in a Kubernetes deployment, no action should be necessary. Should a failure occur that is not automatically resolved, follow Immuta backup restoration procedures.
EKS Cluster Failure: Should your EKS cluster experience a failure, simply create a new cluster and follow Immuta backup restoration procedures.
Availability Zone Failure: Because EKS and ELB as well as the Immuta installation within EKS are designed to tolerate the failure of an availability zone, there are no steps needed to address the failure of an availability zone.
Region Failure: To provide recovery capability in the unlikely event of an AWS Region failure, Immuta recommends periodically copying database backups into an S3 bucket in a different AWS region. Should you experience a region failure, simply create a new cluster in a working region and follow Immuta backup restoration procedures.
See the AWS Documentation for more information on managing service limits to allow for proper disaster recovery.
Kubernetes 1.16 or greater
Helm 3.2 or greater
Rocky Linux 9
Review the potential impacts of Immuta's Rocky Linux 9 upgrade to your environment before proceeding:
ODBC Drivers
Your ODBC drivers should be compatible with Enterprise Linux 9 or Red Hat Enterprise Linux 9.
Container Runtimes
You must run a supported version of Kubernetes.
Use at least Docker v20.10.10 if using Docker as the container runtime.
Use at least containerd 1.4.10 if using containerd as the container runtime.
OpenSSL 3.0
CentOS Stream 9 uses OpenSSL 3.0, which has deprecated support for older insecure hashes and TLS versions, such as TLS 1.0 and TLS 1.1. This shouldn't impact you unless you are using an old, insecure certificate. In that case, the certificate will no longer work. See the OpenSSL migration guide for more information.
FIPS Environments
If you run Immuta 2022.5.x containers in a FIPS-enabled environment, they will now fail. Helm Chart 4.11 contains a feature for you to override the `openssl.cnf` file, which can be used to allow Immuta to run in your environment, mimicking the CentOS 7 behavior.
Using a Kubernetes namespace
If deploying Immuta into a Kubernetes namespace other than the default, you must include the `--namespace` option in all `helm` and `kubectl` commands provided throughout this section.
Immuta's Helm Chart requires Helm version 3+.
Run `helm version` to verify the version of Helm you are using:
Helm 3 Example Output
In order to deploy Immuta to your Kubernetes cluster, you must be able to access the Immuta Helm Chart Repository and the Immuta Docker Registry. You can obtain credentials from your Immuta support professional.
--pass-credentials Flag
If you encounter an unauthorized error when adding Immuta's Helm Chart, you can run `helm repo add --pass-credentials`.
Usernames and passwords are only passed to the URL location of the Helm repository by default. The username and password are scoped to the scheme, host, and port of the Helm repository. To pass the username and password to other domains Helm may encounter when it goes to retrieve a chart, the `--pass-credentials` flag can be used. This flag restores the old behavior for a single repository as an opt-in behavior.
If you use a username and password for a Helm repository, you can audit the Helm repository in order to check for another domain that could have received the credentials. In the `index.yaml` file for that repository, look for another domain in the URLs list for the chart versions. If another domain is found and that chart version is pulled or installed, the credentials will be passed on.
Run `helm repo list` to ensure Immuta's Helm Chart repository has been successfully added:
Example Output
Don't forget the image pull secret!
You must create a Kubernetes Image Pull Secret in the namespace that you are deploying Immuta in, or the Pods will fail to start due to `ErrImagePull`.
Run `kubectl get secrets` to confirm your Kubernetes image pull secret is in place:
Example Output
Run `helm search repo immuta` to check the version of your local copy of Immuta's Helm Chart:
Example Output
Update your local Chart by running `helm repo update`.
To perform an upgrade without upgrading to the latest version of the Chart, run `helm list` to determine the Chart version of the installed release, and then specify that version using the `--version` argument of `helm upgrade`.
Once you have the Immuta Docker Registry and Helm Chart Repository configured, download the immuta-values.yaml file. This file is a recommended starting point for your installation.
Modify `immuta-values.yaml` based on the determined configuration for your Kubernetes cluster and the desired Immuta installation. You can change a number of settings in this file.
Guidance for configuring persistence, backups, and resource limits is provided below. See Immuta Helm Chart Options for a comprehensive list of configuration options.
Replace the placeholder password value `"<SPECIFY_PASSWORD_THAT_MEETS_YOUR_ORGANIZATIONS_POLICIES>"` with a secure password that meets your organization's password policies.
Avoid these special characters in generated passwords: whitespace, `$`, `&`, `:`, `\`, `/`, `'`.
Default Helm Values
Modifying any file bundled inside the Helm Chart could cause unforeseen issues and as such is not supported by Immuta. This includes, but is not limited to, the `values.yaml` file that contains default configurations for the Helm deployment. Any custom configurations can be made in the `immuta-values.yaml` file and then passed into `helm install immuta` by using the `--values` flag as described in Deploy Immuta.
If you would like to disable persistence to disk for the `database` and `query-engine` components, you can do so by configuring `database.persistence.enabled=false` and/or `queryEngine.persistence.enabled=false` in `immuta-values.yaml`. Disabling persistence can be done for test environments. However, we strongly recommend against disabling persistence in production environments, as this leaves your database in ephemeral storage.
By default, `database.persistence.enabled` and `queryEngine.persistence.enabled` are set to `true` and request `120Gi` of storage for each component. Recommendations for the Immuta Metadata Database storage size for POV, Staging, and Production deployments are provided in the `immuta-values.yaml` as shown below. However, the actual size needed is a function of the number of data sources you intend to create and the amount of logging/auditing (and its retention) that will be used in your system.
Provide Room for Growth
Provide plenty of room for growth here, as Immuta's operation will be severely impacted should database storage reach capacity.
While the Immuta query engine persistent storage size is configurable as well, the default size of `20Gi` should be sufficient for operations in nearly all environments.
Limitations on modifying database and query-engine persistence
Once persistence is set to either `true` or `false` for `database` or `query-engine`, it cannot be changed for the deployment. Modifying persistence will require a fresh installation or a full backup and restore procedure as per 4.2 - Restore From Backup - Immuta Kubernetes Re-installation.
At this point this procedure forks depending on whether you are installing with the intent of restoring from a backup or not. Use the bullets below to determine which step to follow.
If this is a new install with no restoration needed, follow Step 4.1.
If you are upgrading a previous installation using the full backup and restore (Method B), follow Step 4.2.
Immuta's Helm Chart has support for taking backups and storing them in a `PersistentVolume` or copying them directly to cloud provider blob storage, including AWS S3, Azure Blob Storage, and Google Storage.
To configure backups with blob storage, reference the `backup` section in `immuta-values.yaml` and consult the subsections of this section of the documentation that are specific to your cloud provider for assistance in configuring a compatible resource. If your Kubernetes environment is not represented there, or a workable solution does not appear available, please contact your Immuta representative to discuss options.
If using volumes, the Kubernetes cluster Immuta is being installed into must support `PersistentVolumes` with an access mode of `ReadWriteMany`. If such a resource is available, Immuta's Helm Chart will set everything up for you if you enable backups and comment out the `volume` and `claimName`.
If you are upgrading a previous installation using the full backup and restore procedure (Method B), a valid backup configuration must be available in the Helm values. Enable the functionality to restore from backups by setting the `restore.enabled` option to `true` in `immuta-values.yaml`.
If using the volume backup type, an existing `PersistentVolumeClaim` name needs to be configured in your `immuta-values.yaml` because the `persistentVolumeClaimSpec` is only used to create a new, empty volume.
If you are unsure of the value for `<YOUR ReadWriteMany PersistentVolumeClaim NAME>`, the command `kubectl get pvc` will list it for you.
Example Output
Adhering to the guidelines and best practices for replicas and resource limits outlined below is essential for optimizing performance, ensuring cluster stability, controlling costs, and maintaining a secure and manageable environment. These settings help strike a balance between providing sufficient resources to function optimally and making efficient use of the underlying infrastructure.
Set the following replica parameters in your Helm Chart to the values listed below:
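The original list is not preserved here. As a hedged sketch of the shape, using the replica keys from the chart options below (choose counts to fit your environment; the web service supports at most 3 replicas):

```yaml
web:
  replicas: 2
database:
  replicas: 2
fingerprint:
  replicas: 2
```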
The Immuta Helm Chart supports Resource Limits for all components. Set resources and limits for the database and query engine in the Helm values. Without those limits, the pods will be the first target for eviction, which can cause issues during backup and restore, since this process consumes a lot of memory.
Add this YAML snippet to your Helm values file:
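A minimal sketch of such a snippet, assuming the `resources` keys listed in the chart options; the memory figures are illustrative and follow the 2Gi guidance below:

```yaml
database:
  resources:
    requests:
      memory: 2Gi
    limits:
      memory: 2Gi
queryEngine:
  resources:
    requests:
      memory: 2Gi
    limits:
      memory: 2Gi
```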
Database is the only component that needs a lot of resources, especially if you don't use the query engine. For a small installation, you can set the database resources to `2Gi`, and if you see slower performance over time, you can increase this number to improve performance.
Setting CPU resources and limits is optional. Resource contention over CPU is not common for Immuta, so setting a CPU resource and limit won't have a significant effect.
Run the following command to deploy Immuta:
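For example, with Helm 3:

```shell
helm install <YOUR RELEASE NAME> immuta/immuta --values immuta-values.yaml
```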
Troubleshooting
If you encounter errors while deploying the Immuta Helm Chart, see Troubleshooting.
HTTP communication using TLS certificates is enabled by default in Immuta's Helm Chart for both internal (inside the Kubernetes cluster) and external (between the Kubernetes ingress and the outside world) communications. This is accomplished through the generation of a local certificate authority (CA) which signs certificates for each service - all handled automatically by the Immuta installation. While not recommended, if TLS must be disabled for some reason, this can be done by setting `tls.enabled` to `false` in the values file.
Best Practice: TLS Certification
Immuta recommends using your own TLS certificate for external (outside the Kubernetes cluster) communications for Immuta production deployments.
Using your own certificates requires you to create a Kubernetes Secret containing the private key, certificate, and certificate authority certificate. This can be easily done using kubectl:
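A sketch, assuming your certificate, key, and CA certificate are in local files (file and key names are placeholders):

```shell
# Create a Secret holding the certificate, private key, and CA certificate.
kubectl create secret generic immuta-external-tls \
  --from-file=tls.crt=server.crt \
  --from-file=tls.key=server.key \
  --from-file=ca.crt=ca.crt
```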
Make sure your certificates are correct
Make sure the certificate's Common Name (CN) and/or Subject Alternative Name (SAN) matches the specified `externalHostname` or contains an appropriate wildcard.
After creating the Kubernetes Secret, specify its use in the external ingress by setting `tls.externalSecretName=immuta-external-tls` in your `immuta-values.yaml` file:
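For example:

```yaml
tls:
  externalSecretName: immuta-external-tls
```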
The Immuta Helm Chart (version 4.5.0+) can be deployed using Argo CD.
For Argo CD versions older than 1.7.0, you must use the following Helm values in order for the TLS generation hook to run successfully.
Starting with Argo CD version 1.7.0, the default TLS generation hook values can be used.
`tls.manageGeneratedSecret` must be set to true when using Argo CD to deploy Immuta; otherwise, the generated TLS secret will be shown as OutOfSync (requires pruning) in Argo CD. Pruning the Secret would break TLS for the deployment, so it is important to set this value to prevent that from happening.
For detailed assistance in troubleshooting your installation, contact your Immuta representative or see Helm Troubleshooting.
Audience: System Administrators
Content Summary: This page outlines instructions for troubleshooting specific issues with Helm.
Using a Kubernetes namespace
If deploying Immuta into a Kubernetes namespace other than the default, you must include the `--namespace` option in all `helm` and `kubectl` commands provided throughout this section.
If you encounter Immuta Pods that have had the status `Pending` or `Init:0/1` for an extended period of time, then there may be an issue mounting volumes to the Pods. You may find error messages by describing one of the pods that had the `Pending` or `Init:0/1` status.
If an event with the message `pod has unbound PersistentVolumeClaims` is seen on the frozen pod, then there is most likely an issue with the database backup storage Persistent Volume Claims. Typically this is caused by the database backup PVC not binding because there are no Kubernetes Storage Classes configured to provide the correct storage type.
Solution
Review your backup configuration and ensure that you either have the proper `storageClassName` or `claimName` set.
Once you have updated the `immuta-values.yaml` to contain the proper PVC configuration, you will want to first delete the Immuta deployment, then run `helm install`.
Occasionally Helm has bugs or loses track of Kubernetes resources that it has created. Immuta has created a Bash script that you may download and use to clean up all resources that are tied to an Immuta deployment. This script should only be run after `helm delete <YOUR RELEASE NAME>`.
Download cleanup-immuta-deployment.sh.
Run the script:
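The script's exact arguments are an assumption here; check its usage text. A sketch:

```shell
# Run only after `helm delete <YOUR RELEASE NAME>` has completed.
bash cleanup-immuta-deployment.sh <YOUR RELEASE NAME>
```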
After a configuration change or cluster outage you may need to perform a rolling restart to refresh the database pods without data loss. Use the command below to update a `restart` annotation on the database pods to instruct the database StatefulSet to roll the pods.
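A hedged sketch of the annotation update, assuming the StatefulSet is named `immuta-database`:

```shell
# Stamp a timestamped `restart` annotation onto the pod template so the
# StatefulSet rolls the database pods one at a time.
kubectl patch statefulset immuta-database -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"restart\":\"$(date +%s)\"}}}}}"
```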
After a configuration change or cluster outage you may need to perform a rolling restart to refresh the web service pods. Use the command below to update a `restart` annotation on the web Deployment to roll the pods.
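The same pattern, assuming the web Deployment is named `immuta-web`:

```shell
kubectl patch deployment immuta-web -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"restart\":\"$(date +%s)\"}}}}}"
```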
Should you need to regenerate internal TLS certificates, follow the instructions below.
Solution
Delete the internal TLS secret:
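A sketch, assuming the generated secret is named `immuta-tls` (the name that appears in the error message quoted later in this guide; confirm with `kubectl get secrets`):

```shell
kubectl delete secret immuta-tls
```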
Recreate the internal TLS secret by running Helm Upgrade.
Note
If you need to modify any postgres settings, such as TLS certificate verification for the Query Engine, be sure to modify the `values.yaml` file before running this command.
Helm 3: `helm upgrade <YOUR RELEASE NAME> immuta/immuta --values immuta-values.yaml`
Helm 2: `helm upgrade immuta/immuta --values immuta-values.yaml --name <YOUR RELEASE NAME>`
WAIT FOR PODS TO RESTART BEFORE CONTINUING
Restart Query Engine:
WAIT FOR PODS TO RESTART BEFORE CONTINUING
Restart Web Service:
Should you need to rotate external TLS certificates, follow the instructions below:
Solution
Create a new secret with the relevant TLS files.
Update your `tls.externalSecretName` in `immuta-values.yaml` with the new external TLS secret.
Run Helm Upgrade to update the certificates for the deployment.
Delete the old secret.
Audience: System Administrators
Content Summary: This page outlines how to configure an external metadata database for Immuta instead of using Immuta's built-in PostgreSQL Metadata Database that runs in Kubernetes.
Helm Chart Version
Update to the latest Helm Chart before proceeding any further.
The Metadata Database can optionally be configured to run outside of Kubernetes, which eliminates the variability introduced by the Kubernetes scheduler and/or scaler without compromising high-availability. This is the preferred configuration, as it offers infrastructure administrators a greater level of control in the event of disaster recovery.
PostgreSQL Version incompatibilities
PostgreSQL versions `12` through `16` are only supported when Query Engine rehydration is enabled; otherwise, the PostgreSQL version must be pinned at `12`. PostgreSQL abstraction layers such as AWS Aurora are not supported.
Enable an external metadata database by setting `database.enabled=false` in the `immuta-values.yaml` file and passing the connection information for the PostgreSQL instance under the key `externalDatabase`.
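For example (hostname and password are placeholders; the keys are described in the table below):

```yaml
database:
  enabled: false
externalDatabase:
  host: postgres.example.com
  port: 5432
  sslmode: require
  dbname: bometadata
  username: bometa
  password: <YOUR PASSWORD>
```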
Set `queryEngine.rehydration.enabled=true`. If set to `false`, then `externalDatabase.superuser.username` and `externalDatabase.superuser.password` must be provided.
Superuser Role
Prior to Helm Chart `4.13`, declaring `externalDatabase.superuser.username` and `externalDatabase.superuser.password` was required. This requirement has since been made optional when Query Engine rehydration is enabled. If a superuser is omitted, then the chart will no longer manage the database backup/restore process. In this configuration, customers are responsible for backing up their external metadata database.
The `externalDatabase` object is detailed below and in the Immuta Helm Chart Options.
Additionally, it is possible to use `existingSecret` instead of setting `externalDatabase.password` in the Helm values. These passwords map to the same keys that are used for the built-in database. For example,
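A minimal sketch, assuming `existingSecret` is a top-level value naming a pre-created Kubernetes Secret (verify the expected secret keys against the chart options):

```yaml
# Reference a pre-created Secret instead of embedding passwords in the values file.
existingSecret: immuta-secret
```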
Role Creation
The role's password set below should match the Helm value `externalDatabase.password`.
Azure Database for PostgreSQL
During restore, the built-in database's backup expects the role `postgres` to exist. This role is not present by default and must be created when using Azure Database for PostgreSQL.
Log in to the external metadata database as a user with the superuser role attribute (such as the `postgres` user) using your preferred tool (e.g., psql, pgAdmin).
Connect to database `postgres`, and execute the following SQL.
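The SQL itself is not preserved here. As a hedged sketch, first-time setup typically creates the application role and database named in the values above; the exact statements may differ, so confirm them with the Immuta documentation:

```shell
psql -h <HOST> -U postgres -d postgres <<'SQL'
-- Create the application role; the password must match externalDatabase.password.
CREATE ROLE bometa WITH LOGIN PASSWORD '<YOUR PASSWORD>';
-- Create the Immuta metadata database owned by that role.
CREATE DATABASE bometadata OWNER bometa;
SQL
```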
Connect to database `bometadata` that was created in the previous step, and execute the following SQL. Azure Database for PostgreSQL: Extensions must be configured in the web portal.
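A hypothetical example only; the required extension list is not preserved here, so confirm it with your Immuta representative:

```shell
psql -h <HOST> -U postgres -d bometadata \
  -c 'CREATE EXTENSION IF NOT EXISTS pgcrypto;'
```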
Helm Releases
Run `helm list` to view all existing Helm releases. Refer to the Helm docs to learn more.
For existing deployments, you can migrate from the built-in database to an external database. To migrate, backups must be configured. Reach out to your Immuta representative for instructions.
(Optional) Set default namespace:
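For example, assuming the namespace is `immuta`:

```shell
kubectl config set-context --current --namespace=immuta
```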
Trigger manual backup:
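A hedged sketch, assuming the backup CronJob is named `immuta-backup` (find yours with `kubectl get cronjobs`):

```shell
# Create a one-off Job from the backup CronJob.
kubectl create job --from=cronjob/immuta-backup manual-backup
```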
Validate backup succeeded:
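For example:

```shell
# Confirm the Job completed, then inspect its logs.
kubectl get jobs
kubectl logs -l job-name=manual-backup
```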
Follow the steps outlined in section First-Time PostgreSQL Setup.
Edit `immuta-values.yaml` to enable the external metadata database and restore.
Apply the `immuta-values.yaml` changes made in the previous step:
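For example:

```shell
helm upgrade <YOUR RELEASE NAME> immuta/immuta --values immuta-values.yaml
```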
Wait until the Kubernetes resources become ready.
Edit `immuta-values.yaml` to enable Query Engine rehydration and disable backup/restore.
Rerun the previous `helm upgrade` command to apply the latest `immuta-values.yaml` changes.
Connect to database `postgres`, and execute the following SQL. Azure Database for PostgreSQL: Delete the previously created role by running `DROP ROLE postgres;`.
Audience: System Administrators
Content Summary: This page outlines how to deploy Immuta on OpenShift.
Immuta OpenShift Support
Immuta officially supports OpenShift 4 (versions supported by Red Hat) and does not support OpenShift 3.
Run the following command in your terminal:
runAsUser and fsGroup
The Immuta Helm Chart must be configured to set two values within the approved ranges for the OpenShift project Immuta is being deployed into: `runAsUser` and `fsGroup`.
`runAsUser`: On a Pod SecurityContext, this field defines the user ID that will run the processes within the pod. In the next step, this can be set to any value within the range defined in `sa.scc.uid-range`. See details below.
`fsGroup`: This field defines a group ID that will be added as a supplemental group to the Pod. Files in `PersistentVolumes` will be writable by this group ID. In the next step, this must be set to the minimum value in the range defined in `sa.scc.supplemental-groups`. See details below.
View the approved ranges in OpenShift using one of the two methods below:
OpenShift Console
Navigate to the Project Details page and click the link under Annotations.
Take note of the values for `openshift.io/sa.scc.uid-range` and `openshift.io/sa.scc.supplemental-groups`.
OpenShift CLI
Alternatively, use the OpenShift CLI to inspect the relevant values directly. For example,
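A sketch using `oc` (the project name is a placeholder):

```shell
# Print the annotations that hold the approved ranges.
oc get namespace <YOUR PROJECT> -o yaml | grep 'sa\.scc'
# Example output (values will differ):
#   openshift.io/sa.scc.supplemental-groups: 1000620000/10000
#   openshift.io/sa.scc.uid-range: 1000620000/10000
```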
In both illustrations above, the first part of the value (leading up to the `/`) is the assigned user ID/group ID range, and the second part (trailing the `/`) is the extent of the range.
For example, the minimum UID for `sa.scc.uid-range=1000620000/10000` is `1000620000` and the maximum is `1000629999` (1000620000 + 10000 - 1).
For the examples throughout the rest of this tutorial, `1000620000` will be set as the value for both `runAsUser` and `fsGroup`.
For more details on security context restraints and how the user and group ID ranges are allocated, see the OpenShift documentation.
Set these OpenShift-specific Helm values in a YAML file that will be passed to `helm install` in the next step (a consolidated sketch follows this list):
`externalHostname`: Set to a subdomain of the domain configured for the OpenShift Ingress controller. Contact your OpenShift administrator to get the configured domain if it is unknown.
`securityContext.runAsUser`: Set this to a user ID in the range specified by the annotation `openshift.io/sa.scc.uid-range` in the OpenShift project for the following components:
backup.securityContext.runAsUser
cache.securityContext.runAsUser
database.securityContext.runAsUser
fingerprint.securityContext.runAsUser
queryEngine.securityContext.runAsUser
web.securityContext.runAsUser
`securityContext.fsGroup`: Set this to the minimum value in the range defined in `sa.scc.supplemental-groups` in the OpenShift project for the following components:
backup.securityContext.fsGroup
database.securityContext.fsGroup
queryEngine.securityContext.fsGroup
web.securityContext.fsGroup
`patroniKubernetes.use_endpoints`: Set to `false` for the components below. This change is required for Patroni to be able to successfully elect a leader.
database.patroniKubernetes.use_endpoints
queryEngine.patroniKubernetes.use_endpoints
`queryEngine.clientService.type`: Set to `LoadBalancer` so that a LoadBalancer will be created to handle the TCP traffic for the Query Engine. The LoadBalancer that OpenShift creates will have its own hostname/IP address, and you must update the Public Query Engine Hostname in Application Settings (instructions below). This step can be omitted if the Query Engine is not being used.
`web.ingress.enabled`: Set to `false` to disable creation of Ingress resources for the Immuta Web Service. OpenShift provides its own Ingress controller for handling HTTP ingress, and this is configured by creating Routes.
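A consolidated sketch of these values, using the example IDs above (the hostname is a placeholder, and the exact key layout should be verified against the chart options):

```yaml
externalHostname: immuta.apps.example.com
web:
  ingress:
    enabled: false
  securityContext:
    runAsUser: 1000620000
    fsGroup: 1000620000
backup:
  securityContext:
    runAsUser: 1000620000
    fsGroup: 1000620000
cache:
  securityContext:
    runAsUser: 1000620000
fingerprint:
  securityContext:
    runAsUser: 1000620000
database:
  securityContext:
    runAsUser: 1000620000
    fsGroup: 1000620000
  patroniKubernetes:
    use_endpoints: false
queryEngine:
  securityContext:
    runAsUser: 1000620000
    fsGroup: 1000620000
  patroniKubernetes:
    use_endpoints: false
  clientService:
    type: LoadBalancer
```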
Follow the standard Immuta deployment with Helm, but supply the additional values file using the `--values` flag in the `helm install` step.
To set up ingress for Immuta using the OpenShift Ingress controller, get the CA certificate used by Immuta for internal TLS. This will be used by the OpenShift Ingress controller to validate the upstream TLS connection to Immuta.
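A hedged sketch, assuming the generated internal TLS secret is named `immuta-tls` and contains a `ca.crt` key (confirm with `kubectl get secrets`):

```shell
# Extract the internal CA certificate to a local file.
kubectl get secret immuta-tls -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
```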
Create a Route using the OpenShift CLI. The hostname flag should be set to match the value configured for `externalHostname` in the Helm values file, and it should be a subdomain of the domain that the OpenShift Ingress controller is configured for.
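A sketch of the Route creation; the route and service names are assumptions (list services with `oc get svc`):

```shell
# Re-encrypting route: TLS is terminated at the router and re-established
# upstream, validated against the extracted CA certificate.
oc create route reencrypt immuta \
  --service=immuta-web \
  --hostname=immuta.apps.example.com \
  --dest-ca-cert=ca.crt
```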
This will create a route to be served by the OpenShift Ingress controller. At this point, Immuta is installed and should be accessible at the configured hostname.
Run `kubectl get svc immuta-query-engine-clients` to inspect the Query Engine client's service in Kubernetes to get the assigned External IP address. For example,
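An illustrative command and output (names and addresses will differ):

```shell
kubectl get svc immuta-query-engine-clients
# NAME                          TYPE           CLUSTER-IP     EXTERNAL-IP              PORT(S)
# immuta-query-engine-clients   LoadBalancer   172.30.12.34   a1b2c3.elb.example.com   5432:31234/TCP
```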
Copy the External-IP address. You will paste this value in the Immuta App Settings page to update the Public Query Engine Hostname.
In the Immuta UI, click the App Settings icon in the left sidebar and scroll to the Public URLs section.
Enter the value you copied from the `EXTERNAL-IP` column in the Public Query Engine Hostname field.
Click Save to update the configuration.
Immuta's built-in Nginx Ingress controller will not run with the restricted SCC and must be disabled in this configuration. OpenShift has its own Ingress controller that can be used for HTTP traffic for the Immuta Web Service. However, since the OpenShift Ingress controller does not support TCP traffic, a separate LoadBalancer service must be used for the Query Engine, and the Public Query Engine Hostname must be updated accordingly.
The Helm Chart includes components that make up your Immuta infrastructure, and you can change these values to tailor your Immuta infrastructure to suit your needs. The tables below include parameter descriptions and default values for all components in the Helm Chart.
When installing Immuta, download `immuta-values.yaml` and update the values to your preferred settings.
See the Helm installation page for guidance and best practices.
These values are used when `backup.type=s3`.
These values are used when `backup.type=azblob`.
These values are used when `backup.type=gs`.
`tls.manageGeneratedSecret` may cause issues with `helm install`.
In most cases, `tls.manageGeneratedSecret` should only be set to true when Helm is not being used to install the release (i.e., Argo CD).
If `tls.manageGeneratedSecret` is set to true when used with the default TLS generation hook configuration, you will encounter an error similar to the following:
`Error: secrets "immuta-tls" already exists`
You can work around this error by configuring the TLS generation hook to run as a `post-install` hook.
However, this configuration is not compatible with `helm install --wait`. If the `--wait` flag is used, the command will time out and fail.
The Metadata Database component can be configured to use either the built-in Kubernetes deployment or an external PostgreSQL database.
The following Helm values are shared between both built-in and external databases.
These values are used when `database.enabled=true`.
These values are used when `database.enabled=false`.
If you will only use integrations, port 5432 is optional. When using the built-in Ingress Nginx Controller, you can disable it by setting the value to `false`.
The Cleanup hook is a Helm post-delete hook that is responsible for cleaning up some resources that are not deleted by Helm.
The database initialize hook is used to initialize the external database when `database.enabled=false`.
The TLS generation hook is a Helm pre-install hook that is responsible for generating TLS certificates used for connections between the Immuta pods.
Deprecation Warning
The following values are deprecated. Values should be migrated to `cache` and `cache.memcached`. See Cache for replacement values.
Audience: System Administrators
Content Summary: Before installing Immuta, you will need to spin up your AKS or ACS cluster. This page outlines how to deploy Immuta cluster infrastructure on AKS and ACS.
If you would like to install Immuta on an existing AKS or ACS cluster, you can skip this section. However, we recommend deploying a dedicated resource group and cluster for Immuta if possible.
Once you have deployed your cluster infrastructure, please visit to finish installing Immuta.
Best Practice: Use AKS
Immuta highly recommends using the improved version of Azure Kubernetes Service, AKS. Immuta on AKS will exhibit superior stability, performance, and scalability compared to a deployment on the deprecated version known as ACS.
You will need a resource group to deploy your AKS or ACS cluster in:
Note: There is no naming requirement for the Immuta resource group.
Now it is time to spin up your cluster resources in Azure. This step will be different depending on whether you are deploying an AKS or ACS cluster.
After running the command, you will have to wait a few moments as the cluster resources are starting up.
Create AKS Cluster (Recommended):
Create ACS Cluster (Deprecated):
You will need to configure the `kubectl` command line utility to use the Immuta cluster.
If you do not have `kubectl` installed, you can install it through the Azure CLI.
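For example:

```shell
az aks install-cli
```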
Run the appropriate command for your cluster type (AKS, or the deprecated ACS):
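A sketch of both commands; resource group and cluster names are placeholders:

```shell
# AKS: merge the cluster credentials into your kubeconfig.
az aks get-credentials --resource-group <YOUR RESOURCE GROUP> --name <YOUR CLUSTER NAME>

# ACS (deprecated):
az acs kubernetes get-credentials --resource-group <YOUR RESOURCE GROUP> --name <YOUR CLUSTER NAME>
```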
Audience: System Administrators
Content Summary: This guide illustrates the deployment of an Immuta cluster on Microsoft Azure Kubernetes Service. Requirements may vary depending on the Azure Cloud environment and/or region. For comprehensive assistance, please contact an Immuta Support Professional.
This guide is intended to supplement the main , which is referred to often throughout this page.
Prerequisites:
Software: 2.3.0 or greater and 2.0.30 or greater
Node Size: Immuta's suggested minimum Azure VM size for Azure Kubernetes Service deployments is `Standard_D3_v2` (4 vCPU, 14 GB RAM, 200 GB SSD) or equivalent. The Immuta Helm installation requires a minimum of 3 nodes. Additional nodes can be added on demand.
TLS Certificates: See the for TLS certificate requirements.
To install Azure CLI 2.0, please visit and follow the instructions for your chosen platform. You can also use the .
For more information on nodes, see the .
Before installing Immuta, you will need to spin up your AKS cluster. If you would like to install Immuta on an existing AKS cluster, you can skip this step. If you wish to deploy a dedicated cluster for Immuta, please visit .
Navigate to the installation method of your choice:
Since you are deploying Immuta as an Azure cloud application in AKS, you can easily configure the Nginx Ingress Controller that is bundled with the Immuta Helm deployment as a load balancer using the generated hostname from Azure.
Confirm that you have the following configurations in your `values.yaml` file before deploying:
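The original snippet is not preserved here; a hedged sketch based on the surrounding guidance (the hostname is a placeholder for the Azure-generated DNS name):

```yaml
externalHostname: immuta.eastus.cloudapp.azure.com
nginxIngress:
  controller:
    serviceType: LoadBalancer
```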
If you are using the included ingress controller, it will create a Kubernetes LoadBalancer Service to expose Immuta outside of your cluster. The following options are available for configuring the LoadBalancer Service:
`nginxIngress.controller.service.annotations`: Useful for setting options such as creating an internal load balancer or configuring TLS termination at the load balancer.
`nginxIngress.controller.service.loadBalancerSourceRanges`: Used to limit which client IP addresses can access the load balancer.
`nginxIngress.controller.service.externalTrafficPolicy`: Useful when working with Network Load Balancers on AWS. It can be set to "Local" to allow the client IP address to be propagated to the Pods.
After running `helm install`, you can find the public IP address of the nginx controller by running:
If the public IP address shows up as `<pending>`, wait a few moments and check again. Once you have the IP address, run the following commands to configure the Immuta Azure Cloud Application to use your ingress controller:
Shortly after running these commands, you should be able to reach the Immuta console in your web browser at the configured `externalHostName`.
Best Practice: Network Security Group
Prepare the Helm values file,
Register the required secrets to pull Immuta's Docker images,
Run the Helm installation, and
Create the mapping between the external IP address Ingress Controller (the cluster's load balancer) and the cluster's public DNS name.
Please Note
Running the automated deployment script will make a series of decisions for you:
The installation will set up backup volumes by default. Set the `BACKUPS` value to `0` to disable Immuta backups.
Download the script:
Make it executable by running:
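The script's file name is not preserved here; a sketch with a hypothetical name:

```shell
# Substitute the file name you downloaded.
chmod +x ./deploy-immuta-aks.sh
```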
Below is the list of the parameters that the script accepts. These parameters are environment variables that are prepended to the execution command.
You can use the same script to destroy a deployment you had previously run with this script, by running the following command:
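A sketch, reusing the hypothetical script name from above:

```shell
CLUSTER_NAME=<YOUR CLUSTER NAME> ./deploy-immuta-aks.sh destroy
```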
The value of `CLUSTER_NAME` should be identical to the `CLUSTER_NAME` value you used to deploy Immuta.
Please see the for the full walkthrough of installing Immuta via our Helm Chart. This section will focus on the specific requirements for the helm installation on AKS.
Possible values for these various settings can be found in the .
Immuta recommends that you set up the network security group for the Immuta cluster to be closed to public traffic outside of your organization. If your organization already has rules and guidelines for your Azure Cloud Application security groups, then you should adhere to those. Otherwise, we recommend visiting Microsoft's for configuring Network security groups to find a solution that fits your environment.
To configure backups with Azure, see the .
If you've previously provisioned an AKS cluster (see ) and have installed the Installation Prerequisites, you can run an automated script that will
The TLS certificates will be generated on-the-fly and will be self-signed. You can easily change this later by following the instructions in the .
The number of replicas from each component will be automatically derived from your AKS cluster's node count. This can be easily modified by overriding the .
To run the script and deploy, you can simply prepend the above-mentioned environment variables to the execution command, with the action `deploy`. For example,
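A sketch, again with the hypothetical script name:

```shell
CLUSTER_NAME=<YOUR CLUSTER NAME> BACKUPS=1 ./deploy-immuta-aks.sh deploy
```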
Parameter | Description | Default |
---|---|---|
`host` (required) | Hostname of the external PostgreSQL database instance. | `nil` |
`port` | Port of the external PostgreSQL database instance. | `5432` |
`sslmode` (required) | The mode for the database connection. Supported values are `disable`, `require`, `verify-ca`, and `verify-full`. | `nil` |
`superuser.username` | Username for the superuser used to initialize the PostgreSQL instance. | `nil` |
`superuser.password` | Password for the superuser used to initialize the PostgreSQL instance. | `nil` |
`username` | Username that Immuta creates and uses for the application. | `bometa` |
`password` (required) | Password associated with `username`. | `nil` |
`dbname` | Database name that Immuta uses. | `bometadata` |
Parameter | Description | Default |
---|---|---|
`backup.enabled` | Whether or not to turn on automatic backups. | `true` |
`backup.restore.enabled` | Whether or not to restore from backups if present. | `false` |
`backup.type` | Backup storage type. Must be defined if `backup.enabled` is `true`. Must be one of: `s3`, `gs`, or `azblob`. | `nil` |
`backup.cronJob.nodeSelector` | Node selector for backup cron job. | `{"kubernetes.io/os": "linux"}` |
`backup.cronJob.resources` | Container resources. | `{}` |
`backup.cronJob.tolerations` | Tolerations for backup CronJob. | `nil` |
`backup.extraEnv` | Mapping of key-value pairs to be set on backup Job containers. | `{}` |
`backup.failedJobsHistoryLimit` | Number of failed jobs to exist before stopping. | `1` |
`backup.keepBackupVolumes` | Whether or not to delete backup volumes when uninstalling Immuta. | `false` |
`backup.maxBackupCount` | Max number of backups to exist at a given time. | `10` |
`backup.podAnnotations` | Annotations to add to all pods associated with backups. | `nil` |
`backup.podLabels` | Labels to add to all pods associated with backups. | `nil` |
`backup.restore.databaseFile` | Name of the file in the `database` backup folder to restore from. | `nil` |
`backup.restore.queryEngineFile` | Name of the file in the `query-engine` backup folder to restore from. | `nil` |
`backup.schedule` | Kubernetes CronJob schedule expression. | `0 0 * * *` |
`backup.securityContext` | SecurityContext for backup Pods. | `{}` |
`backup.serviceAccountAnnotations` | Annotations to add to all ServiceAccounts associated with backups. | `nil` |
`backup.successfulJobsHistoryLimit` | Number of successful jobs to exist before cleanup. | `3` |
`backup.podSecurityContext` | Pod level security features. | `{}` |
`backup.containerSecurityContext` | Container level security. | `{}` |
Parameter | Description | Default |
---|---|---|
`backup.s3.awsAccessKeyId` | AWS Access Key ID. | `nil` |
`backup.s3.awsSecretAccessKey` | AWS Secret Access Key. | `nil` |
`backup.s3.awsRegion` | AWS Region. | `nil` |
`backup.s3.bucket` | S3 Bucket to store backups in. | `nil` |
`backup.s3.bucketPrefix` | Prefix to append to all backups. | `nil` |
`backup.s3.endpoint` | Endpoint URL of an s3-compatible server. | `nil` |
`backup.s3.caBundle` | CA bundle in PEM format. Used to verify TLS certificates of custom s3 endpoint. | `nil` |
`backup.s3.forcePathStyle` | Set to "true" to force the use of path-style addressing. | `nil` |
`backup.s3.disableSSL` | Set to "true" to disable SSL connections for the s3 endpoint. | `nil` |
Parameter | Description | Default |
---|---|---|
`backup.azblob.azStorageAccount` | Azure Storage Account Name. | `nil` |
`backup.azblob.azStorageKey` | Azure Storage Account Key. | `nil` |
`backup.azblob.azStorageSASToken` | Azure Storage Account SAS Token. | `nil` |
`backup.azblob.container` | Azure Storage Account Container Name. | `nil` |
`backup.azblob.containerPrefix` | Prefix to append to all backups. | `nil` |
Parameter | Description | Default |
---|---|---|
`backup.gs.gsKeySecretName` | Kubernetes Secret containing `key.json` for Google Service Account. | `nil` |
`backup.gs.bucket` | Google Cloud Storage Bucket. | `nil` |
`backup.gs.bucketPrefix` | Prefix to append to all backups. | `nil` |
Parameter | Description | Default |
---|---|---|
`tls.enabled` | Whether or not to use TLS. | `true` |
`tls.create` | Whether or not to generate TLS certificates. | `true` |
`tls.manageGeneratedSecret` | When true, the generated TLS secret will be created as a resource of the Helm Chart. | `false` |
`tls.secretName` | Secret name to use for internal and external communication. (For self-provided certs only) | `nil` |
`tls.enabledInternal` | Whether or not to use TLS for all internal communication. | `true` |
`tls.internalSecretName` | Secret name to use for internal communication. (For self-provided certs only) | `nil` |
`tls.enabledExternal` | Whether or not to use TLS for all external communication. | `true` |
`tls.externalSecretName` | Secret name to use for external communication. (For self-provided certs only) | `nil` |
Parameter | Description | Default |
---|---|---|
`web.extraEnv` | Mapping of key-value pairs to be set on web containers. | `{}` |
`web.extraVolumeMounts` | List of extra volume mounts to be added to web containers. | `[]` |
`web.extraVolumes` | List of extra volumes to be added to web containers. | `[]` |
`web.image.registry` | Image registry for the Immuta service image. | Value from `global.imageRegistry` |
`web.image.repository` | Image repository for the Immuta service image. | `immuta/immuta-service` |
`web.image.tag` | Image tag for the Immuta service image. | Value from `imageTag` or `immutaVersion` |
`web.image.digest` | Image digest for the Immuta service image in format of `sha256:<DIGEST>`. | |
`web.imagePullPolicy` | ImagePullPolicy for the Immuta service container. | `{{ .Values.imageTag }}` |
`web.imageRepository` (deprecated) | Use `web.image.registry` and `web.image.repository`. | `nil` |
`web.imageTag` (deprecated) | Use `web.image.tag`. | `nil` |
`web.replicas` | Number of replicas of web service to deploy. Maximum: 3. | `1` |
`web.workerCount` | Number of web service worker processes to deploy. | `2` |
`web.threadPoolSize` | Number of threads to use for each NodeJS process. | `nil` |
`web.ingress.enabled` | Controls the creation of an Ingress resource for the web service. | `true` |
`web.ingress.clientMaxBodySize` | `client_max_body_size` passed through to nginx. | `1g` |
`web.resources` | Container resources. | `{}` |
`web.podAnnotations` | Additional annotations to apply to web pods. | `{}` |
`web.podLabels` | Additional labels to apply to web pods. | `{}` |
`web.nodeSelector` | Node selector for web pods. | `{"kubernetes.io/os": "linux"}` |
`web.serviceAccountAnnotations` | Annotations for the web ServiceAccount. | `{}` |
`web.tolerations` | Tolerations for web pods. | `nil` |
`web.podSecurityContext` | Pod level security features. | `{}` |
`web.containerSecurityContext` | Container level security features. | `{}` |
Parameter | Description | Default |
---|---|---|
`fingerprint.image.registry` | Image registry for the Immuta fingerprint image. | Value from `global.imageRegistry` |
`fingerprint.image.repository` | Image repository for the Immuta fingerprint image. | `immuta/immuta-fingerprint` |
`fingerprint.image.tag` | Image tag for the Immuta fingerprint image. | Value from `imageTag` or `immutaVersion` |
`fingerprint.image.digest` | Image digest for the Immuta fingerprint image in format of `sha256:<DIGEST>`. | |
`fingerprint.imagePullPolicy` | ImagePullPolicy for the Immuta fingerprint container. | `{{ .Values.imageTag }}` |
`fingerprint.imageRepository` (deprecated) | Use `fingerprint.image.registry` and `fingerprint.image.repository`. | `nil` |
`fingerprint.imageTag` (deprecated) | Use `fingerprint.image.tag`. | `nil` |
`fingerprint.replicas` | Number of replicas of fingerprint service to deploy. | `1` |
`fingerprint.logLevel` | Log level for the Fingerprint service. | `WARNING` |
`fingerprint.extraConfig` | Object containing configuration options for the Immuta Fingerprint service. | `{}` |
`fingerprint.resources` | Container resources. | `{}` |
`fingerprint.podAnnotations` | Additional annotations to apply to fingerprint Pods. | `{}` |
`fingerprint.podLabels` | Additional labels to apply to fingerprint Pods. | `{}` |
`fingerprint.nodeSelector` | Node selector for fingerprint Pods. | `{"kubernetes.io/os": "linux"}` |
`fingerprint.serviceAccountAnnotations` | Annotations for the fingerprint ServiceAccount. | `{}` |
`fingerprint.tolerations` | Tolerations for fingerprint Pods. | `nil` |
<component>.podSecurityContext
Pod level security features.
<component>.containerSecurityContext
Container level security features.
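A minimal sketch raising fingerprint verbosity and replica count; the pod label shown is a hypothetical example, not a required value:

```yaml
fingerprint:
  replicas: 2
  logLevel: INFO      # more verbose than the default WARNING
  podLabels:
    team: data-platform  # hypothetical label for illustration
```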
| Parameter | Description | Default |
| --- | --- | --- |
| `database.enabled` | Enabled flag. Used to disable the built-in database when an external database is used. | `true` |
| `database.image.registry` | Image registry for the Immuta database image. | Value from `global.imageRegistry` |
| `database.image.repository` | Image repository for the Immuta database image. | `immuta/immuta-db` |
| `database.image.tag` | Image tag for the Immuta database image. | Value from `imageTag` or `immutaVersion` |
| `database.image.digest` | Image digest for the Immuta database image, in the format `sha256:<DIGEST>`. | |
| `database.imagePullPolicy` | ImagePullPolicy for the Immuta database container. | `{{ .Values.imageTag }}` |
| `database.imageRepository` | (Deprecated) Use `database.image.registry` and `database.image.repository` instead. | `nil` |
| `database.imageTag` | (Deprecated) Use `database.image.tag` instead. | `nil` |
| `database.extraEnv` | Mapping of key-value pairs to be set on database containers. | `{}` |
| `database.extraVolumeMounts` | List of extra volume mounts to be added to database containers. | `[]` |
| `database.extraVolumes` | List of extra volumes to be added to database containers. | `[]` |
| `database.nodeSelector` | Node selector for database pods. | `{"kubernetes.io/os": "linux"}` |
| `database.password` | Password for the Immuta metadata database. | `secret` |
| `database.patroniApiPassword` | Password for the Patroni REST API. | `secret` |
| `database.patroniKubernetes` | Patroni Kubernetes settings. | `{"use_endpoints": true}` |
| `database.persistence.enabled` | Set to `true` to enable data persistence on all database pods. It should be set to `true` for all non-testing environments. | `false` |
| `database.podAnnotations` | Additional annotations to apply to database pods. | `{}` |
| `database.podLabels` | Additional labels to apply to database pods. | `{}` |
| `database.replicas` | Number of database replicas. | `1` |
| `database.replicationPassword` | Password for the replication user. | `secret` |
| `database.resources` | Container resources. | `{}` |
| `database.sharedMemoryVolume.enabled` | Enable the use of a memory-backed `emptyDir` volume for `/dev/shm`. | `false` |
| `database.sharedMemoryVolume.sizeLimit` | Size limit for the shared memory volume. Only available when the `SizeMemoryBackedVolumes` feature gate is enabled. | `nil` |
| `database.superuserPassword` | Password for the PostgreSQL superuser. | `secret` |
| `database.tolerations` | Tolerations for database pods. | `nil` |
| `database.podSecurityContext` | Pod-level security features. | `{}` |
| `database.containerSecurityContext` | Container-level security features. | `{}` |
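A sketch of a production-leaning built-in database configuration; the passwords are placeholders, and in practice they would typically be supplied via `existingSecret` rather than written inline:

```yaml
database:
  enabled: true
  replicas: 2
  persistence:
    enabled: true                      # required outside of testing environments
  password: "<new-password>"           # placeholder: metadata database password
  superuserPassword: "<new-password>"  # placeholder: PostgreSQL superuser password
  replicationPassword: "<new-password>" # placeholder: replication user password
  patroniApiPassword: "<new-password>" # placeholder: Patroni REST API password
```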
| Parameter | Description | Default |
| --- | --- | --- |
| `externalDatabase.host` | (Required) Hostname of the external database instance. | `nil` |
| `externalDatabase.port` | Port for the external database instance. | `5432` |
| `externalDatabase.sslmode` | PostgreSQL `sslmode` option for the external database connection. Behavior when unset is `require`. | `nil` |
| `externalDatabase.dbname` | Immuta database name. | `bometadata` |
| `externalDatabase.username` | Immuta database username. | `bometa` |
| `externalDatabase.password` | (Required) Immuta database user password. | `nil` |
| `externalDatabase.superuser.username` | (Required) Username for the superuser used to initialize the database instance. | `nil` |
| `externalDatabase.superuser.password` | (Required) Password for the superuser used to initialize the database instance. | `nil` |
| `externalDatabase.backup.enabled` | (Deprecated) Enable flag for external database backups. Use `backup.enabled=true` instead. | `true` |
| `externalDatabase.restore.enabled` | (Deprecated) Enable flag for external database restore. Use `backup.restore.enabled=true` instead. | `true` |
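For example, pointing Immuta at an external PostgreSQL instance while disabling the built-in database might look like the following; the hostname and credentials are placeholders:

```yaml
database:
  enabled: false               # disable the built-in database
externalDatabase:
  host: postgres.example.com   # placeholder hostname
  port: 5432
  sslmode: require
  dbname: bometadata
  username: bometa
  password: "<db-password>"    # placeholder
  superuser:
    username: "<admin-user>"   # placeholder; used only to initialize the instance
    password: "<admin-password>"
```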
| Parameter | Description | Default |
| --- | --- | --- |
| `queryEngine.extraEnv` | Mapping of key-value pairs to be set on Query Engine containers. | `{}` |
| `queryEngine.extraVolumeMounts` | List of extra volume mounts to be added to Query Engine containers. | `[]` |
| `queryEngine.extraVolumes` | List of extra volumes to be added to Query Engine containers. | `[]` |
| `queryEngine.image.registry` | Image registry for the Immuta Query Engine image. | Value from `global.imageRegistry` |
| `queryEngine.image.repository` | Image repository for the Immuta Query Engine image. | `immuta/immuta-db` |
| `queryEngine.image.tag` | Image tag for the Immuta Query Engine image. | Value from `imageTag` or `immutaVersion` |
| `queryEngine.image.digest` | Image digest for the Immuta Query Engine image, in the format `sha256:<DIGEST>`. | |
| `queryEngine.imagePullPolicy` | ImagePullPolicy for the Immuta Query Engine container. | `{{ .Values.imageTag }}` |
| `queryEngine.imageRepository` | (Deprecated) Use `queryEngine.image.registry` and `queryEngine.image.repository` instead. | `nil` |
| `queryEngine.imageTag` | (Deprecated) Use `queryEngine.image.tag` instead. | `nil` |
| `queryEngine.replicas` | Number of Query Engine replicas. | `1` |
| `queryEngine.password` | Password for the Immuta feature store database. | `secret` |
| `queryEngine.superuserPassword` | Password for the PostgreSQL superuser. | `secret` |
| `queryEngine.replicationPassword` | Password for the replication user. | `secret` |
| `queryEngine.patroniApiPassword` | Password for the Patroni REST API. | `secret` |
| `queryEngine.patroniKubernetes` | Patroni Kubernetes settings. | `{"use_endpoints": true}` |
| `queryEngine.persistence.enabled` | Should be set to `true` for all non-testing environments. | `false` |
| `queryEngine.resources` | Container resources. | `{}` |
| `queryEngine.service` | Service configuration for the Query Engine service if not using an Ingress Controller. | |
| `queryEngine.podAnnotations` | Additional annotations to apply to Query Engine pods. | `{}` |
| `queryEngine.podLabels` | Additional labels to apply to Query Engine pods. | `{}` |
| `queryEngine.nodeSelector` | Node selector for Query Engine pods. | `{"kubernetes.io/os": "linux"}` |
| `queryEngine.sharedMemoryVolume.enabled` | Enable the use of a memory-backed `emptyDir` volume for `/dev/shm`. | `false` |
| `queryEngine.sharedMemoryVolume.sizeLimit` | Size limit for the shared memory volume. Only available when the `SizeMemoryBackedVolumes` feature gate is enabled. | `nil` |
| `queryEngine.tolerations` | Tolerations for Query Engine pods. | `nil` |
| `queryEngine.podSecurityContext` | Pod-level security features. | `{}` |
| `queryEngine.containerSecurityContext` | Container-level security features. | `{}` |
| `queryEngine.publishPort` | Controls whether or not the Query Engine port (5432) is published on the built-in Ingress Controller service. | `true` |
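A sketch enabling persistence and shared memory for the Query Engine while keeping its port off the built-in ingress service; the size limit is illustrative:

```yaml
queryEngine:
  replicas: 1
  persistence:
    enabled: true     # required outside of testing environments
  sharedMemoryVolume:
    enabled: true     # memory-backed /dev/shm
    sizeLimit: 1Gi    # illustrative; needs the SizeMemoryBackedVolumes feature gate
  publishPort: false  # keep port 5432 off the built-in ingress controller service
```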
| Parameter | Description | Default |
| --- | --- | --- |
| `hooks.cleanup.resources` | Container resources. | `{}` |
| `hooks.cleanup.serviceAccountAnnotations` | Annotations for the cleanup hook ServiceAccount. | `{}` |
| `hooks.cleanup.nodeSelector` | Node selector for pods. | `{"kubernetes.io/os": "linux"}` |
| `hooks.cleanup.tolerations` | Tolerations for pods. | `nil` |
| `hooks.cleanup.podSecurityContext` | Pod-level security features. | |
| `hooks.cleanup.containerSecurityContext` | Container-level security features. | |
| `hooks.databaseInitialize.resources` | Container resources. | `{}` |
| `hooks.databaseInitialize.serviceAccountAnnotations` | Annotations for the database initialize hook ServiceAccount. | `{}` |
| `hooks.databaseInitialize.verbose` | Flag to enable or disable verbose logging in the database initialize hook. | `true` |
| `hooks.databaseInitialize.nodeSelector` | Node selector for pods. | `{"kubernetes.io/os": "linux"}` |
| `hooks.databaseInitialize.tolerations` | Tolerations for pods. | `nil` |
| `hooks.databaseInitialize.podSecurityContext` | Pod-level security features. | |
| `hooks.databaseInitialize.containerSecurityContext` | Container-level security features. | |
| `hooks.tlsGeneration.hookAnnotations."helm.sh/hook-delete-policy"` | Delete policy for the TLS generation hook. | `"before-hook-creation,hook-succeeded"` |
| `hooks.tlsGeneration.resources` | Container resources. | `{}` |
| `hooks.tlsGeneration.serviceAccountAnnotations` | Annotations for the TLS generation hook ServiceAccount. | `{}` |
| `hooks.tlsGeneration.nodeSelector` | Node selector for pods. | `{"kubernetes.io/os": "linux"}` |
| `hooks.tlsGeneration.tolerations` | Tolerations for pods. | `nil` |
| `hooks.tlsGeneration.podSecurityContext` | Pod-level security features. | |
| `hooks.tlsGeneration.containerSecurityContext` | Container-level security features. | |
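As a brief sketch, quieting the database initialize hook and spelling out the default TLS-generation delete policy might look like this; neither setting is required:

```yaml
hooks:
  databaseInitialize:
    verbose: false    # default is true
  tlsGeneration:
    hookAnnotations:
      "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded"
```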
| Parameter | Description | Default |
| --- | --- | --- |
| `cache.type` | Type to use for the cache. Valid values are `memcached`. | `memcached` |
| `cache.replicas` | Number of replicas. | `1` |
| `cache.resources` | Container resources. | `{}` |
| `cache.nodeSelector` | Node selector for pods. | `{"kubernetes.io/os": "linux"}` |
| `cache.podSecurityContext` | SecurityContext for cache pods. | `{"runAsUser": 65532}` |
| `cache.containerSecurityContext` | Container-level security features. | `{}` |
| `cache.updateStrategy` | UpdateStrategy spec for cache workloads. | `{}` |
| `cache.tolerations` | Tolerations for pods. | `nil` |
| `cache.memcached.image.registry` | Image registry for the Memcached image. | Value from `global.imageRegistry` |
| `cache.memcached.image.repository` | Image repository for the Memcached image. | `memcached` |
| `cache.memcached.image.tag` | Image tag for the Memcached image. | `1.6-alpine` |
| `cache.memcached.image.digest` | Image digest for the Memcached image, in the format `sha256:<DIGEST>`. | |
| `cache.memcached.imagePullPolicy` | Image pull policy. | Value from `imagePullPolicy` |
| `cache.memcached.maxItemMemory` | Limit for max item memory in the cache (in MB). | `64` |
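A short sketch scaling the cache and raising the per-item memory limit; the numbers are illustrative:

```yaml
cache:
  type: memcached      # the only valid type
  replicas: 2
  memcached:
    maxItemMemory: 128 # MB; the default is 64
```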
| Parameter | Description | Default |
| --- | --- | --- |
| `deployTools.image.registry` | Image registry for the Immuta deploy tools image. | Value from `global.imageRegistry` |
| `deployTools.image.repository` | Image repository for the Immuta deploy tools image. | `immuta/immuta-deploy-tools` |
| `deployTools.image.tag` | Image tag for the Immuta deploy tools image. | `2.4.3` |
| `deployTools.image.digest` | Image digest for the Immuta deploy tools image, in the format `sha256:<DIGEST>`. | |
| `deployTools.imagePullPolicy` | Image pull policy. | Value from `imagePullPolicy` |
| Parameter | Description | Default |
| --- | --- | --- |
| `nginxIngress.enabled` | Enable the nginx ingress deployment. | `true` |
| `nginxIngress.podSecurityContext` | Pod-level security features. | `{}` |
| `nginxIngress.containerSecurityContext` | Container-level security features. | `{capabilities: {drop: [ALL], add: [NET_BIND_SERVICE]}, runAsUser: 101}` |
| `nginxIngress.controller.image.registry` | Image registry for the Nginx Ingress controller image. | Value from `global.imageRegistry` |
| `nginxIngress.controller.image.repository` | Image repository for the Nginx Ingress controller image. | `ingress-nginx-controller` |
| `nginxIngress.controller.image.tag` | Image tag for the Nginx Ingress controller image. | `v1.1.0` |
| `nginxIngress.controller.image.digest` | Image digest for the Nginx Ingress controller image, in the format `sha256:<DIGEST>`. | |
| `nginxIngress.controller.imagePullPolicy` | ImagePullPolicy for the Nginx Ingress controller container. | `{{ .Values.imageTag }}` |
| `nginxIngress.controller.imageRepository` | (Deprecated) Use `nginxIngress.controller.image.registry` and `nginxIngress.controller.image.repository` instead. | `nil` |
| `nginxIngress.controller.imageTag` | (Deprecated) Use `nginxIngress.controller.image.tag` instead. | `nil` |
| `nginxIngress.controller.service.annotations` | Used to set arbitrary annotations on the Nginx Ingress Service. | `{}` |
| `nginxIngress.controller.service.type` | Controller service type. | `LoadBalancer` |
| `nginxIngress.controller.service.isInternal` | Whether or not to use an internal ELB. | `false` |
| `nginxIngress.controller.service.acmCertArn` | ARN for the ACM certificate. | |
| `nginxIngress.controller.replicas` | Number of controller replicas. | `1` |
| `nginxIngress.controller.minReadySeconds` | Minimum ready seconds. | `0` |
| `nginxIngress.controller.electionID` | Election ID for the nginx ingress controller. | `ingress-controller-leader` |
| `nginxIngress.controller.hostNetwork` | Run the nginx ingress controller on the host network. | `false` |
| `nginxIngress.controller.config.proxy-read-timeout` | Controller proxy read timeout. | `300` |
| `nginxIngress.controller.config.proxy-send-timeout` | Controller proxy send timeout. | `300` |
| `nginxIngress.controller.podAnnotations` | Additional annotations to apply to nginx ingress controller pods. | `{}` |
| `nginxIngress.controller.podLabels` | Additional labels to apply to nginx ingress controller pods. | `{}` |
| `nginxIngress.controller.nodeSelector` | Node selector for nginx ingress controller pods. | `{"kubernetes.io/os": "linux"}` |
| `nginxIngress.controller.tolerations` | Tolerations for nginx ingress controller pods. | `nil` |
| `nginxIngress.controller.resources` | Container resources. | `{}` |
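For example, an internal load balancer fronted by an ACM certificate might be configured as follows; the ARN is a placeholder:

```yaml
nginxIngress:
  enabled: true
  controller:
    replicas: 2
    service:
      type: LoadBalancer
      isInternal: true  # provision an internal ELB
      acmCertArn: "arn:aws:acm:us-east-1:111111111111:certificate/placeholder"  # placeholder ARN
```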
| Parameter | Description | Default |
| --- | --- | --- |
| `memcached.pdbMinAvailable` | Minimum available pods for the memcached PodDisruptionBudget. | `1` |
| `memcached.maxItemMemory` | Limit for max item memory in the cache (in MB). | `64` |
| `memcached.resources` | Container resources. | `{requests: {memory: 64Mi}}` |
| `memcached.podAnnotations` | Additional annotations to apply to memcached pods. | `{}` |
| `memcached.podLabels` | Additional labels to apply to memcached pods. | `{}` |
| `memcached.nodeSelector` | Node selector for memcached pods. | `{"kubernetes.io/os": "linux"}` |
| `memcached.tolerations` | Tolerations for memcached pods. | `nil` |
The following values are gathered before an AKS installation:

| Description | Required? | Default |
| --- | --- | --- |
| The name of your AKS cluster | Required | - |
| The Azure Subscription ID | Required | - |
| The resource group that contains the cluster | Required | - |
| Obtain from your Immuta support professional | Required | - |
| Obtain from your Immuta support professional | Required | - |
| An arbitrary metadata database password | Required | - |
| An arbitrary metadata database superuser password | Required | - |
| An arbitrary metadata database replication password | Required | - |
| An arbitrary metadata database Patroni API password | Required | - |
| An arbitrary Query Engine password | Required | - |
| An arbitrary Query Engine superuser password | Required | - |
| An arbitrary Query Engine replication password | Required | - |
| An arbitrary Query Engine Patroni API password | Required | - |
| The version tag of the desired Immuta installation | Optional | |
| The Kubernetes namespace to create and deploy Immuta to | Optional | `default` |
| The number of replicas of each main component in the cluster | Optional | 1 |
| Whether or not backups should be enabled | Optional | 1 |
| Backup Storage Account resource group | Optional | Same as |
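As a sketch with hypothetical variable names (substitute the actual names your install procedure uses), collecting these values before running Helm might look like:

```sh
# Hypothetical variable names for illustration; substitute your own values.
export AKS_CLUSTER_NAME="my-aks-cluster"
export SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
export RESOURCE_GROUP="immuta-rg"
# Arbitrary passwords; generate them securely rather than hard-coding.
export DB_PASSWORD="$(openssl rand -hex 16)"
export QE_PASSWORD="$(openssl rand -hex 16)"
```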
| Parameter | Description | Default |
| --- | --- | --- |
| `immutaVersion` | Version of Immuta. | `<Current Immuta Version>` |
| `imageTag` | Docker image tag. | `<Current Version Tag>` |
| `imagePullPolicy` | Image pull policy. | `IfNotPresent` |
| `imagePullSecrets` | List of image pull secrets to use. | `[immuta-registry]` |
| `existingSecret` | Name of an existing Kubernetes Secret for the Helm install to use. A managed Secret is not created when this value is set. | `nil` |
| `externalHostname` | External hostname assigned to this Immuta instance. | `nil` |
| `podSecurityContext` | Pod-level security features on all pods. | `{}` |
| `containerSecurityContext` | Container-level security features on all containers. | `{}` |
| `global.imageRegistry` | Global override for the image registry. | `registry.immuta.com` |
| `global.podAnnotations` | Annotations to be set on all pods. | `{}` |
| `global.podLabels` | Labels to be set on all pods. | `{}` |
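Putting the top-level values together, a minimal excerpt might read as follows; the version and hostname are placeholders:

```yaml
immutaVersion: "<desired-version>"    # placeholder; pin a specific release
externalHostname: immuta.example.com  # placeholder hostname
imagePullPolicy: IfNotPresent
imagePullSecrets:
  - immuta-registry
global:
  imageRegistry: registry.immuta.com
```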