This section illustrates how to install Immuta using Kubernetes, which allows Immuta to easily scale to meet all your future growth needs.
See the Helm installation prerequisites guide for details about system requirements.
Immuta Query Engine Port
The required firewall rules depend on whether you will use the Immuta Query Engine or exclusively use integrations. If you only use integrations, port 5432 is optional.
The following firewall rules must be open to any host or network that needs access to the Immuta service. Navigate to the tab for the technology you plan to use:
Immuta Query Engine:

Port | Protocol | Source |
---|---|---|
443 | TCP | Web Service |
5432 | TCP | PostgreSQL |

Integrations only:

Port | Protocol | Source |
---|---|---|
443 | TCP | Web Service |
Immuta has a Helm chart available for installation on Kubernetes:
Specific guides are available for the following Kubernetes cloud providers:
Immuta supports the Kubernetes distributions outlined below.

Kubernetes Distribution | Supported Versions |
---|---|
AWS EKS | 1.25, 1.26, 1.27, 1.28, 1.29 |
Azure AKS | 1.25, 1.26, 1.27, 1.28, 1.29 |
Google GKE | 1.24, 1.25, 1.26, 1.27, 1.28 |
Red Hat OpenShift | 4.12, 4.13, 4.14 |
Rancher RKE2 | 1.24, 1.25, 1.26, 1.27, 1.28 |
Ingress Controller
The Immuta Helm Chart's built-in ingress controller is enabled by default, but will be disabled by default in future versions. If you have production workloads, consider moving away from using the built-in ingress controller.
Immuta depends on the Helm functionality outlined below.
Helm templates and functions
Helm hooks:
pre-install
pre-upgrade
post-upgrade
post-delete: This hook is not strictly necessary and is only used to clean up some resources that are not deleted by Helm itself. If the post-delete hook is not supported, some resources may be left on the cluster after running helm delete.
Immuta support ends at our Helm implementation; wrapping Helm in another orchestration tool falls outside the Immuta support window.
Identify a team to set up, configure, and maintain your Kubernetes environment. Immuta will help you with the installation of our platform, but the Kubernetes environment is your company's responsibility. Review Kubernetes best practices here.
Only use the Immuta-provided default Nginx Ingress Controller if you are using the Immuta query engine. Otherwise, opt to use your own ingress controller or no controller at all.
Test your backups at least once a month.
Create the proper IAM roles and IAM permissions (if using IAM roles). Your backups will fail if this is not configured correctly.
Implementing infrastructure monitoring for the systems hosting your Immuta application is critical to ensuring its optimal performance, availability, and security. With today's complex IT environments, any disruption or delay in the underlying infrastructure can significantly impact your Immuta operations, affecting data governance processes and business outcomes. Infrastructure monitoring

- allows you to proactively oversee your servers, networks, and other hardware components in real time.
- identifies potential bottlenecks, hardware failures, or performance anomalies before they lead to significant issues or downtime.
- can alert you to unusual activities that might indicate security threats, allowing for swift mitigation.
By monitoring your hosting infrastructure, you ensure that your Immuta application continues to run smoothly, securely, and effectively.
Use any monitoring tool that is already deployed. If you're not using any monitoring tools yet, consider some of the following options:
CloudTrail (if using AWS EKS or other cloud technologies)
DataDog (generally platform agnostic)
Prometheus (free and open-source software)
Using a log aggregation tool for your Immuta application is vital to maintaining operational efficiency and security. Modern applications' complex ecosystems generate vast amounts of log data that can be challenging to manage and analyze manually. A log aggregation tool centralizes these logs, making it easier to monitor the application's performance and health in real time. It can help detect anomalies, identify patterns, and troubleshoot issues more efficiently, thereby reducing downtime. Moreover, in the context of security, these tools can help detect suspicious activities or potential breaches by analyzing log data, contributing significantly to your overall data governance and risk mitigation strategy.
Logs in Kubernetes pods rotate frequently, preventing Immuta support from viewing log history that is days or weeks old. Because these logs are necessary when investigating pod behavior and troubleshooting deployment-related issues, enable log aggregation to capture the log history.
Use any logging tool that is already deployed. If you're not using a log aggregation tool yet, consider one of the following options:
Splunk
DataDog
Grafana Loki (free and open-source software)
Once your log aggregation tool is deployed, follow these general best-practice guidelines:
Pull logs from Immuta on a daily basis. These logs contain all of the information you will need to support auditing and compliance for access, queries, and changes in your environment.
Store logs for at least 30 days in a log aggregator for monitoring and compliance.
Discuss with your compliance group or lines of business which fields you want to monitor or report on from the Immuta logs. Immuta captures a wealth of information each time a user logs in, changes a policy, or runs a query, so work with your team to determine which items to capture in a log aggregation tool or store long-term.
To ensure top performance, audit records should not be stored in Immuta longer than the default of 60 days. For long-term audit records, use an external audit storage solution to ensure long-term data retention, data preservation, centralized monitoring, enhanced security, and scalability. Using an external audit storage solution also empowers your organization to meet compliance requirements and derive valuable insights from audit data for informed decision-making and analysis.
By default, most Immuta audit records expire after 60 days, but there are some audit record types that do not expire after 60 days. See the Immuta system audit logs page for details.
Backup frequency and retention settings directly impact your data protection and disaster recovery capabilities. While a daily backup is the default frequency and provides a standard level of data security, it's essential to evaluate your specific needs and the sensitivity of your data. For organizations dealing with more sensitive or critical information, increasing the backup frequency beyond daily backups can help minimize the risk of data loss and potential downtime. However, balancing the backup frequency with resource use is vital to avoid impacting performance: longer retention periods enable historical data recovery, while shorter periods optimize storage usage. It is crucial to assess regulatory requirements, data validation practices, and your organization's tolerance for data loss to set an effective retention policy.
Configuring backup settings that align with your desired recovery capabilities and data validation frequency ensures a resilient and reliable application deployment. With the flexibility provided by Helm values, you can fine-tune these settings to match your unique business needs and data protection goals effectively:
Backup frequency: By default, backups are taken once a day at midnight. This can be changed via the backup.schedule parameter in the Helm values file, which uses CronJob syntax to specify the frequency of backups. Daily backups are standard, but you can schedule more frequent backups for sensitive data, or less frequent backups where appropriate.
Backup file retention: The number of backup files retained also matters. By default, 10 backup files are stored at all times in your storage of choice; each time a new backup is taken, the oldest file is removed. This can be changed via the backup.maxBackupCount parameter in the Helm values file.
For smaller deployments, 10 backup files is acceptable, assuming the backups are taken once a day.
For production deployment, work with your Immuta representative to determine the right number of backup files for your environment.
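As an illustration, a minimal values sketch using the two parameters above; the backup.enabled toggle is an assumption based on the backup discussion later in this guide:

```yaml
backup:
  enabled: true            # assumed toggle; see the backup section of immuta-values.yaml
  schedule: "0 0 * * *"    # CronJob syntax: daily at midnight (the default)
  maxBackupCount: 10       # retain ten backup files (the default)
```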
Kubernetes Distribution | Logging | Ingress | Storage | Backup and Restore | External Metadata Database |
---|---|---|---|---|---|
AWS EKS | AWS CloudWatch or third-party logging solution | Built-in ingress controller or third-party ingress controller | AWS EBS (default storage class in EKS) | AWS S3 | AWS RDS Postgres (use the supported version identified in the External Metadata Database Configuration guide) |
Azure AKS | Third-party logging solution | Built-in ingress controller or third-party ingress controller | Azure managed disks (default storage class in AKS) | Azure Blob Storage | Azure Database for PostgreSQL (use the supported version identified in the External Metadata Database Configuration guide) |
Google GKE | Third-party logging solution | Built-in ingress controller or third-party ingress controller | Google Cloud Persistent Disks (default storage class in GKE) | Google Cloud Storage | Google Cloud SQL for PostgreSQL (use the supported version identified in the External Metadata Database Configuration guide) |
Red Hat OpenShift | Third-party logging solution | Built-in ingress controller or third-party ingress controller | Cloud disks (AWS EBS, Azure managed disks, or Google Cloud Persistent Disks) | Cloud storage (S3, Azure Blob, Google Cloud Storage) or self-hosted object storage (such as MinIO) | Cloud-managed PostgreSQL, such as AWS RDS Postgres, Azure Database for PostgreSQL, or Google Cloud SQL for PostgreSQL (use the supported version identified in the External Metadata Database Configuration guide) |
Rancher RKE2 | Third-party logging solution | Built-in ingress controller or third-party ingress controller | Cloud disks (AWS EBS, Azure managed disks, or Google Cloud Persistent Disks) | Cloud storage (S3, Azure Blob, Google Cloud Storage) or self-hosted object storage (such as MinIO) | Cloud-managed PostgreSQL, such as AWS RDS Postgres, Azure Database for PostgreSQL, or Google Cloud SQL for PostgreSQL (use the supported version identified in the External Metadata Database Configuration guide) |
Rocky Linux 9
Review the potential impacts of Immuta's Rocky Linux 9 upgrade to your environment before proceeding:
ODBC Drivers
Your ODBC drivers must be compatible with Enterprise Linux 9 or Red Hat Enterprise Linux 9.
Container Runtimes
You must run a supported version of Kubernetes.
Use at least Docker v20.10.10 if using Docker as the container runtime.
Use at least containerd 1.4.10 if using containerd as the container runtime.
OpenSSL 3.0
CentOS Stream 9 uses OpenSSL 3.0, which has deprecated support for older insecure hashes and TLS versions, such as TLS 1.0 and TLS 1.1. This shouldn't impact you unless you are using an old, insecure certificate. In that case, the certificate will no longer work. See the OpenSSL migration guide for more information.
FIPS Environments
If you run Immuta 2022.5.x containers in a FIPS-enabled environment, they will now fail. Helm Chart 4.11 contains a feature for you to override the openssl.cnf file, which can be used to allow Immuta to run in your environment, mimicking the CentOS 7 behavior.
Helm 3.2 or greater
Kubernetes: See the Install Immuta page for a list of versions Immuta supports.
Immuta uses Helm to manage and orchestrate Kubernetes deployments. Check the Helm Release Notes to ensure you are using the correct Helm Chart with your version of Immuta.
Database backups for the metadata database and Query Engine may be stored in either cloud-based blob storage or a Persistent Volume in Kubernetes.
Backups may be stored using one of the following cloud-based blob storage services:
AWS S3
Supports authentication via AWS Access Key ID / Secret Key, IAM Roles via kube2iam or kiam, or IAM Roles in EKS.
Azure Blob Storage
Supports authentication via Azure Storage Key, Azure SAS Token, or Azure Managed Identities.
Google Cloud Storage
Supports authentication via Google Service Account Key.
When database persistence is enabled, Immuta requires access to PersistentVolumes through the use of a persistent volume claim template. These volumes should normally be provided by a block device, such as AWS EBS, Azure Disk, or GCE Persistent Disk.
Additionally, when database persistence is enabled, Immuta requires the ability to run an initContainer as root. When PodSecurityPolicies are in place, service accounts must be granted access to use a PodSecurityPolicy with the ability to RunAsAny user.
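For clusters that enforce PodSecurityPolicies, a minimal sketch of such a policy follows; the policy name and the broad volumes list are illustrative assumptions, not part of the Immuta chart:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: immuta-runasany        # hypothetical name
spec:
  privileged: false
  runAsUser:
    rule: RunAsAny             # allows the initContainer to run as root
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  volumes:
    - '*'
```

The Immuta service accounts would then be granted use of this policy through an RBAC Role and RoleBinding.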
The Immuta Helm Chart supports RBAC and will try to create all needed RBAC roles by default.
Best Practice: Use Nginx Ingress Controller
Immuta recommends that you use the Nginx Ingress Controller because it supports both HTTP and TCP ingress.
Immuta needs Ingress for two services:
Immuta Web Service (HTTP)
Immuta Query Engine (TCP)
The Immuta Helm Chart creates Ingress resources for HTTP services (the Immuta Web Service), but because of limitations with Kubernetes Ingress resources, TCP ingress must be configured separately. The configuration for TCP ingress depends on the Ingress Controller you are using in your cluster. TCP ingress is optional if you will only use integrations, and it can be disabled.
To simplify the configuration for cluster Ingress, the Immuta Helm Chart contains an optional Nginx Ingress component that may be used to configure a Nginx Ingress Controller to be used specifically for Immuta. Contact your Immuta Support Professional for more information.
Immuta’s suggested minimum node size has 4 CPUs and 16GB RAM. The default Immuta Helm deployment requires at least 3 nodes.
All Immuta services use TLS certificates to enable communication over HTTPS. In order to support many configurations, the Immuta Helm Chart can configure internal and external communication independently. If TLS is enabled, by default a certificate authority will be generated and then used to sign a certificate for both internal and external communications. See Enabling TLS for instructions on configuring TLS.
Internal HTTPS communication refers to all communication between Immuta services. External HTTPS communication refers to communication between clients and the Immuta Query Engine and Web Service, which is configured using a Kubernetes Ingress resource.
Audience: All Immuta Users
Content Summary: This page details the major components, installation, scalability, availability, and security of the Immuta platform.
Immuta's server-side software comprises the following major components:
Fingerprint Service: When enabled, additional statistical queries made during the health check are distilled into summary statistics, called fingerprints. During this process, statistical query results and data samples (which may contain PII) are temporarily held in memory by the Fingerprint Service.
Immuta Metadata Database: The database that contains instance metadata that powers the core functionality of Immuta, including policy data and attributes about data sources (tags, audit data, etc.).
Immuta Web Service: This component includes the Immuta UI and API and is responsible for all web-based user interaction with Immuta, metadata ingest, and the data fingerprinting process. Notionally a single web service, the fingerprinting functionality runs as a separate service internally and can be independently scaled.
Immuta's standard installation is a Helm installation to a Kubernetes cluster. This could be a Kubernetes cluster you manage or a hosted solution such as AKS, EKS, or GKE. This is the preferred deployment because of the minimal administration needed to achieve scale and availability.
Immuta is designed to be scalable in several dimensions. For the standard Immuta deployment, minimal administrative effort is required to manage scaling beyond the addition of nodes to the Immuta system. Scalability can also be achieved in non-standard deployments, but requires the time of skilled systems administrator resources.
The Immuta web service is stateless and horizontally scalable.
By keeping a metadata catalog rather than maintaining separate copies of data, Immuta's database is designed to remain small and responsive. By running replicated instances of this internal database, the catalog can scale in support of the web service.
Because each component of Immuta is designed to be horizontally scalable, Immuta can be configured for high availability. Upgrades and major configuration changes may require scheduled downtime, but even if Immuta's master internal database fails, recovery happens within seconds. With the addition of an external load balancer, Immuta's standard deployment comes preconfigured with these availability features.
Immuta’s core function of policy enforcement and management is designed to improve your data security. Beyond this primary feature, Immuta protects your data in several other ways.
Immuta is designed to leverage your existing identity management system when desired. This design allows Immuta to benefit from the work your security team has already done to validate users, protect credentials, and define roles and attributes.
By default, all network communications with Immuta and within Immuta are encrypted via TLS. This practice ensures your data is protected while in transit.
Immuta does not make any persistent copies of data.
Immuta does not store raw customer data. However, it may temporarily cache samples of customer data for SDD and fingerprinting. These samples are stored in the metadata database and cache containers.
Kubernetes 1.16 or greater
Helm 3.2 or greater
Rocky Linux 9
Review the potential impacts of Immuta's Rocky Linux 9 upgrade to your environment before proceeding:
ODBC Drivers
Your ODBC drivers must be compatible with Enterprise Linux 9 or Red Hat Enterprise Linux 9.
Container Runtimes
You must run a supported version of Kubernetes.
Use at least Docker v20.10.10 if using Docker as the container runtime.
Use at least containerd 1.4.10 if using containerd as the container runtime.
OpenSSL 3.0
CentOS Stream 9 uses OpenSSL 3.0, which has deprecated support for older insecure hashes and TLS versions, such as TLS 1.0 and TLS 1.1. This shouldn't impact you unless you are using an old, insecure certificate. In that case, the certificate will no longer work. See the OpenSSL migration guide for more information.
FIPS Environments
If you run Immuta 2022.5.x containers in a FIPS-enabled environment, they will now fail. Helm Chart 4.11 contains a feature for you to override the openssl.cnf file, which can be used to allow Immuta to run in your environment, mimicking the CentOS 7 behavior.
Using a Kubernetes namespace
If deploying Immuta into a Kubernetes namespace other than the default, you must include the --namespace option in all helm and kubectl commands provided throughout this section.
Immuta's Helm Chart requires Helm version 3+.
Run helm version to verify the version of Helm you are using:
Helm 3 Example Output
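The exact build metadata will vary, but Helm 3 output has this shape:

```sh
$ helm version
version.BuildInfo{Version:"v3.x.y", GitCommit:"...", GitTreeState:"clean", GoVersion:"go1.x"}
```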
In order to deploy Immuta to your Kubernetes cluster, you must be able to access the Immuta Helm Chart Repository and the Immuta Docker Registry. You can obtain credentials from your Immuta support professional.
--pass-credentials Flag
If you encounter an unauthorized error when adding Immuta's Helm Chart, you can run helm repo add with the --pass-credentials flag.
By default, usernames and passwords are passed only to the URL location of the Helm repository; the username and password are scoped to the scheme, host, and port of the Helm repository. To pass the username and password to other domains Helm may encounter when it retrieves a chart, use the --pass-credentials flag. This flag restores the old behavior for a single repository as an opt-in.
If you use a username and password for a Helm repository, you can audit the repository to check for another domain that could have received the credentials. In the index.yaml file for that repository, look for other domains in the URL lists for the chart versions. If another domain is found and that chart version is pulled or installed, the credentials will be passed on.
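A hedged sketch of adding the repository with credentials; the repository URL is a placeholder for the one provided by your Immuta support professional:

```sh
helm repo add immuta <IMMUTA HELM REPOSITORY URL> \
  --username <YOUR USERNAME> \
  --password <YOUR PASSWORD> \
  --pass-credentials
```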
Run helm repo list to ensure Immuta's Helm Chart repository has been successfully added:
Example Output
Don't forget the image pull secret!
You must create a Kubernetes Image Pull Secret in the namespace that you are deploying Immuta in, or the Pods will fail to start due to ErrImagePull.
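A sketch of creating the secret with kubectl; the secret name and registry URL are placeholders (use the registry and credentials supplied by Immuta):

```sh
kubectl create secret docker-registry immuta-registry \
  --docker-server=<IMMUTA DOCKER REGISTRY URL> \
  --docker-username=<YOUR USERNAME> \
  --docker-password=<YOUR PASSWORD>
```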
Run kubectl get secrets to confirm your Kubernetes image pull secret is in place:
Example Output
Run helm search repo immuta to check the version of your local copy of Immuta's Helm Chart:
Example Output
Update your local Chart by running helm repo update.
To perform an upgrade without upgrading to the latest version of the Chart, run helm list to determine the Chart version of the installed release, and then specify that version using the --version argument of helm upgrade.
Once you have the Immuta Docker Registry and Helm Chart Repository configured, download the immuta-values.yaml file. This file is a recommended starting point for your installation.
Modify immuta-values.yaml based on the determined configuration for your Kubernetes cluster and the desired Immuta installation. You can change a number of settings in this file; guidance for configuring persistence, backups, and resource limits is provided below. See Immuta Helm Chart Options for a comprehensive list of configuration options.
Replace the placeholder password value "<SPECIFY_PASSWORD_THAT_MEETS_YOUR_ORGANIZATIONS_POLICIES>" with a secure password that meets your organization's password policies.
Avoid these special characters in generated passwords: whitespace, $, &, :, \, /, '
Default Helm Values
Modifying any file bundled inside the Helm Chart could cause unforeseen issues and as such is not supported by Immuta. This includes, but is not limited to, the values.yaml file that contains default configurations for the Helm deployment. Any custom configuration should instead be made in the immuta-values.yaml file, which can then be passed into helm install immuta by using the --values flag as described in Deploy Immuta.
If you would like to disable persistence to disk for the database and query-engine components, you can do so by configuring database.persistence.enabled=false and/or queryEngine.persistence.enabled=false in immuta-values.yaml. Disabling persistence is acceptable for test environments; however, we strongly recommend against disabling persistence in production environments, as this leaves your database in ephemeral storage.
By default, database.persistence.enabled and queryEngine.persistence.enabled are set to true and request 120Gi of storage for each component. Recommendations for the Immuta Metadata Database storage size for POV, Staging, and Production deployments are provided in the immuta-values.yaml as shown below. However, the actual size needed is a function of the number of data sources you intend to create and the amount of logging/auditing (and its retention) that will be used in your system.
Provide Room for Growth
Provide plenty of room for growth here, as Immuta's operation will be severely impacted should database storage reach capacity.
While the Immuta query engine persistent storage size is configurable as well, the default size of 20Gi should be sufficient for operations in nearly all environments.
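A sketch of the relevant values, assuming a standard size key under each component's persistence block (verify the exact key names in immuta-values.yaml):

```yaml
database:
  persistence:
    enabled: true
    size: 120Gi    # leave plenty of room for growth
queryEngine:
  persistence:
    enabled: true
    size: 20Gi     # default; sufficient for nearly all environments
```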
Limitations on modifying database and query-engine persistence
Once persistence is set to either true or false for database or query-engine, it cannot be changed for the deployment. Modifying persistence will require a fresh installation or a full backup and restore procedure as per 4.2 - Restore From Backup - Immuta Kubernetes Re-installation.
At this point this procedure forks depending on whether you are installing with the intent of restoring from a backup or not. Use the bullets below to determine which step to follow.
If this is a new install with no restoration needed, follow Step 4.1.
If you are upgrading a previous installation using the full backup and restore (Method B), follow Step 4.2.
Immuta's Helm Chart supports taking backups and storing them in a PersistentVolume or copying them directly to cloud provider blob storage, including AWS S3, Azure Blob Storage, and Google Cloud Storage.
To configure backups with blob storage, reference the backup section in immuta-values.yaml and consult the subsection of this documentation specific to your cloud provider for assistance in configuring a compatible resource. If your Kubernetes environment is not represented there, or a workable solution does not appear available, please contact your Immuta representative to discuss options.
If using volumes, the Kubernetes cluster Immuta is being installed into must support PersistentVolumes with an access mode of ReadWriteMany. If such a resource is available, Immuta's Helm Chart will set everything up for you if you enable backups and comment out the volume and claimName.
If you are upgrading a previous installation using the full backup and restore procedure (Method B), a valid backup configuration must be available in the Helm values. Enable the functionality to restore from backups by setting the restore.enabled option to true in immuta-values.yaml.
If using the volume backup type, an existing PersistentVolumeClaim name needs to be configured in your immuta-values.yaml because the persistentVolumeClaimSpec is only used to create a new, empty volume.
If you are unsure of the value for <YOUR ReadWriteMany PersistentVolumeClaim NAME>, the command kubectl get pvc will list it for you.
Example Output
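Once the claim name is known, a hedged sketch of the restore configuration follows; the exact placement of claimName under the backup block is an assumption, so consult the Immuta Helm Chart Options:

```yaml
backup:
  enabled: true
  claimName: <YOUR ReadWriteMany PersistentVolumeClaim NAME>   # assumed key placement
restore:
  enabled: true
```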
Adhering to the guidelines and best practices for replicas and resource limits outlined below is essential for optimizing performance, ensuring cluster stability, controlling costs, and maintaining a secure and manageable environment. These settings help strike a balance between providing sufficient resources to function optimally and making efficient use of the underlying infrastructure.
Set the following replica parameters in your Helm Chart to the values listed below:
The Immuta Helm Chart supports resource limits for all components. Set resource requests and limits for the database and query engine in the Helm values. Without those limits, the pods will be the first target for eviction, which can cause issues during backup and restore, since that process consumes a lot of memory.
Add this YAML snippet to your Helm values file:
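A sketch assuming the standard Kubernetes resources block nests under each component key (verify the exact paths in Immuta Helm Chart Options; adjust memory figures to your deployment size):

```yaml
database:
  resources:
    requests:
      memory: 2Gi
    limits:
      memory: 2Gi
queryEngine:
  resources:
    requests:
      memory: 2Gi
    limits:
      memory: 2Gi
```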
The database is the only component that needs substantial resources, especially if you don't use the query engine. For a small installation, you can set the database memory to 2Gi; if you see slower performance over time, you can increase this number.
Setting CPU requests and limits is optional. Resource contention over CPU is not common for Immuta, so setting them typically has little effect.
Run the following command to deploy Immuta:
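A sketch assuming the release name immuta used elsewhere in this guide:

```sh
helm install immuta immuta/immuta --values immuta-values.yaml
```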
Troubleshooting
If you encounter errors while deploying the Immuta Helm Chart, see Troubleshooting.
HTTP communication using TLS certificates is enabled by default in Immuta's Helm Chart for both internal (inside the Kubernetes cluster) and external (between the Kubernetes ingress and the outside world) communications. This is accomplished through the generation of a local certificate authority (CA) that signs certificates for each service, all handled automatically by the Immuta installation. While not recommended, if TLS must be disabled for some reason, set tls.enabled to false in the values file.
Best Practice: TLS Certification
Immuta recommends using your own TLS certificate for external (outside the Kubernetes cluster) communications for Immuta production deployments.
Using your own certificates requires you to create a Kubernetes Secret containing the private key, certificate, and certificate authority certificate. This can be easily done using kubectl:
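A sketch using kubectl create secret generic; the local file names and the key names inside the secret are assumptions:

```sh
kubectl create secret generic immuta-external-tls \
  --from-file=tls.crt=server.crt \
  --from-file=tls.key=server.key \
  --from-file=ca.crt=ca.crt
```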
Make sure your certificates are correct
Make sure the certificate's Common Name (CN) and/or Subject Alternative Name (SAN) matches the specified externalHostname or contains an appropriate wildcard.
After creating the Kubernetes Secret, specify its use in the external ingress by setting tls.externalSecretName = immuta-external-tls in your immuta-values.yaml file:
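For example:

```yaml
tls:
  externalSecretName: immuta-external-tls
```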
The Immuta Helm Chart (version 4.5.0+) can be deployed using Argo CD.
For Argo CD versions older than 1.7.0, you must use the following Helm values in order for the TLS generation hook to run successfully. Starting with Argo CD version 1.7.0, the default TLS generation hook values can be used.
tls.manageGeneratedSecret must be set to true when using Argo CD to deploy Immuta; otherwise, the generated TLS secret will be shown as OutOfSync (requires pruning) in Argo CD. Pruning the Secret would break TLS for the deployment, so it is important to set this value to prevent that from happening.
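For example:

```yaml
tls:
  manageGeneratedSecret: true
```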
For detailed assistance in troubleshooting your installation, contact your Immuta representative or see Helm Troubleshooting.
Audience: System Administrators
Content Summary: This page outlines how to configure an external metadata database for Immuta instead of using Immuta's built-in PostgreSQL Metadata Database that runs in Kubernetes.
Helm Chart Version
Update to the latest Helm Chart before proceeding any further.
The Metadata Database can optionally be configured to run outside of Kubernetes, which eliminates the variability introduced by the Kubernetes scheduler and/or scaler without compromising high-availability. This is the preferred configuration, as it offers infrastructure administrators a greater level of control in the event of disaster recovery.
PostgreSQL Version incompatibilities
PostgreSQL versions 12 through 16 are only supported when Query Engine rehydration is enabled; otherwise, the PostgreSQL version must be pinned at 12. PostgreSQL abstraction layers such as AWS Aurora are not supported.
Enable an external metadata database by setting database.enabled=false in the immuta-values.yaml file and passing the connection information for the PostgreSQL instance under the externalDatabase key.
Set queryEngine.rehydration.enabled=true. If set to false, then externalDatabase.superuser.username and externalDatabase.superuser.password must be provided.
Superuser Role
Prior to Helm Chart 4.13, declaring externalDatabase.superuser.username and externalDatabase.superuser.password was required. This requirement has since been made optional when Query Engine rehydration is enabled. If a superuser is omitted, the chart will no longer manage the database backup/restore process; in this configuration, customers are responsible for backing up their external metadata database.
The externalDatabase object is detailed below and in the Immuta Helm Chart Options.

Property | Description | Default Value |
---|---|---|
host (required) | Hostname of the external PostgreSQL database instance. | nil |
port | Port of the external PostgreSQL database instance. | 5432 |
sslmode (required) | The mode for the database connection. Supported values are disable, require, verify-ca, and verify-full. | nil |
superuser.username | Username for the superuser used to initialize the PostgreSQL instance. | nil |
superuser.password | Password for the superuser used to initialize the PostgreSQL instance. | nil |
username | Username that Immuta creates and uses for the application. | bometa |
password (required) | Password associated with username. | nil |
dbname | Database name that Immuta uses. | bometadata |

Additionally, it is possible to use existingSecret instead of setting externalDatabase.password in the Helm values. These passwords map to the same keys that are used for the built-in database. For example,
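A hedged sketch combining the options above; the hostname and Secret name are placeholders, and the placement of existingSecret is an assumption:

```yaml
database:
  enabled: false
queryEngine:
  rehydration:
    enabled: true
externalDatabase:
  host: postgres.example.com              # placeholder hostname
  port: 5432
  sslmode: require
  username: bometa
  dbname: bometadata
  existingSecret: immuta-db-credentials   # assumed key; supplies the password
```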
Role Creation
The role's password set below should match the Helm value externalDatabase.password.
Azure Database for PostgreSQL
During restore, the built-in database's backup expects role postgres to exist. This role is not present by default and must be created when using Azure Database for PostgreSQL.
Log in to the external metadata database as a user with the superuser role attribute (such as the postgres user) using your preferred tool (e.g., psql or pgAdmin).
Connect to database postgres, and execute the following SQL.
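A hedged sketch of the role and database creation, using the chart defaults bometa and bometadata; the exact SQL shipped with the product may differ:

```sql
-- Create the application role; the password must match externalDatabase.password
CREATE ROLE bometa WITH LOGIN PASSWORD '<PASSWORD MATCHING externalDatabase.password>';

-- Create the metadata database owned by that role
CREATE DATABASE bometadata OWNER bometa;
```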
Connect to database bometadata that was created in the previous step, and execute the following SQL. Azure Database for PostgreSQL: Extensions must be configured in the web portal.
Helm Releases
Run helm list to view all existing Helm releases. Refer to the Helm docs to learn more.
For existing deployments, you can migrate from the built-in database to an external database. To migrate, backups must be configured. Reach out to your Immuta representative for instructions.
(Optional) Set default namespace:
Trigger manual backup:
Validate backup succeeded:
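A hedged sketch of these three steps; the backup CronJob name is hypothetical, so list your CronJobs first to find the real one:

```sh
# Optional: make the Immuta namespace the default for subsequent commands
kubectl config set-context --current --namespace=<YOUR NAMESPACE>

# Trigger a manual backup from the chart's backup CronJob
kubectl get cronjobs
kubectl create job --from=cronjob/<IMMUTA BACKUP CRONJOB> immuta-manual-backup

# Validate the backup succeeded (the Job should report completions)
kubectl get jobs immuta-manual-backup
```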
Follow the steps outlined in section First-Time PostgreSQL Setup.
Edit immuta-values.yaml to enable the external metadata database and restore.
Apply the immuta-values.yaml changes made in the previous step:
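For example, assuming the release name used at installation:

```sh
helm upgrade <YOUR RELEASE NAME> immuta/immuta --values immuta-values.yaml
```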
Wait until the Kubernetes resources become ready.
Edit immuta-values.yaml to enable Query Engine rehydration and disable backup/restore.
Rerun the previous helm upgrade command to apply the latest immuta-values.yaml changes.
Connect to database postgres, and execute the following SQL. Azure Database for PostgreSQL: Delete the previously created role by running DROP ROLE postgres;.
Audience: System Administrators
Content Summary: This page outlines how to deploy Immuta on OpenShift.
Immuta OpenShift Support
Immuta officially supports OpenShift 4 (versions supported by Red Hat) and does not support OpenShift 3.
Run the following command in your terminal:
runAsUser and fsGroup
The Immuta Helm Chart must be configured to set two values within the approved ranges for the OpenShift project Immuta is being deployed into: runAsUser and fsGroup.
runAsUser: On a Pod SecurityContext, this field defines the user ID that will run the processes within the pod. In the next step, this can be set to any value within the range defined in sa.scc.uid-range. See details below.
fsGroup: This field defines a group ID that will be added as a supplemental group to the Pod. Files in PersistentVolumes will be writable by this group ID. In the next step, this must be set to the minimum value in the range defined in sa.scc.supplemental-groups. See details below.
View the approved ranges in OpenShift using one of the two methods below:
OpenShift Console
Navigate to the Project Details page and click the link under Annotations.
Take note of the values for openshift.io/sa.scc.uid-range and openshift.io/sa.scc.supplemental-groups.
OpenShift CLI
Alternatively, use the OpenShift CLI to inspect the relevant values directly. For example,
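A sketch using oc; the project name is a placeholder:

```sh
oc get project <YOUR PROJECT> -o yaml | grep sa.scc
```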
In both illustrations above, the first part of the value (leading up to the /) is the start of the assigned user ID/group ID range, and the second part (trailing the /) is the extent of the range. For example, the minimum UID for sa.scc.uid-range=1000620000/10000 is 1000620000 and the maximum is 1000629999 (1000620000 + 10000 - 1).
For the examples throughout the rest of this tutorial, 1000620000 will be set as the value for both runAsUser and fsGroup.
For more details on security context restraints and how the user and group ID ranges are allocated, see the OpenShift documentation.
Set these OpenShift-specific Helm values in a YAML file that will be passed to helm install in the next step (a consolidated sketch follows this list):
externalHostname: Set to a subdomain of the domain configured for the OpenShift Ingress controller. Contact your OpenShift administrator to get the configured domain if it is unknown.
securityContext.runAsUser: Set this to a user ID in the range specified by the annotation openshift.io/sa.scc.uid-range in the OpenShift project for the following components:
backup.securityContext.runAsUser
cache.securityContext.runAsUser
database.securityContext.runAsUser
fingerprint.securityContext.runAsUser
queryEngine.securityContext.runAsUser
web.securityContext.runAsUser
securityContext.fsGroup: Set this to the minimum value in the range defined in sa.scc.supplemental-groups in the OpenShift project for the following components:
backup.securityContext.fsGroup
database.securityContext.fsGroup
queryEngine.securityContext.fsGroup
web.securityContext.fsGroup
patroniKubernetes.use_endpoints: Set to false for the components below. This change is required for Patroni to be able to successfully elect a leader.
database.patroniKubernetes.use_endpoints
queryEngine.patroniKubernetes.use_endpoints
queryEngine.clientService.type: Set to LoadBalancer so that a LoadBalancer will be created to handle the TCP traffic for the Query Engine. The LoadBalancer that OpenShift creates will have its own hostname/IP address, and you must update the Public Query Engine Hostname in Application Settings (instructions below). This step can be omitted if the Query Engine is not being used.
web.ingress.enabled: Set to false to disable creation of Ingress resources for the Immuta Web Service. OpenShift provides its own Ingress controller for handling HTTP ingress, and this is configured by creating Routes.
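A consolidated sketch of these values, using the example UID/GID 1000620000 from above; the hostname is a placeholder:

```yaml
externalHostname: immuta.apps.example.com   # subdomain of the OpenShift Ingress domain
web:
  ingress:
    enabled: false
  securityContext:
    runAsUser: 1000620000
    fsGroup: 1000620000
cache:
  securityContext:
    runAsUser: 1000620000
fingerprint:
  securityContext:
    runAsUser: 1000620000
backup:
  securityContext:
    runAsUser: 1000620000
    fsGroup: 1000620000
database:
  securityContext:
    runAsUser: 1000620000
    fsGroup: 1000620000
  patroniKubernetes:
    use_endpoints: false
queryEngine:
  securityContext:
    runAsUser: 1000620000
    fsGroup: 1000620000
  patroniKubernetes:
    use_endpoints: false
  clientService:
    type: LoadBalancer
```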
Follow the standard Immuta deployment with Helm, but supply the additional values file using the --values flag in the helm install step.
To set up ingress for Immuta using the OpenShift Ingress controller, get the CA certificate used by Immuta for internal TLS. This will be used by the OpenShift Ingress controller to validate the upstream TLS connection to Immuta.
Create a Route using the OpenShift CLI. The hostname flag should be set to match the value configured for externalHostname in the Helm values file, and it should be a subdomain of the domain that the OpenShift Ingress controller is configured for.
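A hedged sketch; the route and service names are assumptions, and ca.crt is the CA certificate extracted in the previous step:

```sh
oc create route reencrypt immuta \
  --service=immuta \
  --hostname=<YOUR externalHostname> \
  --dest-ca-cert=ca.crt
```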
This will create a route to be served by the OpenShift Ingress controller. At this point, Immuta is installed and should be accessible at the configured hostname.
Run kubectl get svc immuta-query-engine-clients to inspect the Query Engine client service in Kubernetes and get the assigned External IP address.
Copy the External-IP address. You will paste this value in the Immuta App Settings page to update the Public Query Engine Hostname.
In the Immuta UI, click the App Settings icon in the left sidebar and scroll to the Public URLs section.
Enter the value you copied from the EXTERNAL-IP column in the Public Query Engine Hostname field.
Click Save to update the configuration.
Immuta's built-in Nginx Ingress controller will not run with the restricted SCC and must be disabled in this configuration. OpenShift provides its own Ingress controller that can be used for HTTP traffic to the Immuta Web Service. However, since the OpenShift Ingress controller does not support TCP traffic, a separate LoadBalancer service must be used for the Query Engine, and the Public Query Engine Hostname must be updated accordingly.
Audience: System Administrators
Content Summary: This page outlines instructions for troubleshooting specific issues with Helm.
Using a Kubernetes namespace
If deploying Immuta into a Kubernetes namespace other than the default, you must include the --namespace option in all helm and kubectl commands provided throughout this section.
If you encounter Immuta Pods that have had the status Pending or Init:0/1 for an extended period of time, there may be an issue mounting volumes to the Pods. You may find error messages by describing one of the pods stuck in the Pending or Init:0/1 status.
If an event with the message pod has unbound PersistentVolumeClaims is seen on the frozen pod, then there is most likely an issue with the database backup storage Persistent Volume Claims. Typically this is caused by the database backup PVC not binding because there are no Kubernetes Storage Classes configured to provide the correct storage type.
Solution
Review your Helm values and ensure that you have the proper storageClassName or claimName set.
Once you have updated the immuta-values.yaml to contain the proper PVC configuration, first delete the Immuta deployment, then run helm install.
Occasionally Helm has bugs or loses track of Kubernetes resources that it has created. Immuta has created a Bash script that you may download and use to clean up all resources tied to an Immuta deployment. This command should only be run after helm delete <YOUR RELEASE NAME>.
Download cleanup-immuta-deployment.sh.
Run the script:
After a configuration change or cluster outage, you may need to perform a rolling restart to refresh the database pods without data loss. Use the command below to update a restart annotation on the database pods, instructing the database StatefulSet to roll the pods.
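A hedged sketch; the StatefulSet name is an assumption, so list StatefulSets first to find yours:

```sh
kubectl get statefulsets
kubectl patch statefulset <IMMUTA DATABASE STATEFULSET> \
  -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"restart\":\"$(date +%s)\"}}}}}"
```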
After a configuration change or cluster outage, you may need to perform a rolling restart to refresh the web service pods. Use the command below to update a restart annotation on the web pods, instructing the Deployment to roll the pods.
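A hedged sketch; the Deployment name is an assumption, so list Deployments first to find yours:

```sh
kubectl get deployments
kubectl patch deployment <IMMUTA WEB DEPLOYMENT> \
  -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"restart\":\"$(date +%s)\"}}}}}"
```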
Should you need to regenerate internal TLS certificates, follow the instructions below.
Solution
Delete the internal TLS secret
Recreate the internal TLS secret by running Helm Upgrade.
Note
If you need to modify any Postgres settings, such as TLS certificate verification for the Query Engine, be sure to modify your values file before running this command.
Helm 3: helm upgrade <YOUR RELEASE NAME> immuta/immuta --values <YOUR VALUES FILE>
Helm 2: helm upgrade immuta/immuta --values <YOUR VALUES FILE> --name <YOUR RELEASE NAME>
WAIT FOR PODS TO RESTART BEFORE CONTINUING
Restart Query Engine:
WAIT FOR PODS TO RESTART BEFORE CONTINUING
Restart Web Service:
Should you need to rotate external TLS certificates, follow the instructions below:
Solution
Create a new secret with the relevant TLS files.
Update tls.externalSecretName in immuta-values.yaml with the new external TLS secret.
Run Helm Upgrade to update the certificates for the deployment.
Delete the old secret.
Using a Kubernetes namespace
If deploying Immuta into a Kubernetes namespace other than the default, you must include the --namespace option in all helm and kubectl commands provided throughout this section.
Immuta's Helm Chart requires Helm version 3+.
New installations of Immuta must use the latest version of Helm 3 and Immuta's latest Chart.
Run helm version to verify the version of Helm you are using:
In order to deploy Immuta to your Kubernetes cluster, you must be able to access the Immuta Helm Chart Repository and the Immuta Docker Registry. You can obtain credentials from your Immuta support professional.
Run helm repo list
to ensure Immuta's Helm Chart repository has been successfully added:
Example Output
You must create a Kubernetes Image Pull Secret in the namespace that you are deploying Immuta in, or the installation will fail.
Run kubectl get secrets to confirm your Kubernetes image pull secret is in place:
Example Output
No Rollback
Immuta's migrations to your database are one-way; there is no way to revert to an earlier version of the software. If you must roll back, you will need to back up and delete your current installation, and then restore from the backup to the appropriate version of the software.
No Modifying Persistence
Once persistence is set to either true or false for the database or query-engine components, it cannot be changed for the deployment. Modifying persistence will require a fresh installation or a full backup and restore procedure.
Run helm search repo immuta to check the version of your local copy of Immuta's Helm Chart:
Example Output
Update your local Chart by running helm repo update.
To perform an upgrade without upgrading to the latest version of the Chart, run helm list to determine the Chart version of the installed release, and then specify that version using the --version argument of helm upgrade.
Run helm list to confirm Helm connectivity and verify the current Immuta installation:
Example Output
Make note of:
NAME - This is the '<YOUR RELEASE NAME>' that will be used in the remainder of these instructions.
CHART - This is the version of Immuta's Helm Chart that your instance was deployed under.
You will need the Helm values associated with your installation, which are typically stored in an immuta-values.yaml file. If you do not possess the original values file, these can be extracted from the existing installation using:
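For example:

```sh
helm get values <YOUR RELEASE NAME> > immuta-values.yaml
```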
Select your method:
Method B - Backup and Restore: This method is intended primarily for recovery scenarios and is only to be used if you have been advised to by an Immuta representative. Reach out to your Immuta representative for instructions.
Rocky Linux 9
Review the potential impacts of Immuta's Rocky Linux 9 upgrade to your environment before proceeding:
ODBC Drivers
Your ODBC drivers must be compatible with Enterprise Linux 9 or Red Hat Enterprise Linux 9.
Container Runtimes
You must run a supported version of Kubernetes.
Use at least Docker v20.10.10 if using Docker as the container runtime.
Use at least containerd 1.4.10 if using containerd as the container runtime.
OpenSSL 3.0
CentOS Stream 9 uses OpenSSL 3.0, which has deprecated support for older insecure hashes and TLS versions, such as TLS 1.0 and TLS 1.1. This shouldn't impact you unless you are using an old, insecure certificate. In that case, the certificate will no longer work. See the OpenSSL migration guide for more information.
FIPS Environments
If you run Immuta 2022.5.x containers in a FIPS-enabled environment, they will now fail. Helm Chart 4.11 contains a feature for you to override the openssl.cnf file, which can be used to allow Immuta to run in your environment, mimicking the CentOS 7 behavior.
After you make any desired changes in your immuta-values.yaml file, you can apply these changes by running helm upgrade:
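For example, assuming the release name noted above:

```sh
helm upgrade <YOUR RELEASE NAME> immuta/immuta --values immuta-values.yaml
```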
Note: Errors can result when upgrading Chart versions. These are typically resolved by making slight modifications to your values to accommodate the changes in the Chart. Downloading an updated copy of immuta-values.yaml and comparing it to your existing values is often a great way to debug such occurrences.
If you are on Kubernetes 1.22+, remove nginxIngress.controller.image.tag=v0.49.3 when upgrading; otherwise, your ingress service may not start after the upgrade.
Audience: System Administrators
Content Summary: This guide illustrates the deployment of an Immuta cluster on Microsoft Azure Kubernetes Service. Requirements may vary depending on the Azure Cloud environment and/or region. For comprehensive assistance, please contact an Immuta Support Professional.
This guide is intended to supplement the main Kubernetes Helm installation guide, which is referred to often throughout this page.
Prerequisites:
Software: 2.3.0 or greater and 2.0.30 or greater
Node Size: Immuta's suggested minimum Azure VM size for Azure Kubernetes Service deployments is Standard_D3_v2 (4 vCPU, 14GB RAM, 200 GB SSD) or equivalent. The Immuta Helm installation requires a minimum of 3 nodes; additional nodes can be added on demand.
TLS Certificates: See the main installation guide for TLS certificate requirements.
To install Azure CLI 2.0, follow Microsoft's installation instructions for your chosen platform. You can also use the Azure Cloud Shell.
For more information on nodes, see the Azure Kubernetes Service documentation.
Before installing Immuta, you will need to spin up your AKS cluster. If you would like to install Immuta on an existing AKS cluster, you can skip this step. If you wish to deploy a dedicated cluster for Immuta, please visit the AKS and ACS cluster deployment page.
Navigate to the installation method of your choice:
Please see the main Kubernetes Helm installation guide for the full walkthrough of installing Immuta via our Helm Chart; this section focuses on the specific requirements for the Helm installation on AKS. To configure backups with Azure, see the Azure Blob Storage backup section of that guide.
Since you are deploying Immuta as an Azure cloud application in AKS, you can easily configure the Nginx Ingress Controller that is bundled with the Immuta Helm deployment as a load balancer using the generated hostname from Azure.
Confirm that you have the following configurations in your values.yaml file before deploying:
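A hedged sketch; the hostname is a placeholder, the serviceType key follows the wording used in this guide, and the DNS-label annotation is a standard Azure LoadBalancer annotation:

```yaml
externalHostname: immuta.example.com      # placeholder
nginxIngress:
  controller:
    serviceType: LoadBalancer
    service:
      annotations:
        service.beta.kubernetes.io/azure-dns-label-name: immuta   # yields <label>.<region>.cloudapp.azure.com
```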
If you are using the included ingress controller, it will create a Kubernetes LoadBalancer Service to expose Immuta outside of your cluster. The following options are available for configuring the LoadBalancer Service:
nginxIngress.controller.service.annotations: Useful for setting options such as creating an internal load balancer or configuring TLS termination at the load balancer.
nginxIngress.controller.service.loadBalancerSourceRanges: Used to limit which client IP addresses can access the load balancer.
nginxIngress.controller.service.externalTrafficPolicy: Useful when working with Network Load Balancers on AWS. It can be set to "Local" to allow the client IP address to be propagated to the Pods.
Possible values for these various settings can be found in the Kubernetes documentation.
After running helm install, you can find the public IP address of the nginx controller by running:
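For example (the service name will vary with your release name):

```sh
kubectl get svc
# Look for the nginx ingress controller service of type LoadBalancer
# and note its EXTERNAL-IP column.
```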
If the public IP address shows up as <pending>, wait a few moments and check again. Once you have the IP address, run the following commands to configure the Immuta Azure Cloud Application to use your ingress controller:
Shortly after running these commands, you should be able to reach the Immuta console in your web browser at the configured externalHostName.
Best Practice: Network Security Group
Immuta recommends that you set up the network security group for the Immuta cluster to be closed to public traffic outside of your organization. If your organization already has rules and guidelines for your Azure Cloud Application security groups, you should adhere to those. Otherwise, we recommend visiting Microsoft's documentation on configuring Network security groups to find a solution that fits your environment.
If you've previously provisioned an AKS cluster (see the AKS and ACS cluster deployment page) and have installed the Installation Prerequisites, you can run an automated script that will
Prepare the Helm values file,
Register the required secrets to pull Immuta's Docker images,
Run the Helm installation, and
Create the mapping between the external IP address Ingress Controller (the cluster's load balancer) and the cluster's public DNS name.
Please Note
Running the automated deployment script will make a series of decisions for you:
The TLS certificates will be generated on the fly and will be self-signed. You can change this later by following the instructions in the external TLS section of the main installation guide.
The number of replicas of each main component will be automatically derived from your AKS cluster's node count. This can be modified by overriding the replica values in your Helm values file.
The installation will set up backup volumes by default. Set the BACKUPS value to 0 to disable Immuta backups.
Download the script:
Make it executable by running:
Below is the list of the parameters that the script accepts. These parameters are environment variables that are prepended to the execution command.

Variable Name | Description | Required | Default |
---|---|---|---|
CLUSTER_NAME | The name of your AKS cluster | Required | - |
 | The Azure Subscription ID | Required | - |
 | The resource group that contains the cluster | Required | - |
 | Obtain from your Immuta support professional | Required | - |
 | Obtain from your Immuta support professional | Required | - |
 | An arbitrary metadata database password | Required | - |
 | An arbitrary metadata database super-user password | Required | - |
 | An arbitrary metadata database replication password | Required | - |
 | An arbitrary metadata database Patroni API password | Required | - |
 | An arbitrary Query Engine password | Required | - |
 | An arbitrary Query Engine super-user password | Required | - |
 | An arbitrary Query Engine replication password | Required | - |
 | An arbitrary Query Engine Patroni API password | Required | - |
 | The version tag of the desired Immuta installation | Optional | - |
 | The Kubernetes namespace to create and deploy Immuta to | Optional | default |
 | The number of replicas of each main component in the cluster | Optional | 1 |
BACKUPS | Whether or not backups should be enabled | Optional | 1 |
 | Backup Storage Account resource group | Optional | Same as the cluster resource group |
You can use the same script to destroy a deployment you had previously run with this script, by running the following command:
The value of CLUSTER_NAME should be identical to the CLUSTER_NAME value you used to deploy Immuta.
Audience: System Administrators
Content Summary: Before installing Immuta, you will need to spin up your AKS or ACS cluster. This page outlines how to deploy Immuta cluster infrastructure on AKS and ACS.
If you would like to install Immuta on an existing AKS or ACS cluster, you can skip this section. However, we recommend deploying a dedicated resource group and cluster for Immuta if possible.
Once you have deployed your cluster infrastructure, please return to the Kubernetes Helm installation guide to finish installing Immuta.
Best Practice: Use AKS
Immuta highly recommends using the improved version of Azure Kubernetes Service, AKS. Immuta on AKS will exhibit superior stability, performance, and scalability compared to a deployment on the deprecated version known as ACS.
You will need a resource group to deploy your AKS or ACS cluster in. Note: There is no naming requirement for the Immuta resource group.
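A sketch with placeholder name and location:

```sh
az group create --name immuta --location eastus
```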
Now it is time to spin up your cluster resources in Azure. This step differs depending on whether you are deploying an AKS or ACS cluster. After running the appropriate command below, you will have to wait a few moments as the cluster resources start up.
Create AKS Cluster (Recommended):
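A hedged sketch matching the suggested node size and count; the resource group and cluster names are placeholders:

```sh
az aks create \
  --resource-group immuta \
  --name immuta-cluster \
  --node-count 3 \
  --node-vm-size Standard_D3_v2 \
  --generate-ssh-keys
```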
Create ACS Cluster (Deprecated):
You will need to configure the kubectl command line utility to use the Immuta cluster.
If you do not have kubectl installed, you can install it through the Azure CLI and then pull down your cluster's credentials, as sketched below for AKS; ACS clusters use the equivalent az acs commands.
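A sketch for AKS, using the placeholder names from the cluster creation step:

```sh
# Install kubectl via the Azure CLI
az aks install-cli

# Merge the AKS cluster credentials into your kubeconfig
az aks get-credentials --resource-group immuta --name immuta-cluster
```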
To run the script and deploy, prepend the above-mentioned parameters to the execution command with the action deploy. For example,
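A hedged sketch; the script file name and most variable names are placeholders, since only CLUSTER_NAME and BACKUPS are named in this guide:

```sh
CLUSTER_NAME=<YOUR CLUSTER NAME> BACKUPS=1 ./<DEPLOYMENT SCRIPT> deploy
```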
Audience: System Administrators
Content Summary: This page outlines how to install Immuta in an air-gapped environment.
Process for Saving and Loading Docker Images
The process outlined for saving and loading the Docker images will be different for everyone. With the exception of the list of Docker images that all users need to copy to their container registry, all code blocks provided are merely examples.
This high-level overview makes these assumptions:
a container registry is accessible from inside the air-gapped environment
Docker and Helm are already installed
All users should copy these Docker images to their container registry.
See the Helm Chart Options page for the values of IMMUTA_DEPLOY_TOOLS_VERSION, MEMCACHED_TAG, and INGRESS_NGINX_TAG.
Docker Registry Authentication
Contact your Immuta support professional for your Immuta Docker Registry credentials.
Authenticate with Immuta's Docker registry.
Pull the images.
Save the images.
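A hedged sketch of these steps; the registry URL and image names are placeholders for the ones supplied by Immuta:

```sh
docker login <IMMUTA DOCKER REGISTRY URL>
docker pull <IMMUTA DOCKER REGISTRY URL>/immuta/<IMAGE>:<TAG>
docker save <IMMUTA DOCKER REGISTRY URL>/immuta/<IMAGE>:<TAG> | gzip > <IMAGE>.tar.gz
```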
The .tar.gz files will now be in your working directory.
Add Immuta's Chart repository to Helm.
Download the Helm Chart.
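A hedged sketch; the repository URL is a placeholder:

```sh
helm repo add immuta <IMMUTA HELM REPOSITORY URL> --username <USERNAME> --password <PASSWORD>
helm pull immuta/immuta
```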
The .tgz files will now be in your working directory.
Move the Helm Chart and Docker images onto a machine connected to the air-gapped network.
Copy these Docker images to your container registry. Note: You may need to reload the environment variables.
Validate that the images are present.
Tag the images.
Push the images to your registry.
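A hedged sketch of these steps; image names and tags are placeholders:

```sh
docker load < <IMAGE>.tar.gz
docker images
docker tag <IMAGE>:<TAG> $CUSTOMER_REGISTRY/immuta/<IMAGE>:<TAG>
docker push $CUSTOMER_REGISTRY/immuta/<IMAGE>:<TAG>
```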
Create the Helm values file (e.g., myValues.yaml) and point it to your registry (e.g., web.imageRepository). Be sure to replace $CUSTOMER_REGISTRY with the actual registry URL, including any additional prefixes before immuta.
Deploy the Helm Chart.
Audience: System Administrators
Content Summary: The Immuta Helm installation integrates well with Kubernetes on AWS. This guide walks through the various components that can be set up.
Prerequisite: An Amazon EKS cluster with a recommended minimum of 3 m5.xlarge worker nodes.
Using a Kubernetes namespace
If deploying Immuta into a Kubernetes namespace other than the default, you must include the --namespace option in all helm and kubectl commands provided throughout this section.
As of Kubernetes 1.23+ on EKS, you have to configure the EBS CSI driver in order for the Immuta Helm deployment to be able to request volumes for storage. Follow these instructions:
Upon cluster creation, create an IAM policy and role and associate it with the cluster. See AWS documentation for details: Creating the Amazon EBS CSI driver IAM role for service accounts.
Upon cluster creation, add the EBS CSI driver as an add-on to the cluster. See AWS documentation for details: Managing the Amazon EBS CSI driver as an Amazon EKS add-on.
For deploying Immuta on a Kubernetes cluster using the AWS cloud provider, you can mostly follow the Kubernetes Helm installation guide.
The only deviations from that guide are in the custom values file(s) you create. You will want to incorporate any changes referenced throughout this guide, particularly in the Backups and Load Balancing sections below.
Best Practice: Use S3 for Shared Storage
On AWS, Immuta recommends using S3 for shared storage.
AWS IAM Best Practices
When using AWS IAM, make sure you follow the best practices outlined here: AWS IAM Best Practices.
Best Practice: Production and Persistence
If deploying Immuta to a production environment using the built-in metadata database, it is recommended to resize the / partition on each node to at least 50 GB. The default size for many cloud providers is 20 GB.
To begin, you will need an IAM role that Immuta can use to access the S3 bucket from your Kubernetes cluster. There are four options for role assumption:
IAM Roles for Service Accounts: recommended for EKS.
Kube2iam or kiam: recommended if you have other workloads running in the cluster.
Instance profile: recommended if only Immuta is running in the cluster.
AWS secret access keys: the simplest setup, if access keys and secrets are allowed in your environment.
The role you choose above must have at least the following IAM permissions:
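The exact permissions list is environment-specific; a minimal sketch granting backup access to a single bucket might look like the following, where <backup-bucket> is a placeholder for your backup bucket name:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::<backup-bucket>"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::<backup-bucket>/*"
    }
  ]
}
```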
The easiest way to expose your Immuta deployment running on Kubernetes with the AWS cloud provider is to set up nginx ingress as serviceType: LoadBalancer and let the chart handle creation of an ELB.
Best Practices: ELB Listeners Configured to Use SSL
For best performance and to avoid any issues with web sockets, the ELB listeners need to be configured to use SSL instead of HTTPS.
If you are using the included ingress controller, it will create a Kubernetes LoadBalancer Service to expose Immuta outside of your cluster. The following options are available for configuring the LoadBalancer Service:
nginxIngress.controller.service.annotations: Useful for setting options such as creating an internal load balancer or configuring TLS termination at the load balancer.
nginxIngress.controller.service.loadBalancerSourceRanges: Used to limit which client IP addresses can access the load balancer.
nginxIngress.controller.service.externalTrafficPolicy: Useful when working with Network Load Balancers on AWS. It can be set to Local to allow the client IP address to be propagated to the pods.
Possible values for these various settings can be found in the Kubernetes Documentation.
If you would like to use automatic ELB provisioning, you can use the following values:
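A hedged example using the chart's documented service values; the ACM certificate ARN is a placeholder:

```yaml
nginxIngress:
  controller:
    service:
      type: LoadBalancer
      # Replace with your ACM certificate ARN.
      acmCertArn: arn:aws:acm:<region>:<account-id>:certificate/<certificate-id>
```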
You can then manually edit the ELB configuration in the AWS console to use ACM TLS certificates to ensure your HTTPS traffic is secured by a trusted certificate. For instructions, see Amazon's guide on how to Configure an HTTPS Listener for Your Classic Load Balancer.
Another option is to set up nginx ingress with serviceType: NodePort and configure load balancers outside of the cluster.
For example,
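```yaml
nginxIngress:
  controller:
    service:
      type: NodePort
```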
To determine which ports to configure the load balancer for, examine the Service configuration:
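A sketch, assuming the controller Service is named <release-name>-nginx-ingress-controller (confirm with kubectl get services):

```bash
kubectl get service <release-name>-nginx-ingress-controller \
  -o jsonpath='{range .spec.ports[*]}{.name}{": "}{.nodePort}{"\n"}{end}'
```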
This will print out the port name and port. For example,
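Illustrative output only; your port names and NodePort numbers will differ:

```
https: 31443
postgres: 30432
```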
The Immuta deployment to EKS has a very low maintenance burden.
Best Practices: Installation Maintenance
Immuta recommends the following basic procedures for monitoring and periodic maintenance of your installation:
Periodically examine the contents of S3 to ensure database backups exist for the expected time range.
Ensure your Immuta installation is current, and update it per the update instructions if it is not.
Be aware of the solutions to common management tasks for Kubernetes deployments.
If kubectl does not meet your monitoring needs, we recommend installing the Kubernetes Dashboard using the AWS-provided instructions.
Ensure that your Immuta deployment is taking regular backups to AWS S3.
Your Immuta deployment is highly available and resilient to failure. For some catastrophic failures, recovery from backup may be required. Below is a list of failure conditions and the steps necessary to ensure Immuta is operational.
Internal Immuta Service Failure: Because Immuta is running in a Kubernetes deployment, no action should be necessary. Should a failure occur that is not automatically resolved, follow Immuta backup restoration procedures.
EKS Cluster Failure: Should your EKS cluster experience a failure, simply create a new cluster and follow Immuta backup restoration procedures.
Availability Zone Failure: EKS, ELB, and the Immuta installation within EKS are all designed to tolerate the failure of an availability zone, so no action is needed.
Region Failure: To provide recovery capability in the unlikely event of an AWS Region failure, Immuta recommends periodically copying database backups into an S3 bucket in a different AWS region. Should you experience a region failure, simply create a new cluster in a working region and follow Immuta backup restoration procedures.
See the AWS Documentation for more information on managing service limits to allow for proper disaster recovery.
The following procedure walks through the process of changing passwords for the database users in the Immuta Database.
The commands outlined here will need to be altered depending on your Helm release name and chosen passwords. Depending on your environment, there may be other changes required for the commands to complete successfully, including, but not limited to, Kubernetes namespace, kubectl context, and Helm values file name.
This process results in downtime.
Scale the database StatefulSet to 1 replica:
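A sketch; the StatefulSet name is an assumption, so confirm it with kubectl get statefulsets:

```bash
kubectl scale statefulset <release-name>-immuta-database --replicas=1
```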
Change database.superuserPassword:
Alter Postgres user password:
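For example, running psql inside the database pod (the pod name is an assumption):

```bash
kubectl exec -it <release-name>-immuta-database-0 -- \
  psql -U postgres -c "ALTER USER postgres WITH PASSWORD '<new-password>';"
```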
Update database.superuserPassword with <new-password> in immuta-values.yaml.
Change database.replicationPassword:
Alter replicator user password:
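Similarly, assuming the same pod name:

```bash
kubectl exec -it <release-name>-immuta-database-0 -- \
  psql -U postgres -c "ALTER USER replicator WITH PASSWORD '<new-password>';"
```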
Update database.replicationPassword with <new-password> in immuta-values.yaml.
Change database.password:
Alter the bometa user password:
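Again assuming the same pod name:

```bash
kubectl exec -it <release-name>-immuta-database-0 -- \
  psql -U postgres -c "ALTER USER bometa WITH PASSWORD '<new-password>';"
```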
Update database.password with <new-password> in immuta-values.yaml.
Update database.patroniApiPassword with <new-password> in immuta-values.yaml.
Run helm upgrade to persist the changes and scale the database StatefulSet back up:
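A sketch; the chart reference and values file name are illustrative:

```bash
helm upgrade <release-name> immuta/immuta --values immuta-values.yaml
```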
Restart web pods:
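For example, assuming the web component runs as a Deployment (confirm the name with kubectl get deployments):

```bash
kubectl rollout restart deployment <release-name>-immuta-web
```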
Users have the option to use an existing Kubernetes secret for Immuta database passwords used in Helm installations.
Update your existingSecret values in your Kubernetes environment.
Get the current replica counts:
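For example:

```bash
kubectl get statefulsets
```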
Scale the database StatefulSet to 1 replica:
Change the value corresponding to database.superuserPassword in the existing Kubernetes Secret.
Alter Postgres user password:
Change the value corresponding to database.replicationPassword in the existing Kubernetes Secret.
Alter replicator user password:
Change the value corresponding to database.password in the existing Kubernetes Secret.
Alter the bometa user password:
Scale the immuta-database StatefulSet back up to the replica count recorded in the first step:
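For example:

```bash
kubectl scale statefulset <release-name>-immuta-database --replicas=<previous-count>
```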
Restart web pods:
The Helm Chart includes components that make up your Immuta infrastructure, and you can change these values to tailor your Immuta infrastructure to suit your needs. The tables below include parameter descriptions and default values for all components in the Helm Chart.
When installing Immuta, download immuta-values.yaml and update the values to your preferred settings.
See the Helm installation page for guidance and best practices.
tls.manageGeneratedSecret may cause issues with helm install. In most cases, tls.manageGeneratedSecret should only be set to true when Helm is not being used to install the release (e.g., with Argo CD). If tls.manageGeneratedSecret is set to true with the default TLS generation hook configuration, you will encounter an error similar to the following:
Error: secrets "immuta-tls" already exists
You can work around this error by configuring the TLS generation hook to run as a post-install hook. However, this configuration is not compatible with helm install --wait: if the --wait flag is used, the command will time out and fail.
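A minimal sketch of that workaround, assuming the chart's hooks.tlsGeneration.hookAnnotations value passes hook annotations through to the generated resources (only the hook-delete-policy key is documented in the tables below, so treat the hook annotation itself as an assumption):

```yaml
tls:
  manageGeneratedSecret: true
hooks:
  tlsGeneration:
    hookAnnotations:
      # Assumption: this annotation is passed through to the hook resources.
      "helm.sh/hook": post-install
```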
Backup

Parameter | Description | Default
---|---|---
backup.enabled | Whether or not to turn on automatic backups. | true
backup.restore.enabled | Whether or not to restore from backups if present. | false
backup.type | Backup storage type. Must be defined if backup.enabled is true. Must be one of s3, gs, or azblob. | nil
backup.cronJob.nodeSelector | Node selector for the backup CronJob. | {"kubernetes.io/os": "linux"}
backup.cronJob.resources | Container resources. | {}
backup.cronJob.tolerations | Tolerations for the backup CronJob. | nil
backup.extraEnv | Mapping of key-value pairs to be set on backup Job containers. | {}
backup.failedJobsHistoryLimit | Number of failed Jobs to retain. | 1
backup.keepBackupVolumes | Whether or not to keep backup volumes when uninstalling Immuta. | false
backup.maxBackupCount | Maximum number of backups to retain at a given time. | 10
backup.podAnnotations | Annotations to add to all pods associated with backups. | nil
backup.podLabels | Labels to add to all pods associated with backups. | nil
backup.restore.databaseFile | Name of the file in the database backup folder to restore from. | nil
backup.restore.queryEngineFile | Name of the file in the query-engine backup folder to restore from. | nil
backup.schedule | Kubernetes CronJob schedule expression. | 0 0 * * *
backup.securityContext | SecurityContext for backup pods. | {}
backup.serviceAccountAnnotations | Annotations to add to all ServiceAccounts associated with backups. | {}
backup.successfulJobsHistoryLimit | Number of successful Jobs to retain. | 3
backup.podSecurityContext | Pod-level security features. | {}
backup.containerSecurityContext | Container-level security features. | {}
These values are used when backup.type=s3.

Parameter | Description | Default
---|---|---
backup.s3.awsAccessKeyId | AWS access key ID. | nil
backup.s3.awsSecretAccessKey | AWS secret access key. | nil
backup.s3.awsRegion | AWS region. | nil
backup.s3.bucket | S3 bucket to store backups in. | nil
backup.s3.bucketPrefix | Prefix to prepend to all backups. | nil
backup.s3.endpoint | Endpoint URL of an S3-compatible server. | nil
backup.s3.caBundle | CA bundle in PEM format. Used to verify TLS certificates of a custom S3 endpoint. | nil
backup.s3.forcePathStyle | Set to "true" to force the use of path-style addressing. | nil
backup.s3.disableSSL | Set to "true" to disable SSL connections for the S3 endpoint. | nil
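For instance, a hedged values snippet enabling daily S3 backups, where the bucket name and region are placeholders:

```yaml
backup:
  enabled: true
  type: s3
  s3:
    bucket: <backup-bucket>
    awsRegion: <region>
```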
These values are used when backup.type=azblob.

Parameter | Description | Default
---|---|---
backup.azblob.azStorageAccount | Azure storage account name. | nil
backup.azblob.azStorageKey | Azure storage account key. | nil
backup.azblob.azStorageSASToken | Azure storage account SAS token. | nil
backup.azblob.container | Azure storage account container name. | nil
backup.azblob.containerPrefix | Prefix to prepend to all backups. | nil
These values are used when backup.type=gs.

Parameter | Description | Default
---|---|---
backup.gs.gsKeySecretName | Kubernetes Secret containing key.json for the Google service account. | nil
backup.gs.bucket | Google Cloud Storage bucket. | nil
backup.gs.bucketPrefix | Prefix to prepend to all backups. | nil
TLS

Parameter | Description | Default
---|---|---
tls.enabled | Whether or not to use TLS. | true
tls.create | Whether or not to generate TLS certificates. | true
tls.manageGeneratedSecret | When true, the generated TLS secret is created as a resource of the Helm Chart. | false
tls.secretName | Secret name to use for internal and external communication (self-provided certs only). | nil
tls.enabledInternal | Whether or not to use TLS for all internal communication. | true
tls.internalSecretName | Secret name to use for internal communication (self-provided certs only). | nil
tls.enabledExternal | Whether or not to use TLS for all external communication. | true
tls.externalSecretName | Secret name to use for external communication (self-provided certs only). | nil
Web

Parameter | Description | Default
---|---|---
web.extraEnv | Mapping of key-value pairs to be set on web containers. | {}
web.extraVolumeMounts | List of extra volume mounts to be added to web containers. | []
web.extraVolumes | List of extra volumes to be added to web containers. | []
web.image.registry | Image registry for the Immuta service image. | Value from global.imageRegistry
web.image.repository | Image repository for the Immuta service image. | immuta/immuta-service
web.image.tag | Image tag for the Immuta service image. | Value from imageTag or immutaVersion
web.image.digest | Image digest for the Immuta service image, in the format sha256:<DIGEST>. | |
web.imagePullPolicy | ImagePullPolicy for the Immuta service container. | Value from imagePullPolicy
web.imageRepository | Deprecated. Use web.image.registry and web.image.repository. | nil
web.imageTag | Deprecated. Use web.image.tag. | nil
web.replicas | Number of replicas of the web service to deploy. Maximum: 3. | 1
web.workerCount | Number of web service worker processes to deploy. | 2
web.threadPoolSize | Number of threads to use for each Node.js process. | nil
web.ingress.enabled | Controls the creation of an Ingress resource for the web service. | true
web.ingress.clientMaxBodySize | client_max_body_size passed through to nginx. | 1g
web.resources | Container resources. | {}
web.podAnnotations | Additional annotations to apply to web pods. | {}
web.podLabels | Additional labels to apply to web pods. | {}
web.nodeSelector | Node selector for web pods. | {"kubernetes.io/os": "linux"}
web.serviceAccountAnnotations | Annotations for the web ServiceAccount. | {}
web.tolerations | Tolerations for web pods. | nil
web.podSecurityContext | Pod-level security features. | {}
web.containerSecurityContext | Container-level security features. | {}
Fingerprint

Parameter | Description | Default
---|---|---
fingerprint.image.registry | Image registry for the Immuta fingerprint image. | Value from global.imageRegistry
fingerprint.image.repository | Image repository for the Immuta fingerprint image. | immuta/immuta-fingerprint
fingerprint.image.tag | Image tag for the Immuta fingerprint image. | Value from imageTag or immutaVersion
fingerprint.image.digest | Image digest for the Immuta fingerprint image, in the format sha256:<DIGEST>. | |
fingerprint.imagePullPolicy | ImagePullPolicy for the Immuta fingerprint container. | Value from imagePullPolicy
fingerprint.imageRepository | Deprecated. Use fingerprint.image.registry and fingerprint.image.repository. | nil
fingerprint.imageTag | Deprecated. Use fingerprint.image.tag. | nil
fingerprint.replicas | Number of replicas of the fingerprint service to deploy. | 1
fingerprint.logLevel | Log level for the fingerprint service. | WARNING
fingerprint.extraConfig | Object containing configuration options for the Immuta fingerprint service. | {}
fingerprint.resources | Container resources. | {}
fingerprint.podAnnotations | Additional annotations to apply to fingerprint pods. | {}
fingerprint.podLabels | Additional labels to apply to fingerprint pods. | {}
fingerprint.nodeSelector | Node selector for fingerprint pods. | {"kubernetes.io/os": "linux"}
fingerprint.serviceAccountAnnotations | Annotations for the fingerprint ServiceAccount. | {}
fingerprint.tolerations | Tolerations for fingerprint pods. | nil
Component Security Contexts

Parameter | Description | Default
---|---|---
<component>.podSecurityContext | Pod-level security features. | |
<component>.containerSecurityContext | Container-level security features. | |
Metadata Database

The Metadata Database component can be configured to use either the built-in Kubernetes deployment or an external PostgreSQL database; some Helm values are shared between both. The following values are used when database.enabled=true.

Parameter | Description | Default
---|---|---
database.enabled | Enabled flag. Used to disable the built-in database when an external database is used. | true
database.image.registry | Image registry for the Immuta database image. | Value from global.imageRegistry
database.image.repository | Image repository for the Immuta database image. | immuta/immuta-db
database.image.tag | Image tag for the Immuta database image. | Value from imageTag or immutaVersion
database.image.digest | Image digest for the Immuta database image, in the format sha256:<DIGEST>. | |
database.imagePullPolicy | ImagePullPolicy for the Immuta database container. | Value from imagePullPolicy
database.imageRepository | Deprecated. Use database.image.registry and database.image.repository. | nil
database.imageTag | Deprecated. Use database.image.tag. | nil
database.extraEnv | Mapping of key-value pairs to be set on database containers. | {}
database.extraVolumeMounts | List of extra volume mounts to be added to database containers. | []
database.extraVolumes | List of extra volumes to be added to database containers. | []
database.nodeSelector | Node selector for database pods. | {"kubernetes.io/os": "linux"}
database.password | Password for the Immuta metadata database. | secret
database.patroniApiPassword | Password for the Patroni REST API. | secret
database.patroniKubernetes | Patroni Kubernetes settings. | {"use_endpoints": true}
database.persistence.enabled | Set this to true to enable data persistence on all database pods. It should be true for all non-testing environments. | false
database.podAnnotations | Additional annotations to apply to database pods. | {}
database.podLabels | Additional labels to apply to database pods. | {}
database.replicas | Number of database replicas. | 1
database.replicationPassword | Password for the replication user. | secret
database.resources | Container resources. | {}
database.sharedMemoryVolume.enabled | Enable the use of a memory-backed emptyDir volume for /dev/shm. | false
database.sharedMemoryVolume.sizeLimit | Size limit for the shared memory volume. Only available when the SizeMemoryBackedVolumes feature gate is enabled. | nil
database.superuserPassword | Password for the PostgreSQL superuser. | secret
database.tolerations | Tolerations for database pods. | nil
database.podSecurityContext | Pod-level security features. | {}
database.containerSecurityContext | Container-level security features. | {}
These values are used when database.enabled=false.

Parameter | Description | Default
---|---|---
externalDatabase.host | Required. Hostname of the external database instance. | nil
externalDatabase.port | Port for the external database instance. | 5432
externalDatabase.sslmode | PostgreSQL sslmode option for the external database connection. Behavior when unset is require. | nil
externalDatabase.dbname | Immuta database name. | bometadata
externalDatabase.username | Immuta database username. | bometa
externalDatabase.password | Required. Immuta database user password. | nil
externalDatabase.superuser.username | Required. Username for the superuser used to initialize the database instance. | nil
externalDatabase.superuser.password | Required. Password for the superuser used to initialize the database instance. | nil
externalDatabase.backup.enabled | Deprecated. Enable flag for external database backups; use backup.enabled instead. | true
externalDatabase.restore.enabled | Deprecated. Enable flag for external database restore; use backup.restore.enabled instead. | true
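For example, a hedged values snippet pointing Immuta at an external PostgreSQL instance, with hostnames and credentials as placeholders:

```yaml
database:
  enabled: false
externalDatabase:
  host: <postgres-hostname>
  password: <immuta-user-password>
  superuser:
    username: <superuser-username>
    password: <superuser-password>
```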
Query Engine

If you will only use integrations, the Query Engine port (5432) is optional; when using the built-in Nginx Ingress Controller, you can stop publishing it by setting queryEngine.publishPort to false.

Parameter | Description | Default
---|---|---
queryEngine.extraEnv | Mapping of key-value pairs to be set on Query Engine containers. | {}
queryEngine.extraVolumeMounts | List of extra volume mounts to be added to Query Engine containers. | []
queryEngine.extraVolumes | List of extra volumes to be added to Query Engine containers. | []
queryEngine.image.registry | Image registry for the Immuta Query Engine image. | Value from global.imageRegistry
queryEngine.image.repository | Image repository for the Immuta Query Engine image. | immuta/immuta-db
queryEngine.image.tag | Image tag for the Immuta Query Engine image. | Value from imageTag or immutaVersion
queryEngine.image.digest | Image digest for the Immuta Query Engine image, in the format sha256:<DIGEST>. | |
queryEngine.imagePullPolicy | ImagePullPolicy for the Immuta Query Engine container. | Value from imagePullPolicy
queryEngine.imageRepository | Deprecated. Use queryEngine.image.registry and queryEngine.image.repository. | nil
queryEngine.imageTag | Deprecated. Use queryEngine.image.tag. | nil
queryEngine.replicas | Number of Query Engine replicas. | 1
queryEngine.password | Password for the Immuta feature store database. | secret
queryEngine.superuserPassword | Password for the PostgreSQL superuser. | secret
queryEngine.replicationPassword | Password for the replication user. | secret
queryEngine.patroniApiPassword | Password for the Patroni REST API. | secret
queryEngine.patroniKubernetes | Patroni Kubernetes settings. | {"use_endpoints": true}
queryEngine.persistence.enabled | Should be set to true for all non-testing environments. | false
queryEngine.resources | Container resources. | {}
queryEngine.service | Service configuration for the Query Engine service if not using an ingress controller. | |
queryEngine.podAnnotations | Additional annotations to apply to Query Engine pods. | {}
queryEngine.podLabels | Additional labels to apply to Query Engine pods. | {}
queryEngine.nodeSelector | Node selector for Query Engine pods. | {"kubernetes.io/os": "linux"}
queryEngine.sharedMemoryVolume.enabled | Enable the use of a memory-backed emptyDir volume for /dev/shm. | false
queryEngine.sharedMemoryVolume.sizeLimit | Size limit for the shared memory volume. Only available when the SizeMemoryBackedVolumes feature gate is enabled. | nil
queryEngine.tolerations | Tolerations for Query Engine pods. | nil
queryEngine.podSecurityContext | Pod-level security features. | {}
queryEngine.containerSecurityContext | Container-level security features. | {}
queryEngine.publishPort | Controls whether or not the Query Engine port (5432) is published on the built-in ingress controller service. | true
Cleanup Hook

The cleanup hook is a Helm post-delete hook that is responsible for cleaning up some resources that are not deleted by Helm.

Parameter | Description | Default
---|---|---
hooks.cleanup.resources | Container resources. | {}
hooks.cleanup.serviceAccountAnnotations | Annotations for the cleanup hook ServiceAccount. | {}
hooks.cleanup.nodeSelector | Node selector for pods. | {"kubernetes.io/os": "linux"}
hooks.cleanup.tolerations | Tolerations for pods. | nil
hooks.cleanup.podSecurityContext | Pod-level security features. | |
hooks.cleanup.containerSecurityContext | Container-level security features. | |
Database Initialize Hook

The database initialize hook is used to initialize the external database when database.enabled=false.

Parameter | Description | Default
---|---|---
hooks.databaseInitialize.resources | Container resources. | {}
hooks.databaseInitialize.serviceAccountAnnotations | Annotations for the database initialize hook ServiceAccount. | {}
hooks.databaseInitialize.verbose | Flag to enable or disable verbose logging in the database initialize hook. | true
hooks.databaseInitialize.nodeSelector | Node selector for pods. | {"kubernetes.io/os": "linux"}
hooks.databaseInitialize.tolerations | Tolerations for pods. | nil
hooks.databaseInitialize.podSecurityContext | Pod-level security features. | |
hooks.databaseInitialize.containerSecurityContext | Container-level security features. | |
TLS Generation Hook

The TLS generation hook is a Helm pre-install hook that is responsible for generating the TLS certificates used for connections between the Immuta pods.

Parameter | Description | Default
---|---|---
hooks.tlsGeneration.hookAnnotations."helm.sh/hook-delete-policy" | Delete policy for the TLS generation hook. | "before-hook-creation,hook-succeeded"
hooks.tlsGeneration.resources | Container resources. | {}
hooks.tlsGeneration.serviceAccountAnnotations | Annotations for the TLS generation hook ServiceAccount. | {}
hooks.tlsGeneration.nodeSelector | Node selector for pods. | {"kubernetes.io/os": "linux"}
hooks.tlsGeneration.tolerations | Tolerations for pods. | nil
hooks.tlsGeneration.podSecurityContext | Pod-level security features. | |
hooks.tlsGeneration.containerSecurityContext | Container-level security features. | |
Cache

Parameter | Description | Default
---|---|---
cache.type | Type to use for the cache. The only valid value is memcached. | memcached
cache.replicas | Number of replicas. | 1
cache.resources | Container resources. | {}
cache.nodeSelector | Node selector for pods. | {"kubernetes.io/os": "linux"}
cache.podSecurityContext | SecurityContext for cache pods. | {"runAsUser": 65532}
cache.containerSecurityContext | Container-level security features. | {}
cache.updateStrategy | UpdateStrategy spec for cache workloads. | {}
cache.tolerations | Tolerations for pods. | nil
cache.memcached.image.registry | Image registry for the Memcached image. | Value from global.imageRegistry
cache.memcached.image.repository | Image repository for the Memcached image. | memcached
cache.memcached.image.tag | Image tag for the Memcached image. | 1.6-alpine
cache.memcached.image.digest | Image digest for the Memcached image, in the format sha256:<DIGEST>. | |
cache.memcached.imagePullPolicy | Image pull policy. | Value from imagePullPolicy
cache.memcached.maxItemMemory | Limit for max item memory in the cache (in MB). | 64
Deploy Tools

Parameter | Description | Default
---|---|---
deployTools.image.registry | Image registry for the Immuta deploy tools image. | Value from global.imageRegistry
deployTools.image.repository | Image repository for the Immuta deploy tools image. | immuta/immuta-deploy-tools
deployTools.image.tag | Image tag for the Immuta deploy tools image. | 2.4.3
deployTools.image.digest | Image digest for the Immuta deploy tools image, in the format sha256:<DIGEST>. | |
deployTools.imagePullPolicy | Image pull policy. | Value from imagePullPolicy
Nginx Ingress

Parameter | Description | Default
---|---|---
nginxIngress.enabled | Enable the nginx ingress deployment. | true
nginxIngress.podSecurityContext | Pod-level security features. | {}
nginxIngress.containerSecurityContext | Container-level security features. | {capabilities: {drop: [ALL], add: [NET_BIND_SERVICE]}, runAsUser: 101}
nginxIngress.controller.image.registry | Image registry for the Nginx Ingress controller image. | Value from global.imageRegistry
nginxIngress.controller.image.repository | Image repository for the Nginx Ingress controller image. | ingress-nginx-controller
nginxIngress.controller.image.tag | Image tag for the Nginx Ingress controller image. | v1.1.0
nginxIngress.controller.image.digest | Image digest for the Nginx Ingress controller image, in the format sha256:<DIGEST>. | |
nginxIngress.controller.imagePullPolicy | ImagePullPolicy for the Nginx Ingress controller container. | Value from imagePullPolicy
nginxIngress.controller.imageRepository | Deprecated. Use nginxIngress.controller.image.registry and nginxIngress.controller.image.repository. | nil
nginxIngress.controller.imageTag | Deprecated. Use nginxIngress.controller.image.tag. | nil
nginxIngress.controller.service.annotations | Used to set arbitrary annotations on the Nginx Ingress Service. | {}
nginxIngress.controller.service.type | Controller service type. | LoadBalancer
nginxIngress.controller.service.isInternal | Whether or not to use an internal ELB. | false
nginxIngress.controller.service.acmCertArn | ARN for the ACM certificate. | |
nginxIngress.controller.replicas | Number of controller replicas. | 1
nginxIngress.controller.minReadySeconds | Minimum ready seconds. | 0
nginxIngress.controller.electionID | Election ID for the nginx ingress controller. | ingress-controller-leader
nginxIngress.controller.hostNetwork | Run the nginx ingress controller on the host network. | false
nginxIngress.controller.config.proxy-read-timeout | Controller proxy read timeout. | 300
nginxIngress.controller.config.proxy-send-timeout | Controller proxy send timeout. | 300
nginxIngress.controller.podAnnotations | Additional annotations to apply to nginx ingress controller pods. | {}
nginxIngress.controller.podLabels | Additional labels to apply to nginx ingress controller pods. | {}
nginxIngress.controller.nodeSelector | Node selector for nginx ingress controller pods. | {"kubernetes.io/os": "linux"}
nginxIngress.controller.tolerations | Tolerations for nginx ingress controller pods. | nil
nginxIngress.controller.resources | Container resources. | {}
Deprecation Warning

The following values are deprecated and should be migrated to cache and cache.memcached. See the Cache table above for replacement values.

Parameter | Description | Default
---|---|---
memcached.pdbMinAvailable | Minimum PDB available. | 1
memcached.maxItemMemory | Limit for max item memory in the cache (in MB). | 64
memcached.resources | Container resources. | {requests: {memory: 64Mi}}
memcached.podAnnotations | Additional annotations to apply to memcached pods. | {}
memcached.podLabels | Additional labels to apply to memcached pods. | {}
memcached.nodeSelector | Node selector for memcached pods. | {"kubernetes.io/os": "linux"}
memcached.tolerations | Tolerations for memcached pods. | nil
Top-Level and Global Values

Parameter | Description | Default
---|---|---
immutaVersion | Version of Immuta. | <Current Immuta Version>
imageTag | Docker image tag. | <Current Version Tag>
imagePullPolicy | Image pull policy. | IfNotPresent
imagePullSecrets | List of image pull secrets to use. | [immuta-registry]
existingSecret | Name of an existing Kubernetes Secret for the Helm install to use. A managed Secret is not created when this value is set. | nil
externalHostname | External hostname assigned to this Immuta instance. | nil
podSecurityContext | Pod-level security features on all pods. | {}
containerSecurityContext | Container-level security features on all containers. | {}
global.imageRegistry | Global override for the image registry. | registry.immuta.com
global.podAnnotations | Annotations to be set on all pods. | {}
global.podLabels | Labels to be set on all pods. | {}