Install AWS Cloud Provider
Audience: System Administrators
Content Summary: The Immuta Helm installation integrates well with Kubernetes on AWS. This guide walks through the various components that can be set up.
Prerequisite: An Amazon EKS cluster with a recommended minimum of 3 m5.xlarge worker nodes.
Using a Kubernetes namespace
If deploying Immuta into a Kubernetes namespace other than the default, you must include the `--namespace` option in all `helm` and `kubectl` commands provided throughout this section.
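For example, a minimal sketch assuming a namespace named `immuta` and a chart reference of `immuta/immuta` (both illustrative; substitute your own namespace and the chart source from the Helm installation guide, and note this uses Helm 3 syntax):

```sh
# Create a dedicated namespace and target it in every helm/kubectl command.
# The namespace "immuta" and chart reference "immuta/immuta" are illustrative.
kubectl create namespace immuta
helm install immuta immuta/immuta --namespace immuta --values immuta-values.yaml
kubectl get pods --namespace immuta
```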
Deployment
As of Kubernetes 1.23 on EKS, you must configure the Amazon EBS CSI driver so that the Immuta Helm deployment can request volumes for storage. Follow these instructions:
Upon cluster creation, create an IAM policy and role and associate it with the cluster. See AWS documentation for details: Creating the Amazon EBS CSI driver IAM role for service accounts.
Upon cluster creation, add the EBS CSI driver as an add-on to the cluster. See AWS documentation for details: Managing the Amazon EBS CSI driver as an Amazon EKS add-on.
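As an illustration, both steps can be performed with `eksctl`; the cluster name, account ID, and role name below are placeholders:

```sh
# Step 1 (illustrative): create an IAM role for the EBS CSI controller's
# service account, attaching the AWS-managed EBS CSI driver policy.
eksctl create iamserviceaccount \
  --name ebs-csi-controller-sa \
  --namespace kube-system \
  --cluster my-cluster \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --approve \
  --role-only \
  --role-name AmazonEKS_EBS_CSI_DriverRole

# Step 2 (illustrative): install the EBS CSI driver as an EKS add-on,
# bound to the role created above.
eksctl create addon \
  --name aws-ebs-csi-driver \
  --cluster my-cluster \
  --service-account-role-arn arn:aws:iam::<account-id>:role/AmazonEKS_EBS_CSI_DriverRole
```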
To deploy Immuta on a Kubernetes cluster using the AWS cloud provider, you can largely follow the Kubernetes Helm installation guide. The only deviations from that guide are in the custom values file(s) you create: incorporate the changes referenced throughout this guide, particularly in the Backups and Load Balancing sections below.
Backups
Best Practice: Use S3 for Shared Storage
On AWS, Immuta recommends that you use S3 for shared storage.
AWS IAM Best Practices
When using AWS IAM, make sure that you follow the best practices outlined here: AWS IAM Best Practices.
Best Practice: Production and Persistence
If deploying Immuta to a production environment using the built-in metadata database, it is recommended to resize the `/` partition on each node to at least 50 GB; the default size for many cloud providers is 20 GB.
To begin, you will need an IAM role that Immuta can use to access the S3 bucket from your Kubernetes cluster. There are four options for role assumption:
IAM Roles for Service Accounts (IRSA): recommended for EKS. See the eksctl sketch after this list.
kube2iam or kiam: recommended if you have other workloads running in the cluster.
Instance profile: recommended if only Immuta is running in the cluster.
AWS secret access keys: simplest setup if access keys and secrets are allowed in your environment.
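For the recommended IRSA option, the following `eksctl` sketch binds an IAM role to a Kubernetes service account. The cluster, namespace, service account, and policy names are all placeholders; the policy itself is covered in the next section:

```sh
# Illustrative: bind an IAM role carrying the S3 backup policy to the
# service account used by the Immuta deployment.
eksctl create iamserviceaccount \
  --cluster my-cluster \
  --namespace immuta \
  --name immuta \
  --attach-policy-arn arn:aws:iam::<account-id>:policy/immuta-s3-backups \
  --approve
```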
Necessary IAM Permissions
The role you choose above must have at least the following IAM permissions:
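The authoritative permission list is in the Immuta documentation for your version. As an illustration, a policy scoped to the backup bucket typically needs object read, write, and delete plus bucket listing (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::<your-backup-bucket>/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::<your-backup-bucket>"
    }
  ]
}
```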
Sample Helm Values
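For example, a sketch of backup-related values assuming S3 storage with IRSA-provided credentials; the key names here are assumptions and should be verified against your chart version's values reference:

```yaml
# Illustrative only -- verify key names against the Immuta chart's values.yaml.
backup:
  enabled: true
  type: s3
  s3:
    bucket: <your-backup-bucket>   # placeholder bucket name
    awsRegion: us-east-1           # placeholder region
```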
Load Balancing
The easiest way to expose your Immuta deployment running on Kubernetes with the AWS cloud provider is to set up nginx ingress with `serviceType: LoadBalancer` and let the chart handle creation of an ELB.
Best Practices: ELB Listeners Configured to Use SSL
For best performance and to avoid issues with web sockets, the ELB listeners need to be configured to use SSL (the TCP-level listener protocol) instead of HTTPS, since HTTP/HTTPS listeners on Classic Load Balancers do not support web sockets.
If you are using the included ingress controller, it will create a Kubernetes LoadBalancer Service to expose Immuta outside of your cluster. The following options are available for configuring the LoadBalancer Service:
`nginxIngress.controller.service.annotations`: Useful for setting options such as creating an internal load balancer or configuring TLS termination at the load balancer.
`nginxIngress.controller.service.loadBalancerSourceRanges`: Used to limit which client IP addresses can access the load balancer.
`nginxIngress.controller.service.externalTrafficPolicy`: Useful when working with Network Load Balancers on AWS. It can be set to "Local" to allow the client IP address to be propagated to the Pods.
Possible values for these various settings can be found in the Kubernetes Documentation.
If you would like to use automatic ELB provisioning, you can use the following values:
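A minimal sketch of such values; the key layout is an assumption based on the `nginxIngress.controller.service` option paths referenced above, so verify it against your chart version:

```yaml
# Illustrative: expose the ingress controller via an AWS-provisioned ELB.
nginxIngress:
  controller:
    service:
      type: LoadBalancer
```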
You can then manually edit the ELB configuration in the AWS console to use ACM TLS certificates to ensure your HTTPS traffic is secured by a trusted certificate. For instructions, see Amazon's guide on how to Configure an HTTPS Listener for Your Classic Load Balancer.
Another option is to set up nginx ingress with `serviceType: NodePort` and configure load balancers outside of the cluster. For example:
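(As above, the key layout is an assumption to be checked against your chart version.)

```yaml
# Illustrative: expose the ingress controller on node ports and attach
# an externally managed load balancer to those ports.
nginxIngress:
  controller:
    service:
      type: NodePort
```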
In order to determine the ports to configure the load balancer for, examine the Service configuration:
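A sketch, assuming the ingress controller Service is named `immuta-nginx-ingress` (run `kubectl get services` to find the actual name in your deployment):

```sh
# List each port's name and assigned NodePort for the ingress Service.
kubectl get service immuta-nginx-ingress \
  --output jsonpath='{range .spec.ports[*]}{.name}{": "}{.nodePort}{"\n"}{end}'
```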
This will print out the port name and port. For example,
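Illustrative output; actual names and port numbers will vary:

```
https: 31443
http: 31080
```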
Maintenance
An Immuta deployment on EKS has a very low maintenance burden.
Best Practices: Installation Maintenance
Immuta recommends the following basic procedures for monitoring and periodic maintenance of your installation:
Periodically examine the contents of S3 to ensure database backups exist for the expected time range (see the sketch after this list).
Ensure your Immuta installation is current and update it if it is not per the update instructions.
Be aware of the solutions to common management tasks for Kubernetes deployments.
If `kubectl` does not meet your monitoring needs, we recommend installing the Kubernetes Dashboard using the AWS-provided instructions.
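For the backup check above, a minimal sketch using the AWS CLI; the bucket name is a placeholder:

```sh
# Show the five most recent backup objects; confirm their timestamps
# cover the expected time range.
aws s3 ls s3://<your-backup-bucket>/ --recursive | sort | tail -n 5
```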
Failure Recovery Scenarios
Ensure that your Immuta deployment is taking regular backups to AWS S3.
Your Immuta deployment is highly available and resilient to failure. For some catastrophic failures, recovery from backup may be required. Below is a list of failure conditions and the steps necessary to ensure Immuta is operational.
Internal Immuta Service Failure: Because Immuta is running in a Kubernetes deployment, no action should be necessary. Should a failure occur that is not automatically resolved, follow Immuta backup restoration procedures.
EKS Cluster Failure: Should your EKS cluster experience a failure, simply create a new cluster and follow Immuta backup restoration procedures.
Availability Zone Failure: EKS, ELB, and the Immuta installation within EKS are all designed to tolerate the failure of an availability zone, so no action is needed to address an availability zone failure.
Region Failure: To provide recovery capability in the unlikely event of an AWS Region failure, Immuta recommends periodically copying database backups into an S3 bucket in a different AWS region. Should you experience a region failure, simply create a new cluster in a working region and follow Immuta backup restoration procedures.
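A minimal sketch of such a periodic copy using the AWS CLI; bucket names and regions are placeholders, and scheduled S3 cross-region replication is an alternative approach:

```sh
# Mirror backups into a bucket in a different region for disaster recovery.
aws s3 sync s3://<your-backup-bucket> s3://<your-dr-backup-bucket> \
  --source-region us-east-1 --region us-west-2
```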
See the AWS Documentation for more information on managing service limits to allow for proper disaster recovery.