Audience: System Administrators
Content Summary: By default, the Immuta Partition servers will run as the immuta
user. For clusters configured to use Kerberos, this means that you must have an immuta
principal available for Cloudera Manager to provision the service. If for some reason you do not have an immuta
principal available, you can change the user that the Immuta partition servers run as. This page describes the configuration changes that are needed to change the principal(s) that Immuta uses. The same principal can be used for both services, but that is not necessary; just make sure the configuration options are consistent across the individual services.
The Immuta Spark Partition Servers are components that run on your CDH cluster. The following sections will walk you through configuring the various CDH components so that the Spark Partition Servers can run as a non-default user.
In the configuration for the Immuta
service, make the following updates:
System User: Set to the system user that will be running Immuta.
System Group: Set to the primary group of the user that will be running Immuta.
Kerberos Principal: Set to the Kerberos principal of the user that will be running Immuta.
In the configuration for HDFS
, make the following updates:
Cluster-wide
Advanced Configuration Snippet (Safety Valve) for core-site.xml
:
Set immuta.spark.partition.generator.user
to the principal configured as the Kerberos Principal in the Immuta
service.
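For example, if the partition servers run as a hypothetical svc_immuta principal, the safety-valve entry (viewed as XML) might look like the following sketch; the principal name is an assumption for illustration:

```xml
<!-- Cluster-wide safety valve for core-site.xml -->
<property>
  <name>immuta.spark.partition.generator.user</name>
  <value>svc_immuta</value>
  <final>true</final>
</property>
```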
The Immuta Web Service uses the configured Kerberos principal to impersonate users when running queries against various Kerberos-enabled databases. If you are using a non-default Kerberos principal for the Immuta Web Service, be sure to update the following values.
In the configuration for HDFS
, enter the following for Cluster-wide Advanced Configuration Snippet (Safety Valve) for core-site.xml
:
hadoop.proxyuser.<immuta service principal>.hosts
Description: The hosts from which the Immuta service principal is allowed to impersonate users. Make sure to enter the appropriate principal in place of <immuta service principal>
.
Value: *
hadoop.proxyuser.<immuta service principal>.users
Description: The configuration that allows the Immuta service principal to proxy end-users. Make sure to enter the appropriate principal in place of <immuta service principal>
.
Value: *
hadoop.proxyuser.<immuta service principal>.groups
Description: The configuration that allows the Immuta service principal to proxy user groups. Make sure to enter the appropriate principal in place of <immuta service principal>
.
Value: *
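Viewed as XML, and assuming a hypothetical Web Service principal named immuta_web, the three safety-valve entries above would look like this sketch:

```xml
<property>
  <name>hadoop.proxyuser.immuta_web.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.immuta_web.users</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.immuta_web.groups</name>
  <value>*</value>
</property>
```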
If the principal for the Immuta Web Service is different from the principal used by the Immuta Partition Server, then be sure to add the Web Service principal to immuta.permission.users.to.ignore
. In the HDFS
configuration section for NameNode Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml
ensure that the user principal running the Immuta Web Service is included in the comma-separated list of users set for immuta.permission.users.to.ignore
.
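As a sketch, if the Web Service runs as a hypothetical immuta_web principal, the hdfs-site.xml safety valve would include it in the comma-separated list (the other user names here are illustrative placeholders):

```xml
<property>
  <name>immuta.permission.users.to.ignore</name>
  <value>immuta,hdfs,hive,impala,immuta_web</value>
</property>
```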
Audience: System Administrators
Content Summary: Installation of the components necessary for the use of the Immuta Hadoop Integration depends on the version of Hadoop. This section contains guides for installing Cloudera Hadoop.
Prerequisites: Outlines the prerequisites required to successfully use installation components on your CDH cluster.
Performance Optimization: Describes strategies for improving performance of Immuta's NameNode plugin on CDH clusters.
Run as a Non-Default User: By default, the Immuta Partition servers will run as the immuta
user. For clusters configured to use Kerberos, this means that you must have an immuta
principal available for Cloudera Manager to provision the service. If for some reason you do not have an immuta
principal available, you can change the user that the Immuta partition servers run as. This page describes the configuration changes that are needed to change the principal(s) that Immuta uses.
Log Analysis: Details how to use the immuta_hdfs_log_analyzer
tool to troubleshoot slowdowns in your CDH cluster.
Upgrading: Details how to upgrade the Immuta Parcel and Service on your CDH cluster.
Disable or Uninstall: Outlines steps to effectively disable and/or uninstall the Immuta components from your CDH cluster.
Audience: System Administrators
Content Summary: The Immuta CDH integration installation consists of the following components:
Immuta NameNode plugin
Immuta Hadoop Filesystem plugin
Immuta Spark 1.6 Partition Service (DEPRECATED)
Immuta Spark 2 Partition Service
This page outlines the prerequisites required to successfully use these components on your CDH cluster.
This installation process has been verified to work with the following CDH versions:
5.9.x
5.12.x
5.13.x
5.14.x
5.15.x
5.16.x
6.1.x
6.2.x
6.3.x
Before installing Immuta onto your CDH cluster, the following steps need to be completed:
Immuta requires that HDFS Extended Attributes are enabled.
Under the HDFS service of Cloudera Manager, Configuration tab, search for key:
and ensure the checkbox is checked.
An Immuta System API key will also need to be generated for the NameNode to communicate securely with the Immuta Web Service. You can generate the System API key via the Immuta Configuration UI.
Before installing the Immuta software on your CDH cluster, it is recommended that you export your cluster configuration via the Cloudera Manager API and send a copy to Immuta Support. This will enable our support team to assist you with specific configurations that may be required for your environment. Knowing the configuration and layout of your cluster will also help the support team to expedite troubleshooting and resolution of any potential issues that may arise with your Immuta installation.
Before sending the exported JSON file, it is recommended to look over the configurations and redact any information that you consider too sensitive to share externally. Cloudera Manager will automatically redact known passwords; however, there may be sensitive values embedded in your configuration that Cloudera Manager does not know about. An example of this may be configuration of a third-party cluster application that requires passwords or API keys in its cluster configuration.
Begin by downloading the Immuta Parcel and CSD for your Cloudera Distribution. A complete installation will require 3 files:
IMMUTA-<VERSION>_<DATESTAMP>-<CDH_VERSION>-spark2-public-<LINUX_DISTRIBUTION>.parcel
The .parcel
file is the Immuta CDH parcel.
For versions that support it, Spark 1 is included in this parcel.
IMMUTA-<VERSION>_<DATESTAMP>-<CDH_VERSION>-spark2-public-<LINUX_DISTRIBUTION>.parcel.sha
The .parcel.sha
file contains a SHA1 hash of the Immuta .parcel
file for integrity verification by Cloudera Manager.
IMMUTA-<VERSION>_<DATESTAMP>-<CDH_VERSION>-spark2-public.jar
The .jar
file is the Custom Service Descriptor (CSD) for the Immuta service in Cloudera Manager.
The variables above are defined as:
<VERSION>
is the Immuta release version, e.g. "2024.1.13"
<DATESTAMP>
is the compile date in the format "YYYYMMDD"
<CDH_VERSION>
must match your CDH version, e.g. "5.16.2"
<LINUX_DISTRIBUTION>
is either "el7" or "el6".
These artifacts are available for download at https://archives.immuta.com. If you are prompted to log in and need basic authentication credentials, contact your Immuta support professional.
Parcel, SHA, and CSD downloads: https://archives.immuta.com/hadoop/cdh/
All artifacts are organized into subdirectories of the form [Immuta Release]/[CDH Version].
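Putting the naming scheme together, the artifact URLs can be constructed as in the following sketch; the release version, datestamp, and CDH version here are example values, not a real artifact listing:

```shell
# Example values only -- substitute your actual release and CDH version.
VERSION="2024.1.13"
DATESTAMP="20240315"      # hypothetical compile date
CDH_VERSION="5.16.2"
DISTRO="el7"

PARCEL="IMMUTA-${VERSION}_${DATESTAMP}-${CDH_VERSION}-spark2-public-${DISTRO}.parcel"
CSD="IMMUTA-${VERSION}_${DATESTAMP}-${CDH_VERSION}-spark2-public.jar"
BASE="https://archives.immuta.com/hadoop/cdh/${VERSION}/${CDH_VERSION}"

# Print the three URLs to fetch (download each with: curl -u <user> -O "<url>"):
echo "${BASE}/${PARCEL}"
echo "${BASE}/${PARCEL}.sha"
echo "${BASE}/${CSD}"
```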
Audience: System Administrators
Content Summary: This page describes strategies for improving performance of Immuta's NameNode plugin on CDH clusters.
Immuta grants or denies access inside a locked operation in the NameNode when enforcing Immuta policies. This section contains configuration options and strategies to prevent RPC queue latency, waiting threads, and other issues during cluster-wide file permission checks.
Best Practice: NameNode Plugin Configuration
Immuta recommends only configuring the NameNode Plugin to check permissions on the NameNode(s) that oversee the data that you want to protect.
For example, say that you currently have a federated HDFS NameNode architecture with three Nameservices - nameservice1
, nameservice2
, and nameservice3
. The HDFS federation in this example is distributed across these nameservices as described below.
nameservice1
: /data
, /tmp/
, /user
nameservice2
: /data2
nameservice3
: /data3
Suppose you know that all the sensitive data that you want to protect with Immuta is located under /data3
. To achieve optimum performance in this case, add the Immuta NameNode-only configuration (hdfs-site.xml
) to the role config group for nameservice3
, and leave it out of nameservice1
and nameservice2
. The public / client Immuta configuration (core-site.xml
) should still be configured cluster-wide. See Immuta CDH Integration Installation for more details about these configuration groupings.
One caveat to take into consideration here is that Immuta's Vulcan service requires the Immuta NameNode Plugin to oversee user credentials that are stored in /user/<username>
by default. Vulcan also stores some configuration under /user/immuta
by default. This is a problem because /user
resides under nameservice1
, and the goal is to only operate the Immuta NameNode Plugin on nameservice3
.
A simple solution to this problem is to create a new directory for these credentials, /data3/immuta_creds
for example, and configure the NameNode Plugin and the Vulcan service to use this directory instead of /user
. Changing this requires the configuration modifications listed below.
HDFS - Cluster-wide Advanced Configuration Snippet (Safety Valve) for core-site.xml
Set immuta.generated.api.key.dir
and immuta.credentials.dir
to /data3/immuta_creds
.
Immuta - Immuta Spark 2 Vulcan Server Advanced Configuration Snippet (Safety Valve) for session/generator.xml
Set immuta.meta.store.token.dir
to /data3/immuta_creds/immuta/tokens
.
Set immuta.meta.store.remote.token.dir
to /data3/immuta_creds/immuta/remotetokens
.
Set immuta.configuration.id.file.config
to hdfs://nameservice3/data3/immuta_creds/immuta/config_id
.
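In XML form, the safety-valve entries above would look like the following sketch, using the example paths from this scenario:

```xml
<!-- Cluster-wide safety valve for core-site.xml -->
<property>
  <name>immuta.generated.api.key.dir</name>
  <value>/data3/immuta_creds</value>
</property>
<property>
  <name>immuta.credentials.dir</name>
  <value>/data3/immuta_creds</value>
</property>

<!-- Vulcan server safety valve for session/generator.xml -->
<property>
  <name>immuta.meta.store.token.dir</name>
  <value>/data3/immuta_creds/immuta/tokens</value>
</property>
<property>
  <name>immuta.meta.store.remote.token.dir</name>
  <value>/data3/immuta_creds/immuta/remotetokens</value>
</property>
<property>
  <name>immuta.configuration.id.file.config</name>
  <value>hdfs://nameservice3/data3/immuta_creds/immuta/config_id</value>
</property>
```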
Note that you will need to manually create the /data3/immuta_creds/immuta
directory and set the permissions such that only the immuta
user can read / write in that directory. The /data3/immuta_creds
directory should also be world writable to allow user directories to be created the first time that they interact with Immuta on the cluster.
immuta.permission.paths.to.enforce
Description: A comma delimited list of paths to enforce when checking permissions on HDFS files. This ensures that API calls to the Immuta web service are only made when permissions are being checked on the paths that you specify in this configuration. This also means that you can only create data sources against data that lives under these paths, and the Immuta Workspace must be under one of these paths as well. Alternatively, immuta.permission.paths.to.ignore
can be set to a list of paths that you know do not contain Immuta data - then API calls will never be made against those paths. Setting both immuta.permission.paths.to.ignore
and immuta.permission.paths.to.enforce
properties at the same time is unsupported.
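Continuing the federated example above, where all Immuta-protected data lives under /data3, the enforcement list could be set as in this sketch (include only the paths your deployment actually needs):

```xml
<property>
  <name>immuta.permission.paths.to.enforce</name>
  <value>/data3</value>
</property>
```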
immuta.permission.groups.to.enforce
Description: A comma delimited list of groups that must go through Immuta when checking permissions on HDFS files. If this configuration item is set, then fallback authorizations will apply to everyone by default, unless they are in a group on this list. If a user is on both the enforce list and the ignore list, then their permissions will be checked with Immuta (i.e., the enforce configuration item takes precedence). This may improve NameNode performance by only making permission check API calls for the subset of users who fall under Immuta enforcement.
immuta.permission.source.cache.enabled
Description: Denotes whether a background thread should be started to periodically cache paths from Immuta that represent Immuta-protected paths in HDFS. Enabling this increases NameNode performance because it prevents the NameNode plugin from calling the Immuta web service for paths that do not back HDFS data sources. For performance optimization, it is best to enable this cache to act as a "backup" to immuta.permission.paths.to.enforce
.
immuta.permission.source.cache.enabled
Description: The time between calls to sync/cache all paths that back Immuta data sources in HDFS. You can increase this value to further reduce the number of API calls made from the NameNode.
immuta.permission.workspace.base.path.override
Description: This configuration item can be set so that the NameNode does not have to retrieve the Immuta HDFS workspace base path periodically from the Immuta API.
There are also a wide variety of cache and network settings that can be used to fine-tune performance. You can refer to the Configuration Guide for details on each of these items.
immuta.permission.source.cache.timeout.seconds
immuta.permission.source.cache.retries
immuta.permission.request.initial.delay.milliseconds
immuta.permission.request.socket.timeout
immuta.no.data.source.cache.timeout.seconds
immuta.hive.impala.cache.timeout.seconds
immuta.canisee.cache.timeout.seconds
immuta.data.source.cache.timeout.seconds
immuta.canisee.metastore.cache.timeout.seconds
immuta.canisee.non.user.cache.timeout.seconds
immuta.canisee.num.retries
immuta.project.user.cache.timeout.seconds
immuta.project.cache.timeout.seconds
immuta.project.forbidden.cache.timeout.seconds
immuta.permission.system.details.retries
See Immuta Log Analysis Tool for CDH Deployments for instructions on how to identify performance issues in the Immuta NameNode Plugin.
Audience: System Administrators
Content Summary: This page details how to upgrade the Immuta Parcel and Service on your CDH cluster.
Prerequisites: Follow the to prepare for upgrading.
Transfer the Immuta .parcel
and its associated .parcel.sha
to your Cloudera Manager node and place them in /opt/cloudera/parcel-repo
. Once copied, ensure the files have owner cloudera-scm
and group cloudera-scm
.
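The .parcel.sha file holds the SHA1 hash that Cloudera Manager checks. You can verify a parcel yourself before copying it into the repo; this self-contained sketch uses a stand-in file to show the mechanics:

```shell
# Demonstrate the integrity check using a stand-in for the real parcel.
cd "$(mktemp -d)"
printf 'parcel bytes\n' > demo.parcel
sha1sum demo.parcel | awk '{print $1}' > demo.parcel.sha

# The .sha file contains only the hash, so pair it with the file name
# in the "HASH  FILENAME" format that sha1sum -c expects.
echo "$(cat demo.parcel.sha)  demo.parcel" | sha1sum -c -   # prints: demo.parcel: OK
```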
Once the Immuta parcel and its SHA (hash) file are in the parcel repo, you can distribute and activate the updated parcel. (Activating the new parcel will automatically deactivate an older version.) To do so,
In Cloudera Manager, select the Parcels icon in the upper right corner.
Click Check for New Parcels.
Make sure the location filter has your on-cluster parcel repo selected.
Locate the IMMUTA
parcel, and then find the row corresponding to the version you are upgrading to. Click Distribute.
Wait for the parcel to finish distribution. Once finished, the action button for that row should say Activate.
Click the Activate button to activate the parcel.
You have successfully upgraded your Immuta parcel.
The first step in upgrading your Immuta Partition Service CSD is copying the .jar
file to your Cloudera Manager node, placing it in /opt/cloudera/csd
. The file must have ownership cloudera-scm
and group cloudera-scm
.
You will need to restart Cloudera Manager in order for the CSD to be picked up:
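On systemd-based hosts this is typically done with the following command (assumes the standard Cloudera Manager service name):

```shell
sudo systemctl restart cloudera-scm-server
```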
Finally, restart the IMMUTA service in Cloudera Manager.
Audience: System Administrators
Content Summary: This page details how to use the
immuta_hdfs_log_analyzer
tool to troubleshoot slowdowns in your CDH cluster.
Sub-optimal configuration of the Immuta HDFS NameNode plugin may cause cluster-wide slowdowns under certain conditions. The NameNode plugin contains a variety of cache settings to limit the number of network calls that occur within the NameNode's locked permission checking operation. If these settings are configured properly, there will be little to no impact on the performance of HDFS operations.
You can use the immuta_hdfs_log_analyzer
command-line utility to track the number of API calls coming from NameNode plugin to the Immuta Web Service.
You can download the log analysis tool:
It accepts the following options:
START_TIME (-s
, --start-time
): Timestamp for the beginning of the period to analyze.
END_TIME (-e
, --end-time
): Timestamp for the end of the period to analyze.
GRANULARITY (-g
, --granularity
): Defines time buckets for analysis. Can be MINUTES
, HOURS
or DAYS
.
TIME_FORMAT (-t
, --time-format
): The format to use for timestamps. This should match the timestamp format in the Immuta Web Service logs.
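Drawing on the options above, an invocation might look like the following sketch; the log file path and the timestamp format are assumptions about your environment, not fixed values:

```shell
./immuta_hdfs_log_analyzer \
  --start-time "2024-03-01 00:00:00" \
  --end-time   "2024-03-02 00:00:00" \
  --granularity HOURS \
  --time-format "%Y-%m-%d %H:%M:%S" \
  /var/log/immuta/web-service.log
```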
If you are able to correlate time buckets from this tool's output to periods of slow cluster performance, you may need to adjust configuration for the Immuta HDFS NameNode plugin.
Audience: System Administrators
Content Summary: This page outlines steps to effectively disable and/or uninstall the Immuta components from your CDH cluster. The disable portions of this document detail how to deactivate the Immuta components without removing the components. For a complete uninstall, follow these steps and then proceed to remove all Immuta-related settings, configuration, and any Immuta Kerberos principals from your cluster.
These changes will require a cluster restart
The changes detailed below affect HDFS; therefore, a cluster restart is required to fully implement these changes.
The Immuta Authorization Provider must be removed from the NameNode configuration.
Navigate to the Cloudera Manager Overview page.
Click on the HDFS service.
Click on the Configuration tab.
In the search bar, enter
Click on the minus [-] sign that appears on the right of the entry corresponding to dfs.namenode.authorization.provider.class
. This will restore to the CDH default.
Click the Save Changes button at the bottom of the screen.
Warning
You may have non-default settings that are completely unrelated to Immuta! You may also have non-default settings that are currently related to Immuta that will need to be altered to another non-default custom setting specific to your installation. Your CDH Admins will know which settings this applies to. Do not blanket revert settings to their defaults unless you are certain the CDH defaults are appropriate for your cluster.
To uninstall, instead of only reverting the Immuta Authorization Provider, all Immuta customized settings can be removed from the NameNode configuration.
Navigate to the Cloudera Manager Overview page.
Click on the HDFS service.
Click on the Configuration tab.
Near the bottom of the left side navigation pane, select Non-Default. This will list all settings that are not presently set to the defaults.
All settings under
can be reverted. Click the minus [-] sign that appears on the right of the individual entries, or - if you are certain your cluster should operate on the CDH defaults - all settings can be reverted by clicking the revert arrow icon to the right of HDFS (Service-Wide).
All settings under
can be reverted. Click the minus [-] sign that appears on the right of the individual entries, or - if you are certain your cluster should operate on the CDH defaults - all settings can be reverted by clicking revert arrow icon to the right of NameNode Default Group.
Click the Save Changes button at the bottom of the screen.
If fully uninstalling, Immuta's components need to be removed from YARN's classpath.
These changes will require a cluster restart
The changes detailed below affect YARN; therefore, a cluster restart is required to fully implement these changes.
Navigate to the YARN service.
Click on the Configuration tab.
In the search bar, enter
Click on the minus [-] sign that appears on the right of any entries that reference IMMUTA
. For example, there may be records for jars such as immuta-group-mapping.jar
or immuta-hadoop-filesystem.jar
or similar.
Click the Save Changes button at the bottom of the screen.
These settings may be applied either system-wide (via core-site.xml
) or to specific target systems such as Hive or Impala. Be sure to locate all setting locations.
These changes will require a Hive service restart
The Hive service will need to be restarted for the changes below to take effect.
Navigate to the Hive service.
Click on the Configuration tab.
In the search bar, enter
Click on the minus [-] sign that appears to the right of the entry corresponding to hadoop.security.group.mapping
. This will restore to the CDH default.
Click the Save Changes button at the bottom of the screen.
Warning
You may have non-default settings that are completely unrelated to Immuta! You may also have non-default settings that are currently related to Immuta that will need to be altered to another non-default custom setting specific to your installation. Your CDH Admins will know which settings this applies to. Do not blanket revert settings to their defaults unless you are certain the CDH defaults are appropriate for your cluster.
Navigate to the Hive service.
Click on the Configuration tab.
Near the bottom of the left side navigation pane, select Non-Default. This will list all settings that are not presently set to the defaults.
All settings under
can be reverted. Click the minus [-] sign that appears on the right of the individual entries, or - if you are certain your cluster should operate on the CDH defaults - all settings can be reverted by clicking the revert arrow icon to the right of HiveServer2 Default Group.
Click the Save Changes button at the bottom of the screen.
These settings may be applied either system-wide (via core-site.xml
) or to specific target systems such as Hive or Impala. Be sure to locate all setting locations.
These changes will require an Impala service restart
The Impala service will need to be restarted in order for the changes below to take effect.
Navigate to the Impala service.
Click on the Configuration tab.
In the search bar, enter
Click on the minus [-] sign that appears on the right of the entry corresponding to hadoop.security.group.mapping
. This will restore to the CDH default.
Click the Save Changes button at the bottom of the screen.
Warning
You may have non-default settings that are completely unrelated to Immuta! You may also have non-default settings that are currently related to Immuta that will need to be altered to another non-default custom setting specific to your installation. Your CDH Admins will know which settings this applies to. Do not blanket revert settings to their defaults unless you are certain the CDH defaults are appropriate for your cluster.
Navigate to the Impala service.
Click on the Configuration tab.
Near the bottom of the left side navigation pane, select Non-Default. This will list all settings that are not presently set to the defaults.
The "immuta" proxy user from
can be removed. Simply delete the "immuta=*
" (and any leading or trailing ;
) from the -authorized_proxy_user_config=
value, leaving any other values in place. It may also be done by clicking the revert arrow icon to the right of Impala (Service-Wide) if the default is appropriate.
All settings under
can be reverted. Click the minus [-] sign that appears on the right of the individual entries, or - if you are certain your cluster should operate on the CDH defaults - all settings can be reverted by clicking the revert arrow icon to the right of Impala Daemon Default Group.
If using Kerberos principal short names was only done in support of ImmutaGroupsMapping
for use in native workspaces, that setting can also be reverted. In the search bar, enter
Simply uncheck the checkbox to the left of "Impala (Service-Wide)".
Click the Save Changes button at the bottom of the screen.
These changes will require a Spark service restart
The Spark service will need to be restarted for the changes below to take effect.
Navigate to the Spark service.
Click on the Configuration tab.
In the search bar, enter
Remove any references to IMMUTA
or "immuta" in the configuration options. Particularly look for the options defined in Spark 1.6 Configuration.
Then go back to the search bar, and enter
Remove any references to IMMUTA
or "immuta" in the environment variables. Particularly look for the environment settings defined in Spark 1.6 Configuration.
Click the Save Changes button at the bottom of the screen.
If your installation leveraged the Immuta HDFS Native Workspace and ImmutaGroupsMapping
, Immuta was likely configured as a Sentry admin. When uninstalling, this can be removed.
These changes will require a Sentry service restart
The Sentry service will need to be restarted for the changes below to take effect.
Warning
You may have non-default settings that are completely unrelated to Immuta! You may also have non-default settings that are currently related to Immuta that will need to be altered to another non-default custom setting specific to your installation. Your CDH Admins will know which settings this applies to. Do not blanket revert settings to their defaults unless you are certain the CDH defaults are appropriate for your cluster.
Navigate to the Sentry service.
Click on the Configuration tab.
Near the bottom of the left side navigation pane, select Non-Default. This will list all settings that are not presently set to the defaults.
The "immuta" user can be removed from any place specified, but particularly the
should be removed. Click the minus [-] sign that appears on the right of the individual entries, or - if you are certain your cluster should operate on the CDH defaults - all settings can be reverted by clicking the revert arrow icon.
Click the Save Changes button at the bottom of the screen.
Navigate to the Cloudera Manager Overview page.
Click on the down arrow next to the IMMUTA service.
Click Stop.
Confirm that you want to stop the service.
Navigate to the Cloudera Manager Overview page.
Click on the down arrow next to the IMMUTA service.
Click Delete.
Confirm that you want to delete the service.
Complete both steps 1 and 2 in the previous "Disable" section.
You may need to restart the cluster before you can fully remove these parcels
If the parcel was in active use, a cluster restart is likely needed before Cloudera Manager will allow you to perform the following steps to remove and delete these parcels.
Navigate to the Cloudera Manager Overview page.
Click on the package icon on the top right hand side of the page near the search bar.
Find the "Distributed, Activated" Immuta Parcel(s) and click the Deactivate button.
Click Confirm.
Once deactivated, go back to the Immuta Parcel(s) and select the "down arrow" beside the "Activate" button, and select Remove from Hosts.
Click Confirm.
Once the parcel is no longer distributed, go back to the Immuta Parcel(s) and select the "down arrow" beside the "Distribute" button, and select Delete.
Click Delete.
To commit all previous settings, issue a restart of the CDH cluster.
Audience: System Administrators
Content Summary: The Immuta CDH integration installation consists of the following components:
Immuta NameNode plugin
Immuta Hadoop Filesystem plugin
Immuta Spark 2 Vulcan service
This page outlines the installation steps required to successfully deploy these components on your CDH cluster.
Prerequisites: Follow the to prepare for installation.
Begin installation by transferring the Immuta .parcel
and its associated .parcel.sha
files to your Cloudera Manager node and placing them in /opt/cloudera/parcel-repo
. Once copied, ensure the files have both their owner and group set to cloudera-scm.
Next, transfer the Immuta CSD (.jar
file) to /opt/cloudera/csd
, and ensure both its owner and group permissions are set to cloudera-scm
as well.
You will need to restart the Cloudera Manager server in order for the CSD to be picked up:
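On systemd-based hosts, for example:

```shell
sudo systemctl restart cloudera-scm-server
```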
Follow Cloudera's instructions for distributing and activating the IMMUTA parcel.
Once the parcel has been successfully activated, you can add the IMMUTA service:
From the Cloudera Manager select Add Service.
Choose Immuta.
Click Continue.
Select nodes to install the services on. Your options are:
For maximum redundancy, choose all.
Choose a single node.
Choose a few nodes. Set up a Load Balancer in front of the instances to distribute load. Contact Immuta support for more details.
Proceed to the end of the workflow.
After adding the Immuta service to your CDH cluster, there is some configuration that needs to be completed.
Warning
The following settings should only be written to the configuration on the NameNode. Setting these values on DataNodes will have security implications, so be sure that they are set in the NameNode only section of Cloudera Manager. For optimal performance, only set these configuration options in the NameNode Role Config Group that controls the namespace where Immuta data resides.
Under the HDFS service of Cloudera Manager, Configuration tab, search for key:
and, using "View as XML", add/set the value(s) similar to:
Best Practice: Configuration Values
Immuta recommends that all Immuta configuration values be marked final
.
The following configuration items should be configured for both the NameNode processes and the DataNode processes. These configurations are used both by the Immuta FileSystem and the Immuta NameNode plugin. For example:
Under the HDFS service of Cloudera Manager, Configuration tab, search for key:
and, using "View as XML", add/set the value(s) similar to:
Best Practice: Configuration Values
Immuta recommends that all Immuta configuration values be marked final
.
Make sure that user directories underneath immuta.credentials.dir
are readable only by the owner of the directory. If a user's directory does not exist, Immuta will create it and set its permissions to 700
.
You can enable TLS on the Immuta Vulcan service by configuring it to use a keystore in JKS format.
Under the Immuta service of Cloudera Manager, Configuration tab, search for key:
and, using "View as XML", add/set the value(s) similar to:
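A sketch of the safety-valve XML, using the example keystore path and passwords from the detailed explanation below (substitute your own values):

```xml
<property>
  <name>immuta.secure.partition.generator.keystore</name>
  <value>/etc/immuta/keystore.jks</value>
  <final>true</final>
</property>
<property>
  <name>immuta.secure.partition.generator.keystore.password</name>
  <value>secure_password</value>
  <final>true</final>
</property>
<property>
  <name>immuta.secure.partition.generator.keymanager.password</name>
  <value>secure_password</value>
  <final>true</final>
</property>
```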
Best Practice: Configuration Values
Immuta recommends that all Immuta configuration values be marked final
.
Detailed Explanation:
immuta.secure.partition.generator.keystore
Specifies the path to the Immuta Vulcan service keystore.
Example: /etc/immuta/keystore.jks
immuta.secure.partition.generator.keystore.password
Specifies the password for the Immuta Vulcan service keystore. This password will be a publicly available piece of information, but file permissions should be used to make sure that only the user running the service can read the keystore file.
Example: secure_password
immuta.secure.partition.generator.keymanager.password
Specifies the KeyManager password for the Immuta Vulcan service keystore. This password will be a publicly available piece of information, but file permissions should be used to make sure that only the user running the service can read the keystore file. This is not always necessary.
Example: secure_password
Best Practice: Secure Keystore with File Permissions
Immuta recommends using file permissions to secure the keystore from improper access:
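For example, assuming the keystore lives at the example path /etc/immuta/keystore.jks and the service runs as the immuta user:

```shell
sudo chown immuta:immuta /etc/immuta/keystore.jks
sudo chmod 600 /etc/immuta/keystore.jks
```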
You must also set the following properties under the following client sections:
For Spark 2, under the Immuta service of Cloudera Manager, Configuration tab, search for key:
and, using "View as XML", add/set the value(s) similar to:
Best Practice: Configuration Values
Immuta recommends that all Immuta configuration values be marked final
.
Detailed Explanation:
immuta.secure.partition.generator.keystore
Set to true to enable TLS
Default: true
You must give the service principal that the Immuta Web Service is configured to use permission to delegate in Impala. To accomplish this, add the Immuta Web Service principal to authorized_proxy_user_config
in the Impala daemon command line arguments.
Under the Impala service of Cloudera Manager, Configuration tab, search for key:
and add/set the value(s) similar to:
If the authorized_proxy_user_config
parameter is already present for other services, append the Immuta configuration value to the end:
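For example, if a hypothetical hue entry were already present, the combined value would look like the following (entries are separated by semicolons):

```
-authorized_proxy_user_config=hue=*;immuta=*
```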
No additional configuration is required.
Note: Immuta will work with any Spark 2 version you may have already installed on your cluster.
The Immuta Vulcan service requires the same system API key that is configured for the Immuta NameNode plugin. Be sure that the value of immuta.system.api.key
is consistent across your configuration.
For Spark 2, under the IMMUTA service of Cloudera Manager, Configuration section, search for key:
and, using "View as XML", add/set the value(s) similar to:
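A sketch of the entry, using a placeholder for the system API key you generated earlier (do not commit a real key to shared documentation):

```xml
<property>
  <name>immuta.system.api.key</name>
  <value><HDFS_SYSTEM_TOKEN></value>
  <final>true</final>
</property>
```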
Best Practice: Configuration Values
Immuta recommends that all Immuta configuration values be marked final
.
Though generally unnecessary given the configuration through the Application Settings of the Web UI, below is an example YAML snippet that can be used as an alternative to the Immuta Configuration UI if recommended by an Immuta representative.
Detailed Explanation:
client
kerberosRealm
Specifies the default realm to use for Kerberos authentication.
Example: YOURCOMPANY.COM
plugins
hdfsHandler
hdfsSystemToken
Token used by NameNode plugin to authenticate with the Immuta REST API. This must equal the value set in immuta.system.api.key
. Use the value of HDFS_SYSTEM_TOKEN
generated earlier.
Example: 0ec28d3f-a8a2-4960-b653-d7ccfe4803b3
kerberos
ticketRefreshInterval
Time in milliseconds to wait between kinit executions. This should be lower than the ticket refresh interval required by the Kerberos server.
Default: 43200000
username
User principal used for kinit.
Default: immuta
keyTabPath
The path to the keytab file on disk to be used for kinit.
Default: /etc/immuta/immuta.keytab
krbConfigPath
The path to the krb5 configuration file on disk.
Default: /etc/krb5.conf
krbBinPath
The path to the Kerberos installation binary directory.
Default: /usr/bin/
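Assembling the keys above into a single sketch, using the defaults and example values listed on this page (the nesting is inferred from the list above):

```yaml
client:
  kerberosRealm: YOURCOMPANY.COM
plugins:
  hdfsHandler:
    hdfsSystemToken: 0ec28d3f-a8a2-4960-b653-d7ccfe4803b3
kerberos:
  ticketRefreshInterval: 43200000
  username: immuta
  keyTabPath: /etc/immuta/immuta.keytab
  krbConfigPath: /etc/krb5.conf
  krbBinPath: /usr/bin/
```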
Most of the current Spark controls are now set through the IMMUTA
service and will be removed through the subsequent step of stopping and disabling that service. These instructions are primarily for legacy Spark 1.6 installs that may still contain settings from the .
If your cluster is configured with Kerberos, note that the default configuration expects to run Immuta services using the immuta
principal. If you need to use a different Kerberos principal, see for detailed instructions on how to configure that. After running through these steps, note that you may need to manually run the Create Immuta User Home Directory
command from the Actions
menu for the Immuta
service.
For more details on Immuta's HDFS configuration, please see .
See for details about each individual configuration value.
See for details about each individual configuration value.
The Immuta Web Service needs to be configured to support the HDFS plugin. You can set this configuration using the .
Additionally, you must upload a keytab for the immuta
user as well as a krb5.conf
configuration file to the Immuta Web Service. This can also be done via the .