Configuring Datameer


Hadoop provides scalable data storage using the Hadoop Distributed File System (HDFS) and fast parallel data processing on a fault-tolerant cluster of computers.

See Hadoop and Datameer to learn more about Hadoop and how to use it with Datameer.

Configuring Hadoop Cluster

To configure the Hadoop cluster settings in Datameer, you need to know which mode you are using and the appropriate settings for that mode, such as the file system or the root directory within HDFS. If you don't have this information readily available, contact someone within your organization who can assist you.

Datameer can be configured to use local execution, a Hadoop cluster, or a Kerberos-secured Hadoop cluster. These options are described in the sections that follow.

General configuration

  1. Click the Admin tab.
  2. Click the Hadoop Cluster tab at the left side. The current settings are shown.
     
  3. Click Edit to make changes.
  4. Click Save when you are finished making changes.

Hadoop cluster settings

To edit Hadoop Cluster settings:

  1. Click the Admin tab.
  2. Click the Hadoop Cluster tab at the left side. The current settings are shown.
  3. Click Edit to make changes.
  4. Select Hadoop Cluster for the mode.
  5. Specify the name node and add a private folder path or use impersonation if applicable.
    Whitespace isn't supported in file/folder paths. Avoid setting up Datameer storage directories (storage root path, temp paths, execution-framework-specific staging directories, etc.) with whitespace in the path.

    Impersonation notes:
    - There is a one-to-one mapping between Datameer users and OS users.
    - The OS user who launches the Datameer process must be a sudoer.
    - The Datameer temp folders, both on the local file system of the Datameer installation and in the Hadoop cluster, must have read/write access:

      • <Datameer_Installation_Folder>/tmp (Local FileSystem)
      • <Datameer_Private_Folder>/temp (Hadoop Cluster and MapR)

    Learn about Enabling Secure Impersonation with Datameer.
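    As an illustration, read/write access on the local temp folder could be granted as follows. This is a minimal sketch; the DATAMEER_HOME path is an assumption and must be adapted to your installation:

    ```shell
    # Assumption: DATAMEER_HOME points at your Datameer installation folder.
    DATAMEER_HOME="${DATAMEER_HOME:-$(mktemp -d)}"   # fallback path for illustration only
    mkdir -p "$DATAMEER_HOME/tmp"
    chmod u+rw,g+rw "$DATAMEER_HOME/tmp"             # local file system temp folder

    # On the cluster side (requires an HDFS client on the PATH), the equivalent would be:
    # hdfs dfs -chmod -R ug+rw /user/datameer/temp   # <Datameer_Private_Folder>/temp (example path)
    ```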

  6. Specify YARN settings.
  7. Use the properties text boxes to add Hadoop and custom properties. 
    Enter a name and value to add a property, or delete a name and value pair to delete the property.

    Within these edit fields, backslash (\) characters are interpreted by Datameer as escape characters rather than literal text. To produce an actual backslash character, type two backslashes:

    example.property=example text, a backslash \\ and further text

    The doubled backslash is needed because you are effectively editing a Java properties file in these fields.
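    For instance, a Windows-style path (an invented, illustrative value) would be typed with doubled backslashes and is read back by the Java properties parser with single backslashes:

    ```properties
    # As typed in the edit field:
    example.windows.path=C:\\data\\output
    # As seen by the job after properties parsing:
    #   C:\data\output
    ```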


  8. Set the logging options: select the severity of messages to be logged. The logging customization field lets you record exactly what is needed.
  9. Click Save when you are finished making changes.

Local execution settings

Not available with Enterprise.

To edit Local Execution settings:

  1. Click the Admin tab.
  2. Click the Hadoop Cluster tab at the left side. The current settings are shown.
  3. Click Edit to make changes.
  4. Use the properties text boxes to add properties. Enter a name and value to add a property, or delete a name and value pair to delete that property.
  5. Click Save when you are finished making changes.

Kerberos secured Hadoop

Kerberos authentication is available with Datameer's Advanced Governance through a plug-in. If you used Kerberos prior to Datameer 5.11, make sure to install this plug-in when upgrading.

Prerequisites:

  • Mapping the Datameer service account 1:1 between hosts (the Datameer host and all HDFS nodes)
  • Adding the Datameer service account to the HDFS supergroup
  • Utilizing the correct krb5.conf so that Datameer can communicate with the key distribution center (KDC). By default, Datameer assumes that the file is located at /etc/krb5.conf on the Datameer application server. If the file is in another location, specify the path:

    etc/das-env.sh
    export JAVA_OPTIONS="$JAVA_OPTIONS -Djava.security.krb5.conf=/home/datameer/krb5.conf"

To edit settings for a Kerberos secured Hadoop cluster:

  1. Click the Admin tab.
  2. Click the Hadoop Cluster tab at the left side. The current settings are shown.
  3. Click Edit to make changes and choose Kerberos Secured Hadoop in the Mode list if needed. Click the link in the dialog box to learn more.
  4. Specify the URI for the name node, the private folder Datameer should use, whether impersonation should be enabled, and whether to enable HDFS transparent encryption.
  5. Enter the necessary Kerberos information: the Kerberos principal for the Datameer user, the path to the keytab containing Kerberos secrets, the Kerberos principal for YARN, the Kerberos principal for HDFS, and the Kerberos principal for MapReduce.
  6. Use the Custom Properties text box to add custom properties. Enter a name and value to add a property, or delete a name and value pair to delete that property.
  7. Set the logging options: select the severity of messages to be logged. You can also write custom log settings to record exactly what is needed.
  8. Click Save when you are finished making changes.

Autoconfigure Grid Mode

This feature is not supported with Cloudera Manager Safety Valve.

In Autoconfigure Grid Mode, Datameer evaluates your cluster and automatically configures the optimal Hadoop cluster settings. You connect Datameer to your Hadoop cluster by installing a properly configured Hadoop client package on the Datameer server (using an installation manager such as Cloudera Manager or Ambari, or manually) and then providing Datameer with the path to that client.

The Autoconfigure Grid Mode reads all the cluster's property files and evaluates the following properties:

spark.master
das.yarn.available-node-memory
das.yarn.available-node-vcores
spark.yarn.am.cores
spark.yarn.am.memory
spark.yarn.am.memoryOverhead
spark.driver.cores
spark.driver.memory
spark.yarn.driver.memoryOverhead
spark.executor.cores
spark.yarn.executor.memoryOverhead
das.spark.context.max-executors
das.spark.context.auto-scale-enabled
spark.executor.memory
spark.submit.deployMode
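An evaluated configuration might resolve to values such as the following. All values here are invented examples for illustration, not recommendations; Autoconfigure derives the real ones from your cluster:

```properties
# Illustrative values only
spark.master=yarn
das.yarn.available-node-memory=16384
das.yarn.available-node-vcores=8
spark.executor.cores=2
spark.executor.memory=4g
spark.submit.deployMode=cluster
```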


  1. Go to Admin tab > Hadoop Cluster.
  2. Select Autoconfigure Cluster in the Mode field.
  3. In the Hadoop Configuration Directory field, enter the directory where the configuration files for Hadoop are located.
  4. Enter the path to the folder where Datameer puts its private information in Hadoop's file system in the Datameer Private Folder field. 
     
  5. Enter the number of concurrent jobs.
  6. Select whether to use secure impersonation. 
  7. Edit the Hadoop or custom properties as necessary.
    If the yarn.timeline-service.enabled property is set to true in the Hadoop conf files, set yarn.timeline-service.enabled=false as a custom property. (This change is not needed as of Datameer v6.1.)
     
  8. Click Save.

To update your Datameer Autoconfigure Grid Mode cluster configuration, click Edit on the Hadoop Cluster page and then click Save. After saving completes, the updated configuration is applied. New updates to the settings are not shown in conductor.log but are displayed under the Hadoop Properties label when you click Edit again for Autoconfigure Grid Mode.

Autoconfigure Grid Mode should not be used when servers in your Hadoop cluster require specific settings for specialized tasks.

MapR

To edit settings for clusters using MapR:

  1. Click the Admin tab at the top of the page.
  2. Click the Hadoop Cluster tab at the left side. The current settings are shown.
  3. Click Edit to make changes and choose MapR in the mode list.
  4. Add the cluster name and the Datameer private folder, select the check box to use Simple Impersonation (which lets Datameer submit jobs and access HDFS on behalf of the Datameer user), and set the maximum number of concurrent jobs.

    There is a one-to-one mapping between Datameer users and OS users.
    The OS user who launches the Datameer process must be a sudoer.
    The Datameer temp folders, both on the local file system of the Datameer installation and in the Hadoop cluster, must have read/write access:

      • <Datameer_Installation_Folder>/tmp (Local FileSystem)
      • <Datameer_Private_Folder>/temp (Hadoop Cluster and MapR)

    Connecting to a secure MapR cluster

    1) Obtain the MapR ticket for the user who is running the Datameer application. Execute the following command on the shell:

    maprlogin password -user <user_who_starts_datameer>

    2) Install Datameer and open <Datameer_Home>/etc/das-env.sh and add the following system property to the Java arguments:

    -Dmapr.secure.mode=true

    3) Start and configure Datameer using MapR Grid Mode.

    The option to connect using Secure Impersonation is now available.

    4) (Optional) If there is a failure in saving the configuration:

    Caused by: java.io.IOException: Can't get Master Kerberos principal for use as renewer

    Add the following custom Hadoop properties under the Hadoop Admin page: 

    yarn.resourcemanager.principal=<value>

    The value for this property can be found in the yarn-site.xml file in your Hadoop cluster configuration.

    The steps to achieve impersonation are the same as for a Kerberos-secured cluster.
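    Putting step 2 together, the edited das-env.sh entry might look like this (assuming the default das-env.sh layout shown earlier in this document):

    ```shell
    # <Datameer_Home>/etc/das-env.sh
    # Append the MapR secure mode flag to the JVM arguments:
    export JAVA_OPTIONS="$JAVA_OPTIONS -Dmapr.secure.mode=true"
    ```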

  5. If required, enter properties. Enter a name and value to add a property, or delete a name and value pair to delete that property.
  6. Set the logging options: select the severity of messages to be logged. You can also write custom log settings to record exactly what is needed.
  7. Click Save when you are finished making changes.

Configuring High Availability

HDFS NameNode (NN)

Setting up an HA-enabled Hadoop cluster with Datameer is almost the same as setting up an ordinary Hadoop cluster, with the following customizations:

  1. Specify the NameService with which the Datameer instance should work in the HDFS NameNode text box: hdfs://nameservice1
  2. Specify the following Hadoop properties in the Custom Properties field:

    Custom Properties
    ### HDFS Name Node (NN) High Availability (HA) ###
    dfs.nameservices=nameservice1
    dfs.ha.namenodes.nameservice1=nn1,nn2
    dfs.namenode.rpc-address.nameservice1.nn1=<server-address>:8020
    dfs.namenode.rpc-address.nameservice1.nn2=<server-address>:8020
    dfs.client.failover.proxy.provider.nameservice1=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider

    If HDFS HA is configured for automatic failover by using Quorum Journal Manager (QJM), you need to add the following additional custom properties:

    ### HDFS HA Automatic Failover
    # By using the Quorum Journal Manager (QJM)
    dfs.ha.automatic-failover.enabled=true
    ha.zookeeper.quorum=<zookeeperHost1>.<domain>.<tld>:2181,<zookeeperHostn>.<domain>.<tld>:2181
  3. Check the current NameNode (NN) setting within the database:

    mysql -udap -pdap dap -Bse "SELECT uri FROM data" | cut -d"/" -f3 | sort | uniq

    The command above should have only one result, the former <host>.<domain>.<tld>:<port> value configured under Admin tab > Hadoop Cluster > Storage Settings > HDFS NameNode.

  4. Update the paths to the new location in the Datameer DB:

    ./bin/update_paths.sh hdfs://<old.namenode>:8020/<root-path> hdfs://nameservice1/<root-path>
  5. Check to ensure the new NameNode has been applied to the database and that the path is correct:

    mysql -udap -pdap dap -Bse "SELECT uri FROM data" | cut -d"/" -f3,4,5 | sort | uniq

    The command above should return only one result: the virtual NameNode value, including the path configured under Admin tab > Hadoop Cluster > Storage Settings > Datameer Private Folder.

YARN

Before implementing configuration changes, check your current cluster setup for the correct cluster-id. You can also review the description for each setting under ResourceManager High Availability before configuration.

  1. Specify the resource manager with which the Datameer instance should work in the Yarn Resource Manager Address field: yarnRM.

     

      YARN Application Classpath is a comma-separated list of CLASSPATH entries.


  2. Specify the following Hadoop properties in the Custom Properties field:

    Custom Properties
    ### Resource Manager (RM) YARN High Availability (HA) ###
    yarn.resourcemanager.cluster-id=yarnRM
    yarn.resourcemanager.ha.enabled=true
    yarn.resourcemanager.ha.rm-ids=rm1,rm2
    yarn.resourcemanager.recovery.enabled=true
    yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore
    yarn.resourcemanager.zk-address=<server-address>:2181
    ## RM1 ##
    yarn.resourcemanager.hostname.rm1=<server>
    yarn.resourcemanager.address.rm1=<server-address>:8032
    yarn.resourcemanager.scheduler.address.rm1=<server-address>:8030
    yarn.resourcemanager.webapp.address.rm1=<server-address>:8088
    yarn.resourcemanager.resource-tracker.address.rm1=<server-address>:8031
    ## RM2 ##
    yarn.resourcemanager.hostname.rm2=<server>
    yarn.resourcemanager.address.rm2=<server-address>:8032
    yarn.resourcemanager.scheduler.address.rm2=<server-address>:8030
    yarn.resourcemanager.webapp.address.rm2=<server-address>:8088
    yarn.resourcemanager.resource-tracker.address.rm2=<server-address>:8031
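    For reference, a typical comma-separated value for the YARN Application Classpath field looks like the following. The entries below mirror common Hadoop defaults and are assumptions; check your cluster's yarn-site.xml for the actual value:

    ```properties
    yarn.application.classpath=$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$HADOOP_YARN_HOME/share/hadoop/yarn/*,$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*
    ```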

Using Custom Properties

Custom properties consist of a name (key) and value pair separated by '='. These properties are used to configure Hadoop jobs.
For example, you can specify the output compression codec for jobs by entering mapred.output.compression.codec=org.apache.hadoop.io.compress.DefaultCodec into the custom property field.

Datameer sets a group of default properties for Hadoop jobs that are not visible in the user interface but can be overridden there. You can find these properties inside the conf folder:

  • das-common.properties, used locally and on the cluster
  • das-job.properties, used on the cluster
  • das-conductor.properties, used only locally

There are some additional Datameer specific properties as well. These are: