This article explains how to find the value of HADOOP_CONF_DIR on a cluster; it may be a useful reference for anyone hitting the same problem.

Problem Description

I have set up a cluster (YARN) using Ambari with 3 VMs as hosts.

Where can I find the value for HADOOP_CONF_DIR?

# Run on a YARN cluster
# (--master can also be yarn-client for client mode)
export HADOOP_CONF_DIR=XXX
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn-cluster \
  --executor-memory 20G \
  --num-executors 50 \
  /path/to/examples.jar \
  1000
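On an Ambari-managed (HDP-style) cluster, the Hadoop client configuration is typically deployed to /etc/hadoop/conf on each host. A quick way to confirm a candidate directory is to look for core-site.xml in it; the sketch below checks the Ambari default plus a couple of assumed manual-install locations:

```shell
# find_hadoop_conf: print each candidate directory that contains
# core-site.xml, the marker file of a Hadoop config directory.
find_hadoop_conf() {
  for d in "$@"; do
    if [ -f "$d/core-site.xml" ]; then
      echo "$d"
    fi
  done
}

# /etc/hadoop/conf is the usual Ambari-managed location; the second
# path is an assumption covering a manual install under /usr/local.
find_hadoop_conf /etc/hadoop/conf /usr/local/hadoop/etc/hadoop
```

Whatever directory this prints is the value to export as HADOOP_CONF_DIR before running spark-submit.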

Recommended Answer

Install Hadoop as well. In my case I've installed it in /usr/local/hadoop.

Set up the Hadoop environment variables:

export HADOOP_INSTALL=/usr/local/hadoop

Then set the conf directory:

export HADOOP_CONF_DIR=$HADOOP_INSTALL/etc/hadoop
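Exports set at a prompt only last for the current shell. To make the two variables survive new sessions, a common approach is to append them to ~/.bashrc as well (a minimal sketch, using the /usr/local/hadoop paths from the answer above):

```shell
# Set the variables for the current shell...
export HADOOP_INSTALL=/usr/local/hadoop
export HADOOP_CONF_DIR="$HADOOP_INSTALL/etc/hadoop"

# ...and persist them for future shells by appending to ~/.bashrc.
# The quoted 'EOF' keeps $HADOOP_INSTALL unexpanded in the file.
cat >> ~/.bashrc <<'EOF'
export HADOOP_INSTALL=/usr/local/hadoop
export HADOOP_CONF_DIR=$HADOOP_INSTALL/etc/hadoop
EOF
```

After this, new login shells will have HADOOP_CONF_DIR set, so spark-submit can locate the YARN configuration without any per-session setup.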

