I am trying to run my application in yarn-cluster mode. Here is the setup in the shell script:

spark-submit --class "com.Myclass" \
  --num-executors 2 \
  --executor-cores 2 \
  --master yarn \
  --supervise \
  --deploy-mode cluster \
  ../target/


In addition, I get the following error. These are the error details from the YARN logs for the application ID:

INFO : org.apache.spark.deploy.yarn.ApplicationMaster - Registered signal handlers for [TERM, HUP, INT]
DEBUG: org.apache.hadoop.util.Shell - Failed to detect a valid hadoop home directory
java.io.IOException: HADOOP_HOME or hadoop.home.dir are not set.
    at org.apache.hadoop.util.Shell.checkHadoopHome(Shell.java:307)
    at org.apache.hadoop.util.Shell.<clinit>(Shell.java:332)
    at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:79)
    at org.apache.hadoop.yarn.conf.YarnConfiguration.<clinit>(YarnConfiguration.java:590)
    at org.apache.spark.deploy.yarn.YarnSparkHadoopUtil.newConfiguration(YarnSparkHadoopUtil.scala:62)
    at org.apache.spark.deploy.SparkHadoopUtil.<init>(SparkHadoopUtil.scala:52)
    at org.apache.spark.deploy.yarn.YarnSparkHadoopUtil.<init>(YarnSparkHadoopUtil.scala:47)
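
(For reference, the log above can be pulled with the standard YARN CLI; the application ID below is only a placeholder.)

# Retrieve the aggregated container logs for the failed run
yarn logs -applicationId <application_id>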


I tried modifying spark-env.sh as shown below, and I can see HADOOP_HOME being logged, but I still get the above error. These are the entries I added to spark-env.sh:

export HADOOP_HOME="/usr/lib/hadoop"
echo "&&&&&&&&&&&&&&&&&&&&&& HADOOP HOME "
echo "$HADOOP_HOME"
export HADOOP_CONF_DIR="$HADOOP_HOME/etc/hadoop"
echo "&&&&&&&&&&&&&&&&&&&&&& HADOOP_CONF_DIR "
echo "$HADOOP_CONF_DIR"


When running spark-submit I can see HADOOP_HOME being logged, but it still complains that HADOOP_HOME is not set.

Best Answer

In my spark-env.sh it looks slightly different:

# Make Hadoop installation visible
export HADOOP_HOME=${HADOOP_HOME:-/usr/hdp/current/hadoop-client}
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-/etc/hadoop/conf}


Maybe this will help you. Remember to adjust the paths.
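
As a quick sanity check after editing spark-env.sh, something like the following can confirm that both variables resolve to real locations before submitting again (the spark-env.sh path used here is an assumption; adjust it to your installation):

# Source the edited file and verify the paths exist (file location is an assumption)
source /usr/hdp/current/spark-client/conf/spark-env.sh
echo "HADOOP_HOME=$HADOOP_HOME"
echo "HADOOP_CONF_DIR=$HADOOP_CONF_DIR"
test -d "$HADOOP_HOME" && test -f "$HADOOP_CONF_DIR/core-site.xml" \
  && echo "Hadoop paths look valid" || echo "Check the paths above"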

Related question on Stack Overflow: apache-spark - HADOOP_HOME not found when running a Spark application in yarn-cluster mode: https://stackoverflow.com/questions/35284631/
