This article looks at how to deal with the "Hive on Spark: Failed to create Spark client" error. The question and recommended answer below should be a useful reference for anyone facing the same problem.

Problem Description

I'm trying to make Hive 2.1.1 on Spark 2.1.0 work on a single instance. I'm not sure that's the right approach. Currently I only have one instance, so I can't build a cluster.

When I run any insert query in Hive, I get a "Failed to create Spark client" error.

I'm afraid I haven't configured things correctly, since I can't find any Spark logs under hdfs dfs -ls /spark/eventlog. Here's the part of my hive-site.xml that relates to Spark and YARN:

 <property>
     <name>hive.exec.stagingdir</name>
     <value>/tmp/hive-staging</value>
 </property>

 <property>
     <name>hive.fetch.task.conversion</name>
     <value>more</value>
 </property>

 <property>
     <name>hive.execution.engine</name>
     <value>spark</value>
 </property>

 <property>
     <name>spark.master</name>
     <value>spark://ThinkPad-W550s-Lab:7077</value>
 </property>

 <property>
     <name>spark.eventLog.enabled</name>
     <value>true</value>
 </property>

 <property>
     <name>spark.eventLog.dir</name>
     <value>hdfs://localhost:8020/spark/eventlog</value>
 </property>

 <property>
     <name>spark.executor.memory</name>
     <value>2g</value>
 </property>

 <property>
     <name>spark.serializer</name>
     <value>org.apache.spark.serializer.KryoSerializer</value>
 </property>

 <property>
     <name>spark.home</name>
     <value>/home/server/spark</value>
 </property>

 <property>
     <name>spark.yarn.jar</name>
     <value>hdfs://localhost:8020/spark-jars/*</value>
 </property>



1) Since I didn't configure the fs.default.name value in Hadoop, can I just use hdfs://localhost:8020 as the file system path in the config file, or should I change the port to 9000? (I get the same error when I change 8020 to 9000.)
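
For reference, that key lives in core-site.xml, where fs.defaultFS is the current name of the deprecated fs.default.name property. A minimal, illustrative entry (assuming the NameNode really does listen on port 8020) would be:

 <property>
     <name>fs.defaultFS</name>
     <value>hdfs://localhost:8020</value>
     <description>NameNode address; every hdfs:// URI in hive-site.xml must use the same host and port.</description>
 </property>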

2) I start Spark by start-master.sh and start-slave.sh spark://ThinkPad-W550s-Lab:7077. Is that correct?
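
(For context: start-master.sh and start-slave.sh launch Spark's standalone cluster manager, which is what a spark://host:7077 master URL refers to. The spark.yarn.jar property in the configuration above, by contrast, only applies when Spark runs on YARN.)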

3) The values of yarn.scheduler.maximum-allocation-mb and yarn.nodemanager.resource.memory-mb are both far larger than spark.executor.memory.
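
For illustration only (the sizes below are assumptions, not values from the original post), a yarn-site.xml along these lines would leave headroom for the 2g executors configured above, since each executor container needs some memory overhead on top of spark.executor.memory:

 <property>
     <name>yarn.nodemanager.resource.memory-mb</name>
     <value>8192</value>
     <description>Example value: total memory YARN may allocate on this node.</description>
 </property>

 <property>
     <name>yarn.scheduler.maximum-allocation-mb</name>
     <value>8192</value>
     <description>Example value: largest single container YARN will grant.</description>
 </property>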

4) How can I fix the "Failed to create Spark client" error? Thanks a lot!

Recommended Answer

In my case, the setting of the spark.yarn.appMasterEnv.JAVA_HOME property was the problem.

The fix:

 <property>
     <name>spark.executorEnv.JAVA_HOME</name>
     <value>${HADOOP CLUSTER JDK PATH}</value>
     <description>Must be the Hadoop cluster's JDK path.</description>
 </property>

 <property>
     <name>spark.yarn.appMasterEnv.JAVA_HOME</name>
     <value>${HADOOP CLUSTER JDK PATH}</value>
     <description>Must be the Hadoop cluster's JDK path.</description>
 </property>
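
One practical note: hive-site.xml is read when a new Hive session starts, so after adding these properties you would typically restart HiveServer2 (or at least open a fresh Hive CLI session) before retrying the insert query. The value substituted for ${HADOOP CLUSTER JDK PATH} must be the JDK path as it exists on the cluster nodes, not on the client machine, which is what the descriptions above are warning about.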

This concludes the article on the "Hive on Spark: Failed to create Spark client" error; hopefully the recommended answer helps anyone who runs into the same problem.
