Question
For the life of me I cannot figure out what is wrong with my PySpark install. I have installed all dependencies, including Hadoop, but PySpark can't find it -- am I diagnosing this correctly?
See the full error message below, but it ultimately fails on PySpark SQL
nickeleres@Nicks-MBP:~$ pyspark
Python 2.7.10 (default, Feb 7 2017, 00:08:15)
[GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.34)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/opt/spark-2.2.0/jars/hadoop-auth-2.7.3.jar) to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
17/10/24 21:21:58 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/10/24 21:21:59 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
17/10/24 21:21:59 WARN Utils: Service 'SparkUI' could not bind on port 4041. Attempting port 4042.
17/10/24 21:21:59 WARN Utils: Service 'SparkUI' could not bind on port 4042. Attempting port 4043.
Traceback (most recent call last):
File "/opt/spark/python/pyspark/shell.py", line 45, in <module>
spark = SparkSession.builder\
File "/opt/spark/python/pyspark/sql/session.py", line 179, in getOrCreate
session._jsparkSession.sessionState().conf().setConfString(key, value)
File "/opt/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
File "/opt/spark/python/pyspark/sql/utils.py", line 79, in deco
raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.IllegalArgumentException: u"Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
>>>
Answer
tl;dr Close all the other Spark processes and start over.
The following WARN messages indicate that another process (or several processes) is already holding those ports.
I'm fairly sure these are other Spark processes, e.g. pyspark sessions or Spark applications.
17/10/24 21:21:59 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
17/10/24 21:21:59 WARN Utils: Service 'SparkUI' could not bind on port 4041. Attempting port 4042.
17/10/24 21:21:59 WARN Utils: Service 'SparkUI' could not bind on port 4042. Attempting port 4043.
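Outside of Spark, the same "is this port taken?" check can be sketched in a few lines of plain Python (a diagnostic sketch, not Spark's own code; the port range 4040-4043 comes from the log above):

```python
import socket

def port_busy(port, host="127.0.0.1"):
    """Return True if something is already listening on (host, port)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return False  # bind succeeded, so the port was free
        except OSError:
            return True   # bind failed, so another process holds it

# On the asker's machine this would report 4040-4042 busy (the stray Spark UIs)
for port in range(4040, 4044):
    print(port, "busy" if port_busy(port) else "free")
```

If several ports in that range come back busy, that is a strong hint that older pyspark shells or Spark apps are still running.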
That's why, after Spark/pyspark found a free port for the web UI (4043 in this log), it went on to instantiate HiveSessionStateBuilder and failed.
pyspark failed because you cannot have more than one Spark application up and running that uses the same local Hive metastore (the default embedded Derby database accepts only one connection at a time).
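One quick way to see whether an embedded Derby metastore is currently held by a session is to look for Derby's lock file. This is a small sketch that assumes the default `metastore_db` directory Spark creates in whatever directory you launched pyspark from:

```python
import os

# Derby writes a db.lck lock file inside the database directory while a
# process holds the database; for Spark's default local metastore that
# directory is ./metastore_db relative to where pyspark was started.
lock = os.path.join("metastore_db", "db.lck")
print("metastore lock present:", os.path.exists(lock))
```

If the lock is present while you are trying to start a fresh shell, find and stop the other Spark process first; closing all Spark processes and starting a single new pyspark shell clears both the port and the metastore conflict.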