Problem description
I have seen several posts that contain the same error as the one I am receiving, but none of them led me to a fix for my code. I have used this exact same code many times with no issue, and now I am having problems. Here is the error I receive:
py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: org.apache.spark.SparkException: Only one SparkContext may be running in this JVM (see SPARK-2243).
Here is how I start my context within my Python script:
import pyspark as ps
from pyspark.sql import SQLContext

# Note: the MongoDB URI below is cut off in the original post
spark = ps.sql.SparkSession.builder \
    .master("local[*]") \
    .appName("collab_rec") \
    .config("spark.mongodb.input.uri", "mongodb://127.0.0.1/bgg.game_commen") \
    .getOrCreate()

sc = spark.sparkContext
sc.setCheckpointDir('checkpoint/')
sqlContext = SQLContext(spark)
Please let me know if you have any suggestions.
Recommended answer
SparkSession is the new entry point in Spark 2.x. It is a replacement for SQLContext; however, it still uses SQLContext internally.
Everything you were doing with SQLContext should be possible with SparkSession; for example:
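A minimal sketch of creating a DataFrame and running SQL directly on a SparkSession, with no SQLContext involved (the table and column names here are made up for illustration):

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .master("local[*]") \
    .appName("collab_rec") \
    .getOrCreate()

# Create a DataFrame directly from the session; no SQLContext needed
df = spark.createDataFrame([(1, "Catan"), (2, "Carcassonne")], ["game_id", "name"])

# Register it as a temp view and query it with SQL, again via the session
df.createOrReplaceTempView("games")
spark.sql("SELECT name FROM games WHERE game_id = 1").show()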
If you really want to use SQLContext, use the spark.sqlContext variable.
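Applied to the snippet from the question, that would mean reusing the context the session already carries instead of constructing a new one; a minimal sketch, assuming spark.sqlContext is exposed in your PySpark version as the answer describes:

import pyspark as ps

spark = ps.sql.SparkSession.builder \
    .master("local[*]") \
    .appName("collab_rec") \
    .getOrCreate()

sc = spark.sparkContext
sc.setCheckpointDir('checkpoint/')

# Reuse the SQLContext owned by the session rather than calling SQLContext(spark)
sqlContext = spark.sqlContext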