This article describes how to connect to MySQL from Spark; it should be a useful reference for anyone hitting the same problem.

Problem Description

I am trying to follow the instructions mentioned here...

https://www.percona.com/blog/2016/08/17/apache-spark-makes-slow-mysql-queries-10x-faster/

...and here...

https://www.percona.com/blog/2015/10/07/using-apache-spark-mysql-data-analysis/

I am using the sequenceiq/spark Docker image.

docker run -it -p 8088:8088 -p 8042:8042 -p 4040:4040 -h sandbox sequenceiq/spark:1.6.0 bash

cd /usr/local/spark/

./sbin/start-master.sh

./bin/spark-shell --driver-memory 1G --executor-memory 1g --executor-cores 1 --master local
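One detail the launch steps above omit: the MySQL JDBC driver is not bundled with Spark, so the shell must be started with the connector on the classpath, for example via `--packages`. A sketch of the launch command under that assumption (the connector version below is an illustrative choice, not something from the original post):

```shell
# Launch spark-shell with the MySQL Connector/J driver pulled from Maven Central.
# The version 5.1.39 is an assumed example; pick a release matching your MySQL server.
./bin/spark-shell --driver-memory 1G --executor-memory 1g --executor-cores 1 \
  --master local \
  --packages mysql:mysql-connector-java:5.1.39
```

Alternatively, a locally downloaded jar can be passed with `--jars /path/to/mysql-connector-java.jar`.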

This works as expected:

scala> sc.parallelize(1 to 1000).count()

But this raises an error:

val jdbcDF = spark.read.format("jdbc").options(
  Map("url" ->  "jdbc:mysql://1.2.3.4:3306/test?user=dba&password=dba123",
  "dbtable" -> "ontime.ontime_part",
  "fetchSize" -> "10000",
  "partitionColumn" -> "yeard", "lowerBound" -> "1988", "upperBound" -> "2016", "numPartitions" -> "28"
  )).load()
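For context on the partitioning options used above: Spark turns partitionColumn, lowerBound, upperBound and numPartitions into one range predicate per partition, so each task reads its own slice of yeard. A rough sketch of the stride arithmetic (illustrative only, not Spark's exact internal code):

```scala
// Illustrative sketch of how the JDBC source slices a numeric partition
// column into per-partition ranges. This mirrors the stride logic only;
// it is not Spark's actual implementation.
object PartitionSketch {
  def ranges(lower: Long, upper: Long, numPartitions: Int): Seq[(Long, Long)] = {
    val stride = ((upper - lower) / numPartitions).max(1L)
    (0 until numPartitions).map { i =>
      val start = lower + i * stride
      val end   = if (i == numPartitions - 1) upper else start + stride
      (start, end) // roughly: WHERE yeard >= start AND yeard < end
    }
  }

  def main(args: Array[String]): Unit = {
    // With lowerBound=1988, upperBound=2016, numPartitions=28 the stride is 1,
    // i.e. one partition per year.
    ranges(1988L, 2016L, 28).foreach(println)
  }
}
```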

Here is the error:

<console>:25: error: not found: value spark
         val jdbcDF = spark.read.format("jdbc").options(

How do I connect to MySQL from within the Spark shell?

Recommended Answer

With Spark 2.0.x you can use DataFrameReader and DataFrameWriter: SparkSession.read returns a DataFrameReader, and Dataset.write returns a DataFrameWriter. Note that the sequenceiq/spark:1.6.0 image used in the question runs Spark 1.6, whose spark-shell provides sqlContext but no SparkSession; the spark value only exists from Spark 2.0 onward, which is exactly why the shell reports "not found: value spark".

Suppose you are using spark-shell.

// JDBC connection properties
val prop = new java.util.Properties()
prop.put("user", "username")
prop.put("password", "yourpassword")
val url = "jdbc:mysql://host:port/db_name"

// Read table_name into a DataFrame and print the first rows
val df = spark.read.jdbc(url, "table_name", prop)
df.show()
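If you are stuck on the Spark 1.6 image from the question, the same read goes through sqlContext instead, since SparkSession does not exist there. A sketch under that assumption, reusing the connection details from the question:

```scala
// Spark 1.6: spark-shell exposes sqlContext rather than a SparkSession.
val jdbcDF = sqlContext.read.format("jdbc").options(
  Map(
    "url"     -> "jdbc:mysql://1.2.3.4:3306/test?user=dba&password=dba123",
    "dbtable" -> "ontime.ontime_part",
    "driver"  -> "com.mysql.jdbc.Driver" // Connector/J driver class
  )).load()
```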

A second read example:

val jdbcDF = spark.read
  .format("jdbc")
  .option("url", "jdbc:mysql://dbserver:3306/db_name")
  .option("dbtable", "schema.tablename")
  .option("user", "username")
  .option("password", "password")
  .load()

A write example, from the Spark documentation:

import org.apache.spark.sql.SaveMode

val prop = new java.util.Properties()
prop.put("user", "username")
prop.put("password", "yourpassword")
val url = "jdbc:mysql://host:port/db_name"

// df is a DataFrame containing the data you want to write
df.write.mode(SaveMode.Append).jdbc(url, "table_name", prop)

That concludes this article on connecting to MySQL from Spark; hopefully the recommended answer above helps.