I am using Structured Streaming in update mode to read a stream from a Kafka topic and then apply some transformations.
I then created a JDBC sink to push the data into a MySQL sink in Append mode. The question is: how do I tell the sink which column is my primary key and have it update based on that key, so that my table does not end up with duplicate rows?
val df: DataFrame = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "<List-here>")
  .option("subscribe", "emp-topic")
  .load()

import spark.implicits._

// value in kafka is bytes so cast it to String
val empList: Dataset[Employee] = df
  .selectExpr("CAST(value AS STRING)")
  .map(row => Employee(row.getString(0)))

// window aggregations on 1 min windows
val aggregatedDf = ......

// How to tell here that id is my primary key and do the update
// based on id column
aggregatedDf
  .writeStream
  .trigger(Trigger.ProcessingTime(60.seconds))
  .outputMode(OutputMode.Update)
  .foreachBatch { (batchDF: DataFrame, batchId: Long) =>
    batchDF
      .select("id", "name", "salary", "dept")
      .write.format("jdbc")
      .option("url", "jdbc:mysql://localhost/empDb")
      .option("driver", "com.mysql.cj.jdbc.Driver")
      .option("dbtable", "empDf")
      .option("user", "root")
      .option("password", "root")
      .mode(SaveMode.Append)
      .save()
  }
Best Answer
One way to achieve this is to use ON DUPLICATE KEY UPDATE together with foreachPartition.
Below is a pseudo-code snippet:
import java.sql.{Connection, DriverManager}
import org.apache.spark.sql.{DataFrame, Row}

/**
 * Upsert into the database using foreachPartition.
 * @param dataframe                   batch DataFrame to write
 * @param sqlDatabaseConnectionString JDBC connection string
 * @param sqlTableName                target table name
 * @param numPartitions               number of simultaneous DB connections you plan to allow
 */
def insertToTable(dataframe: DataFrame,
                  sqlDatabaseConnectionString: String,
                  sqlTableName: String,
                  numPartitions: Int): Unit = {
  // numPartitions = number of simultaneous DB connections you are planning to allow
  val repartitioned = dataframe.repartition(numPartitions)
  val tableHeader: String = repartitioned.columns.mkString(",")
  // on conflict, update every non-key column; replace "id" with your primary key column
  val updateClause: String = repartitioned.columns
    .filterNot(_ == "id")
    .map(c => s"$c=VALUES($c)")
    .mkString(", ")
  repartitioned.foreachPartition { partition: Iterator[Row] =>
    // Note: one connection per partition (a better way is to use a connection pool)
    val sqlExecutorConnection: Connection =
      DriverManager.getConnection(sqlDatabaseConnectionString)
    // Batch size of 1000 is used since some databases can't take more than 1000 rows
    // per statement, e.g. Azure SQL
    partition.grouped(1000).foreach { group =>
      // build "(v1,v2,...),(v1,v2,...)" for the whole group
      val valuesString: String = group
        .map(record => record.toSeq.map(v => s"'$v'").mkString("(", ",", ")"))
        .mkString(",")
      val sql =
        s"""
           |INSERT INTO $sqlTableName ($tableHeader)
           |VALUES $valuesString
           |ON DUPLICATE KEY UPDATE $updateClause
         """.stripMargin
      sqlExecutorConnection.createStatement().executeUpdate(sql)
    }
    sqlExecutorConnection.close() // close the connection
  }
}
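To connect this back to the question's streaming job, the helper could be called from the existing foreachBatch sink. A minimal sketch, assuming the MySQL table empDf was created with id as its PRIMARY KEY, that credentials can be passed on the JDBC URL, and that 4 parallel connections is acceptable:

aggregatedDf
  .writeStream
  .trigger(Trigger.ProcessingTime(60.seconds))
  .outputMode(OutputMode.Update)
  .foreachBatch { (batchDF: DataFrame, batchId: Long) =>
    // upsert each micro-batch; rows with an existing id are updated instead of duplicated
    insertToTable(
      batchDF.select("id", "name", "salary", "dept"),
      "jdbc:mysql://localhost/empDb?user=root&password=root",
      "empDf",
      numPartitions = 4 // assumed number of parallel MySQL connections, tune as needed
    )
  }
  .start()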
You can use a PreparedStatement instead of a plain JDBC Statement.
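As a sketch of that suggestion (the column names and id key come from the question's schema; the field types and the batch size of 1000 are assumptions), the same upsert can be issued as a parameterized statement so the driver handles value escaping instead of string concatenation:

import java.sql.{DriverManager, PreparedStatement}
import org.apache.spark.sql.Row

def upsertPartition(partition: Iterator[Row], connectionString: String): Unit = {
  val connection = DriverManager.getConnection(connectionString)
  // one parameterized upsert, reused for every row in the partition
  val sql =
    """INSERT INTO empDf (id, name, salary, dept) VALUES (?, ?, ?, ?)
      |ON DUPLICATE KEY UPDATE name=VALUES(name), salary=VALUES(salary), dept=VALUES(dept)
      |""".stripMargin
  val stmt: PreparedStatement = connection.prepareStatement(sql)
  try {
    partition.grouped(1000).foreach { group =>
      group.foreach { row =>
        // field types are assumptions; adjust to the actual Employee schema
        stmt.setString(1, row.getAs[String]("id"))
        stmt.setString(2, row.getAs[String]("name"))
        stmt.setDouble(3, row.getAs[Double]("salary"))
        stmt.setString(4, row.getAs[String]("dept"))
        stmt.addBatch()
      }
      stmt.executeBatch() // send up to 1000 upserts per round trip
    }
  } finally {
    stmt.close()
    connection.close()
  }
}

It can be called from foreachPartition in place of the StringBuilder loop above.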
Further reading: SPARK SQL - update MySql table using DataFrames and JDBC
The original question on Stack Overflow: mysql - Spark Structured Streaming: primary key in JDBC sink, https://stackoverflow.com/questions/55954996/