I am new to Scala and need some quick help.

I have an M * N Spark SQL DataFrame, shown below. I need to compare the value in each row's column with the value in the same column of the next row.

Something like A1 to A2, A1 to A3, and so on up to N; likewise B1 to B2, B1 to B3.

Can someone guide me on how to do this row-by-row comparison in Spark SQL?

ID  COLUMN1 COLUMN2
1   A1  B1
2   A2  B2
3   A3  B3

Thanks in advance,
Santosh

Best Answer

If I understand the question correctly - you want to compare (using some function) each value with the value of the same column in the previous record. You can do that with the lag window function:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.Column
import org.apache.spark.sql.functions._
import spark.implicits._ // assumes an active SparkSession named `spark`

// some data...
val df = Seq(
  (1, "A1", "B1"),
  (2, "A2", "B2"),
  (3, "A3", "B3")
).toDF("ID","COL1", "COL2")

// some made-up comparisons - fill in whatever you want...
def compareCol1(curr: Column, prev: Column): Column = curr > prev
def compareCol2(curr: Column, prev: Column): Column = concat(curr, prev)

// creating window - ordered by ID
val window = Window.orderBy("ID")

// using the window with lag function to compare to previous value in each column
df.withColumn("COL1-comparison", compareCol1($"COL1", lag("COL1", 1).over(window)))
  .withColumn("COL2-comparison", compareCol2($"COL2", lag("COL2", 1).over(window)))
  .show()

// +---+----+----+---------------+---------------+
// | ID|COL1|COL2|COL1-comparison|COL2-comparison|
// +---+----+----+---------------+---------------+
// |  1|  A1|  B1|           null|           null|
// |  2|  A2|  B2|           true|           B2B1|
// |  3|  A3|  B3|           true|           B3B2|
// +---+----+----+---------------+---------------+
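One caveat, not covered in the answer above: a window with `orderBy` but no `partitionBy` moves all rows onto a single partition, and Spark logs a performance warning for it. If your data has a natural grouping key, you can partition the window on it. A minimal sketch, assuming a hypothetical grouping column named `GROUP` that your DataFrame would need to contain:

```scala
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

// "GROUP" is a hypothetical column - replace it with your real key.
// Partitioning lets Spark compute lag within each group in parallel,
// instead of collecting every row into a single partition.
val partitionedWindow = Window.partitionBy("GROUP").orderBy("ID")

val result = df.withColumn("prevCol1", lag("COL1", 1).over(partitionedWindow))
```

With partitioning, `lag` returns null for the first row of each group rather than only for the first row of the whole DataFrame.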
