Problem description
I would like to include null values in an Apache Spark join. Spark doesn't include rows with null by default.
Here is the default Spark behavior.
val numbersDf = Seq(
  ("123"),
  ("456"),
  (null),
  ("")
).toDF("numbers")

val lettersDf = Seq(
  ("123", "abc"),
  ("456", "def"),
  (null, "zzz"),
  ("", "hhh")
).toDF("numbers", "letters")
val joinedDf = numbersDf.join(lettersDf, Seq("numbers"))
Here is the output of joinedDf.show():
+-------+-------+
|numbers|letters|
+-------+-------+
| 123| abc|
| 456| def|
| | hhh|
+-------+-------+
Here is the output I would like:
+-------+-------+
|numbers|letters|
+-------+-------+
| 123| abc|
| 456| def|
| | hhh|
| null| zzz|
+-------+-------+
Recommended answer
Spark provides a special NULL safe equality operator:
numbersDf
  .join(lettersDf, numbersDf("numbers") <=> lettersDf("numbers"))
  .drop(lettersDf("numbers"))
+-------+-------+
|numbers|letters|
+-------+-------+
| 123| abc|
| 456| def|
| null| zzz|
| | hhh|
+-------+-------+
Be careful not to use it with Spark 1.5 or earlier. Prior to Spark 1.6 it required a Cartesian product (SPARK-11111 - Fast null-safe join).
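A quick way (my own check, not part of the original answer) to confirm that on Spark 1.6+ the null-safe join is planned as a regular join rather than a Cartesian product is to inspect the physical plan:

// Print the physical plan; on Spark 1.6+ the <=> condition should compile to a
// SortMergeJoin or BroadcastHashJoin rather than a CartesianProduct.
numbersDf
  .join(lettersDf, numbersDf("numbers") <=> lettersDf("numbers"))
  .explain()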
In Spark 2.3.0 or later you can use Column.eqNullSafe in PySpark:
numbers_df = sc.parallelize([
    ("123", ), ("456", ), (None, ), ("", )
]).toDF(["numbers"])

letters_df = sc.parallelize([
    ("123", "abc"), ("456", "def"), (None, "zzz"), ("", "hhh")
]).toDF(["numbers", "letters"])
numbers_df.join(letters_df, numbers_df.numbers.eqNullSafe(letters_df.numbers))
+-------+-------+-------+
|numbers|numbers|letters|
+-------+-------+-------+
| 456| 456| def|
| null| null| zzz|
| | | hhh|
| 123| 123| abc|
+-------+-------+-------+
And %<=>% in SparkR:
numbers_df <- createDataFrame(data.frame(numbers = c("123", "456", NA, "")))
letters_df <- createDataFrame(data.frame(
  numbers = c("123", "456", NA, ""),
  letters = c("abc", "def", "zzz", "hhh")
))
head(join(numbers_df, letters_df, numbers_df$numbers %<=>% letters_df$numbers))
  numbers numbers letters
1     456     456     def
2    <NA>    <NA>     zzz
3                     hhh
4     123     123     abc
With SQL (Spark 2.2.0+) you can use IS NOT DISTINCT FROM:
SELECT * FROM numbers JOIN letters
ON numbers.numbers IS NOT DISTINCT FROM letters.numbers
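For the SQL query above to resolve the table names, the DataFrames first have to be registered as views. A minimal sketch, assuming Spark 2.x and the Scala DataFrames from the beginning (the view names numbers and letters are chosen here to match the query):

// Register the DataFrames as temporary views so SQL can refer to them by name.
numbersDf.createOrReplaceTempView("numbers")
lettersDf.createOrReplaceTempView("letters")

spark.sql(
  """SELECT * FROM numbers JOIN letters
    |ON numbers.numbers IS NOT DISTINCT FROM letters.numbers""".stripMargin
).show()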
This can be used with the DataFrame API as well:
numbersDf.alias("numbers")
.join(lettersDf.alias("letters"))
.where("numbers.numbers IS NOT DISTINCT FROM letters.numbers")