This article looks at the question of how to join two pyspark dataframes that have no common column using monotonically_increasing_id().
Problem description
I have two pyspark dataframes with the same number of rows, but they don't have any common column. So I am adding a new column to both of them using monotonically_increasing_id():
from pyspark.sql.functions import monotonically_increasing_id as mi
id = mi()
df1 = df1.withColumn("match_id", id)
cont_data = cont_data.withColumn("match_id", id)
cont_data = cont_data.join(df1, df1.match_id == cont_data.match_id, 'inner').drop(df1.match_id)
But after the join, the resulting dataframe has fewer rows. What am I missing here? Thanks.
Recommended answer
You just don't. This is not an applicable use case for monotonically_increasing_id, which is by definition non-deterministic: the values it produces depend on how the data is partitioned, so there is no guarantee that the two dataframes receive matching ids. Instead, as sketched below:
- convert to RDD
- zipWithIndex
- convert back to DataFrame
- join
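A minimal sketch of that recipe, assuming spark is an active SparkSession and df1 and cont_data are the dataframes from the question; the helper name with_row_index and the column name row_idx are illustrative choices, not part of the original answer:

from pyspark.sql.types import LongType, StructField, StructType

def with_row_index(df, col_name="row_idx"):
    # Attach a consecutive 0-based row index using RDD zipWithIndex.
    new_schema = StructType(df.schema.fields + [StructField(col_name, LongType(), False)])
    indexed_rdd = df.rdd.zipWithIndex().map(lambda pair: tuple(pair[0]) + (pair[1],))
    return spark.createDataFrame(indexed_rdd, new_schema)

df1_indexed = with_row_index(df1)
cont_indexed = with_row_index(cont_data)

# Join on the generated index; the helper column is dropped from the result.
result = cont_indexed.join(df1_indexed, on="row_idx", how="inner").drop("row_idx")

Unlike monotonically_increasing_id, zipWithIndex assigns consecutive indices 0 through n-1 in row order, so the two dataframes line up row for row, provided their existing row order is the pairing you intend.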