Q: Is there any way to merge two DataFrames, or to copy a column of one DataFrame to another, in PySpark?

For example, I have two DataFrames:

DF1
C1                    C2
23397414             20875.7353
5213970              20497.5582
41323308             20935.7956
123276113            18884.0477
76456078             18389.9269


and a second DataFrame:

DF2
C3                       C4
2008-02-04               262.00
2008-02-05               257.25
2008-02-06               262.75
2008-02-07               237.00
2008-02-08               231.00


I would then like to add C3 of DF2 to DF1, like this:

New DF
    C1                    C2          C3
    23397414             20875.7353   2008-02-04
    5213970              20497.5582   2008-02-05
    41323308             20935.7956   2008-02-06
    123276113            18884.0477   2008-02-07
    76456078             18389.9269   2008-02-08


I hope this example is clear.

Best answer

In this case, a row number generated with a window function (Solution 1) or zipWithIndex on the underlying RDD (Solution 2) should help.

Solution 1: You can use a window function to generate this kind of row number.

First, add the row number as an additional column, say columnindex, to the DataFrame df1:

DF1
C1                    C2                 columnindex
23397414             20875.7353            1
5213970              20497.5582            2
41323308             20935.7956            3
123276113            18884.0477            4
76456078             18389.9269            5


and do the same for the second DataFrame:

DF2
C3                       C4             columnindex
2008-02-04               262.00            1
2008-02-05               257.25            2
2008-02-06               262.75            3
2008-02-07               237.00            4
2008-02-08               231.00            5


Now do an inner join of df1 and df2 on columnindex, and that's it; you'll get the output below.
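
If you want to reproduce the example, here is one way to build the two sample DataFrames. This is just a sketch, assuming an existing SparkSession named spark; it defines the df1/df2 referenced in the snippet below.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Sample data copied from the tables above
df1 = spark.createDataFrame(
    [(23397414, 20875.7353), (5213970, 20497.5582), (41323308, 20935.7956),
     (123276113, 18884.0477), (76456078, 18389.9269)],
    ["C1", "C2"])
df2 = spark.createDataFrame(
    [("2008-02-04", 262.00), ("2008-02-05", 257.25), ("2008-02-06", 262.75),
     ("2008-02-07", 237.00), ("2008-02-08", 231.00)],
    ["C3", "C4"])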

The whole thing looks something like this:

from pyspark.sql.window import Window
from pyspark.sql.functions import row_number, lit

# An empty orderBy() is not allowed, so order by a constant instead.
# Caveat: a window with no partitioning pulls all rows into a single
# partition, so this only suits small DataFrames, and the row order is
# not guaranteed unless you order by a real column.
w = Window.orderBy(lit(1))

# df1 and df2 are the DataFrames shown above

df11 = df1.withColumn("columnindex", row_number().over(w))
df22 = df2.withColumn("columnindex", row_number().over(w))

# Inner join on the index, then keep only the columns we want
newDF = (df11.join(df22, df11.columnindex == df22.columnindex, 'inner')
             .select("C1", "C2", "C3"))
newDF.show()



New DF
    C1                    C2          C3
    23397414             20875.7353   2008-02-04
    5213970              20497.5582   2008-02-05
    41323308             20935.7956   2008-02-06
    123276113            18884.0477   2008-02-07
    76456078             18389.9269   2008-02-08


Solution 2: Another good way to do this (possibly the best) is in Scala, using zipWithIndex on the underlying RDD; you can convert it to PySpark (a sketch follows the Scala code below):

import org.apache.spark.sql.{DataFrame, Row}
import org.apache.spark.sql.types.{LongType, StructField, StructType}

/**
 * Add a column index to a DataFrame
 */
def addColumnIndex(df: DataFrame) = sqlContext.createDataFrame(
  // Append the row index to each row
  df.rdd.zipWithIndex.map { case (row, columnindex) => Row.fromSeq(row.toSeq :+ columnindex) },
  // Extend the schema with the new column
  StructType(df.schema.fields :+ StructField("columnindex", LongType, false))
)

// Add the index now...
val df1WithIndex = addColumnIndex(df1)
val df2WithIndex = addColumnIndex(df2)

// Now time to join...
val newone = df1WithIndex
  .join(df2WithIndex, Seq("columnindex"))
  .drop("columnindex")
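
Since the question is about PySpark, here is a rough conversion of the same zipWithIndex idea to Python. This is a minimal sketch, assuming an existing SparkSession named spark and the df1/df2 shown above; the name add_column_index is just illustrative.

from pyspark.sql.types import LongType, StructField, StructType

def add_column_index(df):
    # Extend the schema with the extra columnindex column
    new_schema = StructType(df.schema.fields +
                            [StructField("columnindex", LongType(), False)])
    # zipWithIndex pairs each row with its position; append it to the row
    indexed = df.rdd.zipWithIndex().map(lambda pair: tuple(pair[0]) + (pair[1],))
    return spark.createDataFrame(indexed, new_schema)

df1_with_index = add_column_index(df1)
df2_with_index = add_column_index(df2)

new_df = (df1_with_index
          .join(df2_with_index, "columnindex")
          .drop("columnindex"))
new_df.show()

Unlike the window approach, zipWithIndex does not pull everything into one partition, so it scales better; note that the index reflects the current partition order of the rows.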
