Problem Description
I have two dataframes, each read from a CSV file.
+---+----------+-----------------+
| ID| NUMBER | RECHARGE_AMOUNT|
+---+----------+-----------------+
| 1|9090909092| 30|
| 2|9090909093| 30|
| 3|9090909090| 30|
| 4|9090909094| 30|
+---+----------+-----------------+
and
+---+----------+-----------------+
| ID| NUMBER | RECHARGE_AMOUNT|
+---+----------+-----------------+
| 1|9090909092| 40|
| 2|9090909093| 50|
| 3|9090909090| 60|
| 4|9090909094| 70|
+---+----------+-----------------+
I am trying to join these two dataframes on the NUMBER column using the PySpark code dfFinal = dfFinal.join(df2, on=['NUMBER'], how='inner'), and the new dataframe is generated as follows.
+----------+---+-----------------+---+-----------------+
| NUMBER | ID| RECHARGE_AMOUNT| ID| RECHARGE_AMOUNT|
+----------+---+-----------------+---+-----------------+
|9090909092| 1| 30| 1| 40|
|9090909093| 2| 30| 2| 50|
|9090909090| 3| 30| 3| 60|
|9090909094| 4| 30| 4| 70|
+----------+---+-----------------+---+-----------------+
But I am not able to write this dataframe to a file, since the dataframe after joining has duplicate columns. I am using the following code: dfFinal.coalesce(1).write.format('com.databricks.spark.csv').save('/home/user/output', header='true')
Is there any way to avoid duplicate columns after joining in Spark? Given below is my PySpark code.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("test1").getOrCreate()
files = ["/home/user/test1.txt", "/home/user/test2.txt"]

# Read the first file, then join each remaining file to it on NUMBER.
dfFinal = spark.read.load(files[0], format="csv", sep=",", inferSchema="false", header="true", mode="DROPMALFORMED")
dfFinal.show()
for i in range(1, len(files)):
    df2 = spark.read.load(files[i], format="csv", sep=",", inferSchema="false", header="true", mode="DROPMALFORMED")
    df2.show()
    dfFinal = dfFinal.join(df2, on=['NUMBER'], how='inner')
    dfFinal.show()
dfFinal.coalesce(1).write.format('com.databricks.spark.csv').save('/home/user/output', header='true')
I need to generate unique column names, i.e., if I give two files with the same columns in the files array, it should generate the following:
+----------+----+-------------------+-----+-------------------+
| NUMBER |IDx | RECHARGE_AMOUNTx | IDy | RECHARGE_AMOUNTy |
+----------+----+-------------------+-----+-------------------+
|9090909092| 1 | 30 | 1 | 40 |
|9090909093| 2 | 30 | 2 | 50 |
|9090909090| 3 | 30 | 3 | 60 |
|9090909094| 4 | 30 | 4 | 70 |
+----------+----+-------------------+-----+-------------------+
In pandas I can use the suffixes argument, as shown below: dfFinal = dfFinal.merge(df2, left_on='NUMBER', right_on='NUMBER', how='inner', suffixes=('x', 'y'), sort=True), which will generate the above dataframe. Is there any way I can replicate this in PySpark?
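For reference, a minimal runnable pandas version of the merge described above (the sample data is taken from the two tables in the question; everything else is standard pandas):

import pandas as pd

# Sample data matching the two tables in the question.
df1 = pd.DataFrame({'ID': [1, 2, 3, 4],
                    'NUMBER': [9090909092, 9090909093, 9090909090, 9090909094],
                    'RECHARGE_AMOUNT': [30, 30, 30, 30]})
df2 = pd.DataFrame({'ID': [1, 2, 3, 4],
                    'NUMBER': [9090909092, 9090909093, 9090909090, 9090909094],
                    'RECHARGE_AMOUNT': [40, 50, 60, 70]})

# suffixes disambiguates the overlapping ID and RECHARGE_AMOUNT columns;
# the join key NUMBER keeps its name.
merged = df1.merge(df2, on='NUMBER', how='inner', suffixes=('x', 'y'), sort=True)
print(list(merged.columns))  # ['IDx', 'NUMBER', 'RECHARGE_AMOUNTx', 'IDy', 'RECHARGE_AMOUNTy']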
Recommended Answer
You can select the columns from each dataframe and alias them, like this:
dfFinal = dfFinal.join(df2, on=['NUMBER'], how='inner') \
    .select('NUMBER',
            dfFinal.ID.alias('ID_1'),
            dfFinal.RECHARGE_AMOUNT.alias('RECHARGE_AMOUNT_1'),
            df2.ID.alias('ID_2'),
            df2.RECHARGE_AMOUNT.alias('RECHARGE_AMOUNT_2'))
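If the column list is not known in advance, the aliasing can be generated rather than written by hand. Below is a minimal sketch of that idea, not part of the original answer: the with_suffix helper and the '_x'/'_y' suffixes are illustrative, and dfFinal and df2 are assumed to be the two dataframes read from the CSV files. Once every column name is unique, the joined dataframe writes to CSV without the duplicate-column error.

from pyspark.sql.functions import col

def with_suffix(df, suffix, join_cols):
    # Append a suffix to every column that is not a join key.
    return df.select([col(c) if c in join_cols else col(c).alias(c + suffix)
                      for c in df.columns])

joined = with_suffix(dfFinal, '_x', ['NUMBER']) \
    .join(with_suffix(df2, '_y', ['NUMBER']), on=['NUMBER'], how='inner')

# All column names are now unique, so the write succeeds.
joined.coalesce(1).write.csv('/home/user/output', header=True)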