Problem Description
Hi, I have output from my Spark DataFrame that creates a folder structure with many part files. Now I have to merge all the part files inside each folder and rename the merged file to match the folder path name.
This is how I partition the data:
df.write.partitionBy("DataPartition","PartitionYear")
.format("csv")
.option("nullValue", "")
.option("header", "true")/
.option("codec", "gzip")
.save("hdfs:///user/zeppelin/FinancialLineItem/output")
It creates a folder structure like this:
hdfs:///user/zeppelin/FinancialLineItem/output/DataPartition=Japan/PartitionYear=1971/part-00001-87a61115-92c9-4926-a803-b46315e55a08.c000.csv.gz
hdfs:///user/zeppelin/FinancialLineItem/output/DataPartition=Japan/PartitionYear=1971/part-00002-87a61115-92c9-4926-a803-b46315e55a08.c001.csv.gz
I have to create the final file like this:
hdfs:///user/zeppelin/FinancialLineItem/output/Japan.1971.currenttime.csv.gz
No part files here; both 001 and 002 are merged into one.
My data is very big, 300 GB raw and 35 GB gzipped, so coalesce(1) and repartition become very slow.
I have seen one solution here: Write single CSV file using spark-csv, but I am not able to implement it. Please help me with it.
repartition throws an error:
error: value repartition is not a member of org.apache.spark.sql.DataFrameWriter[org.apache.spark.sql.Row]
dfMainOutputFinalWithoutNull.write.repartition("DataPartition","StatementTypeCode")
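The error occurs because repartition is defined on a DataFrame/Dataset, not on the DataFrameWriter returned by .write, so it has to be called before .write. A minimal sketch of the intended call order (column names taken from the snippets above; the writer options are assumed to match the first snippet):

import org.apache.spark.sql.functions.col

// repartition before .write; repartitioning by the same columns used in
// partitionBy typically yields a single part file per output partition,
// avoiding a global coalesce(1)
dfMainOutputFinalWithoutNull
  .repartition(col("DataPartition"), col("StatementTypeCode"))
  .write
  .partitionBy("DataPartition", "StatementTypeCode")
  .format("csv")
  .option("nullValue", "")
  .option("header", "true")
  .option("codec", "gzip")
  .save("hdfs:///user/zeppelin/FinancialLineItem/output")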
Try this from the head node outside of Spark...
hdfs dfs -getmerge <src> <localdst>
https://hadoop.apache.org/docs/r1.2.1/file_system_shell.html#getmerge
"Takes a source directory and a destination file as input and concatenates files in src into the destination local file. Optionally addnl can be set to enable adding a newline character at the end of each file."