Question
Below is the content of my csv file:
A1,B1,C1
A2,B2,C2,D1
A3,B3,C3,D2,E1
A4,B4,C4,D3
A5,B5,C5,,E2
So, there are 5 columns but only 3 values in the first row.
I read it using the following command:
val csvDF: DataFrame = spark.read
  .option("header", "false")
  .option("delimiter", ",")
  .option("inferSchema", "false")
  .csv("file.csv")
And the following is what I get using csvDF.show():
+---+---+---+
|_c0|_c1|_c2|
+---+---+---+
| A1| B1| C1|
| A2| B2| C2|
| A3| B3| C3|
| A4| B4| C4|
| A5| B5| C5|
+---+---+---+
How can I read all the data in all the columns?
Answer
Basically, your csv file isn't properly formatted, in the sense that it doesn't have an equal number of columns in each row, which is required if you want to read it with spark.read.csv. However, you can instead read it with spark.read.textFile and then parse each row yourself.
As I understand it, you do not know the number of columns beforehand, so you want your code to handle an arbitrary number of columns. To do this you need to establish the maximum number of columns in your data set, which requires two passes over it.
For this particular problem, I would actually go with RDDs instead of DataFrames or Datasets, like this:
import org.apache.spark.sql.Row

val data = spark.read.textFile("file.csv").rdd
// First pass: pair each line with its column count. The limit -1 keeps
// trailing empty fields that split would otherwise drop.
val rdd = data.map(s => (s, s.split(",", -1).length)).cache
val maxColumns = rdd.map(_._2).max()
// Second pass: pad each row with nulls up to maxColumns and build a Row.
val x = rdd.map { case (line, length) =>
  val rowData = line.split(",", -1)
  val extraColumns = Array.ofDim[String](maxColumns - length)
  Row((rowData ++ extraColumns).toList: _*)
}
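If you then want the result back as a DataFrame, so that all five columns show up in show(), here is a minimal sketch on top of the code above. It assumes generic column names in the style _c0, _c1, ... are acceptable; the schema and the df name are illustrative additions, not part of the original answer.

import org.apache.spark.sql.types.{StringType, StructField, StructType}

// Build a schema with one nullable string column per position, then wrap
// the RDD[Row] from above in a DataFrame.
val schema = StructType(
  (0 until maxColumns).map(i => StructField(s"_c$i", StringType, nullable = true))
)
val df = spark.createDataFrame(x, schema)
df.show()  // all five columns, with null in the padded cells

The padded cells come out as null, which matches how spark.read.csv represents missing values.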
Hope it helps :)