Problem description

I am trying to create a data frame from JSON in Parquet format. I am getting the following exception:

Exception in thread "main" org.apache.spark.sql.AnalysisException: Attribute name "d?G?@4???[[l?~?N!^w1?X!8??ingSuccessful" contains invalid character(s) among " ,;{}()\n\t=". Please use alias to rename it.

I know that some JSON keys having special characters is the reason for the above exception. However, I do not know how many keys have special characters.

One possible solution would be to replace the special characters in the keys with an underscore or a blank while creating the RDD and reading line by line.

I am creating the Parquet file using the following code:

dataDf.coalesce(1)
  .write
  .partitionBy("year", "month", "day", "hour")
  .option("header", "true")
  .option("delimiter", "\t")
  .format("parquet")
  .save("events")

Answer

If you don't know how many column names contain special characters, you can use df.columns to get all the column names and replace the special characters in every one of them. Renaming the columns this way before writing them to Parquet files should solve the issue you are having.

I hope the answer is helpful.
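The renaming step described above can be sketched as follows. This is a minimal sketch, not the answerer's exact code: it assumes dataDf is the DataFrame from the question, that replacing each invalid character with an underscore is acceptable for your data, and sanitize is a helper name introduced here for illustration.

```scala
import org.apache.spark.sql.DataFrame

// Replace every character Parquet rejects in attribute names
// (" ,;{}()\n\t=") with an underscore.
def sanitize(name: String): String =
  name.replaceAll("[ ,;{}()\\n\\t=]", "_")

// Rename every column, then write the cleaned frame as before.
val cleanedDf: DataFrame =
  dataDf.columns.foldLeft(dataDf) { (df, col) =>
    df.withColumnRenamed(col, sanitize(col))
  }

cleanedDf.coalesce(1)
  .write
  .partitionBy("year", "month", "day", "hour")
  .format("parquet")
  .save("events")
```

An equivalent one-liner is dataDf.toDF(dataDf.columns.map(sanitize): _*). Note one caveat: two different keys could map to the same sanitized name (e.g. "a b" and "a;b" both become "a_b"), so you may need to deduplicate the renamed columns for your data.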
This concludes the article on how to handle JSON keys with special characters in Spark Parquet. I hope the answer above is helpful.