Problem description
rdd=sc.textFile(json or xml)
rdd.collect()
[u'{', u' "glossary": {', u' "title": "example glossary",', u'\t\t"GlossDiv": {', u' "title": "S",', u'\t\t\t"GlossList": {', u' "GlossEntry": {', u' "ID": "SGML",', u'\t\t\t\t\t"SortAs": "SGML",', u'\t\t\t\t\t"GlossTerm": "Standard Generalized Markup Language",', u'\t\t\t\t\t"Acronym": "SGML",', u'\t\t\t\t\t"Abbrev": "ISO 8879:1986",', u'\t\t\t\t\t"GlossDef": {', u' "para": "A meta-markup language, used to create markup languages such as DocBook.",', u'\t\t\t\t\t\t"GlossSeeAlso": ["GML", "XML"]', u' },', u'\t\t\t\t\t"GlossSee": "markup"', u' }', u' }', u' }', u' }', u'}', u'']
But my output should have everything in one line:
{"glossary": {"title": "example glossary","GlossDiv": {"title": "S","GlossList":.....}}
Recommended answer
I'd recommend using Spark SQL's JSON support and then calling toJSON when saving (see https://spark.apache.org/docs/latest/sql-programming-guide.html#json-datasets ):
val input = sqlContext.jsonFile(path)
val output = input...
output.toJSON.saveAsTextFile(outputPath)
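The compaction that toJSON performs is just standard JSON re-serialization: once a pretty-printed document is parsed, writing it back out produces a single line. A minimal pure-Python sketch of that effect (no Spark required; the sample document is illustrative):

```python
import json

# A pretty-printed, multi-line JSON document, like the asker's file.
pretty = """{
    "glossary": {
        "title": "example glossary"
    }
}"""

# Parsing and re-serializing collapses it to one compact line,
# which is the form toJSON/saveAsTextFile writes out per record.
compact = json.dumps(json.loads(pretty))
print(compact)
```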
However, if your JSON records can't be parsed by Spark SQL because of the multi-line issue or some other problem, we can take one of the examples from the Learning Spark book (slightly biased as a co-author, of course) and modify it to use wholeTextFiles:
import com.fasterxml.jackson.databind.{DeserializationFeature, ObjectMapper}
import com.fasterxml.jackson.module.scala.DefaultScalaModule
import com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper

case class Person(name: String, lovesPandas: Boolean)
// Read the input and throw away the file names
val input = sc.wholeTextFiles(inputFile).map(_._2)
// Parse it into a specific case class. We use mapPartitions because:
// (a) ObjectMapper is not serializable so we either create a singleton object encapsulating ObjectMapper
// on the driver and have to send data back to the driver to go through the singleton object.
// Alternatively we can let each node create its own ObjectMapper but that's expensive in a map
// (b) To solve for creating an ObjectMapper on each node without being too expensive we create one per
// partition with mapPartitions. Solves serialization and object creation performance hit.
val result = input.mapPartitions(records => {
// mapper object created on each executor node
val mapper = new ObjectMapper with ScalaObjectMapper
mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false)
mapper.registerModule(DefaultScalaModule)
// We use flatMap to handle errors
// by returning an empty list (None) if we encounter an issue and a
// list with one element if everything is ok (Some(_)).
records.flatMap(record => {
try {
Some(mapper.readValue(record, classOf[Person]))
} catch {
case e: Exception => None
}
})
}, true)
// writeValueAsString also needs a mapper; the one above is out of scope
// here, so create one per partition again
result.filter(_.lovesPandas).mapPartitions(records => {
  val mapper = new ObjectMapper with ScalaObjectMapper
  mapper.registerModule(DefaultScalaModule)
  records.map(mapper.writeValueAsString(_))
}).saveAsTextFile(outputFile)
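The comments above argue for one ObjectMapper per partition rather than one per record or a driver-side singleton. That cost trade-off can be demonstrated without Spark: the sketch below simulates partitions as lists of record strings and counts how many "parsers" get created (all names and sample records are made up for illustration):

```python
import json

# Two simulated partitions of raw JSON record strings.
partitions = [
    ['{"name": "holden", "lovesPandas": true}'],
    ['{"name": "notaperson", "lovesPandas": false}',
     'not valid json'],
]

parsers_created = 0

def parse_partition(records):
    # One parser per partition -- stands in for building ObjectMapper
    # once in mapPartitions instead of once per record in map.
    global parsers_created
    parsers_created += 1
    for record in records:
        try:
            yield json.loads(record)   # Some(...): record parsed fine
        except ValueError:
            continue                   # None: skip the malformed record

result = [row for part in partitions for row in parse_partition(part)]
print(parsers_created)  # one per partition, not one per record
print(result)           # malformed records silently dropped
```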
And in Python:
from pyspark import SparkContext
import json
import sys

if __name__ == "__main__":
    if len(sys.argv) != 4:
        print "Error usage: LoadJson [sparkmaster] [inputfile] [outputfile]"
        sys.exit(-1)
    master = sys.argv[1]
    inputFile = sys.argv[2]
    outputFile = sys.argv[3]
    sc = SparkContext(master, "LoadJson")
    # Keep the file contents, drop the file names
    input = sc.wholeTextFiles(inputFile).map(lambda x: x[1])
    # Each file holds one JSON document, so map (not flatMap) it to a record
    data = input.map(lambda x: json.loads(x))
    data.filter(lambda x: 'lovesPandas' in x and x['lovesPandas']).map(
        lambda x: json.dumps(x)).saveAsTextFile(outputFile)
    sc.stop()
    print "Done!"
That concludes this article on converting a multi-line JSON file into single-record entries as an RDD; hopefully the recommended answer helps.