Writing and Reading Raw Byte Arrays in Spark Using SequenceFile

This article covers how to write and read raw byte arrays in Spark using a SequenceFile; the recommended answer below should be a useful reference for anyone hitting the same problem.

Problem Description

How do you write RDD[Array[Byte]] to a file using Apache Spark and read it back again?

Recommended Answer

A common problem seems to be getting a weird cannot-cast exception from BytesWritable to NullWritable. Another common problem is that BytesWritable.getBytes doesn't do what its name suggests: it returns your bytes with a ton of zeros padded onto the end, because it hands back the entire backing buffer rather than just the valid portion. You have to use copyBytes instead.
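To make the pitfall concrete, here is a minimal sketch (the values are made up for illustration; BytesWritable over-allocates and reuses its backing buffer, which is where the trailing zeros come from):

import org.apache.hadoop.io.BytesWritable

val w = new BytesWritable()
w.set(Array[Byte](1, 2, 3, 4, 5), 0, 5) // buffer grows to hold 5 bytes
w.set(Array[Byte](9), 0, 1)             // logical length shrinks to 1; the buffer does not

println(w.getLength)         // 1 -> the number of valid bytes
println(w.getBytes.length)   // >= 5 -> the whole zero-padded backing buffer
println(w.copyBytes().toSeq) // a one-element sequence: exactly the valid bytes, freshly copied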

import org.apache.hadoop.io.{BytesWritable, NullWritable}
import org.apache.spark.rdd.RDD

val rdd: RDD[Array[Byte]] = ???

// To write: key each byte array by NullWritable and wrap it in a BytesWritable
rdd.map(bytesArray => (NullWritable.get(), new BytesWritable(bytesArray)))
  .saveAsSequenceFile("/output/path", codecOpt) // codecOpt: Option[Class[_ <: CompressionCodec]]

// To read: use copyBytes, not getBytes, to recover exactly the valid bytes
val loaded: RDD[Array[Byte]] = sc.sequenceFile[NullWritable, BytesWritable]("/input/path")
  .map(_._2.copyBytes())
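
For an end-to-end check, a self-contained sketch along these lines should work (the /tmp/bytes-seq path, the local master, and GzipCodec are assumptions chosen for illustration):

import org.apache.hadoop.io.{BytesWritable, NullWritable}
import org.apache.hadoop.io.compress.GzipCodec
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("bytes-seqfile").setMaster("local[*]"))

// Write a couple of byte arrays out as a compressed SequenceFile
val data = sc.parallelize(Seq(Array[Byte](1, 2, 3), "hello".getBytes("UTF-8")))
data.map(b => (NullWritable.get(), new BytesWritable(b)))
  .saveAsSequenceFile("/tmp/bytes-seq", Some(classOf[GzipCodec]))

// Read it back; note copyBytes happens before collect, because Hadoop
// reuses the same BytesWritable instance for every record
val back = sc.sequenceFile[NullWritable, BytesWritable]("/tmp/bytes-seq")
  .map(_._2.copyBytes())
back.collect().foreach(b => println(b.mkString(",")))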

That wraps up this look at writing and reading raw byte arrays in Spark using SequenceFile; hopefully the recommended answer above helps.
