This post looks at why `linedelimiter` does not work for bag.read_text and how to work around it.

Problem description

I am trying to load yaml from files created by

import dask.bag as bag
import yaml

# write one YAML document per bag element, then try to read them back
entries = bag.from_sequence([{1: 2}, {3: 4}])
yamls = entries.map(yaml.dump)
yamls.to_textfiles(r'*.yaml.gz')

yamls = bag.read_text(r'*.yaml.gz', linedelimiter='\n\n')

but it reads the files line by line. How can I read the yamls back from the files?

Update:


  1. While blocksize=None, read_text reads files line by line.
  2. If blocksize is set, compressed files cannot be read at all (a short sketch follows this list).
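
A minimal sketch of both observations, assuming the .yaml.gz files from the snippet above exist (the exact error text raised by dask may differ):

import dask.bag as bag

# Default blocksize=None: every element of the bag is a single line,
# no matter what linedelimiter is set to.
lines = bag.read_text(r'*.yaml.gz', linedelimiter='\n\n')
print(lines.compute())   # a list of individual lines, not YAML documents

# Setting a blocksize on gzip-compressed files is expected to fail, because
# a gzip stream cannot be split into independently readable blocks.
try:
    bag.read_text(r'*.yaml.gz', blocksize=2**20, linedelimiter='\n\n')
except ValueError as err:
    print(err)   # e.g. "Cannot do chunked reads on compressed files ..."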

How can I overcome this?

Recommended answer

Indeed, linedelimiter is used not for the sense you have in mind, but only for separating the larger blocks. As you say, when you compress with gzip, the file is no longer random-accessible, and blocks cannot be used at all.
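
To see what that means in practice, here is a rough sketch against hypothetical uncompressed copies of the files (say '*.yaml'; the question itself writes .gz files, so this path is an assumption):

import dask.bag as bag

# Hypothetical uncompressed files -- a gzip stream could not be chunked like
# this at all.  linedelimiter='\n\n' only constrains where the ~1 MiB blocks
# may start and end, so no '\n\n'-separated document straddles two
# partitions; at the time of this answer each element is still one line.
b = bag.read_text(r'*.yaml', blocksize=2**20, linedelimiter='\n\n')
print(b.take(1))   # still a single line, not a whole YAML document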

It would be possible to pass the linedelimiter into the functions that turn chunks of data into lines (in dask.bag.text, if you are interested).

For now, the workaround is as follows:

delimiter = '\n\n'  # the separator between YAML documents, as in the question
yamls = bag.read_text(r'*.yaml.gz').map_partitions(
    lambda x: '\n'.join(x).split(delimiter))
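
To get Python objects back out, a possible follow-up (not part of the original answer) is to drop any empty records left over by the split and parse the rest with yaml.safe_load:

import yaml

docs = (yamls
        .filter(lambda s: s.strip())   # discard empty records from the split
        .map(yaml.safe_load))
print(docs.compute())                  # e.g. [{1: 2}, {3: 4}]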
