This article looks at "Python 2.7: how do I read just a few lines at a time from a file?" and how to handle it. It should be a useful reference for anyone facing the same problem, so follow along below to learn more.

Problem description

For example, I have 2,000 lines in a file, and I want to read 500 lines at a time and do something with those 500 lines before reading the next 500. I wonder if anyone could write some quick code for me to learn from. Thanks!

Solution

You could use a generator to group the lines together, and yield them in a way that is convenient to use in a simple for loop. This might get you started:

def chunks_of(iterable, chunk_size=500):
    """Yield lists of up to chunk_size items from iterable."""
    out = []
    for item in iterable:
        out.append(item)
        if len(out) >= chunk_size:
            yield out
            out = []
    if out:
        # Yield whatever is left over (fewer than chunk_size items).
        yield out

You can then use this like:

for chunk_of_lines in chunks_of(file('/path/to/file'), chunk_size=500):
    # chunk_of_lines is a list of 500 or fewer lines from the file;
    # do your per-chunk work with it here.
    pass

(Why "500 or fewer"? Because the last chunk might not be 500 lines if the number of lines in the file was not an even multiple of 500.)

Edit: Always check the docs first. Here's a recipe from the itertools docs

from itertools import izip_longest

def grouper(n, iterable, fillvalue=None):
    "grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx"
    args = [iter(iterable)] * n
    return izip_longest(fillvalue=fillvalue, *args)

This creates a list containing n references to a single iterator over the iterable (in this case, the file object). Because every slot in the list is the same underlying iterator, each value that izip_longest pulls advances it, so n consecutive items end up zipped together into one group. izip_longest works like izip, except that it pads the final, incomplete group out to n items with the fillvalue instead of dropping it; my chunks_of function, by contrast, simply yields a shorter final chunk.
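
To make that padding difference concrete, here is a short sketch run against the docstring's example string. It assumes both the grouper recipe above and the earlier chunks_of generator are already defined:

# grouper pads the final, incomplete group up to n with the fillvalue...
print [''.join(group) for group in grouper(3, 'ABCDEFG', fillvalue='x')]
# prints: ['ABC', 'DEF', 'Gxx']

# ...while chunks_of just yields a shorter final chunk, with no padding.
print [chunk for chunk in chunks_of('ABCDEFG', chunk_size=3)]
# prints: [['A', 'B', 'C'], ['D', 'E', 'F'], ['G']]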

That wraps up this article on "Python 2.7: how do I read just a few lines at a time from a file?". We hope the answer above is helpful, and thanks for your continued support!
