This article covers how to batch select arrays from HDF5 files and apply NumPy calculations to them; the question and a recommended answer follow.
Problem description
How can I (1) batch select all arrays in an HDF5 file, then (2) apply a calculation to each of those arrays, and finally (3) batch create the resulting arrays in another HDF5 file?
For example:
import numpy
import tables
file = tables.openFile('file1', 'r')     # source file
newfile = tables.openFile('file2', 'w')  # destination file
array1 = file.root.array1
array1_cal = (array1[:] <= 1)            # read the data, then apply the calculation
newfile.createArray('/', 'array1_cal', array1_cal)
array2 = file.root.array2
array2_cal = (array2[:] <= 1)
newfile.createArray('/', 'array2_cal', array2_cal)
I have 100+ arrays in a single HDF5 file, and several such HDF5 files. How can I batch process them? Thanks a lot.
Recommended answer
With PyTables you can use the walkNodes function to recursively iterate through the nodes. Here is an example:
# Recursively process all the EArray nodes hanging from '/detector'.
print "Nodes hanging from group '/detector':"
for node in h5file.walkNodes('/detector', classname='EArray'):
    data = node[:]                                         # read the array into memory
    result = (data <= 1)                                   # do some calculation
    newfile.createArray('/', node.name + '_cal', result)   # store new array in second file
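Putting steps (1)-(3) together, here is a minimal end-to-end sketch using the modern snake_case PyTables API (open_file, walk_nodes, create_array). The file names file1.h5 and file2.h5 and the sample data are assumptions for illustration:

```python
import numpy as np
import tables

# Create a sample source file (names and values are hypothetical).
with tables.open_file('file1.h5', 'w') as f:
    f.create_array('/', 'array1', np.array([0.5, 1.0, 2.5]))
    f.create_array('/', 'array2', np.array([3.0, 0.2, 1.0]))

# Batch process: walk every Array node in the source file,
# apply the calculation, and create a matching node in the
# destination file under the new name '<name>_cal'.
with tables.open_file('file1.h5', 'r') as src, \
     tables.open_file('file2.h5', 'w') as dst:
    for node in src.walk_nodes('/', classname='Array'):
        data = node[:]                                  # read the whole array
        dst.create_array('/', node.name + '_cal', data <= 1)

# Inspect one of the results.
with tables.open_file('file2.h5', 'r') as f:
    print(f.root.array1_cal[:])   # [ True  True False]
```

Filtering on classname='Array' skips groups and tables, so the loop handles 100+ arrays without naming each one; repeating the outer block per source file covers multiple HDF5 files.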