How to read a subsection of a TIF file to make importing faster?

Problem Description

I need to read thousands of TIF files (3500x3500 in size) in a loop.

And imread is the biggest bottleneck. I only work on a small section of the image, for which I have the row-col extent.

Is there any way to import just a subsection of the image to speed up the import process substantially? Any other suggestions?

Here is the import part of the code:

for m = 1:length(pFileNames)
    if ~exist(pFileNames{m}, 'file')    % skip missing files
        continue;
    end
    pConus = imread(pFileNames{m});     % reads the whole 3500x3500 image
end

P.S. I tried to use PixelRegions. But I have MATLAB 2014, and I get this error:

Undefined function or variable 'PixelRegion'.
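
For reference, the imread option is called 'PixelRegion' (singular) and must be passed as a quoted string together with a cell array of row and column extents; the "Undefined function or variable" error typically means the name was used as a bare identifier instead of a quoted option name. A minimal sketch, where rowRange and colRange are hypothetical [start stop] vectors for the region of interest:

% Sketch only (not the original poster's code): read just the requested block of a TIFF.
% rowRange and colRange are assumed [start stop] extents, e.g. rows 1000-1100.
rowRange = [1000 1100];
colRange = [2000 2100];
pConus = imread(pFileNames{m}, 'PixelRegion', {rowRange, colRange});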


Recommended Answer

Consider using vips at the command line to extract the area you want from each image, with a command like:

vips extract_area INPUT.TIF OUTPUT.TIF left top width height

Then combine that with GNU Parallel to do 4 or 8 at a time, something like this:

parallel vips extract_area {} sub_{} left top width height ::: *.tif
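
Once the crops exist, the MATLAB loop only has to read the small sub-images, which is where the speed-up comes from. A minimal sketch, assuming the sub_ output prefix from the parallel command above and that the crops sit alongside the originals:

% Sketch only: read the pre-cropped files written by vips above.
for m = 1:length(pFileNames)
    [fdir, fname, fext] = fileparts(pFileNames{m});
    subFile = fullfile(fdir, ['sub_' fname fext]);
    if ~exist(subFile, 'file')      % skip files that were not cropped
        continue;
    end
    pConus = imread(subFile);       % small crop, so imread is fast
end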

I suggest you make a backup before you start experimenting...

Benchmark

I created 1,000 TIF images of random data, all sized 3500x3500 pixels, and then ran the GNU Parallel + vips command above to extract a 100x100-pixel area from each of the 1,000 TIFs.

On a reasonably specced iMac, the 1,000 sub-images were extracted and written to disk in 11 seconds.
