This article covers how image size is handled by Tensorflow for Poets (Inception v3); the question and answer below may be a useful reference if you run into the same issue.

Problem Description

I am training my own image set using Tensorflow for Poets as an example.

What size do the images need to be? I have read that the script automatically resizes the images for you, but what size does it resize them to? Can you pre-resize your images to that size to save disk space (10,000 images at 1 MB each)?

How does it crop the images: does it chop off part of the image, add white/black bars, or change the aspect ratio?

Also, I think Inception v3 uses 299x299 images. What if your image recognition task requires more detailed accuracy? Is it possible to increase the network's input image size, say to 598x598?

Recommended Answer

I don't know what resizing option this implementation uses; if you haven't found it in the documentation, then I expect we'd need to read the code.
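As a starting point for that reading, here is a minimal sketch of the kind of preprocessing a retraining script like this typically performs, assuming it simply decodes each JPEG and resizes it bilinearly to the model's 299x299 input (no cropping and no letterboxing, so the aspect ratio is not preserved). The function name and the [-1, 1] scaling are illustrative assumptions; check the actual retrain.py to confirm the behavior.

```python
import tensorflow as tf

# Hypothetical preprocessing resembling what the retraining script does:
# decode the JPEG, then bilinearly resize straight to the model's 299x299
# input. A plain resize squashes or stretches the image -- no cropping and
# no black/white bars -- so the aspect ratio is not preserved.
def load_for_inception_v3(path, target_size=(299, 299)):
    raw = tf.io.read_file(path)
    image = tf.io.decode_jpeg(raw, channels=3)
    image = tf.image.resize(image, target_size, method="bilinear")
    # Inception v3 checkpoints typically expect inputs scaled to [-1, 1].
    return image / 127.5 - 1.0
```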

The images can be of any size. Yes, you can shrink your images to save disk space. However, note that you lose image detail; there is no way to recover the lost information.
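If you do decide to shrink the dataset up front, a one-off script along the following lines would do it. The function name, the 512-pixel cap, and the JPEG quality setting are all hypothetical choices; anything below the network's 299x299 input size is lost for good, so keep the cap comfortably above 299 if you are unsure.

```python
from pathlib import Path
from PIL import Image

# Hypothetical one-off script: shrink every JPEG under src_dir so its longer
# side is at most max_side pixels, writing the result to dst_dir with the
# same directory layout (one folder per label).
def shrink_dataset(src_dir, dst_dir, max_side=512, quality=90):
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).rglob("*.jpg"):
        with Image.open(path) as img:
            img = img.convert("RGB")
            img.thumbnail((max_side, max_side))   # preserves aspect ratio
            out = dst / path.relative_to(src_dir)
            out.parent.mkdir(parents=True, exist_ok=True)
            img.save(out, "JPEG", quality=quality)
```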

The good news is that you shouldn't need to: CNN models are built for an input size that contains enough detail to handle the problem at hand. Greater image detail generally does not translate into greater classification accuracy, and doubling the image resolution is usually just a waste of storage.

If you wanted higher-resolution input anyway, you'd have to edit the code to accept the larger "native" image size. Then you'd have to alter the model topology to account for the greater input size: either a larger step-down factor somewhere (which could defeat the purpose of the greater resolution), or another layer on the model to capture the larger size.
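The Tensorflow for Poets script itself is hard-wired to the 299x299 Inception v3 graph, but as an illustration of the kind of change described above, the tf.keras version of the same topology can be built for a larger input once the size-specific classification head is dropped. The 598x598 shape and the five-class head below are assumptions for the sake of the example, not part of the original script.

```python
import tensorflow as tf

# Illustration only: the Inception v3 convolutional stack is size-agnostic,
# so tf.keras can build it for a 598x598 input once the 299x299-specific
# classification head is dropped (include_top=False). You then attach your
# own pooling and dense head, which is roughly what "another layer to
# capture the larger size" amounts to.
base = tf.keras.applications.InceptionV3(
    include_top=False,
    weights="imagenet",
    input_shape=(598, 598, 3),
    pooling="avg",
)
num_classes = 5  # hypothetical number of categories in your image set
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
```

Note that this only changes what the network accepts; whether the extra resolution helps is the question the rest of the answer addresses.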

To get a more accurate model, you generally need a stronger network topology; 2x resolution does not give the network much more information with which to differentiate a horse from a school bus.
