I tried to create my own Deep Dream algorithm with the following code:

import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import inception

img = np.random.rand(1,500,500,3)
net = inception.get_inception_model()
tf.import_graph_def(net['graph_def'], name='inception')
graph = tf.get_default_graph()
sess = tf.Session()
layer = graph.get_tensor_by_name('inception/mixed5b_pool_reduce_pre_relu:0')
gradient = tf.gradients(tf.reduce_mean(layer), graph.get_tensor_by_name('inception/input:0'))
softmax = sess.graph.get_tensor_by_name('inception/softmax2:0')
iters = 100
init = tf.global_variables_initializer()

sess.run(init)
for i in range(iters):
    prediction = sess.run(softmax, \
                          {'inception/input:0': img})
    grad = sess.run(gradient[0], \
                          {'inception/input:0': img})
    grad = (grad-np.mean(grad))/np.std(grad)
    img = grad
    plt.imshow(img[0])
    plt.savefig('output/'+str(i+1)+'.png')
    plt.close('all')

But even after this loop has run for 100 iterations, the resulting picture still looks random (said picture is attached below).

[attached image: the output after 100 iterations, still indistinguishable from noise]

Can someone please help me optimize my code?

Best answer

Using the Inception network for Deep Dream is a little fiddly. In the CADL course from which you have borrowed the helper library, the instructor chooses VGG16 as the teaching network instead. If you use that and make a few small modifications to your code, you should get something that works (if you swap the Inception network back in here, it will sort of work, but the results will look more disappointing):

import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import vgg16 as vgg

# Note reduced range of image, your noise function was drowning
# out the few textures that you were getting
img = np.random.rand(1,500,500,3) * 0.1 + 0.45
net = vgg.get_vgg_model()
tf.import_graph_def(net['graph_def'], name='vgg')
graph = tf.get_default_graph()
sess = tf.Session()
layer = graph.get_tensor_by_name('vgg/pool4:0')
gradient = tf.gradients(tf.reduce_mean(layer),
                        graph.get_tensor_by_name('vgg/images:0'))

# You don't need to define or use the softmax layer - TensorFlow
# is smart enough to resolve the computation graph for gradients
# without explicitly running the whole network forward first
iters = 100
# You don't need to init the network variables, everything you need
# is set by the import, plus the placeholder.

for i in range(iters):
    grad = sess.run(gradient[0], {'vgg/images:0': img})

    # You can use all sorts of normalisation, this one is from CADL
    grad /= (np.max(np.abs(grad))+1e-7)

    # You forgot to use += here, and it is best to use a
    # step size even after gradient normalisation
    img += 0.25 * grad
    # Re-normalise the image, to prevent over-saturation
    img = 0.98 * (img - 0.5) + 0.5
    img = np.clip(img, 0.0, 1.0)
    plt.imshow(img[0])
    plt.savefig('output/'+str(i+1)+'.png')
    plt.close('all')
    print(i)
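
For reference, if you do swap the Inception network back in (which, as noted above, tends to give more disappointing results), only the import and the tensor names change. This is just a sketch reusing the names from the question's code; the update loop stays the same apart from the feed key:

import inception  # the CADL helper module used in the question

img = np.random.rand(1,500,500,3) * 0.1 + 0.45
net = inception.get_inception_model()
tf.import_graph_def(net['graph_def'], name='inception')
graph = tf.get_default_graph()
sess = tf.Session()
layer = graph.get_tensor_by_name('inception/mixed5b_pool_reduce_pre_relu:0')
gradient = tf.gradients(tf.reduce_mean(layer),
                        graph.get_tensor_by_name('inception/input:0'))
# ...then run the same update loop as above, feeding {'inception/input:0': img}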

Doing all of this gets you an image that is clearly working, but it still leaves some room for improvement:

[image: the resulting Deep Dream output after 100 iterations]

To get better results, the kind of full-colour images you may have seen online, you would need more changes. For instance, you could re-normalise or blur the image slightly between each iteration; a sketch of that idea follows.
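
As a concrete illustration, here is a minimal sketch of one update step with a light spatial blur between iterations. Note that gaussian_filter from scipy and the sigma value are my own assumptions, not something specified in the answer above:

import numpy as np
from scipy.ndimage import gaussian_filter

def blurred_step(img, grad, step=0.25, sigma=0.5):
    # Normalise the gradient, as in the main loop.
    grad = grad / (np.max(np.abs(grad)) + 1e-7)
    img = img + step * grad
    # Blur only the spatial axes of the (1, H, W, 3) batch; sigma is illustrative.
    img = gaussian_filter(img, sigma=(0, sigma, sigma, 0))
    # Re-centre and clip, as in the main loop, to prevent over-saturation.
    img = 0.98 * (img - 0.5) + 0.5
    return np.clip(img, 0.0, 1.0)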

If you want to get more sophisticated, you could try the TensorFlow Jupyter notebook walk-through, although it is somewhat harder to understand from first principles because it combines multiple ideas; one of those ideas is sketched below.
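
One of the ideas that notebook combines is multi-scale ("octave") processing: run the dream loop at a small image size first, then upscale and repeat. A rough sketch under my own assumptions (scipy's zoom for resizing; the octave count and scale factor are illustrative values, not taken from the notebook):

import numpy as np
from scipy.ndimage import zoom

def dream_octaves(img, run_dream_loop, n_octaves=3, octave_scale=1.4):
    # Shrink to the smallest octave first (spatial axes only).
    shrink = 1.0 / octave_scale ** (n_octaves - 1)
    img = zoom(img, (1, shrink, shrink, 1), order=1)
    for i in range(n_octaves):
        if i > 0:
            # Upscale before dreaming again at the next octave.
            img = zoom(img, (1, octave_scale, octave_scale, 1), order=1)
        img = run_dream_loop(img)  # e.g. the gradient loop shown above
        img = np.clip(img, 0.0, 1.0)
    return img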

The original question is on Stack Overflow: https://stackoverflow.com/questions/45509356/
