Problem description
I am testing Theano on the GPU, using the script provided for this purpose in the tutorial:
# Start gpu_test.py
# From http://deeplearning.net/software/theano/tutorial/using_gpu.html#using-gpu
from theano import function, config, shared, sandbox
import theano.tensor as T
import numpy
import time
vlen = 10 * 30 * 768 # 10 x #cores x # threads per core
iters = 1000
rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], T.exp(x))
print(f.maker.fgraph.toposort())
t0 = time.time()
for i in xrange(iters):
    r = f()
t1 = time.time()
print("Looping %d times took %f seconds" % (iters, t1 - t0))
print("Result is %s" % (r,))
if numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.fgraph.toposort()]):
print('Used the cpu')
else:
print('Used the gpu')
# End gpu_test.py
If I specify floatX=float32, it runs on GPU:
francky@here:/fun$ THEANO_FLAGS='mode=FAST_RUN,device=gpu2,floatX=float32' python gpu_test.py
Using gpu device 2: GeForce GTX TITAN X (CNMeM is disabled)
[GpuElemwise{exp,no_inplace}(<CudaNdarrayType(float32, vector)>), HostFromGpu(Gp
Looping 1000 times took 1.458473 seconds
Result is [ 1.23178029 1.61879349 1.52278066 ..., 2.20771813 2.29967761
1.62323296]
Used the gpu
If I do not specify floatX=float32, it runs on CPU:
francky@here:/fun$ THEANO_FLAGS='mode=FAST_RUN,device=gpu2' python gpu_test.py
Using gpu device 2: GeForce GTX TITAN X (CNMeM is disabled)
[Elemwise{exp,no_inplace}(<TensorType(float64, vector)>)]
Looping 1000 times took 3.086261 seconds
Result is [ 1.23178032 1.61879341 1.52278065 ..., 2.20771815 2.29967753
1.62323285]
Used the cpu
If I specify floatX=float64, it runs on CPU:
francky@here:/fun$ THEANO_FLAGS='mode=FAST_RUN,device=gpu2,floatX=float64' python gpu_test.py
Using gpu device 2: GeForce GTX TITAN X (CNMeM is disabled)
[Elemwise{exp,no_inplace}(<TensorType(float64, vector)>)]
Looping 1000 times took 3.148040 seconds
Result is [ 1.23178032 1.61879341 1.52278065 ..., 2.20771815 2.29967753
1.62323285]
Used the cpu
Why does the floatX flag impact whether the GPU is used in Theano?
I use:

- Theano 0.7.0 (according to pip freeze),
- Python 2.7.6 64 bits (according to import platform; platform.architecture()),
- Nvidia-smi 361.28 (according to nvidia-smi),
- CUDA 7.5.17 (according to nvcc --version),
- GeForce GTX Titan X (according to nvidia-smi),
- Ubuntu 14.04.4 LTS x64 (according to lsb_release -a and uname -i).
I read the documentation on floatX, but it didn't help. It simply says:
This sets the default dtype returned by tensor.matrix(), tensor.vector(), and similar functions. It also sets the default theano bit width for arguments passed as Python floating-point numbers.
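For reference, the same setting can be made persistent in ~/.theanorc instead of being passed through THEANO_FLAGS on every invocation (a sketch; the [global] section and option names follow Theano's configuration documentation):

```ini
[global]
floatX = float32
device = gpu2
```

With this file in place, running `python gpu_test.py` with no THEANO_FLAGS should behave like the float32 invocation above.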
Accepted answer
As far as I know, it's because they haven't yet implemented float64 for GPUs.
See http://deeplearning.net/software/theano/tutorial/using_gpu.html for details.
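The practical consequence is that all data must be kept in float32 for the GPU path to be taken. A minimal numpy-only sketch of the dtype behaviour behind this (it runs without Theano or a GPU; the cast mirrors what floatX=float32 makes Theano's shared() do by default):

```python
import numpy

rng = numpy.random.RandomState(22)

# numpy.random generates float64 samples by default; on Theano 0.7's
# GPU backend, only float32 computations were accelerated.
a = rng.rand(4)
print(a.dtype)  # float64

# Casting explicitly to float32 keeps the data in a GPU-eligible dtype.
b = numpy.asarray(a, dtype=numpy.float32)
print(b.dtype)  # float32
```

This is why the script's `numpy.asarray(rng.rand(vlen), config.floatX)` only produces GPU-compatible data when config.floatX is float32.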