Question
If you make two concurrent calls to the same session, sess.run(...), how are variables concurrently accessed in TensorFlow?
Will each call see a snapshot of the variables as of the moment run was called, consistent throughout the call? Or will the calls see dynamic updates to the variables, with only atomic updates to each individual variable guaranteed?
I'm considering running test set evaluation on a separate CPU thread and want to verify that it's as trivial as running the inference op on a CPU device in parallel.
I'm having trouble figuring out exactly what guarantees are provided that make sessions "thread safe".
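To make the setup concrete, here is a minimal sketch (not from the original question) of two threads sharing one session with the TF1-style API; the toy graph and the names train_op and eval_op are hypothetical stand-ins for a real training step and inference op:

```python
import threading
import tensorflow as tf

w = tf.Variable(0.0)
train_op = tf.assign_add(w, 1.0)   # stand-in for a real training step
with tf.device("/cpu:0"):
    eval_op = w * 2.0              # stand-in for the inference/eval op

sess = tf.Session()
sess.run(tf.global_variables_initializer())

def evaluate():
    # Second thread: concurrent sess.run(...) calls on the same Session object.
    for _ in range(5):
        print("eval:", sess.run(eval_op))

t = threading.Thread(target=evaluate)
t.start()
for _ in range(1000):
    sess.run(train_op)             # main thread keeps training while the eval thread reads w
t.join()
```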
Answer
After doing some experimentation, it appears that each call to sess.run(...) does indeed see a consistent point-in-time snapshot of the variables.
To test this, I ran two big matrix-multiply operations (each taking about 10 seconds to complete) and updated a single dependent variable before, between, and after them. In another thread I read and printed that variable every 1/10 of a second to see whether it picked up the change made between the two operations while the first thread was still running. It did not; I saw only its initial and final values. I therefore conclude that variable changes made inside a specific call to sess.run(...) only become visible outside that call once the run has finished.