Question
I am new to TensorFlow Probability and would like to run a random-walk Monte Carlo simulation. Say I have a tensor r that represents a state. I want the tfp.mcmc.RandomWalkMetropolis function to return a proposal for a new state r'.
tfp.mcmc.RandomWalkMetropolis(r)
>>> <tensorflow_probability.python.mcmc.random_walk_metropolis.RandomWalkMetropolis object at 0x14abed2185c0>
Instead of the same state or a slightly perturbed state, only this RandomWalkMetropolis object is returned. The RandomWalkMetropolis class also has a one_step method, but it requires previous_kernel_results, which I don't have, because this is meant to be my first step. Also, how do I further specify the Metropolis accept/reject step?
Answer
RWM is a Python object, which is used via the bootstrap_results and one_step methods. For example:
# TF/TFP Imports
!pip install --quiet tfp-nightly tf-nightly
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors
import matplotlib.pyplot as plt
# Target distribution: log-density of a standard normal.
def log_prob(x):
  return tfd.Normal(0, 1).log_prob(x)

# Random-walk Metropolis kernel targeting log_prob.
kernel = tfp.mcmc.RandomWalkMetropolis(log_prob)

# Initial state and bootstrapped kernel results (the 'previous_kernel_results' for one_step).
state = tfd.Normal(0, 1).sample()
extra = kernel.bootstrap_results(state)

# Each one_step call proposes a new state and applies the Metropolis accept/reject.
samples = []
for _ in range(1000):
  state, extra = kernel.one_step(state, extra)
  samples.append(state)

plt.hist(samples, bins=20)
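As an aside, the same kernel can also be driven by tfp.mcmc.sample_chain instead of a hand-written loop, and the proposal distribution can be configured via the new_state_fn argument (e.g. tfp.mcmc.random_walk_normal_fn); the accept/reject decisions show up in the kernel results. Here is a minimal sketch, reusing log_prob, tf and tfp from above; the scale of 0.5, the chain length and the burn-in length are just illustrative choices:
# Sketch: drive a random-walk Metropolis kernel with tfp.mcmc.sample_chain.
# scale=0.5, num_results and num_burnin_steps are illustrative values.
rwm = tfp.mcmc.RandomWalkMetropolis(
    log_prob,
    new_state_fn=tfp.mcmc.random_walk_normal_fn(scale=0.5))

samples, is_accepted = tfp.mcmc.sample_chain(
    num_results=1000,
    current_state=tf.zeros([]),
    kernel=rwm,
    num_burnin_steps=200,
    trace_fn=lambda _, kernel_results: kernel_results.is_accepted)

# Fraction of proposals accepted by the Metropolis accept/reject step.
print('acceptance rate:', tf.reduce_mean(tf.cast(is_accepted, tf.float32)).numpy())
Note that RandomWalkMetropolis already performs the Metropolis accept/reject internally; new_state_fn only controls how proposals are generated.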