PyMC: Parameter Estimation in a Markov System


Problem Description


A Simple Markov Chain

Let's say we want to estimate parameters of a system such that we can predict the state of the system at timestep t+1 given the state at timestep t. PyMC should be able to deal with this easily.

Let our toy system consist of an object moving in a 1D world. The state is the position of the object. We want to estimate the latent variable: the speed of the object. The next state depends on the previous state and on the latent variable, the speed.

# define the system and the data
true_vel = .2   # constant velocity of the object
true_pos = 0    # starting position
true_positions = [true_vel * step for step in range(100)]

We assume that we have some noise in our observations (but that does not matter here).
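For concreteness, noisy observations of the true positions could be generated like this (a plain-Python sketch; the seed and the noise level are assumptions not given in the original, with noise_sd chosen to match the tau = 1/(.5**2) used in the model below):

```python
import random

random.seed(42)  # assumed seed, for reproducibility only
true_vel = 0.2
noise_sd = 0.5   # assumed; corresponds to tau = 1/(.5**2) in the model
true_positions = [true_vel * step for step in range(100)]
# each observation is the true position plus gaussian noise
noisy_obs = [p + random.gauss(0, noise_sd) for p in true_positions]
```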

The question is: how do I model the dependency of the next state on the current state? I could supply the transition function with a parameter idx to access the position at time t and then predict the position at time t+1.

import pymc

# prior over the velocity (the original named this node "pos"; "vel" is the intended name)
vel = pymc.Normal("vel", 0, 1/(.5**2))
idx = pymc.DiscreteUniform("idx", 0, 100, value=list(range(100)), observed=True)

@pymc.deterministic
def transition(positions=true_positions, vel=vel, idx=idx):
    return positions[idx] + vel

# observation with gaussian noise
obs = pymc.Normal("obs", mu=transition, tau=1/(.5**2))

However, idx evaluates to an array, which is not suitable for indexing a Python list. There is probably a better way to access the previous state.

Solution

The easiest way is to generate a list, and allow PyMC to deal with it as a Container. There is a relevant example on the PyMC wiki. Here is the relevant snippet:

# Lognormal distribution of P's
Pmean0 = 0.
P_0 = Lognormal('P_0', mu=Pmean0, tau=isigma2, trace=False, value=P_inits[0])
P = [P_0]

# Recursive step: the mean of each node is a function of the previous node
for i in range(1, nyears):
    Pmean = Lambda("Pmean", lambda P=P[i-1], k=k, r=r: log(max(P + r*P*(1 - P) - k*catch[i-1], 0.01)))
    Pi = Lognormal('P_%i' % i, mu=Pmean, tau=isigma2, value=P_inits[i], trace=False)
    P.append(Pi)

Notice how the mean of the current Lognormal is a function of the previous one? Building the model with an explicit loop and list.append is not elegant, but since each node depends on the one before it, the loop is the straightforward way to construct the chain, and PyMC handles the resulting list as a Container.
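Setting PyMC aside, a quick sanity check in plain Python shows what inference on the toy system should recover: under the transition model, each observed difference obs[t+1] - obs[t] equals vel plus noise, so the mean of the differences estimates the velocity (a sketch under assumed seed and noise level):

```python
import random
import statistics

random.seed(0)   # assumed seed, for reproducibility only
true_vel = 0.2
noise_sd = 0.5   # assumed noise level, matching the model above
# noisy observations of the 1D positions, as in the toy system
obs = [true_vel * t + random.gauss(0, noise_sd) for t in range(100)]

# successive differences have mean true_vel, so their average estimates it
diffs = [b - a for a, b in zip(obs, obs[1:])]
vel_hat = statistics.mean(diffs)
```

This is only a point estimate, of course; the PyMC model above additionally gives a posterior over the velocity.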
