2.6. Precision Issues
In practice, numerical precision becomes a problem. Considering $\alpha_t(i)$ in light of Eq. (8) and Fig. 2, each step of the recursion multiplies by probabilities less than $1$, so the computed $\alpha_t(i)$ shrinks rapidly until it underflows the floating-point range of the machine (even in double precision). More concretely, in the earlier worked example of forward probabilities, $\alpha_t(i)$ visibly tends to $0$ as $t$ grows. The backward probability $\beta_t(i)$ suffers from the same problem. In the classic HMM tutorial paper, the author proposes two remedies: scaling, or taking logarithms.
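As a quick illustration (a hypothetical toy recursion, not the worked example from the text), repeated multiplication of per-step probabilities underflows double precision long before $t$ reaches a few hundred:

```python
# Hypothetical toy recursion: each forward step multiplies by a
# transition probability (~0.5) and an emission probability (~0.1).
alpha = 1.0
for t in range(400):
    alpha *= 0.5 * 0.1      # one "recursion step"
print(alpha)                # 0.0 -- underflowed double precision
```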
Such precision problems can arise in the evaluation problem, the learning problem, and the decoding problem alike. Below we discuss how to solve each of the three while keeping the numbers within a representable range.
2.6.1. The Evaluation Problem
For the forward probabilities, we first introduce two variables, $\bar\alpha_t(i)$ and $\hat\alpha_t(i)$: $\hat\alpha_t(i)$ is the scaled value of $\bar\alpha_t(i)$, and $\bar\alpha_t(i)$ is computed from $\hat\alpha_{t-1}(j),\ 1\le j\le N$.
Define the scaling coefficient
$$c_t = \frac{1}{\sum\limits_{i=1}^N \bar\alpha_t(i)},\qquad 1\le t\le T$$
Note that the scaling coefficient changes from one time step to the next, while all states at the same time step share the same coefficient.
Let $\bar\alpha_1(i) = \alpha_1(i)$. Then $\bar\alpha_t(i)$ and $\hat\alpha_t(i)$ satisfy the following two recursions:
$$\bar\alpha_t(i) = \left[\sum_{j=1}^N \hat\alpha_{t-1}(j)\,a_{ji}\right] b_i(o_t),\qquad 2\le t\le T \tag{27}$$

$$\hat\alpha_t(i) = c_t\,\bar\alpha_t(i),\qquad 1\le t\le T \tag{28}$$
The recursion in Eq. (27), from $\hat\alpha_{t-1}(j)$ to $\bar\alpha_t(i)$, has the same form as Eq. (8); Eq. (28) says that going from $\bar\alpha_t(i)$ to $\hat\alpha_t(i)$ is a pure rescaling, which by the definition of $c_t$ guarantees
$$\sum_{i=1}^N \hat\alpha_t(i) = 1,\qquad 1\le t\le T$$
In particular, when $t = T$,
$$\sum_{i=1}^N \hat\alpha_T(i) = 1 \tag{29}$$
From the recursions (27) and (28) we can further derive the relation between $\hat\alpha_t(i)$ and $\alpha_t(i)$:
$$\hat\alpha_t(i) = \left[\prod_{\tau=1}^t c_\tau\right]\alpha_t(i)$$
For convenience, let

$$C_t = \prod_{\tau=1}^t c_\tau$$

so that

$$\hat\alpha_t(i) = C_t\,\alpha_t(i) \tag{30}$$
The derivation goes as follows: alternately apply the recursions (27) and (28), unrolling from time $t$ back to time $1$.
$$\begin{aligned}
\hat\alpha_t(i) &= c_t\,\bar\alpha_t(i) \\
&= c_t\left[\sum_{j=1}^N \hat\alpha_{t-1}(j)\,a_{ji}\,b_i(o_t)\right] \\
&= c_t\left[\sum_{j=1}^N c_{t-1}\bar\alpha_{t-1}(j)\,a_{ji}\,b_i(o_t)\right] \\
&= c_t c_{t-1}\left[\sum_{j=1}^N \bar\alpha_{t-1}(j)\,a_{ji}\,b_i(o_t)\right] \\
&= c_t c_{t-1}\left[\sum_{j=1}^N \left[\sum_{k=1}^N \hat\alpha_{t-2}(k)\,a_{kj}\,b_j(o_{t-1})\right] a_{ji}\,b_i(o_t)\right] \\
&= c_t c_{t-1}\left[\sum_{j=1}^N \left[\sum_{k=1}^N c_{t-2}\bar\alpha_{t-2}(k)\,a_{kj}\,b_j(o_{t-1})\right] a_{ji}\,b_i(o_t)\right] \\
&= c_t c_{t-1} c_{t-2}\left[\sum_{j=1}^N \left[\sum_{k=1}^N \bar\alpha_{t-2}(k)\,a_{kj}\,b_j(o_{t-1})\right] a_{ji}\,b_i(o_t)\right] \\
&= \dots \\
&= c_t c_{t-1}\dots c_1\left[\sum_{j=1}^N \left[\sum_{k=1}^N \left[\dots\left[\sum_{n=1}^N \bar\alpha_1(n)\,a_{nm}\,b_m(o_2)\right]\dots\right] a_{kj}\,b_j(o_{t-1})\right] a_{ji}\,b_i(o_t)\right] \\
&= c_t c_{t-1}\dots c_1\left[\sum_{j=1}^N \left[\sum_{k=1}^N \left[\dots\left[\sum_{n=1}^N \alpha_1(n)\,a_{nm}\,b_m(o_2)\right]\dots\right] a_{kj}\,b_j(o_{t-1})\right] a_{ji}\,b_i(o_t)\right] \\
&= c_t c_{t-1}\dots c_1\left[\sum_{j=1}^N \left[\sum_{k=1}^N \Big[\dots\Big[\alpha_2(m)\Big]\dots\Big]\, a_{kj}\,b_j(o_{t-1})\right] a_{ji}\,b_i(o_t)\right] \\
&= c_t c_{t-1}\dots c_1\left[\sum_{j=1}^N \left[\sum_{k=1}^N \alpha_{t-2}(k)\,a_{kj}\,b_j(o_{t-1})\right] a_{ji}\,b_i(o_t)\right] \\
&= c_t c_{t-1}\dots c_1\left[\sum_{j=1}^N \alpha_{t-1}(j)\,a_{ji}\,b_i(o_t)\right] \\
&= c_t c_{t-1}\dots c_1\,\alpha_t(i) \\
&= C_t\,\alpha_t(i)
\end{aligned}$$
This establishes Eq. (30).
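The scaled forward pass follows directly from Eqs. (27) and (28). Below is a minimal sketch for a discrete-emission HMM, assuming hypothetical names: a row-stochastic transition matrix `A` ($a_{ij}$), an emission matrix `B` ($b_i(v_k)$), an initial distribution `pi`, and an integer-coded observation sequence `obs`:

```python
import numpy as np

def scaled_forward(A, B, pi, obs):
    """Scaled forward pass (Eqs. 27-28); returns hat_alpha (T x N) and c (T,)."""
    T, N = len(obs), len(pi)
    hat_alpha = np.empty((T, N))
    c = np.empty(T)
    # t = 1: bar_alpha_1(i) = alpha_1(i) = pi_i * b_i(o_1)
    bar = pi * B[:, obs[0]]
    c[0] = 1.0 / bar.sum()
    hat_alpha[0] = c[0] * bar
    for t in range(1, T):
        # Eq. (27): bar_alpha_t(i) = [sum_j hat_alpha_{t-1}(j) a_ji] * b_i(o_t)
        bar = (hat_alpha[t - 1] @ A) * B[:, obs[t]]
        # Eq. (28): hat_alpha_t(i) = c_t * bar_alpha_t(i), c_t = 1 / sum_i bar_alpha_t(i)
        c[t] = 1.0 / bar.sum()
        hat_alpha[t] = c[t] * bar
    return hat_alpha, c
```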
For the backward probabilities, comparing their recursion, Eq. (11), with the forward recursion, Eq. (8), shows that forward and backward probabilities have roughly comparable magnitudes. Scaling is essentially about keeping values inside the representable range; the exact coefficients are irrelevant, and reusing the forward coefficients already brings the backward probabilities into a safe range. We therefore scale the backward probabilities with the same coefficients $c_t$.
Analogously to the scaled forward probabilities, $\bar\beta_t(i)$ and $\hat\beta_t(i)$ satisfy the recursions below, with the initialization $\bar\beta_T(i) = \beta_T(i) = 1$:
$$\bar\beta_t(i) = \sum_{j=1}^N a_{ij}\,b_j(o_{t+1})\,\hat\beta_{t+1}(j),\qquad 1\le t\le T-1$$

$$\hat\beta_t(i) = c_t\,\bar\beta_t(i),\qquad 1\le t\le T$$
The first is consistent with Eq. (11); the second parallels Eq. (28).
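A matching sketch of the scaled backward pass, reusing the coefficients `c` returned by the `scaled_forward` sketch above (same hypothetical names):

```python
def scaled_backward(A, B, obs, c):
    """Scaled backward pass, reusing the forward scaling coefficients c."""
    T, N = len(obs), A.shape[0]
    hat_beta = np.empty((T, N))
    # bar_beta_T(i) = beta_T(i) = 1, so hat_beta_T(i) = c_T
    hat_beta[T - 1] = c[T - 1]
    for t in range(T - 2, -1, -1):
        # bar_beta_t(i) = sum_j a_ij b_j(o_{t+1}) hat_beta_{t+1}(j)
        bar = A @ (B[:, obs[t + 1]] * hat_beta[t + 1])
        hat_beta[t] = c[t] * bar
    return hat_beta
```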
Using these two recursions and arguing as in the derivation of Eq. (30), we obtain

$$\hat\beta_t(i) = \left[\prod_{\tau=t}^{T} c_\tau\right]\beta_t(i)$$

where the product runs from $\tau = t$ up to $T$ (not from $\tau = 1$), because the backward recursion starts from $\bar\beta_T(i) = 1$ and only accumulates the coefficients $c_t, c_{t+1}, \dots, c_T$. For convenience, let

$$D_t = \prod_{\tau=t}^{T} c_\tau$$

so that

$$\hat\beta_t(i) = D_t\,\beta_t(i) \tag{31}$$
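Relations (30) and (31) can be checked numerically on a short toy sequence (short enough that the unscaled recursions do not underflow yet); the model below is randomly generated and purely illustrative:

```python
# Toy check of Eqs. (30) and (31) using the two sketches above.
rng = np.random.default_rng(0)
N, M, T = 3, 2, 6
A = rng.dirichlet(np.ones(N), N)     # row-stochastic transition matrix
B = rng.dirichlet(np.ones(M), N)     # emission probability matrix
pi = rng.dirichlet(np.ones(N))
obs = rng.integers(M, size=T)

ha, c = scaled_forward(A, B, pi, obs)
hb = scaled_backward(A, B, obs, c)

# Unscaled forward/backward recursions (Eqs. 8 and 11).
alpha = np.empty((T, N)); alpha[0] = pi * B[:, obs[0]]
for t in range(1, T):
    alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
beta = np.empty((T, N)); beta[T - 1] = 1.0
for t in range(T - 2, -1, -1):
    beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

C = np.cumprod(c)                    # C_t = prod_{tau <= t} c_tau
D = np.cumprod(c[::-1])[::-1]        # D_t = prod_{tau >= t} c_tau
assert np.allclose(ha, C[:, None] * alpha)   # Eq. (30)
assert np.allclose(hb, D[:, None] * beta)    # Eq. (31)
```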
All the formulas for expectations and probabilities derived before we discussed precision use the unscaled forward probability $\alpha_t(i)$ and backward probability $\beta_t(i)$. Once scaling is adopted, however, what we actually store are the scaled quantities $\hat\alpha_t(i)$ (and $\bar\alpha_t(i)$) and $\hat\beta_t(i)$ (and $\bar\beta_t(i)$) rather than $\alpha_t(i)$ and $\beta_t(i)$, so those formulas cannot be applied as they stand. We therefore want explicit relations between the scaled and unscaled quantities, i.e., between $\alpha_t(i)$ and $\hat\alpha_t(i)$ (or $\bar\alpha_t(i)$), and between $\beta_t(i)$ and $\hat\beta_t(i)$ (or $\bar\beta_t(i)$), so that the expectations and probabilities can be computed directly from the scaled quantities.
As noted earlier, the evaluation problem asks for $P(O\mid\lambda)$. The forward algorithm computes it as $P(O\mid\lambda) = \sum\limits_{i=1}^N \alpha_T(i)$. Combining Eq. (29) with the scaled-unscaled relation, Eq. (30), we get
$$P(O\mid\lambda) = \sum_{i=1}^N \alpha_T(i) = \sum_{i=1}^N \frac{\hat\alpha_T(i)}{C_T} = \frac{1}{C_T} \tag{32}$$
where $C_T$ is determined by $\bar\alpha_t(i),\ 1\le t\le T$. This fixes the formula for $P(O\mid\lambda)$ under scaling. Note that $1/C_T$ itself would underflow just like $\alpha_T(i)$, so in practice one reports the log-likelihood $\log P(O\mid\lambda) = -\sum_{t=1}^T \log c_t$.
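Continuing the sketch above, the log-likelihood follows directly from the stored coefficients:

```python
def log_likelihood(c):
    # Eq. (32): P(O | lambda) = 1 / C_T, hence log P = -sum_t log c_t
    return -np.sum(np.log(c))
```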
2.6.2. The Learning Problem
The learning problem is to learn the model parameters $A, B, \pi$ with the Baum-Welch algorithm. Eq. (22) gives the update for $a_{ij}$ in terms of $\alpha_t(i)$ and $\beta_t(i)$:
$$a_{ij} = \frac{\sum\limits_{d=1}^D \frac{1}{P_d}\sum\limits_{t=1}^{T-1} \alpha_t^{(d)}(i)\,a_{ij}\,b_j(o_{t+1}^{(d)})\,\beta_{t+1}^{(d)}(j)}{\sum\limits_{d=1}^D \frac{1}{P_d}\sum\limits_{t=1}^{T-1} \alpha_t^{(d)}(i)\,\beta_t^{(d)}(i)}$$
By Eqs. (30) and (31), the update for $a_{ij}$ in terms of $\hat\alpha_t(i)$ (or $\bar\alpha_t(i)$) and $\hat\beta_t(i)$ (or $\bar\beta_t(i)$) is
$$\begin{aligned} a_{ij} &= \frac{\sum\limits_{d=1}^D \frac{1}{P_d}\sum\limits_{t=1}^{T-1} \frac{\hat\alpha_t^{(d)}(i)}{C_t^{(d)}}\,a_{ij}\,b_j(o_{t+1}^{(d)})\,\frac{\hat\beta_{t+1}^{(d)}(j)}{D_{t+1}^{(d)}}}{\sum\limits_{d=1}^D \frac{1}{P_d}\sum\limits_{t=1}^{T-1} \frac{\hat\alpha_t^{(d)}(i)}{C_t^{(d)}}\,\frac{\hat\beta_t^{(d)}(i)}{D_t^{(d)}}} \\ &= \frac{\sum\limits_{d=1}^D \sum\limits_{t=1}^{T-1} \frac{1}{P_d C_t^{(d)} D_{t+1}^{(d)}}\,\hat\alpha_t^{(d)}(i)\,a_{ij}\,b_j(o_{t+1}^{(d)})\,\hat\beta_{t+1}^{(d)}(j)}{\sum\limits_{d=1}^D \sum\limits_{t=1}^{T-1} \frac{1}{P_d C_t^{(d)} D_t^{(d)}}\,\hat\alpha_t^{(d)}(i)\,\hat\beta_t^{(d)}(i)} \end{aligned}$$
Here $C_t^{(d)} D_{t+1}^{(d)} = C_T^{(d)}$ and $C_t^{(d)} D_t^{(d)} = C_T^{(d)} c_t^{(d)}$. Writing $P_d = P(O^{(d)}\mid\lambda)$, Eq. (32) gives $P_d\, C_T^{(d)} = 1$, so the expression simplifies to
$$\begin{aligned} a_{ij} &= \frac{\sum\limits_{d=1}^D \sum\limits_{t=1}^{T-1} \frac{1}{P_d C_T^{(d)}}\,\hat\alpha_t^{(d)}(i)\,a_{ij}\,b_j(o_{t+1}^{(d)})\,\hat\beta_{t+1}^{(d)}(j)}{\sum\limits_{d=1}^D \sum\limits_{t=1}^{T-1} \frac{1}{P_d C_T^{(d)}}\frac{1}{c_t^{(d)}}\,\hat\alpha_t^{(d)}(i)\,\hat\beta_t^{(d)}(i)} \\ &= \frac{\sum\limits_{d=1}^D \sum\limits_{t=1}^{T-1} \hat\alpha_t^{(d)}(i)\,a_{ij}\,b_j(o_{t+1}^{(d)})\,\hat\beta_{t+1}^{(d)}(j)}{\sum\limits_{d=1}^D \sum\limits_{t=1}^{T-1} \hat\alpha_t^{(d)}(i)\,\hat\beta_t^{(d)}(i)\big/c_t^{(d)}} \end{aligned} \tag{33}$$
Similarly, from Eq. (23) we obtain the update for $b_j(k)$ in terms of the scaled quantities. Substituting $\alpha_t^{(d)}(j)\,\beta_t^{(d)}(j) = \hat\alpha_t^{(d)}(j)\,\hat\beta_t^{(d)}(j)\big/\big(C_T^{(d)} c_t^{(d)}\big)$ and cancelling $P_d C_T^{(d)} = 1$ as above leaves a factor $1/c_t^{(d)}$ in every term:
$$b_j(k) = \frac{\sum\limits_{d=1}^D \sum\limits_{t=1,\,o_t^{(d)}=v_k}^{T} \hat\alpha_t^{(d)}(j)\,\hat\beta_t^{(d)}(j)\big/c_t^{(d)}}{\sum\limits_{d=1}^D \sum\limits_{t=1}^{T} \hat\alpha_t^{(d)}(j)\,\hat\beta_t^{(d)}(j)\big/c_t^{(d)}} \tag{34}$$
From Eq. (21), the update for $\pi_i$ in terms of the scaled quantities is
$$\pi_i = \frac{1}{D}\sum_{d=1}^D \frac{\hat\alpha_1^{(d)}(i)\,\hat\beta_1^{(d)}(i)}{c_1^{(d)}} \tag{35}$$
A Baum-Welch implementation that adopts scaling thus updates the model parameters with Eqs. $(33)\sim(35)$.
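A minimal sketch of one such re-estimation step for a single observation sequence ($D = 1$), building on the `scaled_forward` and `scaled_backward` sketches above (all names hypothetical):

```python
def baum_welch_step(A, B, pi, obs):
    """One scaled Baum-Welch update (Eqs. 33-35), single sequence (D = 1)."""
    T, N, M = len(obs), A.shape[0], B.shape[1]
    ha, c = scaled_forward(A, B, pi, obs)
    hb = scaled_backward(A, B, obs, c)

    # gamma_t(i) = hat_alpha_t(i) hat_beta_t(i) / c_t  (the per-term 1/c_t factor)
    gamma = ha * hb / c[:, None]

    # Eq. (33): numerator sums hat_alpha_t(i) a_ij b_j(o_{t+1}) hat_beta_{t+1}(j)
    num_A = np.zeros((N, N))
    for t in range(T - 1):
        num_A += np.outer(ha[t], B[:, obs[t + 1]] * hb[t + 1]) * A
    new_A = num_A / gamma[:T - 1].sum(axis=0)[:, None]

    # Eq. (34): gamma mass restricted to the times where o_t = v_k
    new_B = np.zeros((N, M))
    for t in range(T):
        new_B[:, obs[t]] += gamma[t]
    new_B /= gamma.sum(axis=0)[:, None]

    # Eq. (35): with D = 1, pi_i is just gamma_1(i)
    new_pi = gamma[0]
    return new_A, new_B, new_pi
```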
2.6.3. The Decoding Problem
The goal of the Viterbi algorithm is to find the optimal path; it does not need each path's probability exactly, only the ability to compare path probabilities correctly. Given the form of the Viterbi recursion, and since the scaling approach is relatively cumbersome here, we instead take logarithms to keep the numbers in a valid range.
Taking the logarithm of Eq. (24), define $\Delta_t(i) = \log\delta_t(i)$:
$$\begin{aligned} \Delta_t(i) &= \log \max_{s_1, s_2, \dots, s_{t-1}} P(o_t, o_{t-1}, \dots, o_1, s_t = q_i, s_{t-1}, \dots, s_1 \mid \lambda) \\ &= \max_{s_1, s_2, \dots, s_{t-1}} \log P(o_t, o_{t-1}, \dots, o_1, s_t = q_i, s_{t-1}, \dots, s_1 \mid \lambda) \end{aligned}$$
The initial value is
$$\Delta_1(i) = \log\delta_1(i) = \log\pi_i + \log b_i(o_1)$$
The recursion for $\Delta$ is
$$\Delta_t(i) = \max_{1\le j\le N}\left[\Delta_{t-1}(j) + \log a_{ji}\right] + \log b_i(o_t)$$
The path-recording variable $\Psi$ is initialized and updated as follows:
$$\Psi_1(i) = 0,\qquad 1\le i\le N$$

$$\Psi_t(i) = \mathop{\rm arg\,max}_{1\le j\le N}\left[\Delta_{t-1}(j) + \log a_{ji}\right],\qquad 1\le i\le N$$
The probability $P^*$ of the optimal path satisfies $\log P^* = \max\limits_{1\le i\le N} \Delta_T(i)$; the backtracking procedure is unchanged.
Thus we not only find the optimal path but also obtain its correct probability (in log form). In addition, taking logarithms turns multiplications into additions, reducing the computational cost, and alleviates the precision problem.
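A minimal log-domain Viterbi sketch following the recursions above (hypothetical names; zero probabilities map to $-\infty$, which the max handles naturally):

```python
def viterbi_log(A, B, pi, obs):
    """Log-domain Viterbi; returns the optimal state path and log P*."""
    T, N = len(obs), len(pi)
    with np.errstate(divide="ignore"):    # log(0) -> -inf is acceptable here
        logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    Delta = np.empty((T, N))
    Psi = np.zeros((T, N), dtype=int)
    Delta[0] = logpi + logB[:, obs[0]]    # Delta_1(i) = log pi_i + log b_i(o_1)
    for t in range(1, T):
        # Delta_t(i) = max_j [Delta_{t-1}(j) + log a_ji] + log b_i(o_t)
        scores = Delta[t - 1][:, None] + logA      # scores[j, i]
        Psi[t] = scores.argmax(axis=0)
        Delta[t] = scores.max(axis=0) + logB[:, obs[t]]
    # Backtrack the optimal path; log P* = max_i Delta_T(i)
    path = np.empty(T, dtype=int)
    path[-1] = Delta[-1].argmax()
    for t in range(T - 1, 0, -1):
        path[t - 1] = Psi[t, path[t]]
    return path, Delta[-1].max()
```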
REF
[1] Li Hang. Statistical Learning Methods (统计学习方法), 2nd ed.
[2] Zhou Zhihua. Machine Learning (机器学习).
[3] Zong Chengqing. Statistical Natural Language Processing (统计自然语言处理), 2nd ed.
[4] Hidden Markov Model (HMM), Part 3: Solving for HMM Parameters with the Baum-Welch Algorithm. 博客园 (cnblogs).
[5] Translation of the Classic HMM Tutorial [Rabiner 1989], Part 16. CSDN.
[6] Translation of the Classic HMM Tutorial [Rabiner 1989], Part 17: Multiple Observation Sequences. CSDN.
[7] Machine Learning Whiteboard Derivation Series (14): Hidden Markov Model (HMM). bilibili.
[8] Notes on the Machine Learning Whiteboard Derivation Series (14): Hidden Markov Model (HMM). 知乎 (Zhihu).
[10] The EM Algorithm (Machine Learning). CSDN.
[11] Probabilistic Graphical Models. 知乎 (Zhihu).