
cs188-note10

Markov Models

This section introduces Markov Models, which can be viewed as infinitely long, chain-structured Bayes Nets. The model is indexed by time: each timestep has its own random variable, and the chain is memoryless (the Markov property: the state at time i+1 depends only on the state at time i).

The final typical assumption of Markov Models is that the transition model is stationary. In other words, P(W_{i+1} | W_i) is the same for every value of i (every timestep).
This means a Markov model can be represented with just two tables: one for P(W_0) and one for P(W_{i+1} | W_i).
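As a minimal sketch (the weather states and probability values are my own illustration, not from the note), the two tables could be stored as plain Python dictionaries:

```python
# A tiny weather Markov model. Because the transition model is stationary,
# these two tables fully specify the chain.

# Prior distribution P(W_0)
prior = {"sun": 0.8, "rain": 0.2}

# Transition model P(W_{i+1} | W_i), stored as transition[w_i][w_next]
transition = {
    "sun":  {"sun": 0.9, "rain": 0.1},
    "rain": {"sun": 0.3, "rain": 0.7},
}
```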

The Mini-Forward Algorithm

\[P(W_{i+1}) = \sum_{w_i} P(W_{i+1}|w_i)P(w_i)\]

This is just marginalization: we sum out the previous state W_i to obtain the distribution at the next timestep.
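A hedged sketch of one such update, reusing the hypothetical `prior` and `transition` tables above:

```python
def mini_forward(dist, transition):
    """One mini-forward step: P(W_{i+1}) = sum over w_i of P(W_{i+1} | w_i) * P(w_i)."""
    new_dist = {w: 0.0 for w in dist}
    for w_i, p_wi in dist.items():
        for w_next, p_trans in transition[w_i].items():
            new_dist[w_next] += p_trans * p_wi
    return new_dist

# Distribution over the weather at timestep 1, starting from the prior.
p_w1 = mini_forward(prior, transition)  # ≈ {'sun': 0.78, 'rain': 0.22} for the example tables
```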

Stationary Distribution

The stationary distribution describes how the state distribution of a Markov chain settles down in the long run. Concretely, it is the distribution reached when, after enough transitions, the probability of each state no longer changes from one timestep to the next.

\[P(W_{t+1}) = P(W_t)\]
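One numerical way to find it (a sketch, assuming the chain actually converges, and reusing the hypothetical tables and `mini_forward` above) is to apply the mini-forward update until the distribution stops changing:

```python
def stationary_distribution(prior, transition, tol=1e-10):
    """Iterate the mini-forward update until P(W_{t+1}) is numerically equal to P(W_t)."""
    dist = dict(prior)
    while True:
        new_dist = mini_forward(dist, transition)
        if all(abs(new_dist[w] - dist[w]) < tol for w in dist):
            return new_dist
        dist = new_dist

# For the example tables this converges to roughly {'sun': 0.75, 'rain': 0.25}.
```

The same fixed point can also be found directly by solving the linear system given by the equation above together with the constraint that the probabilities sum to 1.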

Hidden Markov Models

A Hidden Markov Model is just a Markov Model in which, at each timestep, we additionally observe some evidence.

So we now have two different kinds of nodes: state variables and evidence variables.

As a final point on notation, we’ll define the belief distribution at time i with all evidence F_1, ..., F_i observed up to date:

\[B(W_i) = P(W_i | f_1, \ldots, f_i)\]

Similarly, we’ll define B'(W_i) as the belief distribution at time i with evidence f_1, ..., f_{i-1} observed:

\[B'(W_i) = P(W_i | f_1, \ldots, f_{i-1})\]

The Forward Algorithm

We can simplify the expressions derived above further and obtain two very important relationships.

\[B'(W_{i+1}) = \sum_{w_i}P(W_{i+1}|w_i)B(w_i)\]

\[B(W_{i+1}) \propto P(f_{i+1} | W_{i+1}) B'(W_{i+1})\]
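The second relation is essentially Bayes' rule plus the HMM assumption that the evidence at time i+1 depends only on the hidden state at time i+1; written out:

\[B(W_{i+1}) = P(W_{i+1} | f_1, \ldots, f_{i+1}) \propto P(f_{i+1} | W_{i+1}, f_1, \ldots, f_i)\,P(W_{i+1} | f_1, \ldots, f_i) = P(f_{i+1} | W_{i+1})\,B'(W_{i+1})\]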

Combining the two relationships, we obtain the forward algorithm:

\[B(W_{i+1}) \propto P(f_{i+1} | W_{i+1})\sum_{w_i} P(W_{i+1} | w_i) B(w_i)\]

As a parting note, the normalization trick discussed above can actually simplify computation significantly when working with Hidden Markov Models. If we began with some initial distribution and were interested in computing the belief distribution at time t, we could use the forward algorithm to iteratively compute B(W_1), ..., B(W_t) and normalize only once at the end by dividing each entry in the table for B(W_t) by the sum of its entries.
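A hedged Python sketch of the forward algorithm under these definitions, reusing `mini_forward`, `prior`, and `transition` from the sketches above; the `emission` table and the umbrella observations are my own illustration of P(F_i | W_i) and of an evidence sequence:

```python
def forward(prior, transition, emission, evidence):
    """Compute B(W_t) given evidence f_1, ..., f_t, normalizing only once at the end.

    emission[w][f] is assumed to hold P(F = f | W = w).
    """
    belief = dict(prior)  # B(W_0) = P(W_0)
    for f in evidence:
        # Time elapse: B'(W_{i+1}) = sum_{w_i} P(W_{i+1} | w_i) B(w_i)
        belief = mini_forward(belief, transition)
        # Observe: B(W_{i+1}) is proportional to P(f_{i+1} | W_{i+1}) B'(W_{i+1})
        belief = {w: emission[w][f] * p for w, p in belief.items()}
    total = sum(belief.values())
    return {w: p / total for w, p in belief.items()}

# Hypothetical emission model: whether an umbrella is seen given the weather.
emission = {
    "sun":  {"umbrella": 0.2, "no_umbrella": 0.8},
    "rain": {"umbrella": 0.9, "no_umbrella": 0.1},
}
print(forward(prior, transition, emission, ["umbrella", "umbrella"]))
```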

Viterbi Algorithm

See the original note for details.

Particle Filtering

See the original note for details.

This post is licensed under CC BY 4.0 by the author.