\section{Classification Model}
\label{markov}
In this section we describe the $k$ Markov chains model, justify our choice of model, and derive the EM updates.
\subsection{$k$-Markov chains model}
Consider the standard Markov model defined by
\begin{eqnarray*}
p(x_1, \ldots, x_m) = t(x_1) \prod_{i=2}^m t(x_i \vert x_{i-1})
\end{eqnarray*}
where $t(x_1)$ is the probability that $x_1$ is the initial state and the $t(x_i \vert x_{i-1})$ are the transition probabilities.
Similar to Gaussian mixture models (GMMs), we can take a mixture of $k$ Markov chains. A simple extension of the basic mixture model to allow variable-length sequences gives the final model we used:
\begin{eqnarray*}
p(x_1, \ldots, x_m) = \sum_{z=1}^k q(z) \left(t_z(x_1)\, t_z(END \vert x_m) \prod_{i=2}^m t_z(x_i \vert x_{i-1})\right)
\end{eqnarray*}
where $q(z)$ is the mixture weight of chain $z$ and $END$ is a special terminal state, which lets the model assign probability to sequences of varying length.
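To make the model concrete, the mixture likelihood above can be evaluated directly by summing the per-chain sequence probabilities. The sketch below is in Python (our actual implementation was in MATLAB); the array layout is a hypothetical encoding chosen for illustration: states are integer ids, \texttt{q} holds the mixture weights, and \texttt{t\_init}, \texttt{t\_trans}, \texttt{t\_end} hold $t_z(x_1)$, $t_z(x' \vert x)$, and $t_z(END \vert x)$ respectively.

```python
import numpy as np

def sequence_likelihood(seq, q, t_init, t_trans, t_end):
    """p(x_1, ..., x_m) under a mixture of k Markov chains.

    Hypothetical parameterization (illustrative, not from the writeup):
      q       : (k,)       mixture weights q(z)
      t_init  : (k, S)     initial-state probabilities t_z(x_1)
      t_trans : (k, S, S)  transition probabilities t_z(x' | x)
      t_end   : (k, S)     end-of-sequence probabilities t_z(END | x)
    seq is a non-empty list of integer state ids.
    """
    k = len(q)
    total = 0.0
    for z in range(k):
        # q(z) * t_z(x_1), then multiply in each transition and END
        p = q[z] * t_init[z, seq[0]]
        for a, b in zip(seq, seq[1:]):
            p *= t_trans[z, a, b]
        p *= t_end[z, seq[-1]]
        total += p
    return total
```

Note that for each chain $z$, the outgoing transition probabilities from a state together with its $END$ probability must sum to one.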
\subsection{Model Choice}
We chose this model because it fits our goal of clustering different patterns of sequential interaction. Specifically:
\begin{enumerate}
\item Mixture models are a natural way to uncover clusters in an unsupervised fashion by examining the component models of the mixture.
\item The temporal relationships between method calls are more interesting and informative than, say, the counts of particular method calls. Markov chains capture this in a simple way by assuming that the system depends only on the previous state. While program interaction is not solely determined by the previous function call (in fact, there may be add/remove sequences with context-free structure), we reasoned that a simpler model would make learning easier while still yielding insight into API interaction\footnote{We also considered other classification methods ($k$-means clustering, SVMs) with pairs of function calls as features to capture the temporal nature of our data, but Markov chains model this more directly.}.
\end{enumerate}
We also considered hidden Markov models (HMMs), but it was not clear to us what the hidden states would represent. The learning problem would also have been considerably more difficult with a mixture of HMMs, since we would need to choose both the number of mixture components and the number of hidden states.
\subsection{Maximum Likelihood Estimation}
As in our derivation of the EM algorithm for other models, we first consider maximum likelihood estimation in the case where we observe exactly which Markov chain each sequence was drawn from. In this case, similar to GMMs, the maximum likelihood estimates for the parameters of our model are simple. Suppose that we have $n$ training sequences $\underline{x}_1, \ldots, \underline{x}_n$ and define $\delta(z, i)$ to be 1 iff sample $i$ was drawn from chain $z$. The maximum likelihood estimates for our parameters are:
\begin{eqnarray*}
n(z) & = & \sum_{i=1}^n \delta(z, i), \qquad q^*(z) = \frac{n(z)}{n}\\
t_z^*(a) & = & \frac{\sum_{i=1}^n \delta(z, i) [[ x_{i1} == a]]}{\sum_{i=1}^n \delta(z, i)}\\
t_z^*(b \vert a) & = & \frac{\sum_{i=1}^n \delta(z, i) \sum_{j=1}^{|\underline{x}_i| - 1} [[ x_{ij} == a \wedge x_{i,j+1} == b]]}{\sum_{i=1}^n \delta(z, i) \sum_{j=1}^{|\underline{x}_i|} [[ x_{ij} == a]]}\\
t_z^*(END \vert a) & = & \frac{\sum_{i=1}^n \delta(z, i) [[ x_{i,|\underline{x}_i|} == a]]}{\sum_{i=1}^n \delta(z, i) \sum_{j=1}^{|\underline{x}_i|} [[ x_{ij} == a]]}
\end{eqnarray*}
where $a$ and $b$ range over states and $x_{ij}$ denotes the $j^{th}$ observation of sequence $\underline{x}_i$. Note that every occurrence of a state, including the last position of a sequence, transitions either to another state or to $END$, so the denominators sum over all $|\underline{x}_i|$ positions rather than $|\underline{x}_i| - 1$.
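The counting estimates above can be sketched as follows. This is an illustrative Python version (our implementation was in MATLAB) using the same hypothetical array layout as before; it assumes every chain label appears at least once, and states never visited under a chain are guarded against division by zero rather than smoothed.

```python
import numpy as np

def mle_hard(seqs, labels, k, S):
    """ML estimates when each sequence's generating chain z is observed.

    Hypothetical encoding: seqs is a list of integer sequences,
    labels[i] = z is the chain that generated seqs[i], S is the
    number of states.  Returns (q, t_init, t_trans, t_end).
    """
    n = len(seqs)
    q = np.zeros(k)
    init = np.zeros((k, S))
    trans = np.zeros((k, S, S))
    end = np.zeros((k, S))
    visits = np.zeros((k, S))  # denominator: every position counted
    for seq, z in zip(seqs, labels):
        q[z] += 1
        init[z, seq[0]] += 1
        for a, b in zip(seq, seq[1:]):
            trans[z, a, b] += 1
        end[z, seq[-1]] += 1        # last position transitions to END
        for s in seq:
            visits[z, s] += 1
    q /= n
    t_init = init / init.sum(axis=1, keepdims=True)
    visits = np.maximum(visits, 1e-12)   # avoid 0/0 for unvisited states
    t_trans = trans / visits[:, :, None]
    t_end = end / visits
    return q, t_init, t_trans, t_end
```

Normalizing transitions and $END$ by the same visit counts ensures that, for each state, the outgoing transition probabilities plus the $END$ probability sum to one.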
Now that we have the estimates for the case where we know which Markov chain each sequence was generated from, we can extend this to the case where we only have a probability distribution over the chains that could have generated each sequence. All of the estimates stay the same, except that we change how the $\delta(z, i)$ are computed. Rather than being strictly 0 or 1, $\delta(z, i)$ now represents the normalized probability that example $i$ was generated by the $z^{th}$ Markov chain. Assuming that $\underline{x}_i$ has $m$ observations, we compute
\begin{eqnarray*}
\delta(z, i) = \frac{q(z)\, t_z(x_{i1})\, t_z(END \vert x_{im}) \prod_{j=2}^{m} t_z(x_{ij} \vert x_{i,j-1})}{\sum_{z'=1}^k q(z')\, t_{z'}(x_{i1})\, t_{z'}(END \vert x_{im}) \prod_{j=2}^{m} t_{z'}(x_{ij} \vert x_{i,j-1})}.
\end{eqnarray*}
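This E-step can be sketched directly from the formula; again this is an illustrative Python version with the same hypothetical array layout, not our MATLAB code.

```python
import numpy as np

def responsibilities(seqs, q, t_init, t_trans, t_end):
    """E-step: delta[z, i] = posterior that sequence i came from chain z.

    Hypothetical encoding: q is (k,), t_init (k, S), t_trans (k, S, S),
    t_end (k, S); seqs is a list of integer sequences.
    """
    k = len(q)
    n = len(seqs)
    joint = np.zeros((k, n))  # q(z) * p(x_i | z), the numerator terms
    for i, seq in enumerate(seqs):
        for z in range(k):
            p = q[z] * t_init[z, seq[0]] * t_end[z, seq[-1]]
            for a, b in zip(seq, seq[1:]):
                p *= t_trans[z, a, b]
            joint[z, i] = p
    # normalizing over z gives delta(z, i)
    return joint / joint.sum(axis=0, keepdims=True)
```

For long sequences the products underflow in floating point, so a practical implementation would accumulate log-probabilities and normalize with a log-sum-exp instead.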
We implemented both the $k$-Markov chains model and the EM algorithm for it in MATLAB, without using any existing libraries or code.