Please take a look at the following video regarding Hidden Markov Models (HMMs), as explained by Michelle Tanco. Michelle Tanco is a member of Teradata’s Advanced Analytics Practice and loves solving analytical problems. She works primarily with Teradata’s Aster technology and has expertise in time series analysis, specifically using Aster’s Hidden Markov Model functions. Attached is the source code she put together for the video so you can follow along. The data (Retail_Web_Clicks) is also contained within Aster Express.
A hidden Markov model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process with unobserved (hidden) states. An HMM can be presented as the simplest dynamic Bayesian network. The mathematics behind the HMM were developed by L. E. Baum and coworkers. It is closely related to earlier work on the optimal nonlinear filtering problem by Ruslan L. Stratonovich, who was the first to describe the forward-backward procedure.
In simpler Markov models (like a Markov chain), the state is directly visible to the observer, and therefore the state transition probabilities are the only parameters. In a hidden Markov model, the state is not directly visible, but the output, dependent on the state, is visible. Each state has a probability distribution over the possible output tokens. Therefore, the sequence of tokens generated by an HMM gives some information about the sequence of states. The adjective 'hidden' refers to the state sequence through which the model passes, not to the parameters of the model; the model is still referred to as a 'hidden' Markov model even if these parameters are known exactly.
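To make the "hidden" part concrete, here is a minimal sketch in Python using a hypothetical two-state weather model (not the model from the video): the hidden chain walks between states, each state emits a visible token from its own distribution, and an observer sees only the tokens.

```python
import random

# Hypothetical two-state HMM for illustration only.
states = ["Rainy", "Sunny"]
transition = {                      # P(next state | current state)
    "Rainy": {"Rainy": 0.7, "Sunny": 0.3},
    "Sunny": {"Rainy": 0.4, "Sunny": 0.6},
}
emission = {                        # P(observed token | hidden state)
    "Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
    "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1},
}

def sample(dist, rng):
    """Draw one outcome from a {outcome: probability} distribution."""
    r, total = rng.random(), 0.0
    for outcome, p in dist.items():
        total += p
        if r < total:
            return outcome
    return outcome  # guard against floating-point round-off

def generate(n_steps, start="Sunny", seed=0):
    """Walk the hidden chain for n_steps; return (hidden, observed)."""
    rng = random.Random(seed)
    hidden, observed, state = [], [], start
    for _ in range(n_steps):
        hidden.append(state)
        observed.append(sample(emission[state], rng))
        state = sample(transition[state], rng)
    return hidden, observed

hidden, observed = generate(5)
print(observed)  # the analyst sees only these tokens, never `hidden`
```

Because the emission distributions differ between states, the token sequence carries information about the hidden state sequence, which is exactly what the learning and decoding tasks below exploit.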
Hidden Markov models are especially known for their application in temporal pattern recognition such as speech, handwriting, and gesture recognition, part-of-speech tagging, musical score following, partial discharges, and bioinformatics.
A hidden Markov model can be considered a generalization of a mixture model in which the hidden (or latent) variables, which control the mixture component selected for each observation, are related through a Markov process rather than being independent of each other. Recently, hidden Markov models have been generalized to pairwise Markov models and triplet Markov models, which allow consideration of more complex data structures and the modelling of nonstationary data.
Please also check out a great blog on Hidden Markov Models from Karthik Guruswamy.
Hidden Markov Model Functions
Unsupervised Learning Given an observation sequence and the number of states, find the model that maximizes the probability of the observed sequence.
Supervised Learning Given an observation sequence and its state sequence, find the model that maximizes the probability of the observed sequence.
Decoding Given the trained model and an observation sequence, find an optimal state sequence.
Evaluation Given the trained model and an observation sequence, find the probability of the sequence.
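The decoding and evaluation tasks above can be sketched in a few lines of Python. This assumes an already-trained, hypothetical two-state model (the same kind of model Aster's HMM functions would produce, though the parameters here are illustrative): the Viterbi algorithm finds the optimal state sequence, and the forward algorithm computes the probability of the observation sequence.

```python
states = ("Rainy", "Sunny")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

def viterbi(obs):
    """Decoding: the most likely hidden state sequence for `obs`."""
    # V[s] = probability of the best path ending in state s so far
    V = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    path = {s: [s] for s in states}
    for token in obs[1:]:
        new_V, new_path = {}, {}
        for s in states:
            prob, prev = max((V[p] * trans_p[p][s] * emit_p[s][token], p)
                             for p in states)
            new_V[s] = prob
            new_path[s] = path[prev] + [s]
        V, path = new_V, new_path
    best = max(states, key=lambda s: V[s])
    return path[best]

def forward(obs):
    """Evaluation: total probability of observing `obs` under the model."""
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for token in obs[1:]:
        alpha = {s: sum(alpha[p] * trans_p[p][s] for p in states)
                    * emit_p[s][token]
                 for s in states}
    return sum(alpha.values())

obs = ["walk", "shop", "clean"]
print(viterbi(obs))   # decoding  → ['Sunny', 'Rainy', 'Rainy']
print(forward(obs))   # evaluation: likelihood of the whole sequence
```

The two training tasks differ only in their inputs: supervised learning estimates `trans_p` and `emit_p` by counting transitions and emissions in labeled sequences, while unsupervised learning (Baum-Welch) iterates forward-backward passes like the one above to fit them from observations alone.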