Learn Machine Learning in 10 Minutes: Naive Bayes

The Naive Bayes algorithm is very simple, yet surprisingly effective. A training data set (for which we know discrete outcomes and either discrete or continuous input variables) is used to generate the model. The model is then used to predict the outcome of future observations, based on their input variables.
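To make that train-then-predict workflow concrete, here is a minimal sketch using scikit-learn's GaussianNB. The library and the example data set are our own illustrative choices, not necessarily the tool shown in the video below:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# A labeled data set: known outcomes y, continuous input variables X
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GaussianNB()           # Gaussian likelihoods suit continuous inputs
model.fit(X_train, y_train)    # generate the model from the training data

# Predict the outcomes of "future" observations from their input variables
print(model.predict(X_test[:5]))
print(model.score(X_test, y_test))   # accuracy on the held-out observations
```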

There are two main components of the Naive Bayes Model:

Bayes' Theorem:

Bayes' theorem is a classical result stating that the probability of an outcome given the observed data is proportional to the probability of observing the data given the outcome, times the prior probability of the outcome.
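In symbols (the notation here is ours, not the original article's):

$$P(\text{outcome} \mid \text{data}) = \frac{P(\text{data} \mid \text{outcome})\, P(\text{outcome})}{P(\text{data})} \;\propto\; P(\text{data} \mid \text{outcome})\, P(\text{outcome})$$

Since the denominator $P(\text{data})$ is the same for every candidate outcome, it can be ignored when picking the most probable one.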

Naive Probability Model:

The naive probability model is the assumption that the input variables are conditionally independent of one another, given the outcome. This is a very strong assumption and is rarely, if ever, true in real life, but it makes computing all of the model's parameters extremely simple, and in practice violating it usually does not hurt the model's predictions much, as the sketch below illustrates.
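Concretely, the assumption lets the joint likelihood factor into per-variable terms,

$$P(x_1, \dots, x_n \mid y) = \prod_{i=1}^{n} P(x_i \mid y),$$

so each $P(x_i \mid y)$ can be estimated separately by simple counting. Here is a minimal from-scratch sketch for discrete inputs; the function names and toy data are hypothetical, purely for illustration:

```python
import math
from collections import Counter, defaultdict

def train(rows, labels):
    """Estimate P(y) and every P(x_i = v | y) by counting -- which is
    all the naive independence assumption requires."""
    prior = Counter(labels)        # class counts, for P(y)
    cond = defaultdict(Counter)    # cond[(i, y)][v] = count of x_i = v given y
    vocab = defaultdict(set)       # distinct values seen for each variable
    for row, y in zip(rows, labels):
        for i, v in enumerate(row):
            cond[(i, y)][v] += 1
            vocab[i].add(v)
    return prior, cond, vocab

def predict(row, prior, cond, vocab):
    """Return the y maximizing P(y) * prod_i P(x_i | y), computed in
    log space, with add-one (Laplace) smoothing for unseen values."""
    n = sum(prior.values())
    best, best_score = None, float("-inf")
    for y, ny in prior.items():
        score = math.log(ny / n)   # log P(y)
        for i, v in enumerate(row):
            score += math.log((cond[(i, y)][v] + 1) / (ny + len(vocab[i])))
        if score > best_score:
            best, best_score = y, score
    return best

# Toy usage: two discrete input variables, one discrete outcome
rows = [("sunny", "hot"), ("rainy", "cool"), ("sunny", "cool"), ("rainy", "hot")]
labels = ["play", "stay", "play", "stay"]
model = train(rows, labels)
print(predict(("sunny", "cool"), *model))   # -> "play"
```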

Watch this video to learn how to implement Naive Bayes: