Learn Data Science


**Learn Machine Learning in 10 Minutes: Naive Bayes**

05-21-2015
03:45 AM



The Naive Bayes algorithm is very simple, yet surprisingly effective. A training data set (for which we know discrete outcomes and either discrete or continuous input variables) is used to generate the model. The model is then used to predict the outcome of future observations, based on their input variables.

**The Naive Bayes model has two main components:**

**Bayes' Theorem:**

Bayes’ theorem is a classical law, stating that the probability of observing an outcome given the data is proportional to the probability of observing the data given the outcome, times the prior probability of the outcome.
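As a quick sketch of how the theorem works in practice, the snippet below computes a posterior probability for the classic diagnostic-test setup. The specific numbers (1% prevalence, 90% sensitivity, 5% false-positive rate) are illustrative assumptions, not from the post:

```python
# Bayes' theorem: P(outcome | data) = P(data | outcome) * P(outcome) / P(data)
# Illustrative numbers (assumed): a test for a condition with 1% prevalence.
prior = 0.01               # P(condition)
sensitivity = 0.90         # P(positive test | condition)
false_positive = 0.05      # P(positive test | no condition)

# P(positive test), by the law of total probability
evidence = sensitivity * prior + false_positive * (1 - prior)

# Posterior: probability of the condition given a positive test
posterior = sensitivity * prior / evidence
print(round(posterior, 4))  # roughly 0.15 -- far lower than the 90% sensitivity
```

Note how the posterior is proportional to the likelihood times the prior, exactly as the theorem states; the evidence term only normalizes it.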

**Naive Probability Model:**

The naive probability model assumes that the input variables are conditionally independent of one another, given the outcome. This is a very strong assumption and rarely holds in real life, but it makes computing the model parameters extremely simple, and in practice violating it often does not hurt predictive accuracy much.

**Watch this video to learn how to implement Naive Bayes:**

