Blog

The best minds from Teradata, our partners, and customers blog about relevant topics and features.


05-20-2015
05:28 AM


**Support vector machines** are among the most popular "out of the box" classification algorithms. The objective of the algorithm is similar to logistic regression: given a set of predictor variables, classify an object as one of two possible outcomes. The methods by which the two algorithms achieve this objective, however, are very different. Intuitively, logistic regression fits a probabilistic model to the input data and, given a test instance x, estimates the probability that x belongs to a particular class. In contrast, a support vector machine ignores the probabilistic interpretation and instead attempts to find the boundary that maximizes the margin, the distance between the two classes. In the prediction phase, given a test instance x, the model computes which side of the boundary x lies on in order to produce a class prediction.
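The contrast in the prediction step can be sketched in a few lines of Python. This is a hypothetical illustration, not the implementation described in this post: logistic regression turns the linear score into a probability, while the SVM only reports which side of the boundary the point falls on.

```python
import numpy as np

def logistic_predict(w, b, x):
    # Logistic regression: map the linear score to P(class = +1 | x)
    # through the sigmoid function.
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

def svm_predict(w, b, x):
    # SVM: no probability estimate; just report which side of the
    # separating hyperplane x lies on.
    return 1 if np.dot(w, x) + b >= 0 else -1
```

For a point well inside the positive region, the logistic model returns a probability close to 1 while the SVM simply returns +1.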

This implementation solves the primal form of a linear kernel support vector machine via gradient descent on the objective function. It is based primarily on Pegasos: Primal Estimated Sub-GrAdient SOlver for SVM (by S. Shalev-Shwartz, Y. Singer, and N. Srebro; presented at ICML 2007).
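A minimal Python sketch of a Pegasos-style sub-gradient update, assuming labels in {-1, +1} and omitting the paper's optional projection step and a bias term, might look like the following (this is an illustrative sketch, not the code of the implementation described here):

```python
import numpy as np

def pegasos(X, y, lam=0.01, n_iters=1000, seed=0):
    """Primal sub-gradient solver for a linear-kernel SVM (Pegasos-style).

    Minimizes lam/2 * ||w||^2 + (1/m) * sum_i max(0, 1 - y_i * w.x_i)
    by stochastic sub-gradient steps on one random example at a time.
    X: (m, d) array of predictors; y: (m,) array of labels in {-1, +1}.
    """
    rng = np.random.default_rng(seed)
    m, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iters + 1):
        i = rng.integers(m)              # pick one training example at random
        eta = 1.0 / (lam * t)            # decaying step size from the paper
        if y[i] * X[i].dot(w) < 1:
            # Hinge loss is active: shrink w and step toward y_i * x_i.
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:
            # Only the regularizer contributes to the sub-gradient.
            w = (1 - eta * lam) * w
    return w

def predict(w, X):
    # Classify by which side of the hyperplane each point falls on.
    return np.where(X.dot(w) >= 0, 1, -1)
```

Note that the step size decays as 1/(lam * t), which is what gives Pegasos its fast convergence guarantee for the strongly convex primal objective.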

As mentioned before, this is a binary classification algorithm. Multi-class classification is achieved using a machine learning reduction; specifically, the function adopts the one-against-all approach. In a K-class classification problem, K support vector machines are trained. Each SVM is a binary classifier in which the kth class is labeled positive and all other classes are labeled negative. In the test phase, each test observation is scored by each of the K SVMs, and the class whose SVM produces the most confidently positive score is returned as the prediction.
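The one-against-all reduction can be sketched as follows. Here `train_svm` is a hypothetical placeholder for any binary trainer that returns a weight vector whose dot product with a row of X gives the decision score; it is not part of this function's actual API.

```python
import numpy as np

def train_one_vs_all(X, labels, train_svm):
    """Train one binary SVM per class (one-against-all reduction).

    For each class c, relabel the data so that c is positive and every
    other class is negative, then fit a binary model on that relabeling.
    """
    models = {}
    for c in np.unique(labels):
        y = np.where(labels == c, 1, -1)   # class c vs. the rest
        models[c] = train_svm(X, y)
    return models

def predict_one_vs_all(models, X):
    # Score every observation with every class's SVM; the class whose
    # model is most confidently positive wins.
    classes = list(models)
    scores = np.column_stack([X.dot(models[c]) for c in classes])
    return np.array([classes[i] for i in scores.argmax(axis=1)])
```

For K classes this trains K models rather than the K*(K-1)/2 needed by the alternative one-against-one reduction, at the cost of each binary problem being more class-imbalanced.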

To learn more about how to implement support vector machines, watch this short video:
