Support vector machine
Support vector machines (SVMs) are a set of related supervised learning methods, applicable to both classification and regression.
Linear classification
When used for classification, the SVM algorithm constructs a hyperplane that separates the data into two classes with the maximum margin. Given training examples labeled either "yes" or "no", a maximum-margin hyperplane splits the "yes" and "no" training examples such that the distance from the hyperplane to the closest examples (the margin) is maximized.
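In the linearly separable case this can be written as a constrained optimization problem. Writing the hyperplane as <math>\mathbf{w} \cdot \mathbf{x} - b = 0</math> and encoding the labels "yes" and "no" as <math>y_i \in \{-1,+1\}</math>, one standard (hard-margin) formulation minimizes <math>\frac{1}{2}\|\mathbf{w}\|^2</math> subject to <math>y_i(\mathbf{w} \cdot \mathbf{x}_i - b) \ge 1</math> for all training examples <math>i</math>; the resulting margin is <math>2/\|\mathbf{w}\|</math>.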
The use of the maximum-margin hyperplane is motivated by Vapnik–Chervonenkis theory, which provides a probabilistic test error bound that is minimized when the margin is maximized. However, the utility of this theoretical analysis is sometimes questioned, given the large slack associated with these bounds: the bounds often predict error rates of more than 100%.
The parameters of the maximum-margin hyperplane are derived by solving a quadratic programming (QP) optimization problem. Several specialized algorithms exist for quickly solving the QP problem that arises from SVMs; the most common is Platt's sequential minimal optimization (SMO) algorithm (http://research.microsoft.com/users/jplatt/smo.html).
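The QP problem is usually stated in its dual form: with one Lagrange multiplier <math>\alpha_i \ge 0</math> per training example, one maximizes <math>\sum_i \alpha_i - \frac{1}{2}\sum_{i,j} \alpha_i \alpha_j y_i y_j (\mathbf{x}_i \cdot \mathbf{x}_j)</math> subject to <math>\sum_i \alpha_i y_i = 0</math>. The normal of the hyperplane is recovered as <math>\mathbf{w} = \sum_i \alpha_i y_i \mathbf{x}_i</math>, and the training examples with <math>\alpha_i > 0</math> are the support vectors from which the method takes its name.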
Non-linear classification with the "Kernel Trick"
The original optimal hyperplane algorithm proposed by Vladimir Vapnik in 1963 was a linear classifier. However, in 1992, Bernhard Boser, Isabelle Guyon and Vapnik suggested a way to create non-linear classifiers by applying the kernel trick (originally proposed by Aizerman) to maximum-margin hyperplanes. The resulting algorithm is formally similar, except that every dot product is replaced by a non-linear kernel function. This allows the algorithm to fit the maximum-margin hyperplane in the transformed feature space. The transformation may be non-linear and the transformed space high-dimensional; thus, though the classifier is a hyperplane in the high-dimensional feature space, it may be non-linear in the original input space.
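In terms of the dual formulation above, the resulting classifier has the form <math>f(\mathbf{x}) = \operatorname{sgn}\left(\sum_i \alpha_i y_i k(\mathbf{x}_i, \mathbf{x}) - b\right)</math>, so that training and classification require only kernel evaluations and never explicit coordinates in the transformed feature space.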
If the kernel used is a radial basis function, the corresponding feature space is a Hilbert space of infinite dimension. Maximum-margin classifiers are well regularized, so the infinite dimension does not spoil the results. Some common kernels include the following (a short computational sketch of these kernels follows the list):
- Polynomial: <math>k(\mathbf{x},\mathbf{x}')=(\mathbf{x} \cdot \mathbf{x}')^d</math>
- Radial basis: <math>k(\mathbf{x},\mathbf{x}')=\exp\left(-\frac{\|\mathbf{x} - \mathbf{x}'\|^2}{2 \sigma^2}\right)</math>
- Sigmoid: <math>k(\mathbf{x},\mathbf{x}')=\tanh(\kappa \, \mathbf{x} \cdot \mathbf{x}'+c)</math>
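As an illustration only, the kernels above can be evaluated directly; the following Python sketch (the function names and the parameter values d, sigma, kappa and c are arbitrary choices made here, not part of any standard library) computes each kernel for a pair of vectors.

import numpy as np

def polynomial_kernel(x, xp, d=3):
    # (x . x')^d -- homogeneous polynomial kernel of degree d
    return np.dot(x, xp) ** d

def rbf_kernel(x, xp, sigma=1.0):
    # exp(-||x - x'||^2 / (2 sigma^2)) -- Gaussian radial basis function kernel
    diff = x - xp
    return np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2))

def sigmoid_kernel(x, xp, kappa=1.0, c=-1.0):
    # tanh(kappa (x . x') + c) -- sigmoid kernel
    return np.tanh(kappa * np.dot(x, xp) + c)

x = np.array([1.0, 2.0])
xp = np.array([0.5, -1.0])
print(polynomial_kernel(x, xp), rbf_kernel(x, xp), sigmoid_kernel(x, xp))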
Soft margin
In 1995, Corinna Cortes and Vapnik suggested a modified maximum margin idea that allows for mislabeled examples. If there exists no hyperplane that can split the "yes" and "no" examples, the Soft Margin method will choose a hyperplane that splits the examples as cleanly as possible, while still maximizing the distance to the nearest cleanly split examples. This work popularized the expression Support Vector Machine or SVM. The SVM was popularized in the machine learning community by Bernhard Schölkopf in his 1997 PhD thesis, which compared it to other methods.
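In the usual formulation this is done by introducing non-negative slack variables <math>\xi_i</math> that measure how far each example falls on the wrong side of its margin: one minimizes <math>\frac{1}{2}\|\mathbf{w}\|^2 + C \sum_i \xi_i</math> subject to <math>y_i(\mathbf{w} \cdot \mathbf{x}_i - b) \ge 1 - \xi_i</math> and <math>\xi_i \ge 0</math>, where the constant <math>C > 0</math> controls the trade-off between a large margin and few margin violations.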
Regression
A version of an SVM for regression was proposed in 1997 by Vapnik, Steven Golowich, and Alex Smola. This method is called support vector regression (SVR). The model produced by support vector classification (as described above) depends only on a subset of the training data, because the cost function for building the model does not care about training points that lie beyond the margin. Analogously, the model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data that is close (within a threshold <math>\epsilon</math>) to the model prediction.
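This behaviour comes from the <math>\epsilon</math>-insensitive loss commonly used in SVR, which charges nothing for prediction errors smaller than the threshold: <math>L_\epsilon(y, f(\mathbf{x})) = \max(0, |y - f(\mathbf{x})| - \epsilon)</math>. Points predicted to within <math>\epsilon</math> of their targets therefore contribute no cost and do not become support vectors.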
References
- B. E. Boser, I. M. Guyon, and V. N. Vapnik. A training algorithm for optimal margin classifiers. In D. Haussler, editor, 5th Annual ACM Workshop on COLT, pages 144-152, Pittsburgh, PA, 1992. ACM Press.
- Christopher J. C. Burges. "A Tutorial on Support Vector Machines for Pattern Recognition". Data Mining and Knowledge Discovery, 2:121-167, 1998. (Also available at CiteSeer: [1] (http://citeseer.ist.psu.edu/burges98tutorial.html))
- Nello Cristianini and John Shawe-Taylor. An Introduction to Support Vector Machines and other kernel-based learning methods. Cambridge University Press, 2000. ISBN 0-521-78019-5
- Thorsten Joachims. "Text Categorization with Support Vector Machines: Learning with Many Relevant Features". In: Proceedings of ECML-98, 10th European Conference on Machine Learning, edited by Claire Nédellec and Céline Rouveirol, pp 137-142. Springer-Verlag, 1998. (Also available at CiteSeer: [2] (http://citeseer.nj.nec.com/joachims98text.html))
- K.-R. Müller, S. Mika, G. Rätsch, K. Tsuda, and B. Schölkopf. "An introduction to kernel-based learning algorithms". IEEE Transactions on Neural Networks, 12(2):181-201, May 2001. (Also available on line: PDF (http://mlg.anu.edu.au/~raetsch/ps/review.pdf))
- Bernhard Schölkopf and A. J. Smola: Learning with Kernels. MIT Press, Cambridge, MA, 2002. (Partly available on line: [3] (http://www.learning-with-kernels.org).) ISBN 0-262-19475-9
- Bernhard Schölkopf, Christopher J.C. Burges, and Alexander J. Smola (editors). "Advances in Kernel Methods: Support Vector Learning". MIT Press, Cambridge, MA, 1999. ISBN 0-262-19416-3. [4] (http://www.kernel-machines.org/nips97/book.html)
- Bernhard Schölkopf: "Support vector learning" (GMD-Berichte No. 287. GMD-Forschungszentrum Informationstechnik) (1997)
- John Shawe-Taylor and Nello Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004. ISBN 0-521-81397-2
- Vladimir Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, 1999. ISBN 0-387-98780-0
External links
- www.kernel-machines.org (general information and collection of research papers)
- LIBSVM -- A Library for Support Vector Machines, Chih-Chung Chang and Chih-Jen Lin (http://www.csie.ntu.edu.tw/~cjlin/libsvm/)
- The Formulation of Support Vector Machine (http://svr-www.eng.cam.ac.uk/~kkc21/thesis_main/node8.html)
- SVMlight (http://svmlight.joachims.org/) -- a popular implementation of the SVM algorithm by Thorsten Joachims; it can be used to solve classification, regression and ranking problems.