Talk:Support vector machine
Hello! I have a comment about the first sentence, which says "A SVM is a <blank>", and <blank> has lately been either "statistical classification model" or "supervised learning method". I'm in favor of the former because it situates SVM in a very large group of related methods from both conventional statistics and machine learning. "Supervised learning" is deficient on two counts -- s.l. includes regression as well as classification, and it suggests only a link to machine learning and not conventional statistics. So I'd like to hear what other people have to say. Happy editing, Wile E. Heresiarch 02:35, 29 Apr 2004 (UTC)
- Yes, indeed, classification is more specific than supervised learning. However, there are both classification and regression forms of a support vector machine. The latter is often called Support Vector Regression (SVR). I added a discussion of SVR to this article, although it is difficult for laypeople to understand. Anyway, I think that supervised learning is a more accurate description. The article supervised learning is much less stubby than classification, too.
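A minimal sketch of the classification/regression point above, using scikit-learn (a library not mentioned in this discussion; the toy data and parameter choices are illustrative assumptions, not taken from the article):
 import numpy as np
 from sklearn.svm import SVC, SVR
 
 rng = np.random.RandomState(0)
 
 # Classification form: two Gaussian blobs labelled -1 / +1.
 X_clf = np.vstack([rng.randn(20, 2) - 2, rng.randn(20, 2) + 2])
 y_clf = np.array([-1] * 20 + [+1] * 20)
 clf = SVC(kernel="rbf", C=1.0).fit(X_clf, y_clf)
 print("classification accuracy:", clf.score(X_clf, y_clf))
 
 # Regression form (SVR): noisy sine curve, epsilon-insensitive loss.
 X_reg = np.linspace(0, 6, 60).reshape(-1, 1)
 y_reg = np.sin(X_reg).ravel() + 0.1 * rng.randn(60)
 reg = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X_reg, y_reg)
 print("regression R^2:", reg.score(X_reg, y_reg))
The machinery is the same in both cases; what changes is the loss (hinge loss for classification, epsilon-insensitive loss for SVR).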
- To me, machine learning is statistics, so I don't have a preference for "supervised learning" over "classification" on that basis. -- hike395 04:46, 29 Apr 2004 (UTC)
Hmm. Not quite the way I'd put it. However, I haven't got my references on me at the moment, so I can't come up with a precise description. -- 213.253.39.90
Do SVMs have to be non-linear? I thought they could be either linear or non-linear. -- Oliver PEREIRA 13:18 Jan 26, 2003 (UTC)
No. See if my new edit makes you happier. By the way, does anyone know why they're called "machines"? Nobody seems to call older machine learning techniques (e.g. the perceptron, feed-forward networks, etc.) "machines". --Ryguasu 00:13 Apr 2, 2003 (UTC)
- According to Vapnik (The Nature of Statistical Learning Theory, p. 133) they are non-linear: "The Support Vector (SV) machine implements the following idea: it maps the input vectors x into a high-dimensional feature space Z through some nonlinear mapping, chosen a priori. In this space, an optimal separating hyperplane is constructed." To be strict, a so-called linear SVM is just an optimal hyperplane (it has support vectors, but is not a support vector machine), although many authors ignore this distinction. --knl 15:57, 28 Aug 2004 (UTC)
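For readers following along, the usual decision function makes the distinction in the Vapnik quote concrete (the notation below is the standard one, not something taken from this page):
 f(x) = \operatorname{sgn}\!\Big( \sum_{i=1}^{\ell} \alpha_i y_i \, K(x_i, x) + b \Big)
With the linear kernel K(x_i, x) = \langle x_i, x \rangle this is an optimal separating hyperplane in the input space itself; with a nonlinear kernel, e.g. K(x_i, x) = \exp(-\gamma \|x_i - x\|^2), the hyperplane lives in the feature space Z induced by the mapping, which is the sense in which Vapnik reserves "support vector machine" for the nonlinear case.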
Fast and lightweight? As compared to what? (I'm sitting in front of a machine that's spent 3 days on a linearly separable dataset that C4.5 takes a few minutes to chomp through.) User:Iwnbap
Don't forget Sequential Minimal Optimization (SMO) [1] (http://research.microsoft.com/~jplatt/smo.html)
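A rough sketch of SMO's central step, optimizing two Lagrange multipliers analytically while all the others stay fixed. This is the simplified variant (random second choice, precomputed linear kernel, no working-set heuristics or caching), so it's illustrative rather than Platt's reference implementation; all names are my own:
 import numpy as np
 
 def linear_kernel(a, b):
     return a @ b
 
 def simplified_smo(X, y, C=1.0, tol=1e-3, max_passes=10, kernel=linear_kernel):
     """X: (n, d) data, y: (n,) labels in {-1, +1}. Returns (alpha, b)."""
     n = X.shape[0]
     alpha = np.zeros(n)
     b = 0.0
     K = np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])
 
     def f(i):
         # Decision value for training point i under the current multipliers.
         return (alpha * y) @ K[:, i] + b
 
     passes = 0
     while passes < max_passes:
         changed = 0
         for i in range(n):
             E_i = f(i) - y[i]
             # Work on i only if it violates the KKT conditions within tol.
             if (y[i] * E_i < -tol and alpha[i] < C) or (y[i] * E_i > tol and alpha[i] > 0):
                 j = np.random.choice([k for k in range(n) if k != i])
                 E_j = f(j) - y[j]
                 a_i_old, a_j_old = alpha[i], alpha[j]
                 # Box constraints keep the pair on the line sum(alpha * y) = const.
                 if y[i] != y[j]:
                     L, H = max(0.0, a_j_old - a_i_old), min(C, C + a_j_old - a_i_old)
                 else:
                     L, H = max(0.0, a_i_old + a_j_old - C), min(C, a_i_old + a_j_old)
                 if L == H:
                     continue
                 eta = 2 * K[i, j] - K[i, i] - K[j, j]
                 if eta >= 0:
                     continue
                 # Analytic one-dimensional optimum for alpha[j], clipped to the box.
                 a_j_new = np.clip(a_j_old - y[j] * (E_i - E_j) / eta, L, H)
                 if abs(a_j_new - a_j_old) < 1e-5:
                     continue
                 alpha[j] = a_j_new
                 alpha[i] = a_i_old + y[i] * y[j] * (a_j_old - a_j_new)
                 # Update the threshold so the KKT conditions hold for i or j.
                 b1 = b - E_i - y[i] * (alpha[i] - a_i_old) * K[i, i] \
                      - y[j] * (a_j_new - a_j_old) * K[i, j]
                 b2 = b - E_j - y[i] * (alpha[i] - a_i_old) * K[i, j] \
                      - y[j] * (a_j_new - a_j_old) * K[j, j]
                 if 0 < alpha[i] < C:
                     b = b1
                 elif 0 < alpha[j] < C:
                     b = b2
                 else:
                     b = (b1 + b2) / 2
                 changed += 1
         passes = passes + 1 if changed == 0 else 0
     return alpha, b
Prediction on a new point then uses the same decision function sketched earlier, summing only over the points with nonzero alpha (the support vectors).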