<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Artificial Intelligence Blog &#187; Support Vector Machines</title>
	<atom:link href="http://artent.net/category/support-vector-machines/feed/" rel="self" type="application/rss+xml" />
	<link>http://artent.net</link>
	<description>We&#039;re blogging machines!</description>
	<lastBuildDate>Sat, 14 Mar 2026 20:14:25 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.0</generator>
	<item>
		<title>&#8220;Deep Support Vector Machines for Regression Problems&#8221;</title>
		<link>http://artent.net/2013/07/09/deep-support-vector-machines-for-regression-problems/</link>
		<comments>http://artent.net/2013/07/09/deep-support-vector-machines-for-regression-problems/#comments</comments>
		<pubDate>Tue, 09 Jul 2013 19:34:31 +0000</pubDate>
		<dc:creator><![CDATA[hundalhh]]></dc:creator>
				<category><![CDATA[Deep Belief Networks]]></category>
		<category><![CDATA[Sparsity]]></category>
		<category><![CDATA[Support Vector Machines]]></category>

		<guid isPermaLink="false">http://162.243.213.31/?p=1962</guid>
		<description><![CDATA[Nuit Blanche&#8217;s article &#8220;The Summer of the Deeper Kernels&#8221; references the two-page paper &#8220;Deep Support Vector Machines for Regression Problems&#8221; by Schutten, Meijster, and Schomaker (2013). The deep SVM is a pretty cool idea.  A normal support vector machine (SVM) classifier finds $\alpha_i$ such that $f(x) = \sum_i \alpha_i K(x_i, x)$ is positive for one [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://nuit-blanche.blogspot.com/">Nuit Blanche</a>&#8217;s article &#8220;<a href="http://nuit-blanche.blogspot.com/2013/07/the-summer-of-deeper-kernels.html">The Summer of the Deeper Kernels</a>&#8221; references the two-page paper &#8220;<a href="http://www.ai.rug.nl/~mwiering/GROUP/ARTICLES/DSVM_extended_abstract.pdf">Deep Support Vector Machines for Regression Problems</a>&#8221; by Schutten, Meijster, and Schomaker (2013).</p>
<p>The deep SVM is a pretty cool idea.  A normal <a href="https://en.wikipedia.org/wiki/Support_vector_machine">support vector machine</a> (SVM) classifier finds $\alpha_i$ such that</p>
<p>$f(x) = \sum_i \alpha_i K(x_i, x)$ is positive for one class of $x_i$ and negative for the other class (sometimes allowing exceptions).  ($K(x,y)$ is called the kernel function, which in the simplest case is just the dot product of $x$ and $y$.)  SVMs are great because they are fast and the solution is sparse (i.e. most of the $\alpha_i$ are zero).</p>
<p>Schutten, Meijster, and Schomaker apply the ideas of <a href="http://en.wikipedia.org/wiki/Deep_learning">deep neural nets</a> to SVMs.</p>
<p>They construct $d$ SVMs of the form</p>
<p>$f_a(x) = \sum_i \alpha_i(a) K(x_i, x)+b_a$</p>
<p>and then compute a more complex two layered SVM</p>
<p>$g(x) = \sum_i \alpha_i  K(f(x_i), f(x))+b$</p>
<p>where $f(x) = (f_1(x), f_2(x), \ldots, f_d(x))$.  They use a simple gradient descent algorithm to optimize the alphas and obtain numerical results on ten different data sets, comparing the mean squared error of the deep SVM to that of a standard SVM.</p>
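<p>To make the construction concrete, here is a minimal NumPy sketch of the two-layer idea. The data sizes, the RBF kernel, and the learning rate are illustrative assumptions, not the paper's setup, and for brevity only the second-layer alphas are trained by gradient descent, whereas Schutten et al. descend on both layers.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (sizes and the RBF kernel are illustrative choices).
X = rng.normal(size=(40, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)

def K(A, B, gamma=1.0):
    """RBF kernel matrix between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

d = 3                                          # number of first-layer machines f_a
Kx = K(X, X)
alpha1 = 0.01 * rng.normal(size=(d, len(X)))   # first-layer alphas (held fixed here)
alpha2 = 0.01 * rng.normal(size=len(X))        # second-layer alphas (trained)

F = alpha1 @ Kx      # F[a, i] = f_a(x_i); bias terms omitted for brevity
Kf = K(F.T, F.T)     # second-layer kernel evaluated on the features f(x_i)

mse_before = np.mean((Kf @ alpha2 - y) ** 2)
lr = 1e-3
for _ in range(200):                 # plain gradient descent on the second layer
    g = Kf @ alpha2                  # g(x_i) = sum_j alpha2_j K(f(x_j), f(x_i))
    alpha2 -= lr * (2.0 / len(y)) * Kf.T @ (g - y)
mse_after = np.mean((Kf @ alpha2 - y) ** 2)
```

<p>Training the first layer as well would just mean backpropagating the same squared-error gradient through $K(f(x_j), f(x_i))$ into the $\alpha_i(a)$.</p>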
]]></content:encoded>
			<wfw:commentRss>http://artent.net/2013/07/09/deep-support-vector-machines-for-regression-problems/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Feature Selection in Medicine</title>
		<link>http://artent.net/2013/02/24/feature-selection-in-medicine/</link>
		<comments>http://artent.net/2013/02/24/feature-selection-in-medicine/#comments</comments>
		<pubDate>Sun, 24 Feb 2013 13:54:18 +0000</pubDate>
		<dc:creator><![CDATA[hundalhh]]></dc:creator>
				<category><![CDATA[General ML]]></category>
		<category><![CDATA[Support Vector Machines]]></category>

		<guid isPermaLink="false">http://162.243.213.31/?p=1115</guid>
		<description><![CDATA[In the seminal paper &#8220;Gene Selection for Cancer Classification using Support Vector Machines&#8220;, Guyon, Weston, Barnhill, and Vapnik (2002) use Recursive Feature Elimination to find the genes which are the most predictive of cancer. Recursive Feature Elimination repeatedly ranks the features and eliminates the worst feature until only a small subset of the original set of features [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>In the seminal paper &#8220;<a href="http://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=1&amp;cad=rja&amp;ved=0CDYQFjAA&amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.70.9598%26rep%3Drep1%26type%3Dpdf&amp;ei=pv-gUO-gDMnt0gHrnoHYDA&amp;usg=AFQjCNHkySRiUTjGFWNks9OvGbSljMHcAA">Gene Selection for Cancer Classification using Support Vector Machines</a>&#8220;, Guyon, Weston, Barnhill, and Vapnik (2002) use Recursive Feature Elimination to find the genes which are the most predictive of cancer. Recursive Feature Elimination repeatedly ranks the features and eliminates the worst feature until only a small subset of the original set of features remains. Although several feature ranking methods were explored, the main method was a soft margin <a href="http://en.wikipedia.org/wiki/Support_vector_machine">SVM</a> classifier with which the authors found 8 key colon cancer genes out of 7000.</p>
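<p>The elimination loop is easy to sketch. Below is a toy NumPy version on synthetic data; a ridge-regularized least-squares fit stands in for the paper's linear soft-margin SVM, while the ranking criterion, the squared weight $w_j^2$, is the one Guyon et al. use. All sizes are made up.</p>

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "expression" data: 60 samples, 50 features, signal in the first 5
# (a stand-in for the colon-cancer data; in the paper it is 7000 genes).
n, p = 60, 50
X = rng.normal(size=(n, p))
y = np.sign(X[:, :5].sum(axis=1))

def linear_weights(X, y, lam=1e-2):
    """Ridge-regularized least-squares weights -- a stand-in for the
    linear soft-margin SVM used in the paper."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

surviving = list(range(p))
while len(surviving) > 5:                 # eliminate down to 5 features
    w = linear_weights(X[:, surviving], y)
    worst = int(np.argmin(w ** 2))        # smallest w_j^2 = least useful
    surviving.pop(worst)

selected = sorted(surviving)
```

<p>In practice features are often dropped in chunks rather than one at a time, since refitting after every single elimination is expensive.</p>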
]]></content:encoded>
			<wfw:commentRss>http://artent.net/2013/02/24/feature-selection-in-medicine/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Interior Point Methods for Large Scale SVMs</title>
		<link>http://artent.net/2012/11/14/interior-point-methods-for-large-scale-svms/</link>
		<comments>http://artent.net/2012/11/14/interior-point-methods-for-large-scale-svms/#comments</comments>
		<pubDate>Wed, 14 Nov 2012 12:04:39 +0000</pubDate>
		<dc:creator><![CDATA[hundalhh]]></dc:creator>
				<category><![CDATA[Optimization]]></category>
		<category><![CDATA[Support Vector Machines]]></category>

		<guid isPermaLink="false">http://162.243.213.31/?p=788</guid>
		<description><![CDATA[Jacek Gondzio has some nice slides (2009) on interior point methods for large scale support vector machines.  He focuses on the primal dual logarithmic barrier methods (see e.g. Wright 1997) for soft-margin classification.  Great explanations, diagrams, and numerical results are provided.  Kristian Woodsend wrote his 2009 Ph.D. thesis on the same subject.  Woodsend applies the interior point methods [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>Jacek Gondzio has some nice <a href="http://numml.kyb.tuebingen.mpg.de/numl09/talk_gondzio.pdf">slides</a> (2009) on interior point methods for large scale support vector machines.  He focuses on the primal dual logarithmic barrier methods (see e.g. <a href="http://www.amazon.com/Primal-Dual-Interior-Point-Methods-Stephen-Wright/dp/089871382X">Wright 1997</a>) for soft-margin classification.  Great explanations, diagrams, and numerical results are provided.  Kristian Woodsend wrote his 2009 Ph.D. <a href="http://www.maths.ed.ac.uk/pg/thesis/woodsend.pdf">thesis</a> on the same subject.  Woodsend applies interior point methods and low-rank approximations of the SVM kernel to reduce the computational cost to order $n$, where $n$ is the number of data points.  He compares this approach to active set methods, gradient projection algorithms, and cutting-plane algorithms and concludes with numerical results.</p>
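<p>Woodsend's factorization is built into the interior point solver itself, but the low-rank idea can be sketched generically with a Nystr&#246;m approximation: pick $m \ll n$ landmark points and approximate $K \approx K_{nm} K_{mm}^{-1} K_{nm}^T$, so the full $n \times n$ kernel never has to be formed. The data and parameters below are illustrative assumptions, not Woodsend's.</p>

```python
import numpy as np

rng = np.random.default_rng(2)

n, m = 200, 20                 # n data points, m landmark points (m << n)
X = rng.normal(size=(n, 2))    # toy 2-D data

def rbf(A, B, gamma=0.5):
    """RBF kernel matrix between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Nystroem low-rank approximation: K ~= K_nm K_mm^{-1} K_nm^T.
# Storage and kernel mat-vecs drop from O(n^2) to O(n m).
landmarks = X[rng.choice(n, size=m, replace=False)]
K_nm = rbf(X, landmarks)                       # n x m
K_mm = rbf(landmarks, landmarks)               # m x m
K_approx = K_nm @ np.linalg.solve(K_mm + 1e-8 * np.eye(m), K_nm.T)

# Formed here only to check the quality of the approximation -- a real
# large-scale solver would never materialize the full kernel.
K_full = rbf(X, X)
err = np.linalg.norm(K_full - K_approx) / np.linalg.norm(K_full)
```

<p>Inside an interior point method, the low-rank factor is what makes each Newton step cheap via the Sherman-Morrison-Woodbury identity.</p>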
]]></content:encoded>
			<wfw:commentRss>http://artent.net/2012/11/14/interior-point-methods-for-large-scale-svms/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
