<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: Matlab code and a Tutorial on DIRECT Optimization</title>
	<atom:link href="http://artent.net/2012/10/20/a-tutorial-on-direct-optimization/feed/" rel="self" type="application/rss+xml" />
	<link>http://artent.net/2012/10/20/a-tutorial-on-direct-optimization/</link>
	<description>We&#039;re blogging machines!</description>
	<lastBuildDate>Wed, 15 Jan 2025 16:08:06 +0000</lastBuildDate>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.0</generator>
	<item>
		<title>By: hundalhh</title>
		<link>http://artent.net/2012/10/20/a-tutorial-on-direct-optimization/#comment-737</link>
		<dc:creator><![CDATA[hundalhh]]></dc:creator>
		<pubDate>Fri, 22 Mar 2013 14:13:37 +0000</pubDate>
		<guid isPermaLink="false">http://162.243.213.31/?p=652#comment-737</guid>
		<description><![CDATA[Also, Conjugate Gradient is not derivative-free.]]></description>
		<content:encoded><![CDATA[<p>Also, Conjugate Gradient is not derivative-free.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: hundalhh</title>
		<link>http://artent.net/2012/10/20/a-tutorial-on-direct-optimization/#comment-736</link>
		<dc:creator><![CDATA[hundalhh]]></dc:creator>
		<pubDate>Fri, 22 Mar 2013 14:08:47 +0000</pubDate>
		<guid isPermaLink="false">http://162.243.213.31/?p=652#comment-736</guid>
		<description><![CDATA[Yes, neither Nelder-Mead nor Hooke-Jeeves uses the actual derivative.

Nelder-Mead in effect performs a gradient descent.  By evaluating the function at the vertices of the simplex, it approximates the direction of the gradient and uses that to choose the next evaluation point.  So it is quite similar to steepest descent.

Hooke-Jeeves is also similar to gradient descent because it evaluates points near the best current estimate of the minimum.

So even though they are derivative-free, it seems to me that they behave much like gradient descent.]]></description>
		<content:encoded><![CDATA[<p>Yes, neither Nelder-Mead nor Hooke-Jeeves uses the actual derivative.</p>
<p>Nelder-Mead in effect performs a gradient descent.  By evaluating the function at the vertices of the simplex, it approximates the direction of the gradient and uses that to choose the next evaluation point.  So it is quite similar to steepest descent.</p>
<p>Hooke-Jeeves is also similar to gradient descent because it evaluates points near the best current estimate of the minimum.</p>
<p>So even though they are derivative-free, it seems to me that they behave much like gradient descent.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Hasan</title>
		<link>http://artent.net/2012/10/20/a-tutorial-on-direct-optimization/#comment-731</link>
		<dc:creator><![CDATA[Hasan]]></dc:creator>
		<pubDate>Fri, 22 Mar 2013 04:26:53 +0000</pubDate>
		<guid isPermaLink="false">http://162.243.213.31/?p=652#comment-731</guid>
		<description><![CDATA[The slide shown doesn&#039;t appear right. Nelder-Mead and Hooke-Jeeves are both derivative-free.]]></description>
		<content:encoded><![CDATA[<p>The slide shown doesn&#8217;t appear right. Nelder-Mead and Hooke-Jeeves are both derivative-free.</p>
]]></content:encoded>
	</item>
</channel>
</rss>