We're blogging machines!
October 20, 2012 in Optimization by hundalhh | 3 comments
Yves Brise created this nice set of slides describing the DIRECT algorithm for Lipschitz functions. Tim Kelly of Drexel University provides Matlab code here.
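DIRECT itself partitions the domain into hyper-rectangles and, notably, does not require the Lipschitz constant. As a rough illustration of the Lipschitz lower bound that this family of methods exploits, here is a toy 1-D Shubert-Piyavskii sketch (my own illustrative code, not the linked Matlab implementation), which does assume a known constant `L`:

```python
def shubert(f, a, b, L, n_iter=50):
    """Shubert-Piyavskii search on [a, b] for a function with known
    Lipschitz constant L: each sample (x_i, f(x_i)) gives the lower
    bound f(x_i) - L*|x - x_i|; always sample where the combined
    sawtooth lower bound is lowest."""
    pts = sorted([(a, f(a)), (b, f(b))])
    for _ in range(n_iter):
        best = None
        for (x0, f0), (x1, f1) in zip(pts, pts[1:]):
            # the two bounding cones over this interval meet at (xm, lb)
            lb = 0.5 * (f0 + f1) - 0.5 * L * (x1 - x0)
            xm = 0.5 * (x0 + x1) + (f0 - f1) / (2.0 * L)
            if best is None or lb < best[0]:
                best = (lb, xm)
        pts.append((best[1], f(best[1])))
        pts.sort()
    return min(pts, key=lambda p: p[1])  # (x, f(x)) of the best sample

xmin, fmin = shubert(lambda x: (x - 0.3) ** 2, 0.0, 1.0, L=2.0)
```

DIRECT gets around the need for `L` by, in effect, considering all possible Lipschitz constants at once when deciding which rectangles are "potentially optimal."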
Hasan on March 22, 2013 at 12:26 am
The slide shown doesn’t appear right. Nelder-Mead and Hooke-Jeeves are both derivative-free.
hundalhh on March 22, 2013 at 10:08 am
Yes, neither Nelder-Mead nor Hooke-Jeeves uses the actual derivative.
Nelder-Mead in effect performs a gradient descent. By evaluating the function at the vertices of a simplex, it estimates the approximate direction of the gradient and uses that to choose the next evaluation point, so it is quite similar to steepest descent.
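To make that concrete, here is a minimal Nelder-Mead sketch (a simplified variant for illustration, not a production implementation): only function values are used, yet reflecting the worst vertex through the centroid of the others acts like an approximate descent step.

```python
import numpy as np

def nelder_mead(f, x0, step=0.5, tol=1e-8, max_iter=500):
    """Simplified Nelder-Mead: reflection, expansion, contraction, shrink."""
    n = len(x0)
    simplex = [np.asarray(x0, float)]
    for i in range(n):                      # initial simplex: perturb each axis
        v = simplex[0].copy()
        v[i] += step
        simplex.append(v)
    for _ in range(max_iter):
        simplex.sort(key=f)                 # best vertex first, worst last
        best, worst = simplex[0], simplex[-1]
        if abs(f(worst) - f(best)) < tol:
            break
        centroid = np.mean(simplex[:-1], axis=0)
        xr = centroid + (centroid - worst)  # reflect worst through centroid
        if f(xr) < f(best):
            xe = centroid + 2.0 * (centroid - worst)   # expand further
            simplex[-1] = xe if f(xe) < f(xr) else xr
        elif f(xr) < f(simplex[-2]):
            simplex[-1] = xr
        else:
            xc = centroid + 0.5 * (worst - centroid)   # contract
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:                                      # shrink toward best
                simplex = [best] + [best + 0.5 * (v - best) for v in simplex[1:]]
    return min(simplex, key=f)

x = nelder_mead(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2, [0.0, 0.0])
```

The reflection direction `centroid - worst` points away from the highest sampled value, which is how the simplex "feels out" a descent direction without ever computing a gradient.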
Hooke-Jeeves is also similar to gradient descent in that it evaluates points near the current best estimate of the minimum.
So despite being derivative-free, both methods seem to me to behave much like gradient descent.
hundalhh on March 22, 2013 at 10:13 am
Also, Conjugate Gradient is not derivative free.