“discriminant analysis, Bayesian, neural networks, support vector machines, decision trees, rule-based classifiers, boosting, bagging, stacking, random forests and other ensembles, generalized linear models, nearest-neighbors, partial least squares and principal component regression, logistic and multinomial regression, multiple adaptive regression splines and other methods”
from the Weka, Matlab, and R machine learning libraries. The 121 datasets were drawn mostly from the UCI Machine Learning Repository.
The overall result was that random forests were the best classifiers on average, followed by support vector machines, neural networks, and boosting ensembles.
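To get a feel for what this kind of comparison looks like, here is a minimal sketch. It is not the paper's benchmark (which tuned hyperparameters and averaged results over all 121 datasets using the Weka, Matlab, and R implementations); it just pits scikit-learn versions of the four top families against each other on one built-in dataset with mostly default settings:

```python
# A miniature head-to-head of the study's top four classifier families,
# using scikit-learn stand-ins and 5-fold cross-validation on a single
# dataset. Illustrative only -- not the paper's methodology.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

classifiers = {
    "random forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "SVM (RBF kernel)": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "neural network": make_pipeline(
        StandardScaler(), MLPClassifier(max_iter=2000, random_state=0)
    ),
    "boosting": GradientBoostingClassifier(random_state=0),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)  # mean accuracy over 5 folds
    print(f"{name:18s} mean accuracy = {scores.mean():.3f}")
```

On a single dataset any of these families can come out ahead; the paper's ranking only emerges when results are averaged across many datasets.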
For more details, read the paper!