In "Exploitation of Machine Learning Techniques in Modelling Phrase Movements for Machine Translation", Ni, Saunders, Szedmak, and Niranjan (2011) build a "phrase reordering model" for statistical machine translation. They apply their method to a Chinese-English corpus to match phrases in each language, and compare it against well-known maximum entropy methods, support vector machines, maximum margin regression, and max-margin structure learning, giving short summaries of how each method is applied. I'm very impressed with their writing style and the content of the paper. The concept of maximum margin regression (similar to SVM) is explored in "Learning via Linear Operators: Maximum Margin Regression; Multiclass and Multiview Learning at One-class Complexity" by Szedmak, Shawe-Taylor, and Parrado-Hernandez (2006). Max-margin structure learning is described in "Max-Margin Markov Networks" by Taskar, Guestrin, and Koller (NIPS 2003).
I am quite excited about the Julia language (windows download, manual). It's free, its syntax is almost the same as Matlab's, and it runs about as fast as C++ (much faster than Matlab and Octave; roughly 160 times faster in the example below). Here is a quick comparison.
Matlab code (primeQ.m):
function b = primeQ( i )
    for j = 2:ceil(i/2.0)
        if mod(i, j) == 0
            b = false;
            return
        end
    end
    b = true;
end
tic; primeQ(71378569); toc
Elapsed time is 52.608765 seconds.
Julia code (primeQ.jl):
function primeQ( i )
    for j = 2:ceil(i/2.0)
        if mod(i, j) == 0
            return false
        end
    end
    return true
end
tic(); primeQ(71378569); toc()
elapsed time: 0.3280000686645508 seconds
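As an aside, trial division only needs to test divisors up to the square root of i: any composite number has a factor no larger than its square root. A sketch of that variant in Julia (the name `primeQfast` is mine, and `isqrt` may not exist in very old Julia versions):

```julia
# Same trial-division idea, but stopping at isqrt(i) instead of
# i/2 -- O(sqrt(i)) divisions rather than O(i) in the worst case.
function primeQfast(i)
    i < 2 && return false
    for j = 2:isqrt(i)
        if mod(i, j) == 0
            return false
        end
    end
    return true
end
```

For the 8-digit input above this cuts the loop from tens of millions of iterations to a few thousand, so the timing difference between the two languages would shrink accordingly.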
For a speed comparison with Matlab, see this post.