General ML


  1. Enjoying John Baez’s blog Azimuth.  Especially the posts on good research practices and an older post on levels of mathematical understanding.
  2. García-Pérez, Serrano, and Boguñá wrote a cool paper on primes, probability, and integers as a bipartite network.
  3. Loved the idea behind the game theoretical book “Survival of the Nicest”  (see Yes Magazine for a two page introduction).
  4. Scott Young is learning Chinese quickly.
  5. Cyber warriors to the rescue.
  6. Mao, Fluxx, and Douglas Hofstadter‘s Nomic are fun games.
  7. Healy and Caudell are applying category theory to semantic and neural networks.
  8. Some MOOCs for data science and machine learning.
  9. Here is an old but good free online course on the Computational Complexity of Machine Learning.
  10. Great TeX graphics.
  11. Watch this TED video to learn anything in 20 hours (YMMV).
  12. Where are all the Steeler fans?  Cowboy fans?  ….
  13. Productivity Hints.
  14. Copper + Magnets = Fun
  15. Stray dogs on the subway.
  16. Deep learning on NPR.
  17. Happy 40th birthday, D&D.
  18. Google is applying deep learning to images of house numbers.
  19. Deep learning in your browser.
  20. How to write a great research paper.
  21. Do Deep Nets Really Need to be Deep?
  22. A variation on neural net dropout.
  23. Provable algorithms for Machine Learning
  24. 100 Numpy Exercises
  25. Learn and Practice Applied Machine Learning | Machine Learning Mastery

 

Zygmunt Zając at fastml.com has a great post titled “13 NIPS Papers that caught our eye.”  Zając provides a short, readable summary of each of his favorite papers from NIPS 2013, which ended just last week.  The papers are:

  1. Understanding Dropout  by Baldi and Sadowski
  2. Training and Analysing Deep Recurrent Neural Networks by Hermans and Schrauwen
  3. RNADE: The real-valued neural autoregressive density-estimator by Uria, Murray, and Larochelle
  4. Predicting Parameters in Deep Learning by Denil, Shakibi, Dinh, Ranzato, and de Freitas
  5. Pass-efficient unsupervised feature selection by Maung and Schweitzer
  6. Multi-Prediction Deep Boltzmann Machines by Goodfellow, Mirza, Courville, and Bengio
  7. Memoized Online Variational Inference for Dirichlet Process Mixture Models by Hughes and Sudderth
  8. Learning word embeddings efficiently with noise-contrastive estimation by Mnih and Kavukcuoglu
  9. Learning Stochastic Feedforward Neural Networks by Tang and Salakhutdinov
  10. Distributed Representations of Words and Phrases and their Compositionality by Mikolov, Sutskever, Chen, Corrado, and Dean
  11. Correlated random features for fast semi-supervised learning by McWilliams, Balduzzi, and Buhmann
  12. Convex Two-Layer Modeling by Aslan, Cheng, Zhang, and Schuurmans
  13. Approximate inference in latent Gaussian-Markov models from continuous time observations by Cseke, Opper, and Sanguinetti

I suggest reading Zając’s summaries before diving in.

Over the last few weeks, I’ve been working with some tricky features. Interestingly, I needed to add noise to the features to improve my classifier performance.  I will write a post on these “confounding features” later.  For now, let me just point out the following useful fact.

If

$$f(x, \sigma) = {1\over{\sigma \sqrt{2 \pi}}} \exp{\left(-{{x^2}\over{2 \sigma^2}}\right)},$$

then

$$\max_\sigma f(x,\sigma) = f(x, |x|).$$

So, if you have a zero-mean Gaussian and you want to fatten it to maximize the likelihood it assigns to an observation $x$ without changing the mean, set the standard deviation to $|x|$.
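To see why, set the derivative of $f$ with respect to $\sigma$ to zero:

$$\frac{\partial f}{\partial \sigma} = {1\over{\sqrt{2 \pi}}}\, \exp{\left(-{{x^2}\over{2 \sigma^2}}\right)}\, {{x^2 - \sigma^2}\over{\sigma^4}} = 0 \quad \Longrightarrow \quad \sigma = |x|.$$

The derivative is positive for $\sigma < |x|$ and negative for $\sigma > |x|$, so $\sigma = |x|$ is indeed the maximizer.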

So I have been learning a little category theory in the hopes that I could use it somehow to teach computers abstraction.

What is abstraction?  Carl and I have been talking about this for years.  Here are some ideas on what abstraction could be:

1a) An average model of a large group of similar things.  For example, the abstraction of a chair might be an idealized chair with a seat, a backrest, four legs, and possibly “stretchers” between the legs to give it strength and stability.

1b) A function in a computer program that is used in several places in the code.  In effect, the function shortens the code.  If the code has length L1 without the function, length L2 with the function, the function itself has length LC, and the function is called N times, then typically

L2 $\approx$ L1 - LC*N + LC + N.

The function is an abstraction of the pieces of code that it replaced, similar to the way an ideal chair is an abstraction of many chairs.  The function has the common characteristics of the code it replaced.  For example, the following code consists of about 160 characters

// C Code without function

main()
{
   printf("Hello John\n");
   printf("Hello Harry\n");
   printf("Hello Rose\n");
   printf("Hello Carol\n");
   printf("Hello Mr. Thomas\n");
   printf("Hello Mrs. Jones\n");
}

 

and it could be shortened slightly to about 137 characters by

 

// C Code with function

void pr(char *s)
{
   printf("Hello %s\n, s");
}

main() {
   pr("John");
   pr("Harry");
   pr("Rose");
   pr("Carol");
   pr("Mr. Thomas");
   pr("Mrs. Jones");
}

 

Though this example is not in any way practical, it does illustrate how a function can capture commonality.  All of the pieces of code replaced by the pr function had a call to printf, the substring “Hello ”, and the newline “\n”.  The pr function was, in a sense, the idealized version of the six print statements in the original code.

In category theory, we could refer to the category of strings with arrows going from strings to their subsequences (not just contiguous substrings).  In this category, there would be an arrow from the line of code printf("Hello John\n"); to the string printf("Hello \n");.  The categorical coproduct of several strings in this category would be their longest common subsequence.  So, category theory suggests that printf("Hello \n"); should somehow be included in the abstraction of those six printf statements in the original code.
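To make that concrete, here is a minimal dynamic-programming sketch (not from the original post) that computes the longest common subsequence of two of the printf lines; running it prints printf("Hello \n");, the idealized statement described above.

// C sketch: longest common subsequence of two strings.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

char *longestCommonSubsequence(const char *a, const char *b)
{
   size_t la = strlen(a), lb = strlen(b);
   size_t w = lb + 1;
   /* dp[i*w + j] holds the length of the LCS of a[0..i-1] and b[0..j-1]. */
   size_t *dp = calloc((la + 1) * w, sizeof *dp);

   for (size_t i = 1; i <= la; i++)
      for (size_t j = 1; j <= lb; j++)
         dp[i*w + j] = (a[i-1] == b[j-1])
            ? dp[(i-1)*w + (j-1)] + 1
            : (dp[(i-1)*w + j] > dp[i*w + (j-1)] ? dp[(i-1)*w + j]
                                                 : dp[i*w + (j-1)]);

   /* Walk backwards through the table to recover the subsequence itself. */
   size_t len = dp[la*w + lb];
   char *result = malloc(len + 1);
   result[len] = '\0';
   for (size_t i = la, j = lb; i > 0 && j > 0; ) {
      if (a[i-1] == b[j-1])                        { result[--len] = a[i-1]; i--; j--; }
      else if (dp[(i-1)*w + j] >= dp[i*w + (j-1)]) i--;
      else                                         j--;
   }
   free(dp);
   return result;
}

int main()
{
   char *s = longestCommonSubsequence("printf(\"Hello John\\n\");",
                                      "printf(\"Hello Harry\\n\");");
   printf("%s\n", s);   /* prints: printf("Hello \n"); */
   free(s);
   return 0;
}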

2)  A simplifying transformation that changes things into simpler things which are easier to work with.  For example, if you are counting sheep in a pen, you might convert groups of sheep into numbers and then add the numbers to get a total count for the entire pen.  We simplified the counting process by first converting groups of sheep into numbers.  Another example might be solving a corn maze by first flying over it, taking a picture, and then converting the “impassable” rows of corn into lines drawn on a piece of paper.  To solve the corn maze, you first convert it to a pen drawing on paper and then solve the pen drawn maze.

3)  Removing irrelevant information or detail.  If a piece in chess is attacking the opponent’s king, then the opponent must either a) move the king, b) put a piece between the attacker and the king, or c) capture the attacking piece.  If you wanted to list the possible moves for the opponent, you can often delete from the chess board most of the pieces because most of the pieces on the board do not affect the three options above.  In the diagram below, the white pawns, rook, and king and the black pawn on a7 do not affect the legal moves for black.

[Chess diagram: Check]

In a corn maze, the type of corn, the height of the corn stalks, the temperature, or the type of soil do not affect the solution of the corn maze, so that information could be ignored.  Ignoring irrelevant information allows the brain to form a more abstract, simple model of the corn maze.

4) Variables in first order predicate logic.  “If person A is taller than person B and person B is taller than person C, then person A is taller than person C.”  In that statement, person A could be any one of the billions of people on earth.
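Written out in first-order notation with the variables made explicit, the statement is

$$\forall A \, \forall B \, \forall C \;\; \big(\mathrm{Taller}(A,B) \wedge \mathrm{Taller}(B,C)\big) \rightarrow \mathrm{Taller}(A,C).$$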

5) Establishing classes of objects.  “Fruit tends to spoil quickly.”  That statement applies to a huge variety of organic objects.  Abstraction classes can even be put in a hierarchy like Animal ==> Mammal ==> Cat ==> Siamese Cat.

6) Statistical Patterns and Compression.  If a piece of code like zip.exe compresses a corpus of English text files, we could say that the piece of code formed an abstract model of the corpus.  In the case of a compressor that uses the Lempel–Ziv–Markov chain algorithm (LZMA), the program would notice that

  • the letters ‘e’ and ‘t’ appear frequently,
  • the letter ‘u’ usually follows ‘q’, and
  • the capital letter I frequently appears with spaces on either side.

These patterns are all mini-abstractions or aspects of English.
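As a toy illustration (this is just a sketch of gathering two of the statistics above, not how LZMA itself works, and the file name corpus.txt is made up):

// C sketch: letter frequencies and the fraction of q's followed by u.

#include <stdio.h>
#include <ctype.h>

int main()
{
   FILE *fp = fopen("corpus.txt", "r");   /* hypothetical corpus file */
   if (!fp) { perror("corpus.txt"); return 1; }

   long counts[26] = {0};
   long q = 0, qu = 0;
   int prev = 0, c;

   while ((c = fgetc(fp)) != EOF) {
      if (isalpha(c))
         counts[tolower(c) - 'a']++;          /* single-letter frequencies */
      if (tolower(prev) == 'q') {
         q++;                                 /* a 'q' was seen */
         if (tolower(c) == 'u')
            qu++;                             /* ... and 'u' followed it */
      }
      prev = c;
   }
   fclose(fp);

   printf("e: %ld   t: %ld\n", counts['e' - 'a'], counts['t' - 'a']);
   if (q > 0)
      printf("'u' follows 'q' %.1f%% of the time\n", 100.0 * qu / q);
   return 0;
}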

 

A very general notion of abstraction could be any function that shortens the compressed description of some data.  That function could then be considered an abstraction of part of the data.  If your data consisted of 3-dimensional models of chairs, then a compression program could use the fact that a chair usually has four legs, a seat, and a backrest.  A compression program might benefit from a function like the one shown below.

char *DescribeChair(double *featureVector)
{
   char *description = NULL;

   /* describeLegs, describeSeatOnTopOfLegs, and describeBackRestOnTopOfSeat
      are hypothetical helpers; each appends its piece to the description. */
   description = describeLegs(description, featureVector);
   description = describeSeatOnTopOfLegs(description, featureVector);
   description = describeBackRestOnTopOfSeat(description, featureVector);

   return description;
}

 

The abstract version of a chair would help compress the data describing many chairs.

A compression program for a corpus of chess games would probably include a board evaluation function.  The program could use the evaluation function to suggest the top five most likely moves.  This would let the program encode the moves in less space by assigning each move a probability based on its “goodness” and then using, for example, arithmetic coding.  Better moves would be more likely than worse moves.  Converting positions to valuations helps the computer compress the games by making it “understand” chess.  If the computer is compressing many chess games, it may even note that some human players are more aggressive or stronger than others.  By classifying the players by skill or behavior, the program achieves better compression.
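Here is a rough sketch of that idea; the candidate moves, the evaluation scores, and the softmax constant below are all made up for illustration, and a real compressor would get its scores from the evaluation function:

// C sketch: from hypothetical move evaluations to a probability distribution.

#include <stdio.h>
#include <math.h>

int main()
{
   const char *moves[] = { "Nf3", "e4", "d4", "c4", "g3" };  /* hypothetical moves  */
   double score[]      = { 0.6,  0.5,  0.5,  0.2,  -0.1 };   /* made-up evaluations */
   int n = 5, played = 1;                                    /* suppose "e4" was played */

   /* Softmax: better evaluations get higher probability; the constant 0.25
      controls how strongly better moves are preferred. */
   double p[5], z = 0.0;
   for (int i = 0; i < n; i++) { p[i] = exp(score[i] / 0.25); z += p[i]; }
   for (int i = 0; i < n; i++) p[i] /= z;

   for (int i = 0; i < n; i++)
      printf("%-4s  p = %.3f   code length = %.2f bits\n",
             moves[i], p[i], -log2(p[i]));
   printf("Bits spent on the played move (%s): %.2f\n",
          moves[played], -log2(p[played]));
   return 0;
}

A better evaluation function sharpens this distribution around the moves strong players actually make, which is exactly what lets an arithmetic coder spend fewer bits per move.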

Well that’s it for now.

Hopefully, sometime in the future, I will be describing how category theory can help us automate those six abstractions and then use the abstractions to generate features for learning algorithms.

 

 

(Code highlighting was done by http://hilite.me/.  Thanks!)

 

 

Nuit Blanche has a nice summary of the workshop “Randomized Numerical Linear Algebra (RandNLA): Theory and Practice” at FOCS 2012 (the 53rd Annual IEEE Symposium on Foundations of Computer Science).  Faster randomized algorithms for QR, SVD, eigenvalues, least squares, and more, using random projections and other techniques, were discussed.

Erik Brynjolfsson, director of the MIT Center for Digital Business, gave a great TED talk on the recent changes in world-wide productivity, computers, machine learning, and the “great stagnation”.  There has been a stagnation in GDP per capita for the last 5 years in the United States, Europe, and Japan.  However, growth has continued unabated in India, China, and Brazil.  Economist Tyler Cowen is not particularly optimistic about the current economic situation, which he describes in his book “The Great Stagnation: How America Ate All The Low-Hanging Fruit of Modern History, Got Sick, and Will (Eventually) Get Better” ($4 Kindle edition).  Erik disagrees with the idea that the economy, technology, and science are stagnant.  Instead, we are being confused by what he calls the “great decoupling” of productivity growth from employment and median income.  He believes that we are actually in the early phases of the “new machine age”, which will create more wealth than the industrial revolution.  Here are some notes on his TED talk:

  • The introduction of electricity did not immediately lead to large productivity growth. At first, factory managers just replaced existing steam machines with electric machines. It took thirty years (one generation) before the factories were redesigned with electricity in mind and new work processes were created. Then productivity growth took off.
  • Big general purpose technologies like fire, agriculture, writing, metallurgy, printing press, steam engine, telecommunications & electricity, and now computers cause “cascades of complementary innovations”.
  • The GDP growth rate in America has been fairly steady at an average of 1.9% per year since 1800.
  • Erik graphically compares the 1960-2011 productivity growth caused partly by computers and the 1890-1940 productivity growth caused by electricity.  They appear quite similar.
  • Productivity has grown faster (2.4% per year) in the last decade than in any other decade since 1970. The average productivity growth rate was about 2.3% during the second industrial revolution.
  • World GDP has grown faster in the last decade than in any other decade in history.
  • New growth has been driven by thoughts and ideas more than physical products. This growth is hard to measure. The massive utility of free products like Google Search or Wikipedia is not included in GDP.
  • The “new machine age” is “digital, exponential, and combinatorial”.
  • Digital goods are perfectly and freely replicable, and they have near-zero transportation cost (often crossing the globe in about a minute).
  • Computer power has grown exponentially. In Erik’s words, “Computers get better faster than anything else ever.” (2013 playstation = 1996 supercomputer)
  • We are not designed by evolution to anticipate exponential trends. We expect linear trends.
  • We now have Big Data and Machine Learning which allow us to analyze everything more deeply.
  • He disagrees with the idea of “low hanging fruit” inventions (see e.g. Cowen’s book). He thinks that “each innovation produces building blocks for even more innovations”.
  • The digital, exponential, and combinatorial nature of the new machine age will lead to the greatest industrial revolution in history. (see e.g. cat transportation at 6:55 in the video).
  • Machine Learning may be the most important innovation of this revolution. (The Watson Jeopardy team improved faster than any human could by using machine learning.)  Watson’s technology is now being applied in the legal, banking, and medical fields.
  • “The great decoupling” is the decoupling of high productivity growth from employment and median income.
  • Jobs are being replaced by computers.
  • In 1997, Garry Kasparov was beaten by the supercomputer Deep Blue. Today a cell phone can beat a chess grandmaster.
  • In computer-assisted chess play (Advanced Chess), the winners are sometimes neither computer experts nor grandmasters. Instead, they are the players best at interacting with the computer, guiding its exploration of the possibilities. So, maybe we will not lose our jobs to computers; rather, we will collaborate with computers to create wonderful things.

In the article “Is chess the drosophila of artificial intelligence? A social history of an algorithm“, Nathan Ensmenger writes about the history of artificial intelligence and chess computers and provides a detailed description of the main chess algorithm, minimax tree search. Here are some notes on the article:

  • Russian mathematician Alexander Kronrod was credited with first stating that “chess was the drosophila of artificial intelligence” in 1965 to justify the use of expensive computers for playing chess.
  • In 1946, Turing suggested that chess playing computers would be an example of a thinking machine and in 1953 he wrote the first program that could play chess; however, there were no computers around that could run his program.
  • In 1950, Claude Shannon wrote the first paper on chess algorithms and proposed two main approaches:  type ‘A’ brute-force minimax search and type ‘B’, which used heavy heuristics to carefully examine a much smaller number of positions.  Type ‘A’ turned out to be superior for chess and governed the vast majority of computer chess research (a toy sketch of minimax appears after these notes).  Shannon introduced the terms “ply” and “pruning”.
  • Chess minimax algorithms were attractive because they were simple to program and modify, they required only a little memory, and they worked.  Chess was attractive to AI because there were well-known standards for play (Elo chess ratings) and ever improving computing hardware created ever improving, measurable performance in computer chess play, providing some defense against the critics of AI during the AI winter.  (see e.g. Lighthill Report)
  • Unlike the drosophila in genetics research, the simplistic minimax algorithm never led to improved theoretical understanding in AI and may have even hindered the progress of AI.
  • When Deep Blue beat Garry Kasparov, it was capable of “11,380,000,000 floating point operations per second, making it one of the top 300 supercomputers in the entire world at the time.”
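To make the type ‘A’ idea concrete, here is a toy minimax sketch (not from the article) run on a tiny hand-built game tree rather than real chess positions; leaf values are from the maximizing player's point of view.

// C sketch: plain minimax over a small hand-built game tree.

#include <stdio.h>

#define MAX_CHILDREN 2

struct node {
   int value;                            /* only meaningful at leaves */
   int nchildren;                        /* 0 marks a leaf */
   struct node *children[MAX_CHILDREN];
};

int minimax(const struct node *n, int maximizing)
{
   if (n->nchildren == 0)
      return n->value;

   int best = maximizing ? -1000000 : 1000000;
   for (int i = 0; i < n->nchildren; i++) {
      int v = minimax(n->children[i], !maximizing);
      if (( maximizing && v > best) ||
          (!maximizing && v < best))
         best = v;
   }
   return best;
}

int main()
{
   /* Two-ply tree: the opponent (minimizer) replies to each of our two moves. */
   struct node l1 = { 3, 0 }, l2 = { 12, 0 }, l3 = { 2, 0 }, l4 = { 8, 0 };
   struct node a = { 0, 2, { &l1, &l2 } };   /* opponent picks min(3, 12) = 3 */
   struct node b = { 0, 2, { &l3, &l4 } };   /* opponent picks min(2, 8)  = 2 */
   struct node root = { 0, 2, { &a, &b } };

   printf("minimax value of the root: %d\n", minimax(&root, 1));   /* prints 3 */
   return 0;
}

Real programs add alpha-beta pruning, search many plies deeper, and score the leaves with a board evaluation function, but the recursive skeleton is the same.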

Perhaps surprisingly, the article does not discuss the major improvements made to computer chess by Fabien Letouzey (Fruit) and Vasik Rajlich (Rybka) in 2006 and 2007.  Though these programs increased the strength of chess computers by 200 Elo points over one year, they had little or no impact on AI.  I guess 200 Elo points does not seem like much compared to the roughly 2000 Elo points that computer chess programs have gained between 1960 and today.  The 2000-point gain has been due to both hardware and software improvements but does not seem to stem from advances in AI.  In fact, modern chess programs do not seem to use any modern machine learning algorithms.  (Please write a comment about this if I am wrong!)

I enjoyed this simple entertaining video by Hilary Mason which is both a history of machine learning and an overview.

I’m quite excited by the Nuit Blanche post on the papers “Structure Discovery in Nonparametric Regression through Compositional Kernel Search” (Duvenaud, Lloyd, Grosse, Tenenbaum, and Ghahramani 2013) and “Exploiting compositionality to explore a large space of model structures” (Grosse, Salakhutdinov, Freeman, and Tenenbaum 2012).  For years my old company Momentum Investment Services, Carl, and I have been looking for fast, systematic ways to search large hypothesis spaces.  We considered context-free grammars as a means of generating hypotheses.  Carl and I did not get anywhere with that research, but now it seems that others have succeeded.  Be sure to look over the articles, the blog posts, and the comments.

Scott Aaronson gave a wonderful talk at NIPS 2012 called “Quantum information and the Brain“.  I really enjoyed his very concise intuitive description of quantum mechanics and his sceptical yet fair attitude towards the ideas expressed by Penrose in “The Emperor’s New Mind”.
