January 2013


I really enjoyed Jeremy Kun’s article on Information Distance at Math ∩ Programming.

Check out Ross Rosen’s collection of tools for machine learning with Python.

Caleb Crain wrote

“The mention of military technology brings me to my last idea. This is the challenge of the robot utopia. You remember the robot utopia. You imagined it when you were in fifth grade, and your juvenile mind first seized with rapture upon the idea of intelligent machines that would perform dull, repetitive tasks yet demand nothing for themselves. In the future, you foresaw, robots would do more and more, and humans less and less. There would be no need for humans to endanger themselves in coal mines or bore themselves on assembly lines. A few people would always be needed to repair and build the robots, and this drudgery of robot supervision would have to be rewarded somehow, but someday robots would surely make wealth so abundant that most people wouldn’t need to work and would be free merely to enjoy and cultivate themselves—by, say, hunting in the morning, fishing in the afternoon, and doing literary criticism after dinner.

Your fifth-grade self was wrong, of course. Robots aren’t altruistic beings; they’re capital investments; and though robots may not ask to be paid, their owners demand a return on their investment. We now live in the robot utopia, which isn’t one. Thanks in large part to computerized mechanization, manufacturing productivity in the past century has increased many times over. Standards of living are higher than they ever were, but we no longer need as many humans to work as we once did. Perhaps not coincidentally, human wages, in America at least, have stagnated since the 1970s. If humans made no more money in the past four decades, where did the wealth created by the higher productivity go? Toward robot wages, as it were. The owners of the robots took the money—that is, the capitalists. Any fifth-grader can see where this leads. At some point society has to choose. Either society accepts the robots’ gift as a general one, and redistributes the wealth that the robots inadvertently concentrate, or society allows the robots to become the exclusive tools of an ever-shrinking elite, increasingly resented, in confused fashion, by the people whom the robots have displaced.”

in “Debtmageddon vs. the Robot Utopia”.

Srivastava and Schmidhuber have recently written about their first experiences with Power Play.  Power Play is a self-modifying general problem solver that uses algorithmic complexity theory and ideas similar to Levin Search and AIQ/AIXI (see [1], [2], and [3]) to find novel solutions to problems and to invent new “interesting” problems.  I was more interested in Hutter’s approximations to AIXI, mainly because the results were easy to understand when they were applied to simple games (1d-maze, Cheese Maze, Tiger, TicTacToe, Biased Rock-Paper-Scissors, Kuhn Poker, and Partially Observable Pacman), but I look forward to future papers on Power Play.


Check out WhiteSwami’s list of subjects to master if you are going to become a machine learner.

David Andrzejewski at Bayes’ Cave wrote up a nice summary of practical machine learning advice from the KDD 2011 paper “Detecting Adversarial Advertisements in the Wild”.  I’ve quoted below several of the main points from David’s summary:

  • ABE: Always Be Ensemble-ing
  • Throw a ton of features at the model and let L1 sparsity figure it out
  • Map features with the “hashing trick” (see the sketch after this list)
  • Handle the class imbalance problem with ranking
  • Use a cascade of classifiers
  • Make sure the system “still works” as its inputs evolve over time
  • Make efficient use of expert effort
  • Allow humans to hard-code rules
  • Periodically use non-expert evaluations to make sure the system is working
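
Of these points, the “hashing trick” is the easiest to show in code.  Below is a minimal sketch in Python, not the paper’s implementation; the feature strings and the dimension 2**12 are invented for illustration.  Arbitrary string features are hashed into a fixed-length vector, so the feature space stays bounded no matter how many new feature values show up:

    import hashlib

    import numpy as np

    def hash_features(tokens, dim=2**12):
        """Map arbitrary string features into a fixed-size vector.

        Colliding features share a slot; the hash-derived sign makes
        collisions roughly cancel instead of always adding up.
        """
        x = np.zeros(dim)
        for tok in tokens:
            h = int(hashlib.md5(tok.encode("utf-8")).hexdigest(), 16)
            sign = 1.0 if (h >> 1) % 2 == 0 else -1.0
            x[h % dim] += sign
        return x

    # Hypothetical ad features; any string can be a feature.
    x = hash_features(["domain=example.com", "word=free", "word=pills"])

Because the output dimension is fixed, the feature pipeline never has to be rebuilt when adversaries invent new spammy strings, which fits the adversarial setting of the paper.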

According to Figure 10 of “The Long-Term Impacts of Teachers: Teacher Value-Added and Student Outcomes in Adulthood” by Chetty, Friedman, and Rockoff, high-quality elementary school teachers increase the lifetime earnings of their students by about $200,000 per child.


In “Improving neural networks by preventing co-adaptation of feature detectors“, Hinton, Srivastava, Krizhevsky, Sutskever, and Salakhutdinov ask what happens if, “on each presentation of each training case, each hidden unit is randomly omitted from the network with a probability of 0.5, so a hidden unit cannot rely on other hidden units being present.”  This mimics the standard technique of training several neural nets and averaging them, but it is faster.  When they applied this “dropout” technique to deep neural nets on the MNIST handwritten digit data set and the TIMIT speech data set, they got robust learning without overfitting.  Dropout was also one of the main techniques used by the winners of the Merck Molecular Activity Challenge.
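
As a rough sketch of the idea, not the authors’ code (the ReLU nonlinearity and the layer sizes below are my own choices for illustration), dropout on a single hidden layer looks like this in numpy:

    import numpy as np

    rng = np.random.default_rng(0)

    def hidden_train(x, W, b, p_drop=0.5):
        # Each hidden unit is omitted with probability p_drop, so no
        # unit can rely on any other unit being present.
        h = np.maximum(0.0, x @ W + b)          # ReLU activations
        mask = rng.random(h.shape) >= p_drop    # keep roughly half the units
        return h * mask

    def hidden_test(x, W, b, p_drop=0.5):
        # At test time all units are used, scaled by the keep probability;
        # this approximates averaging the exponentially many "thinned"
        # networks sampled during training.
        h = np.maximum(0.0, x @ W + b)
        return h * (1.0 - p_drop)

    # Hypothetical sizes: a mini-batch of 4 cases, 20 inputs, 50 hidden units.
    W = rng.normal(scale=0.1, size=(20, 50))
    b = np.zeros(50)
    h = hidden_train(rng.normal(size=(4, 20)), W, b)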

Hinton talks about the dropout technique in his video Brains, Sex, and Machine Learning.

“Aaron Swartz (1986-2013)”

In “Analysis of Thompson Sampling for the multi-armed bandit problem“, Agrawal and Goyal show that Thompson Sampling (“The basic idea is to choose an arm to play according to its probability of being the best arm.”) has logarithmic regret. More specifically, if there are $n$ arms and the regret $R$ at time $T$ is defined by

$$R(T) := \sum_{t=1}^T \left(\mu^* - \mu_{i(t)}\right)$$

where $\mu_i$ is the expected return of the $i$th arm and $\mu^* = \max_{i = 1, \ldots, n} \mu_i$, then

$$E[R(T)] \in O\left(\left(\sum_{i\neq i^*} 1/\Delta_i^2\right)^2 \log T \right)$$

where $\mu_{i^*} = \mu^*$ and $\Delta_i = \mu^* - \mu_i$.
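
For intuition, here is a minimal Thompson Sampling loop for Bernoulli-reward arms with Beta posteriors, the setting Agrawal and Goyal analyze; the three arm means are made up for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    true_means = np.array([0.3, 0.5, 0.7])  # hypothetical arm means
    wins = np.zeros(3)    # posterior for arm i is Beta(1 + wins[i], 1 + losses[i])
    losses = np.zeros(3)

    T = 10_000
    regret = 0.0
    for t in range(T):
        # Draw one sample from each arm's posterior and play the arm with
        # the largest sample, i.e., play each arm with the probability that
        # it is currently believed to be the best arm.
        i = int(np.argmax(rng.beta(1 + wins, 1 + losses)))
        reward = rng.random() < true_means[i]
        wins[i] += reward
        losses[i] += 1 - reward
        regret += true_means.max() - true_means[i]

    print(regret)  # the expected regret grows like log T, per the bound above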
