March 2013


You can find anything on the web.  Earlier today, I noticed that Stirling’s approximation and the Gaussian distribution both have a $\sqrt{2 \pi}$ in them, so I started thinking that maybe you could apply Stirling’s approximation to the binomial distribution with $p=1/2$ and large $n$ to derive the Gaussian distribution, a special case of the central limit theorem.  After making a few calculations it seemed to be working and I thought, “Hey, maybe someone else has done this.”  Sure enough, Jake Hoffman did it and posted it here.  The internet is pretty cool.
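
For the record, here is the gist of that calculation (the classical de Moivre–Laplace argument, in my notation rather than Jake’s).  The symmetric binomial puts probability $P(k) = \binom{n}{k} 2^{-n}$ on $k$ heads.  Applying Stirling’s formula $n! \approx \sqrt{2 \pi n}\,(n/e)^n$ to the three factorials in $\binom{n}{k}$, with $k = n/2 + x$ and $x$ much smaller than $n$, gives $P(n/2 + x) \approx \sqrt{2/(\pi n)}\, e^{-2x^2/n}$, which is exactly the Gaussian density with mean $n/2$ and variance $n/4$; the $\sqrt{2 \pi}$ in the Gaussian is the $\sqrt{2 \pi}$ from Stirling.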

In “Generalized Thompson sampling for sequential decision-making and causal inference“, Ortega and Braun (2013) give a short history of Thompson sampling (see [1][2][3][4]) and report on the relationship between intelligent agents, evolutionary game theory, Bayesian inference, KL Divergence, and Thompson sampling.  They develop a generalization of Thompson sampling based on a Bayesian prior over a distribution of environments.  Some numerical results are provided.  Here’s the abstract:

Recently, it has been shown how sampling actions from the predictive distribution over the optimal action—sometimes called Thompson sampling—can be applied to solve sequential adaptive control problems, when the optimal policy is known for each possible environment. The predictive distribution can then be constructed by a Bayesian superposition of the optimal policies weighted by their posterior probability that is updated by Bayesian inference and causal calculus. Here we discuss three important features of this approach. First, we discuss in how far such Thompson sampling can be regarded as a natural consequence of the Bayesian modeling of policy uncertainty. Second, we show how Thompson sampling can be used to study interactions between multiple adaptive agents, thus, opening up an avenue of game-theoretic analysis. Third, we show how Thompson sampling can be applied to infer causal relationships when interacting with an environment in a sequential fashion. In summary, our results suggest that Thompson sampling might not merely be a useful heuristic, but a principled method to address problems of adaptive sequential decision-making and causal inference.
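
To make the basic idea concrete, here is a minimal sketch of classic Thompson sampling for a Bernoulli bandit (my toy example with made-up payoff rates, not the generalized version developed in the paper): sample a payoff rate for each arm from its posterior, act greedily on the sample, and update the posterior with the observed reward.

    import random

    # Toy Bernoulli bandit; the true payoff rates are hypothetical and unknown to the agent.
    true_rates = [0.3, 0.5, 0.6]
    alpha = [1] * len(true_rates)  # Beta posterior parameters: successes + 1
    beta = [1] * len(true_rates)   # Beta posterior parameters: failures + 1

    for t in range(10000):
        # Sample a plausible rate for each arm from its Beta posterior,
        # then pull the arm whose sampled rate is highest.
        samples = [random.betavariate(alpha[i], beta[i]) for i in range(len(true_rates))]
        arm = samples.index(max(samples))
        reward = 1 if random.random() < true_rates[arm] else 0
        # Bayesian update of the chosen arm only.
        alpha[arm] += reward
        beta[arm] += 1 - reward

    print("posterior means:", [round(a / (a + b), 3) for a, b in zip(alpha, beta)])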

I enjoyed this simple, entertaining video by Hilary Mason, which is both a history and an overview of machine learning.

I was reading MarkCC’s gripe about the misuse of the word dimension and it reminded me about the Hausdorff dimension.  Generally speaking, the Hausdorff dimension matches our normal intuitive definition of dimension, i.e. the H-dimension of a smooth curve or line is 1, the H-dimension of a smooth surface is 2, and the H-dimension of the interior of a cube is 3.  But for fractals, the H-dimension can be a non-integer.  For the Cantor set, the H-dimension is about 0.63.  For more information, check out the Wikipedia article:


http://en.wikipedia.org/wiki/Hausdorff_dimension
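
The 0.63 is not arbitrary: the Cantor set consists of $N = 2$ copies of itself, each scaled down by a factor of $r = 1/3$, so its similarity dimension, which coincides with its Hausdorff dimension here, is $\log N / \log(1/r) = \log 2 / \log 3 \approx 0.6309$.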


I’m quite excited by the Nuit Blanche post on the papers “Structure Discovery in Nonparametric Regression through Compositional Kernel Search” (Duvenaud, Lloyd, Grosse, Tenenbaum, and Ghahramani 2013) and “Exploiting compositionality to explore a large space of model structures” (Grosse, Salakhutdinov, Freeman, and Tenenbaum 2012).  For years my old company Momentum Investment Services, Carl, and I have been looking for fast, systematic ways to search large hypothesis spaces.  We considered context-free grammars as a means of generating hypotheses.  Carl and I did not get anywhere with that research, but now it seems that others have succeeded.  Be sure to look over the article, the blog posts, and the comments.
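
To give a flavor of the grammar idea, here is a toy sketch of my own (not the authors’ code): start from a few base kernels and repeatedly combine them with addition and multiplication, which generates a combinatorially large space of model structures for the search to score against the data.

    # Toy sketch: grow kernel structures with a tiny context-free grammar.
    # The base kernel names are illustrative (squared exponential, linear, periodic).
    base = ["SE", "Lin", "Per"]

    def expand(expr):
        # One production step: combine the current expression with any base kernel.
        return [f"({expr} + {b})" for b in base] + [f"({expr} * {b})" for b in base]

    frontier = list(base)
    for depth in range(2):  # two rounds of expansion
        frontier = [new for expr in frontier for new in expand(expr)]

    print(len(frontier), "candidate structures, e.g.", frontier[:3])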

In “Algorithm Portfolio Design: Theory vs. Practice“, Gomes and Selman (2013) study the use of a portfolio of stochastic search algorithms to solve computationally hard search problems.  Here are some interesting quotes from the paper:

“Our studies reveal that in many cases the performance of a single algorithm dominates all others, on the problem class under consideration.”

“Given the diversity in performance profiles among algorithms, various approaches have been developed to combine different algorithms to take into account the computational resource constraints and to optimize the overall performance. These considerations led to the development of anytime algorithms (Dean and Boddy 1988), decision theoretic metareasoning and related approaches (Horvitz and Zilberstein 1996; Russell and Norvig 1995), and algorithm portfolio design (Huberman et al. 1997).”

“In addition, we also show that a good strategy for designing a portfolio is to combine many short runs of the same algorithm. The effectiveness of such portfolios explains the common practice of  “restarts” for stochastic procedures, where the same algorithm is run repeatedly with different initial seeds for the random number generator. (For related work on the effectiveness of restarts, see e.g., Aldous and Vazirani 1994; Ertel 1991; Selman and Kirkpatrick 1996.)”
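
The benefit of many short runs is easy to reproduce in a toy simulation (my sketch, not the paper’s experiments): when run times are heavy-tailed, a run is either lucky and finishes quickly or unlucky and takes a very long time, and cutting off the unlucky runs and restarting gives a fresh draw.

    import random

    def run_search(rng):
        # Hypothetical heavy-tailed runtime (in steps) of one run of a randomized search.
        return rng.randint(1, 100) if rng.random() < 0.8 else rng.randint(50_000, 100_000)

    def with_restarts(rng, cutoff=200):
        # Restart with a short cutoff until one run finishes within the cutoff.
        total = 0
        while True:
            t = run_search(rng)
            if t <= cutoff:
                return total + t   # a lucky run finished in time
            total += cutoff        # abandon the unlucky run and restart

    single = sum(run_search(random.Random(i)) for i in range(1000)) / 1000
    restarted = sum(with_restarts(random.Random(i)) for i in range(1000)) / 1000
    print(f"mean steps without restarts: {single:.0f}, with restarts: {restarted:.0f}")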

Scott Aaronson gave a wonderful talk at NIPS 2012 called “Quantum information and the Brain“.  I really enjoyed his very concise intuitive description of quantum mechanics and his sceptical yet fair attitude towards the ideas expressed by Penrose in “The Emperor’s New Mind”.

This is just too cute.  Type


data:text/html, <html contenteditable>


into the browser’s address bar and then start typing in the blank page that appears.


Courtesy of Jose Jesus Perez Aguinaga (https://coderwall.com/p/lhsrcq)!


“…we often joke that our job, as the team that builds the experimentation platform, is to tell our clients that their new baby is ugly, …”

Andrew Gelman at Statistical Modeling, Causal Inference, and Social Science pointed me towards the paper “Trustworthy Online Controlled Experiments:  Five Puzzling Outcomes Explained” by Ron Kohavi, Alex Deng, Brian Frasca, Roger Longbotham, Toby Walker, and Ya Xu all of whom seem to be affiliated with Microsoft.

The paper recounts five online controlled experiments, mostly run at Microsoft, that produced informative, counter-intuitive results:

  • Overall Evaluation Criteria for Bing
  • Click Tracking
  • Initial Effects
  • Experiment Length
  • Carry Over Effects

The main lessons learned were:

  • Be careful what you wish for. – Short-term effects may be diametrically opposed to long-term effects.  Specifically, a high number of clicks or queries per session could be indicative of a bug rather than success.  It’s important to choose the right metric.  The authors ended up focusing on “sessions per user” as a metric rather than “queries per month”, partly because of a bug which increased queries and revenue in the short term while degrading the user’s experience.
  • Initial results are strongly affected by “Primacy and Novelty”. – In the beginning, users may click on a new option just because it is new, not because it is good.  On the other hand, experienced users may initially be slowed down by a new format even if the new format is “better”.
  • If reality is constantly changing, running the experiment longer may not improve its accuracy.  The underlying behavior of the users may change every month, and a short-term experiment may only capture short-term behavior.  Rather than running the experiment for years, the best option may be to run several short-term experiments and adapt the website to the changing behavior as soon as the new behavior is observed.
  • If the same user is presented with the same experiment repeatedly, her reaction to the experiment is a function of the number of times she has been exposed to the experiment.  This effect must be considered when interpreting experimental results.
  • The Poisson distribution should not be used to model clicks; the authors preferred the Negative Binomial distribution (see the sketch after this list).
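
On the last point, the issue is over-dispersion: real click counts have a variance much larger than their mean, which a plain Poisson model cannot express but a Gamma-mixed Poisson, i.e. a Negative Binomial, can.  Here is a toy simulation of my own (not the authors’ data) showing the effect:

    import math
    import random

    def poisson(lam, rng):
        # Simple Poisson sampler (Knuth's method); fine for small lam.
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            k += 1
            p *= rng.random()
            if p <= L:
                return k - 1

    rng = random.Random(0)
    # Each simulated user gets their own click rate from a Gamma distribution, then a
    # Poisson click count given that rate; marginally this is a Negative Binomial.
    counts = [poisson(rng.gammavariate(0.5, 4.0), rng) for _ in range(20000)]
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    print(f"mean = {mean:.2f}, variance = {var:.2f}")  # variance >> mean, so not Poisson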

The paper is easy to read, well written, and rather informative.  It is especially good for web analytics and for anyone new to experimental statistics.  I found several of the references cited in the paper to be particularly interesting.


Pedro Domingos wrote this wonderful introductory overview of machine learning which teaches twelve short lessons about classifiers:

  1. LEARNING = REPRESENTATION + EVALUATION + OPTIMIZATION
  2. IT’S GENERALIZATION THAT COUNTS
  3. DATA ALONE IS NOT ENOUGH
  4. OVERFITTING HAS MANY FACES
  5. INTUITION FAILS IN HIGH DIMENSIONS
  6. THEORETICAL GUARANTEES ARE NOT WHAT THEY SEEM
  7. FEATURE ENGINEERING IS THE KEY
  8. MORE DATA BEATS A CLEVERER ALGORITHM
  9. LEARN MANY MODELS, NOT JUST ONE
  10. SIMPLICITY DOES NOT IMPLY ACCURACY
  11. REPRESENTABLE DOES NOT IMPLY LEARNABLE
  12. CORRELATION DOES NOT IMPLY CAUSATION

The lessons are augmented with excellent examples, so I highly recommend reading the simply written, short article.

I put together some notes on, and quotes from, these lessons, which I share below.


LEARNING = REPRESENTATION + EVALUATION + OPTIMIZATION – Creating a classifier involves three key ingredients: a representation, an objective function, and an optimization technique.  For example, in $k$-means the domain is divided up by a Voronoi diagram and each region has a different label, or possibly a mixture of labels; this is the representation.  The objective function does not involve the labels, only the input data: it is the sum of the squared distances from each input datum to the “center” of the region to which it belongs.  The optimization technique is greedy alternation between optimizing the center of each region and the assignment of data to regions.
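
Here is a minimal one-dimensional sketch of those three pieces for $k$-means (Lloyd’s algorithm), with made-up data:

    import random

    def kmeans(data, k, iters=20, seed=0):
        rng = random.Random(seed)
        centers = rng.sample(data, k)  # representation: k centers inducing Voronoi regions
        for _ in range(iters):
            # optimization, assignment step: each point joins its nearest center's region
            clusters = [[] for _ in range(k)]
            for x in data:
                clusters[min(range(k), key=lambda j: (x - centers[j]) ** 2)].append(x)
            # optimization, update step: each center moves to the mean of its region
            centers = [sum(c) / len(c) if c else centers[j] for j, c in enumerate(clusters)]
        # evaluation: the sum-of-squared-distances objective being (greedily) minimized
        cost = sum(min((x - c) ** 2 for c in centers) for x in data)
        return centers, cost

    rng = random.Random(1)
    data = [rng.gauss(m, 0.3) for m in (0.0, 5.0, 10.0) for _ in range(30)]
    centers, cost = kmeans(data, k=3)
    print(sorted(round(c, 2) for c in centers), round(cost, 2))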

IT’S GENERALIZATION THAT COUNTS – “The fundamental goal of machine learning is to generalize beyond the examples in the training set.”  Merely learning the training set is both trivial and insufficient. Often doing well on the training set is a bad indicator of performance outside the training set.  Generalization is the only way to label inputs that are not in the training set.  Unfortunately, Wolpert’s “no free lunch” theorems demonstrate that no generalization method can always beat random guessing for learning problems without structure  (i.e.  Martin-Löf random classification problems do not admit generalization.)

OVERFITTING HAS MANY FACES – If we do not have enough data or time to determine the correct representation or the correct number of parameters, then we introduce either bias errors, when we use the wrong representation, or overfitting errors, when we use too many parameters (the bias-variance trade-off).  The fact that trying too many hypotheses carries the same dangers as using too many parameters implies that, “contrary to intuition, a more powerful learner is not necessarily better than a less powerful one” and that “strong false assumptions can be better than weak true ones, because a learner with the latter needs more data to avoid overfitting.”  Cross-validation, statistical significance tests, controlling the false discovery rate, and regularization terms help prevent most overfitting.  Overfitting can occur even when all the data is correctly labelled.
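
As a reminder of what the simplest of those guards looks like, here is a minimal sketch of $k$-fold cross-validation, where fit and error are hypothetical stand-ins for a learner and a loss function:

    def k_fold_cv(data, k, fit, error):
        folds = [data[i::k] for i in range(k)]  # simple round-robin split
        scores = []
        for i in range(k):
            held_out = folds[i]
            train = [x for j, fold in enumerate(folds) if j != i for x in fold]
            model = fit(train)
            scores.append(sum(error(model, x) for x in held_out) / len(held_out))
        return sum(scores) / k  # estimate of out-of-sample error

    # Example: "fit" a model that is just the training mean and score it with squared error.
    data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
    print(k_fold_cv(data, 3, fit=lambda d: sum(d) / len(d), error=lambda m, x: (x - m) ** 2))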

INTUITION FAILS IN HIGH DIMENSIONS – Most algorithms have more trouble in higher-dimensional spaces, and our intuition fails there too.  Nearest neighbor for simple classification problems tends to fail in more than 100 dimensions.  Our intuition, built on experience with three-dimensional space, fails to grasp facts like: in a 200-dimensional space, a hypersphere with radius 0.95 has less than 1% of the volume of a hypersphere of radius 1.  So for a Gaussian in such a space, the majority of the probability mass is not near the mean; instead it lies in a “shell” around the mean.  Similarly, the volume of a hypersphere is far less than that of the smallest hypercube that contains it.  These counter-intuitive facts wreak havoc on several machine learning algorithms, and it is hard to fix or adjust an algorithm when we cannot visualize the problem in a high-dimensional space.  So dimensionality reduction via manifold learning, PCA, LDA, ICA, and feature selection becomes very important.
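
Both of those facts are easy to check numerically (a quick sanity check of my own):

    import math

    d = 200
    # Fraction of a unit d-ball's volume lying within radius 0.95 of the center.
    print(0.95 ** d)                        # about 3.5e-05, far less than 1%
    # Volume of the unit d-ball versus the enclosing hypercube of side 2.
    ball = math.pi ** (d / 2) / math.gamma(d / 2 + 1)
    print(ball / 2.0 ** d)                  # vanishingly small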

THEORETICAL GUARANTEES ARE NOT WHAT THEY SEEM – Domingos repeats the well-known argument that if bad classifiers are wrong with probability greater than $\epsilon$ and the number of candidate classifiers is $H$, then $n = {1\over\epsilon}(\ln(H) + \ln({1\over\delta}))$ randomly chosen examples are sufficient to eliminate all of the bad classifiers with probability greater than $1- \delta$.  This works well as long as $H$ is not large, but often $H$ is huge: the number of possible classifiers over $k$ binary features is $2^{2^k}$, so the bound above is often not useful, and the number of hypotheses can even be effectively infinite.  Similarly, if the number of bad classifiers is large compared to the number of good classifiers, then $\delta$ needs to be very small.  Other performance bounds hold only in the limit (e.g. infinite data) or contain unknown constants, so they merely give the user hope rather than a guarantee that the resulting classifier is correct.
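
To put a number on it (my arithmetic, not the paper’s): with $k = 20$ binary features there are $H = 2^{2^{20}}$ possible classifiers, so $\ln(H) = 2^{20} \ln 2 \approx 7.3 \times 10^{5}$, and even with generous settings like $\epsilon = 0.1$ and $\delta = 0.05$ the bound asks for roughly $n \approx 7.3 \times 10^{6}$ examples, far more than most data sets contain.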

FEATURE ENGINEERING IS THE KEY –  Machine learning is often an iterative process of generating features, choosing algorithms, and analyzing the results.  Creating useful features is often difficult and can require domain knowledge or luck, so “there is ultimately no replacement for the smarts you put into feature engineering.”

LEARN MANY MODELS, NOT JUST ONE – Use ensemble learning, including bagging, boosting, bootstrapping, stacking, Bayesian optimization, mixtures of experts, and plain old averaging/voting.  These ensembles of well-known methods work and win contests like the Netflix Prize.  Bayesian model averaging (BMA) is theoretically optimal, but it tends to put nearly all of its weight on a single learner, so it should not be considered ensemble learning.
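
The simplest version of the idea is plain majority voting, which already shows why ensembles help.  In this toy sketch of mine, each base classifier is right 65% of the time and the errors are assumed independent, an assumption real base learners only partly satisfy:

    import random
    from collections import Counter

    def weak_predict(true_label, rng):
        # A hypothetical weak classifier: correct 65% of the time, independently of the others.
        return true_label if rng.random() < 0.65 else 1 - true_label

    rng = random.Random(0)
    trials, single_correct, vote_correct = 10000, 0, 0
    for _ in range(trials):
        y = rng.randint(0, 1)
        votes = [weak_predict(y, rng) for _ in range(25)]
        single_correct += votes[0] == y
        vote_correct += Counter(votes).most_common(1)[0][0] == y

    print(f"single learner: {single_correct / trials:.2f}, 25-vote ensemble: {vote_correct / trials:.2f}")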

MORE DATA BEATS A CLEVERER ALGORITHM – If you are only going to read one section, read this section.  The title itself is self-explanatory, but Domingos adds several interesting insights in this section like,

“…even though in principle more data means that more complex classifiers can be learned, in practice simpler classifiers wind up being used, because complex ones take too long to learn.”

“As a rule, it pays to try the simplest learners first (e.g., naive Bayes before logistic regression, k-nearest neighbor before support vector machines). More sophisticated learners are seductive, but they are usually harder to use, because they have more knobs you need to turn to get good results, and because their internals are more opaque.”,

“Part of the reason using cleverer algorithms has a smaller payoff than you might expect is that, to a first approximation, they all do the same. This is surprising when you consider representations as different as, say, sets of rules and neural networks. But in fact propositional rules are readily encoded as neural networks, and similar relationships hold between other representations. All learners essentially work by grouping nearby examples into the same class; the key difference is in the meaning of “nearby.” “,

“In the end, the biggest bottleneck is not data or CPU cycles, but human cycles. In research papers, learners are typically compared on measures of accuracy and computational cost. But human effort saved and insight gained, although harder to measure, are often more important. This favors learners that produce human-understandable output (e.g., rule sets). And the organizations that make the most of machine learning are those that have in place an infrastructure that makes experimenting with many different learners, data sources and learning problems easy and efficient, and where there is a close collaboration between machine learning experts and application domain ones.”,

and

“Learners can be divided into two major types: those whose representation has a fixed size, like linear classifiers, and those whose representation can grow with the data, like decision trees.”

He says that variable-size learners can, in theory, learn any class, but in practice they often fall into local optima, take too much time, or fall prey to the curse of dimensionality, so these methods require a lot of experimentation and thought on the part of the analyst.

Lastly, generative models are more fun.
