“…we often joke that our job, as the team that builds the experimentation platform, is to tell our clients that their new baby is ugly, …”
Andrew Gelman at Statistical Modeling, Causal Inference, and Social Science pointed me towards the paper “Trustworthy Online Controlled Experiments: Five Puzzling Outcomes Explained” by Ron Kohavi, Alex Deng, Brian Frasca, Roger Longbotham, Toby Walker, and Ya Xu, all of whom seem to be affiliated with Microsoft.
The paper recounts five online controlled experiments, mostly run at Microsoft, that produced informative, counter-intuitive results:
- Overall Evaluation Criteria for Bing
- Click Tracking
- Initial Effects
- Experiment Length
- Carryover Effects
The main lessons learned were:
- Be careful what you wish for. – Short-term effects may be diametrically opposed to long-term effects. In particular, a high number of clicks or queries per session can indicate a bug rather than success, so choosing the right metric is crucial. The authors ended up focusing on “sessions per user” rather than “queries per month”, partly because of a bug that increased queries and revenue in the short term while degrading the user’s experience (see the first sketch after this list).
- Initial results are strongly affected by “Primacy and Novelty”. – At first, users may click on a new option just because it is new (novelty), not because it is good. On the other hand, experienced users may initially be slowed by a new format even if the new format is “better” (primacy).
- If reality is constantly changing, lengthening the experiment may not improve its accuracy. The underlying behavior of the users may change every month, and a short experiment may capture only short-term behavior. Rather than running one experiment for years, the better option may be to run several short experiments and adapt the website to the changing behavior as soon as it is observed.
- If the same user is exposed to the same experiment repeatedly, her reaction is a function of how many times she has been exposed. This carryover effect must be considered when interpreting experimental results.
- The Poisson distribution should not be used to model clicks. The authors preferred the Negative Binomial, which allows the variance to exceed the mean (overdispersion), as real click counts do (see the second sketch after this list).
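To make the metric point concrete, here is a toy illustration with made-up numbers (not from the paper): a bug that degrades result quality can force users to re-issue queries, so queries per user goes up even as frustrated users return for fewer sessions.

```python
# Hypothetical numbers illustrating why "queries per user" can mislead:
# a bug that degrades results forces users to re-query (more queries),
# while frustrated users return less often (fewer sessions).
users = 1000

# Baseline: each user averages 4 sessions with 2 queries per session.
baseline_sessions = 4.0
baseline_queries = baseline_sessions * 2.0   # 8 queries/user

# Buggy variant: poor results force ~3 queries per session, but users
# come back for only 3 sessions on average.
buggy_sessions = 3.0
buggy_queries = buggy_sessions * 3.0         # 9 queries/user

print(f"queries/user:  {baseline_queries:.1f} -> {buggy_queries:.1f}  (up)")
print(f"sessions/user: {baseline_sessions:.1f} -> {buggy_sessions:.1f}  (down)")
```

Queries per user rises from 8 to 9 while sessions per user falls from 4 to 3: a team optimizing for queries (or the revenue tied to them) would have rewarded the bug, which is exactly why the authors settled on “sessions per user”.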
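And a minimal sketch of the overdispersion point, using simulated data and a moment-matching fit (neither is from the paper): click counts whose variance far exceeds their mean are fit far better by a Negative Binomial than by a Poisson.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate heavy-tailed clicks per user; a Negative Binomial (a
# Gamma-mixed Poisson) mimics a few heavy clickers among many light ones.
n_true, p_true = 0.5, 0.1            # assumed shape parameters
clicks = rng.negative_binomial(n_true, p_true, size=10_000)

mean, var = clicks.mean(), clicks.var()
print(f"mean={mean:.2f}, variance={var:.2f}")  # variance >> mean

# Poisson fit: its single parameter forces variance = mean.
pois_ll = stats.poisson(mean).logpmf(clicks).sum()

# Negative Binomial fit by moment matching:
# mean = n(1-p)/p and var = n(1-p)/p^2, so p = mean/var.
p_hat = mean / var
n_hat = mean * p_hat / (1 - p_hat)
nb_ll = stats.nbinom(n_hat, p_hat).logpmf(clicks).sum()

print(f"Poisson  log-likelihood: {pois_ll:.0f}")
print(f"NegBinom log-likelihood: {nb_ll:.0f}")  # much less negative
```

On this simulated data the Negative Binomial’s log-likelihood is far higher, which is the paper’s point: click data has a heavy tail that a single-parameter Poisson cannot capture.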
The paper is well written, easy to read, and rather informative. It is especially worthwhile for anyone working in web analytics or new to experimental statistics. I found the references below especially interesting:
- “Uncontrolled: The Surprising Payoff of Trial-and-Error for Business, Politics, and Society” by Manzi (book)
- “Web Analytics: An Hour a Day” by Kaushik (book)
- “Controlled experiments on the web: survey and practical guide” by Kohavi, Longbotham, Sommerfield, and Henne (2009)
- “Seven Pitfalls to Avoid when Running Controlled Experiments on the Web” by Crook, Frasca, Kohavi, and Longbotham (2009)