I recently came across this remarkable randomized algorithm due to Seidel (1991) for linear programming in 2 dimensions that takes only $O(n)$ time for $n$ constraints (in expectation). It's definitely worth a close inspection, so I thought I would write a short article about it.
Note that my presentation of this algorithm assumes that there are no redundant or collinear constraints, that the solution is bounded, and that the solution for each subproblem encountered is unique. These assumptions are easy to remove without slowing the algorithm down, but they complicate the presentation a little.
INPUT: Set of constraints $H$ and cost vector $c$.
OUTPUT: A point $x$ in the polytope defined by $H$ minimizing $c \cdot x$.
BASE CASE: If $|H| = 2$ then return the unique intersection point.
NON-BASE CASE:
- Pick a constraint $h$ uniformly at random from $H$.
- $x$ = lpsolve($H \setminus \{h\}$, $c$).
- If $x$ is in the half-space defined by $h$, RETURN $x$; otherwise:
- RETURN $x'$, the solution of the 1D LP consisting of the projection of $H \setminus \{h\}$ onto the boundary of $h$.
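To make this concrete, here is a minimal Python sketch of the equivalent incremental form of the recursion (my own illustration, not Seidel's code). It adds a large bounding box so that every subproblem has a bounded, unique solution, per the assumptions above:

import random

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def solve_1d(h, others, c):
    # 1D LP: minimize c.x along the boundary line a.x = b of h,
    # subject to the other constraints projected onto that line.
    a, b = h
    p = (a[0] * b / dot(a, a), a[1] * b / dot(a, a))  # a point on the line
    d = (-a[1], a[0])                                  # direction along it
    lo, hi = float("-inf"), float("inf")
    for a2, b2 in others:
        slope = dot(a2, d)
        if slope > 0:
            hi = min(hi, (b2 - dot(a2, p)) / slope)
        elif slope < 0:
            lo = max(lo, (b2 - dot(a2, p)) / slope)
    t = lo if dot(c, d) > 0 else hi  # move against the objective direction
    return (p[0] + t * d[0], p[1] + t * d[1])

def lpsolve(constraints, c):
    # Minimize c.x subject to a.x <= b for each (a, b) in constraints.
    # The box constraints keep every subproblem bounded; the starting
    # point is the box corner minimizing the objective.
    M = 1e9
    box = [((1, 0), M), ((-1, 0), M), ((0, 1), M), ((0, -1), M)]
    x = (-M if c[0] > 0 else M, -M if c[1] > 0 else M)
    cons = list(constraints)
    random.shuffle(cons)  # random insertion order: the key to O(n) time
    for k, (a, b) in enumerate(cons):
        if dot(a, x) > b:  # old solution violates the new constraint
            x = solve_1d((a, b), box + cons[:k], c)
    return x

# x >= 0, y >= 0, x + y <= 4; minimize -x - 2y (i.e. maximize x + 2y)
H = [((-1, 0), 0), ((0, -1), 0), ((1, 1), 4)]
print(lpsolve(H, (-1, -2)))  # expect (0.0, 4.0)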
This algorithm is quite remarkable in its simplicity. It takes the form of a recursive algorithm, where we solve a base case, then add the remaining constraints one at a time. Every time we add a constraint, we consider whether it changes the solution. It's only when the old solution is outside the added constraint that we have to do any work, namely solving a simple 1D LP, which is trivially $O(k)$ time for $k$ constraints.
If we had to solve the 1D LP subproblem at every step the algorithm would have quadratic running time $O(n^2)$. We avoid this because we pick the constraint to recurse on at each step randomly. Clearly we only have to solve that 1D LP if the randomly chosen constraint $h$ forms part of the new solution to the problem (ignoring degenerate solutions), and since a solution is the intersection of 2 constraints, there is only a 2 in $k$ chance that we randomly pick such a constraint at the level with $k$ constraints.
It follows that the cost of each recursion in expectation is $\frac{2}{k} \times O(k)$, which is just $O(1)$, and we recurse $n$ times, giving us an $O(n)$ running time in expectation.
In the original presentation by Seidel, the algorithm is given in a general form, usable in higher dimensions as well. The difficulty is that the running time quickly gets out of hand as the dimension increases. The main obstacle is step 4 above: you essentially need to solve a $(d-1)$-dimensional LP, which is only slightly easier than your original LP in higher dimensions.
On a side note, there is an interesting discussion of this algorithm and related ones in the Wikipedia page for LP-type problems.
Seidel, Raimund (1991), "Small-dimensional linear programming and convex hulls made easy", Discrete and Computational Geometry, 6 (5): 423–434
The Bayes factor test is an interesting thing. Some Bayesians advocate it unequivocally, whereas others reject the notion of testing altogether, Bayesian or otherwise. This post takes a critical look at the Bayes factor, attempting to tease apart the ideas to get to the core of what it's really doing. If you're used to frequentist tests and want to understand Bayes factors, this post is for you.
The Bayes factor test goes all the way back to Jeffreys' early book on the Bayesian approach to statistics [Jeffreys, 1939]. Evidence for an alternative hypothesis against that of the null hypothesis is summarized by a quantity known as the Bayes factor. The Bayes factor is just the ratio of the data likelihoods, under both hypotheses and integrating out any nuisance parameters:

$$BF_{10} = \frac{p(D \mid H_1)}{p(D \mid H_0)} = \frac{\int p(D \mid \theta_1, H_1)\, p(\theta_1 \mid H_1)\, d\theta_1}{\int p(D \mid \theta_0, H_0)\, p(\theta_0 \mid H_0)\, d\theta_0}$$
If this ratio is large, we can conclude that there is strong evidence for the alternative hypothesis. In contrast, if the inverse of this ratio is large, we have evidence supporting the null hypothesis. The deﬁnition of what is strong evidence is subjective, but usually a Bayes factor of 10 or more is considered suﬃcient.
Although Bayes factors are sometimes used for testing simple linear regression models against more complex ones, by far the most common test in practice is the analogue to the frequentist t-test, the Bayes factor t-test. Under the assumption of normality with unknown variance, it tests a null hypothesis of zero mean against non-zero mean.
This test is implemented in the BayesFactor R package with the ttestBF method. For the pairwise case it can be invoked with bf <- ttestBF(x = xdata, y=ydata). For a non-pairwise test just x can be passed in.
Statistical details of the test

To apply the test we must formally define the two hypotheses in the form of statistical models. Jeffreys' book is very pragmatic, and the modelling choices for this test reflect that. Since this is a t-test, we of course assume a normal distribution for the data under both hypotheses, so it just remains to define the priors on its parameters in both cases.
We have essentially 3 parameters we have to place priors on: the mean under the alternative, the variance under the alternative, and the variance under the null. The mean under the null is assumed to be $0$. We have to be extremely careful about the priors we place on these parameters. Reasonable-seeming choices can very easily lead to divergent integrals or nonsensical results. We discuss this further below.
Variance

Because the variance appears in both the numerator and denominator of the Bayes factor (the null and the alternative), we can use a "non-informative" improper prior of the form $p(\sigma^2) \propto 1/\sigma^2$ for both. On its own this prior makes the marginal likelihood integrals ill-defined, but the offending factor cancels out between the numerator and denominator, giving a finite value.
Mean

The choice of a prior for the mean is a little more subjective. We can't use an improper prior here, as the mean prior appears only in the numerator, and without the convenient cancellation with factors in the denominator, the Bayes factor won't be well defined. Jeffreys argues for a Cauchy prior, which is just about the most vague prior you can use that still gives a convergent integral. The scale parameter of the Cauchy is simply fixed to the standard deviation in his formulation. The BayesFactor package uses 0.707 times the standard deviation instead as the default, but this can be overridden.
What are the downsides compared to a frequentist $t$-test?

The main downside is that the BF test is not as good by some frequentist measures as the tests that are designed to be as good as possible with respect to those measures. In particular, consider the standard measure of power at an effect size $\delta$: the probability of picking up an effect of size $\delta$ under repeated replications of the experiment. Depending on the effect size and the number of samples $n$, a Bayesian $t$-test often requires 2-4 times as much data to match the power of a frequentist $t$-test.
Implementation details

The Bayes factor requires computing marginal likelihoods, which is a quite distinct problem from the usual posterior expectations we compute when performing Bayesian estimation instead of hypothesis testing. The marginal likelihood is just an integral over the parameter space, which can be computed using numerical integration when the parameter space is small. For this t-test example, the BayesFactor package uses Gaussian quadrature for the alternative hypothesis. The null hypothesis doesn't require any integration since it consists of a single point.
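To give a feel for how small this integration problem is, here is a sketch using adaptive quadrature instead. It evaluates the single-integral form of the JZS Bayes factor from Rouder et al. (2009); the function name and defaults are mine, not the BayesFactor package's:

import numpy as np
from scipy import integrate

def jzs_bf10(t, n, r=np.sqrt(2) / 2):
    """One-sample JZS Bayes factor (alternative/null) for t-statistic t,
    sample size n, and Cauchy prior scale r on the effect size."""
    nu = n - 1
    null_ml = (1 + t**2 / nu) ** (-(nu + 1) / 2)  # marginal under H0

    def integrand(g):
        # likelihood for fixed g, times the inverse-gamma(1/2, r^2/2)
        # density on g (the Cauchy prior in its scale-mixture form)
        like = (1 + n * g) ** -0.5 * \
               (1 + t**2 / ((1 + n * g) * nu)) ** (-(nu + 1) / 2)
        prior = r / np.sqrt(2 * np.pi) * g ** -1.5 * np.exp(-r**2 / (2 * g))
        return like * prior

    alt_ml, _ = integrate.quad(integrand, 0, np.inf)
    return alt_ml / null_ml

print(jzs_bf10(t=2.5, n=80))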
Jeffreys lived in the statistical era of tables and hand computation, so he derived several formulas that can be used to approximate the Bayes factor, expressed in terms of the $t$-statistic; they work well for $n$ in the hundreds or thousands.
There are two types of priors that appear in the application of Bayes factors: the priors on the hypotheses, $p(H_0)$ and $p(H_1)$, and the priors on the parameters that appear in the marginal likelihoods, $p(\theta_i \mid H_i)$. Your prior belief ratio is updated to form your posterior beliefs using the simple equation:

$$\frac{p(H_1 \mid D)}{p(H_0 \mid D)} = \frac{p(H_1)}{p(H_0)} \times BF_{10}$$
The ratio $p(H_1)/p(H_0)$ is not formally considered part of the Bayes factor, but it plays an important role. It encodes your prior beliefs about the hypotheses. If you are using Bayes factors in a subjective fashion, to update your beliefs in a hypothesis after seeing data, then it is crucial to properly encode your prior belief in which hypothesis is more likely to be true. However, in practice everybody always takes this ratio to be 1, following the ideals of objective Bayesian reasoning, where one strives to introduce as little prior knowledge as possible into the problem. If you're writing a scientific paper, it's reasonable to use 1, as anybody reading the paper can apply their own ratio as a correction to your Bayes factor, reflecting their own prior beliefs.
The prior ratio of the hypotheses plays an interesting part in the calibration of the Bayes factor test. One way of testing Bayes factors is to simulate data from the priors, and compare the empirical Bayes factors over many simulations with the true ratios. This involves sampling a hypothesis, $H_0$ or $H_1$, then sampling parameters $\theta$, then data $D$. If the probabilities of sampling $H_0$ and $H_1$ are not equal at that first stage, then the observed Bayes factor will be off by the corresponding prior odds ratio. This is somewhat of a tautology, as we are basically just numerically verifying the Bayes factor equation as written. Nevertheless, it shows that if the priors on the hypotheses don't reflect reality, then the Bayes factor won't either. Blind application of a Bayes factor test won't automatically give you interpretable "beliefs".
The dependence of Bayes factors on the parameter priors $p(\theta_0 \mid H_0)$ and $p(\theta_1 \mid H_1)$ is really at the core of understanding Bayes factors. It is a far harder problem than the priors on the hypotheses, as there are no clear "objective" choices we can make. Additionally, intuitions from Bayesian estimation problems do not carry over to hypothesis testing.
Consider the simple BF t-test given above. We described the test using the Cauchy prior on the location parameter of the Gaussian. Often in practice people use large vague Gaussian priors instead when doing Bayesian estimation (as opposed to hypothesis testing), as they tend to be more numerically stable than the Cauchy. However, it has been noted by several authors [Berger and Pericchi, 2001] that simply replacing the Cauchy with a wide Gaussian in the BF t-test causes serious problems. Suppose we use a standard deviation of $k$ in this wide prior ($k$ large). It turns out that the Bayes factor scales asymptotically like:

$$BF_{10} \sim \frac{\sigma}{k\sqrt{n}}\, e^{t^2/2}$$
where $t$ is the standard t-statistic of the data. The large $k$ that was chosen with the intent that it would be 'non-informative' (i.e. minimally affect the result) turns out to directly divide the BF!
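To see roughly where this scaling comes from, here is a back-of-the-envelope sketch (my own, in the known-variance $z$-test setting): with $\mu \sim N(0, k^2)$ and $k$ large, the marginal density of the sample mean $\bar{x}$ is approximately $N(0, k^2)$, so $p(D \mid H_1) \approx \frac{1}{k\sqrt{2\pi}}$, while $p(D \mid H_0) = \sqrt{\frac{n}{2\pi\sigma^2}}\, e^{-t^2/2}$. Taking the ratio gives the $\frac{\sigma}{k\sqrt{n}}\, e^{t^2/2}$ scaling above.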
This is the opposite of the situation in estimation, where as $k$ increases the prior behaves more and more like the improper flat prior $p(\mu) \propto 1$. We can't use the flat prior in the BF t-test, but if we assume that the sample standard deviation matches the true standard deviation, we can perform a BF z-test instead. Using a flat prior on $\mu$, we get a BF of:

$$BF_{10} = \sqrt{\frac{2\pi}{n}}\,\sigma\, e^{z^2/2}$$
Because the flat prior appears in the numerator only, there is no clear scaling choice for it, so this expression is not in any sense calibrated (several approaches to calibration appear in the literature; see Robert [1993]).
A similar formula appears when the data standard deviation is assumed to be known:

$$BF_{10} = \sqrt{\frac{2\pi}{n}}\,\sigma\, e^{n\bar{x}^2/(2\sigma^2)}$$

The exponential part is here written in terms of the known standard deviation and the empirical mean, and under the null it is bounded asymptotically; the expression on the whole scales similarly to the unknown variance case.
Lindley's paradox refers to a remarkable difference in the behavior of Bayes factor tests compared to frequentist tests as the sample size increases. Under a frequentist t-test, we compute the statistic and compare it against a threshold that depends on $n$, and that decreases as $n$ increases (recall the definition of the statistic: $t = \bar{x}/(s/\sqrt{n})$). In contrast, the effective threshold used by the BF $t$-test actually increases as $n$ increases, although slowly, following roughly $\sqrt{\log n}$. This difference is illustrated in the following plot lifted from Rouder et al.:
The effect of this difference is that for large sample sizes, a frequentist test can reject the null with a small $p$-value, while a BF $t$-test run on the same data can strongly support the null hypothesis! As a concrete example, generating synthetic data points with a small non-zero mean and variance 1, the BF $t$-test can yield 0.028 for the alternative hypothesis (i.e. roughly 36:1 odds for the null being true), while a frequentist $t$-test on the same data rejects the null (two-sided test).
It's all well and good to argue the philosophical differences between Bayesian and frequentist approaches, but when they disagree so strongly, it becomes a practical issue. The key issue is the handling of small effects. As $n$ increases and $t$ stays fixed, a fixed value of $t$, say 2, represents a smaller and smaller measured effect, namely $\bar{x} = t\, s/\sqrt{n}$.
Detecting small effects requires a large sample size, and the Bayesian $t$-test greatly prefers the null hypothesis over that of a small effect. This is usually described positively as an Occam's razor effect, but it does have real consequences when we are attempting to find true small effects. Essentially, the BF test will require much more data to pick up a small effect than a corresponding frequentist test. Often, this is viewed from the opposite side: that frequentist tests too easily reject the null hypothesis for large sample sizes. This is more of an issue when testing more complex models such as linear regressions with multiple coefficients. Unless the data was actually generated from a linear equation with Gaussian noise, large samples will inevitably reject simple null models. This is in contrast to simple point null hypothesis testing, where we can appeal to asymptotic normality.
Here we attempt to answer the following question: are the properties of the default Bayes factor test indicative of BF tests in general? As we see it, there are two main ways to modify the test: We can use a non-point hypothesis as the null, or we can change the alternative hypothesis in some fashion. We discuss these possibilities and their combination separately.
The default Bayes factor t-test is formulated from an objectivist point of view: it's designed so that the data can "speak for itself"; the priors used are essentially as vague as possible while still yielding a usable test. What if we instead choose the prior for the alternative that gives the strongest evidence for the alternative? This obviously violates the likelihood principle as we choose the prior after seeing the data, but it is nevertheless instructive, in that it indicates how large the difference between point NHST tests and BF tests is.
A reasonable class of priors to consider is those that are non-increasing in $|\mu|$, so that they have a single mode at $0$. We discuss other priors below. Table 1 shows a comparison of p-values, likelihood ratios from the Neyman-Pearson likelihood ratio test, and a bound on BFs from any possible prior of this form. It is immediately obvious that according to this bound, any reasonable choice of prior yields much less evidence for the alternative than that suggested by the p-value. For example, the difference at p=0.001 is 18-fold, and the difference is larger for smaller $p$.
Although we disregarded multi-modal alternative priors above, they actually form an interesting class of tests. In terms of conservativeness, their BF is easily seen to be bounded by the likelihood ratio obtained from a prior concentrated on the maximum likelihood solution $\hat{\mu}$:

$$BF_{10} \leq \frac{\sup_{\mu} p(D \mid \mu)}{p(D \mid \mu = 0)}$$
This ratio is simply the Neyman-Pearson likelihood ratio for composite hypotheses, and its values are also shown in Table 1. These values are still more conservative than $p$-values by a factor of 2-5. Note that in a Neyman-Pearson test such as the frequentist $t$-test, this ratio is not read off directly as an odds ratio as we are doing here; that's why this NP odds ratio doesn't agree directly with the p-value.
The interesting aspect of multi-modal priors is that they allow you to formulate a test that is more powerful for finding evidence of the null hypothesis. It can be shown that under the default BF t-test, evidence for the null accumulates very slowly when the null is true, whereas evidence for the alternative accumulates exponentially fast when the alternative is true. Multi-modal priors can be formulated that fix this imbalance [Johnson and Rossell, 2010]. One simple case is a normal or Cauchy prior with a notch around the null set to have low probability.
If we go a step further and consider the limits of sequences of improper priors, the Bayes factor can take essentially any value. Some particular "reasonable" sequences actually yield values very similar to $t$-test p-values [Robert, 1993] as their posterior probabilities, at least between 0.1 and 0.01. Their construction also places 0 probability mass in the neighborhood of the null point in the limit, so it is similar to the multi-modal priors discussed in the previous paragraph. Although this improper prior does actually avoid Lindley's paradox, it is more of a theoretical construction; it is not recommended in practice, even by its discoverer [Robert, 1993].
As discussed above, when restricting ourselves to well behaved proper priors, "Lindley's paradox" as such cannot be rectified by a careful choice of a proper prior on the alternative hypothesis. When one takes a step back and considers the consequences of this, Lindley's paradox becomes much clearer, as a statement about objective priors rather than BF tests in general.
The central idea is this: it requires a lot of evidence to detect small effects when you use vague priors. The idea of running a long experiment while using a vague prior is nonsensical from an experimental design point of view: large amounts of data are useful for picking up small effects, and if you expect such a small effect you should encode this sensibly in your prior. The default Cauchy prior used in the BF t-test essentially says that you expect a 50% chance that the absolute effect size (Cohen's $d$) is larger than 1, which is completely implausible in many settings. For instance, if you are trying to optimize conversion rates on a website with, say, a 10% base conversion rate, an effect size of "1" is a 4x increase in sales! Some software defaults to 0.707 instead of 1 for the Cauchy scale parameter, but the effect is largely the same.
Experimental design considerations are important in the choice of a prior, and the Bayesian justification of the Bayes factor depends on our priors being reasonable. The choice of a particular BF test is not a full experimental design, just as for a frequentist it is not sufficient to just decide on a particular test without regard to the eventual sample size. It is necessary to examine other properties of the test, to determine if it will behave reasonably in the experimental setting in which we will apply it. So in effect, Lindley's paradox is just the statement that a poorly designed Bayesian experiment won't agree with a less poorly designed frequentist experiment.
Put another way, if the Bayes factor is comparing two hypotheses, both of which are unlikely under the data, you won't get reasonable results. You can't identify this from examining the Bayes factor; you have to look at the experiment holistically. The extreme case of this is when neither hypothesis includes the true value of $\mu$. In such cases, the BF will generally not converge to any particular value, and may show strong evidence in either direction.
It is illustrative to discuss an example in the literature of this problem: a high profile case where the Bayesian and frequentist results are very different under a default prior, and more reasonable under a subjective prior. One such case is the study by Bem on precognition. The subject matter is controversial; Bem gives results for 9 experiments, of which 8 show evidence for the existence of precognition at the $p < 0.05$ level! This obviously extraordinary claim led to extensive examination of the results from the Bayesian viewpoint.
In Wagenmakers et al. [2011a], the raw data of Bem is analyzed using the default Bayesian t-test. They report Bayes factors towards the alternative (in decreasing order) of 5.88, 1.81, 1.63, 1.05, 0.88, 0.58, 0.47, 0.32, 0.29, 0.13. So only one experiment showed reasonable evidence of precognition (5.88:1 in favor), and if you combine the results you get 19:1 odds against an effect.
In contrast, Bem et al. performed a Bayes factor analysis using informative priors on effect sizes, using values considered reasonable under non-precognitive circumstances. The 3 largest Bayes factors for the existence of precognition they obtained were 10.1, 5.3 and 4.9, indicating substantial evidence!
The interesting thing here is that the vague prior used by Wagenmakers et al. [2011a] placed a large prior probability on implausibly large eﬀect sizes. Even strong proponents of ESP don’t believe in such large eﬀects, as they would have been easily detected in other past similar experiments. Somewhat counter-intuitively, this belief that ESP eﬀects are strong if they exist at all yields a weaker posterior belief in the existence of ESP!
It should be noted that Wagenmakers et al. [2011b] responded to this criticism, but somewhat unsatisfactorily in my opinion. Of course, even with the use of correct subjective Bayesian priors on the alternative hypothesis, these experiments are not sufficient to convince a skeptic (such as myself) of the existence of ESP! If one factors in the prior and updates one's beliefs correctly, you may go from a 1-in-a-million belief in ESP existing to a 1:76 belief (if you use the naive combined BF of 13,669 from Bem et al.).
Most reasonable people would consider the chance that a systematic unintentional error was made during the experiments to be greater than 1 in a thousand, and so not update their beliefs quite as severely. Regardless, several pre-registered replications of Bem’s experiments have failed to show any evidence of precognition [Ritchie et al., 2012].
The general consensus in the literature is that single Bayes factors should not be reported. Instead the BFs corresponding to several diﬀerent possible priors for the alternative hypothesis should be reported. Only if the conclusions are stable for a wide range of priors can they be reported as deﬁnitive. If the results are not stable, then it must be argued why a particular prior, chosen subjectively, is the best choice.
Bounds giving the largest or smallest possible Bayes factor over reasonable classes of priors, such as the bounds given in Berger and Jefferys, can also be reported. The consideration of multiple possible priors is generally called "robust" Bayesian analysis.
As we showed above, under essentially any point-null Bayes factor t-test, the results are much more conservative than frequentist t-tests on the same data. But is this indicative of BF tests in general? It turns out Bayes factor tests that don't use point nulls behave much more like their frequentist counterparts [Casella and Berger, 1987].
But first, let's just point out that point-null hypotheses are fundamentally weird things. When we start assigning positive probability mass to single values, we move from the comfortable realm of simple distribution functions we can plot on a whiteboard, into the terra pericolosa that is measure theory. As anybody who has done a measure theory course knows, common sense reasoning can easily go astray even on simple sounding problems. The "surprising" Occam's razor effect seen in point-null BF tests is largely attributable to this. It can be argued that any prior that concentrates mass on a single point is not "impartial", effectively favoring that point disproportionately [Casella and Berger, 1987].
The simplest alternative to a point null test is a positive versus negative effect test, where the null is taken to be the entire negative (or positive) portion of the real line. This test is very similar in effect to a frequentist one-sided test. Unlike the point-null case, the alternative and null hypothesis spaces have the same dimension; in practice this means we can use the same unnormalized priors on both without running into issues. This test can be performed using the BayesFactor R package with the ttestBF method, although somewhat indirectly, by taking the ratio of the two Bayes factors returned when an interval null is specified:
bfInterval <- ttestBF(x = xdata, y = ydata, nullInterval = c(-Inf, 0))
bfNonPoint <- bfInterval[1] / bfInterval[2]
Applying one-sided tests in the frequentist setting is routine (in fact often advocated [Cho and Abe, 2013]), so it is surprising that their theoretical properties are very different from those of two-sided tests. If we look at both Bayesian and frequentist approaches from the outside, using decision theory (with quadratic loss), we see a fundamental difference. For the one-sided case, the Fisher p-value approach obeys a weak kind of optimality known as admissibility [Hwang et al., 1992]: no approach strictly dominates it. In contrast, in the two-sided case neither the frequentist nor the standard Bayesian approach is admissible, and neither dominates the other!
What does this tell us about sequential testing settings? The use of Bayes factor tests is sometimes advocated in the sequential testing setting, as their interpretation is not affected by early stopping. "Sequential testing" is when a test of some form is applied multiple times during the experiment, with the experiment potentially stopping early if the test indicates strong evidence of an effect. Using frequentist tests sequentially is problematic; the false-positive rate is amplified and special considerations must be made to account for this. See my previous post for details on how to perform the frequentist tests correctly.
The interpretation of the Bayes factor, in contrast, is unaffected by early stopping. Naive application of a point-null BF test does seem to perform reasonably in a sequential setting, as its naturally conservative nature results in few false positives being detected. However, as we pointed out above, one-sided non-point null tests are equally valid, and they are not particularly conservative. Just applying one sequentially gives you high false-positive rates comparable to the frequentist test. The false-positive rate is a frequentist property, and as such it is quite unrelated to the Bayesian interpretation of the posterior distribution.
Indeed, in most sequential A/B testing situations, such as website design tests, the one-sided non-point test seems the most reasonable, as the two hypotheses correspond more directly to the two courses of action that will be taken after the test concludes: switch all customers to design A or design B.
Here we start to move outside the realm of mathematics and start to meander into the realm of philosophy. You have to consider the purpose of your hypothesis test. Point-nulls enforce a strong Occam’s razor eﬀect, and depending on the purpose of your test, this may or may not be appropriate.
The examples and motivation used for null hypothesis testing in the original formulation of the problem by Jeffreys came from physics. In physics problems testing for absolute truth is a much clearer proposition. Either an exotic particle exists or it doesn't; there is no in-between. This is a far cry from the psychology, marketing and economics problems that are more commonly dealt with today, where we always expect some sort of effect, although potentially small and possibly in the opposite direction from what we guessed a priori.
You need a real belief in the possibility of zero effect in order to justify a point null test. This is a real problem in practice. In contrast, by using non-point null tests you will be in closer agreement with frequentist results, and you will need less data to make conclusions, so there are many advantages to avoiding point null tests.
It is interesting to contrast modern statistical practice in high energy physics (HEP) with that of other areas where statistics is applied. It is common practice in HEP fields to seek "five sigma" of evidence before making a claim of a discovery. In contrast to the usual requirements, the difference is vast: five sigma corresponds to about a 1-in-a-million p-value. However, if you view the five-sigma approach as a heuristic approximation to the point-null BF test, the results are much more comparable. The equivalent $z$ statistic needed for rejection of the point null under the Bayesian test scales with the amount of data. HEP experiments often average over millions of data points, and it is in that multiple-million region where the rejection threshold, when rephrased in terms of the $z$ statistic, is roughly five.
- Don’t use point-null Bayesian tests unless the point null hypothesis is physically plausible.
- Always perform a proper experimental design, including the consideration of frequentist implications of your Bayesian test.
- Try to use an informative prior if you are using a point null test; non-informative priors are both controversial and extremely conservative. Ideally show the results under multiple reasonable priors.
- Don’t blindly apply the Bayes factor test in sequential testing situations. You need to use decision theory.
J.O. Berger and R.L. Wolpert. The Likelihood Principle. Institute of Mathematical Statistics. Lecture notes : monographs series. Institute of Mathematical Statistics, 1988. ISBN 9780940600133. URL https://books.google.com.au/books?id=7fz8JGLmWbgC.
George Casella and Roger L. Berger. Reconciling Bayesian and frequentist evidence in the one-sided testing problem. Journal of the American Statistical Association, 82(397):106–111, 1987. ISSN 0162-1459. doi: 10.1080/01621459.1987.10478396.
Hyun-Chul Cho and Shuzo Abe. Is two-tailed testing for directional research hypotheses tests legitimate? Journal of Business Research, 66(9):1261–1266, 2013. ISSN 0148-2963. doi: 10.1016/j.jbusres.2012.02.023. URL http://www.sciencedirect.com/science/article/pii/S0148296312000550.
Jiunn Tzon Hwang, George Casella, Christian Robert, Martin T. Wells, and Roger H. Farrell. Estimation of accuracy in testing. Ann. Statist., 20(1):490–509, 03 1992. doi: 10.1214/aos/1176348534. URL http://dx.doi.org/10.1214/aos/1176348534.
Eric Jan Wagenmakers, Ruud Wetzels, Denny Borsboom, and Han van der Maas. Why psychologists must change the way they analyze their data: The case of psi. Journal of Personality and Social Psychology, 2011a.
Eric Jan Wagenmakers, Ruud Wetzels, Denny Borsboom, and Han van der Maas. Why psychologists must change the way they analyze their data: The case of psi: Clariﬁcations for bem, utts, and johnson (2011). Journal of Personality and Social Psychology, 2011b.
It's amazing the amount of confusion on how to run a simple A/B test seen on the internet. The crux of the matter is this: if you're going to look at the results of your experiment as it runs, you cannot just repeatedly apply a 5% significance level t-test. Doing so WILL lead to false-positive rates way above 5%, usually on the order of 17-30%.
Most discussions of A/B testing do recognize this problem; however, the solutions they suggest are simply wrong. I discuss a few of these misguided approaches below. It turns out that easy to use, practical solutions were worked out by clinical statisticians decades ago, in papers with many thousands of citations. They call the setting "Group Sequential designs" [Bartroff et al., 2013]. At the bottom of this post I give tables and references so you can use group sequential designs in your own experiments.
The key problem is this: whenever you look at the data (whether you run a formal statistical test or just eyeball it) you are making a decision about the effectiveness of the intervention. If the results look positive you may decide to stop the experiment early, and you almost certainly will if it looks like the intervention is giving very bad results.
Say you make a p=0.05 statistical test at each step. On the very first step you have a 5% chance of a false positive (i.e. the intervention had no effect but it looks like it did). On each subsequent test, you have a non-zero additional probability of picking up a false positive. It's obviously not an additional 5% each time as the test statistics are highly correlated, but it turns out that the false positives accumulate very rapidly. Looking at the results each day for a 10 day experiment with say 1000 data points per day will give you about an accumulated 17% false positive rate (according to a simple numerical simulation, like the one sketched below) if you stop the experiment on the first p<0.05 result you see.
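A minimal version of such a simulation (my own sketch):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
days, per_day, reps = 10, 1000, 2000
false_positives = 0
for _ in range(reps):
    a = rng.standard_normal((days, per_day))  # group A, no true effect
    b = rng.standard_normal((days, per_day))  # group B, no true effect
    for day in range(1, days + 1):            # daily look at all data so far
        _, p = stats.ttest_ind(a[:day].ravel(), b[:day].ravel())
        if p < 0.05:
            false_positives += 1              # stop at first "significant" look
            break
print(false_positives / reps)  # around 0.17, far above the nominal 0.05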
The key idea of "Group Sequential" testing is that under the usual assumption that our test statistic is normally distributed, it is possible to compute the false-positive probabilities at each stage exactly using dynamic programming. Since we can compute these probabilities, we can also adjust the test's false-positive chance at every step so that the total false-positive rate is below the threshold we want. This is known as the error spending approach. Say we want to run the test daily for a 30 day experiment. We provide a 30 element sequence of per-step error chances, which sums to less than our total budget, say $\alpha = 0.05$. Using dynamic programming, we can then determine a sequence of thresholds, one per day, that we can use with the standard z-score test. The big advantage of this approach is that it is a natural extension of the usual frequentist testing paradigm.
The error spending approach has a lot of flexibility, as the error chances can be distributed arbitrarily between the days. Several standard allocations are commonly used in the research literature.
Of course, libraries are available for computing these boundaries in several languages. Tables for the most common settings are also available. I’ve duplicated a few of these tables below for ease of reference.
- Choose an $\alpha$-spending function, from those discussed below. The Pocock approach is the simplest.
- Fix the maximum length of your experiment, and the number of times you wish to run the statistical test, e.g. daily for a 30 day experiment.
- Look up in the tables below, or calculate using the ldbounds or GroupSeq R packages, the $z$-score thresholds for each of the tests you will run. For example, for the Pocock approach with $\alpha = 0.05$ and 10 tests max, use $z = 2.555$ for every test. For the O'Brien-Fleming approach it will instead be a decreasing sequence of thresholds, very conservative early on and close to the fixed-sample threshold at the end.
- Start the experiment.
- For each test during the course of the experiment, compute the $z$-score and compare it to the threshold; you may optionally stop the experiment early if the score exceeds the threshold given in the lookup table.
The simplest case of A/B testing is when each observation for each user is a numerical value, such as a count of activity that day, the sales total, or some other usage statistic. In this case the $z$-score, when we have $n_A$ data points from group A and $n_B$ from group B, with means $\bar{x}_A, \bar{x}_B$ and standard deviations $s_A, s_B$, is:

$$z = \frac{\bar{x}_A - \bar{x}_B}{\sqrt{s_A^2/n_A + s_B^2/n_B}}$$
For the $i$th test, just compare the value here to the $i$th threshold from the appropriate table below; if it exceeds the threshold you may stop the experiment.
If in our A/B test we are testing whether the click-through rate or purchase chance improves, we use a slightly different formula. Let $p_A, p_B$ be the click-through proportions and $n_A, n_B$ the total views (so that $p_A n_A$ is the number of conversions in group A), and let $p$ be the pooled proportion $(p_A n_A + p_B n_B)/(n_A + n_B)$. Then:

$$z = \frac{p_A - p_B}{\sqrt{p(1-p)\left(\tfrac{1}{n_A} + \tfrac{1}{n_B}\right)}}$$
As above, for the $i$th test, just compare the value here to the $i$th row of the appropriate table below; if it exceeds the threshold you may stop the experiment.
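To tie the pieces together, here is a small Python sketch of the conversion-rate version run sequentially against a constant Pocock boundary (the 2.555 threshold is the standard Pocock constant for 10 looks at $\alpha = 0.05$; the data is made up for illustration):

import numpy as np

def z_two_proportions(conv_a, n_a, conv_b, n_b):
    """Pooled two-proportion z-score, matching the formula above."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)   # pooled proportion
    return (p_a - p_b) / np.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))

# Cumulative (conversions, views) per group at each of 10 daily looks.
looks = [(55 * d, 500 * d, 64 * d, 500 * d) for d in range(1, 11)]
POCOCK_10 = 2.555   # constant Pocock threshold, alpha = 0.05, 10 looks
for day, (ca, na, cb, nb) in enumerate(looks, start=1):
    z = z_two_proportions(ca, na, cb, nb)
    if abs(z) > POCOCK_10:
        print(f"stop early at look {day}: z = {z:.2f}")
        break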
- We are assuming the distribution of the test statistic is normally distributed, with known variance. If you did STATS101, you are probably ok with this assumption, but those of you who did STATS102 are no doubt objecting. Shouldn't you be using a t-test or chi-square test? What about the exact binomial test for conversions? The fact is that unless you are getting only a tiny amount of data between tests, you just don't need to worry about this. If you are getting tiny amounts of data, you can't hope to pick up any difference between the A/B groups over short time scales. Just run your test less regularly! The effect sizes we hope to see in A/B testing are much smaller than those in other areas of statistics such as clinical trials, so we usually can't expect to get statistical significance in the small data regime where the normality approximation breaks down.
- Fixed maximum experiment length. This is an annoying problem. Often we would like to just keep running the experiment until we get some sort of effect. The difficulty is that we have to spread the chance of false-positives across the duration of the experiment. We could potentially use an error-spending function that drops off rapidly, so that the total error probability is summable. However, I would not recommend this. Instead, do a power calculation to determine the length of the experiment needed to pick up the minimum effect size that you would care about in practice. Then you can use that length for the experiment.
In practice you will always see some sort of effect/uplift if you run a long enough experiment. There is no real world situation where the A/B groups behave exactly the same.
- Equally distributed tests. We are assuming above that we have the exact same amount of data gathered between tests. Fortunately this assumption is not too crucial to the correctness of the test. It is actually quite easy to correct for differing intervals using the R software discussed. Say you are in the common situation where you see much less data over the weekend than on a weekday. The ldbounds R package lets you calculate the sequence of boundaries for variable intervals; you just feed in the time between tests as a normalized sequence that sums to 1. For example, for a 7 day experiment starting on Monday, where the expected volume of data is half on Saturday and Sunday, you would just use something like:
b <- bounds(cumsum(c(rep(1/6, 5), 1/12, 1/12)), iuse=c(1, 1), alpha=c(0.025, 0.025))
 5.366558 3.710552 2.969718 2.538649 2.252156 2.178169 2.088238
The iuse argument controls the spending function used (here we used the O'Brien-Fleming approach) and alpha gives the false-positive rate on either side (we are doing a two-sided test here).
- Two-sided tests. The approach described above is that of a two-sided test. It allows the effect of the intervention to be both potentially positive and negative. This is obviously more conservative than assuming it can only be positive, but I think in practice you should always use two-sided tests for A/B experiments. It's hard to justify the assumption that the intervention can't possibly have a negative effect!
- Why should I use this approach when Wald's SPRT test is optimal? Wald's theory of sequential testing is based around testing a single point hypothesis against another point hypothesis. While this approach is nearly as good as is possible for the simple point-point case, there is no clear generalization to the case of testing a point hypothesis of zero effect against the "composite" hypothesis of non-zero effect. In Wald's 1945 book, several approaches are suggested, but they have known deficiencies [Sec 5.2 Tartakovsky et al., 2013]. Lorden's 2-SPRT [Lorden, 1976] is the modern SPRT variant that is most commonly used. It requires setting an indifference region around 0 instead of fixing the measurement points in advance, so the experimental design decisions are a little different. I intend to write more about the 2-SPRT in the future.
Some people attempt to perform Bayesian tests by putting a prior on their data and computing "credible intervals" (or equivalently posterior probabilities) instead of confidence intervals. The classical "likelihood principle" states that Bayesian statistical quantities do not change under sequential testing; however, under the Bayesian methodology estimation is fundamentally different from hypothesis testing, and trying to convert interval estimates into tests doesn't work. The intuition is fairly simple: if Bayesian tests are immune to the sequential testing problem but frequentist tests are not, then techniques like credible intervals, which agree with frequentist confidence intervals on simple problems, can't also serve as valid Bayesian tests.
In general the best approach is to combine the Bayesian estimates with decision theory. This involves assigning a cost to each datapoint gathered, as well as determining the cost due to the test coming to the wrong solution.
I'm a strong proponent of Bayesian statistics, but I find it next to impossible in practice to determine these two costs. The frequentist approach of controlling false-positive rates is much easier to apply in a business context. The main exception is per-user personalization or advert placement, where the business decisions have to be made automatically. In that case the techniques studied for multi-armed bandits or contextual bandits are natural automatic applications of the decision theory approach. They are far outside the scope of this post however.
The most commonly suggested Bayesian approach is the Bayes Factor (BF) test, a potential drop-in replacement for frequentist tests. Indeed, A. Wald (the progenitor of sequential testing) way back in his 1945 book suggests an approach similar to a BF test [Sec. 4.1.3, Wald, 1945].
Unfortunately the Bayes factor test often gives counter-intuitive and sometimes radically different results from frequentist tests. This is known as Lindley's paradox [Lindley, 1957]. This doesn't invalidate BF results, but you do need to fully understand the implications to use them in practice. I intend to write another post going into detail about Bayes factor tests, and their pitfalls and advantages over frequentist approaches.
The GroupSeq R package provides a very easy to use GUI for calculating the stopping boundaries for most of the above settings. It can also do power calculations and stopping intervals. I’ve put a few screenshots of it in action below.
Power calculations are typically used with frequentist experiments to determine the length of the experiment required to pick up effects of a certain magnitude. Power calculations can still be done for group sequential tests, using the R tools mentioned above. Like the $z$-score boundary calculations, multiple integrals need to be evaluated numerically, so it is somewhat of a black box.
Power calculations can also be used to select from the various error spending approaches above. Just run the power calculation for each approach, then choose the one that gives the smallest required experiment length.
Pocock's approach uses the exact same $z$-score threshold at every test. Values for larger $K$ (the total number of tests) or irregularly spaced test intervals may be calculated using the GroupSeq R package. The threshold values here illustrate how much more conservative you need to be. For example, when running 10 tests during the experiment, you effectively have to run each $z$-test at roughly the $p = 0.01$ level to get an effective 5% overall false positive rate.
This approach is essentially the most natural generalization of Wald's SPRT test to the group-sequential case with composite hypotheses. This test is much less likely to stop at the early stages of the experiment than the other approaches, but more likely to stop in the later tests. Instead of reporting the tables from their paper I give the stopping boundaries for a version of their approach [O'Brien and Fleming, 1979] adapted to the error-spending algorithm implemented in the ldbounds library.
z-scores (two-sided tests)
Faster Convex Optimization: Simulated Annealing with an Efficient Universal Barrier
Variance Reduction for Faster Non-Convex Optimization
Improved SVRG for Non-Strongly-Convex or Sum-of-Non-Convex Objectives
Even Faster Accelerated Coordinate Descent Using Non-Uniform Sampling
Deep Speech 2 : End-to-End Speech Recognition in English and Mandarin
On the Iteration Complexity of Oblivious First-Order Optimization Algorithms
Black-box Optimization with a Politician
Importance Sampling Tree for Large-scale Empirical Expectation
CryptoNets: Applying Neural Networks to Encrypted Data with High Throughput and Accuracy
Solving Ridge Regression using Sketched Preconditioned SVRG
Variance-Reduced and Projection-Free Stochastic Optimization
On Graduated Optimization for Stochastic Non-Convex Problems
Doubly Robust Off-policy Value Evaluation for Reinforcement Learning
Stochastic Variance Reduced Optimization for Nonconvex Sparse Learning
A Variational Analysis of Stochastic Gradient Algorithms
Stochastic Variance Reduction for Nonconvex Optimization
A Superlinearly-Convergent Proximal Newton-type Method for the Optimization of Finite Sums
SDCA without Duality, Regularization, and Individual Convexity
Training Neural Networks Without Gradients: A Scalable ADMM Approach
This post describes the newly developed class of variance reduction techniques in stochastic optimisation, which are fast and simple alternatives to the traditional stochastic gradient descent (SGD) method. I give a light introduction here, focused on actually applying the methods in practice.
Quite possibly the most important algorithm in machine learning, SGD is simply a method for optimizing objective functions that consist of a sum or average of terms:

$$f(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x)$$
The SGD method maintains an estimate $x^{(t)}$ of the solution at step $t$, and at each step samples an index $j$ uniformly at random and applies the update:

$$x^{(t+1)} = x^{(t)} - \gamma\, f_j'(x^{(t)})$$
At its most basic, this update is just an approximation of the classical gradient descent technique, where we take the non-random step $x^{(t+1)} = x^{(t)} - \gamma f'(x^{(t)})$. The step size parameter $\gamma$ here is just a constant, which depending on the problem may need to be adjusted down at each step. Variance reduction methods build upon SGD by modifying the core update in various ways.
The SAGA method is a simple modification of SGD, where we remember the last gradient evaluated for each datapoint, and use this to correct the SGD step to reduce its variance. These past gradients are stored in a table. We denote the previous gradient for datapoint $j$ as $g_j$. The core step, after sampling an index $j$ uniformly at random, is as follows:

$$x^{(t+1)} = x^{(t)} - \gamma \left[ f_j'(x^{(t)}) - g_j + \frac{1}{n}\sum_{i=1}^{n} g_i \right]$$

with the table entry then updated to $g_j = f_j'(x^{(t)})$.
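For concreteness, here is a minimal Python sketch of SAGA for least squares (my own illustration; see the linked reference implementation below for real code):

import numpy as np

def saga_least_squares(A, b, step=0.01, epochs=30):
    """SAGA for f(x) = (1/n) sum_i 0.5*(a_i.x - b_i)^2. For this loss the
    stored gradient for datapoint i is just the scalar residual
    s_i = a_i.x - b_i, since grad f_i(x) = s_i * a_i."""
    n, d = A.shape
    x = np.zeros(d)
    s = A @ x - b          # gradient table, initialized with one full pass
    avg = A.T @ s / n      # cached average of the table gradients
    for _ in range(epochs):
        for j in np.random.randint(n, size=n):   # random datapoint each step
            s_new = A[j] @ x - b[j]
            # SGD gradient, minus the old stored gradient, plus the average
            x -= step * ((s_new - s[j]) * A[j] + avg)
            avg += (s_new - s[j]) * A[j] / n     # keep the average current
            s[j] = s_new
    return x

A = np.random.randn(200, 5)
b = A @ np.ones(5)
print(saga_least_squares(A, b))  # should approach the all-ones solution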
Source code for SAGA and the accelerated variant point-saga is available on github. Some points about the algorithm:
- Storing the past gradients is typically fast and low cost. For logistic regression or least squares, it reduces to storing a single real value per datapoint instead of a full gradient. Similar reductions apply to neural networks. When the storage cost is high, the SVRG method described below can be used instead.
- The table average is of course cached rather than recalculated at each step.
- Notice that there are really only two extra terms on top of the usual SGD step. The two extra terms also trivially cancel when you take the expectation with respect to $j$. So the expected step direction is a gradient step, just like with SGD.
- The step above assumes you already have a table of gradients. If you are just starting the algorithm, do the ﬁrst pass over the data in-order (instead of random), and use the table average over only the data seen so far in the step. Alternatively, just do regular SGD steps during the ﬁrst pass.
- The requirement for randomly sampling datapoints instead of doing in-order passes turns out to be absolutely crucial for it to work.
- Implementing the step eﬃciently when the gradients are sparse is a little subtle, but possible. See the SAGA paper for details and example code.
- SAGA supports non-diﬀerentiable regularisers through the use of proximal operators. See the paper for details.
VR methods like SAGA have entirely different convergence properties than SGD. On strongly convex problems they are able to converge linearly (error $O(\rho^t)$ for some $\rho < 1$), instead of the much slower sublinear convergence possible with the SGD method (which requires careful iterate averaging). Linear convergence for a stochastic method is quite remarkable; up until recently no such methods were known. It's also fast linear convergence: compared to the theoretical rate for regular gradient descent, it makes 1/3 the progress each step, but each step is only a single datapoint evaluation rather than a full batch gradient. For large datasets that can be millions of times faster. A schematic illustration of the practical speedup is below, labeled as "incremental gradient":
SAGA is perhaps the most straightforward of the variance reduction techniques. The other VR methods have advantages in certain situations; there is no clear best method in all cases. We list the main methods here; each has several variants as well:
- SAG is the earliest of these VR methods to be developed. It doesn't actually take the gradient direction in expectation; rather it introduces a small bias with the potential for a lower variance in the gradient estimate. Unfortunately SAG is hard to extend from the point of view of the theory.
- SVRG is the direct predecessor of SAGA that makes a different trade-off. It avoids storing gradients, but at the expense of computing two gradients per step instead of one. It also has a more complex double loop formulation, where the outer loop requires a recalculation of the full-data gradient. In most cases the extra computation makes it slower than SAGA, but it does have some applications.
- Finito/MISO take a step derived indirectly from the dual formulation of the problem, while avoiding any dual computation. They don't work efficiently on sparse problems, but are perhaps the fastest methods on dense problems.
- SDCA is a purely dual method. It can be applied to some non-differentiable problems, unlike the other methods here, but it is complex to apply efficiently to other problems such as logistic regression. It also has the nice property of avoiding tunable parameters on problems with L2 regularisation. For practical application it crucially requires a non-degenerate amount of strong convexity though.
The simple slow approach: rejection sampling

Normally I avoid wasting time on approaches that don't work well in practice; however, the simple rejection sampling approach to the problem turns out to be the vital building block of the algorithms that do work. The rejection sampling approach is only a few lines of Python, sketched below. The idea is simple. We sample uniformly from the indices, then do a rejection sampling correction to account for the actual non-uniformity of the data. The best way to visualise what it's doing is to consider it as picking a point in 2 dimensions uniformly, then doing a reject or accept operation based on whether the point is under the "graph" of the distribution or not. This is shown schematically as the yellow shaded regions below. The general idea of rejection sampling and this graph interpretation is quite subtle, and I'm not going to attempt to explain it here. If you just stare at it for a while you should be able to convince yourself that it works.

Unfortunately, rejection sampling like this is not practical when the weights are uneven. The rejection probability depends on the magnitude of the largest weight compared to the average, and that can be very large. Imagine if the first eight boxes above were 1% full and the last 100% full. It's going to reject roughly 80% of the time. It can of course be much worse when n is larger.
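A minimal version of that rejection sampler (my own reconstruction of the idea described):

import random

def rejection_sample(weights, w_max):
    """Sample index i with probability proportional to weights[i]."""
    n = len(weights)
    while True:
        i = random.randrange(n)            # pick a box uniformly
        u = random.uniform(0.0, w_max)     # pick a height uniformly
        if u <= weights[i]:                # under the graph: accept
            return i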
Making it practical

Rejection sampling can be very fast when all the weights are similar. For instance, suppose all the weights are in the interval [1,2]. Then the acceptance probability w[i]/w_max is always more than half, and the expected number of loops until acceptance is at most 2. It's not only expected constant time, but it's fast in practice as well. More generally, this is true whenever the interval is of the form [2^i, 2^(i+1)]. This leads to the idea used by most of the practical methods: group the data into "levels", where each level is an interval of that form. We can then sample a level, followed by sampling within the level with rejection sampling.

Matias et al. (2003) show that the level sampling can be done in O(log*(n)) time, where log* is the iterated logarithm, a slowly growing function which is always no more than 5. Effectively it is an expected constant time sampling scheme. We don't recommend using the Matias scheme though, as its practical performance is hampered by its complexity. Instead, we suggest using a method that has a (weak) dependence on the size of the weights, which we detail below. Consider the intervals [2^i, 2^(i+1)]. The number of unique intervals ("levels") we need to consider is just the log2 of the ratio of the largest and smallest weights, which on practical problems won't be more than 20, corresponding to a 1 to 1 million difference. The extra overhead of the Matias method is not worth it to reduce the constant from 20 to 5. In practice the Matias method requires building a tree structure and updating it whenever the weights change, which is way slower than traversing a 20 element sequential array.

Let's be a little more concrete. The algorithm will maintain a list of levels, in order of largest to smallest. Each level i consists of a list of the elements that fall within that level's range: [2^i, 2^(i+1)]. The list for each level need not be sorted or otherwise ordered. To sample an instance from the set, we sample a level, then we perform rejection sampling within that level. In Python, it looks something like the sketch below. The full class including weight updating is on github. Notice that we use a linear-time algorithm for sampling the levels (a cumulative distribution table lookup). Alternative methods could be used, such as a balanced binary tree or Walker's algorithm (see below). The only difficulty is that we want to change the weight of an element potentially after every sample, so any pre-computation needed for the level sampling needs to be fast. It doesn't seem worth it in practice to use something more complicated here since the number of levels is so small (as mentioned above, usually less than 20).
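A sketch of the sampler just described (my own reconstruction; the github version additionally supports weight updates):

import math
import random

class LevelSampler:
    """Group elements into levels [2^(L-1), 2^L); sample a level by its
    total weight, then rejection-sample within the level."""

    def __init__(self, weights):
        self.weights = list(weights)
        self.members = {}   # level -> indices of elements in that level
        self.totals = {}    # level -> total weight within that level
        for i, w in enumerate(weights):
            lvl = math.frexp(w)[1]   # w lies in [2^(lvl-1), 2^lvl)
            self.members.setdefault(lvl, []).append(i)
            self.totals[lvl] = self.totals.get(lvl, 0.0) + w

    def sample(self):
        # Cumulative-table lookup over the (few) levels.
        r = random.uniform(0.0, sum(self.totals.values()))
        for lvl, total in self.totals.items():
            if r < total:
                break
            r -= total
        # Rejection sampling within the chosen level; the acceptance
        # probability is at least 1/2 since weights span a factor of 2.
        members, w_max = self.members[lvl], 2.0 ** lvl
        while True:
            i = random.choice(members)
            if random.uniform(0.0, w_max) <= self.weights[i]:
                return i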
Enhancements

A few small changes are possible to improve the usability and performance.

The rejection sampling actually only needs a single random sample instead of 2. We can just take a U[0,1] sample, then multiply by level_size. The integer part is the idx_in_level and the remainder is the u_lvl part.

When updating weights, we need to delete elements potentially from the middle of a level's index list. For example, imagine we need to move element 6 from the [2,4] bucket to the [1,2] bucket in the above diagram. We need to store the indices in a contiguous array for fast O(1) lookup, so initially this would look like a problem. However, since the order within the lists doesn't matter, we can actually take the last element of the list, move it to the location we want to delete from, then delete from the end of the list.

In the sample code given, we keep a level_max array which just contains the upper bounds for each level. With a little extra code we could instead change this to be the largest element that has been in that bucket so far. This could lower the rejection rate a little at the cost of a few more operations maintaining the level_max array.
Other methods

Marsaglia et al. 2004 describe an early method that also treats the data in levels, but instead of rejection sampling it spreads each datapoint's probability mass across multiple levels. I believe updating the weights under their scheme would be quite complex, although in big-O notation it should be roughly the same as the algorithm I described above.
Walker's alias method

This post is concerned with methods that can be used when we wish to modify the weights dynamically as the algorithm runs. However, I would be remiss not to mention the alias method, which given a fixed set of weights at initialization time constructs a table that allows extremely efficient constant time sampling, without any of the complexity of the other approaches I mentioned.

The key idea is simple. Take a table like that used in the rejection sampling method. Instead of normalizing each bucket by w_max, normalize by w_sum/n. When doing this, some buckets will overflow, as the probability within them could be larger than w_sum/n. So we take the excess, and spread it among those buckets that are not full. It's not too difficult to do this in such a way that each bucket is exactly full and contains only two indices within it. Now when we go to sample, we pick a bucket uniformly as before, but within the bucket there is no longer a rejection option, just two indices to choose from.

Unfortunately, there is no fast way to modify the table built by Walker's method when we change just a single weight. I've not been able to find any methods in the literature that manage to do so efficiently. Given its simplicity, there are a few implementations of Walker's method out there, for example a Python version here.
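For reference, here is a minimal sketch of the standard alias-table construction described above (my code, not the linked implementation):

import random

def build_alias_table(weights):
    """One-time setup for Walker's alias method."""
    n = len(weights)
    scaled = [w * n / sum(weights) for w in weights]   # mean-normalized
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    prob, alias = [1.0] * n, list(range(n))
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l     # bucket s: itself, or its alias l
        scaled[l] -= 1.0 - scaled[s]         # l gave away its excess to s
        (small if scaled[l] < 1.0 else large).append(l)
    return prob, alias

def alias_sample(prob, alias):
    i = random.randrange(len(prob))          # pick a bucket uniformly
    return i if random.random() < prob[i] else alias[i]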
Variance Reduction

Stop Wasting My Gradients: Practical SVRG Another great practical paper from Mark Schmidt and the rest of his group. The techniques they describe help speed up SVRG, although you're still probably better off using SAGA.
On Variance Reduction in Stochastic Gradient Descent and its Asynchronous Variants Great typesetting and plots. They show a good speedup on SVRG on a multicore machine. Would be good to have async-SAGA plots though. Speedup varies from 0.3 to 1 per additional core.
Local Smoothness in Variance Reduced Optimization Essentially theory and experiments for adaptive importance sampling for SVRG and SDCA. Looks good. It's hard to say how robust it is and how easy to implement it will be.
Variance Reduced Stochastic Gradient Descent with Neighbors Good theory, some improvements on the basic SAGA theory. The N-SAGA method looks like it would be hard to implement fast though.
Quartz: Randomized Dual Coordinate Ascent with Arbitrary Sampling A SDCA variant with a more conservative step and corresponding improved theory. They give a proof that applies for almost any sampling scheme, nice. Could be faster than SDCA for sparse problems if implemented carefully.
A Universal Catalyst for First-Order Optimization Uses the accelerated proximal-point algorithm as a "meta-algorithm" to speedup any of the finite-sum methods. The double loop construction they use introduces a logarithmic term in the convergence rate which is unfortunate. Probably not ideal in practice due to the additional tuning required, but it's hard to say.
Data science

Hidden Technical Debt in Machine Learning Systems This is based on a previous workshop paper. I'm not really impressed with the paper; it just doesn't go into enough detail to actually be useful. It's practically an extended abstract.
Precision-Recall-Gain Curves: PR Analysis Done Right I really like this paper. It correctly points out that PR curves are pretty bad, and suggests a better alternative. For the kind of problems I'm interested in, ROC curves are still the better choice though.
Distributed Optimization

Deep learning with Elastic Averaging SGD The basic idea here is classical: distribute an optimisation problem by splitting it into parts (1 per machine), and enforce consistency between the machines using a quadratic penalty function. It's a "soft" penalty, in the sense that it doesn't enforce exact equality. It's interesting that they actually give each machine access to the whole dataset, whereas normally each machine is given a fixed subset of the data. I believe this means that on convex problems it will still converge to the exact solution, which is not otherwise true of a soft penalty. They also show how important a momentum-based update is. Good paper.
Online/Stochastic Optimization

Probabilistic Line Searches for Stochastic Optimization This paper was presented orally at the conference as well. They gave an excellent presentation. The basic idea is to use Bayesian optimization techniques for line searches. The details are non-trivial, and they have good plots to go along with it. I suspect however that their method is too slow in practice for most applications.
Beyond Convexity: Stochastic Quasi-Convex Optimization They give an interesting and non-obvious definition of local-quasi-convexity that they use in their proof of convergence for the stochastic normalized gradient descent method. It's really a theory paper.
Online Gradient Boosting I'm no expert on boosting, but it looks interesting. I'll stick with offline boosting for solving practical problems for now.
Fast Rates for Exp-concave Empirical Risk Minimization Some good results here. The exp-concave subproblem of ERM is interesting; I'm a strong believer in narrowing down your problem class until you can prove something. They get an expected loss convergence rate like O(d/n) for d dimensions and n steps. This is better than the general ERM case, and better than past results. I'll probably read through the proof in detail at some point.
SGD Algorithms based on Incomplete U-statistics: Large-Scale Minimization of Empirical Risk The U-statistics terminology is new to me. They talk about direct minimisation of AUC instead of the usual ERM losses, which is an approach I find interesting. I'm surprised there hasn't been more research on this, from what they say in the paper. In the past I've never achieved better results out of direct AUC minimisation compared to logistic-loss though.
Bandits

Policy Evaluation Using the Ω-Return Not really my area, so I can't comment in detail. Looks interesting though.
Bandit Smooth Convex Optimization: Improving the Bias-Variance Tradeoff Always nice to see improved bounds.
The Pareto Regret Frontier for Bandits
Frank-Wolfe

Frank-Wolfe Bayesian Quadrature: Probabilistic Integration with Theoretical Guarantees The Frank-Wolfe method is popping up everywhere these days. Apparently it allows you to get convergence rates for Bayesian quadrature now. They appear to be applying it in function space, which is really cool. Nice visualisations as well.
A Universal Primal-Dual Convex Optimization Framework
On the Global Linear Convergence of Frank-Wolfe Optimization Variants Simon Lacoste-Julien is quickly becoming the Frank-Wolfe expert. I like the consideration of the structure of the constraint set through a notion similar to a condition number. Great diagrams as well.
Inference/Sampling

Smooth and Strong: MAP Inference with Linear Convergence I don't fully understand the novelty in this paper; it looks like an application of standard duality tricks for smoothing. If anybody has insight here please let me know in the comments.
Reflection, Refraction, and Hamiltonian Monte Carlo A way of improving traditional HMC in some cases by better handling innate structure.
Optimization Monte Carlo: Efficient and Embarrassingly Parallel Likelihood-Free Inference
A Complete Recipe for Stochastic Gradient MCMC
Other Optimization

HONOR: Hybrid Optimization for NOn-convex Regularized problems Basically they switch between quasi-Newton and gradient steps depending on the local structure. Feels a little ad hoc. They also only compare against GIST, a method they also invented. I would like to see a more general comparison.
Convergence rates of sub-sampled Newton methods The approach they take to sub-sampled approximations is interesting. They handle noise in the Hessian from the approximation by setting the smaller eigenvalues not to zero but to the value of the kth top eigenvalue, for some k. Noise should have a bigger relative effect on the lesser eigenvalues, so that looks reasonable. I'm not super-convinced by the plots though, I would like to see results on a few other problems. Also, fonts on plots are too small.
Theory

Solving Random Quadratic Systems of Equations Is Nearly as Easy as Solving Linear Systems Quite an interesting signal-recovery problem solved here. Related to compressed sensing, where you assume some kind of structure on the design matrix (i.e. iid sampled from a Gaussian or binomial).
Newton-Stein Method: A Second Order Method for GLMs via Stein's Lemma An alternative hessian approximation. They make use of sub-sampling in a similar way to the "Convergence rates of sub-sampled Newton methods" paper above, citing a pre-print of it. I like how they use estimates of the covariance matrix of the data to estimate the Hessian. Plot font is too small. They use the wrong citation style for NIPS, not sure how that got through review.
Learning with Symmetric Label Noise: The Importance of Being Unhinged