
p-value Screening

Will and I have been working on the problem of finding confidence lower bounds for a maximum parameter. Here is a good toy model: the confidence lower bound for the maximum mean parameter from Gaussian observations with i.i.d. noise. Specifically, we are trying to find a lower confidence bound for $\max_i \theta_i$ from the observation $X \sim N(\theta, I_n)$, where $\theta$ is the mean vector.

Now a good lower confidence bound would be one that is larger. We can consider the test dual to the confidence bound: a test with higher power should generally mean a better confidence bound. The dual test will test the null hypothesis

$$H_0: \max_i \theta_i \le \mu,$$

which is really just the intersection of the individual nulls $H_{0,i}: \theta_i \le \mu$. We are now in the realm of multiple testing!

A basic way to test $H_0$ is of course through Bonferroni correction. However, when we test a $\mu$ that is larger than many of the observations $X_i$, few of these observations will provide evidence against the null, but they count towards the multiplicity nonetheless. This is where we can aim to do better.
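To make the duality concrete, here is a rough sketch (my own code; the names bonferroni.test and lcb.by.inversion are made up for this post) of the Bonferroni test and the test inversion that turns any such global test into a lower confidence bound:

bonferroni.test <- function(x, mu = 0) {
  # One-sided p-values for H_{0,i}: theta_i <= mu, combined by Bonferroni
  p <- pnorm(x - mu, lower.tail = FALSE)
  min(length(p) * min(p), 1)
}

lcb.by.inversion <- function(x, global.test = bonferroni.test, alpha = 0.05) {
  # The lower confidence bound is the mu at which the global-null p-value
  # crosses alpha: every smaller mu is rejected
  uniroot(function(mu) global.test(x, mu) - alpha,
          interval = c(min(x) - 10, max(x) + 10))$root
}

x <- c(4, rep(0, 99)) + rnorm(100)
lcb.by.inversion(x)  # 95% lower confidence bound for max_i theta_i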

Zhao, Small and Su (2018) suggested using a threshold $\tau$ to screen out the p-values above $\tau$, then dividing the remaining p-values by $\tau$. (This idea was also discovered independently by Ellis, Pecanka and Goeman (2017).) We have also rediscovered the same idea, and all three groups stated very similar conditions on the p-values, from "uniformly conservative" to "superuniform". The threshold can also be taken as a stopping time: $\tau$ can move from 1 towards 0, revealing any p-value greater than $\tau$ and stopping based only on this information. It can be shown that this still controls the type I error rate, through a martingale argument not unlike the proof of the Benjamini–Hochberg procedure.
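As a rough sketch of the screening idea with a fixed, prespecified threshold, here is my own simplified version (the papers have the precise procedures, and validity relies on the uniform conservativeness condition):

screened.bonferroni <- function(p, tau = 0.5) {
  # Keep only the p-values at or below tau, rescale them by tau,
  # and apply Bonferroni over the retained p-values only
  keep <- p[p <= tau]
  if (length(keep) == 0) return(1)
  min(length(keep) * min(keep) / tau, 1)
}

# With many conservative p-values, screening shrinks the multiplicity a lot
p <- pnorm(c(4, rep(-4, 99)), lower.tail = FALSE)
c(plain = min(length(p) * min(p), 1), screened = screened.bonferroni(p))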

But Zhao, Small and Su (2018) took an extra step in proposing a beautiful method for selecting this $\tau$: Note that the "saving" one gets from performing the p-value screening is

$$\frac{F(\tau)}{\tau},$$

where $F$ is the averaged distribution of the p-values. The smaller the ratio above, the better the test will perform. Taking derivatives (writing $f = F'$) gives

$$\frac{d}{d\tau} \frac{F(\tau)}{\tau} = \frac{\tau f(\tau) - F(\tau)}{\tau^2},$$

and a good choice of stopping time is when there is no longer strong evidence that $f(\tau) > F(\tau)/\tau$, i.e. that lowering $\tau$ further still improves the ratio. In general this should eventually happen, either when $\tau$ gets uncomfortably close to 0, or when most of the true nulls have been removed.
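To make the "saving" concrete, here is a small illustration of mine that just tabulates the empirical ratio $\hat F(\tau)/\tau$ at each candidate threshold; it is not the data-driven stopping rule of Zhao, Small and Su (2018):

saving.ratio <- function(p) {
  # Empirical saving Fhat(tau)/tau evaluated at tau = p_(j), j = 1, ..., n
  p <- sort(p)
  n <- length(p)
  data.frame(tau = p, ratio = (1:n) / n / p)
}

p <- pnorm(c(rep(1, 20), rep(-1, 80)) + rnorm(100), lower.tail = FALSE)
head(saving.ratio(p))  # small ratios mark thresholds with large savings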

This approach is great for testing, but it comes with two caveats when we turn to finding lower confidence bounds:

The latter point raises a question, similar to that in Johari, Pekelis and Walsh (2016), or the "spotting time" suggested by Aaditya Ramdas: Is it possible to allow an analyst to stop anytime they want? If so, allowing an analyst to go backwards should come with a price. How do we make an appropriate correction?

Choosing a screening threshold at $\tau$ is essentially the same as using

$$p_{(1)} \cdot \frac{\hat F(\tau)}{\tau}$$

as the test statistic and rejecting for small values. Here $p_{(1)}$ is the smallest p-value and $\hat F$ is the empirical CDF of the p-values. What we want to do is essentially to use the test statistic

$$p_{(1)} \cdot \min_{\tau \in T} \frac{\hat F(\tau)}{\tau}$$

for some subset $T \subseteq (0, 1]$, and reject for small values. This is equivalent to allowing an analyst to search through $T$ to cherry-pick the best looking $\tau$. Some possible options for $T$ are a finite grid $\{\tau_1, \ldots, \tau_m\}$ for some prespecified thresholds, or $T = \{p_{(k)}, \ldots, p_{(n)}\}$, where $p_{(j)}$ is the $j$-th smallest p-value.

The first one, while maybe more practical, is less interesting as it is probably not too different from using a fixed $\tau$. We will turn to the second option here. We will analyse it by finding a "stochastic lower bound" for this statistic. We have

$$S = p_{(1)} \cdot \min_{k \le j \le n} \frac{\hat F(p_{(j)})}{p_{(j)}} = p_{(1)} \cdot \min_{k \le j \le n} \frac{j}{n\, p_{(j)}}.$$

Of course, under the null each p-value is stochastically larger than a uniform random variable. So it suffices to control $\min_{k \le j \le n} j / (n\, p_{(j)})$, or equivalently, make sure $\max_{k \le j \le n} n\, p_{(j)} / j$ is not too big. We consider the least favorable distribution, where all p-values are uniformly distributed, i.e. $p_1, \ldots, p_n$ i.i.d. $\mathrm{Unif}(0, 1)$.

This quantity of interest may look like something from empirical process theory, but we only need its value at the p-values themselves. So it is good enough to look at

$$M_j = \frac{n\, p_{(j)}}{j},$$

which happens to be a martingale under the filtration $\mathcal{F}_j = \sigma(p_{(j)}, p_{(j+1)}, \ldots, p_{(n)})$ as $j$ decreases from $n$ to $k$. The expectation of each term is $n / (n + 1)$, so we can use Doob's martingale inequality on the submartingale

$$\left( M_j - \frac{n}{n+1} \right)^2$$

to get some concentration, giving

$$\mathbb{P}\left[ \max_{k \le j \le n} \left| \frac{n\, p_{(j)}}{j} - \frac{n}{n+1} \right| \ge \lambda \right] \le \frac{\sigma_k^2}{\lambda^2}, \quad \text{where } \sigma_k^2 = \mathbb{E}\left[ \left( \frac{n\, p_{(k)}}{k} - \frac{n}{n+1} \right)^2 \right] = \frac{n^2 (n + 1 - k)}{k (n + 1)^2 (n + 2)}.$$
There probably is a better bound, but for now let's stick with this. Under the null we have

$$\mathbb{P}\left[ p_{(1)} \le t \right] \le n t$$

for any $t \ge 0$. With this bound, we have that

$$\frac{n^2}{n + 1}\, S + n \lambda\, S + \frac{\sigma_k^2}{\lambda^2}$$

is stochastically larger than a $\mathrm{Unif}(0, 1)$ random variable for any $\lambda > 0$. Optimizing over $\lambda$ (the minimum is at $\lambda = (2 \sigma_k^2 / (n S))^{1/3}$) gives that

$$\frac{n^2}{n + 1}\, S + \frac{3}{2^{2/3}}\, (n S)^{2/3}\, \sigma_k^{2/3}$$

is also stochastically larger than $\mathrm{Unif}(0, 1)$, so we can use this, capped at 1, as our p-value.
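As a quick sanity check of the Doob-type concentration bound above, here is a small Monte Carlo of mine under i.i.d. uniform p-values, with an arbitrary choice of $\lambda$:

n <- 100; k <- 2; lambda <- 2
emp <- mean(replicate(10000, {
  p <- sort(runif(n))  # least favorable case: i.i.d. uniform p-values
  max(abs(n * p[k:n] / (k:n) - n / (n + 1))) >= lambda
}))
bound <- n^2 * (n + 1 - k) / (k * (n + 1)^2 * (n + 2)) / lambda^2
c(empirical = emp, doob.bound = bound)  # the empirical frequency should stay below the bound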

In a pessimistic case where the analyst shows no restraint, there is nevertheless no reason to choose $k = 1$ (the statistic would then never exceed $1/n$), so the smallest $k$ is 2. Now the question is: how does this test fare compared to Zhao, Small and Su (2018)?

We wrote a short piece of code to test this:

spotting.test <- function(x, k = 2) {
  n <- length(x)
  # One-sided p-values for H_{0,i}: theta_i <= 0, sorted in increasing order
  p <- sort(pnorm(x, lower.tail = FALSE))
  # Cherry-picked screening ratio: min over j >= k of Fhat(p_(j)) / p_(j)
  multiplier <- min((k:n) / p[k:n] / n)
  # Combined p-value from the stochastic lower bound derived above
  comb.p <-
    n^2 / (n + 1) * p[1] * multiplier +
    3 / 2^(2 / 3) * (n * p[1] * multiplier)^(2 / 3) *
    (n^2 * (n + 1 - k) / k / (n + 1)^2 / (n + 2))^(1 / 3)
  min(comb.p, 1)
}

spotting.power <- function(mu, k = 2) {
  require(foreach)
  # Monte Carlo estimate of the rejection rate at level 0.05
  rej <- foreach(i = 1:10000, .combine = "c") %do% {
    x <- mu + rnorm(length(mu))
    spotting.test(x, k) < 0.05
  }
  mean(rej)
}

spotting.power(rep(0, 100))
spotting.power(c(4, rep(0, 99)))
spotting.power(c(4, rep(-1, 99)))
spotting.power(c(4, rep(-4, 99)))
spotting.power(c(4, rep(-10, 99)))
spotting.power(c(rep(1, 20), rep(0, 80)))
spotting.power(c(rep(1, 20), rep(-1, 80)))
spotting.power(c(rep(1, 20), rep(-4, 80)))

Since, after conditioning on the screening, we are really just using a Bonferroni test, we will compare to the corresponding rows of Table 2 in Zhao, Small and Su (2018) (entries are rejection rates in percent):

Setting                                   τ = 0.5   Adaptive   Spotting
1. All null                                   5.0        5.0        0.7
2. 1 strong, 99 null                         76.6       76.7       57.3
3. 1 strong, 99 conservative                 85.2       84.0       70.9
4. 1 strong, 99 very conservative            98.0       98.7       88.3
5. 1 strong, 99 extremely conservative       97.8       98.9       89.0
6. 20 weak, 80 null                          21.0       22.5        4.6
7. 20 weak, 80 conservative                  28.1       26.3        6.0
8. 20 weak, 80 very conservative             38.1       47.3       11.6

Welp. This does not work that well. One possible reason is that "Adaptive" already does a good job of capturing the best cutoff, while spotting has to account for too much noise and pays an unnecessarily high price. In any case, it is not clear whether the spotting test is even monotone.