# Look-elsewhere effect

The look-elsewhere effect is a phenomenon in the statistical analysis of scientific experiments where an apparently statistically significant observation may have actually arisen by chance because of the sheer size of the parameter space to be searched.[1][2][3][4][5]

Once the possibility of look-elsewhere error in an analysis is acknowledged, it can be compensated for by careful application of standard mathematical techniques.[6]

More generally known in statistics as the problem of multiple comparisons, the term gained some media attention in 2011, in the context of the search for the Higgs boson at the Large Hadron Collider.[7]

## Use

Many statistical tests deliver a p-value: the probability of obtaining a result at least as extreme as the one observed, assuming pure chance (the null hypothesis). When asking "does X affect Y?", it is common to vary X and test whether Y varies significantly as a result. If the p-value is less than some predetermined significance threshold α, the result is considered "significant".

However, if one performs multiple tests ("looking elsewhere" when the first test fails), then a p-value as small as 1/n is expected to occur by chance after about n tests. For example, even when there is no real effect, an event with p < 0.05 will be seen on average once in every 20 tests. To compensate, one can divide the threshold α by the number of tests n, so that a result is significant only when p < α/n, or, equivalently, multiply the observed p-value by the number of tests (significant when np < α). This is known as the Bonferroni correction.
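A small Monte Carlo sketch can make this concrete. The simulation below (an illustration, not drawn from any cited study; all numbers are chosen for demonstration) runs 20 tests of a true null hypothesis many times over and measures how often at least one test falls below the threshold, first with the raw α and then with the corrected α/n:

```python
import random

random.seed(1)

ALPHA = 0.05
N_TESTS = 20      # number of "elsewhere" looks
N_TRIALS = 10_000 # repetitions of the whole experiment

def any_false_positive(threshold):
    """Fraction of trials in which at least one of N_TESTS null tests
    produces a p-value below `threshold`.  Under the null hypothesis,
    each p-value is uniform on [0, 1]."""
    hits = 0
    for _ in range(N_TRIALS):
        p_values = [random.random() for _ in range(N_TESTS)]
        if min(p_values) < threshold:
            hits += 1
    return hits / N_TRIALS

uncorrected = any_false_positive(ALPHA)           # theory: 1 - 0.95**20 ≈ 0.64
bonferroni = any_false_positive(ALPHA / N_TESTS)  # theory: back near ALPHA
print(f"P(any p < α):   {uncorrected:.3f}")
print(f"P(any p < α/n): {bonferroni:.3f}")
```

With the uncorrected threshold, a "significant" result appears in roughly two thirds of the trials despite there being no effect at all; dividing α by n restores the intended overall false-positive rate of about 5%.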

This is a simplified description; strictly, n is the number of effectively independent tests (the number of degrees of freedom). If the tests are not fully independent, this number may be smaller than the number of tests actually performed.

The look-elsewhere effect is a frequent cause of "significance inflation" when the number of independent tests n is underestimated because failed tests are not published: a paper may fail to mention the alternative hypotheses that were considered, or a study producing no result may never be published at all, leaving journals dominated by statistical outliers.

## Examples

• A Swedish study in 1992 tried to determine whether living near power lines caused adverse health effects. The researchers surveyed everyone living within 300 meters of high-voltage power lines over a 25-year period and looked for statistically significant increases in the rates of over 800 ailments. The study found that the incidence of childhood leukemia was four times higher among those who lived closest to the power lines, and it spurred calls to action by the Swedish government. The problem with the conclusion, however, was that the researchers failed to compensate for the look-elsewhere effect: in any collection of 800 random samples, it is likely that at least one will be at least 3 standard deviations above the expected value by chance alone. Subsequent studies failed to show any link, either causal or correlational, between power lines and childhood leukemia.[8]
• The Bible Code phenomenon purports to find significant groupings of words, predicting future events, hidden in the text of the Hebrew Bible when it is taken as a raw sequence of unspaced letters and arranged into grids of various proportions. However, as an article in Skeptical Inquirer demonstrated,[9] this amounts to generating vast numbers of grids to examine for patterns: the full text string can be divided into rows of anywhere from a few letters to hundreds of thousands of letters wide. Each of those many grids can in turn be searched for a wide range of words of interest by skipping through the text in intervals of an arbitrary x letters (or x+1, x+2, etc.), forward or backward, and an associated coincident word of interest can be any nearby string at an arbitrary skip of its own, so that in this massive cross product of parameterized possibilities the permutational volume becomes enormous. Thus, setting aside related problems such as confirmation bias, even if no grouping of interest or significance is found in the first grid, the next iteration can be tried by computer, and so on en masse, until "miraculous" or "improbable" groupings are finally arrived at. The effect is tantamount to dealing oneself uninteresting poker hands, in whatever quantity necessary, until one obtains a straight flush or royal flush (or even several such hands in sequence), and then calling the deck inspired for enabling such a result. The Skeptical Inquirer author was accordingly able to achieve identical effects by applying the same search algorithms to the English-language King James Bible in place of the allegedly divinely inspired Hebrew version, and then, just as effectively, to the mundane and arbitrary example text of the 1987 United States Supreme Court decision Edwards v. Aguillard.
• The XKCD comic "Significant" provides a good fictional example of this problem.
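The power-line example above can be quantified with a short back-of-the-envelope calculation. As a sketch (the 800-test count is taken from the example; the assumption of independent, normally distributed test statistics is an idealization), the probability that at least one of 800 independent tests lands 3 standard deviations above its expected value by chance alone is:

```python
from statistics import NormalDist

n_tests = 800  # roughly the number of ailments surveyed in the example

# One-sided probability that a single normal test statistic exceeds 3 sigma
p_single = 1 - NormalDist().cdf(3.0)

# Probability that at least one of n independent tests does so by chance
p_any = 1 - (1 - p_single) ** n_tests

print(f"P(one test > 3 sigma):        {p_single:.5f}")
print(f"P(any of {n_tests} tests > 3 sigma): {p_any:.2f}")
```

Even though a single 3σ excess is rare (about 0.13%), across 800 looks the chance of seeing at least one is roughly two in three, so an uncorrected 3σ "discovery" in such a survey carries little evidential weight.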