
Thoughts on "The signal and the noise"

I came across Nate Silver’s name on two occasions in the past: once when I was taking Stat 215a at Berkeley, and another time during the 2016 elections. With some luxury of free time while interning in Chicago, I finally took the time to read this book.

First, the good things. Primarily due to my line of research, I have always been more fond of the frequentist approach, but the book gave a very convincing argument for the Bayesian approach: as long as our prior does not exclude the truth, then with more data we will be “less and less and less wrong”. While it might be odd to assign a probability to Newton’s Second Law failing, it is certainly a valid approach for more contemporary research, where repeated experiments show very large deviations in effect sizes. Or, using his example of climate change: while climate change itself is irrefutable, the scale and severity of its effects are far less settled when the estimates show rather large variances.
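To make that convergence argument concrete for myself, here is a minimal sketch (mine, not the book’s), assuming a coin with a made-up true bias of 0.6 and a uniform Beta(1, 1) prior that does not exclude it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up setup: the coin's true bias is 0.6, and we start from a uniform
# Beta(1, 1) prior, which does not exclude any bias in (0, 1).
true_p = 0.6
prior_alpha, prior_beta = 1.0, 1.0

for n in [10, 100, 1_000, 10_000]:
    heads = (rng.random(n) < true_p).sum()   # simulate n flips
    alpha = prior_alpha + heads              # conjugate Beta-Binomial update
    beta = prior_beta + n - heads
    mean = alpha / (alpha + beta)
    sd = np.sqrt(alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1)))
    print(f"n={n:>6}  posterior mean={mean:.3f}  posterior sd={sd:.4f}")
```

The posterior mean drifts toward 0.6 and its spread shrinks roughly like 1/√n: with more data the answer becomes “less and less and less wrong”, precisely because the prior never ruled the truth out.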

The book also gave a very intuitive explanation of overfitting with the earthquake example, especially when it comes to extrapolation (as opposed to interpolation). For exceptionally rare events that are seldom or never observed, an estimate of their probability can be unreliable (in either direction), but the payoff (or penalty) can be orders of magnitude larger, not unlike the main focus of Taleb’s book The Black Swan.
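As a toy illustration of that point (again mine, not the book’s), here is what can happen when a deliberately overfit polynomial is asked to extrapolate beyond made-up training data generated from a simple linear trend:

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up training data: a linear trend plus noise, observed only on [0, 5].
x_train = np.linspace(0, 5, 20)
y_train = 2.0 * x_train + rng.normal(scale=1.0, size=x_train.size)

# A straight-line fit versus a deliberately overfit degree-9 polynomial.
line = np.polynomial.Polynomial.fit(x_train, y_train, deg=1)
wiggle = np.polynomial.Polynomial.fit(x_train, y_train, deg=9)

# Interpolation (inside the observed range) versus extrapolation (outside it).
for x in [2.5, 6.0, 8.0]:
    print(f"x={x}: truth={2.0 * x:.1f}  "
          f"linear fit={line(x):.1f}  degree-9 fit={wiggle(x):.1f}")
```

Inside the observed range the two fits agree closely; outside it, the overfit polynomial typically veers far from the truth, and rare events live exactly in that outside region.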

I have some confusion about some of the examples used, however. In arguing that having more information available can make estimation worse, the book mentioned the National Journal panelists’ predictions for the 2010 midterm elections. Across all 435 races, the predictions from conservatives and liberals differed by 6 seats, while for the races in a few specific states (Nevada, Illinois, Pennsylvania, Florida and Iowa) they differed by 5 out of 11. Differing by 6 out of 435 certainly looks more consistent than differing by 5 out of 11. However, making accurate predictions for 11 (idealized) coin flips is a harder job than doing so for 435 flips: the small number of flips prevents us from making full use of the law of large numbers, making the prediction innately difficult. The difference between the conservatives’ and liberals’ estimates needs to be put in perspective by proper scaling (such as dividing by √n, the order of the binomial fluctuations). While I agree with the sentiment, I am not sure this example is so appropriate.
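To see why the raw comparison bothers me, here is a back-of-the-envelope calculation (my own crude assumption, not a figure from the book) that treats every race as an idealized fair coin flip, so the seat count fluctuates on the order of √(n/4):

```python
import math

# Crude assumption: each race is a fair coin flip, so the total seat count
# has binomial standard deviation sqrt(n * 0.5 * 0.5).
for n_races, gap in [(435, 6), (11, 5)]:
    sd = math.sqrt(n_races * 0.25)
    print(f"{n_races} races: a gap of {gap} seats "
          f"is about {gap / sd:.1f} standard deviations")
```

On that scale, a 5-seat disagreement over 11 races is roughly a 3-standard-deviation discrepancy, while 6 seats over 435 races is only about 0.6 of one; the two numbers are not comparable without some such normalization.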

In Chapter 10, the book analyzed a poker game against a mythical opponent, “the Lawyer”, and walked us through a step-by-step application of Bayesian analysis: a prior was set up (an initial read, something my fellow interns in Chicago liked to talk about on poker nights), followed by Bayesian updates to that prior as the game unfolds. Each of these updates is objective, as poker is a fairly mathematical game. This is a benefit of poker’s clean-cut probabilities that does not translate so well to more complicated systems, say basketball games. With a subjective prior, and with the updates themselves becoming subjective in those messier settings, I worry that the subjectivity can add up and prevent us from converging to the truth. However, since I mostly work on more theoretical problems, I believe other statisticians may know these tradeoffs better in practice.
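For concreteness, here is the kind of update I have in mind, with a made-up read on the Lawyer and made-up likelihoods for a large river raise (none of these numbers come from the book):

```python
# Prior read on the Lawyer's holding (made-up numbers).
prior = {"strong hand": 0.2, "medium hand": 0.6, "bluff": 0.2}

# Assumed likelihood of seeing a large river raise under each holding.
likelihood = {"strong hand": 0.9, "medium hand": 0.1, "bluff": 0.5}

# Bayes' rule: posterior is proportional to prior times likelihood.
unnormalized = {hand: prior[hand] * likelihood[hand] for hand in prior}
total = sum(unnormalized.values())
posterior = {hand: weight / total for hand, weight in unnormalized.items()}

for hand in prior:
    print(f"{hand}: prior {prior[hand]:.2f} -> posterior {posterior[hand]:.2f}")
```

The arithmetic of the update is mechanical; what worries me is that both the prior and the assumed likelihoods are subjective, so their errors can compound over many updates.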

Nate Silver pointed out many pitfalls in statistical applications along with their solutions, some of which pull in opposite directions, leaving room for data analysts to strike a balance. The abundance of charts and examples also made it a great summer read for taking a break: a break from statistics by reading about statistics.