
How to check your simple definition of p-value

I just read Andrew Gelman’s post about an article with his name on it that opens with an inaccurate definition of p-value. I sympathize with all parties. Journalists and editors are just trying to translate technical terms into layperson language. Earlier this year I caught a similar inaccurate definition on a site that defines statistical terms for journalists. Evidently this wrong definition is incredibly attractive to our minds for some reason.

A simple rule for testing if your definition is wrong:

  • If knowing the truth could make the p-value (as you have defined it) go to zero, then it is wrongly defined.

Let’s test:

  • wrong definition: p-value is the probability that a result is due to random chance

Suppose we know the truth, that the result is not due to random chance. Then the p-value as defined here should be zero. So this definition is wrong. The Wikipedia definition is too technical. I prefer the one from Gelman’s post:

  • right definition: “the probability of seeing something at least as extreme as the data, if the model […] were true”

where “model” refers to the boring model (e.g. the coin is balanced, the card guesser is not clairvoyant, the medicine is a sugar pill, etc.). This definition does not fail my test. We can calculate a probability under a model even when we know that model is not true.
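To make the right definition concrete, here is a minimal sketch (my own illustration, not from the post) for the coin example: compute the probability of seeing at least as many heads as observed, assuming the boring model that the coin is balanced. The point of my test is visible in the code: the calculation runs the same way whether or not the coin really is fair.

```python
from math import comb

def binomial_p_value(heads, flips, p=0.5):
    """One-sided p-value: the probability of seeing at least `heads`
    heads in `flips` tosses, *if* the boring model (a balanced coin,
    p = 0.5) were true."""
    return sum(comb(flips, k) * p**k * (1 - p)**(flips - k)
               for k in range(heads, flips + 1))

# 60 heads out of 100 flips. This probability is computable whether or
# not the coin is truly fair -- knowing the truth does not force it to 0.
pval = binomial_p_value(60, 100)
print(round(pval, 4))  # roughly 0.03: unusual, but not impossible, for a fair coin
```

Under the wrong definition, learning that the coin is actually biased would force this number to zero; under the right definition, nothing changes, because the probability is computed under a hypothetical model.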


