# P-Value Calculator
## P-Value — Complete Guide
### What is a p-value?
The p-value is the probability of obtaining a test result at least as extreme as the observed result, assuming the null hypothesis H0 is true. It does not tell you the probability that H0 is true; it measures how compatible your data are with H0.
### How to interpret a p-value
| p-value | Interpretation |
|---|---|
| p < 0.001 | Very strong evidence against H0 |
| 0.001 ≤ p < 0.01 | Strong evidence against H0 |
| 0.01 ≤ p < 0.05 | Moderate evidence against H0 (significant at α = 0.05) |
| 0.05 ≤ p < 0.10 | Weak evidence; marginal significance |
| p ≥ 0.10 | Little to no evidence against H0 |
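The table above is a simple threshold lookup, so it translates directly into code. A minimal sketch (the function name `interpret_p` is illustrative, not a standard API):

```python
def interpret_p(p: float) -> str:
    """Map a p-value to the evidence category from the table above."""
    if p < 0.001:
        return "very strong evidence against H0"
    if p < 0.01:
        return "strong evidence against H0"
    if p < 0.05:
        return "moderate evidence against H0"   # significant at alpha = 0.05
    if p < 0.10:
        return "weak evidence; marginal significance"
    return "little to no evidence against H0"
```

For example, `interpret_p(0.03)` falls in the 0.01 ≤ p < 0.05 band and returns the "moderate evidence" category.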
### Z-distribution (standard normal)
Used when the test statistic follows a standard normal distribution (large samples, or known population standard deviation σ).
- Two-tailed: p = 2 × (1 − Φ(|z|)), where Φ is the standard normal CDF
- Left-tailed: p = Φ(z)
- Right-tailed: p = 1 − Φ(z)
Example: z = 1.96 (two-tailed) ⇒ p ≈ 0.05
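The three tail formulas need only the standard normal CDF Φ, which the Python standard library provides via the error function: Φ(z) = ½(1 + erf(z/√2)). A minimal sketch (function names are illustrative):

```python
from math import erf, sqrt

def phi(z: float) -> float:
    """Standard normal CDF, Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def z_p_value(z: float, tail: str = "two") -> float:
    """p-value for a z statistic; tail is 'two', 'left', or 'right'."""
    if tail == "two":
        return 2.0 * (1.0 - phi(abs(z)))   # p = 2 * (1 - Phi(|z|))
    if tail == "left":
        return phi(z)                      # p = Phi(z)
    return 1.0 - phi(z)                    # p = 1 - Phi(z)
```

`z_p_value(1.96, "two")` reproduces the example above: p ≈ 0.05.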
### χ²-distribution (chi-square)
Used in goodness-of-fit and independence tests. Always right-tailed, because larger χ² values indicate greater departure from H0.
p = 1 − Fχ²(χ², df), where Fχ² is the chi-square CDF with df degrees of freedom.
Example: χ² = 5.99, df = 2 (right-tailed) ⇒ p ≈ 0.05
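The chi-square CDF is a regularized lower incomplete gamma function, Fχ²(x, df) = P(df/2, x/2). As a self-contained sketch, P(a, x) can be computed with its standard power series (in practice one would use `scipy.stats.chi2.sf`; the series here is illustrative and adequate for typical test statistics):

```python
from math import exp, lgamma, log

def lower_reg_gamma(a: float, x: float, terms: int = 200) -> float:
    """Regularized lower incomplete gamma P(a, x) via its power series."""
    if x <= 0.0:
        return 0.0
    total, term = 0.0, 1.0 / a          # term_0 = 1/a
    for n in range(1, terms):
        total += term
        term *= x / (a + n)             # term_n = term_{n-1} * x / (a + n)
    return total * exp(a * log(x) - x - lgamma(a))

def chi2_sf(x: float, df: float) -> float:
    """Right-tail p-value P(X >= x) for chi-square with df degrees of freedom."""
    return 1.0 - lower_reg_gamma(df / 2.0, x / 2.0)
```

`chi2_sf(5.99, 2)` reproduces the example above: p ≈ 0.05.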
### Common misconceptions
- A p-value is not the probability that H0 is true.
- Statistical significance does not imply practical significance.
- p ≥ 0.05 does not prove H0 is true; it only means insufficient evidence to reject it.
- The threshold α = 0.05 is a convention, not a law — choose based on context.
### Step-by-step example (Z-test, two-tailed)
1. Compute the test statistic: z = (x̄ − μ0) / (σ / √n)
2. Find the area beyond |z| in the standard normal distribution.
3. Multiply by 2 for a two-tailed test: p = 2 × P(Z > |z|)
4. Compare p to α: if p < α, reject H0.
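The steps above can be sketched end to end with the standard library. The sample numbers below (x̄ = 103, μ0 = 100, σ = 15, n = 100) are hypothetical, chosen so the arithmetic is easy to follow:

```python
from math import erf, sqrt

def two_tailed_z_test(xbar: float, mu0: float, sigma: float,
                      n: int, alpha: float = 0.05):
    """Run the four steps: statistic, tail area, doubling, decision."""
    z = (xbar - mu0) / (sigma / sqrt(n))            # step 1
    tail = 1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0)))  # step 2: P(Z > |z|)
    p = 2.0 * tail                                   # step 3
    return z, p, p < alpha                           # step 4

# Hypothetical sample: mean 103 against mu0 = 100, sigma = 15, n = 100
z, p, reject = two_tailed_z_test(103, 100, 15, 100)
# z = 3 / 1.5 = 2.0, p = 2 * (1 - Phi(2)) ≈ 0.0455, so H0 is rejected at alpha = 0.05
```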
