Quick Answer: A p-value tells you how likely your experimental results would be if pure, random luck were the only thing at work. If the p-value is 0.03 (3%), it means: assuming there is no real effect, there is only a 3% chance of seeing data at least as extreme as yours. Because that is low (under the standard 5% threshold), scientists call the result "statistically significant."
The Null Hypothesis: Assuming You Are Wrong
The scientific method is pessimistic. It starts with the "Null Hypothesis": the assumption that your new drug doesn't work, or your marketing campaign failed, and that any difference you observed is just random noise. The burden of proof is on you to show that noise alone is an unlikely explanation.
Calculating the P-Value
You run your experiment. The control group lost 1 pound on average; the group taking your diet pill lost 3 pounds. The p-value calculation asks: "If the diet pill is actually useless (the Null Hypothesis is true), how likely is it that we would see a 2-pound difference, or a bigger one, purely due to random variance?"
If the math returns p = 0.01, it means: "Were the pill useless, we would see a gap this large only 1% of the time." That is rare enough that we reject the null hypothesis: the drug probably works.
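To see where a number like that could come from, here is a minimal sketch of one common approach, a permutation test, in Python. The group sizes, means, and noise level below are invented for illustration; a real analysis would plug in the measured data instead of simulating it.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical weight-loss data in pounds (12 people per group); a real
# study would use its actual measurements here.
control = rng.normal(loc=1.0, scale=2.0, size=12)  # control group: ~1 lb lost
pill = rng.normal(loc=3.0, scale=2.0, size=12)     # diet-pill group: ~3 lb lost

observed_gap = pill.mean() - control.mean()

# Null hypothesis: the pill is useless, so the group labels are meaningless.
# Shuffle the labels thousands of times and count how often chance alone
# produces a gap at least as large as the one we actually observed.
pooled = np.concatenate([control, pill])
n_control = len(control)
n_shuffles = 10_000
hits = 0
for _ in range(n_shuffles):
    rng.shuffle(pooled)
    gap = pooled[n_control:].mean() - pooled[:n_control].mean()
    if gap >= observed_gap:
        hits += 1

p_value = hits / n_shuffles
print(f"observed gap: {observed_gap:.2f} lb, one-sided p-value: {p_value:.4f}")
```

The code mirrors the logic of the null hypothesis directly: if the pill does nothing, the labels "control" and "pill" are interchangeable, so shuffling them reveals how big a gap randomness produces on its own.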
The Magic Number: P < 0.05
In the 1920s, statistician R.A. Fisher suggested, more or less arbitrarily, that a 1-in-20 chance (5%, or 0.05) of a false positive was an acceptable threshold for calling something a real discovery. The convention stuck. A p-value of 0.04 is "significant." A p-value of 0.06 is "not significant," even though the evidence in the two cases is nearly identical.
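In code, the entire convention collapses to a single comparison, which is exactly why the cliff between 0.04 and 0.06 feels so arbitrary. A minimal sketch:

```python
ALPHA = 0.05  # Fisher's conventional 1-in-20 cutoff

for p in (0.04, 0.06):
    verdict = "significant" if p < ALPHA else "not significant"
    print(f"p = {p:.2f} -> {verdict}")
```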
The P-Hacking Crisis
P-values are currently under heavy scrutiny. If you test 20 different useless vitamins against a placebo at the 0.05 level, there is roughly a 64% chance (1 - 0.95^20) that at least one will show p < 0.05 by pure random chance. Unethical researchers publish the one fluke and hide the nineteen failures. This kind of selective reporting is called "p-hacking," and it has led many scientific journals to move away from rigid p-value thresholds.
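A quick simulation makes the arithmetic vivid. The sketch below (group sizes and trial counts are arbitrary choices) reruns the "20 useless vitamins" experiment thousands of times and counts how many simulated labs find at least one publishable fluke:

```python
import numpy as np
from scipy import stats

# Analytic answer: chance of at least one false positive across 20
# independent tests when every null hypothesis is true.
print(f"expected: {1 - 0.95 ** 20:.1%}")  # ~64.2%

rng = np.random.default_rng(0)
n_labs = 2_000     # simulated research programs
n_vitamins = 20    # useless treatments tested per program
n_subjects = 30    # people per group (an arbitrary choice)

lucky_labs = 0
for _ in range(n_labs):
    for _ in range(n_vitamins):
        placebo = rng.normal(0.0, 1.0, n_subjects)
        vitamin = rng.normal(0.0, 1.0, n_subjects)  # same distribution: truly useless
        _, p = stats.ttest_ind(vitamin, placebo)
        if p < 0.05:
            lucky_labs += 1  # a publishable "discovery" that is pure noise
            break            # the p-hacker stops looking after the first hit

print(f"simulated: {lucky_labs / n_labs:.1%} of labs found a 'significant' vitamin")
```

Roughly two labs in three "discover" a working vitamin, even though every single one is pure noise.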