pp. Manipulating scientific data so that the results appear to be statistically significant.
2015
If we’re going to rely on science as a means for reaching the truth — and it’s still the best tool we have — it’s important that we understand and respect just how difficult it is to get a rigorous result. I could pontificate about all the reasons why science is arduous, but instead I’m going to let you experience one of them for yourself. Welcome to the wild world of p-hacking.
2014
Perhaps the worst fallacy is the kind of self-deception for which psychologist Uri Simonsohn of the University of Pennsylvania and his colleagues have popularized the term P-hacking; it is also known as data-dredging, snooping, fishing, significance-chasing and double-dipping.
2012
Almost more alarming than the few individuals committing academic fraud are the high percentage of researchers who admitted to more common questionable research practices, like post-hoc theorizing and data-fishing (sometimes referred to as p-hacking), in a recent study led by Leslie John.
2012 (earliest)
In the final talk, Uri Simonsohn of UPenn discussed what he refers to as "p-hacking." P-hacking is the idea that if researchers are engaging in questionable analysis practices, then they should have a disproportionate number of findings at or close to the p < .05 threshold for statistical significance, and that this can be relatively easy to detect.
In statistics, the P value is the probability of seeing an effect at least as large as the one shown by the data if, in fact, there is no real effect and the result is just an artifact of the methodology (such as random sampling error). If P is low (the usual threshold is five percent, or 0.05), then the effect shown by the data is said to be statistically significant.
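To make the definition concrete, here is a minimal simulation sketch (Python, using NumPy and SciPy; the names and parameters such as n_outcomes are illustrative, not from any of the cited sources) of one common p-hacking tactic: measuring many unrelated outcomes for two identical groups and reporting only whichever comparison happens to cross the 0.05 threshold.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    n_per_group = 30   # participants per group (illustrative)
    n_outcomes = 20    # number of unrelated outcomes measured (illustrative)

    # Both groups are drawn from the SAME distribution, so there is no real effect.
    group_a = rng.normal(size=(n_outcomes, n_per_group))
    group_b = rng.normal(size=(n_outcomes, n_per_group))

    # Test every outcome and keep only the "significant" ones.
    p_values = np.array([
        stats.ttest_ind(group_a[i], group_b[i]).pvalue
        for i in range(n_outcomes)
    ])

    significant = np.where(p_values < 0.05)[0]
    print("Outcomes with p < 0.05 despite no real effect:", list(significant))
    print("Chance of at least one false positive in "
          f"{n_outcomes} tests: {1 - 0.95 ** n_outcomes:.0%}")

With roughly 20 independent comparisons, there is about a 64 percent chance that at least one will look significant by luck alone, which is why selectively reporting such hits produces the pattern described in the citations above.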