There are interesting null results that get published and are well known. For example, Card & Krueger (1994) was a null-result paper showing that increasing the minimum wage had no effect on employment rates. This went against the then-common assumption that raising the minimum wage would reduce employment.
Other null results are either noisy (e.g., big standard errors) or the product of process problems (e.g., experimental failure). These are more difficult to publish because it's hard to learn anything new from them.
The challenge is that researchers don't know in advance whether they are going to get a "good" null or a "bad" one. Most of the time, you have to invest significant effort and time into a project, only to get a null result at the end. Those results are difficult to publish in most cases, and they can end a career for someone pre-tenure or cause funding problems for anyone.
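To make the "good" vs "bad" null distinction concrete, here is a minimal numerical sketch (illustrative numbers only, not from any actual study): both hypothetical studies estimate an effect near zero, but only the precise one rules out effects large enough to matter.

```python
import math

def ci_95(estimate, std_error):
    """95% confidence interval under a normal approximation."""
    half_width = 1.96 * std_error
    return estimate - half_width, estimate + half_width

# Hypothetical study A: small sample, noisy measurement -> big standard error.
noisy_lo, noisy_hi = ci_95(estimate=0.02, std_error=0.25)

# Hypothetical study B: large sample, tight measurement -> small standard error.
precise_lo, precise_hi = ci_95(estimate=0.02, std_error=0.03)

print(f"Noisy null:   effect in [{noisy_lo:+.2f}, {noisy_hi:+.2f}]  (still consistent with large effects; hard to learn from)")
print(f"Precise null: effect in [{precise_lo:+.2f}, {precise_hi:+.2f}]  (rules out anything but tiny effects; informative)")
```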
They avoid mentioning the elephant in the room: jobs and tenure. When you can get hired for a tenure-track job based on your null-result publications, and can get tenure for your null-result publications, then people will publish null results. Until then, they won't hit the mainstream.
I studied physics at university and found it challenging to find null-result publications to cite, even though they can be useful when proposing a new experiment or as background for a non-null paper.
I promised myself if I became ultra-wealthy I would start a "Journal of Null Science" to collect these publications. (this journal still doesn't exist)
It's amazing to see this on the front page of HN as it came up in a discussion with my partner early in our relationship. I was saying something about how a lot of people don't understand the robustness of peer review and replication. I was gushing about how it's the most perfect system of knowledge advancement and she replied, "I mean, it's not perfect though," and then said, pretty much verbatim, the title of this article.
The problem specifically isn't so much that null results don't get published; it's that they get published as a positive result for something the researchers weren't actually studying - they have to change their hypothesis retroactively to make it look like a positive result. Worse, this leads to studies designed to study as many things as possible, to hedge their bets. Those studies suffer from quality problems, because of course you can't really study anything deeply if you're controlling that many variables.
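A quick simulation (an assumed setup, not taken from the article) shows why "measure many things, report whatever comes up significant" is so tempting: with 20 outcomes that all have no real effect, tested at alpha = 0.05, most studies will still produce at least one "positive" finding purely by chance.

```python
import random

def one_study(n_outcomes=20, n_per_group=50):
    """Return True if at least one outcome looks 'significant' despite no real effects."""
    for _ in range(n_outcomes):
        a = [random.gauss(0, 1) for _ in range(n_per_group)]
        b = [random.gauss(0, 1) for _ in range(n_per_group)]  # same distribution: the null is true
        mean_diff = sum(a) / n_per_group - sum(b) / n_per_group
        se = (2 / n_per_group) ** 0.5          # known unit variance, for simplicity
        if abs(mean_diff / se) > 1.96:         # two-sided z-test at alpha = 0.05
            return True
    return False

random.seed(0)
trials = 2000
rate = sum(one_study() for _ in range(trials)) / trials
print(f"Share of all-null studies with at least one 'positive' finding: {rate:.0%}")  # roughly 1 - 0.95**20, about 64%
```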
You could publish them as a listicle: "10 falsehoods organic chemists believe!" Behind almost every null result was a hypothesis that sounded like it was probably true. Most likely it would sound probably true to most people in the field too, so publishing the result is of real value to others.
The problem is that null results are cheap and easy to "find" for hypotheses no one thinks are plausible in the first place, and are therefore a trivial way to game the publish-or-perish system. I suspect that this alone explains the bias against publishing null results.
There is zero incentive for the researcher personally to publish null results.
Null results are the foundation on which “glossy” results are built. Researchers would be wasting time and giving away their competitive advantage by publishing null results.
Never worked in academia, but in industry null results are really valuable: “we tried that, it didn’t work, don’t waste your time”. Sometimes very intuitive things just don’t work. Sometimes people are less inclined to share those things publicly, though.
In a perfect world, there would still be a forcing function to get researchers to publish null results. Maybe the head of a department publishes the research the group tried that didn’t work out. I wonder how much money has been lost repeatedly trying the same approaches that don’t work.
Journals could fix that. They could create a null-results category and dedicate a fixed share of pages to it (say, 20%). Researchers want to be in journals, and if this category doesn’t get a lot of submissions it would be much easier to get published there.
We just need a site for posting null result reports.
With really good keyword search functionality.
Far fewer formal requirements than regular papers; make it easy, but with some common-sense guidelines.
And public upvotes and commenting, so contributors get some feedback love, and failures can attract potentially helpful turnaround ideas.
And of course, annual awards. For humor and not-so-serious (because, why so serious?) street cred, but with the serious mission of raising awareness that negative results are not some narrow bit of information: attempts and results, bad or not, are rich sources of new ideas.
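For what it's worth, here is a back-of-the-envelope sketch of the data model those comments describe (all names and fields are hypothetical, just to show how little structure such a site would need): lightweight reports tagged with keywords for search, plus votes and comments for feedback.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Comment:
    author: str
    body: str
    posted_at: datetime

@dataclass
class NullResultReport:
    title: str
    summary: str                       # what was tried and why it didn't work
    keywords: list[str]                # drives the keyword search asked for above
    author: str
    posted_at: datetime
    upvotes: int = 0
    comments: list[Comment] = field(default_factory=list)

def keyword_search(reports: list[NullResultReport], query: str) -> list[NullResultReport]:
    """Naive keyword match, sorted by upvotes so the most useful reports surface first."""
    q = query.lower()
    hits = [r for r in reports if any(q in k.lower() for k in r.keywords)]
    return sorted(hits, key=lambda r: r.upvotes, reverse=True)
```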
They could just publish them on arXiv or their own websites. Publishing is easy. What they mean is that they struggle to get attention for them in a way that boosts their own careers by pumping the metrics their administrators care about. Not the same thing at all.
The Michelson–Morley experiment is probably the most famous published null result: https://en.wikipedia.org/wiki/Michelson%E2%80%93Morley_exper...