An essay, The Missing Half of Science: Rethink Negative Results

10 minute read

This essay, written with my friends and fellow researchers, examines a major bottleneck in modern science: the loss of knowledge from failed or discontinued research. We argue that these so-called dead ends are often the starting points for iterative improvement, and that preserving them through structured reporting, shared databases and reformed publishing practices could make research more efficient, collaborative and transparent.

Title: The Missing Half of Science: Rethink Negative Results

Authors: Noa Midzic, Cyan Ching, Astrid Canal

When I get a new idea, I often suspect that it has already been tried. Beyond searching platforms such as Google Scholar, it is difficult to know whether something has never been attempted, or whether it was explored and quietly abandoned. Most published articles present successful results, or at least partially successful ones. It is rare to encounter a paper that concludes, “this approach was tested and proven wrong.” This creates a strange contradiction: science advances by testing ideas, yet many of the ideas that do not work disappear from the public record.

For researchers in academia, this silence is reinforced by incentives. Salaries are often modest, positions are limited and reputation plays a central role in career progression. Professional relationships, recommendations and visibility matter. In such an environment, openly discussing failed ideas or unsuccessful projects can feel risky rather than useful. As a result, failures often remain in internal notes, private conversations or institutional memory, instead of becoming part of shared scientific knowledge.

When scaled to modern science, this becomes a structural flaw. Publicly funded teams may unknowingly repeat experiments that have already failed elsewhere. Science maintains extensive records of success, but the equally valuable record of failure is fragmented, hidden or lost. Time, expertise and public money are therefore spent rediscovering the same limitations.

Yet failure is not the opposite of progress. Science advances not only by discovering what works, but also by ruling out what does not. A failed experiment can reveal flawed assumptions, weaknesses in methodology or conditions under which an idea breaks down. Many failures are also more complex than they first appear. An experiment may not fail because the main idea is wrong, but because of the model used, the protocol chosen, the timing, the available equipment or the way results were measured. Without access to these details, future researchers cannot distinguish between a weak idea and a poor implementation. Instead of improving on earlier attempts, they are forced to start again from zero.

The usual solution is to encourage scientists to publish negative results. Although well intentioned, this has not solved the problem. Publishing unsuccessful work takes time and often brings little career benefit. Voluntary platforms, even anonymous ones, still depend on busy individuals choosing to document work they may have already moved on from. Participation therefore remains uneven.

This suggests that the issue is not simply individual behaviour, but system design. If science wants reliable access to the knowledge contained in failure, responsibility should shift from individual researchers to the organisations that fund research. Public funding bodies could require unsuccessful, discontinued or inconclusive research pathways to be documented in a standardised format and deposited into a shared database. In the European Union, this could be integrated into Horizon Europe and related programmes. In the United States, similar systems could be implemented through agencies such as the NIH, NSF, DOE, DARPA and ARPA-H.

To make participation consistent, reporting could be tied to grant milestones or final payments. It would then become a normal part of closing a project rather than an optional act of goodwill. The purpose would not be to catalogue every failed attempt in excessive detail, but to capture the essential lessons: what was attempted, which methods were used, why the approach was stopped and what future researchers should know before trying something similar. Sensitive details could be anonymised, restricted temporarily or released later when appropriate.

Such a system would allow researchers planning new projects to check whether similar ideas had already been tested, under what conditions they failed and whether alternative routes remain promising. This would reduce unnecessary repetition and help teams design stronger experiments from the beginning. Public research money would also be used more responsibly, since taxpayers would benefit not only from successful discoveries but from the full knowledge generated through exploration.

However, preserving failure is only one part of the problem. The structure of scientific publishing also contributes to inefficiency. Academic publishing often favours complete narratives: large studies that present a polished story from beginning to end. While this format is useful, it raises the barrier to publication and encourages researchers to accumulate results before sharing them. When unsuccessful attempts are included, they are often compressed into minor sections or supplementary material, making them difficult to find and use.

A more effective model would allow well-documented individual experiments to be published regardless of outcome. Instead of requiring every contribution to resolve an entire research question, science could be communicated through precise, reproducible modules. Understanding would then emerge collectively from many smaller contributions. A failed experiment, if carefully documented, would not be treated as wasted work, but as useful evidence that can be reviewed, cited and built upon.

This would also allow evaluation criteria to evolve. Scientific merit should not depend only on positive results, but also on clarity, rigour, reproducibility and usefulness. For students and early-career researchers, this is especially important. Analysing why something did not work is often one of the most valuable parts of scientific training. If such work were visible and recognised, researchers could receive feedback, refine their methods and design better follow-up experiments. Failure would become part of an iterative process rather than a hidden endpoint.

Failures can also be integrated more directly into successful publications. When appropriate, researchers should include clearly documented unsuccessful approaches alongside their final results. This would provide a more honest and complete account of how conclusions were reached, and it would help others understand not only what worked, but what was tried along the way.

This model would encourage a more distributed form of collaboration. Instead of multiple teams independently rebuilding the same tools, methods or technical capabilities, researchers could specialise in specific techniques and contribute to shared problems. Experimental fields would especially benefit, since access to calibrated equipment, specialist protocols and technical expertise could be shared more efficiently. This would reduce redundant investment and allow researchers, including doctoral students, to focus more on scientific questions and less on rebuilding infrastructure.

There have already been attempts to share failed experiments, including initiatives at Utrecht University and elsewhere. However, these efforts have remained limited because they still depend on individual motivation, extra labour and uncertain career rewards. This again shows why systemic support is necessary. Alongside mandatory reporting, institutions could support modular publishing platforms that streamline submission, review and dissemination. Since researchers already write, review and revise scientific work, such platforms could formalise existing academic labour while reducing dependence on traditional publishing structures and their associated costs.

Accessibility must also be addressed. Publicly funded research should be publicly available. Paywalls restrict access not only to successful results, but to the broader knowledge ecosystem. At minimum, researchers should maintain open versions of their work through preprint platforms or institutional repositories, ensuring that publicly funded knowledge remains accessible.

Reproducibility could also become part of this system. Students and early-career researchers could contribute by attempting to reproduce published experiments and reporting their findings in a structured way. This would strengthen scientific reliability while providing valuable training and recognition. It would also help identify which results are robust, which depend on specific conditions and which need further clarification.

To keep these distributed contributions coherent, more experienced researchers could take on a stronger curatorial role. Rather than producing every component of a study themselves, they could synthesise and connect results across teams, identify patterns, resolve inconsistencies and assemble broader narratives. Complete scientific stories would still exist, but they would emerge from the integration of many precisely documented parts rather than being produced in isolation.

At a larger scale, this could become a Failure Knowledge Commons. In Europe, such a system could be built into Horizon Europe, requiring projects above a defined funding threshold to submit “closed pathway reports” when research fails to confirm its original hypothesis, becomes inconclusive or is discontinued after meaningful use of funds. The United States could build a similar system through its major federal funding agencies.

These reports could feed into a searchable platform covering participating countries, institutions and research areas. The platform could classify reports by field, method, cause of failure, cost, equipment, model system and stage of development. A researcher working in cancer biology, battery storage, AI safety or another field could quickly see which approaches had already been tested, why they failed and where improvement may still be possible.
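To make the idea of such a classification concrete, the sketch below shows one hypothetical way a closed pathway report could be structured as a record in a searchable platform. The field names, categories and the example entry are illustrative assumptions for this essay, not part of any existing funding agency's system.

```python
from dataclasses import dataclass, field
from enum import Enum


class Outcome(Enum):
    """Hypothetical categories for why a research pathway was closed."""
    HYPOTHESIS_NOT_CONFIRMED = "hypothesis_not_confirmed"
    INCONCLUSIVE = "inconclusive"
    DISCONTINUED = "discontinued"


@dataclass
class ClosedPathwayReport:
    """Illustrative record mirroring the classification axes discussed above:
    field, method, cause of failure, cost, equipment, model system and stage
    of development."""
    research_field: str           # e.g. "cancer biology", "battery storage"
    methods: list[str]            # techniques or protocols that were used
    outcome: Outcome              # why the pathway was closed
    cause_of_failure: str         # short structured explanation
    stage_of_development: str     # e.g. "pilot", "preclinical", "prototype"
    approximate_cost_eur: float   # resources spent before closing the pathway
    equipment: list[str] = field(default_factory=list)
    model_system: str | None = None
    lessons_for_future_work: str = ""


# Entirely fictional example, as it might be deposited at project close-out.
example = ClosedPathwayReport(
    research_field="battery storage",
    methods=["solid-state electrolyte synthesis", "impedance spectroscopy"],
    outcome=Outcome.DISCONTINUED,
    cause_of_failure="electrolyte degraded under repeated cycling",
    stage_of_development="prototype",
    approximate_cost_eur=250_000.0,
    equipment=["glovebox", "potentiostat"],
    lessons_for_future_work="Assess thermal stability of the electrolyte before scaling up.",
)
```

Structured fields of this kind, rather than free-text summaries alone, are what would let researchers and grant reviewers filter prior failures by field, method or cause.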

The system could also inform funding decisions. Grant reviewers could evaluate whether applicants had considered relevant prior failures before proposing new work. Over time, learning from unsuccessful attempts could become as normal as reviewing successful publications.

A first pilot could run for several years across selected areas such as health, energy and digital technologies. Its impact could be assessed by measuring whether repeated failed approaches become less common, whether grant proposals improve and whether resources are redirected sooner toward stronger ideas.

As the database grows, it could reveal broader structural problems. If many projects fail for similar reasons, this may point to common barriers such as limited access to specialist equipment, regulatory bottlenecks, poor coordination between institutions or gaps in technical expertise. The platform would therefore not only preserve knowledge from unsuccessful research, but also help diagnose why progress slows.

Scientific research today is increasingly expensive, specialised and difficult to coordinate. It cannot afford to keep paying separately for the same mistakes. By preserving failed and discontinued work, reforming publication practices and recognising rigorous contributions regardless of outcome, science could become more efficient, collaborative and transparent.

If the public funds exploration, it should inherit the full map: not only the routes that led to success, but also the paths that revealed where assumptions failed, where methods broke down and where better approaches might begin. These are not simply dead ends. They are the groundwork for the next iteration.