What Gets Lost When Science Is Optimised

Scientific discovery has rarely come from clean results. As AI and optimisation increasingly shape how science is done, it is worth asking whether we are accelerating progress within known frames while quietly losing the conditions that once allowed the unexpected to matter.

When discovery becomes optimisation

Many scientific discoveries emerge not from the successful execution of a predefined plan, but from deviations that occur when a system behaves differently than expected. Historically, such deviations were often preserved long enough to be interrogated. In some cases, they became the starting point for entirely new lines of understanding.

Contemporary scientific discovery is increasingly mediated by automated and AI-driven pipelines. Hypotheses can be generated algorithmically, experiments executed autonomously, and outcomes evaluated against predefined criteria. This approach has clear advantages. It enables efficiency, reproducibility, and exploration at a scale that would be impractical by human effort alone.

However, this shift also changes the conditions under which discovery takes place. It is therefore worth asking not only what these systems enable, but what they implicitly exclude.

Known unknowns and unknown unknowns

Most automated discovery platforms are designed to explore known unknowns. They operate within hypothesis spaces that are bounded in advance, guided by representations, metrics, and objective functions that specify what counts as progress. Whether the goal is to optimise a molecular sequence, identify a new material with target properties, or refine a biological pathway, exploration proceeds within a framework that is already considered meaningful. This is exploration, but it is exploration within a constrained space.
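
To make the point concrete, a deliberately simplified sketch of such a bounded search is given below. The candidate representation, the objective function, and the selection step are placeholders rather than any particular platform's interface; what matters is that every candidate is drawn from a space fixed in advance and judged only against a criterion specified in advance.

```python
# A minimal, hypothetical sketch of optimisation within a bounded hypothesis
# space: candidates come from a predefined representation, and the only
# judgement applied to them is a fixed objective function.
import random

search_space = [
    {"id": f"candidate_{i}", "params": [random.uniform(0.0, 1.0) for _ in range(3)]}
    for i in range(1000)
]  # the "known unknowns": everything the system can even consider

def objective(candidate: dict) -> float:
    """What counts as progress, decided before the search begins."""
    return sum(candidate["params"])  # stand-in for a target property score

best = max(search_space, key=objective)  # exploration, but only inside the frame
print(best["id"], round(objective(best), 3))
```

Nothing in this loop can return a result that lies outside the representation it was handed; the most it can do is rank the points it was allowed to see.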

Unknown unknowns, by contrast, are often not points within the space at all, but signs that the space is missing a dimension. Historically, such phenomena frequently appeared as artefacts, contamination, or experimental failure, and their significance was often recognised only after sustained exposure and retrospective analysis.

In modern pipelines, these same features are commonly suppressed. Noise is filtered, outliers are flagged, and deviations from expectation are corrected or discarded. In doing so, systems become more robust and more efficient, but also more selective in what they allow to persist.
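
A typical cleaning step makes this selectivity visible. The sketch below is illustrative rather than drawn from any specific pipeline: readings that deviate too far from the mean are treated as noise and silently dropped before any downstream analysis sees them.

```python
# A hedged illustration of routine outlier removal: deviations beyond a fixed
# z-score threshold are discarded before analysis. The data and threshold are
# invented for the example.
import statistics

def clean(readings, z_threshold=2.0):
    mean = statistics.fmean(readings)
    stdev = statistics.pstdev(readings)
    kept = []
    for value in readings:
        z = abs(value - mean) / stdev if stdev else 0.0
        if z <= z_threshold:
            kept.append(value)
        # else: the deviation is gone before anyone can ask why it happened
    return kept

readings = [9.8, 10.1, 9.9, 10.0, 42.7, 10.2]  # one reading misbehaves
print(clean(readings))  # -> [9.8, 10.1, 9.9, 10.0, 10.2]
```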

Bias is not removed, it is encoded

Importantly, automated discovery systems do not operate neutrally. They necessarily reflect the assumptions and priorities of their designers. Choices about what to measure, what to optimise, and what to ignore encode beliefs about relevance, plausibility, and value. Training data further reflect historical patterns of attention, including both what has been studied extensively and what has been neglected.

As a result, automated discovery does not simply explore nature. It explores nature as already framed by existing knowledge.

In this sense, AI does not eliminate human bias from discovery. It stabilises and formalises it. Unknown unknowns are not merely difficult to optimise for. They are often excluded at the level of representation, before any search begins.
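
A small, hypothetical illustration of exclusion at the level of representation: the record schema below decides, before any search runs, which aspects of an experiment exist for the system at all. The field names are placeholders.

```python
# Hypothetical experiment schema: only the listed fields can be stored,
# searched over, or optimised for.
from dataclasses import dataclass

@dataclass
class ExperimentRecord:
    temperature_c: float
    concentration_mm: float
    measured_yield: float
    # An odd precipitate, a drifting baseline, a procedural deviation:
    # none of these can be recorded here, so none of them can ever become
    # a result. They are excluded before the first candidate is scored.
```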

The quiet arrogance of optimisation

Beneath optimisation-driven discovery lies an implicit epistemic assumption: that we already understand the system well enough to define the problem correctly. By specifying objective functions and success criteria in advance, we behave as if the relevant variables are known and the meaningful behaviours are recognisable.

This assumption is rarely justified in early or poorly understood domains. A substantial fraction of scientific progress has arisen from the realisation that the initial framing of a problem was incomplete or incorrect. In such cases, optimisation does not reveal new understanding. It reinforces an inadequate model. In contrast, discovery often begins by exposing the limits of that understanding.

Why scale does not solve the problem

It is tempting to assume that larger models and more data will eventually surface the unexpected. However, increasing scale changes resolution, not perspective. Larger systems can explore parameter spaces more thoroughly, but they do not reveal when the underlying representation is wrong.

Optimisation amplifies what is already legible within a given framework. It does not generate new forms of legibility. Phenomena that fall outside the representational assumptions of the system remain invisible, regardless of scale.

Many transformative discoveries did not emerge from exhaustive searches within established spaces. They emerged because anomalous behaviour was allowed to persist long enough to be examined.

Automation and the disappearance of mess

Autonomous and AI-driven laboratories are typically engineered to be clean, reproducible, safe, and efficient. These are sensible and often necessary goals. However, they are also anti-correlated with certain forms of discovery. Systems optimised for throughput converge quickly. Systems optimised for convergence suppress drift. Systems optimised to eliminate error rarely tolerate procedural or environmental failure.

In contexts where the objective is clear, this is desirable. But when experiments are tightly controlled and evaluated exclusively against predefined criteria, there is little opportunity for unexpected behaviour to acquire significance; optimisation is then easily mistaken for discovery, and progress may accelerate even as understanding stagnates.

The shrinking role of noticing

Discovery has never consisted solely of data generation. It has also depended on noticing when something unexpected has occurred and deciding that it might matter. Humans do more than notice anomalies; they supply new concepts when the old ones fail. Automated systems, by contrast, excel at consistency. They do not register surprise unless surprise has been explicitly encoded as a metric. As more exploratory work is delegated to machines, the human role shifts from observer to supervisor, reducing opportunities for delayed recognition and reinterpretation.
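
What "encoding surprise as a metric" might look like is sketched below; the names and threshold are hypothetical. The pipeline can register the unexpected only along dimensions it already predicts, so anything the model does not represent produces no signal at all.

```python
# A hypothetical surprise metric: normalised prediction error. Zero means
# "exactly as expected"; large values trigger human review.
def surprise_score(predicted: float, observed: float, expected_error: float) -> float:
    return abs(observed - predicted) / expected_error

result = {"predicted_yield": 0.72, "observed_yield": 0.31, "expected_error": 0.05}
score = surprise_score(result["predicted_yield"],
                       result["observed_yield"],
                       result["expected_error"])
if score > 3.0:
    print(f"flag for human review (surprise = {score:.1f})")
# A colour change, a smell, an unlogged instrument glitch: none of these
# enter the calculation, because none of them were ever represented.
```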

A question of balance, not rejection

This is not an argument against automation or AI in science. These tools are indispensable for many classes of problems and have already transformed what is experimentally accessible. The concern arises when optimisation becomes the dominant mode of exploration, and when efficiency is conflated with understanding. In this shift, interpolation within known spaces increasingly stands in for discovery itself.

Scientific progress depends on a balance between control and exposure, but current incentives systematically favour control. Control enables reproducibility and application. Exposure allows systems to behave in ways that were not anticipated. If discovery pipelines are optimised exclusively for control, they may become increasingly effective while simultaneously narrowing the range of phenomena that can be observed.

Making room for the unexpected

An alternative is not a rejection of automation, but a plurality of discovery modes. Alongside highly targeted, task-oriented platforms, there may be value in deliberately under-optimised systems. Experiments allowed to age, drift, and degrade. Pipelines that preserve anomalies rather than deleting them. Human involvement focused less on micromanagement and more on noticing and contextualising unexpected outcomes.
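
As one concrete gesture in this direction, the earlier cleaning step could quarantine deviations instead of deleting them. The sketch below is again hypothetical (the log format and file name are placeholders): anomalous readings are preserved with enough context to be re-examined later, rather than silently removed.

```python
# A minimal sketch of an anomaly-preserving cleaning step: outliers are
# written to a log with context instead of being discarded.
import json
import statistics
import time

def clean_but_keep(readings, z_threshold=2.0, log_path="anomalies.jsonl"):
    mean = statistics.fmean(readings)
    stdev = statistics.pstdev(readings)
    kept = []
    with open(log_path, "a") as log:
        for value in readings:
            z = abs(value - mean) / stdev if stdev else 0.0
            if z <= z_threshold:
                kept.append(value)
            else:
                # Preserved, not deleted: the deviation stays available
                # for delayed recognition and reinterpretation.
                log.write(json.dumps({"value": value,
                                      "z_score": round(z, 2),
                                      "recorded_at": time.time()}) + "\n")
    return kept
```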

Such systems may appear inefficient by conventional metrics. Their value often becomes apparent only retrospectively. Historically, however, these conditions have played a disproportionate role in generating new categories of understanding.

Looking forward

As AI-driven discovery systems continue to mature, the relevant question may not be how efficiently they can explore predefined spaces, but whether scientific practice remains capable of recognising when those spaces are no longer sufficient. In practice, most AI systems do not explore the unknown. They interpolate aggressively within what is already known, formalised, and measurable. More provocatively, this raises the possibility that AI is not narrowing discovery so much as amplifying an academic system that had already begun to reward interpolation within accepted frames. Progress may accelerate, but the space of what can be discovered may quietly contract.
