
The future of fundamental science led by generative closed-loop artificial intelligence

Hector Zenil*, Jesper Tegnér*, Felipe S. Abrahão, Alexander Lavin, Vipin Kumar, Jeremy G. Frey, Adrian Weller, Larisa Soldatova, Alan R. Bundy, Nicholas R. Jennings, Koichi Takahashi, Lawrence Hunter, Saso Dzeroski, Andrew Briggs, Frederick D. Gregory, Carla P. Gomes, Jon Rowe, James Evans, Hiroaki Kitano, Ross King

*Corresponding author for this work

Research output: Contribution to journal › Review article › peer-review


Abstract

Artificial intelligence is approaching the point at which it can complete the scientific cycle, from hypothesis generation to experimental design and validation, within a closed loop that requires little human intervention. Yet the loop is not fully autonomous: humans still curate data, set hyperparameters, adjudicate interpretability, and decide what counts as a satisfactory explanation. As models scale, they begin to explore regions of hypothesis and solution space that are inaccessible to human reasoning because they are too intricate or alien to our intuitions. Scientists may soon rely on AI strategies they do not fully understand, trusting goals and empirical payoffs rather than derivations. This prospect forces a choice about how much control to relinquish to accelerate discovery while keeping outputs human-relevant. The answer cannot be a blanket policy to deploy LLMs or any single paradigm everywhere. It demands principled matching of methods to domains, hybrid causal and neurosymbolic scaffolds around generative models, and governance that preserves plurality and counters recursive bias. Otherwise, recursive training and uncritical reuse risk model collapse in AI and an epistemic collapse in science, as statistical inertia amplifies flaws and narrows the scope of investigation. We argue for graded autonomy in AI-conducted science: systems that can close the loop at machine speed while remaining anchored to human priorities, verifiable mechanisms, and domain-appropriate forms of understanding.
Original language: English
Article number: 1678539
Number of pages: 16
Journal: Frontiers in Artificial Intelligence
Volume: 9
DOIs
Publication status: Published - 11 Feb 2026

Keywords

  • human-machine collaboration
  • epistemic singularity
  • AI-conducted science
  • domain-method alignment
  • AI4Science
  • cognitive collapse
  • closed-loop discovery
  • graded autonomy
