
Building trust in the integration of artificial intelligence into chemical risk assessment: findings from the 2024 ECETOC workshop

Timothy W. Gant*, Alistair Boxall, Daniel Burgwinkel, Maryam Zare Jeddi, Ivo Djidrovski, Steffi Friedrichs, Barry Hardy, Thomas Hartung, Daniela Holland, Andreas Karwath, Anne Kienhuis, Nicole Kleinstreuer, Zhoumeng Lin, Emma L. Marczylo, Antonino Marvuglia, Hua Qian, Bennard van Ravenzwaay, Paul Rees, Haralambos Sarimveis, Tewes Tralau, Lucy Wilmot, Adam Zalewski, David Rouquié

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Artificial Intelligence (AI) is increasingly influencing chemical risk assessment, enabling faster, more comprehensive, and potentially more ethical assessments. In chemical risk assessment, AI refers to both generative and predictive algorithms, including machine learning, used to analyse complex chemical, biological, and environmental data and to provide insights into the potential for adverse effects on humans and ecosystems. AI systems support the prediction of chemical hazards, exposure levels, and adverse effects by learning from experimental results, mechanistic models, and regulatory datasets, thereby enhancing the efficiency of safety evaluations.

In October 2024, ECETOC held an international workshop, with experts from academia, industry, and regulatory bodies, to reflect upon the historical challenges in integrating multidimensional omics technologies into chemical regulation and explore the current capabilities and future potential of AI in toxicology and regulatory science. Discussions emphasised that implementation of Findable, Accessible, Interoperable, and Reusable (FAIR) data principles is not just a best practice but rather a prerequisite for building transparent, reliable, and unbiased AI systems. The reliability of AI in producing scientifically valid and socially responsible outcomes depends fundamentally on the availability of FAIR data. However, ensuring trustworthiness also requires robust governance frameworks that go beyond data and human oversight. Critical enablers of responsible AI in chemical risk assessment are rigorous governance, explainability, fit-for-purpose applications, and human oversight. ECETOC supports the development of flexible and iterative frameworks advancing development, validation, transparency, accountability, and trust in AI applications in chemicals regulation.

Original language: English
Number of pages: 19
Journal: Archives of Toxicology
Early online date: 17 Feb 2026
DOIs
Publication status: E-pub ahead of print - 17 Feb 2026

Bibliographical note

Publisher Copyright:
© Crown 2026.

Keywords

  • AI
  • Explainable AI
  • FAIR data
  • Hazard and risk assessment
  • Toxicology
  • Trust

ASJC Scopus subject areas

  • Toxicology
  • Health, Toxicology and Mutagenesis

