
Robustness-enhanced cooperative adaptive cruise control for multi-task scenarios via generalised joint multi-agent reinforcement learning

  • Lu Dong
  • Xiaomeng Li
  • Xu He
  • Min Hua
  • Quan Zhou
  • Changyin Sun
  • Kun Jiang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Cooperative adaptive cruise control (CACC) leverages vehicle-to-vehicle communication to achieve tighter distance control and better formation maintenance, improving efficiency and safety. However, cross-task robustness and multi-objective decision-making remain challenging. This paper introduces a Multi-Agent Reinforcement Learning (MARL) framework tailored for multi-objective CACC in cross-task environments. The proposed approach employs a synergistic cognitive fusion and dynamic weight adaptation strategy to optimize the allocation of multiple driving objectives. By dynamically adjusting the relative importance of objectives such as safety, efficiency, and comfort, the framework adapts to varying driving scenarios. Simulation experiments demonstrate the method's effectiveness in enhancing overall system performance and driving safety. Furthermore, comparisons with real-world driving data underscore the approach's potential for practical application.
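As an illustration of the dynamic weight adaptation idea sketched in the abstract, a scalarised reward can combine per-objective terms with context-dependent weights. This is a minimal sketch under assumed names (`adaptive_weights`, `scalarised_reward`, the urgency signals); the paper's actual formulation is not reproduced here.

```python
import math

def adaptive_weights(urgency):
    """Softmax over per-objective urgency signals (safety, efficiency, comfort).

    Higher urgency yields a larger weight, so the scalarised reward shifts
    toward whichever objective the current driving scenario stresses most.
    """
    exp = {k: math.exp(v) for k, v in urgency.items()}
    total = sum(exp.values())
    return {k: v / total for k, v in exp.items()}

def scalarised_reward(rewards, urgency):
    """Weighted sum of per-objective rewards with dynamically adapted weights."""
    w = adaptive_weights(urgency)
    return sum(w[k] * rewards[k] for k in rewards)

# Example: a near-collision scenario raises safety urgency, so the safety
# term dominates the combined reward.
rewards = {"safety": -1.0, "efficiency": 0.5, "comfort": 0.2}
urgency = {"safety": 3.0, "efficiency": 0.5, "comfort": 0.0}
r = scalarised_reward(rewards, urgency)
```

In a MARL setting, each vehicle agent would evaluate such a scalarised reward per step, with the urgency signals derived from its local observation (e.g. inter-vehicle gap, relative speed).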

Original language: English
Article number: 132036
Number of pages: 11
Journal: Neurocomputing
Volume: 664
Early online date: 6 Nov 2025
DOIs
Publication status: Published - 1 Feb 2026

Bibliographical note

Publisher Copyright:
© 2025 Elsevier B.V.

Keywords

  • Cooperative adaptive cruise control
  • Intelligent transportation systems
  • Multi-agent reinforcement learning
  • Multi-objective optimization
  • Robustness control

ASJC Scopus subject areas

  • Computer Science Applications
  • Cognitive Neuroscience
  • Artificial Intelligence

