GPT-4-powered analysis and prediction of selective catalytic reduction experiments through an effective chain-of-thought prompting strategy

Muyu Lu, Fengyu Gao, Xiaolong Tang*, Linjiang Chen*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

This study explores the use of Large Language Models (LLMs) to interpret and predict experimental outcomes from given experimental variables, leveraging their human-like reasoning and inference capabilities, with the selective catalytic reduction of NOx with NH3 as a case study. We implement the Chain of Thought (CoT) concept to formulate logical steps for uncovering connections within the data, introducing an "Ordered-and-Structured" CoT (OSCoT) prompting strategy. We compare the OSCoT strategy with the more conventional "One-Pot" CoT (OPCoT) approach and with human experts. We demonstrate that GPT-4, equipped with the new OSCoT prompting strategy, outperforms the other two settings, accurately predicting experimental outcomes and providing intuitive reasoning for its predictions.
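The paper's actual prompt templates are not reproduced on this page. The sketch below is a hypothetical Python illustration of how an "Ordered-and-Structured" CoT prompt (an explicit, numbered sequence of reasoning steps, each tied to a named experimental variable) might differ from a "One-Pot" CoT prompt that asks for all reasoning at once. The variable names, step wording, and example catalyst record are assumptions for illustration only, not the prompts or data used in the study.

```python
# Hypothetical illustration of OSCoT vs. OPCoT prompt construction.
# The step wording, variable names, and example record are assumptions;
# they are not taken from the paper.

EXPERIMENT = {
    "catalyst": "Mn-Ce/TiO2",      # hypothetical catalyst composition
    "NH3/NOx ratio": 1.0,
    "temperature (C)": 200,
    "GHSV (h^-1)": 60000,
}

def one_pot_cot_prompt(record: dict) -> str:
    """OPCoT-style: present all variables and ask for reasoning in one shot."""
    variables = "\n".join(f"- {k}: {v}" for k, v in record.items())
    return (
        "You are given an NH3-SCR experiment:\n"
        f"{variables}\n"
        "Think step by step and predict the NOx conversion, "
        "explaining your reasoning."
    )

def ordered_structured_cot_prompt(record: dict) -> str:
    """OSCoT-style: impose an explicit, ordered sequence of reasoning steps,
    each addressing a specific experimental variable before the prediction."""
    variables = "\n".join(f"- {k}: {v}" for k, v in record.items())
    steps = [
        "Step 1: Identify the active components of the catalyst and their typical SCR behaviour.",
        "Step 2: Assess how the reaction temperature relates to the catalyst's expected activity window.",
        "Step 3: Consider the NH3/NOx ratio and whether the reductant is limiting.",
        "Step 4: Account for the space velocity (GHSV) and its effect on contact time.",
        "Step 5: Combine Steps 1-4 into a prediction of NOx conversion, stated as a percentage range.",
    ]
    return (
        "You are given an NH3-SCR experiment:\n"
        f"{variables}\n"
        "Answer by following these steps in order, labelling each one:\n"
        + "\n".join(steps)
    )

if __name__ == "__main__":
    print(one_pot_cot_prompt(EXPERIMENT))
    print()
    print(ordered_structured_cot_prompt(EXPERIMENT))
```

The intended contrast is that the one-pot prompt leaves the model to choose its own reasoning order, whereas the ordered-and-structured prompt constrains both the sequence and the scope of each reasoning step before a prediction is made.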
Original language: English
Article number: 109451
Journal: iScience
Early online date: 7 Mar 2024
DOIs
Publication status: E-pub ahead of print - 7 Mar 2024
