Abstract
This study explores the use of Large Language Models (LLMs) to interpret and predict experimental outcomes from given experimental variables, leveraging their human-like reasoning and inference capabilities, with the selective catalytic reduction (SCR) of NOx with NH3 as a case study. We apply the Chain-of-Thought (CoT) concept to formulate logical steps for uncovering connections within the data, introducing an "Ordered-and-Structured" CoT (OSCoT) prompting strategy. We compare OSCoT with the more conventional "One-Pot" CoT (OPCoT) approach and with human experts. We demonstrate that GPT-4, equipped with the OSCoT prompting strategy, outperforms the other two settings, accurately predicting experimental outcomes and providing intuitive reasoning for its predictions.
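The contrast the abstract draws between the two prompting styles can be sketched as prompt construction: OPCoT asks for free-form step-by-step reasoning in one shot, while OSCoT imposes an explicit ordered sequence of reasoning steps. The step texts and catalyst variables below are hypothetical illustrations, not the authors' actual prompts.

```python
# Hedged sketch of the two prompting styles described in the abstract.
# All step texts and experimental variables are illustrative placeholders.

def one_pot_prompt(variables: dict) -> str:
    """OPCoT-style: a single free-form 'think step by step' request."""
    desc = ", ".join(f"{k}={v}" for k, v in variables.items())
    return (
        f"Given the NH3-SCR experiment with {desc}, "
        "think step by step and predict the NOx conversion."
    )

def ordered_structured_prompt(variables: dict, steps: list[str]) -> str:
    """OSCoT-style: an explicit, ordered sequence of reasoning steps."""
    desc = ", ".join(f"{k}={v}" for k, v in variables.items())
    numbered = "\n".join(f"Step {i}: {s}" for i, s in enumerate(steps, 1))
    return (
        f"Given the NH3-SCR experiment with {desc}, answer by following "
        f"these steps in order:\n{numbered}\n"
        "Then state the predicted NOx conversion."
    )

# Illustrative usage with placeholder experimental variables.
variables = {"catalyst": "Cu-SSZ-13", "temperature_C": 250}
steps = [
    "Identify the active metal and support.",
    "Assess how temperature affects SCR activity.",
    "Compare against similar reported conditions.",
]
print(ordered_structured_prompt(variables, steps))
```

The design point is that OSCoT shifts the burden of decomposing the problem from the model to the prompt author, which is what lets the structured variant elicit more reliable reasoning than the one-pot request.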
| Original language | English |
|---|---|
| Article number | 109451 |
| Journal | iScience |
| Early online date | 7 Mar 2024 |
| DOIs | |
| Publication status | E-pub ahead of print - 7 Mar 2024 |