Do Not Design, Learn: A Trainable Scoring Function for Uncertainty Estimation in Generative LLMs

Duygu Nur Yaldiz, Yavuz Faruk Bakman, Baturalp Buyukates, Chenyang Tao, Anil Ramakrishna, Dimitrios Dimitriadis, Salman Avestimehr

Research output: Working paper/Preprint


Abstract

In this work, we introduce the Learnable Response Scoring Function (LARS) for Uncertainty Estimation (UE) in generative Large Language Models (LLMs). Current scoring functions for probability-based UE, such as length-normalized scoring and semantic contribution-based weighting, are designed to address specific aspects of the problem but exhibit limitations, including the inability to handle biased probabilities and underperformance in low-resource languages such as Turkish. To address these issues, we propose LARS, a scoring function that leverages supervised data to capture complex dependencies between tokens and probabilities, thereby producing more reliable and better-calibrated response scores when computing the uncertainty of generations. Our extensive experiments across multiple datasets show that LARS substantially outperforms existing scoring functions across various probability-based UE methods.
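The abstract contrasts fixed scoring functions (e.g., length-normalized log-probability) with a scorer learned from supervised data. The snippet below is a minimal, hypothetical sketch of that contrast, assuming per-token log-probabilities of a generated response are available; the feature design, toy labels, and function names are illustrative assumptions and do not reproduce the authors' LARS implementation.

```python
# Hypothetical sketch: a fixed length-normalized score vs. a small trainable
# scorer fitted on supervised correctness labels. Illustrative only; not the
# authors' LARS model or training setup.
import numpy as np
from sklearn.linear_model import LogisticRegression

def length_normalized_score(token_logprobs):
    """Fixed scoring: mean token log-probability (higher = more confident)."""
    return float(np.mean(token_logprobs))

def probability_features(token_logprobs, n_bins=8):
    """Illustrative fixed-size features: histogram of token probabilities."""
    probs = np.exp(np.asarray(token_logprobs))
    hist, _ = np.histogram(probs, bins=n_bins, range=(0.0, 1.0))
    return hist / max(len(probs), 1)

# Toy supervised data: token log-probs per response plus stand-in labels.
rng = np.random.default_rng(0)
responses = [rng.uniform(-3.0, 0.0, size=rng.integers(5, 20)) for _ in range(200)]
labels = [int(np.mean(lp) > -1.2) for lp in responses]

X = np.stack([probability_features(lp) for lp in responses])
learned_scorer = LogisticRegression(max_iter=1000).fit(X, labels)

# Score a new response with both approaches.
new_response = rng.uniform(-3.0, 0.0, size=12)
print("length-normalized:", length_normalized_score(new_response))
print("learned score:",
      learned_scorer.predict_proba(probability_features(new_response)[None, :])[0, 1])
```

The intent of the sketch is only to show where a learned mapping replaces a hand-designed formula; the paper's actual scorer operates on richer token- and probability-level inputs than this toy feature histogram.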
Original language: English
Publisher: arXiv
Pages: 1-16
Number of pages: 16
DOIs
Publication status: Published - 17 Jun 2024

Keywords

  • cs.CL
