Generative Models as a Complex Systems Science: How can we make sense of large language model behavior?

Ari Holtzman, Peter West, and Luke Zettlemoyer

paper

Abstract

Coaxing out desired behaviors from pretrained models, while avoiding undesirable ones, has redefined NLP and is reshaping how we interact with computers. What was once a scientific engineering discipline—in which building blocks are stacked one on top of the other—is arguably already a complex systems science—in which *emergent behaviors* are sought out to support previously unimagined use cases.

Despite the ever-increasing number of benchmarks that measure task performance, we lack explanations of what behaviors language models exhibit that allow them to complete these tasks in the first place. We argue for a systematic effort to decompose language model behavior into categories that explain cross-task performance, to guide mechanistic explanations and help future-proof analytic research.

Citation

@article{holtzman2023generativemodels,
  title   = {Generative Models as a Complex Systems Science: How can we make sense of large language model behavior?},
  author  = {Holtzman, Ari and West, Peter and Zettlemoyer, Luke},
  year    = {2023},
  journal = {preprint}
}