From 0148db22b637e4a95d70d6e9313db54f4a499007 Mon Sep 17 00:00:00 2001
From: Luca Beurer-Kellner
Date: Mon, 19 Feb 2024 16:42:36 +0100
Subject: [PATCH] add papers

---
 docs/research/index.md | 71 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 71 insertions(+)

diff --git a/docs/research/index.md b/docs/research/index.md
index 3dbbedf2..eab9f6bd 100644
--- a/docs/research/index.md
+++ b/docs/research/index.md
@@ -5,6 +5,76 @@ aside: false
The core publications around LMQL and its implementation.
+
+
+## Prompt Sketching for Large Language Models
+
+arXiv:2311.04954 [cs.CL]
+
+[Luca Beurer-Kellner](https://www.sri.inf.ethz.ch/people/luca), [Mark Niklas Müller](https://www.sri.inf.ethz.ch/people/mark), [Marc Fischer](https://www.sri.inf.ethz.ch/people/marc), [Martin Vechev](https://www.sri.inf.ethz.ch/people/martin)
+
+[**SRI**lab](https://www.sri.inf.ethz.ch) @ [ETH Zürich](https://ethz.ch), Switzerland
+
+[Read the full paper](https://arxiv.org/abs/2311.04954)
+
+Many recent prompting strategies for large language models (LLMs) query the model multiple times sequentially – first to produce intermediate results and then the final answer. However, using these methods, both decoder and model are unaware of potential follow-up prompts, leading to disconnected and undesirably wordy intermediate responses. In this work, we address this issue by proposing prompt sketching, a new prompting paradigm in which an LLM does not only respond by completing a prompt, but by predicting values for multiple variables in a template. This way, sketching grants users more control over the generation process, e.g., by providing a reasoning framework via intermediate instructions, leading to better overall results. The key idea enabling sketching with existing, autoregressive models is to adapt the decoding procedure to also score follow-up instructions during text generation, thus optimizing overall template likelihood during inference. Our experiments show that, in a zero-shot setting, prompt sketching outperforms existing, sequential prompting schemes such as direct asking or chain-of-thought on 7 out of 8 LLM benchmarking tasks, including state tracking, arithmetic reasoning, and general question answering. To facilitate future use, we release a number of generic, yet effective sketches applicable to many tasks, and an open-source library called dclib, powering our sketch-aware decoders.
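+
+To make the template idea concrete, the following is a minimal, illustrative LMQL-style sketch (the task, variable names, and model choice are ours, not taken from the paper). A sketch-aware decoder fills in all three variables while also scoring the fixed instructions between them:
+
+```lmql
+argmax
+    # one template with several holes; the decoder optimizes the
+    # likelihood of the full template, including the fixed text
+    # between the variables
+    "Q: If I have 3 apples and eat one, how many are left?\n"
+    "A: Let's think step by step.\n"
+    "First, [FIRST]\n"
+    "Therefore, [SECOND]\n"
+    "The final answer is [ANSWER]"
+from
+    "openai/text-davinci-003"
+where
+    STOPS_AT(FIRST, ".") and STOPS_AT(SECOND, ".") and STOPS_AT(ANSWER, ".")
+```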
+
+
+## Large Language Models are Zero-Shot Multi-Tool Users
+
+Knowledge and Logical Reasoning Workshop - ICML 2023, Honolulu, Hawaii
+
+[Luca Beurer-Kellner](https://www.sri.inf.ethz.ch/people/luca), [Marc Fischer](https://www.sri.inf.ethz.ch/people/marc), [Martin Vechev](https://www.sri.inf.ethz.ch/people/martin)
+
+[**SRI**lab](https://www.sri.inf.ethz.ch) @ [ETH Zürich](https://ethz.ch), Switzerland
+
+Read the full paper
+
+We introduce LMQL Actions, a framework and programming environment to facilitate the implementation of tool-augmented language models (LMs). Concretely, we augment LMs with the ability to call actions (arbitrary Python functions) and experiment with different ways of tool discovery and invocation. We find that, while previous works rely heavily on few-shot prompting to teach tool use, a zero-shot, instruction-only approach is enough to achieve competitive performance. At the same time, LMQL Actions' zero-shot approach also offers a much simpler programming interface that does not require any involved demonstrations. Building on this, we show how LMQL Actions enables LLMs to automatically discover and combine multiple tools to solve complex tasks. Overall, we find that inline tool use, as enabled by LMQL Actions, outperforms existing tool-augmentation approaches, both on arithmetic reasoning tasks and on text-based question answering.
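+
+As a rough illustration of the inline tool-use pattern (a sketch in plain LMQL with an ordinary Python expression standing in for an action; this is not the LMQL Actions API itself): the query stops at an intermediate variable, evaluates it in Python, and splices the result back into the ongoing generation:
+
+```lmql
+"Q: What is 37 * 12 + 5?\n"
+# the model writes the expression it wants evaluated, up to '='
+"A: I need to compute [EXPR]" where STOPS_AT(EXPR, "=")
+# an ordinary Python call acts as the 'tool'; its result is spliced
+# back into the generation before decoding continues
+result = eval(EXPR.rstrip("= "), {"__builtins__": {}})
+"{result}. So the answer is [ANSWER]" where STOPS_AT(ANSWER, "\n")
+```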
+
+
+## LMQL Chat: Scripted Chatbot Development
+
+Neural Conversational AI Workshop, TEACH - ICML 2023, Honolulu, Hawaii
+
+[Luca Beurer-Kellner](https://www.sri.inf.ethz.ch/people/luca), [Marc Fischer](https://www.sri.inf.ethz.ch/people/marc), [Martin Vechev](https://www.sri.inf.ethz.ch/people/martin)
+
+[**SRI**lab](https://www.sri.inf.ethz.ch) @ [ETH Zürich](https://ethz.ch), Switzerland
+
+Read the full paper
+
+We introduce LMQL Chat, a powerful open-source framework for building interactive systems on top of large language models, making it easy to create conversational agents with features like tool usage, internal reflection, or safety constraints.
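+
+A minimal scripted chat loop of the kind LMQL Chat targets might look as follows (illustrative only; the system prompt is ours, and the `{:role}` tags follow LMQL's chat syntax):
+
+```lmql
+argmax
+    # role tags mark system/user/assistant turns; the while loop
+    # scripts the conversation and can enforce constraints per turn
+    "{:system} You are a helpful assistant. Refuse unsafe requests."
+    while True:
+        "{:user} {await input()}"
+        "{:assistant} [ANSWER]"
+from
+    "chatgpt"
+```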
+
 ## Prompting Is Programming: A Query Language For Large Language Models
@@ -49,6 +119,7 @@ We show that LMQL can capture a wide range of state-of-the-art prompting methods
 .paper {
     position: relative;
     text-align: justify;
+    line-height: 1.0;
 }
 .paper p {
     margin: 10pt 0pt;