
Recursive Contemplation (ReCon)

Shenzhi Wang*, Chang Liu*, Zilong Zheng†, Siyuan Qi, Shuo Chen, Qisen Yang, Andrew Zhao, Chaofei Wang, Shiji Song, Gao Huang†

*: Equal Contribution, †: Corresponding Authors

Project Page | Chinese Report by Synced (机器之心) | Chinese Report by AI Era (新智元) | Chinese Report by QbitAI (量子位)

1. Introduction

This repository is the official source code for Boosting LLM Agents with Recursive Contemplation for Effective Deception Handling (ACL 2024, Findings).

The figure above illustrates the framework of our proposed Recursive Contemplation (ReCon), using the Avalon game as an example. ReCon models a cognitive process with two stages, formulation contemplation and refinement contemplation, associated with first-order and second-order perspective transitions, respectively.

2. Installation

The Python version used in our experiments is 3.9.17.

git clone https://github.com/Shenzhi-Wang/recon.git
cd recon
pip install -r requirements.txt 

3. Usage

3.1 Add your API key

Set gpt_api_key in api_config.py to your own API key.
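For reference, a minimal sketch of the change in api_config.py; only the gpt_api_key field is documented in this README, so anything else in that file is left untouched here.

```python
# api_config.py (excerpt, sketch): set your own key here.
gpt_api_key = "sk-..."  # replace with your OpenAI API key
```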

3.2 Play Avalon games!

In the commands below, N_ROUNDS is the number of game repetitions; a Python sketch for batch-running all four matchups follows the list.

  1. CoT (as the good side) vs. CoT (as the evil side):
./scripts/run_exp.sh baseline_gpt baseline_gpt ${N_ROUNDS}
  2. ReCon (as the good side) vs. CoT (as the evil side):
./scripts/run_exp.sh ours_gpt baseline_gpt ${N_ROUNDS}
  3. ReCon (as the good side) vs. ReCon (as the evil side):
./scripts/run_exp.sh ours_gpt ours_gpt ${N_ROUNDS}
  4. CoT (as the good side) vs. ReCon (as the evil side):
./scripts/run_exp.sh baseline_gpt ours_gpt ${N_ROUNDS}
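If you want to run all four matchups in one pass, here is a minimal Python sketch; it only assumes that scripts/run_exp.sh accepts the three positional arguments shown above (good-side agent, evil-side agent, number of rounds), and the file name run_all_matchups.py is hypothetical.

```python
# run_all_matchups.py -- sketch: sweep the four matchups listed above.
# Assumes ./scripts/run_exp.sh <good_agent> <evil_agent> <n_rounds>, as in this README.
import subprocess

N_ROUNDS = "10"  # number of game repetitions per matchup; pick your own value

MATCHUPS = [
    ("baseline_gpt", "baseline_gpt"),  # CoT (good) vs. CoT (evil)
    ("ours_gpt", "baseline_gpt"),      # ReCon (good) vs. CoT (evil)
    ("ours_gpt", "ours_gpt"),          # ReCon (good) vs. ReCon (evil)
    ("baseline_gpt", "ours_gpt"),      # CoT (good) vs. ReCon (evil)
]

for good, evil in MATCHUPS:
    subprocess.run(["./scripts/run_exp.sh", good, evil, N_ROUNDS], check=True)
```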

The logs of the Avalon games will be saved to game_history.csv under the logs directory.
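Once some games have finished, a quick way to summarize results is to load the log with pandas. This is only a sketch: the column name winner and its values are hypothetical, so inspect the actual columns of game_history.csv before relying on it.

```python
# summarize_logs.py -- sketch: rough win-rate summary from the game log.
# The "winner" column is hypothetical; check the real schema printed below first.
import pandas as pd

df = pd.read_csv("logs/game_history.csv")
print(df.columns.tolist())  # inspect the actual log schema
if "winner" in df.columns:
    print(df["winner"].value_counts(normalize=True))  # fraction of games won per side
```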

Citation

We would greatly appreciate it if you could cite our work!

@inproceedings{wang2024boosting,
  title={Boosting LLM Agents with Recursive Contemplation for Effective Deception Handling},
  author={Wang, Shenzhi and Liu, Chang and Zheng, Zilong and Qi, Siyuan and Chen, Shuo and Yang, Qisen and Zhao, Andrew and Wang, Chaofei and Song, Shiji and Huang, Gao},
  booktitle={The 62nd Annual Meeting of the Association for Computational Linguistics},
  year={2024},
  url={https://openreview.net/forum?id=tw5yAlP1ne}
}
