Quip Among Us

A creative word-smithing party game for 3-8 players. A recreation of the Jackbox game Quiplash, with a twist: one of the answers each round is generated by AI. Adapted from quiplash-js.

Playable in the same room or remotely via screensharing.

demo preview

How to Play

  • Host starts a new game at https://quip-among-us-9549d28ae7de.herokuapp.com/create
    • If players are remote, share this screen with them via screensharing
  • Players join game at https://einarbalan.com/quipamongus (recommended to use a mobile device)
  • Host starts the game
  • Players receive two prompts to answer (Be as silly as possible)
  • Players then vote for their favorite answer to different prompts, as well as which answer they think is generated by AI
  • Whoever gets the most votes for their submitted answer is technically the winner, but the player who had the most fun is the true winner

Tech Stack

  • Frontend - React
  • Backend - Node.js, Express
  • Socket.IO

Extracting answers for the prompts

The Python script (state/generated-data/extract.py) generates funny responses for the collection of prompts stored in state/generated-data/Prompts.pg13.js using the Gemini 1.5 Flash model. The script instructs Gemini to respond to each prompt with a witty, humorous answer, then saves both prompts and responses to a CSV file for easy tracking and review.
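The flow above can be sketched roughly as follows. This is a minimal illustration, not the repository's actual code: the function names, the prompt wording, and the assumption that prompts arrive as a plain Python list (rather than being parsed out of Prompts.pg13.js) are all assumptions.

```python
"""Sketch of the extract.py flow: ask Gemini for witty answers, save to CSV.
Function names and prompt wording are illustrative assumptions."""
import csv


def save_responses(rows, path):
    """Write (prompt, response) pairs to a CSV file for tracking and review."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["prompt", "response"])
        writer.writerows(rows)


def generate_answers(prompts):
    """Ask Gemini 1.5 Flash for a short, witty answer to each prompt.
    Requires the google-generativeai package and a configured API key."""
    import google.generativeai as genai  # assumed dependency
    model = genai.GenerativeModel("gemini-1.5-flash")
    rows = []
    for prompt in prompts:
        reply = model.generate_content(
            f"Respond to this party-game prompt with a short, witty answer: {prompt}"
        )
        rows.append((prompt, reply.text.strip()))
    return rows


# Usage (requires network access and an API key):
#   save_responses(generate_answers(prompts), "responses.csv")
```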

How to Start Local Server

  • npm start

Note: the React web app must be built (npm run build) before serving; the Node server does not require a build step.

Testing on another device

  • Start local server
  • If you have a firewall, ensure it allows incoming connections to Node.js
  • Open a browser on your other device and go to http://HOST_IP:3001

Evaluation

  • ./Evaluation - Evaluation with BERTScore
  • CS263_Evaluation.ipynb
    • A Jupyter Notebook for running and visualizing evaluation processes interactively.
    • Use this file to explore the evaluation pipeline step by step and verify results.
  • evaluate_quiplash.py
    • Automates the evaluation of JSON files.
    • Saves results in the ./results/ directory.
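The evaluation script described above might look roughly like this. The JSON field names ("candidate"/"reference"), the directory layout, and the output format are assumptions for illustration, not the repository's actual schema.

```python
"""Sketch of an evaluate_quiplash.py-style flow: load answer pairs from
JSON reports and score them with BERTScore. Field names are assumptions."""
import json
from pathlib import Path


def load_pairs(report_dir):
    """Collect (candidate, reference) answer pairs from every JSON report."""
    pairs = []
    for path in sorted(Path(report_dir).glob("*.json")):
        for entry in json.loads(path.read_text(encoding="utf-8")):
            pairs.append((entry["candidate"], entry["reference"]))
    return pairs


def evaluate(report_dir, result_dir):
    """Score all pairs with BERTScore F1 and write the results out.
    Requires the bert-score package (downloads a model on first use)."""
    from bert_score import score  # assumed dependency
    cands, refs = zip(*load_pairs(report_dir))
    _, _, f1 = score(list(cands), list(refs), lang="en")
    out = Path(result_dir)
    out.mkdir(exist_ok=True)
    (out / "scores.json").write_text(json.dumps([float(v) for v in f1]))
```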

How to Run Evaluation

  1. Place your JSON files in the ./reports/ directory.
  2. Run the evaluation script:
    python evaluate_quiplash.py
  3. To compare BERTScore results across files and models, run CS263_Evaluation.ipynb.
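The per-file comparison in step 3 can be sketched as a simple aggregation. The score-file format here (one list of F1 values per evaluated file) and the file names are hypothetical, used only to show the idea.

```python
"""Sketch of comparing BERTScore results across evaluated files.
The input format and file names are illustrative assumptions."""
from statistics import mean


def summarize(scores_by_file):
    """Return the mean BERTScore F1 per evaluated file, highest first."""
    means = {name: mean(f1s) for name, f1s in scores_by_file.items()}
    return sorted(means.items(), key=lambda kv: kv[1], reverse=True)


# Example with two hypothetical result files:
ranking = summarize({
    "gemini_flash.json": [0.82, 0.78, 0.90],
    "baseline.json": [0.65, 0.70, 0.60],
})
```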

Directory Structure

  • build - Built web app
  • public - Template html from create-react-app
  • server - Node Server code
  • src - Front-end code
  • Evaluation - Evaluation with BERTScore