In the project all-persons-fictitious, we explore what it would be like to have conversations with entities that respond in music. In our installation, you communicate with an artificial intelligence that, rather than replying in words, responds in sound. As you keep talking, the music changes over time. The pipeline works as follows:
- You write something you want to tell the chatbot
- The chatbot takes that input and thinks of a response
- The chatbot’s response is analyzed for sentiment, that is, its emotional and affective content
- The sentiment data is sent to our synthesizer, which turns it into music (a sketch of this step follows the list)
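As a rough illustration of the last two steps, here is a minimal Python sketch. It assumes NLTK's VADER for sentiment scoring and the python-osc package for the OSC link; the actual chatbot.py may use different libraries, and the OSC address `/sentiment` and port `9001` are placeholders to be matched to the .pd file.

```python
# Minimal sketch of the sentiment -> OSC leg of the pipeline.
# Assumptions (not necessarily what chatbot.py does): NLTK's VADER
# for sentiment (run nltk.download('vader_lexicon') once first),
# python-osc for transport, and a Pd patch listening on UDP port
# 9001 for messages at the address /sentiment.
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from pythonosc.udp_client import SimpleUDPClient

analyzer = SentimentIntensityAnalyzer()
client = SimpleUDPClient("127.0.0.1", 9001)  # hypothetical port

def send_sentiment(reply: str) -> None:
    """Score the chatbot's reply and forward the scores to Pure Data."""
    scores = analyzer.polarity_scores(reply)  # keys: neg, neu, pos, compound
    client.send_message(
        "/sentiment",  # hypothetical OSC address
        [scores["pos"], scores["neg"], scores["neu"], scores["compound"]],
    )

send_sentiment("I am so glad you came to talk to me!")
```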
In doing this, we are building on the work of several different disciplines. Primarily, we rely on the development of chatbots within artificial intelligence, sentiment analysis within natural language processing, sonification of data, as well as the open-source community around the visual programming environment Pure Data.
The chatbot.py file is the main script. Run it and communicate with the chatbot in the terminal while the .pd file is open in Pure Data (https://puredata.info/). The chatbot and synth communicate via Open Sound Control (OSC). To use OSC in Pd, you need the mrpeach library (install it via "Help" > "Find externals" and search for "mrpeach"). To hear sound in Pd, remember to turn on DSP ("Media" > "DSP On").
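To check that the OSC link works before running the full chatbot, you can send a single test message from Python. This sketch assumes the python-osc package and that the patch receives on UDP port 9001; the port and the `/sentiment` address are assumptions, so adjust them to whatever the .pd file actually uses.

```python
# One-off smoke test for the Pd side: with the .pd file open, DSP on,
# and mrpeach's [udpreceive]/[unpackOSC] objects in place, this should
# make the patch react. Port 9001 and /sentiment are assumptions.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9001)
client.send_message("/sentiment", [0.8, 0.1, 0.1, 0.7])
print("Sent test OSC message to Pd on port 9001")
```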