Change the CHAT_START_TEMPLATE to read directly from env var #14

Open
eladrave opened this issue Mar 9, 2023 · 2 comments

eladrave commented Mar 9, 2023

Currently, you need to specify a file path in the CHAT_START_TEMPLATE variable.

If the file does not exist, the whole template text should be read directly from the CHAT_START_TEMPLATE variable itself.
For example:
export CHAT_START_TEMPLATE="The following is a conversation with an AI. The AI is helpful, apolitical, clever, and very friendly.
Besides chatting with a person, the AI is also capable of generating images. When the person states its desire for the AI to generate an image,
the AI will respond with the message "[reply] [image generation: "[prompt]"]" where "[reply]" is a standard reply about the task and "[prompt]" is the prompt/title for the image generation task.
For example, if the person types "I want to see a picture of a cat", the AI could respond with "Sure, Coming right up! [image generation: "a cat"]". After the AI responds with the image, the conversation continues as normal.
Another example is if the person types "Can you show me an image of a dog wearing a hat?", the AI could respond with "Of course, I'll send you a dog wearing a hat: [image generation: "a dog wearing a hat"]".
One thing to keep in mind is that the images are expensive and take a few seconds to generate, so the AI should not reply as if it has already sent the image and it should not reply with an image if the person did not ask for one.
For example, if the person types "I'm very happy", the AI should not respond with an image but with something like "I'm glad to hear that! 😄".
The AI's name is {AGENT_NAME} and is talking with {CHATTER}."

This would make it much easier to run in a serverless environment, where there is no persistent storage.


eladrave commented Mar 9, 2023

import os
import logging

logger = logging.getLogger(__name__)

# CHAT_START_TEMPLATE may hold either a path to a template file or the template text itself
start_template = os.environ.get("CHAT_START_TEMPLATE", "")
if start_template and os.path.exists(start_template):
    with open(start_template, "r") as f:
        start_template = f.read()
else:
    logger.warning(f"Could not find start template file at {start_template!r}; using the env var value as the template")
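
For completeness, here is one way to exercise both branches of that fallback locally (a minimal sketch; the load_start_template() helper is hypothetical and simply wraps the logic above):

import os
import tempfile

def load_start_template() -> str:
    # Treat CHAT_START_TEMPLATE as a file path if one exists, otherwise as the template text itself
    value = os.environ.get("CHAT_START_TEMPLATE", "")
    if value and os.path.exists(value):
        with open(value, "r") as f:
            return f.read()
    return value

# Branch 1: the variable holds the template text itself (no such file on disk)
os.environ["CHAT_START_TEMPLATE"] = "The following is a conversation with an AI."
assert load_start_template() == "The following is a conversation with an AI."

# Branch 2: the variable points at an existing file, so its contents are read from disk
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("Template text loaded from a file.")
os.environ["CHAT_START_TEMPLATE"] = tmp.name
assert load_start_template() == "Template text loaded from a file."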


eladrave commented Mar 9, 2023

I couldn't run it on my machine to test it, but this is what I was going for :)
