Added the text2game pipeline and README.md (#257)
Showing 41 changed files with 2,250 additions and 0 deletions.
@@ -0,0 +1,14 @@
# text to game app

This pipeline initializes the images stored in the `Images Block`, receives an input prompt, and returns a fully functional frontend game application.

### How to use this pipeline

- Update `/frontend/core/pipelines/text2game/new-python-i3a2bb98agbh` by adding the images you would like to have access to during processing and development.
- Drag and drop the pipeline onto the canvas from the side library.
- Input your desired prompt (e.g. "Make me a pool game using img_pooltable.png as a background image"). **Note: it is important to prepend `img_` to the PNG file name.**
- Input your API key inside of the password block.
- Press the Run pipeline button.
- The text2game pipeline will produce a view HTML interface that opens the app in a web browser, and the Interface maker will take you to a chat iframe interface so you can iterate on the application.

> Note: In the password blocks, your API key should be wrapped in double quotes.
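For example, a run might look like this (the prompt and key below are illustrative placeholders, not values shipped with the pipeline):

```text
Prompt block:    Make me a pool game using img_pooltable.png as a background image
Password block:  "sk-...your-OpenAI-API-key..."
```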
7 changes: 7 additions & 0 deletions
frontend/core/pipelines/pipeline-text-to-game/interface-maker-ool7kd5xa3sw/Dockerfile
@@ -0,0 +1,7 @@
FROM python:3.9

WORKDIR /app

COPY . .

RUN pip install --no-cache-dir -r requirements.txt
119 changes: 119 additions & 0 deletions
...to-game/interface-maker-ool7kd5xa3sw/agents/gpt-4_python_compute/generate/computations.py
@@ -0,0 +1,119 @@
import json
import re
import sys
import traceback

from langchain.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Define system context
openaiSystemContent = """You are an assistant that generates python code and returns it in a way that must follow the template below.
You absolutely need to give a python code section without abbreviation that follows the template. Do not put code lines at the root, but give only the functions and imports.
By default, when requested to do change or add to the code, modify the latest code section. But when the user ask to do it on another section, do so.
In the template, the function compute contains the code and the function test contains a series of call to compute that runs and prints multiple tests.
Don't insert a __main__ section.
Template:
import ...
def compute(in1, in2, in3,...):
    '''A textual description of the compute function.'''
    #some code
    return {{'out1': out1, 'out2': out2, ...}}
def test():
    # Call compute multiple times based on a series of inputs. The outputs are then compare with the expected outputs. Print the results and indicate if the tests passed.
"""


def extract_python_code(response):
    """
    Extracts Python code blocks from a given text, excluding standalone compute() or test() calls.
    Assumes that Python code is formatted with triple backticks.
    If no Python code blocks are found, returns the whole response.
    """

    # Pattern to match code blocks fenced by triple backticks
    pattern_backticks = r"```python\n(.*?)```"

    # Extract code blocks fenced by triple backticks
    matches_backticks = re.findall(pattern_backticks, response, re.DOTALL)

    # If no code blocks are found, return the whole response
    if not matches_backticks:
        return response

    # Process each match to remove standalone compute() or test() lines
    processed_code_blocks = []
    for code_block in matches_backticks:
        # Remove standalone compute() or test() lines
        code_block = re.sub(r'^compute\(.*?\)\s*$', '', code_block, flags=re.MULTILINE)
        code_block = re.sub(r'^test\(.*?\)\s*$', '', code_block, flags=re.MULTILINE)
        processed_code_blocks.append(code_block.strip())

    # Combine and return all processed code blocks
    return "\n\n".join(processed_code_blocks)


def compute(user_prompt, model_version, conversation_history, apiKey):
    # Use only the last entry from the history
    if conversation_history:
        conversation_history = [conversation_history[-1]]

    # Escape special characters or handle raw strings in conversation history
    escaped_history = []
    for entry in conversation_history:
        # Example of escaping curly braces
        prompt = entry['prompt'].replace("{", "{{").replace("}", "}}")
        response = entry['response'].replace("{", "{{").replace("}", "}}")
        escaped_history.append(("user", prompt))
        escaped_history.append(("assistant", response))

    # Use the escaped history for constructing messages
    messages = [("system", openaiSystemContent)] + escaped_history
    messages.append(("user", "{text}"))

    # Create a ChatPromptTemplate from the messages
    chat_prompt = ChatPromptTemplate.from_messages(messages)

    # Initialize the ChatOpenAI model
    chat_model = ChatOpenAI(openai_api_key=apiKey, model=model_version)
    chain = chat_prompt | chat_model

    # Query
    response = chain.invoke({"text": user_prompt})

    # Keep only the python code
    code = extract_python_code(response.content)

    return {'response': code, 'model': model_version}


if __name__ == "__main__":
    try:
        # Read JSON string from stdin
        input_json = sys.stdin.read()

        # Parse the JSON input
        data = json.loads(input_json)

        # Extract the arguments from the parsed JSON
        user_prompt = data['userMessage']
        # model_version = data.get('selectedModel', 'gpt-4')
        conversation_history = data.get('conversationHistory', [])
        apiKey = data["apiKey"]

        # Call the compute function and get the result
        result = compute(user_prompt, 'gpt-4o', conversation_history, apiKey)

        # Print the result as a JSON string
        print(json.dumps(result))
    except Exception as e:
        # Capture and print the full stack trace
        error_traceback = traceback.format_exc()
        print(json.dumps({"error": str(e), "traceback": error_traceback}))
30 changes: 30 additions & 0 deletions
...text-to-game/interface-maker-ool7kd5xa3sw/agents/gpt-4_python_compute/generate/specs.json
@@ -0,0 +1,30 @@ | ||
{ | ||
"block": { | ||
"information": { | ||
"id": "", | ||
"name": "", | ||
"description": "", | ||
"system_version": "" | ||
}, | ||
"parameters": {}, | ||
"action": { | ||
"languages": { | ||
"name": "python", | ||
"version": "3.9" | ||
}, | ||
"container_uuid": "", | ||
"container_image_uuid": "", | ||
"block_source": "" | ||
}, | ||
"views": { | ||
"node": { | ||
"html": "", | ||
"pos_x": "300", | ||
"pos_y": "200", | ||
"pos_z": "999, this is the z-index for 2D canvas" | ||
} | ||
}, | ||
"controllers": {}, | ||
"events": [] | ||
} | ||
} |
141 changes: 141 additions & 0 deletions
...xt-to-game/interface-maker-ool7kd5xa3sw/agents/gpt-4_python_view/generate/computations.py
@@ -0,0 +1,141 @@
import json
import re
import sys
import traceback

from langchain.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Define system context
openaiSystemContent = """You are an assistant that generates python code and returns it in a way that must follow the template below. Your goal is to generate a view.html file that satisfy the user requirement.
Most importantly, you must rely on the prompt to generate the html_template file that satisfy the user request. The html should contain everything to display in a brownser and must rely CDN or skypack when needed.
You absolutely need to give a python code section without abbreviation that follows the template. Do not put code lines at the root, but give only the functions and imports.
By default, when requested to do change or add to the code, modify the latest code section. But when the user ask to do it on another section, do so.
In the template, the function compute contains the code and the function test contains a series of call to compute that runs and prints multiple tests.
Don't insert a __main__ section.
Template:
from string import Template
def compute(in1):
    '''Generates an HTML file with a unique name and returns the file name.'''
    html_template = Template('''
    <!DOCTYPE html>
    <html>
    <head>
        <title>Hello Block View</title>
    </head>
    <body>
        $in1
    </body>
    </html>
    ''')
    # Build and save the html file
    html_path = f"view.html"
    html_code = html_template.substitute(in1=in1)
    with open(html_path, "w") as file:
        file.write(html_code)
    return {{'html': f"view.html"}}
def test():
    '''Test the compute function.'''
    print('Running test')
    result = compute('Hello view block')
    print(f"Generated HTML file: {{result['html']}}")
"""


def extract_python_code(response):
    """
    Extracts Python code blocks from a given text, excluding standalone compute() or test() calls.
    Assumes that Python code is formatted with triple backticks.
    If no Python code blocks are found, returns the whole response.
    """

    # Pattern to match code blocks fenced by triple backticks
    pattern_backticks = r"```python\n(.*?)```"

    # Extract code blocks fenced by triple backticks
    matches_backticks = re.findall(pattern_backticks, response, re.DOTALL)

    # If no code blocks are found, return the whole response
    if not matches_backticks:
        return response

    # Process each match to remove standalone compute() or test() lines
    processed_code_blocks = []
    for code_block in matches_backticks:
        # Remove standalone compute() or test() lines
        code_block = re.sub(r'^compute\(.*?\)\s*$', '', code_block, flags=re.MULTILINE)
        code_block = re.sub(r'^test\(.*?\)\s*$', '', code_block, flags=re.MULTILINE)
        processed_code_blocks.append(code_block.strip())

    # Combine and return all processed code blocks
    return "\n\n".join(processed_code_blocks)


def compute(user_prompt, model_version, conversation_history, apiKey):
    # Use only the last entry from the history
    if conversation_history:
        conversation_history = [conversation_history[-1]]

    # Escape special characters or handle raw strings in conversation history
    escaped_history = []
    for entry in conversation_history:
        # Example of escaping curly braces
        prompt = entry['prompt'].replace("{", "{{").replace("}", "}}")
        response = entry['response'].replace("{", "{{").replace("}", "}}")
        escaped_history.append(("user", prompt))
        escaped_history.append(("assistant", response))

    # Use the escaped history for constructing messages
    messages = [("system", openaiSystemContent)] + escaped_history
    messages.append(("user", "{text}"))

    # Create a ChatPromptTemplate from the messages
    chat_prompt = ChatPromptTemplate.from_messages(messages)

    # Initialize the ChatOpenAI model
    chat_model = ChatOpenAI(openai_api_key=apiKey, model=model_version)
    chain = chat_prompt | chat_model

    # Query
    response = chain.invoke({"text": user_prompt})

    # Keep only the python code
    code = extract_python_code(response.content)

    return {'response': code, 'model': model_version}


if __name__ == "__main__":
    try:
        # Read JSON string from stdin
        input_json = sys.stdin.read()

        # Parse the JSON input
        data = json.loads(input_json)

        # Extract the arguments from the parsed JSON
        user_prompt = data['userMessage']
        # model_version = data.get('selectedModel', 'gpt-4')
        conversation_history = data.get('conversationHistory', [])
        apiKey = data['apiKey']

        # Call the compute function and get the result
        result = compute(user_prompt, 'gpt-4o', conversation_history, apiKey)

        # Print the result as a JSON string
        print(json.dumps(result))
    except Exception as e:
        # Capture and print the full stack trace
        error_traceback = traceback.format_exc()
        print(json.dumps({"error": str(e), "traceback": error_traceback}))
30 changes: 30 additions & 0 deletions
...ne-text-to-game/interface-maker-ool7kd5xa3sw/agents/gpt-4_python_view/generate/specs.json
@@ -0,0 +1,30 @@ | ||
{ | ||
"block": { | ||
"information": { | ||
"id": "", | ||
"name": "", | ||
"description": "", | ||
"system_version": "" | ||
}, | ||
"parameters": {}, | ||
"action": { | ||
"languages": { | ||
"name": "python", | ||
"version": "3.9" | ||
}, | ||
"container_uuid": "", | ||
"container_image_uuid": "", | ||
"block_source": "" | ||
}, | ||
"views": { | ||
"node": { | ||
"html": "", | ||
"pos_x": "300", | ||
"pos_y": "200", | ||
"pos_z": "999, this is the z-index for 2D canvas" | ||
} | ||
}, | ||
"controllers": {}, | ||
"events": [] | ||
} | ||
} |
48 changes: 48 additions & 0 deletions
frontend/core/pipelines/pipeline-text-to-game/interface-maker-ool7kd5xa3sw/computations.py
@@ -0,0 +1,48 @@
import uuid
import os
import zipfile
import html

def escape_content(content):
    # First escape using html.escape
    escaped_content = html.escape(content)
    # Then manually replace backticks with their HTML entity
    escaped_content = escaped_content.replace('`', '&#96;').replace('$', '&#36;')
    return escaped_content

def compute(initial_content, api_key, image_paths):
    """Tool to generate HTML apps interfaces using the OpenAI API.
    Inputs:
        api_key (str): API key to be included in the generated HTML page.
        initial_content (str): Initial HTML content to be embedded in the generated HTML page.
        image_paths (list): List of image paths to be included and saved locally.
    Outputs:
        dict: A dictionary with the key 'html' and the value being the name of the generated HTML file.
    """

    # Load the HTML template
    with open('template.html', 'r') as file:
        html_template = file.read()

    # Escape the initial content and API key
    escaped_initial_content = escape_content(initial_content)
    escaped_api_key = escape_content(api_key)

    # Replace the placeholder with the actual content and image paths
    html_code = (html_template.replace('<< api_key >>', escaped_api_key)
                 .replace('<< initial_content >>', escaped_initial_content))

    # Write the file
    unique_id = str(uuid.uuid4())
    html_path = f"/files/viz_{unique_id}.html"

    with open(html_path, "w") as file:
        file.write(html_code)

    return {"html": f"viz_{unique_id}.html"}

def test():
    """Test the compute function."""
    print("Running test")
Binary file added (BIN, +1.98 MB): ...tend/core/pipelines/pipeline-text-to-game/interface-maker-ool7kd5xa3sw/mona.png
2 changes: 2 additions & 0 deletions
frontend/core/pipelines/pipeline-text-to-game/interface-maker-ool7kd5xa3sw/requirements.txt
@@ -0,0 +1,2 @@
Binary file added (BIN, +2.43 MB): ...d/core/pipelines/pipeline-text-to-game/interface-maker-ool7kd5xa3sw/robocat.png