Add mech tool for TEE AI agent #250
Conversation
packages/valory/customs/tee_openai_request/tee_openai_request.py
engine = tool.replace(PREFIX, "")

params = {
    "openaiApiKey": api_key,
This is quite risky, and would mean that we expose the agent's OpenAI API key to an external service.
Any way we can avoid this?
It is safe, and here is why:
- The "external service" here is the AI agent contract. The execution of the contract is verifiable and guaranteed by the consensus system of the Phala blockchain, much like a smart contract view call: we trust the behavior by trusting the code. Check the docs if you are interested in how it works.
- The execution happens inside a TEE, in other words, in an isolated area of the CPU where nobody (including the host OS, hypervisor, etc.) can see the context. The key stays private and is known only to the AI agent contract.
packages/valory/customs/tee_openai_request/tee_openai_request.py
@tolak, to resolve the failing checks, can you run:
Proposed changes
This PR contains the 1st phase of integrating a trusted execution environment (TEE) AI agent into the Valory mech hub. A TEE AI agent is a program running inside a TEE (more specifically, Intel SGX) that provides verifiable AI computation back and forth. Here is a diagram of the workflow:
This PR implements a custom tool that forwards OpenAI requests to the Phala TEE AI agent contract, which in turn forwards the request to OpenAI to get the LLM response. The workflow can be described as follows: the request is sent to the TEE AI agent contract (
https://wapo-testnet.phala.network/ipfs/QmeUiNKgsHiAK3WM57XYd7ssqMwVNbcGwtm8gKLD2pVXiP
) through an HTTP request. [TODO in 2nd phase integration] The tool gives users the option to access the OpenAI service through a TEE, which brings verifiability to the mech ecosystem. However, we still rely on the service provided by OpenAI, and user data (such as prompts) must be exposed to them, so this does not yet give users full privacy. To solve this, we will provide a GPU TEE network in the near future to host LLM models; at that point, the user request can be encrypted and sent to the TEE, and the encrypted response decrypted with the user's account key.
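The forwarding step described above can be sketched as follows. This is a hedged illustration, not the PR's actual implementation: only the endpoint URL comes from this description, while the request body format, header choices, and function names are assumptions.

```python
# Sketch of forwarding an OpenAI-style request to the Phala TEE AI agent
# contract endpoint. The JSON body shape is an assumption; the URL is the
# testnet endpoint given in the PR description.
import json
import urllib.request

TEE_AGENT_URL = (
    "https://wapo-testnet.phala.network/ipfs/"
    "QmeUiNKgsHiAK3WM57XYd7ssqMwVNbcGwtm8gKLD2pVXiP"
)


def build_tee_request(params: dict) -> urllib.request.Request:
    # JSON-encode the params (including the OpenAI API key) for the
    # agent contract, which executes inside SGX and relays to OpenAI.
    return urllib.request.Request(
        TEE_AGENT_URL,
        data=json.dumps(params).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def forward_to_tee_agent(params: dict, timeout: int = 60) -> str:
    # Network call; the response body carries the LLM output relayed
    # back from OpenAI through the TEE agent contract.
    with urllib.request.urlopen(build_tee_request(params), timeout=timeout) as resp:
        return resp.read().decode("utf-8")
```

In the future GPU TEE phase described above, `params` would instead be encrypted client-side before being sent, so the prompt is never visible outside the enclave.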
Fixes
If it fixes a bug or resolves a feature request, be sure to link to that issue.
Types of changes
What types of changes does your code introduce? (A breaking change is a fix or feature that would cause existing functionality and APIs to stop working as expected.)
Put an x in the box that applies.
Checklist
Put an x in the boxes that apply.
main branch (left side). Also you should start your branch off our main.
Further comments
If this is a relatively large or complex change, kick off the discussion by explaining why you chose the solution you did and what alternatives you considered, etc...