Proof of concept #2
edgrosvenor started this conversation in General
I just threw this together in a few minutes as an alternative, easier-to-maintain OpenAI wrapper that can be extended to other providers as well. My concern with the road we were going down is that it requires us to build a driver for every endpoint in the OpenAI API, which ends up being a lot to maintain. The approach I'm proposing loads the client in a way that lets us use any API token and organization, and it should allow for hooks. I haven't tried to tackle streamed responses or anything related to the Assistants API yet, but I wanted to run this past you.
OpenAI::token($token)->organization($organization)->chat()->create(['model' => 'whatever', 'messages' => [...]]);
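To make the shape concrete, here's a minimal sketch of how that fluent entry point could hang together. This is not the contents of OpenAIDriver.php: the ChatEndpoint class, the use of Laravel's Http client and config() helper, and an OpenAI facade that forwards static calls like token() and chat() to a driver instance are all assumptions for illustration.

```php
<?php

// Illustrative sketch only, not the actual OpenAIDriver.php. Assumes
// Laravel's Http client and config() helper, and an `OpenAI` facade
// that forwards static calls (token(), chat()) to a driver instance.

namespace ArtisanBuild\Llm\OpenAI;

use Illuminate\Support\Facades\Http;

class OpenAIDriver
{
    public function __construct(
        protected ?string $token = null,
        protected ?string $organization = null,
    ) {}

    // Entry point for the explicit-credentials flavor of the API.
    public function token(string $token): static
    {
        $this->token = $token;

        return $this;
    }

    public function organization(string $organization): static
    {
        $this->organization = $organization;

        return $this;
    }

    // Each endpoint is a small object scoped to the resolved credentials,
    // so adding an endpoint is one class, not a whole new driver.
    public function chat(): ChatEndpoint
    {
        return new ChatEndpoint(
            token: $this->token ?? config('openai.api_key'),
            organization: $this->organization ?? config('openai.organization'),
        );
    }
}

class ChatEndpoint
{
    public function __construct(
        protected string $token,
        protected ?string $organization = null,
    ) {}

    public function create(array $parameters): array
    {
        return Http::withToken($this->token)
            ->when($this->organization, fn ($http) => $http->withHeaders([
                'OpenAI-Organization' => $this->organization,
            ]))
            ->post('https://api.openai.com/v1/chat/completions', $parameters)
            ->throw()
            ->json();
    }
}
```

The nice property here is that adding a new endpoint is just another small class plus a one-line method on the driver, rather than a full driver of its own.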
In a case where the token and (optionally) organization are pulled from the environment via the config/openai.php file, the API here would be exactly the same as the official wrapper's:

OpenAI::chat()->create(['model' => 'whatever', 'messages' => [...]]);
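For the config-driven default, something like the following in config/openai.php would be enough. The key and env variable names here are my guesses rather than what's in the repo, though the env names mirror the ones OpenAI's tooling conventionally uses:

```php
<?php

// Hypothetical config/openai.php; key names are assumptions.

return [
    'api_key' => env('OPENAI_API_KEY'),
    'organization' => env('OPENAI_ORGANIZATION'),
];
```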
I haven't tried to run it yet, and it can almost be considered pseudocode. I've only created the chat endpoint so far. But the driver file is here: https://github.com/artisan-build/llm/blob/main/src/OpenAI/OpenAIDriver.php
I did just notice that we'll need to make $response a property on that class instead of a temporary variable in order for the lifecycle hook to be useful, but that's easy enough.
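For what that change buys us, here's a rough sketch of the response-as-property idea plus one possible hook mechanism. The afterResponse() registration API is invented for illustration and isn't what's in the driver file:

```php
<?php

// Sketch of why $response should be a property rather than a local
// variable: hooks that run after the request can then see the response.
// The afterResponse() mechanism is an assumption, not the repo's API.

namespace ArtisanBuild\Llm\OpenAI;

class ChatEndpoint
{
    protected ?array $response = null;

    /** @var array<callable> */
    protected array $afterResponseHooks = [];

    public function afterResponse(callable $hook): static
    {
        $this->afterResponseHooks[] = $hook;

        return $this;
    }

    public function create(array $parameters): array
    {
        // Store the response on the object instead of in a temporary
        // variable so it outlives this method and is visible to hooks.
        $this->response = $this->sendRequest($parameters);

        foreach ($this->afterResponseHooks as $hook) {
            $hook($this->response);
        }

        return $this->response;
    }

    protected function sendRequest(array $parameters): array
    {
        // Placeholder for the HTTP call from the earlier sketch.
        return [];
    }
}
```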
Does this architecture make sense to you?