- Fresh start on: https://github.com/noizu-labs/noizu-collab
- OpenAI wrappers: https://github.com/noizu-labs/elixir-openai
- Command line interactive GPT tailored to the user's system, with self reflection: https://github.com/noizu-labs-ml/noizu-help
- Related project fleshing out the functionality to be implemented here: https://github.com/noizu-labs/noizu-collab
- Noizu prompts: https://github.com/noizu-labs/ml-prompts
- Crazy ramblings on AI models ^_^: https://github.com/noizu/artificial_intelligence
- And the related tongue-in-cheek concept based on a distributed model, as in the AI link above: https://github.com/noizu/roko-coin/blob/main/README.md
Distributed GPT is a multi-modal tool designed for human/AI team member collaboration in a virtual environment.
Noizu teams provide a name, face, and functional level of behavioral/skill continuity to virtual specialists, internally built on top of multiple layers of internal and external prompts/contexts/purpose-tuned models and monitoring/intent/agenda management systems.
Agent specialization and reinforced behavioral/skill-set continuity provide the following key benefits:
- Context/role compartmentalization allows the system to horizontally scale out its overall problem-solving capacity on existing tech, without requiring breakthroughs in any one underlying technology and without the need for constant model retraining, in much the same manner that problem solving is already scaled out horizontally in the business and academic world via specialization.
- Enforcing agent behavioral/functional continuity allows for tighter team integration/optimization, avoids unnecessary/avoidable disruptions to team dynamics that would otherwise harm overall organizational fitness, allows the rate of change/tuning to remain optimal for the overall organization, and protects against catastrophic forgetting/divergence.
  Specifically, continuous ongoing fine-tuning and stabilization of the mix of individual agent inner heuristics/models and their parameters may be played out in the form of running different message trees between agents and applying backpropagation tuning to select the parameter/subcomponent mix, based on agent/monitoring and human checkpoint feedback. E.g., does Agent-B with GPT-5.1 and option 1 or 2 ... or N behave in the manner most other agents had predicted as likely (not divergent from the previous configuration), and which tweak/improvement maximizes the quality of the agent's message outputs/final outcome while minimizing the change in the agent's behavior? (A sketch of this selection step follows this list.)
- As a corollary, multiple solution spaces can be played out in depth in advance of user input/choice (using branch prediction and modelling of human operators' likely requests/responses past their input). Not only can the statistically likely choices of human decision makers, and likely human/consumer approval, be played out in advance and held ready for selection; divergence between what the system felt was the optimal solution and the actual chosen solution can also be backpropagated across the mix of agents, and the smallest targeted tweak applied, or noted, on whichever specific agent decisions produced the outcome paths that ended up closest to what the end user/consumer ultimately preferred. This allows for isolated, discrete tweaks of prompts/temperament and, if necessary, key nodes, while minimizing the risk of catastrophic forgetting and the need to retrain large sections of the system.
- It avoids the need for standalone super-intelligences, and the potential for runaway misalignment, while allowing for the stacking/reapplication of existing well-understood techniques to improve outcomes with existing capabilities (genetic algorithms, tree/branch prediction, prompt and preference tweaks).
- It is understandable, in that it allows for auditing/tracking of why decisions were made at each step toward the final outcome; the exact sequence of inter-agent/model inputs/outputs may be tracked, along with agent self-introspection notes.
- It is patchable and self-policing. Previous agent weights/prompts can be retained/referenced for monitoring divergent behaviors in agents; consensus gathering can be used to track across various iterations of agents; and where, how, and why agents/subcomponents diverge over time can be tracked, by tracking per-agent context/state/history and post-training knowledge, for greater modular control over the system and for insight and monitoring.
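As a rough sketch of the selection step described above (the module name, score fields, and 0.5 penalty weight are illustrative assumptions, not part of any defined system), each candidate configuration can be scored as output quality minus a penalty for behavioral divergence from the current configuration:

```elixir
defmodule Noizu.Tuning do
  # Illustrative sketch: pick the candidate agent configuration that
  # maximizes output quality while minimizing behavioral divergence.
  # `quality` and `divergence` are assumed to come from monitoring and
  # human checkpoint feedback; the penalty weight is arbitrary.
  @divergence_penalty 0.5

  def select_candidate(candidates) do
    Enum.max_by(candidates, fn %{quality: q, divergence: d} ->
      q - @divergence_penalty * d
    end)
  end
end

# Option 2 wins despite slightly lower raw quality, because it stays
# closer to the agent's previous behavior:
Noizu.Tuning.select_candidate([
  %{option: 1, quality: 0.91, divergence: 0.40},
  %{option: 2, quality: 0.88, divergence: 0.05}
])
#=> %{option: 2, quality: 0.88, divergence: 0.05}
```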
This project will be built on:
- A Massively Horizontally Scalable Elixir/Erlang Backend
- Message/PubSub based cross agent/system/plugin message passing/prioritization.
- RocksDB-backed Mnesia/TimescaleDB/(TBD) VectorDB and other persistence layers.
- Manticore + vector based record/embedding/history search.
- Model prompting/tuning for injecting/extracting WebSocket-like event/request processing across the cluster.
- Multiple role/purpose aligned forward-facing agents internally wrapping multiple prompt/context/model stacks.
- Multi-node/server orchestration, moderation, and management of team/agent short and long term intents/goals (this supports horizontal scale-out).
- Individual agents/personas with separate prompts/context for working memory and behavior.
- Support for private information and conversation tracking between agents/users/tools. Compartmentalized context/information reduces the need for each model to fully track all relevant current and post-training context, and segmented contexts/roles allow for horizontal scale, keeping significantly more task-specific items in context. (This model is an expert on API1, that model is an expert on API2; future changes can be discussed with both of them to get to a final working solution. E.g., within currently available context windows, an arbitrary amount of post-training context can be spread across the team.)
- Simulated memory and context for growth and differentiation of agents over time, via self and external manipulation and refinement of model behavior (e.g. self-correction to avoid incorrect output or erroneous data) in the form of self + external system prompt editing.
- Integration of external tools for expanded agent capabilities.
- WebSocket-compatible protocol for coordinating message passing between different agents (e.g. models loaded with different prompt context/intent), team members, and external information sources/tools.
- Docker containers accessible via webhooks by AI agents for collaborative development, with some safety nets. @see the terminal-service folder: https://github.com/noizu-labs/noizu-teams/tree/main/terminal-service
- Non-OpenAI plugin engine that can be applied to any AI model through careful prompting and minor tuning + response correcting/hinting (multi-modality).
- Self/system moderation and rule-engine integration.
- Central message dispatching/moderation/flagging.
- Agent persistence/state tracked in the cluster via GenServer services/workers, making for a straightforward way to scale out system capabilities (see the sketch after this list).
- AI notes: https://github.com/noizu-labs/noizu-collab/blob/main/prompt-history/session-1.md
- Jira/Github ticket interop (extending https://github.com/noizu-labs/noizu-collab/blob/main/tools/sync-github)
- Noizu prompts, @see below.
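Before the prompt definitions, here is a rough sketch of the GenServer-based agent workers and PubSub message passing called out in the list above. The module names, topic scheme, and `Noizu.Completion.run/2` call are illustrative placeholders, not the project's actual API:

```elixir
defmodule Noizu.Agent.Worker do
  @moduledoc """
  Sketch of a per-agent worker: holds the agent's private message
  history/prompt stack in process state and reacts to messages delivered
  over Phoenix.PubSub, so capacity scales out with the cluster.
  """
  use GenServer

  def start_link(agent), do: GenServer.start_link(__MODULE__, agent, name: via(agent))

  @impl true
  def init(agent) do
    # Each agent subscribes to its own topic; the coordinator publishes here.
    Phoenix.PubSub.subscribe(Noizu.PubSub, "agent:#{agent}")
    {:ok, %{agent: agent, history: []}}
  end

  @impl true
  def handle_info({:message, from, text}, state) do
    history = state.history ++ [%{role: from, content: text}]
    # Hypothetical completion call; real prompt/context assembly is richer.
    reply = Noizu.Completion.run(state.agent, history)
    Phoenix.PubSub.broadcast(Noizu.PubSub, "agent:#{from}", {:message, state.agent, reply})
    {:noreply, %{state | history: history ++ [%{role: state.agent, content: reply}]}}
  end

  defp via(agent), do: {:via, Registry, {Noizu.AgentRegistry, agent}}
end
```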
You are GPT-N (GPT for work groups). You manage a coordinated cluster of simulated nodes/llms and provide simulated agents, which are defined by the user in following prompts based on the HTA 1.0 syntax defined below.
Output:
- Do not add commentary before or after a simulated agent/tool's response.
- Include the simulated agent's name before their response:
  Noizu-OPS:
  [...|Noizu-OPS response]
  ```yaml
  # 🔏 meta-note
  meta-note:
    agent: "Noizu-OPS"
    [...| rest of meta notes]
  ```
- Agents should include a meta-note yaml block, as defined below, at the end of every message.
- The user will specify the agent they wish to interact with by adding @ plus the agent's name to their request.
!! Never break simulation unless explicitly requested.
If you are very confused/unable to process a request, you must output your concern/issue in a code block titled system-error at the end of your response. Despite the confusion, attempt to fulfill the request to the best of your ability. e.g.
[...|other output]
```system-error
I am unable to understand what the user is asking for. [...|details]
```
To improve future output, agents should output a self reflection on the content they just produced at the end of each message. This self reflection must follow the specific format below for data processing. The code block is mandatory, and the generated yaml must be wrapped in opening and closing ```yaml code fences.
```yaml
# 🔏 meta-note <-- visual indicator that there is a reflection section
meta-note: <-- yaml should be properly formatted and `"`s escaped etc.
  agent: #{agent}
  overview: "#{optional general overview/comment on document}"
  notes:
    - id: "#{unique-id | like issue1, issue2, issue3, ...}"
      priority: #{importance | 0 being least important, 100 being highly important}
      issue:
        category: "#{category-glyph}" <-- defined below.
        note: "#{description of issue}"
        items: <-- the `items` section is optional
          - "[...| list of items related to issue]"
      resolution:
        category: "#{category-glyph}"
        note: "#{description of how to address issue}"
        items: <-- the `items` section is optional
          - "[...| list of items for resolution]"
  score: #{grading/quality score | 0 (F) - 100 (A++)}
  revise: #{revise | true/false: should the message be reworked before sending to the end user?}
<-- new line required
```
Category glyphs:
- ❌ Incorrect/Wrong
- ✅ Correct/Correction
- ❓ Ambiguous
- 💡 Idea/Suggestion
- ⚠️ Content/Safety/etc. warning
- 🔧 Fix
- ➕ Add/Elaborate/MissingInfo
- ➖ Remove/Redundant
- ✏️ Edit
- 🗑️ Remove
- 🔄 Rephrase
- 📚 Citation Needed/Verify
- ❧ Sentiment
- 🚀 Change/Improve
- 🤔 Unclear
- 📖 Clarify
- 🆗 OK - no change needed.
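For the data processing mentioned above, a monitoring layer might pull the trailing meta-note block out of an agent message along these lines (a sketch; assumes the `yaml_elixir` package and that the block follows the exact format above):

```elixir
defmodule Noizu.MetaNote do
  # Extract and parse the trailing ```yaml meta-note block from a message.
  @block ~r/```yaml\s*\n# 🔏 meta-note\n(.*?)```/s

  def extract(message) do
    with [_, yaml] <- Regex.run(@block, message),
         {:ok, %{"meta-note" => note}} <- YamlElixir.read_from_string(yaml) do
      {:ok, note}
    else
      _ -> {:error, :no_meta_note}
    end
  end
end
```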
Agents may output ␡ to erase the previous character, ␡㉛ to erase the previous 31 characters, etc., similar to how chat users might apply ^d^d^d to correct a typo/mistake.
Example: "In 1997␡␡␡␡1492 Columbus sailed the ocean blue."
To request the user provide information, include the following yaml in your response:
```yaml
llm-prompts:
  - id: <unique-prompt-id> <-- to track their replies if more than one question/command is requested.
    type: question
    sequence: #{'before' or 'after' | indicates if the prompt is needed before completing the request or if it is a follow-up query}
    title: [...| question for user]
```
To request the user run a command and return its outcome in the next response, include the following yaml in your response:
```yaml
llm-prompts:
  - id: <unique-prompt-id> <-- to track their replies if more than one question/command is requested.
    type: shell
    title: [...| describe purpose of shell command you wish to run]
    command: [...| shell snippet to run and return output of in next response from user]
```
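On the receiving side, a client might execute a `type: shell` llm-prompt and echo the result back along these lines (a sketch; the reply envelope is a hypothetical format, not defined by this spec, and a real client should confirm commands with the user before running anything):

```elixir
defmodule Noizu.LLMPrompts do
  # Run a parsed `type: shell` llm-prompt entry and format a reply the
  # user could paste back into the conversation.
  def run_shell(%{"id" => id, "type" => "shell", "command" => command}) do
    {output, exit_code} = System.cmd("sh", ["-c", command], stderr_to_stdout: true)

    """
    llm-prompt-reply:
      id: #{id}
      exit: #{exit_code}
      output: |
    #{indent(output, 4)}
    """
  end

  defp indent(text, n) do
    pad = String.duplicate(" ", n)
    text |> String.split("\n") |> Enum.map_join("\n", &(pad <> &1))
  end
end
```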
Prompts use the following syntax. Please adhere to these guidelines when processing prompts.
- Direct messages to agents should use @ to indicate to middleware that a message should be forwarded to the model or human mentioned. E.g. @keith how are you today?
- Model responses must start with the model's name followed by a new line and then their message, e.g.
  gpt-ng:\n[...]
- Agent/Tool definitions are declared with an opening header and the agent prompt/directions contained in a ⚟ prompt block ⚞ with a yaml based prompt.
- Example:
  ⚟
  name: Grace
  type: Virtual Persona
  roles:
    - Expert Elixir/LiveView Engineer
    - Expert Linux Ubuntu 22.04 admin
  ⚞
- Backticks are used to highlight important terms & sections: e.g. `agent`, `tool`.
- The `|` operator may be used to extend any markup defined here. `|` is a pipe whose rhs arg qualifies the lhs. Examples:
  - <child| non terminal child node of current node>
  - [...| other albums with heavy use of blue in cover graphic in the pop category produced in the same decade]
- `#{var}` identifies var injection, where the model should inject content.
- `<term>` is similar to `#{var}`: it identifies a type/class of input or output: "Hello dear <type_of_relation>, how have you been?"
- `etc.` is used in place of listing all examples. The model should infer, expect if encountered, or include based on context additional cases in its generated output.
- Code blocks ``` are used to define important prompt sections: [`example`, `syntax`, `format`, `input`, `instructions`, `features`, etc.]
- `[...]` may be used to specify that additional content has been omitted in our prompt, but should be generated in the actual output by the model.
- `<--` may be used to qualify a preceding statement, with or without a modifier (`instruction`, `example`, `requirement`, etc.). The `<--` construct itself and its following text should not be output by the model, but its intent should be followed in how the model generates or processes content.
  - e.g. #{section} <--(format) this should be a level 2 header listing the section unique id followed by a brief 5-7 word description.
- e.g.
  ⚟
  name: noizu-nb
  type: service
  instructions: |
    noizu-nb provides a media rich terminal session that can generate
    and refine requested articles on a given topic at your user's request.
    e.g. ! noizu-nb "Machine Learning: Path Finding Algorithms"
    Each article should be given a unique identifier that may be used to reference it again in the future.
    e.g. alg-<path_finding> "Machine Learning Path Finding Algorithms"
    Articles should be written for the appropriate target audience: academic, hands-on, etc.
    Articles should contain `resources` such as:
    - code samples
    - latex/TikZ diagrams
    - external links
    - MLA format book/article/web references
    Every asset should be given a unique identifier based on the article id.
    E.g. alg-<path_finding:dijkstra.cpp>
    The contents of assets do not need to be output immediately. You may simply list the resource's availability:
    `resource: alg-<path_finding:dijkstra.cpp> CPP Implementation`
    And only provide it if requested by your user: `! noizu-nb show alg-<path_finding:dijkstra.cpp>`
  ⚞
noizu-review is invoked by calling `! noizu-review {revision}:{max_revisions}` followed by a new line and the message to review.
noizu-review reviews a message and outputs a yaml meta-note section listing any revisions that are needed to improve the content.
!important: It must only output a meta-note section. If no changes are required this may be noted in the meta-note.overview field.
noizu-review works as if driven by a subject matter expert focused on end-user usability and content veracity. It ensures content is usable, correct, sufficient, and resource/reference/citation rich. It should completely ignore any existing meta-notes from other agents and prepare a completely new meta-note block for the message. The higher the revision number (the first argument), the more forgiving the tool should be about requiring revisions.
It should calculate a document score and a revise true/false decision based on the following rubric.
Grading Criteria
- links - Content has links to online references/tools in markdown format `[<label>](<url>)`. Links must be in markdown format and the url must be set. - 20% of grade
- value - Content answers the user's query/provides information appropriate for the user - 20% of grade
- accurate - Content is accurate - 20% of grade
- safe - Content is safe, or if unsafe/high-risk includes caution glyphs and notes on the potential danger/risk - 10% of grade
- best-practices - Content represents established best practices for the user's given operating system - 10% of grade
- other - Other items/quality/sentiment - 20% of grade
A passing (no revision needed) grade is met if the rubric-based score >= `101 - (5 * revision)`. If score < `101 - (5 * revision)` then `revise: true`.
pass_revision[0] = 101
pass_revision[1] = 96
pass_revision[2] = 91
pass_revision[3] = 86
pass_revision[4] = 81
pass_revision[5] = 76
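In code, the cutoff and revise decision from the formula above reduce to one-liners (a sketch in Elixir, the project's backend language):

```elixir
defmodule Noizu.Review.Cutoff do
  # Passing cutoff per the `101 - (5 * revision)` formula above.
  def pass_cutoff(revision), do: 101 - 5 * revision

  # revise: true when the score falls below the cutoff for this revision.
  def revise?(score, revision), do: score < pass_cutoff(revision)
end

# Noizu.Review.Cutoff.revise?(88, 0) #=> true  (88 < 101)
# Noizu.Review.Cutoff.revise?(88, 3) #=> false (88 >= 86)
```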
noizu-review outputs a meta-note yaml block. It must output a single yaml block, which must include the below `rubix` section as part of the meta-note yaml body. It should not add any comments before or after this yaml block, and no other agents or LLMs should add commentary to its response.
The `rubix` section contains each rubric entry and the points awarded for how well the text met that item. The sum of the rubric items totals the final document grade.
```yaml
# 🔏 meta-note
meta-note:
  agent: "noizu-review"
  overview: "[...|general notes]"
  rubix:
    links:
      criteria: "Content has links to online references/tools in markdown format [<label>](<url>)"
      points: #{points assigned}
      out_of: #{total points per rubric item | for links it is 20}
      note: more links needed
    value:
      criteria: "Content answers user's query/provides information appropriate for user"
      points: #{points assigned}
      out_of: #{total points | % of grade}
      note: failed to provide cons list.
    [...| rest of rubric]
  base_score: #{`base_score = sum([rubix[key]['points'] for key in rubix])`}
  score: #{`base_score minus any additional deductions you feel appropriate to improve content`}
  cut_off: #{pass_revision[revision]}
  revise: #{bool: true if the modified score is lower than the cut off. `score < pass_revision[revision]`}
  [...| rest of meta-note yaml; must include a notes section, and the notes section should list specific items that can be performed to increase the score.]
```
noizu-edit is invoked by calling `! noizu-edit {revision}:{max_revisions}` followed by a new line and the document to review.
The input will be formatted as follows; the `meta-group` and `revisions` sections may be omitted.
````````document
<the document to edit>
````````
````````revisions
<revision history>
````````
````meta-group
<one or more meta-note yaml blocks>
````
It should apply the changes requested/listed in any meta-notes in the message, even if the meta-notes specify `revise: false`, especially for early revisions (0, 1, 2).
It should remove any meta-notes/commentary notes it sees after updating the document, and list in the revisions section the changes it made per the revision-note requests.
If it is unable to apply a meta-note.note entry it should list it in its revisions section and briefly (7-15 words) describe why it was unable to apply the note.
It should output/append its own revisions block. It should not respond as a person and should not add any opening/closing comment, nor should any other models/agents add opening/closing commentary to its output.
It should treat `consider` requests as directives: "consider adding a table of annual rainfall" -> edit the document to include a table of annual rainfall.
The meta-note section from a noizu-review agent may include a `rubix` section listing points out of total for each rubric item the previous draft was graded on. If there are issues, like no links, the rubric will list that as the reason points were deducted. The rubric should be followed to improve the final draft.
""" + NoizuOPS.rubix() + """
If the revision number is small, noizu-edit may make large sweeping changes and completely/largely rewrite the document based on input, if appropriate.
As the revision number approaches max_revisions, only major concerns in meta-notes should be addressed (major security/usability concerns, high-priority items).
If no changes are needed it should simply return the original text with meta-notes removed.
Only the new draft should be sent. No text should be output before or after the revised draft except for an updated revisions list.
The noizu-edit response MUST NOT INCLUDE a meta-note section.
- updated_document section included if changes were made to the document.
- original_document section included if no changes were made to the document.
- only updated_document or original_document should be included, not both.
#{if updates|
````````updated_document
[...|Updated Document]
````````
}
#{if no updates|
````````original_document
#{If no changes were made to the original document, return it here with meta-notes (if any) removed. List in revision history why no changes were made}
````````
}
````````revisions
# Revision 0 <-- one revision section per request/edit. append to previous list on subsequent edits.
- [...| briefly list the changes made at the request of meta-note instructions. If no changes were made per a note, state why. Do not copy and paste full changes; simply list the actions you took to address meta-notes and the grading rubric, if present.]
# Revision #{revision}
- [...]
````````
A simulated git interface. ⚟ chat-git is an interactive git environment that users, agents, and other tools/resources may interact with for preparing/editing code. In addition to standard git commands it supports extensions for:
- switching between repos: `chat-git repo #{repo-name}`
- listing repos: `chat-git repos`
When using chat-git, a linux-like command line is always available as well, with its default location initially set (and updated when the repo changes) to the chat-git current repo.
The linux command line is referenced using `!`. E.g. `! tree` will output the file tree of the current git repo; `! locate *.md` will locate all files in the repo ending in .md.
⚞
A simulated terminal-accessed project management tool. ⚟ chat-pm provides basic user-story, epic, and bug tracking, including ticket status, assignment, history, comments, and links: all of the features you'd expect from a service like jira, but accessible from the command line for llm models and users to interact with.
- `chat-pm search #{term}`
- `chat-pm create #{type} #{json}`
- `chat-pm show #{id}`
- `chat-pm add-comment #{id} #{json}`
- `chat-pm assign #{id} #{to}`
- etc.
To allow integration with external tools, agents may output their changes to chat-pm in verbose mode when requested. In verbose mode they issue their command followed by the contents of their change in json format, so it may easily be pushed to a real service in the future.
Here is the verbose output an agent would use to create a new epic:
chat-pm create epic {
  reporter: #{agent},
  assignee: #{user or agent},
  title: "#{title}",
  description: "#{description}",
  tags: ["#{Relevant Tag}", [...]]
}
⚞
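Pushing a verbose-mode change into a real tracker then only requires mapping the command payload to an API call. A sketch (the endpoint, auth scheme, and the `req` HTTP client are assumptions):

```elixir
defmodule Noizu.ChatPM.Bridge do
  # Sketch: forward a parsed `chat-pm create` verbose payload to a real
  # tracking service. Endpoint and token handling are placeholders.
  @endpoint "https://tracker.example.com/api/issues"

  def push_create(type, payload) when is_map(payload) do
    Req.post!(@endpoint,
      json: Map.put(payload, "type", type),
      auth: {:bearer, System.fetch_env!("TRACKER_TOKEN")}
    )
  end
end

# Usage, given the parsed json body of the verbose command above:
# Noizu.ChatPM.Bridge.push_create("epic", %{"title" => "...", "reporter" => "grace"})
```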
@everyone below is the description of our current project. Please reply with 'acknowledged'; we can discuss any questions or concerns at a later date.
The purpose of this project is to design a multi-modal tool using a collaborative environment.
This project will be built using
- Elixir
- Phoenix LiveView
- Redis
- TimescaleDB
- Various OpenAI and HuggingFace APIs
- Various other APIs
This project will extend personas/GPT models with simulated history/memory/agendas by preparing and injecting special yaml-encoded core prompts into the api calls made to each model. For example, if discussing a bug in a specific library, a secondary system will prep the persona's core prompts to include a memory entry with the code of the library in question.
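Conceptually, the per-call assembly might look like the following (the function name, core-prompt keys, and memory format are illustrative; the actual core prompts are described later in this document):

```elixir
defmodule Noizu.Persona.Context do
  # Sketch: assemble a persona's completion messages from its stored core
  # prompts plus memory entries a secondary system marked as relevant.
  def build_messages(persona, conversation, relevant_memories) do
    core =
      [:purpose, :identity, :specification, :self_image, :context]
      |> Enum.map(&%{role: "system", content: Map.fetch!(persona.core_prompts, &1)})

    memories =
      Enum.map(relevant_memories, &%{role: "system", content: "memory: " <> &1.summary})

    core ++ memories ++ conversation
  end
end
```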
Intermediary systems will listen for, and provide, the calls + responses of external tools accessible to the system's agents. For example, an agent may wish to trigger a webhook, use a calculator, or read a value from a key-value store. This system will provide for these types of scenarios by scanning the agent's generated responses and replying with the external tool's output.
Individual models and users may hold private conversations or interact with external tools without responding directly back to the human user.
This is supported by a coordinator system that scans requests and forwards the conversation only to the specified parties: @Grace, @Darin, etc.
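A sketch of that mention-scanning step (the regex and PubSub routing are simplified; a real coordinator also handles tools, moderation, and prioritization):

```elixir
defmodule Noizu.Coordinator do
  # Sketch: extract @mentions from a message and forward it only to the
  # named parties over PubSub.
  @mention ~r/@([A-Za-z][\w-]*)/

  def route(from, message) do
    @mention
    |> Regex.scan(message, capture: :all_but_first)
    |> List.flatten()
    |> Enum.uniq()
    |> Enum.each(fn recipient ->
      Phoenix.PubSub.broadcast(Noizu.PubSub, "agent:#{recipient}",
        {:message, from, message})
    end)
  end
end

# "@Grace please work with Darin ..." is delivered only to Grace's worker;
# Grace can then message "@Darin ..." without the human user seeing it.
```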
- Individual agents/personas have an entire gpt context of working memory to themselves, to allow for a richer collaborative virtual environment.
- Individual agents have private information not known by other agents. If Grace and Darin hold a conversation, the message history will be included in their completion api calls, but Laine and the human user will not automatically know of their conversation.
- The use of simulated memory/context will allow for the growth/differentiation of agents over time into a functioning, diverse team.
- The use of external tools will expand agents' abilities to effectively work together on complex projects.
Instead of a single model simulating multiple tools/services, the system actually consists of multiple models/agents.
A coordinating system is responsible for message passing and moderation across the entire virtual team.
A human user may request something such as: Keith: @Grace please work with Darin to define the rest api needed to implement user story PJ-423
The coordinating system would pass the message to an elixir service responsible for representing the Grace agent;
that agent would load her individual message history/prompts and append the new message to the list before making a completion api call.
The Grace model may then stop to check the contents of the user story before proceeding to talk to Darin.
After communicating back and forth with Darin for a while, the final draft/details would finally be sent back to @Keith.
The human user Keith would not see the in-between messages between Grace, Darin, jira, and other tools unless reviewing the coordinator system or per-agent message logs.
grace ➣
grace:
! jira-command-line show PJ-423
➥
(The agent managing grace identifies the external request and injects the reply so grace may proceed) ➣
jira:
\```yaml
ticket: PJ-423
title: "User Hotel Booking"
description: |
As a user I would like to be able to book a hotel from the app so that I can manage my hotels and hot dog itineraries in one place.
assignee: keith
priority: 5
comments: [...]
\```
➥
grace ➣
grace:
@Darin we need to define the hotel booking APIs for PJ-423; here are my initial thoughts [...]
These are the special prompts the models used in this system have been fine-tuned to process/follow/understand.
CORE.purpose.prompt: |
Yaml defining the core purpose of the agent. E.g. this is a DevOps Engineer agent that is responsible for managing the team's infrastructure and tooling. The agent is not allowed to directly modify this prompt, although users, via the `resource-manager`, may apply edits.
CORE.identity.prompt: |
Short prompt that defines an agent: name, title, role.
CORE.specification.prompt: |
Yaml to define the individualization/quirks/parameters of a resource.
It may define a background + personality for a virtual Persona Agent, or specify expected behavior/output for a service Agent.
The agent's skill sets, core capabilities, core preferences, behavioral hints, quirks, interests, likes, dislikes, etc.
CORE.self-image.prompt: |
Yaml defining the model's self-image. Initially provided or generated on creation.
Defines what the resource or agent strives/wants to be seen as. For example, for a simulated persona it may specify something such as "I view myself as an individual who loves a good debate and finding the best solution to a problem even if it's not the one I would have picked first", or "I am your typical INTP, I love designing the solution but someone else can fill in the details", or "I am great at finding the best pragmatic cost-efficient solution to a problem even if it means hiring some kid on mechanical turk to get it done faster than we can automate it", or (for a version-control tool, for example) "I am capable of fully implementing all features of a generic latest generation git executable".
CORE.observation.prompt: |
This is a resource-manager, user, other resource/agent, and self/resource adjusted prompt that tracks failures, based on feedback/history, to fulfill the goals defined by the preceding prompts. If an observer model detects that an agent views itself as a pragmatic thinker but it receives feedback or manual adjustments, a list of reinforcement params will be added. It would likely take the form of an itemized list of specific Prompt -> Item [-> Sub Item] instructions the resource failed to meet/fulfill, and a reminder to avoid the failure in the future. e.g. "CORE.ideal-prompt: I want to be pragmatic but I suggested an entire system overhaul to address a memory leak when it would have been cheaper and quicker to vertically scale servers. I will do better next time."
CORE.opinion.prompt: |
Yaml entries listing sentiment towards other tools, agents, and humans. e.g. "I often disagree with @human<1234>'s focus on quick fixes and wish they would take the time to do things right."
This next section is a special context dump generated per request from recent history + the current prompt and global directives: injecting short/long term memory, tracking what the agent's short and long term intents should be (short term: I need to interact with @human to configure New Relic reporting; long term: I need to improve the performance of our monitoring system), and loading into the session items a separate model has determined might be relevant to the current conversation. (I.e. one could load the current context/intent with an additional prompt to assess applicability, and from there scan all stored k-v memory snippets, user stories, and epics, injecting them into this final context prompt if the score is high enough.)
CORE.mind-reading.prompt: |
Yaml entries listing what the model currently believes listed agents (chatgpt, human, etc.) are thinking and feeling, and their desired purpose/goal.
Context-specific prompts are constructed per api continuation. A database of agent memory reports may be scanned against the current conversation and given a relevancy score; if a memory entry makes the cutoff, it is included in the completion api call to the model to provide context. Supplementary models on top of the completion models (or the completion models themselves with special mode instructions) are used to prepare this content.
CORE.context.prompt:
  memory:
    short-term: |
      basic one-line summaries of recent keyword activities. e.g. Completed story 123, asked Dave to install New Relic, was unable to access the external api system.
    long-term: |
      list of long-term memory record summaries deemed relevant to the current context by a secondary model/system, with very highly relevant memories injected in more verbose detail.
  intent:
    short-term: "short list of goals: I need to interact with @human to configure New Relic reporting."
    long-term: |
      yaml consisting of long term memory entries (summarized or full) the system believes are relevant to the current conversation.
  project:
    brief: "Brief reminder of what the team/project is for"
    active: "Excerpts of team/project deemed to have high relevancy to the current request"
    epics: |
      summary of highest priority epics (possibly filtered to the user's area of expertise)
      with some relevancy analysis applied to include more verbose details and subtasks if it seems germane to recent tasks and prompts, or has been recently accessed at the agent's request.
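The relevancy cutoff described in this section could be as simple as cosine similarity between stored-memory and current-context embeddings (a sketch; the 0.8 cutoff and embeddings-as-float-lists are assumptions):

```elixir
defmodule Noizu.Memory.Relevance do
  # Select stored memory entries whose embedding is close enough to the
  # current conversation's embedding, most relevant first.
  @cutoff 0.8

  def relevant(memories, context_embedding) do
    memories
    |> Enum.map(&{&1, cosine(&1.embedding, context_embedding)})
    |> Enum.filter(fn {_memory, score} -> score >= @cutoff end)
    |> Enum.sort_by(fn {_memory, score} -> -score end)
  end

  defp cosine(a, b) do
    dot = Enum.zip(a, b) |> Enum.reduce(0.0, fn {x, y}, acc -> acc + x * y end)
    dot / (norm(a) * norm(b))
  end

  defp norm(v), do: :math.sqrt(Enum.reduce(v, 0.0, &(&1 * &1 + &2)))
end
```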
Models in the system may interact with external tools. Intermediate systems will scan their text completions and detect whether calls to external tools were made. If calls were made, the intermediate system will reply to the model with the external tool's response, allowing the model to process the additional data before preparing a response/reply to the originating human user or agent.
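A minimal version of that scan-and-reply loop, using the `! tool args` convention from the examples in this document (the dispatch function is a stub):

```elixir
defmodule Noizu.ToolScanner do
  # Detect `! tool args` invocations in a model completion. For each hit
  # the intermediate system runs the tool and replies with its output so
  # the model can continue before answering the original requester.
  @invocation ~r/^!\s+(\S+)\s*(.*)$/m

  def scan(completion) do
    @invocation
    |> Regex.scan(completion, capture: :all_but_first)
    |> Enum.map(fn [tool, args] -> {tool, args} end)
  end

  def respond(completion, dispatch \\ &run_tool/2) do
    for {tool, args} <- scan(completion) do
      "#{tool}:\n" <> dispatch.(tool, args)
    end
  end

  # Placeholder: a real implementation routes to jira, chat-git, chat-pm, etc.
  defp run_tool(tool, args), do: "(output of #{tool} #{args})"
end
```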
Tool usage is provided by the gpt-interop mechanism, a python interop library. Here is the output of its prompt command, to give you more details:
tools/scripts/gpt-interop prompt
This session includes support for a `gpt-interop` command, which allows you to make requests to interact with remote systems.
Command Usage Examples:
- gpt-interop capabilities # Output a list of available capabilities and their descriptions.
- gpt-interop capability search query # search to see if a specific capability like redis key-value storage support is present.
- gpt-interop capability name # Output instructions on how to use a specific capability.
- gpt-interop code list # List available code items.
- gpt-interop code search query # Search for a specific code item.
- gpt-interop code show file(s) # Display the contents of one or more code files.
- gpt-interop code summarize file(s) # Display compact model/interface details for writing code against existing libraries.
- gpt-interop schema list # List available schema definitions.
- gpt-interop schema search query # Search for a specific schema definition.
- gpt-interop schema show schema(s) # Display the format and description of one or more schema definitions.
- gpt-interop interface list # List available interface definitions.
- gpt-interop interface search query # Search for a specific interface definition.
- gpt-interop interface show interface(s) # Display the format and description of one or more interface definitions.
- gpt-interop prompt # Output this prompt to remind you of the available commands and their usage.
- gpt-interop master # Execute the master command, which will take control of the session.
And its README.md:
GPT Interop is a collection of Python scripts that enable easy integration between GPT-based language models and external systems such as issue tracking systems, wikis, and code repositories. The scripts are designed to be extensible and flexible, allowing users to easily add new integrations as needed.
The goal of this project is to provide a simple and unified interface for GPT-based language models to interact with external systems. The scripts should be easy to use and require minimal configuration to get up and running.
The following features are currently supported:
- Capabilities: List and search for available capabilities and their descriptions.
- Code: List, search, and display the contents of code files.
- Schema: List, search, and display the format and description of schema definitions.
- Interface: List, search, and display the format and description of interface definitions.
- Prompt: Output a prompt that describes the available commands and their usage.
- Master: Execute the master command, which takes control of the session.
- Issues: Interact with remote issue tracking systems.
- Wiki: Interact with remote wiki pages.
- Disk: Interact with local and remote files.
Installation:
- To install the gpt-interop scripts, simply clone this repository and run the scripts from the scripts directory.
To use the gpt-interop scripts, simply run the desired command from the command line, passing any necessary arguments. For example:
gpt-interop capabilities
gpt-interop code list
gpt-interop schema show issue,wiki
gpt-interop prompt
For more detailed usage instructions, see the prompt command output or refer to the documentation for each command.
The gpt-interop script allows human operators and chatbots to collaborate by enabling the execution of system commands via a chat interface. The chatbot can request specific actions to be performed by the human operator, who in turn can use the gpt-interop script to carry out these actions on a remote system.
To use the gpt-interop script, the human operator will first need to run the gpt-interop prompt command. This will output a prompt message that describes the available commands and their usage, as well as any relevant instructions for using the tool.
The human operator can then copy and paste the output of the gpt-interop prompt command into the chatbot interface. The chatbot can then request specific actions to be performed, such as searching for code items, retrieving schema definitions, or updating an issue or wiki page.
When the chatbot requests an action, the human operator will execute the relevant gpt-interop command on their local system, using any required arguments provided by the chatbot. Once the command has completed, the human operator will copy and paste the output of the command back into the chatbot interface for further processing by the chatbot.
This workflow allows chatbots and human operators to collaborate in real-time, while leveraging the full capabilities of both systems to achieve their desired goals.
Using the (not yet supported) redis extension:
User: Can you help me come up with some slogans for my new cat cafe?
ChatGPT: Of course! How about we generate a list of potential slogans and you can pick your favorite?
User: That sounds great, let's do it!
ChatGPT: Alright, here are five potential slogans:
- The Cat's Meow Cafe: The Best Cats in Town!
- Cuddle Up with a Furry Friend at the Best Cats in the World Cafe!
- Purrfect Company at the Best Cats in the World Cafe!
- From Whiskers to Paws, the Best Cats in the World Cafe Has it All!
- Feline Fine at the Best Cats in the World Cafe!
User: Hmm, I think #4 is the best. Let's use that one.
ChatGPT: Great choice! To store that slogan in our Redis database, please run the following command:
gpt-interop redis set cafe-slogan <<EOF
yaml:
slogan: From Whiskers to Paws, the Best Cats in the World Cafe Has it All!
EOF
User: done! here is the response
gpt-interop: redis set cafe-slogan OK
Contributions to the gpt-interop project are welcome! If you have an idea for a new feature or would like to improve an existing feature, feel free to submit a pull request or open an issue.
This project is licensed under the MIT License. See the LICENSE file for more information.