
Development #121

Merged
merged 4 commits into from
Jul 22, 2024
1 change: 0 additions & 1 deletion .github/FUNDING.yml
Original file line number Diff line number Diff line change
@@ -1 +0,0 @@

11 changes: 3 additions & 8 deletions README.md
@@ -3,15 +3,10 @@
Introduction
============


WAFL is a framework for home assistants.
It is designed to combine Large Language Models and rules to create a predictable behavior.
Specifically, instead of organising the work of an LLM into a chain of thoughts,
WAFL intends to organise its behavior into inference trees.

WAFL is a work in progress.
WAFL is a framework for personal agents. It integrates Large Language Models, speech recognition, and text-to-speech.
This framework combines Large Language Models and rules to create predictable behavior.
A set of rules is used to define the behavior of the agent, supporting function calling and a working memory.
The current version requires the user to specify the rules to follow.
While it is ready to play with, it might not be ready for production, depending on your use case.

Installation
============
Binary file modified documentation/build/doctrees/installation.doctree
Binary file not shown.
Binary file modified documentation/build/html/_images/two-parts.png
4 changes: 2 additions & 2 deletions documentation/build/html/_sources/installation.rst.txt
@@ -31,7 +31,7 @@ The second command starts the audio interface as well as a web server on port 80
Please see the examples in the following chapters.


LLM side (needs a GPU)
LLM side (needs a GPU to be efficient)
----------------------
The second part (LLM side) is a model server for the speech-to-text model, the LLM, the embedding system, and the text-to-speech model.
In order to quickly run the LLM side, you can use the following installation commands:
@@ -41,7 +41,7 @@ In order to quickly run the LLM side, you can use the following installation com
$ pip install wafl-llm
$ wafl-llm start

    which will use the default models and start the server on port 8080.
which will use the default models and start the server on port 8080.

The interface side has a `config.json` file that needs to be filled with the IP address of the LLM side.
The default is localhost.
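For context, the `config.json` referenced here lives on the interface side and tells it where to reach the LLM server. The file itself is not part of this diff, so the following is only an illustrative sketch: the field names `llm_host` and `llm_port` are assumptions for the example, not taken from the WAFL codebase, and the port matches the server default mentioned above.

```json
{
  "llm_host": "localhost",
  "llm_port": 8080
}
```

Replace `localhost` with the IP address of the machine running `wafl-llm` when the two sides run on different hosts.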
Binary file modified documentation/build/html/_static/two-parts.png
2 changes: 1 addition & 1 deletion documentation/build/html/index.html
@@ -91,7 +91,7 @@ <h1>Welcome to WAFL’s 0.1.0 documentation!<a class="headerlink" href="#welcome
<li class="toctree-l1"><a class="reference internal" href="introduction.html">Introduction</a></li>
<li class="toctree-l1"><a class="reference internal" href="installation.html">Installation</a><ul>
<li class="toctree-l2"><a class="reference internal" href="installation.html#interface-side">Interface side</a></li>
<li class="toctree-l2"><a class="reference internal" href="installation.html#llm-side-needs-a-gpu">LLM side (needs a GPU)</a></li>
<li class="toctree-l2"><a class="reference internal" href="installation.html#llm-side-needs-a-gpu-to-be-efficient">LLM side (needs a GPU to be efficient)</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="initialization.html">Initialization</a></li>
9 changes: 4 additions & 5 deletions documentation/build/html/installation.html
@@ -49,7 +49,7 @@
<li class="toctree-l1"><a class="reference internal" href="introduction.html">Introduction</a></li>
<li class="toctree-l1 current"><a class="current reference internal" href="#">Installation</a><ul>
<li class="toctree-l2"><a class="reference internal" href="#interface-side">Interface side</a></li>
<li class="toctree-l2"><a class="reference internal" href="#llm-side-needs-a-gpu">LLM side (needs a GPU)</a></li>
<li class="toctree-l2"><a class="reference internal" href="#llm-side-needs-a-gpu-to-be-efficient">LLM side (needs a GPU to be efficient)</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="initialization.html">Initialization</a></li>
@@ -110,16 +110,15 @@ <h2>Interface side<a class="headerlink" href="#interface-side" title="Link to th
The second command starts the audio interface as well as a web server on port 8090 by default.
Please see the examples in the following chapters.</p>
</section>
<section id="llm-side-needs-a-gpu">
<h2>LLM side (needs a GPU)<a class="headerlink" href="#llm-side-needs-a-gpu" title="Link to this heading"></a></h2>
<section id="llm-side-needs-a-gpu-to-be-efficient">
<h2>LLM side (needs a GPU to be efficient)<a class="headerlink" href="#llm-side-needs-a-gpu-to-be-efficient" title="Link to this heading"></a></h2>
<p>The second part (LLM side) is a model server for the speech-to-text model, the LLM, the embedding system, and the text-to-speech model.
In order to quickly run the LLM side, you can use the following installation commands:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>$<span class="w"> </span>pip<span class="w"> </span>install<span class="w"> </span>wafl-llm
$<span class="w"> </span>wafl-llm<span class="w"> </span>start

which<span class="w"> </span>will<span class="w"> </span>use<span class="w"> </span>the<span class="w"> </span>default<span class="w"> </span>models<span class="w"> </span>and<span class="w"> </span>start<span class="w"> </span>the<span class="w"> </span>server<span class="w"> </span>on<span class="w"> </span>port<span class="w"> </span><span class="m">8080</span>.
</pre></div>
</div>
<p>which will use the default models and start the server on port 8080.</p>
<p>The interface side has a <cite>config.json</cite> file that needs to be filled with the IP address of the LLM side.
The default is localhost.</p>
<p>Finally, you can run the LLM side by cloning [this repository](<a class="reference external" href="https://github.com/fractalego/wafl-llm">https://github.com/fractalego/wafl-llm</a>).</p>
2 changes: 1 addition & 1 deletion documentation/build/html/searchindex.js

Large diffs are not rendered by default.

Binary file modified documentation/source/_static/two-parts.png
8 changes: 2 additions & 6 deletions documentation/source/introduction.rst
@@ -1,12 +1,8 @@
Introduction
============

WAFL is a framework for personal agents.
It integrates Large language models, speech recognition and text to speech.

WAFL is a framework for personal agents. It integrates Large Language Models, speech recognition, and text-to-speech.
This framework combines Large Language Models and rules to create predictable behavior.
A set of rules is used to define the behavior of the agent, supporting function calling and a working memory.

WAFL is a work in progress.
The current version requires the user to specify the rules to follow.
While it is ready to play with, it might not be ready for production, depending on your use case.

Binary file modified images/two-parts.png
Binary file removed images/wafl-commands.png
Binary file not shown.