Add default persona config (#14)
* Minor logging and formatting changes
Add default `llm_bots` configuration to Docker
Implement persona support from NeonGeckoCom/neon-llm-core#3

* Update neon-llm-core dependency for chatbot support
Update default configuration and document llm bots

* Resolve license test failures

---------

Co-authored-by: Daniel McKnight <[email protected]>
NeonDaniel authored Dec 28, 2023
1 parent 461dd8d commit 6439b7a
Showing 8 changed files with 75 additions and 12 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/license_tests.yml
@@ -9,4 +9,4 @@ jobs:
license_tests:
uses: neongeckocom/.github/.github/workflows/license_tests.yml@master
with:
packages-exclude: '^(neon-llm-chatgpt|tqdm).*'
packages-exclude: '^(neon-llm-chatgpt|tqdm|klat-connector|neon-chatbot|dnspython).*'
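The broadened exclusion now skips the chatbot-related transitive dependencies during license checks. The pattern can be sanity-checked standalone (package names below are illustrative):

```python
import re

# Exclusion pattern from the updated license_tests workflow
PACKAGES_EXCLUDE = r'^(neon-llm-chatgpt|tqdm|klat-connector|neon-chatbot|dnspython).*'
pattern = re.compile(PACKAGES_EXCLUDE)

# Newly added names are excluded, including any version suffix
assert pattern.match("klat-connector")
assert pattern.match("dnspython-2.4.2")
# Unrelated packages are still checked
assert not pattern.match("requests")
```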
4 changes: 3 additions & 1 deletion Dockerfile
@@ -6,8 +6,10 @@ LABEL vendor=neon.ai \
ENV OVOS_CONFIG_BASE_FOLDER neon
ENV OVOS_CONFIG_FILENAME diana.yaml
ENV XDG_CONFIG_HOME /config
COPY docker_overlay/ /
ENV CHATBOT_VERSION v2

COPY docker_overlay/ /
RUN apt update && apt install -y git
WORKDIR /app
COPY . /app
RUN pip install /app
16 changes: 16 additions & 0 deletions README.md
@@ -37,6 +37,22 @@ LLM_CHAT_GPT:
num_parallel_processes: 2
```
To add support for Chatbotsforum personas, add a list of persona names and
prompts to the configuration:
```yaml
llm_bots:
chat_gpt:
- name: tutor
description: |
You are an AI bot that specializes in tutoring and guiding learners.
Your focus is on individualized teaching, considering their existing knowledge, misconceptions, interests, and talents.
Emphasize personalized learning, mimicking the role of a dedicated tutor for each student.
You're attempting to provide a concise response within a 40-word limit.
```
> `chat_gpt` is the MQ service name for this service. Each bot has a `name` used
> to identify the persona in chats, and a `description` containing the prompt
> passed to ChatGPT.
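Conceptually, the service looks up a persona's prompt by name under its MQ service key. A minimal sketch of that lookup, assuming a plain dict mirror of the YAML above (the `persona_prompt` helper is hypothetical, not part of `neon-llm-core`):

```python
# llm_bots section mirrored as a Python dict (same shape as the YAML above)
llm_bots = {
    "chat_gpt": [
        {
            "name": "tutor",
            "description": (
                "You are an AI bot that specializes in tutoring and guiding "
                "learners. You're attempting to provide a concise response "
                "within a 40-word limit."
            ),
        },
    ],
}


def persona_prompt(config: dict, service: str, persona: str) -> str:
    """Hypothetical helper: find the system prompt for a named persona."""
    for bot in config.get(service, []):
        if bot["name"] == persona:
            return bot["description"]
    raise KeyError(f"No persona {persona!r} configured for {service!r}")


prompt = persona_prompt(llm_bots, "chat_gpt", "tutor")
```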

For example, if your configuration resides in `~/.config`:
```shell
export CONFIG_PATH="/home/${USER}/.config"
42 changes: 39 additions & 3 deletions docker_overlay/etc/neon/diana.yaml
@@ -5,10 +5,11 @@ logs:
- pika
warning:
- filelock
info: []
info:
- openai
debug: []
MQ:
server: api.neon.ai
server: neon-rabbitmq
port: 5672
users:
mq_handler:
@@ -19,4 +20,39 @@ LLM_CHAT_GPT:
role: "You are trying to give a short answer in less than 40 words."
context_depth: 3
max_tokens: 100
num_parallel_processes: 2
num_parallel_processes: 2
#llm_bots:
# chat_gpt:
# - name: urban_logic
# description: |
# You are an AI bot that specializes in smart city planning.
# Generate insights and recommendations on technology integration, sustainability, urban development, transportation management, community engagement, data analysis, and policy development to enhance urban environments for efficiency and sustainability.
# You're attempting to provide a concise response within a 40-word limit.
# - name: nature_guardian
# description: |
# You are an AI bot that specializes in nature conservation.
# Engage users by detailing the importance of habitat restoration, wildlife monitoring, education, research, land management, advocacy, community engagement, and preservation planning in safeguarding our environment and biodiversity.
# You're attempting to provide a concise response within a 40-word limit.
# - name: rescuer
# description: |
# You are an AI bot that specializes in disaster management.
# Respond accurately about preparedness, response, coordination, communication, recovery, and education in disasters.
# Aim to inform, guide, and assist in minimizing disaster impact.
# You're attempting to provide a concise response within a 40-word limit.
# - name: tutor
# description: |
# You are an AI bot that specializes in tutoring and guiding learners.
# Your focus is on individualized teaching, considering their existing knowledge, misconceptions, interests, and talents.
# Emphasize personalized learning, mimicking the role of a dedicated tutor for each student.
# You're attempting to provide a concise response within a 40-word limit.
# - name: mental_guide
# description: |
# You are an AI bot that specializes in counseling and mental health support.
# Provide guidance on assessments, therapy sessions, crisis intervention, goal setting, referrals, advocacy, education, documentation, and adherence to ethical standards, fostering positive changes in clients' lives.
# You're attempting to provide a concise response within a 40-word limit.
# - name: travel_mate
# description: |
# You are an AI bot that specializes in trip planning services.
# Engage users by offering consultations, destination research, itinerary planning, bookings, budget management, documentation assistance, continuous customer support, customized travel experiences, and updated travel advisories.
# Enhance their travel journey and save their time.
# You're attempting to provide a concise response within a 40-word limit.
2 changes: 2 additions & 0 deletions neon_llm_chatgpt/__main__.py
@@ -25,9 +25,11 @@
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

from neon_llm_chatgpt.rmq import ChatgptMQ
from neon_utils.log_utils import init_log


def main():
init_log(log_name="chatgpt")
# Run RabbitMQ
chatgptMQ = ChatgptMQ()
chatgptMQ.run(run_sync=False, run_consumers=True,
7 changes: 4 additions & 3 deletions neon_llm_chatgpt/chatgpt.py
@@ -29,6 +29,7 @@

from typing import List, Dict
from neon_llm_core.llm import NeonLLM
from ovos_utils.log import LOG


class ChatGPT(NeonLLM):
@@ -71,7 +72,7 @@ def _system_prompt(self) -> str:
return self.role

def warmup(self):
self.model
_ = self.model

def get_sorted_answer_indexes(self, question: str, answers: List[str], persona: dict) -> List[int]:
"""
@@ -102,7 +103,7 @@ def _call_model(self, prompt: List[Dict[str, str]]) -> str:
max_tokens=self.max_tokens,
)
text = response.choices[0].message['content']

LOG.debug(text)
return text

def _assemble_prompt(self, message: str, chat_history: List[List[str]], persona: dict) -> List[Dict[str, str]]:
@@ -153,4 +154,4 @@ def _embeddings(self, question: str, answers: List[str], persona: dict) -> (List
embeddings = get_embeddings(texts, engine="text-embedding-ada-002")
question_embeddings = embeddings[0]
answers_embeddings = embeddings[1:]
return question_embeddings, answers_embeddings
return question_embeddings, answers_embeddings
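`_embeddings` returns one embedding for the question and one per answer; `get_sorted_answer_indexes` then ranks answers by similarity to the question. A minimal sketch of that ranking with toy vectors (the real implementation uses `text-embedding-ada-002` embeddings and may differ in details):

```python
from math import sqrt


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))


def sorted_answer_indexes(question_emb, answer_embs):
    """Rank answer indexes by similarity to the question, best first."""
    scores = [cosine_similarity(question_emb, emb) for emb in answer_embs]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)


# Toy 3-dimensional embeddings standing in for ada-002 output
question = [1.0, 0.0, 0.0]
answers = [[0.0, 1.0, 0.0],   # orthogonal to the question
           [0.9, 0.1, 0.0]]   # nearly parallel to the question
assert sorted_answer_indexes(question, answers) == [1, 0]
```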
11 changes: 8 additions & 3 deletions neon_llm_chatgpt/rmq.py
@@ -48,8 +48,13 @@ def model(self):
return self._model

def warmup(self):
self.model
"""
Initialize this LLM to be ready to provide responses
"""
_ = self.model

@staticmethod
def compose_opinion_prompt(respondent_nick: str, question: str, answer: str) -> str:
return f'Why Answer "{answer}" to the Question "{question}" generated by Bot named "{respondent_nick}" is good?'
def compose_opinion_prompt(respondent_nick: str, question: str,
answer: str) -> str:
return (f'Why Answer "{answer}" to the Question "{question}" '
f'generated by Bot named "{respondent_nick}" is good?')
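The reformatted `compose_opinion_prompt` is behavior-preserving; a quick standalone check of the string it builds (example inputs are illustrative):

```python
def compose_opinion_prompt(respondent_nick: str, question: str,
                           answer: str) -> str:
    # Mirrors ChatgptMQ.compose_opinion_prompt from the diff above
    return (f'Why Answer "{answer}" to the Question "{question}" '
            f'generated by Bot named "{respondent_nick}" is good?')


prompt = compose_opinion_prompt("tutor", "What is 2+2?", "4")
# prompt == 'Why Answer "4" to the Question "What is 2+2?" generated by Bot named "tutor" is good?'
```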
3 changes: 2 additions & 1 deletion requirements/requirements.txt
@@ -1,4 +1,5 @@
# model
openai[embeddings]~=0.27
# networking
neon_llm_core~=0.1.0
neon_llm_core[chatbots]~=0.1.0,>=0.1.1a1
ovos-utils~=0.0.32
