
Using Semantic Workbench with different AI models

This project provides a functional chatbot example for the Semantic Workbench that can be configured to use models from OpenAI, Anthropic, or Google Gemini. The example defines a general message model that adapts to the configured model at runtime; each model is called through its own native Python SDK.
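The "general message model" idea can be sketched as a single neutral message type converted to each provider's native request shape at call time. This is a hypothetical illustration of the pattern, not the example's actual code: the type and function names are made up, and only the request payload shapes of the three provider APIs are shown (no SDK calls).

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class ChatMessage:
    """Neutral message type; adapted per provider at call time."""
    role: Literal["system", "user", "assistant"]
    content: str

def to_openai(messages: list[ChatMessage]) -> list[dict]:
    # OpenAI's chat API accepts role/content dicts directly.
    return [{"role": m.role, "content": m.content} for m in messages]

def to_anthropic(messages: list[ChatMessage]) -> tuple[str, list[dict]]:
    # Anthropic's Messages API takes the system prompt as a separate
    # parameter, so split it out from the conversation turns.
    system = "\n".join(m.content for m in messages if m.role == "system")
    turns = [{"role": m.role, "content": m.content}
             for m in messages if m.role != "system"]
    return system, turns

def to_gemini(messages: list[ChatMessage]) -> list[dict]:
    # Gemini uses "model" instead of "assistant" and wraps text in parts;
    # system instructions are passed separately, so they are dropped here.
    role_map = {"user": "user", "assistant": "model"}
    return [{"role": role_map[m.role], "parts": [{"text": m.content}]}
            for m in messages if m.role in role_map]
```

Keeping the conversion at the edge like this lets the rest of the chatbot stay provider-agnostic while still using each model's native SDK.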

Responsible AI

The chatbot incorporates important best practices for responsible AI development. See the Responsible AI FAQ for more information.

Suggested Development Environment

  • Use GitHub Codespaces for a quick, turn-key dev environment: /.devcontainer/README.md
  • VS Code is recommended for development

Prerequisites

  • Set up your dev environment
  • Set up and verify that the workbench app and service are running using the semantic-workbench.code-workspace
  • If using Azure OpenAI, set up an Azure account and create a Content Safety resource
    • See Azure AI Content Safety for more information
    • Copy the .env.example to .env and update the ASSISTANT__AZURE_CONTENT_SAFETY_ENDPOINT value with the endpoint of your Azure Content Safety resource
    • From VS Code > Terminal, run az login to authenticate with Azure prior to starting the assistant
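After copying .env.example to .env, the Content Safety setting described above would look something like the following fragment (the endpoint URL is a placeholder for your own resource, not a real value):

```
# .env (copied from .env.example)
ASSISTANT__AZURE_CONTENT_SAFETY_ENDPOINT=https://<your-resource>.cognitiveservices.azure.com/
```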

Steps

  • Use VS Code > Run and Debug (ctrl/cmd+shift+d) > semantic-workbench to start the app and service from this workspace
  • Use VS Code > Run and Debug (ctrl/cmd+shift+d) > launch assistant to start the assistant
  • If running in a devcontainer, follow the instructions in .devcontainer/POST_SETUP_README.md for any additional steps
  • Return to the workbench app to interact with the assistant
  • Add a new assistant from the main menu of the app, choosing the assistant name as defined by the service_name in chat.py
  • Click the newly created assistant to configure and interact with it
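The assistant name referenced in the steps above comes from the service_name value in chat.py. A hypothetical fragment (the value shown is illustrative; use whatever name you want to appear in the workbench app):

```python
# chat.py (fragment): service_name is the label shown when adding
# a new assistant in the workbench app. The value is illustrative.
service_name = "My Chat Assistant"
```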

Starting the example from CLI

If you're not using VS Code or Codespaces, you can also work from the command line using uv:

```
cd <PATH TO THIS FOLDER>
uv run start-assistant
```

Create your own assistant

Copy the contents of this folder to your project.

  • The paths are already set if you place the folder in the same repo, at a relative path of /<your_projects>/<your_assistant_name> from the repo root
  • If placed in a different location, update the references in the pyproject.toml to point to the appropriate locations for the semantic-workbench-* packages
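For a project managed with uv, the package references mentioned above are typically path dependencies. This is a hypothetical pyproject.toml fragment showing the shape of such an override; the relative path is a placeholder for wherever the semantic-workbench-* libraries live in your checkout:

```toml
# pyproject.toml (fragment): point the workbench packages at their
# location relative to your project. The path below is a placeholder.
[tool.uv.sources]
semantic-workbench-assistant = { path = "../path/to/semantic-workbench-assistant", editable = true }
```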

From Development to Production

It's important to highlight that Semantic Workbench is a development tool; it is not designed to host agents in a production environment. The workbench helps with testing and debugging in an isolated development environment, usually your localhost.

The core of your assistant/AI application (e.g. how it responds to users, invokes tools, and stores data) can be developed with any framework, such as Semantic Kernel, LangChain, or OpenAI Assistants. That is typically the code you will add to chat.py.

Semantic Workbench is not a framework. Dependencies on the semantic-workbench-assistant package are used only to test and debug your code in Semantic Workbench. When an assistant is fully developed and ready for production, configurable settings should be hard-coded, and dependencies on semantic-workbench-assistant and similar packages should be removed.
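One way to make that transition easy is to keep the assistant's core behavior in plain functions with no workbench imports, and confine the workbench wiring to a thin adapter that is dropped for production. This is a hypothetical structural sketch of that separation, not the example's actual code; all names are illustrative, and the core logic is stubbed where a real implementation would call the configured model's SDK:

```python
from dataclasses import dataclass

@dataclass
class AssistantConfig:
    # During development these values come from the workbench's
    # configurable settings; for production, construct them directly.
    model: str
    max_tokens: int

def respond(config: AssistantConfig, user_message: str) -> str:
    # Core logic: framework-agnostic, easy to unit test and to rehost.
    # (A real implementation would call the configured model's SDK here.)
    return f"[{config.model}] echo: {user_message}"

def main() -> None:
    # Production entry point: settings hard-coded, no dependency on
    # semantic-workbench-assistant.
    config = AssistantConfig(model="gpt-4o", max_tokens=512)
    print(respond(config, "hello"))
```

With this split, only main (or its workbench-side counterpart) changes between development and production, while respond stays identical.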