Merge pull request #4009 from zwpaper/dev
docs: add tabby coding integration
---
title: Tabby
description: A step-by-step guide on integrating Jan with Tabby and VSCode, JetBrains, or other IDEs.
keywords:
  [
    Jan,
    Customizable Intelligence,
    LLM,
    local AI,
    privacy focus,
    free and open source,
    private and offline,
    conversational AI,
    no-subscription fee,
    large language models,
    Tabby integration,
    VSCode integration,
    JetBrains integration,
  ]
---

import { Tabs, Steps } from 'nextra/components'

# Tabby

## Integrate Jan with Tabby and Your Favorite IDEs

[Tabby](https://www.tabbyml.com/) is an open-source, self-hosted AI coding assistant.
With Tabby, teams can easily set up their own LLM-powered code completion server.

Tabby provides integrations with VSCode, JetBrains, and other IDEs to help developers code more efficiently,
and it can be used with various LLM services, including Jan.

To integrate Jan with Tabby, follow these steps:

<Steps>

### Step 1: Enable the Jan API Server

To use Tabby with Jan's local server, start the Jan API Server with your chosen model:

1. Click the `Local API Server` (`<>`) button above the Settings. Jan will take you to the **Local API Server** section.
2. Configure the server, including the **IP Port**, **Cross-Origin Resource Sharing (CORS)**, and **Verbose Server Logs**.
3. Press the **Start Server** button.
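
Once the server is running, you can sanity-check it from a short script before wiring Tabby up. A minimal sketch, assuming the default port `1337` used in the Tabby configuration below and Jan's OpenAI-compatible `/v1/models` route:

```python
import urllib.request
import urllib.error

def jan_server_reachable(base_url: str = "http://localhost:1337/v1",
                         timeout: float = 2.0) -> bool:
    """Return True if an OpenAI-compatible /models endpoint answers at base_url."""
    try:
        with urllib.request.urlopen(f"{base_url}/models", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Server not started, wrong port, or network error.
        return False

if __name__ == "__main__":
    print("Jan API server up:", jan_server_reachable())
```

If this prints `False`, re-check the port you configured in the Local API Server section before continuing.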

### Step 2: Find the Model ID and Ensure the Model is Activated

1. Go to `Settings` > `My Models`.
2. Models are listed with their **Model ID** beneath their names.
3. Click the **three dots (⋮)** button next to the model.
4. Select **Start Model** to activate the model.
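
If you prefer to read the Model ID programmatically instead of from the UI, the server's OpenAI-compatible `/v1/models` response can be parsed. A sketch, assuming that response shape; the model name in the sample payload is only an illustration:

```python
import json

def extract_model_ids(models_response: dict) -> list:
    """Pull the "id" field of each entry from an OpenAI-style /v1/models payload."""
    return [entry["id"] for entry in models_response.get("data", [])]

# A trimmed example payload in the OpenAI-compatible shape
# (the model name here is hypothetical):
sample = json.loads(
    '{"object": "list", "data": [{"id": "llama3.2-3b-instruct", "object": "model"}]}'
)
print(extract_model_ids(sample))  # ['llama3.2-3b-instruct']
```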

### Step 3: Install the Tabby Server

Use the following documentation to install the Tabby server:
- [Docker](https://tabby.tabbyml.com/docs/quick-start/installation/docker/)
- [Apple Silicon](https://tabby.tabbyml.com/docs/quick-start/installation/apple/)
- [Linux](https://tabby.tabbyml.com/docs/quick-start/installation/linux/)
- [Windows](https://tabby.tabbyml.com/docs/quick-start/installation/windows/)

Then follow the steps to connect Jan with the Tabby server:
[Connect Jan with Tabby](https://tabby.tabbyml.com/docs/references/models-http-api/jan.ai/).

For example, to connect Jan with Tabby, save the following configuration to `~/.tabby/config.toml`:

```toml title="~/.tabby/config.toml"
# Chat model
[model.chat.http]
kind = "openai/chat"
model_name = "model_id" # replace with the Model ID from Step 2
api_endpoint = "http://localhost:1337/v1"
api_key = ""
```

Currently, Jan's completion and embedding APIs are under construction.
Once they are available, you will also be able to connect Jan with Tabby for completion and embedding tasks.
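
Before pointing Tabby at the server, you can confirm that the endpoint and model name in `config.toml` work by issuing the same OpenAI-style chat call yourself. A minimal sketch; the `"model_id"` placeholder mirrors the configuration above, so substitute your real Model ID from Step 2:

```python
import json
import urllib.request

def chat_request(model_id: str, prompt: str,
                 base_url: str = "http://localhost:1337/v1") -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for Jan's API server."""
    payload = {
        "model": model_id,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = chat_request("model_id", "Write a hello-world program in Rust.")
# To actually send it, the server from Step 1 must be running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```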

### Step 4: Install the Tabby Extension in Your Favorite IDE

Refer to the following documentation to install the Tabby extension in your favorite IDE:
- [Visual Studio Code](https://tabby.tabbyml.com/docs/extensions/installation/vscode/)
- [JetBrains IntelliJ Platform](https://tabby.tabbyml.com/docs/extensions/installation/intellij/)
- [Vim / NeoVim](https://tabby.tabbyml.com/docs/extensions/installation/vim/)

</Steps>

## How to Use Tabby with Jan Integration

### Answer Engine: Chat with Your Code and Documentation

Tabby offers an [Answer Engine](https://tabby.tabbyml.com/docs/administration/answer-engine/) on its homepage,
which can leverage the Jan LLM and related context such as code, documentation, and web pages to answer your questions.

Simply open the Tabby homepage at [localhost:8080](http://localhost:8080) and ask your questions.

![Answer Engine](./_assets/tabby-answer-engine.png)

### IDE Chat Sidebar

After installing the Tabby extension in your preferred IDE, you can chat with Jan to:

1. Discuss your code, receive suggestions, and ask for assistance.
2. Ask Jan to edit your code inline, then review and accept the proposed changes.

![Chat Sidebar](./_assets/tabby-chat-sidebar.png)