Integrating Language Models with Visual Studio Code (VS Code) #90
Conversation
0517b5b to 26bf66f
@MichaelClifford, I have added this section in the markdown file. Thank you for pointing it out.
Thanks for sharing the link! Currently looking into it.
```
pip install llama-cpp-python[server]

# Start the server
python3 -m llama_cpp.server --model <model_path>
```
Why are you suggesting to do it this way instead of using the llamacpp playground container we use in the rest of the repo?
Here we have two different cases:
- where we interact with the model using the prompt, edit, and code-explanation features, and
- where we use the autocomplete feature, which requires a different model.
In this article, I have suggested another way to set up a server for the model used for autocompletion. If anyone wants to use both features, they can use the llama-cpp playground container for case 1 and set up the llama-cpp-python server for case 2 (see the sketch below). Does this approach make sense to you? Wdyt?
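For illustration, the two cases could be wired together in Continue's `config.json` roughly like this (the titles, model names, and ports are placeholders, and depending on the provider you may also need an `apiKey` field):

```json
{
  "models": [
    {
      "title": "Playground model (prompt, edit, explain)",
      "provider": "openai",
      "model": "<chat-model-name>",
      "apiBase": "http://localhost:8001/v1"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Autocomplete model (llama-cpp-python server)",
    "provider": "openai",
    "model": "<autocomplete-model-name>",
    "apiBase": "http://localhost:8000/v1"
  }
}
```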
Thanks.
Why would they not create a second playground container with the other model?
Is using two different models what "Continue" recommends?
That's a good question.
- Yes, one can create a second playground container with the other model.
- No, there is no such restriction; one can use the same model for both cases. However, Continue strongly recommends using a smaller model (1B-3B parameters) for tab autocompletion. Currently, I am exploring models that I can use in both cases without frequent crashes.
Yes, this is what I experienced while using containerized options: frequent crashing. I hesitate to suggest two models, a non-containerized approach, or one that crashes frequently. It would be ideal if we could figure out why it's crashing and come up with a solution that uses one containerized model server for both.
Hi @MichaelClifford,
I observed that the crashes happen because we're sending too many requests to the model too quickly, especially when using the autocomplete feature. To fix this, I increased the `debounceDelay` parameter to 4000 ms so that there is a short wait between each request we send to the server. Now it seems to be working. Please take a look.
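For reference, the relevant section of `config.json` now looks roughly like this (only `debounceDelay` was changed; the rest of the file is unchanged):

```json
{
  "tabAutocompleteOptions": {
    "debounceDelay": 4000
  }
}
```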
Thank you!
The `tabAutocompleteModel` is similar to other objects in the `models` array of `config.json`. You have the flexibility to choose any model you prefer, but it's recommended to use a small model for tab autocomplete, such as deepseek-1b, starcoder-1b, starcoder-3b, or stable-code-3b for optimal performance.
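For example, a `tabAutocompleteModel` entry pointing at a locally served model might look like the following sketch (the title, provider, model name, and endpoint are illustrative placeholders; adjust them to your setup):

```json
{
  "tabAutocompleteModel": {
    "title": "Local autocomplete model",
    "provider": "openai",
    "model": "<autocomplete-model-name>",
    "apiBase": "http://localhost:8000/v1"
  }
}
```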
Do any of these other models have MIT or Apache-2.0 License?
DeepSeek has the MIT license:
- https://huggingface.co/deepseek-ai/deepseek-vl-1.3b-chat
- https://github.com/deepseek-ai/DeepSeek-LLM/blob/HEAD/LICENSE-CODE
StarCoder has the BigCode OpenRAIL-M v1 license:
- https://huggingface.co/bigcode/starcoder2-15b#license
- explained here: https://www.bigcode-project.org/docs/pages/model-license/
Open Responsible AI Licenses (OpenRAIL) are licenses designed to permit free and open access, re-use, and downstream distribution of derivatives of AI artifacts, for research, commercial or non-commercial purposes, as long as the use restrictions present in the license always apply (including to derivative works).
- use restrictions: https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement#attachment-a-use-restrictions
And stable-code has a non-commercial research community license.
Welp, I didn't completely read the DeepSeek licensing. The code in their repository is under the MIT license, but:
"The use of DeepSeek-VL Base/Chat models is subject to DeepSeek Model License. DeepSeek-VL series (including Base and Chat) supports commercial use."
Thanks @KushGuptaRH! We'll have to evaluate the use of these other licenses. I can follow up on this.
No problem! I'll also note that our colleagues from Germany have done some work around integrating Ollama with Continue in disconnected environments utilizing OpenShift dev spaces if it helps the awesome work y'all are doing: https://www.opensourcerers.org/2023/11/06/a-personal-ai-assistant-for-developers-that-doesnt-phone-home/
# Interacting with the "Continue" Extension: Practical Examples

Now that you've configured the "Continue" extension, let's explore how you can effectively interact with the language model directly within VS Code. Here are several ways to engage with the extension:

1. **Prompting for code generation:** Open the "Continue" panel in VS Code and prompt the extension with a specific task, such as "Write code to add two numbers." The extension will then provide relevant code suggestions based on your input prompt, aiding in code generation and text completion tasks.

![Prompt-response](../assets/interaction-vscode1.png)

2. **Querying Working Code:** Copy your existing code snippet or press `⌘ + L` to paste it into the "Continue" panel, then pose a question such as "Explain this section of the code." The extension (LLM) will analyze the code snippet and provide explanations or insights to help you understand it better.

![Querying Working Code](../assets/interaction-vscode2.png)

3. **Editing Code in Script:** Edit your Python code directly within a `.py` script file using the "Continue" extension. Press `⌘ + I` to initiate edit mode. You can then refine a specific line of code or request enhancements to make it more efficient. The extension will suggest replacement code and provide options for you to accept or reject the proposed changes.

![Editing Code in Script](../assets/interaction-vscode3.png)

By exploring these interactions, users can fully leverage the capabilities of language models within VS Code, enhancing their coding experience and productivity.

4. **Tab Autocomplete:**

![autocompletion-config-example](../assets/autocomplete_example.png)

In addition to its core functionalities, the "Continue" extension offers a tab autocomplete feature in its pre-release version. This feature enhances the coding experience by providing auto-complete suggestions tailored to your coding context within VS Code. To leverage this functionality with the custom model, follow these steps to configure the `config.json` file:
I'm not sure we need this section here. This is mainly just generic info you can get from the Continue docs, right?
(It's good info to know, but I don't think it's that relevant here.)
Thanks for the feedback. It does make sense. Removed it and updated the file.
Please take a look. Thank you @MichaelClifford
97c7a3f to 531dfab
Thanks @suppathak, can you also sign your commit so it will pass the DCO check?
## Step 2: Ensure Model Service is Running

Before configuring the "Continue" extension, ensure that the Model Service is up and running. Follow the instructions provided in the existing (README.md)[README.md] document to build and deploy the Model Service. Note the port and endpoint details for the Model Service.
Suggested change:
Before configuring the "Continue" extension, ensure that the Model Service is up and running. Follow the instructions provided in the existing [README.md](README.md) document to build and deploy the Model Service. Note the port and endpoint details for the Model Service.
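As a quick sanity check before wiring up the extension (the port below is a placeholder; use whatever port your Model Service reports, and this assumes the OpenAI-compatible llama-cpp-python API used elsewhere in this guide), you can confirm the endpoint is reachable:

```bash
# List the models exposed by the running Model Service
curl http://localhost:8001/v1/models
```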
In addition to its core functionalities, the "Continue" extension offers a tab auto complete feature in its `pre-release version`. This feature enhances the coding experience by providing aut-complete suggestions tailored to your coding context within VS Code. To leverage this functionality with the custom model, follow these steps to configure the `config.json` file:
Suggested change:
In addition to its core functionalities, the "Continue" extension offers a tab auto complete feature in its pre-release version. This feature enhances the coding experience by providing auto-complete suggestions tailored to your coding context within VS Code. To leverage this functionality with the custom model, follow these steps to configure the `config.json` file:
}
needs closing ```
a3b0bd2 to 03bfd67
Signed-off-by: Surya Prakash Pathak <[email protected]>
7b0076b to 1675822
LGTM
Thanks @suppathak
In this guide, we'll walk through the process of integrating a language model with Visual Studio Code (VS Code) to enhance code generation tasks and developer productivity.