
Integrating Language Models with Visual Studio Code (VS Code) #90

Merged
merged 1 commit into from
Apr 1, 2024

Conversation

suppathak
Collaborator

In this guide, we'll walk through the process of integrating a language model with Visual Studio Code (VS Code) to enhance code generation tasks and developer productivity.

@suppathak
Collaborator Author

#66

@MichaelClifford
Collaborator

@suppathak
Copy link
Collaborator Author

suppathak commented Mar 22, 2024

  • This is missing the use of the "tab autocomplete feature". Did you explore using this feature of Continue?

@MichaelClifford , I have added this section in the markdown file. Thank you for pointing it out.

Thanks for sharing the link! Currently looking into it.

Comment on lines 72 to 77
```shell
pip install llama-cpp-python[server]

# Start the server
python3 -m llama_cpp.server --model <model_path>
```

Collaborator

Why are you suggesting doing it this way instead of using the llama.cpp playground container we use in the rest of the repo?

Collaborator Author

Here we have two different cases:

  1. where we interact with the model using the prompt, edit, and code-explanation features;
  2. where we interact with the autocomplete feature, which requires a different model.

In this article, I have suggested another way to set up a server for the model used for autocompletion. If anyone wants to use both features, they can use the llama-cpp playground container for case 1 and set up the llama-cpp-python server for case 2. Does this approach make sense to you? wdyt?
Thanks.
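The two-endpoint split described above could be sketched in Continue's `config.json` roughly as follows. This is a hedged illustration only: the ports, model names, and titles are assumptions made up for the example, and the field names follow Continue's config reference at the time, which may change.

```json
{
  "models": [
    {
      "title": "Playground chat model (case 1, assumed port)",
      "provider": "openai",
      "model": "llama-2-7b-chat",
      "apiBase": "http://localhost:8001/v1"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Autocomplete model (case 2, llama-cpp-python server)",
    "provider": "openai",
    "model": "starcoder-1b",
    "apiBase": "http://localhost:8000/v1"
  }
}
```

Here the playground container would serve the prompt/edit/explain features while the separate llama-cpp-python server backs tab autocomplete.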

Collaborator

Why would they not create a second playground container with the other model?

Is using 2 different models what "Continue" recommends?

Collaborator Author

That's a good question.

  • Yes, one can create a second playground container with the other model.

  • No, there is no such restriction; one can use the same model for both cases. However, they strongly recommend using a smaller model (1B–3B parameters) for the tab autocomplete option. Currently, I am exploring models that I can use in both cases without frequent crashes.

Collaborator

Yes, this is what I experienced while using containerized options: frequent crashing. I hesitate to suggest 2 models, a non-containerized approach, or one that crashes frequently. It would be ideal if we can figure out why it's crashing and come up with a solution that uses 1 containerized model server for both.

Collaborator Author

Hi @MichaelClifford,

I observed that the crashes happen because we send too many requests to the model too quickly, especially when using the autocomplete feature. To fix this, I increased the `debounceDelay` setting to 4000 ms so that there is a short wait between each request we send to the server. Now it seems to be working. PTAL.

Thank you!
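The effect of `debounceDelay` can be illustrated with a small Python sketch. This is illustrative only and not Continue's actual implementation: Continue delays and coalesces autocomplete requests, while this simplified drop-style debouncer just discards requests that arrive too soon after the previous accepted one.

```python
import time


class Debouncer:
    """Drop requests that arrive within `delay_ms` of the previous accepted one."""

    def __init__(self, delay_ms):
        self.delay = delay_ms / 1000.0   # delay window in seconds
        self.last = float("-inf")        # time of last accepted request

    def should_send(self, now=None):
        """Return True if enough time has passed to forward this request."""
        now = time.monotonic() if now is None else now
        if now - self.last >= self.delay:
            self.last = now
            return True
        return False


d = Debouncer(delay_ms=4000)
print(d.should_send(now=0.0))   # first request is accepted
print(d.should_send(now=1.0))   # only 1 s later: dropped
print(d.should_send(now=5.0))   # 5 s after the last accepted request: accepted
```

With a 4000 ms window, rapid-fire keystrokes produce far fewer requests to the model server, which matches the crash fix described above.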

}
```

The `tabAutocompleteModel` entry is similar to the other objects in the `models` array of `config.json`. You have the flexibility to choose any model you prefer, but it is recommended to use a small model for tab autocomplete, such as deepseek-1b, starcoder-1b, starcoder-3b, or stable-code-3b, for optimal performance.
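For reference, a minimal `tabAutocompleteModel` entry might look like the sketch below; the model name, provider, and port are assumptions for illustration, and the `tabAutocompleteOptions.debounceDelay` value is the 4000 ms setting mentioned earlier in this thread. Consult Continue's documentation for the current schema.

```json
{
  "tabAutocompleteModel": {
    "title": "Tab Autocomplete",
    "provider": "openai",
    "model": "starcoder-1b",
    "apiBase": "http://localhost:8000/v1"
  },
  "tabAutocompleteOptions": {
    "debounceDelay": 4000
  }
}
```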
Collaborator

Do any of these other models have MIT or Apache-2.0 License?


DeepSeek has the MIT license.

starcoder has the BigCode OpenRAIL-M v1 license.

And stable-code has a non-commercial research community license.


Welp, I didn't completely read the DeepSeek licensing; the code in their repository is under the MIT license, but:

"The use of DeepSeek-VL Base/Chat models is subject to DeepSeek Model License. DeepSeek-VL series (including Base and Chat) supports commercial use."

Collaborator

Thanks @KushGuptaRH ! We'll have to evaluate the use of these other licenses. I can follow up on this.


No problem! I'll also note that our colleagues from Germany have done some work on integrating Ollama with Continue in disconnected environments using OpenShift Dev Spaces, if it helps the awesome work y'all are doing: https://www.opensourcerers.org/2023/11/06/a-personal-ai-assistant-for-developers-that-doesnt-phone-home/

Comment on lines 29 to 51
# Interacting with the "Continue" Extension: Practical Examples

Now that you've configured the "Continue" extension, let's explore how you can effectively interact with the language model directly within VS Code. Here are several ways to engage with the extension:

1. **Prompting for code generation:** Open the "Continue" panel in VS Code and prompt the extension with a specific task, such as "Write code to add two numbers." The extension will then provide relevant code autocompletion suggestions based on your input prompt, aiding in code generation and text completion tasks.

![Prompt-response](../assets/interaction-vscode1.png)

2. **Querying Working Code:** Copy your existing code snippet or press `⌘ + L` to paste it into the "Continue" panel, then pose a question such as "Explain this section of the code." The extension (LLM) will analyze the code snippet and provide explanations or insights to help you understand it better.

![Querying Working Code](../assets/interaction-vscode2.png)

3. **Editing Code in Script:** Edit your Python code directly within a `.py` script file using the "Continue" extension. Press `⌘ + I` to initiate the edit mode. You can then refine a specific line of code or request enhancements to make it more efficient. The extension will suggest additional code by replacing your edited code and provide options for you to accept or reject the proposed changes.

![Editing Code in Script](../assets/interaction-vscode3.png)

By exploring these interactions, users can fully leverage the capabilities of language models within VS Code, enhancing their coding experience and productivity.

4. **Tab Autocomplete:**

![autocompletion-config-example](../assets/autocomplete_example.png)

In addition to its core functionalities, the "Continue" extension offers a tab autocomplete feature in its pre-release version. This feature enhances the coding experience by providing auto-complete suggestions tailored to your coding context within VS Code. To leverage this functionality with the custom model, follow these steps to configure the `config.json` file:
Collaborator

I'm not sure we need this section here. This is mainly just generic info you can get from the Continue docs, right?

(It's good info to know, but I don't think it's that relevant here.)

Collaborator Author

Thanks for the feedback. It does make sense. Removed it and updated the file.
Please take a look. Thank you @MichaelClifford

@suppathak suppathak force-pushed the vscode-extension branch 2 times, most recently from 97c7a3f to 531dfab Compare April 1, 2024 02:11
@MichaelClifford
Collaborator

Thanks @suppathak, can you also sign your commit so it will pass the DCO check?


## Step 2: Ensure Model Service is Running

Before configuring the "Continue" extension, ensure that the Model Service is up and running. Follow the instructions provided in the existing (README.md)[README.md] document to build and deploy the Model Service. Note the port and endpoint details for the Model Service.
Collaborator

Suggested change
Before configuring the "Continue" extension, ensure that the Model Service is up and running. Follow the instructions provided in the existing (README.md)[README.md] document to build and deploy the Model Service. Note the port and endpoint details for the Model Service.
Before configuring the "Continue" extension, ensure that the Model Service is up and running. Follow the instructions provided in the existing [README.md](README.md) document to build and deploy the Model Service. Note the port and endpoint details for the Model Service.

}
```

In addition to its core functionalities, the "Continue" extension offers a tab auto complete feature in its `pre-release version`. This feature enhances the coding experience by providing aut-complete suggestions tailored to your coding context within VS Code. To leverage this functionality with the custom model, follow these steps to configure the `config.json` file:
Collaborator

Suggested change
In addition to its core functionalities, the "Continue" extension offers a tab auto complete feature in its `pre-release version`. This feature enhances the coding experience by providing aut-complete suggestions tailored to your coding context within VS Code. To leverage this functionality with the custom model, follow these steps to configure the `config.json` file:
In addition to its core functionalities, the "Continue" extension offers a tab auto complete feature in its pre-release version. This feature enhances the coding experience by providing auto-complete suggestions tailored to your coding context within VS Code. To leverage this functionality with the custom model, follow these steps to configure the `config.json` file:

Comment on lines 55 to 57
}

Collaborator

needs closing ```

Collaborator

@MichaelClifford MichaelClifford left a comment


LGTM
Thanks @suppathak

@MichaelClifford MichaelClifford merged commit 936d0be into containers:main Apr 1, 2024
2 checks passed