
Add Ollama Model for local usage of yorkie intelligence #297

Merged
12 commits merged into yorkie-team:main on Aug 20, 2024

Conversation

sihyeong671
Contributor

@sihyeong671 sihyeong671 commented Aug 14, 2024

What this PR does / why we need it?

This PR adds the Docker image necessary for the Ollama model to the docker-compose file and updates the environment variables required for utilizing the model.

Any background context you want to provide?

The changes in this PR are essential for incorporating the Ollama model into the project. By adding the necessary Docker image and updating the environment variables, the Ollama model can be seamlessly integrated and utilized.

What are the relevant tickets?

Fixes #255

Checklist

  • Added relevant tests or not required
  • Didn't break anything

Summary by CodeRabbit

  • New Features

    • Added support for Yorkie Intelligence, enhancing collaborative editing capabilities.
    • Integrated new chat model selection logic, allowing dynamic model instantiation based on configuration.
    • Introduced new Docker services for Yorkie Intelligence, improving application scalability.
  • Bug Fixes

    • Modified settings logic to broaden criteria for enabling Yorkie Intelligence.
  • Dependencies

    • Added a new dependency for Ollama to support language processing features.

Contributor

coderabbitai bot commented Aug 14, 2024

Walkthrough

The recent updates significantly enhance the backend configuration and Docker setup. Key improvements include the integration of the Ollama model for Yorkie Intelligence, offering greater flexibility and reduced costs compared to the previous OpenAI model. Environment variables were updated to support this functionality, and the Docker configuration expanded to include new services, streamlining collaborative editing features and ensuring proper authentication.

Changes

| Files | Change Summary |
| --- | --- |
| `backend/.env.development` | Updated GitHub credentials; changed `YORKIE_INTELLIGENCE` to the model name `"ollama:gemma2:2b"`; added `OLLAMA_HOST_URL`. |
| `backend/docker/docker-compose.yml` | Added new service `yorkie-intelligence` using `ollama/ollama:latest`, exposing port 11434. |
| `backend/docker/docker-compose-full.yml` | Included new service `yorkie-intelligence` with a configuration similar to the basic compose file. |
| `backend/package.json` | Added new dependency `@langchain/ollama` version `^0.0.4` for model handling. |
| `backend/src/langchain/langchain.module.ts` | Updated `chatModelFactory` to dynamically select between `ChatOllama` and `ChatOpenAI` based on config. |
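Based on the change summary, the added compose service presumably looks something like the following sketch. The service name, image, and port 11434 come from the summary; the restart policy and volume mount are illustrative assumptions, not the exact file contents.

```yaml
# Sketch of the yorkie-intelligence service described in the summary.
services:
  yorkie-intelligence:
    image: ollama/ollama:latest
    restart: always          # assumption: keep the model server running
    ports:
      - "11434:11434"        # Ollama's default API port
    volumes:
      - ollama:/root/.ollama # assumption: persist downloaded models

volumes:
  ollama:
```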

Assessment against linked issues

| Objective | Addressed | Explanation |
| --- | --- | --- |
| Model selection by changing model name in env file (#255) | ✅ | |
| Download Ollama model automatically when running codepair (#255) | ❌ | No implementation for automatic downloading is present. |

Poem

🐰 In a meadow bright and fair,
The Ollama model fluffs the air.
With Yorkie's help, we leap so high,
No need for fees, we now can fly!
Together we'll code with glee,
A rabbit's dream of freedom, you see! 🌼✨


Thank you for using CodeRabbit. We offer it for free to the OSS community and would appreciate your support in helping us grow. If you find it useful, would you consider giving us a shout-out on your favorite social media?

Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai generate interesting stats about this repository and render them as a table.
    • @coderabbitai show all the console.log statements in this repository.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (invoked as PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai resolve resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Additionally, you can add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between aaa1270 and 57178b5.

Files ignored due to path filters (1)
  • backend/package-lock.json is excluded by !**/package-lock.json
Files selected for processing (6)
  • backend/.env.development (2 hunks)
  • backend/docker/docker-compose-full.yml (1 hunks)
  • backend/docker/docker-compose.yml (1 hunks)
  • backend/package.json (1 hunks)
  • backend/src/langchain/langchain.module.ts (1 hunks)
  • backend/src/settings/settings.service.ts (1 hunks)
Additional context used
Gitleaks
backend/.env.development

8-9: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)

Additional comments not posted (6)
backend/src/settings/settings.service.ts (1)

13-13: Verify the impact of broadening the enable condition.

The condition for enabling yorkieIntelligence has been changed to check if the value is not "false", which broadens the criteria. Ensure this change aligns with the intended behavior and verify that all configurations are correctly set to avoid unexpected enabling of the feature.
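A minimal sketch of the broadened check described here (the function and field names are assumptions based on the summary, not the exact source):

```typescript
// Sketch: previously the feature was presumably enabled only for an exact
// value; after this change it is enabled whenever the value is not "false",
// so any model name (e.g. "ollama:gemma2:2b") also turns the feature on.
function isYorkieIntelligenceEnabled(value: string | undefined): boolean {
  return value !== undefined && value !== "false";
}

console.log(isYorkieIntelligenceEnabled("ollama:gemma2:2b")); // true
console.log(isYorkieIntelligenceEnabled("false")); // false
```

Note that with this check, any typo in the variable (e.g. `"flase"`) silently enables the feature, which is why the reviewer asks to verify all configurations.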

backend/src/langchain/langchain.module.ts (1)

8-19: Verify model instantiation logic and compatibility.

The chatModelFactory now conditionally instantiates ChatOllama or ChatOpenAI based on the YORKIE_INTELLIGENCE environment variable. Ensure that both models are correctly implemented and compatible with the rest of the system, and verify that the environment variable is set appropriately in all deployment environments.

backend/docker/docker-compose.yml (1)

12-16: Verify the integration of the yorkie-intelligence service.

The new service yorkie-intelligence has been added with the image ollama/ollama:latest, exposing port 11434. Ensure there are no port conflicts with other services and that the service is correctly integrated into the overall architecture.

Verification successful

Integration of yorkie-intelligence service verified successfully.

The yorkie-intelligence service is correctly integrated with no port conflicts. It uses port 11434, which is not shared with any other service in the Docker Compose files.

  • Port 11434 is used only by the yorkie-intelligence service.
  • No other services conflict with this port.
Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the integration of the `yorkie-intelligence` service in Docker Compose.

# Test: Check for port conflicts and ensure correct service integration.
rg --type yaml 'ports:.*11434'

Length of output: 32


Script:

#!/bin/bash
# List all services and their exposed ports in Docker Compose files to verify integration and check for conflicts.
rg --type yaml 'ports:' -A 3

Length of output: 1896

backend/docker/docker-compose-full.yml (1)

43-47: Ensure proper security and resource allocation for the new service.

The yorkie-intelligence service has been added with a restart policy and port mapping. Consider the following:

  1. Security: Ensure that the exposed port 11434 is secured and only accessible where necessary.
  2. Resource Allocation: Depending on the resource requirements of the ollama model, you might need to specify resource limits (e.g., CPU, memory) to prevent overconsumption.

Ensure that the new service integrates well with existing services and that there are no conflicts with other configurations.
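If resource limits were added as suggested, a compose-level sketch could look like this. All values here are illustrative assumptions and not part of this PR; the right limits depend on the chosen Ollama model's footprint.

```yaml
# Illustrative sketch only: resource limits for the yorkie-intelligence
# service. A 2B-parameter model is small, but limits prevent the container
# from starving other services on the host.
services:
  yorkie-intelligence:
    image: ollama/ollama:latest
    restart: always
    ports:
      - "11434:11434"
    deploy:
      resources:
        limits:
          cpus: "2.0"   # assumption: cap at two CPUs
          memory: 4g    # assumption: headroom for gemma2:2b
```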

backend/package.json (1)

35-35: Verify the necessity and compatibility of the new dependency.

The @langchain/ollama package has been added. Ensure that:

  1. This dependency is necessary for the new functionality.
  2. It is compatible with other packages in your project, especially those within the Langchain ecosystem.

Ensure that the dependency is used effectively in the codebase and does not introduce conflicts.

backend/.env.development (1)

40-43: Clarify the model configuration for Yorkie Intelligence.

The YORKIE_INTELLIGENCE variable now specifies a model name "gemma2:2b". Ensure that:

  1. The model name is correctly configured and available in the Ollama library.
  2. Documentation is updated to reflect this change, explaining how to modify or update the model if needed.

Check the integration of this model with the application to ensure it functions as expected.
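Putting the two variables together, the development env file presumably contains entries along these lines. The values are taken from the change summary; the host URL is an assumption matching the compose port mapping.

```shell
# backend/.env.development (sketch)
# "ollama:gemma2:2b" encodes provider "ollama" and model "gemma2:2b"
YORKIE_INTELLIGENCE="ollama:gemma2:2b"
# Assumed URL where the ollama container from docker-compose is reachable
OLLAMA_HOST_URL=http://localhost:11434
```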

- add docker image in docker compose file
- change yorkie intelligence env var
- add lib related to ollama
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 57178b5 and aadd96c.

Files ignored due to path filters (1)
  • backend/package-lock.json is excluded by !**/package-lock.json
Files selected for processing (6)
  • backend/.env.development (2 hunks)
  • backend/docker/docker-compose-full.yml (1 hunks)
  • backend/docker/docker-compose.yml (1 hunks)
  • backend/package.json (1 hunks)
  • backend/src/langchain/langchain.module.ts (1 hunks)
  • backend/src/settings/settings.service.ts (1 hunks)
Files skipped from review due to trivial changes (1)
  • backend/package.json
Files skipped from review as they are similar to previous changes (5)
  • backend/.env.development
  • backend/docker/docker-compose-full.yml
  • backend/docker/docker-compose.yml
  • backend/src/langchain/langchain.module.ts
  • backend/src/settings/settings.service.ts

@devleejb devleejb self-assigned this Aug 15, 2024
@devleejb devleejb self-requested a review August 15, 2024 01:05
@devleejb devleejb assigned sihyeong671 and unassigned devleejb Aug 15, 2024
Member

@devleejb devleejb left a comment


I have a question: what happens if someone requests Yorkie Intelligence while the model is downloading?

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between aadd96c and 2dadca1.

Files selected for processing (1)
  • backend/src/langchain/langchain.module.ts (1 hunks)
Files skipped from review as they are similar to previous changes (1)
  • backend/src/langchain/langchain.module.ts

- fix docker compose file (user can change ollama container port)
- fix readme docs (add --env-file option)
- add usable model
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Outside diff range, codebase verification and nitpick comments (1)
README.md (1)

62-62: Ensure Proper Configuration of Environment Variables

The .env.development file contains placeholders for GITHUB_CLIENT_ID and GITHUB_CLIENT_SECRET. Please replace these placeholders with actual values to ensure proper configuration and functionality of the application.

Analysis chain

LGTM! Verify the .env.development setup.

The addition of --env-file ./backend/.env.development to the docker-compose command is beneficial for managing environment configurations.

Ensure that the .env.development file is correctly set up with all necessary environment variables.

Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify that the `.env.development` file contains necessary environment variables.

# Test: Check for the presence of environment variables. Expect: Non-empty values for each variable.
grep -E 'GITHUB_CLIENT_ID|GITHUB_CLIENT_SECRET' backend/.env.development

Length of output: 302

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 2dadca1 and 0d2d77a.

Files selected for processing (6)
  • README.md (2 hunks)
  • backend/.env.development (1 hunks)
  • backend/README.md (1 hunks)
  • backend/docker/docker-compose-full.yml (1 hunks)
  • backend/docker/docker-compose.yml (1 hunks)
  • backend/src/langchain/langchain.module.ts (1 hunks)
Files skipped from review as they are similar to previous changes (4)
  • backend/.env.development
  • backend/docker/docker-compose-full.yml
  • backend/docker/docker-compose.yml
  • backend/src/langchain/langchain.module.ts
Additional comments not posted (2)
backend/README.md (1)

27-27: LGTM! Verify the .env.development setup.

The addition of --env-file ./backend/.env.development to the docker-compose command is a good practice for managing environment configurations.

Ensure that the .env.development file is correctly set up with all necessary environment variables.

Verification successful

The .env.development file is correctly set up with placeholders for necessary environment variables. The file contains entries for GITHUB_CLIENT_ID and GITHUB_CLIENT_SECRET, which are expected to be filled with actual values by the user. This is a standard practice for environment configuration files.

Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify that the `.env.development` file contains necessary environment variables.

# Test: Check for the presence of environment variables. Expect: Non-empty values for each variable.
grep -E 'GITHUB_CLIENT_ID|GITHUB_CLIENT_SECRET' backend/.env.development

Length of output: 302

README.md (1)

91-91: LGTM! Verify the .env.development setup.

The addition of --env-file ./backend/.env.development to the docker-compose command is beneficial for managing environment configurations.

Ensure that the .env.development file is correctly set up with all necessary environment variables.

@sihyeong671 sihyeong671 requested a review from devleejb August 18, 2024 07:14
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Outside diff range, codebase verification and nitpick comments (1)
backend/src/langchain/langchain.module.ts (1)

49-49: Improve the error message for clarity.

Consider providing additional guidance on where to verify the model name.

- throw new Error(`${modelType} is not found. please check your model name`);
+ throw new Error(`${modelType} is not found. Please check your model name in the environment configuration.`);
Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 0d2d77a and 0674eea.

Files ignored due to path filters (1)
  • backend/package-lock.json is excluded by !**/package-lock.json
Files selected for processing (5)
  • backend/.env.development (1 hunks)
  • backend/docker/docker-compose-full.yml (1 hunks)
  • backend/docker/docker-compose.yml (1 hunks)
  • backend/package.json (1 hunks)
  • backend/src/langchain/langchain.module.ts (1 hunks)
Files skipped from review due to trivial changes (2)
  • backend/docker/docker-compose-full.yml
  • backend/package.json
Files skipped from review as they are similar to previous changes (2)
  • backend/.env.development
  • backend/docker/docker-compose.yml
Additional context used
Biome
backend/src/langchain/langchain.module.ts

[error] 32-32: Change to an optional chain.

Unsafe fix: Change to an optional chain.

(lint/complexity/useOptionalChain)

Additional comments not posted (3)
backend/src/langchain/langchain.module.ts (3)

3-3: Import of ChatOllama is appropriate.

The import statement is necessary for the new functionality involving the ChatOllama model.


6-22: modelList definition is well-structured.

The modelList object provides a clear mapping of providers to their models, facilitating model selection based on the environment variable.


26-50: chatModelFactory logic is sound.

The factory correctly parses the YORKIE_INTELLIGENCE environment variable, validates the model, and instantiates the appropriate chat model. Error handling is in place for invalid model types.

Ensure that the YORKIE_INTELLIGENCE environment variable is set correctly in the deployment environment to prevent runtime errors.
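The parsing and validation flow described above can be sketched roughly as follows. The `modelList` contents and helper name are assumptions inferred from the review comments; only the `"ollama:gemma2:2b"` format and the ollama/openai split are confirmed by this PR.

```typescript
// Hypothetical sketch of the selection logic in langchain.module.ts.
// Model names other than "gemma2:2b" are illustrative placeholders.
type ModelList = { [provider: string]: string[] };

const modelList: ModelList = {
  ollama: ["gemma2:2b"],
  openai: ["gpt-4o-mini"],
};

function resolveModel(envValue: string): { provider: string; model: string } {
  // "ollama:gemma2:2b" -> provider "ollama", model "gemma2:2b"
  const [provider, ...rest] = envValue.split(":");
  const model = rest.join(":");
  if (!modelList[provider]?.includes(model)) {
    throw new Error(`${envValue} is not found. Please check your model name.`);
  }
  return { provider, model };
}

console.log(resolveModel("ollama:gemma2:2b")); // { provider: "ollama", model: "gemma2:2b" }
```

The factory would then instantiate `ChatOllama` for the `ollama` provider (pointing at `OLLAMA_HOST_URL`) and `ChatOpenAI` otherwise.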

Verification successful

Ensure YORKIE_INTELLIGENCE is correctly configured.

The YORKIE_INTELLIGENCE environment variable is referenced in multiple files, including:

  • backend/src/settings/settings.service.ts: Used to determine if a feature is enabled.
  • backend/src/langchain/langchain.module.ts: Used to determine the model type.
  • backend/docker/docker-compose-full.yml: Set to false.

Verify that the YORKIE_INTELLIGENCE variable is set appropriately in the deployment environment to prevent runtime errors related to model configuration.

Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify that the YORKIE_INTELLIGENCE environment variable is set in the environment files.

# Test: Check for the presence of YORKIE_INTELLIGENCE in environment files. Expect: At least one occurrence.
rg --type env YORKIE_INTELLIGENCE

Length of output: 63


Script:

#!/bin/bash
# Description: Search for the YORKIE_INTELLIGENCE environment variable in common configuration files.

# Test: Check for the presence of YORKIE_INTELLIGENCE in any file. Expect: At least one occurrence.
rg 'YORKIE_INTELLIGENCE' --glob '*.{env,js,json,ts,py,yml,yaml}'

Length of output: 349

Tools
Biome

[error] 32-32: Change to an optional chain.

Unsafe fix: Change to an optional chain.

(lint/complexity/useOptionalChain)

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 0674eea and 4ec2f73.

Files selected for processing (1)
  • backend/src/langchain/langchain.module.ts (1 hunks)
Files skipped from review as they are similar to previous changes (1)
  • backend/src/langchain/langchain.module.ts

@devleejb devleejb self-requested a review August 20, 2024 13:57
Member

@devleejb devleejb left a comment


Thank you for your contribution.

@devleejb devleejb merged commit 95d8a1f into yorkie-team:main Aug 20, 2024
2 checks passed
minai621 pushed a commit that referenced this pull request Nov 5, 2024
* Add .gitignore into root project directory(#279)

* chore: just combine front, back ignore file

- remove .gitignore in each folder

* chore: fix legacy file & separate OD part in gitignore

* Add ollama llm for yorkie intelligence # 255

- add docker image in docker compose file
- change yorkie intelligence env var
- add lib related to ollama

* apply formatting

* Add ollama model option #255

- fix docker compose file (user can change ollama container port)
- fix readme docs(add --env-file option)
- add usable model

* feat: add modelList type, change port to baseurl #255

- apply github code review

* fix: apply npm format

* fix: refactor by github review