
Two Step Edits #530

Draft · biobootloader wants to merge 17 commits into main
Conversation

biobootloader (Member)

  • move _stream_model_response out of Conversation and into new file

Pull Request Checklist

  • Documentation has been updated, or this change doesn't require that

        )

        return parsed_llm_response

    async def get_model_response(self) -> ParsedLLMResponse:

Contributor:

The get_model_response method has been significantly simplified by moving the logic for streaming model responses into the stream_model_response.py file. This is a good refactor: it reduces the complexity of the Conversation class and adheres to the single-responsibility principle. However, make sure all model-response handling behavior is thoroughly tested in its new location to prevent regressions.
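
For reference, the simplified method presumably reduces to a thin delegation along these lines; the import path and the signature of the moved function are assumptions inferred from this diff, not taken from the codebase:

# Sketch only: Conversation.get_model_response after the refactor.
# The import path and stream_model_response signature are assumed;
# ParsedLLMResponse is the return type shown in the diff context above.
from mentat.stream_model_response import stream_model_response  # assumed path


class Conversation:
    async def get_model_response(self) -> ParsedLLMResponse:
        # Streaming, parsing, and display logic now live in
        # stream_model_response.py; this method only forwards state.
        parsed_llm_response = await stream_model_response(self)
        return parsed_llm_response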

    stream.send(line)


async def stream_model_response_two_step(

Contributor:

The stream_model_response_two_step function contains several TODO comments that suggest incomplete implementation, such as handling interrupts, tracking costs of all calls, and logging API calls. It's crucial to address these TODOs before merging this feature into the main branch to ensure the two-step edit process is fully functional and robust.
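
On the cost-tracking TODO specifically, one option is to accumulate token usage across both calls and report a single total at the end. A rough sketch, assuming OpenAI-style non-streaming responses that carry a usage block (variable names match the snippets in this review):

# Rough sketch: aggregate usage across the list-files and rewrite-file calls.
# Assumes OpenAI-style responses expose a `usage` block on non-streaming calls.
total_prompt_tokens = 0
total_completion_tokens = 0

for response in (list_files_response, rewrite_file_response):
    usage = getattr(response, "usage", None)
    if usage is not None:
        total_prompt_tokens += usage.prompt_tokens
        total_completion_tokens += usage.completion_tokens

# Convert the aggregated token counts to a dollar cost once, after both steps,
# so the display shows one number for the whole two-step edit.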


    list_files_response = await llm_api_handler.call_llm_api(
        list_files_messages,
        model="gpt-3.5-turbo-0125",  # TODO add config for secondary model

Contributor:

Using a hardcoded model (gpt-3.5-turbo-0125) for the two-step edit process might limit the flexibility of this feature. Consider adding a configuration option for the secondary model used in the two-step edit process. This would allow users to choose the model that best fits their needs and potentially improve the accuracy and efficiency of the edits.
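
A minimal sketch of such an option, using a plain dataclass as a stand-in for the project's actual config object (the field name and defaults here are hypothetical):

from dataclasses import dataclass


@dataclass
class Config:
    # Primary model for the main conversation (hypothetical default).
    model: str = "gpt-4-0125-preview"
    # Cheaper secondary model for the list-files and rewrite-file calls.
    two_step_edit_model: str = "gpt-3.5-turbo-0125"

The call sites shown in these snippets would then pass model=config.two_step_edit_model instead of the hardcoded string.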

mentatbot bot (Contributor) commented Mar 5, 2024

MENTAT CODE REVIEW IN ACTIVE DEVELOPMENT. Only in use on mentat and internal repos.
Please reply with feedback.

This pull request introduces a significant refactor and a new feature for handling two-step edits with the LLM. The changes overall are well-structured, and the separation of concerns between files is improved. However, there are several areas, particularly in the new stream_model_response_two_step function, that require attention to ensure the feature is complete and robust. Addressing the hardcoded model usage, ensuring TODOs are resolved, and handling potential assertion failures gracefully are key areas to focus on before finalizing this feature.


    list_files_response = await llm_api_handler.call_llm_api(
        list_files_messages,
        model="gpt-3.5-turbo-0125",  # TODO add config for secondary model

Contributor:

Consider implementing the interrupt functionality for the two-step edit process to enhance user experience and control. This would allow users to stop the model's response if it's taking too long or if they notice an issue early on.
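
A common asyncio pattern for this is to race each API call against an interrupt event and cancel whichever loses; the event and wrapper below are hypothetical, not existing mentat APIs:

import asyncio


async def call_with_interrupt(coro, interrupt_event: asyncio.Event):
    # Race the API call against a user interrupt; cancel the loser.
    call_task = asyncio.ensure_future(coro)
    stop_task = asyncio.ensure_future(interrupt_event.wait())
    done, pending = await asyncio.wait(
        {call_task, stop_task}, return_when=asyncio.FIRST_COMPLETED
    )
    for task in pending:
        task.cancel()
    if stop_task in done:
        return None  # interrupted; caller unwinds the two-step flow
    return call_task.result()

Both the list-files and rewrite-file calls could be wrapped this way, with a None check between steps so an interrupt stops the flow cleanly.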


    rewrite_file_response = await llm_api_handler.call_llm_api(
        rewrite_file_messages,
        model="gpt-3.5-turbo-0125",  # TODO add config for secondary model

Contributor:

It's important to handle potential assertion failures gracefully in production code. Consider adding error handling for cases where the expected rewrite_file_response format does not match the assumptions (e.g., missing ``` markers). This could include logging a warning and skipping the file or providing a clear error message to the user.
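
A defensive alternative to asserting on the markers is a small parser that returns None when the fences are missing, letting the caller log a warning and skip the file. A sketch, assuming the rewrite response is a single string containing one fenced block:

import logging


def extract_fenced_block(text: str) -> str | None:
    """Return the content between the first pair of ``` fences, or None."""
    first = text.find("```")
    if first == -1:
        return None
    start = text.find("\n", first)  # skip the opening fence and language tag
    if start == -1:
        return None
    end = text.find("```", start)
    if end == -1:
        return None
    return text[start + 1 : end]


# Hypothetical call site:
# new_contents = extract_fenced_block(rewrite_file_response_text)
# if new_contents is None:
#     logging.warning("Rewrite response missing ``` fences; skipping file")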


    list_files_response = await llm_api_handler.call_llm_api(
        list_files_messages,
        model="gpt-3.5-turbo-0125",  # TODO add config for secondary model

Contributor:

To ensure the feature's robustness, consider adding tests specifically for the two-step edit process. These tests should cover various scenarios, including successful edits, user interruptions, and handling of unexpected model responses.
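
A starting point might mock call_llm_api and drive the flow through both a clean run and a malformed rewrite. Everything below (the function's signature, the response shapes, the direct call) is an assumption for illustration:

import pytest
from unittest.mock import AsyncMock

# stream_model_response_two_step is assumed importable from its new module.


@pytest.mark.asyncio
async def test_two_step_edit_happy_path():
    handler = AsyncMock()
    # First call returns the file list, second the rewritten file contents.
    handler.call_llm_api.side_effect = [
        "src/example.py",
        "```python\nprint('rewritten')\n```",
    ]
    result = await stream_model_response_two_step(handler)
    assert result is not None


@pytest.mark.asyncio
async def test_two_step_edit_malformed_rewrite():
    handler = AsyncMock()
    handler.call_llm_api.side_effect = ["src/example.py", "no fences here"]
    # Should skip the file with a warning, not crash on an assert.
    result = await stream_model_response_two_step(handler)
    assert result is not None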

mentatbot bot (Contributor) commented Mar 6, 2024

MENTAT CODE REVIEW IN ACTIVE DEVELOPMENT. Only in use on mentat and internal repos.
Please reply with feedback.

The introduction of the two-step edit process is a promising addition, enhancing the flexibility and capability of the system. However, attention to detail in handling edge cases, user control, and testing will be crucial to ensure its success and reliability. Addressing the mentioned concerns and implementing the suggested improvements will significantly contribute to the robustness and user satisfaction of this feature.
