Two Step Edits #530
base: main
Conversation
)

return parsed_llm_response

async def get_model_response(self) -> ParsedLLMResponse:
The `get_model_response` method has been significantly simplified by moving the logic for streaming model responses into the `stream_model_response.py` file. This is a good refactor, as it reduces the complexity of the `Conversation` class and adheres to the single responsibility principle. However, it's important to ensure that all functionality related to model response handling is thoroughly tested in its new location to prevent regressions.
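For illustration, the shape of the refactor might look like the sketch below. This is a minimal stand-in, not the repo's actual API: `ParsedLLMResponse` is stubbed with placeholder fields and the message plumbing is invented for the example.

```python
from dataclasses import dataclass, field


@dataclass
class ParsedLLMResponse:
    # Placeholder fields; the real class carries the parsed edits and metadata.
    full_response: str = ""
    file_edits: list = field(default_factory=list)
    interrupted: bool = False


async def stream_model_response(messages: list[dict]) -> ParsedLLMResponse:
    # In the real stream_model_response.py this streams chunks from the LLM,
    # echoes them to the client, and parses the edits; stubbed out here.
    return ParsedLLMResponse()


class Conversation:
    def __init__(self) -> None:
        self.messages: list[dict] = []

    async def get_model_response(self) -> ParsedLLMResponse:
        # After the refactor, Conversation only assembles the message history
        # and delegates; the streaming/parsing complexity lives in the new module.
        return await stream_model_response(self.messages)
```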
stream.send(line)

async def stream_model_response_two_step(
The `stream_model_response_two_step` function contains several TODO comments that suggest incomplete implementation, such as handling interrupts, tracking the costs of all calls, and logging API calls. It's crucial to address these TODOs before merging this feature into the main branch to ensure the two-step edit process is fully functional and robust.
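As a concrete example of closing the cost-tracking TODO, a small accumulator could be fed after each of the two calls. This is only a sketch assuming OpenAI-style `response.usage` fields; the repo's existing cost tracker likely exposes a different interface.

```python
from dataclasses import dataclass


@dataclass
class TwoStepCostTracker:
    prompt_tokens: int = 0
    completion_tokens: int = 0

    def record(self, response) -> None:
        # Call once per API request so both steps are counted, not just the first.
        self.prompt_tokens += response.usage.prompt_tokens
        self.completion_tokens += response.usage.completion_tokens


# Inside stream_model_response_two_step (sketch):
#   tracker.record(list_files_response)
#   ...
#   tracker.record(rewrite_file_response)
```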
list_files_response = await llm_api_handler.call_llm_api(
    list_files_messages,
    model="gpt-3.5-turbo-0125",  # TODO add config for secondary model
Using a hardcoded model (`gpt-3.5-turbo-0125`) for the two-step edit process might limit the flexibility of this feature. Consider adding a configuration option for the secondary model used in the two-step edit process. This would allow users to choose the model that best fits their needs and potentially improve the accuracy and efficiency of the edits.
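One possible shape for that option, sketched with a plain dataclass (the actual `Config` class may use a different mechanism, and the field name `two_step_edit_model` is hypothetical):

```python
from dataclasses import dataclass


@dataclass
class Config:
    # Existing primary-model setting; the default shown here is illustrative.
    model: str = "gpt-4"
    # Hypothetical new field: the cheaper/faster model used by the two-step
    # edit flow instead of a hardcoded "gpt-3.5-turbo-0125".
    two_step_edit_model: str = "gpt-3.5-turbo-0125"
```

The call sites above would then read `model=config.two_step_edit_model`.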
MENTAT CODE REVIEW IN ACTIVE DEVELOPMENT. Only in use on mentat and internal repos. This pull request introduces a significant refactor and a new feature for handling two-step edits with the LLM. The changes are well-structured overall, and the separation of concerns between files is improved. However, there are several areas, particularly in the new `stream_model_response_two_step` function, that need attention before merging.
list_files_response = await llm_api_handler.call_llm_api(
    list_files_messages,
    model="gpt-3.5-turbo-0125",  # TODO add config for secondary model
Consider implementing the interrupt functionality for the two-step edit process to enhance user experience and control. This would allow users to stop the model's response if it's taking too long or if they notice an issue early on.
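One way to wire that up, assuming the two non-streamed calls run under asyncio and that the session sets an `asyncio.Event` when the user interrupts (both assumptions, not the repo's current API):

```python
import asyncio


async def interruptible(coro, interrupt_event: asyncio.Event):
    """Return the coroutine's result, or None if the user interrupts first."""
    api_task = asyncio.create_task(coro)
    stop_task = asyncio.create_task(interrupt_event.wait())
    done, pending = await asyncio.wait(
        {api_task, stop_task}, return_when=asyncio.FIRST_COMPLETED
    )
    for task in pending:
        task.cancel()
    # If the interrupt won the race, signal the caller to abort the flow.
    return api_task.result() if api_task in done else None
```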
rewrite_file_response = await llm_api_handler.call_llm_api(
    rewrite_file_messages,
    model="gpt-3.5-turbo-0125",  # TODO add config for secondary model
It's important to handle potential assertion failures gracefully in production code. Consider adding error handling for cases where the `rewrite_file_response` format does not match the assumptions (e.g., missing ``` markers). This could include logging a warning and skipping the file, or providing a clear error message to the user.
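A sketch of that fallback, replacing a bare assert with a warning-and-skip path (the function name and logging setup are placeholders, not the repo's actual code):

```python
import logging


def extract_fenced_code(rewrite_file_response: str, file_path: str) -> str | None:
    """Return the code between the first pair of ``` fences, or None to skip."""
    parts = rewrite_file_response.split("```")
    if len(parts) < 3:
        logging.warning(
            "Unexpected model response for %s: no ``` fences found; skipping file.",
            file_path,
        )
        return None
    body = parts[1]
    # Drop an optional language tag on the opening fence (e.g. ```python).
    newline = body.find("\n")
    return body[newline + 1:] if newline != -1 else body
```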
list_files_response = await llm_api_handler.call_llm_api(
    list_files_messages,
    model="gpt-3.5-turbo-0125",  # TODO add config for secondary model
To ensure the feature's robustness, consider adding tests specifically for the two-step edit process. These tests should cover various scenarios, including successful edits, user interruptions, and handling of unexpected model responses.
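For example, a happy-path test could stub the LLM handler and assert that both calls happen. This is a sketch only: the import path, the signature of `stream_model_response_two_step`, and the string-shaped responses are all assumptions about the new code.

```python
from unittest.mock import AsyncMock

import pytest

# from ... import stream_model_response_two_step  # import path assumed


@pytest.mark.asyncio
async def test_two_step_edit_happy_path():
    handler = AsyncMock()
    # Step 1 lists the files to edit; step 2 returns the rewritten file body.
    handler.call_llm_api.side_effect = [
        "src/example.py",
        "```python\nprint('hi')\n```",
    ]

    response = await stream_model_response_two_step(llm_api_handler=handler)

    assert handler.call_llm_api.await_count == 2
    assert not response.interrupted
```

Companion tests would simulate a user interrupt mid-flow and a response with no ``` fences.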
MENTAT CODE REVIEW IN ACTIVE DEVELOPMENT. Only in use on mentat and internal repos. The introduction of the two-step edit process is a promising addition, enhancing the flexibility and capability of the system. However, attention to detail in handling edge cases, user control, and testing will be crucial to ensure its success and reliability. Addressing the mentioned concerns and implementing the suggested improvements will significantly contribute to the robustness and user satisfaction of this feature.
Moves `_stream_model_response` out of `Conversation` and into a new file.

Pull Request Checklist