Revamped Jupyter AI #1157
base: main
Conversation
One more thing: because we call into the kernel, the subshell JEP would benefit this implementation as well; otherwise, things like autocomplete and getting LLM descriptions can be blocked.
Thank you @govinda18 and @mlucool for contributing to the future of this project by proposing these big ideas! I really appreciate the immense amount of effort put into the design & implementation of the changes. ❤️ These are all very sound ideas. In fact, I believe we have existing issues for each of them. Here is a quick summary for others in this thread:
Currently, our team has shifted focus away from Jupyter AI v2 to building the next major release, Jupyter AI v3, which is planned for mid-Feb 2025. Given this, I'll share some context on v3 and provide some suggestions on how we can move forward 🤗.

Jupyter AI v3

In v3, all of the logic for managing chat state in the backend & rendering chats in the frontend has been migrated to a separate project.
Suggestions on moving forward

Out of all the fantastic ideas proposed here, I think local variable inclusion is the best one to start with. v3 is still early in development, so there is plenty of room to shape how these features land.
Finally, it may be helpful to establish a dedicated communication channel for technical discussion as needed. Question: do you all want a comms channel? AWS has a Slack workspace that allows for external connections; I can explore this if there's interest.
I don't know if the AWS team has had a chance to explore the Jupyter Zulip channel, but a lot of dev chatter around Jupyter happens over there nowadays (and it is still public).
@dlqqq please include me if there is a comms channel on this. I've also been running a hacky patched version of jupyter-ai that has a very similar automatic context feature for the active file and info on variables via the active kernel. I would like to see how this will be enabled in v3. I'm primarily interested in how extensible and configurable it would be for developers. For example, in my version, I only extract dataframe schemas from variables and would not include the output of cells in the context, to mitigate the risk of sending sensitive information to the models. I would hope to be able to configure it to do something like that, or at minimum have the possibility of monkey-patching it to do so.
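As a sketch of the kind of configurability described above, a schema-only variable description could look like the following. The `describe_variable` helper is hypothetical, not part of Jupyter AI; the point is that only column names and dtypes reach the model, never the data itself.

```python
import pandas as pd

def describe_variable(name, value):
    """Produce a privacy-conscious description of a kernel variable.

    For DataFrames, include only the schema (column names and dtypes),
    never the rows, to avoid sending sensitive data to the model.
    """
    if isinstance(value, pd.DataFrame):
        schema = ", ".join(f"{col}: {dtype}" for col, dtype in value.dtypes.items())
        return f"{name} is a pandas DataFrame with {len(value)} rows; columns: {schema}"
    # Fall back to the type name only for anything not explicitly allowed
    return f"{name} is an instance of {type(value).__name__}"
```

With `df = pd.DataFrame({"user_id": [1, 2], "email": [...]})`, the description mentions `user_id: int64` but no cell values.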
Awareness of the current notebook and interaction with the associated kernel would be a really nice feature. FYI I opened jupyterlab/jupyter-chat#128 to track this.
@michaelchia Absolutely! As @krassowski mentioned, Zulip could be the tool for this. That way, it's accessible to people who aren't able to use Slack. I'll explore this in parallel now, since it'd be helpful to have a dedicated channel.
This is great work! I wonder if y'all saw my demo in last month's community call: https://youtu.be/ildFScV6mZQ?si=7Wa7JFkZXsUnHK0K&t=1348 There is a lot of similar UX I demoed that I'd love to coordinate/sort out with you all. I've written some extensible plugins similar to yours:
along with some other useful things like getting feedback from users. I've been slowly working on shipping the building blocks as individual plugins:
I broke all of these things into separate repos (for now) so that they can be as extensible as possible and (possibly) roll into Jupyter AI more easily. Maybe we can collaborate on these efforts?
Sounds great @Zsailer, we'll have a version of what we were thinking ready too. Some of the ideas you propose seem pretty great. Noting some differences from your demo that I think are important:
I'm working with @dlqqq to get a Slack channel created (it's our preferred real-time choice as well), so we can invite you (and whomever else) to that room if we're able to get this set up.
I went back and forth on this, and dogfooded it quite a bit. I have some loose opinions here after using this stuff a lot, but could see pros/cons for both. You're totally right: you give up the multi-turn conversation if you don't overlay, but man, typing in the cell is just a buttery-smooth, simple experience 😅. There's something about just writing directly in a cell that I couldn't get over.
Totally. I was working on this as well. I've been focused on UX primarily. Integrating with Jupyter AI commands and tools (coming in #991) is something we absolutely need—I just didn't get there yet.
I also did this originally, but didn't like the experience. I felt like I was babysitting the AI. I didn't want to have to copy-edit the LLM's response by default every time. That's what you impose on the user by automatically showing an inline (possibly complex) diff: the user has to manually accept everything. Instead, I'd rather assume my AI collaborator is decent enough at its job, so I'd prefer to try its work and only peek at a diff when I'm skeptical. Perhaps I'm too trusting of the AI ;) but my long-term thinking here is that the AI is going to become a pretty good collaborator going forward. I don't want the AI to create "homework" for me (i.e. reviewing its work every time) when using it. I recognize, however, that I might be in the minority here. haha
This is an interesting outcome, though, that might overturn my opinion in the last paragraph :)
One thing I should note... I added an undo + redo button to the cell toolbar. When dogfooding, undo/redo via the cell toolbar felt quite natural: the user assumes the AI is pretty decent, only views the diff if they're skeptical, and can easily flip to previous states using the undo/redo buttons. This keeps the "purity" of the code cell in place, without layering on diffs and causing confusion about what the "true" state of the code cell is. The AI button, cell diffs, and undo/redo actions are ancillary to the main code cell.
Yes, please add me :) I'm thrilled to see this stuff land. Would love to talk through these ideas more concretely.
Great discussion here. @Zsailer I would love to sync later about these features you've built and how to get them into Jupyter AI in the future. I haven't been kept in the loop about your progress. Feel free to ping me on Slack if you have availability to chat in the next 2 weeks; otherwise I'll follow up on Jan 2 after folks have their holidays. Also, I reached out to the Jupyter Media Strategy WG, with help from @andrii-i.
@mlucool and I are still working on the Slack channel. Will keep folks posted.
Revamped Jupyter AI
Hi team,
We at D.E. Shaw have forked Jupyter AI and made a significant number of changes to greatly enhance the user experience. Our goal was to align Jupyter AI with the capabilities offered by leading AI-based IDEs such as Cursor, Copilot, etc. This pull request includes the majority of the changes we implemented.
This proposal is intended to outline these changes and may require some deliberation. We anticipate that this PR will need to be broken down into several smaller PRs, with some design changes to ensure smooth integration. Additionally, we are planning further enhancements, including per-notebook chat and inline code generation, details of which are included in this proposal.
The demo showcases the powerful capabilities of Jupyter AI by incorporating notebook context, leveraging the kernel, and enhancing keyboard accessibility (I did not use my mouse at all while creating this).
jai_demo.mp4
Point of contact: @govinda18 @mlucool
Philosophy
Unlike Jupyter AI, IDEs like Cursor or Copilot have moved toward treating anything in the IDE as fair game for context. We believe this is the correct user experience for sharing context with the LLM. For Jupyter AI to be effective, it should always provide context automatically, further allowing quick code iterations through insertions and replacements.
We also want to leverage the one major difference that sets notebooks and JupyterLab apart from any other IDE: a runtime kernel. This opens up possibilities where you can include the context of any variable declared in your notebook and further inspect methods and objects to understand their usage. Check out jupyter/enhancement-proposals#128 for our pre-proposal on streamlining this.
For autocompletion to be helpful, it needs to be aware of the context in previous cells to provide a Copilot-like experience. For both chat and inline completion, we automatically send the entire context of the notebook. We further optimize for the context window of the LLM; check out the Current Limitations section for more details.
Lastly, we felt that the chat interface feels a little distant and difficult to access. Most power users of JupyterLab would not use a mouse while working with a notebook, but it's currently not easy to use Jupyter AI without mouse intervention.
Overall, we want to bring the chat and the notebook close enough such that the LLM intelligently understands what the user is talking about without the need to manually decide what context needs to be added. This should further integrate into the natural workflow of the user by providing inline code generation and keyboard accessibility.
What has changed?
This proposal aims to make Jupyter AI much more powerful and accessible, bringing a host of new features designed to enhance user productivity and streamline their data science workflows.
Major Features
Inline Code Completion: The Jupyter AI inline completer is now context-aware, utilizing code from both preceding and succeeding cells to provide more accurate suggestions.
Currently, changes are sent from the frontend (reference), but we believe that jupyter_ydoc should handle this more effectively. We encountered some issues related to jupyter-collaboration issue #202 and did not want to wait for it.
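To illustrate what "context from both preceding and succeeding cells" involves (this is a hypothetical sketch, not the PR's actual implementation; `build_completion_context` and its truncation strategy are illustrative), a completion provider might assemble prefix/suffix context roughly like this:

```python
def build_completion_context(cells, active_index, cursor_offset, max_chars=4000):
    """Assemble prefix/suffix context for inline completion from a notebook.

    `cells` is a list of code-cell sources. The prefix is everything up to
    the cursor, the suffix everything after it. Each side is truncated
    from the far end, so the code nearest the cursor survives the budget.
    """
    active = cells[active_index]
    prefix = "\n\n".join(cells[:active_index] + [active[:cursor_offset]])
    suffix = "\n\n".join([active[cursor_offset:]] + cells[active_index + 1:])
    # Keep the end of the prefix and the start of the suffix: code nearest
    # the cursor matters most to the completion model.
    half = max_chars // 2
    return prefix[-half:], suffix[:half]
```

A fill-in-the-middle-capable model can then be prompted with the prefix and suffix directly; for plain chat models, the two halves would be folded into the prompt template instead.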
Notebook Aware Chat: The chat model is now aware of your active notebook, current cell, and text selection. You can directly ask it to perform actions like "Refactor this code" and it will refactor your notebook or active cell accordingly, saving you valuable time. This was also proposed in "Add all notebook to context" (#1037).
Again, while currently this is sent from the frontend, we believe that jupyter_ydoc should handle this more effectively. Refer to this prompt to understand more.
Further, we have added the ability to modify prompts directly from the Jupyter AI settings, enabling quicker iterations. Users can also view the exact prompt sent to the LLM by using the View Prompt call-to-action (CTA) in the chat interface.
Keyboard Accessibility: Navigate Jupyter AI with ease using keyboard shortcuts. Use `Ctrl + Shift + K` to enter chat mode (this is in line with the eventual `Ctrl + K` for inline code generation) and `Escape` to exit (ref). You can easily add generated code to your notebook by navigating with `Shift + Tab`. For an enhanced user experience, we have kept the chat just a `Shift + Tab` away. (ref)

Variable Insertion: Make the Large Language Model (LLM) aware of your variables effortlessly by inserting them into the chat using the `@` symbol. This helps the model better understand your context and provide more accurate suggestions by asking the kernel for runtime information. We felt this should work directly, without a prefix like `@var:var_name`, as it is one of the most powerful benefits of having a live kernel. This functionality is driven by a variable description registry that knows how to describe any object to an LLM. Additionally, we are drafting a proposal to IPython for a generic `__llm__` method that any Python class can use to describe itself to an LLM. More details in "Pre-proposal: standardize object representations for ai and a protocol to retrieve them" (jupyter/enhancement-proposals#128). This also enhances the user experience by providing autocomplete for the variables declared in the currently active notebook. (ref)
The current implementation is also pluggable as documented here.
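For illustration, here is a minimal sketch of what such an `__llm__` protocol could look like. The `SalesReport` class and `describe_for_llm` helper are hypothetical, and the protocol's final shape is still under discussion in jupyter/enhancement-proposals#128:

```python
class SalesReport:
    """An example domain object that can describe itself to an LLM."""

    def __init__(self, region, totals):
        self.region = region
        self.totals = totals  # e.g. {"Q1": 10_000, "Q2": 12_500}

    def __llm__(self):
        # Return a compact, model-friendly description instead of repr()
        quarters = ", ".join(f"{q}={v}" for q, v in self.totals.items())
        return f"SalesReport for region {self.region!r} with totals: {quarters}"


def describe_for_llm(obj):
    """Prefer __llm__ when available, falling back to str()."""
    method = getattr(obj, "__llm__", None)
    return method() if callable(method) else str(obj)
```

The fallback to `str()` mirrors the default described above, so objects without `__llm__` still get some representation in the prompt.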
Dropped Features
We suppressed some of the existing features as well, for different reasons; we feel each of these features needs a bit more thought before we can enable them generically.
Note that these features still work but are not shown in the autocomplete (ref) or help message (ref).
`@file`: We believe that adding an entire file to the context needs a bit more thought. Most users would want to add the current notebook to the context, which we now do by default. Adding other file types should involve some pre-processing; for example, adding a Python file is significantly different from how a CSV should be added.

`/learn` and `/ask`: There are some fundamental issues with the current implementation. `/learn` and `/ask` don't scale to a large number of files, as accuracy greatly decreases as you add more embeddings. This is a problem with semantic search in general; techniques like hybrid search, reranking, and metadata filtering should be researched and added. We found with our internal data that it was too easy to add too much for it to stay accurate (adding a smaller amount of data was useful but not very practical).

`/fix`: This should be replaced with an eventual "Fix with AI" button that we plan to show whenever an error occurs in the notebook.

Enhancements
Current Limitations
To avoid exceeding the context window when including the entire notebook, we have introduced an abstraction called `process_notebook_for_context`. Individual providers can implement a method like the one below to optimize the context window: (View Code)
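The actual implementation is collapsed behind the "View Code" widget above; as a rough sketch under assumed semantics (the signature and the cell-selection strategy here are illustrative, not the PR's code), such a provider hook might look like:

```python
def process_notebook_for_context(cells, active_index, max_chars=8000):
    """Trim a notebook to fit within a model's context window.

    Always keep the active cell; then add neighboring cells outward,
    nearest first, until the character budget is exhausted.
    """
    kept = {active_index: cells[active_index]}
    budget = max_chars - len(cells[active_index])
    for distance in range(1, len(cells)):
        for i in (active_index - distance, active_index + distance):
            if 0 <= i < len(cells) and budget - len(cells[i]) >= 0:
                kept[i] = cells[i]
                budget -= len(cells[i])
    # Re-emit the surviving cells in notebook order
    return "\n\n".join(kept[i] for i in sorted(kept))
```

A provider could instead summarize distant cells rather than drop them; the hook only needs to return a string that fits the budget.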
This PR has only been tested with models that stream, but it should be easy to extend support to those that do not.
Variable context is limited to basic data types and pandas only. While we default to `__str__` for any object, we aim to expand this support further.

Other ideas we are working on
Inline Code Generation: Inspired by Cursor, we are developing an inline code generation feature in Jupyter AI. Although still in development, here is a GIF demonstrating our vision for this feature (note that some major UI changes are still pending):
Per Notebook Chat: Currently, the chat is shared across notebooks, which creates a poor user experience as chat history becomes irrelevant when switching notebooks. We are working on adding support for maintaining a per-notebook chat to address this issue.
Fix with AI Button: Each error in JupyterLab will have a "Fix with AI" button, which essentially implements an inline version of the `/fix` command, making it easier for users to resolve errors directly within their workflow.