Commit
Merge branch 'main' into AzureOpenAIEmbeddingsSupport
gsrikant7 authored Aug 14, 2024
2 parents 3ac5700 + 512d642 commit a2b3523
Showing 18 changed files with 192 additions and 15 deletions.
Binary file added docs/source/_static/bedrock-chat-basemodel.png
Binary file added docs/source/_static/bedrock-custom-models.png
Binary file added docs/source/_static/bedrock-finetuned-model.png
Binary file added docs/source/_static/bedrock-model-access.png
Binary file added docs/source/_static/bedrock-model-select.png
2 changes: 2 additions & 0 deletions docs/source/conf.py
@@ -57,3 +57,5 @@
},
],
}

html_sidebars = {"**": []}
47 changes: 47 additions & 0 deletions docs/source/developers/index.md
@@ -391,3 +391,50 @@ custom = "custom_package:CustomChatHandler"

Then, install your package so that Jupyter AI adds your custom chat handlers
to the existing set of chat handlers.
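
As a sketch, the entry point shown above would be registered in your package's `pyproject.toml`. The entry-point group name below is an assumption inferred from the snippet, and the package and class names are placeholders; check the current Jupyter AI source for the exact group name:

```toml
# pyproject.toml (illustrative; the group name "jupyter_ai.chat_handlers"
# and the package/class names are assumptions, not confirmed values)
[project.entry-points."jupyter_ai.chat_handlers"]
custom = "custom_package:CustomChatHandler"
```

After adding this table, reinstall your package (e.g. `pip install -e .`) so the new entry point is discoverable.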

## Custom message footer

You can provide a custom message footer that will be rendered under each message
in the UI. To do so, you need to write or install a labextension containing a
plugin that provides the `IJaiMessageFooter` token. This plugin should return an
`IJaiMessageFooter` object, which defines the custom footer to be rendered.

The `IJaiMessageFooter` object contains a single property `component`, which
should reference a React component that defines the custom message footer.
Jupyter AI will render this component under each chat message, passing the
component a `message` prop with the definition of each chat message as an
object. The `message` prop takes the type `AiService.ChatMessage`, where
`AiService` is imported from `@jupyter-ai/core/handler`.

Here is a reference plugin that shows some custom text under each agent message:

```tsx
import React from 'react';
import {
  JupyterFrontEnd,
  JupyterFrontEndPlugin
} from '@jupyterlab/application';
import {
  IJaiMessageFooter,
  IJaiMessageFooterProps
} from '@jupyter-ai/core/tokens';

export const footerPlugin: JupyterFrontEndPlugin<IJaiMessageFooter> = {
  id: '@your-org/your-package:custom-footer',
  autoStart: true,
  requires: [],
  provides: IJaiMessageFooter,
  activate: (app: JupyterFrontEnd): IJaiMessageFooter => {
    return {
      component: MessageFooter
    };
  }
};

function MessageFooter(props: IJaiMessageFooterProps) {
  if (props.message.type !== 'agent' && props.message.type !== 'agent-stream') {
    return null;
  }

  return (
    <div>This is a test footer that renders under each agent message.</div>
  );
}
```
60 changes: 60 additions & 0 deletions docs/source/users/bedrock.md
@@ -0,0 +1,60 @@
# Using Amazon Bedrock with Jupyter AI

[(Return to Chat Interface page for Bedrock)](index.md#amazon-bedrock-usage)

Bedrock supports many language model providers, such as AI21 Labs, Amazon, Anthropic, Cohere, Meta, and Mistral AI. To use the base models from any supported provider, make sure to enable them in Amazon Bedrock using the AWS console. Go to Amazon Bedrock and select `Model Access` as shown here:

<img src="../_static/bedrock-model-access.png"
width="75%"
alt='Screenshot of the left panel in the AWS console where Bedrock model access is provided.'
class="screenshot" />

Click through on `Model Access` and follow the instructions to grant access to the models you wish to use, as shown below. Make sure to accept the end user license agreement (EULA) as required by each model. If you do not have the authority to grant access yourself, you may need your system administrator to grant it for your account.

<img src="../_static/bedrock-model-select.png"
width="75%"
alt='Screenshot of the Bedrock console where models may be selected.'
class="screenshot" />

You should also select embedding models in addition to language completion models if you intend to use retrieval augmented generation (RAG) on your documents.

You may now select a Bedrock model from the drop-down menu titled `Completion model` in the chat interface. If you intend to use RAG, also pick one of the Bedrock embedding models you enabled. An example of these selections is shown below:

<img src="../_static/bedrock-chat-basemodel.png"
width="50%"
alt='Screenshot of the Jupyter AI chat panel where the base language model and embedding model is selected.'
class="screenshot" />

Bedrock also allows custom models to be trained from scratch or fine-tuned from a base model. Jupyter AI enables a custom model to be called in the chat panel using its `arn` (Amazon Resource Name). In addition to custom models, you can also call a base model by its `model id` or its `arn`. An example of using a base model with its `model id` through the custom model interface is shown below:

<img src="../_static/bedrock-chat-basemodel-modelid.png"
width="75%"
alt='Screenshot of the Jupyter AI chat panel where the base model is selected using model id.'
class="screenshot" />

An example of using a base model using its `arn` through the custom model interface is shown below:

<img src="../_static/bedrock-chat-basemodel-arn.png"
width="75%"
alt='Screenshot of the Jupyter AI chat panel where the base model is selected using its ARN.'
class="screenshot" />
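
For orientation, the two kinds of ARN have different shapes. The examples below are illustrative only; the region, account id, and model identifiers are placeholders, not values from this repository:

```text
# Base (foundation) model ARN — note the empty account-id field:
arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2

# Provisioned custom model ARN:
arn:aws:bedrock:us-east-1:123456789012:provisioned-model/abc123example
```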

To train a custom model in Amazon Bedrock, select `Custom models` in the Bedrock console, as shown below. You may then customize a base model by fine-tuning it or by continued pre-training:

<img src="../_static/bedrock-custom-models.png"
width="75%"
alt='Screenshot of the Bedrock custom models access in the left panel of the Bedrock console.'
class="screenshot" />

For details on fine-tuning a base model from Bedrock, see this [reference](https://aws.amazon.com/blogs/aws/customize-models-in-amazon-bedrock-with-your-own-data-using-fine-tuning-and-continued-pre-training/) and the related [documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/custom-models.html).

Once the model is fine-tuned, it will have its own `arn`, as shown below:

<img src="../_static/bedrock-finetuned-model.png"
width="75%"
alt='Screenshot of the Bedrock fine-tuned model ARN in the Bedrock console.'
class="screenshot" />

As seen above, you may click on `Purchase provisioned throughput` to buy inference units with which to call the custom model's API. Enter the model's `arn` in Jupyter AI's language model interface to use the provisioned model.

[(Return to Chat Interface page for Bedrock)](index.md#amazon-bedrock-usage)
34 changes: 29 additions & 5 deletions docs/source/users/index.md
@@ -175,11 +175,7 @@ Jupyter AI supports the following model providers:
The environment variable names shown above are also the names of the settings keys used when setting up the chat interface.
If multiple variables are listed for a provider, **all** must be specified.

To use the Bedrock models, you need access to the Bedrock service. For more information, see the
[Amazon Bedrock Homepage](https://aws.amazon.com/bedrock/).

To use Bedrock models, you will need to authenticate via
[boto3](https://github.com/boto/boto3).
To use the Bedrock models, you need access to the Bedrock service, and you will need to authenticate via [boto3](https://github.com/boto/boto3). For more information, see the [Amazon Bedrock Homepage](https://aws.amazon.com/bedrock/).

You need the `pillow` Python package to use Hugging Face Hub's text-to-image models.

@@ -273,6 +269,34 @@ The chat backend remembers the last two exchanges in your conversation and passes
alt='Screen shot of an example follow up question sent to Jupyternaut, who responds with the improved code and explanation.'
class="screenshot" />


### Amazon Bedrock Usage

Jupyter AI enables use of language models hosted on [Amazon Bedrock](https://aws.amazon.com/bedrock/) on AWS. First, ensure that you are authenticated with AWS via the `boto3` SDK, with credentials stored in the `default` profile. Guidance on how to do this can be found in the [`boto3` documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html).
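
For reference, a minimal `default` profile in boto3's shared credentials file looks like the sketch below. The key values are AWS's documented placeholder examples, not real credentials, and `~/.aws/credentials` is boto3's default location:

```ini
; ~/.aws/credentials — placeholder values, not real keys
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```

You can also set a default region (e.g. `region = us-east-1`) under a `[default]` section in `~/.aws/config`.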

For more detailed workflows, see [Using Amazon Bedrock with Jupyter AI](bedrock.md).

Bedrock supports many language model providers, such as AI21 Labs, Amazon, Anthropic, Cohere, Meta, and Mistral AI. To use the base models from any supported provider, make sure to enable them in Amazon Bedrock using the AWS console. If you intend to use retrieval augmented generation (RAG) on your documents, also enable embedding models in Bedrock in addition to language completion models.

You may now select a Bedrock model from the drop-down menu titled `Completion model` in the chat interface. If you intend to use RAG, also pick one of the Bedrock embedding models you enabled. An example of these selections is shown below:

<img src="../_static/bedrock-chat-basemodel.png"
width="50%"
alt='Screenshot of the Jupyter AI chat panel where the base language model and embedding model is selected.'
class="screenshot" />

If your provider requires an API key, please enter it in the box that appears for that provider. Make sure to click `Save Changes` to ensure that your inputs have been saved.

Bedrock also allows custom models to be trained from scratch or fine-tuned from a base model. Jupyter AI enables a custom model to be called in the chat panel using its `arn` (Amazon Resource Name). The interface is shown below:

<img src="../_static/bedrock-chat-custom-model-arn.png"
width="75%"
alt='Screenshot of the Jupyter AI chat panel where the custom model is selected using model arn.'
class="screenshot" />

For detailed workflows, see [Using Amazon Bedrock with Jupyter AI](bedrock.md).


### SageMaker endpoints usage

Jupyter AI supports language models hosted on SageMaker endpoints that use JSON
Expand Down
8 changes: 7 additions & 1 deletion packages/jupyter-ai/src/components/chat-input.tsx
@@ -207,6 +207,7 @@ export function ChatInput(props: ChatInputProps): JSX.Element {
    props.chatHandler.sendMessage({ prompt, selection });
  }

  const inputExists = !!input.trim();
  function handleKeyDown(event: React.KeyboardEvent<HTMLDivElement>) {
    if (event.key !== 'Enter') {
      return;
@@ -218,6 +219,12 @@ export function ChatInput(props: ChatInputProps): JSX.Element {
      return;
    }

    if (!inputExists) {
      event.stopPropagation();
      event.preventDefault();
      return;
    }

    if (
      event.key === 'Enter' &&
      ((props.sendWithShiftEnter && event.shiftKey) ||
@@ -240,7 +247,6 @@ export function ChatInput(props: ChatInputProps): JSX.Element {
    </span>
  );

  const inputExists = !!input.trim();
  const sendButtonProps: SendButtonProps = {
    onSend,
    sendWithShiftEnter: props.sendWithShiftEnter,
5 changes: 5 additions & 0 deletions packages/jupyter-ai/src/components/chat-messages.tsx
@@ -10,10 +10,12 @@ import { AiService } from '../handler';
import { RendermimeMarkdown } from './rendermime-markdown';
import { useCollaboratorsContext } from '../contexts/collaborators-context';
import { ChatMessageMenu } from './chat-messages/chat-message-menu';
import { IJaiMessageFooter } from '../tokens';

type ChatMessagesProps = {
  rmRegistry: IRenderMimeRegistry;
  messages: AiService.ChatMessage[];
  messageFooter: IJaiMessageFooter | null;
};

type ChatMessageHeaderProps = {
@@ -215,6 +217,9 @@ export function ChatMessages(props: ChatMessagesProps): JSX.Element {
              message.type === 'agent-stream' ? !!message.complete : true
            }
          />
          {props.messageFooter && (
            <props.messageFooter.component message={message} />
          )}
        </Box>
      );
    })}
14 changes: 11 additions & 3 deletions packages/jupyter-ai/src/components/chat.tsx
@@ -18,7 +18,7 @@ import { SelectionContextProvider } from '../contexts/selection-context';
import { SelectionWatcher } from '../selection-watcher';
import { ChatHandler } from '../chat_handler';
import { CollaboratorsContextProvider } from '../contexts/collaborators-context';
import { IJaiCompletionProvider } from '../tokens';
import { IJaiCompletionProvider, IJaiMessageFooter } from '../tokens';
import {
ActiveCellContextProvider,
ActiveCellManager
@@ -30,6 +30,7 @@ type ChatBodyProps = {
  setChatView: (view: ChatView) => void;
  rmRegistry: IRenderMimeRegistry;
  focusInputSignal: ISignal<unknown, void>;
  messageFooter: IJaiMessageFooter | null;
};

/**
@@ -51,7 +52,8 @@ function ChatBody({
  chatHandler,
  focusInputSignal,
  setChatView: chatViewHandler,
  rmRegistry: renderMimeRegistry
  rmRegistry: renderMimeRegistry,
  messageFooter
}: ChatBodyProps): JSX.Element {
  const [messages, setMessages] = useState<AiService.ChatMessage[]>([
    ...chatHandler.history.messages
@@ -139,7 +141,11 @@
  return (
    <>
      <ScrollContainer sx={{ flexGrow: 1 }}>
        <ChatMessages messages={messages} rmRegistry={renderMimeRegistry} />
        <ChatMessages
          messages={messages}
          rmRegistry={renderMimeRegistry}
          messageFooter={messageFooter}
        />
        <PendingMessages messages={pendingMessages} />
      </ScrollContainer>
      <ChatInput
@@ -170,6 +176,7 @@ export type ChatProps = {
  openInlineCompleterSettings: () => void;
  activeCellManager: ActiveCellManager;
  focusInputSignal: ISignal<unknown, void>;
  messageFooter: IJaiMessageFooter | null;
};

enum ChatView {
@@ -223,6 +230,7 @@
        setChatView={setView}
        rmRegistry={props.rmRegistry}
        focusInputSignal={props.focusInputSignal}
        messageFooter={props.messageFooter}
      />
      )}
      {view === ChatView.Settings && (
11 changes: 7 additions & 4 deletions packages/jupyter-ai/src/index.ts
@@ -18,7 +18,7 @@ import { ChatHandler } from './chat_handler';
import { buildErrorWidget } from './widgets/chat-error';
import { completionPlugin } from './completions';
import { statusItemPlugin } from './status';
import { IJaiCompletionProvider } from './tokens';
import { IJaiCompletionProvider, IJaiMessageFooter } from './tokens';
import { IRenderMimeRegistry } from '@jupyterlab/rendermime';
import { ActiveCellManager } from './contexts/active-cell-context';
import { Signal } from '@lumino/signaling';
@@ -42,7 +42,8 @@ const plugin: JupyterFrontEndPlugin<void> = {
    IGlobalAwareness,
    ILayoutRestorer,
    IThemeManager,
    IJaiCompletionProvider
    IJaiCompletionProvider,
    IJaiMessageFooter
  ],
  requires: [IRenderMimeRegistry],
  activate: async (
@@ -51,7 +52,8 @@ const plugin: JupyterFrontEndPlugin<void> = {
    globalAwareness: Awareness | null,
    restorer: ILayoutRestorer | null,
    themeManager: IThemeManager | null,
    completionProvider: IJaiCompletionProvider | null
    completionProvider: IJaiCompletionProvider | null,
    messageFooter: IJaiMessageFooter | null
  ) => {
    /**
     * Initialize selection watcher singleton
@@ -88,7 +90,8 @@ const plugin: JupyterFrontEndPlugin<void> = {
        completionProvider,
        openInlineCompleterSettings,
        activeCellManager,
        focusInputSignal
        focusInputSignal,
        messageFooter
      );
    } catch (e) {
      chatWidget = buildErrorWidget(themeManager);
20 changes: 20 additions & 0 deletions packages/jupyter-ai/src/tokens.ts
@@ -1,6 +1,8 @@
import React from 'react';
import { Token } from '@lumino/coreutils';
import { ISignal } from '@lumino/signaling';
import type { IRankedMenu } from '@jupyterlab/ui-components';
import { AiService } from './handler';

export interface IJaiStatusItem {
  addItem(item: IRankedMenu.IItemOptions): void;
@@ -26,3 +28,21 @@ export const IJaiCompletionProvider = new Token<IJaiCompletionProvider>(
  'jupyter_ai:IJaiCompletionProvider',
  'The jupyter-ai inline completion provider API'
);

export type IJaiMessageFooterProps = {
  message: AiService.ChatMessage;
};

export interface IJaiMessageFooter {
  component: React.FC<IJaiMessageFooterProps>;
}

/**
 * The message footer provider token. Another extension should provide this
 * token to add a footer to each message.
 */
export const IJaiMessageFooter = new Token<IJaiMessageFooter>(
  'jupyter_ai:IJaiMessageFooter',
  'Optional component that is used to render a footer on each Jupyter AI chat message, when provided.'
);
6 changes: 4 additions & 2 deletions packages/jupyter-ai/src/widgets/chat-sidebar.tsx
@@ -8,7 +8,7 @@ import { Chat } from '../components/chat';
import { chatIcon } from '../icons';
import { SelectionWatcher } from '../selection-watcher';
import { ChatHandler } from '../chat_handler';
import { IJaiCompletionProvider } from '../tokens';
import { IJaiCompletionProvider, IJaiMessageFooter } from '../tokens';
import { IRenderMimeRegistry } from '@jupyterlab/rendermime';
import type { ActiveCellManager } from '../contexts/active-cell-context';

@@ -21,7 +21,8 @@ export function buildChatSidebar(
  completionProvider: IJaiCompletionProvider | null,
  openInlineCompleterSettings: () => void,
  activeCellManager: ActiveCellManager,
  focusInputSignal: ISignal<unknown, void>
  focusInputSignal: ISignal<unknown, void>,
  messageFooter: IJaiMessageFooter | null
): ReactWidget {
  const ChatWidget = ReactWidget.create(
    <Chat
@@ -34,6 +35,7 @@
      openInlineCompleterSettings={openInlineCompleterSettings}
      activeCellManager={activeCellManager}
      focusInputSignal={focusInputSignal}
      messageFooter={messageFooter}
    />
  );
  ChatWidget.id = 'jupyter-ai::chat';
