Add documentation on how to use Amazon Bedrock #936

Merged · 12 commits · Aug 10, 2024
Binary file added docs/source/_static/bedrock-chat-basemodel.png
Binary file added docs/source/_static/bedrock-custom-models.png
Binary file added docs/source/_static/bedrock-finetuned-model.png
Binary file added docs/source/_static/bedrock-model-access.png
Binary file added docs/source/_static/bedrock-model-select.png
2 changes: 2 additions & 0 deletions docs/source/conf.py
@@ -57,3 +57,5 @@
},
],
}

html_sidebars = {"**": []}
60 changes: 60 additions & 0 deletions docs/source/users/bedrock.md
@@ -0,0 +1,60 @@
# Using Amazon Bedrock with Jupyter AI

[(Return to Chat Interface page for Bedrock)](index.md#amazon-bedrock-usage)

Bedrock supports many language model providers, such as AI21 Labs, Amazon, Anthropic, Cohere, Meta, and Mistral AI. To use the base models from any supported provider, make sure to enable them in Amazon Bedrock via the AWS console. Go to Amazon Bedrock and select `Model Access` as shown here:

<img src="../_static/bedrock-model-access.png"
width="75%"
alt='Screenshot of the left panel in the AWS console where Bedrock model access is provided.'
class="screenshot" />

Click through on `Model Access` and follow the instructions to grant access to the models you wish to use, as shown below. Make sure to accept the end user license agreement (EULA) as required by each model. If you do not have the authority to grant access yourself, you may need to ask your system administrator to do so for your account.

<img src="../_static/bedrock-model-select.png"
width="75%"
alt='Screenshot of the Bedrock console where models may be selected.'
class="screenshot" />
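Optionally, you can confirm from Python which base models are now visible to your account. This is only a sketch, not something Jupyter AI requires; it assumes that your `boto3` credentials are configured and that Bedrock is available in the chosen region:

```python
# A sketch for listing the Bedrock base models visible to your account.
# Assumes boto3 credentials are configured and the region supports Bedrock.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

for summary in bedrock.list_foundation_models()["modelSummaries"]:
    print(summary["providerName"], "-", summary["modelId"])
```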

You should also select embedding models in addition to language completion models if you intend to use retrieval augmented generation (RAG) on your documents.

You may now select a Bedrock model from the drop-down menu titled `Completion model` in the chat interface. If you intend to use RAG, also pick one of the Bedrock embedding models that you enabled. An example of these selections is shown below:

<img src="../_static/bedrock-chat-basemodel.png"
width="50%"
alt='Screenshot of the Jupyter AI chat panel where the base language model and embedding model is selected.'
class="screenshot" />

Bedrock also allows custom models to be trained from scratch or fine-tuned from a base model. Jupyter AI enables a custom model to be called in the chat panel using its `arn` (Amazon Resource Name). Through the same custom model interface, you can also call a base model by its `model id` or its `arn`. An example of using a base model with its `model id` through the custom model interface is shown below:

<img src="../_static/bedrock-chat-basemodel-modelid.png"
width="75%"
alt='Screenshot of the Jupyter AI chat panel where the base model is selected using model id.'
class="screenshot" />

An example of using a base model with its `arn` through the custom model interface is shown below:

<img src="../_static/bedrock-chat-basemodel-arn.png"
width="75%"
alt='Screenshot of the Jupyter AI chat panel where the base model is selected using its ARN.'
class="screenshot" />
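Outside of the chat panel, you can sanity-check that a given `model id` (or full `arn`) is callable from your environment with a short `boto3` snippet. This is only a sketch, not part of Jupyter AI; the model id below is an example, so substitute one that you have enabled:

```python
# A sketch for verifying that a Bedrock model id (or a full ARN) can be invoked.
# The model id below is an example; replace it with one enabled in your account.
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": "Hello from Jupyter AI!"}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```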

To train a custom model in Amazon Bedrock, select `Custom models` in the Bedrock console as shown below; you may then customize a base model by fine-tuning it or by continued pre-training:

<img src="../_static/bedrock-custom-models.png"
width="75%"
alt='Screenshot of the Bedrock custom models access in the left panel of the Bedrock console.'
class="screenshot" />

For details on fine-tuning a base model in Bedrock, see this [reference](https://aws.amazon.com/blogs/aws/customize-models-in-amazon-bedrock-with-your-own-data-using-fine-tuning-and-continued-pre-training/) and the related [documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/custom-models.html).
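
The same customization job can also be started programmatically. The sketch below is only an outline: the job name, model name, S3 URIs, and IAM role ARN are placeholders, and the valid hyperparameter names depend on the base model you choose:

```python
# A sketch of starting a fine-tuning (model customization) job from Python.
# All names, S3 URIs, and the role ARN below are placeholders.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

bedrock.create_model_customization_job(
    jobName="my-finetune-job",  # placeholder
    customModelName="my-custom-model",  # placeholder
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",  # placeholder
    baseModelIdentifier="amazon.titan-text-express-v1",  # example base model
    trainingDataConfig={"s3Uri": "s3://my-bucket/train.jsonl"},  # placeholder
    outputDataConfig={"s3Uri": "s3://my-bucket/output/"},  # placeholder
    hyperParameters={"epochCount": "1"},  # valid keys vary by base model
)
```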

Once the model is fine-tuned, it will have its own `arn`, as shown below:

<img src="../_static/bedrock-finetuned-model.png"
width="75%"
alt='Screenshot of the Bedrock fine-tuned model ARN in the Bedrock console.'
class="screenshot" />

As seen above, you may click on `Purchase provisioned throughput` to buy the inference units needed to call the custom model. Enter the model's `arn` in Jupyter AI's language model interface to use the provisioned model.
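
If you need to look up the `arn` of a custom model that you have already created, it can also be retrieved programmatically; the following is a sketch, assuming your `boto3` credentials are configured:

```python
# A sketch for listing the custom models in your account along with their ARNs.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

for model in bedrock.list_custom_models()["modelSummaries"]:
    print(model["modelName"], "-", model["modelArn"])
```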

[(Return to Chat Interface page for Bedrock)](index.md#amazon-bedrock-usage)
34 changes: 29 additions & 5 deletions docs/source/users/index.md
@@ -175,11 +175,7 @@ Jupyter AI supports the following model providers:
The environment variable names shown above are also the names of the settings keys used when setting up the chat interface.
If multiple variables are listed for a provider, **all** must be specified.

To use the Bedrock models, you need access to the Bedrock service. For more information, see the
[Amazon Bedrock Homepage](https://aws.amazon.com/bedrock/).

To use Bedrock models, you will need to authenticate via
[boto3](https://github.com/boto/boto3).
To use the Bedrock models, you need access to the Bedrock service, and you will need to authenticate via [boto3](https://github.com/boto/boto3). For more information, see the [Amazon Bedrock Homepage](https://aws.amazon.com/bedrock/).

You need the `pillow` Python package to use Hugging Face Hub's text-to-image models.

@@ -273,6 +269,34 @@ The chat backend remembers the last two exchanges in your conversation and passe
alt='Screen shot of an example follow up question sent to Jupyternaut, who responds with the improved code and explanation.'
class="screenshot" />


### Amazon Bedrock Usage

Jupyter AI enables the use of language models hosted on [Amazon Bedrock](https://aws.amazon.com/bedrock/) on AWS. First, ensure that you can authenticate with AWS via the `boto3` SDK, with credentials stored in the `default` profile. Guidance on how to do this can be found in the [`boto3` documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html).
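
To confirm that your credentials resolve before starting Jupyter AI, you can run a short check such as the one below. This is a sketch and assumes the `default` profile has already been configured (for example via `aws configure`):

```python
# A minimal sketch: confirm that boto3 can find credentials in the `default` profile.
import boto3

session = boto3.Session(profile_name="default")

# This call raises an error if no usable credentials are found.
identity = session.client("sts").get_caller_identity()
print("Authenticated as:", identity["Arn"])
```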

For more detailed workflows, see [Using Amazon Bedrock with Jupyter AI](bedrock.md).

Bedrock supports many language model providers, such as AI21 Labs, Amazon, Anthropic, Cohere, Meta, and Mistral AI. To use the base models from any supported provider, make sure to enable them in Amazon Bedrock via the AWS console. You should also select embedding models in Bedrock in addition to language completion models if you intend to use retrieval augmented generation (RAG) on your documents.

You may now select a Bedrock model from the drop-down menu titled `Completion model` in the chat interface. If you intend to use RAG, also pick one of the Bedrock embedding models that you enabled. An example of these selections is shown below:

<img src="../_static/bedrock-chat-basemodel.png"
width="50%"
alt='Screenshot of the Jupyter AI chat panel where the base language model and embedding model is selected.'
class="screenshot" />

If your provider requires an API key, please enter it in the box that appears for that provider. Make sure to click `Save Changes` so that your inputs are saved.

Bedrock also allows custom models to be trained from scratch or fine-tuned from a base model. Jupyter AI enables a custom model to be called in the chat panel using its `arn` (Amazon Resource Name). The interface is shown below:

<img src="../_static/bedrock-chat-custom-model-arn.png"
width="75%"
alt='Screenshot of the Jupyter AI chat panel where the custom model is selected using model arn.'
class="screenshot" />

For detailed workflows, see [Using Amazon Bedrock with Jupyter AI](bedrock.md).


### SageMaker endpoints usage

Jupyter AI supports language models hosted on SageMaker endpoints that use JSON
61 changes: 36 additions & 25 deletions packages/jupyter-ai/src/components/chat-settings.tsx
@@ -146,20 +146,31 @@ export function ChatSettings(props: ChatSettingsProps): JSX.Element {
const newApiKeys: Record<string, string> = {};
const lmAuth = lmProvider?.auth_strategy;
const emAuth = emProvider?.auth_strategy;
if (lmAuth?.type === 'env') {
if (
lmAuth?.type === 'env' &&
!server.config.api_keys.includes(lmAuth.name)
) {
newApiKeys[lmAuth.name] = '';
}
if (lmAuth?.type === 'multienv') {
lmAuth.names.forEach(apiKey => {
newApiKeys[apiKey] = '';
if (!server.config.api_keys.includes(apiKey)) {
newApiKeys[apiKey] = '';
}
});
}
if (emAuth?.type === 'env') {

if (
emAuth?.type === 'env' &&
!server.config.api_keys.includes(emAuth.name)
) {
newApiKeys[emAuth.name] = '';
}
if (emAuth?.type === 'multienv') {
emAuth.names.forEach(apiKey => {
newApiKeys[apiKey] = '';
if (!server.config.api_keys.includes(apiKey)) {
newApiKeys[apiKey] = '';
}
});
}

@@ -471,28 +482,28 @@ export function ChatSettings(props: ChatSettingsProps): JSX.Element {

{/* API Keys section */}
<h2 className="jp-ai-ChatSettings-header">API Keys</h2>

{Object.entries(apiKeys).length === 0 &&
server.config.api_keys.length === 0 ? (
<p>No API keys are required by the selected models.</p>
) : null}

{/* API key inputs for newly-used providers */}
{Object.entries(apiKeys).length === 0 ? (
<p>No API Keys needed for selected model.</p>
) : (
Object.entries(apiKeys).map(([apiKeyName, apiKeyValue], idx) =>
!server.config.api_keys.includes(apiKeyName) ? (
<TextField
key={idx}
label={apiKeyName}
value={apiKeyValue}
fullWidth
type="password"
onChange={e =>
setApiKeys(apiKeys => ({
...apiKeys,
[apiKeyName]: e.target.value
}))
}
/>
) : null
)
)}
{Object.entries(apiKeys).map(([apiKeyName, apiKeyValue], idx) => (
<TextField
key={idx}
label={apiKeyName}
value={apiKeyValue}
fullWidth
type="password"
onChange={e =>
setApiKeys(apiKeys => ({
...apiKeys,
[apiKeyName]: e.target.value
}))
}
/>
))}
{/* Pre-existing API keys */}
<ExistingApiKeys
alert={apiKeysAlert}