diff --git a/docs/source/_static/bedrock-chat-basemodel-arn.png b/docs/source/_static/bedrock-chat-basemodel-arn.png
new file mode 100644
index 000000000..fbd0457f5
Binary files /dev/null and b/docs/source/_static/bedrock-chat-basemodel-arn.png differ
diff --git a/docs/source/_static/bedrock-chat-basemodel-modelid.png b/docs/source/_static/bedrock-chat-basemodel-modelid.png
new file mode 100644
index 000000000..e930edb3c
Binary files /dev/null and b/docs/source/_static/bedrock-chat-basemodel-modelid.png differ
diff --git a/docs/source/_static/bedrock-chat-basemodel.png b/docs/source/_static/bedrock-chat-basemodel.png
new file mode 100644
index 000000000..5dd261a4a
Binary files /dev/null and b/docs/source/_static/bedrock-chat-basemodel.png differ
diff --git a/docs/source/_static/bedrock-chat-custom-model-arn.png b/docs/source/_static/bedrock-chat-custom-model-arn.png
new file mode 100644
index 000000000..111fc7cce
Binary files /dev/null and b/docs/source/_static/bedrock-chat-custom-model-arn.png differ
diff --git a/docs/source/_static/bedrock-custom-models.png b/docs/source/_static/bedrock-custom-models.png
new file mode 100644
index 000000000..86b67d1ac
Binary files /dev/null and b/docs/source/_static/bedrock-custom-models.png differ
diff --git a/docs/source/_static/bedrock-finetuned-model.png b/docs/source/_static/bedrock-finetuned-model.png
new file mode 100644
index 000000000..4687e33e6
Binary files /dev/null and b/docs/source/_static/bedrock-finetuned-model.png differ
diff --git a/docs/source/_static/bedrock-model-access.png b/docs/source/_static/bedrock-model-access.png
new file mode 100644
index 000000000..b3325d484
Binary files /dev/null and b/docs/source/_static/bedrock-model-access.png differ
diff --git a/docs/source/_static/bedrock-model-select.png b/docs/source/_static/bedrock-model-select.png
new file mode 100644
index 000000000..a321c52cf
Binary files /dev/null and b/docs/source/_static/bedrock-model-select.png differ
diff --git a/docs/source/conf.py b/docs/source/conf.py
index 190b7e0a2..142d47b78 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -57,3 +57,5 @@
},
],
}
+
+html_sidebars = {"**": []}
diff --git a/docs/source/users/bedrock.md b/docs/source/users/bedrock.md
new file mode 100644
index 000000000..558bf93c9
--- /dev/null
+++ b/docs/source/users/bedrock.md
@@ -0,0 +1,60 @@
+# Using Amazon Bedrock with Jupyter AI
+
+[(Return to Chat Interface page for Bedrock)](index.md#amazon-bedrock-usage)
+
+Bedrock supports many language model providers, such as AI21 Labs, Amazon, Anthropic, Cohere, Meta, and Mistral AI. To use the base models from any supported provider, first enable them in Amazon Bedrock from the AWS console. Go to Amazon Bedrock and select `Model Access` as shown here:
+
+
+
+Click through on `Model Access` and follow the instructions to grant access to the models you wish to use, as shown below. Make sure to accept the end user license agreement (EULA) required by each model. If you do not have the authority to grant access yourself, you may need to ask your system administrator to grant it for your account.
+
+
+
+You should also select embedding models in addition to language completion models if you intend to use retrieval augmented generation (RAG) on your documents.
+
+You may now select a Bedrock model from the drop-down menu titled `Completion model` in the chat interface. If you intend to use RAG, also pick one of the Bedrock embedding models you enabled. An example of these selections is shown below:
+
+
+
+Bedrock also allows custom models to be trained from scratch or fine-tuned from a base model. Jupyter AI lets you call a custom model in the chat panel using its `arn` (Amazon Resource Name). In the same way, you can also call a base model by its `model id` or its `arn`. An example of using a base model with its `model id` through the custom model interface is shown below:
+
+
+
+An example of using a base model with its `arn` through the custom model interface is shown below:
+
+
+
+To train a custom model in Amazon Bedrock, select `Custom models` in the Bedrock console as shown below; you may then customize a base model by fine-tuning it or by continuing its pre-training:
+
+
+
+For details on fine-tuning a base model in Bedrock, see this [reference](https://aws.amazon.com/blogs/aws/customize-models-in-amazon-bedrock-with-your-own-data-using-fine-tuning-and-continued-pre-training/) and the related [documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/custom-models.html).
+
+Once the model is fine-tuned, it will have its own `arn`, as shown below:
+
+
+
+As seen above, you may click `Purchase provisioned throughput` to buy inference units with which to call the custom model's API. Enter the model's `arn` in Jupyter AI's language model settings to use the provisioned model.
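+
+For reference, Bedrock model identifiers follow the general AWS ARN layout, `arn:partition:service:region:account-id:resource`. The values below are illustrative placeholders, not real resources:
+
+```text
+# A base (foundation) model; note the empty account-id field:
+arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2
+
+# A provisioned custom model in your own account:
+arn:aws:bedrock:us-east-1:111122223333:provisioned-model/example-id
+```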
+
+[(Return to Chat Interface page for Bedrock)](index.md#amazon-bedrock-usage)
diff --git a/docs/source/users/index.md b/docs/source/users/index.md
index bb660c882..a57afbde7 100644
--- a/docs/source/users/index.md
+++ b/docs/source/users/index.md
@@ -175,11 +175,7 @@ Jupyter AI supports the following model providers:
The environment variable names shown above are also the names of the settings keys used when setting up the chat interface.
If multiple variables are listed for a provider, **all** must be specified.
-To use the Bedrock models, you need access to the Bedrock service. For more information, see the
-[Amazon Bedrock Homepage](https://aws.amazon.com/bedrock/).
-
-To use Bedrock models, you will need to authenticate via
-[boto3](https://github.com/boto/boto3).
+To use the Bedrock models, you need access to the Bedrock service, and you will need to authenticate via [boto3](https://github.com/boto/boto3). For more information, see the [Amazon Bedrock Homepage](https://aws.amazon.com/bedrock/).
You need the `pillow` Python package to use Hugging Face Hub's text-to-image models.
@@ -273,6 +269,34 @@ The chat backend remembers the last two exchanges in your conversation and passe
alt='Screen shot of an example follow up question sent to Jupyternaut, who responds with the improved code and explanation.'
class="screenshot" />
+
+### Amazon Bedrock Usage
+
+Jupyter AI enables use of language models hosted on [Amazon Bedrock](https://aws.amazon.com/bedrock/) on AWS. First, ensure that you can authenticate to AWS via the `boto3` SDK, with credentials stored in the `default` profile. Guidance on how to do this can be found in the [`boto3` documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html).
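+
+For example, a minimal `default` profile in `~/.aws/credentials` looks like the sketch below (the key values are placeholders); the AWS region can be set in `~/.aws/config` or via the `AWS_DEFAULT_REGION` environment variable:
+
+```ini
+[default]
+aws_access_key_id = AKIA...
+aws_secret_access_key = ...
+```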
+
+For more detailed workflows, see [Using Amazon Bedrock with Jupyter AI](bedrock.md).
+
+Bedrock supports many language model providers, such as AI21 Labs, Amazon, Anthropic, Cohere, Meta, and Mistral AI. To use the base models from any supported provider, first enable them in Amazon Bedrock from the AWS console. You should also enable embedding models in Bedrock, in addition to language completion models, if you intend to use retrieval augmented generation (RAG) on your documents.
+
+You may now select a Bedrock model from the drop-down menu titled `Completion model` in the chat interface. If you intend to use RAG, also pick one of the Bedrock embedding models you enabled. An example of these selections is shown below:
+
+
+
+If your provider requires an API key, enter it in the box that appears for that provider. Make sure to click `Save Changes` so that your inputs are saved.
+
+Bedrock also allows custom models to be trained from scratch or fine-tuned from a base model. Jupyter AI lets you call a custom model in the chat panel using its `arn` (Amazon Resource Name). The interface is shown below:
+
+
+
+For detailed workflows, see [Using Amazon Bedrock with Jupyter AI](bedrock.md).
+
+
### SageMaker endpoints usage
Jupyter AI supports language models hosted on SageMaker endpoints that use JSON