Merge branch 'master' into patch-1
AhmedTammaa authored Dec 20, 2024
2 parents 533bc90 + 2a7469e commit f31e4b7
Showing 18 changed files with 1,615 additions and 353 deletions.
491 changes: 491 additions & 0 deletions docs/docs/integrations/chat/predictionguard.ipynb

Large diffs are not rendered by default.

165 changes: 91 additions & 74 deletions docs/docs/integrations/document_loaders/web_base.ipynb

Large diffs are not rendered by default.

425 changes: 296 additions & 129 deletions docs/docs/integrations/llms/predictionguard.ipynb

Large diffs are not rendered by default.

6 changes: 6 additions & 0 deletions docs/docs/integrations/providers/google.mdx
@@ -2,6 +2,12 @@
 
 All functionality related to [Google Cloud Platform](https://cloud.google.com/) and other `Google` products.
 
+Integration packages for Gemini models and the VertexAI platform are maintained in
+the [langchain-google](https://github.com/langchain-ai/langchain-google) repository.
+You can find a host of LangChain integrations with other Google APIs in the
+[googleapis](https://github.com/googleapis?q=langchain-&type=all&language=&sort=)
+GitHub organization.
+
 ## Chat models
 
 We recommend that individual developers start with the Gemini API (`langchain-google-genai`) and move to Vertex AI (`langchain-google-vertexai`) when they need access to commercial support and higher rate limits. If you’re already Cloud-friendly or Cloud-native, you can get started in Vertex AI straight away.
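To make the chat-model recommendation above concrete, here is a minimal sketch of the Gemini API starting point. It assumes `langchain-google-genai` is installed and `GOOGLE_API_KEY` is set; the model name is illustrative and not part of the diff above.

```python
# Minimal sketch, not from the diff: quick start via the Gemini API.
# Assumes `pip install langchain-google-genai` and GOOGLE_API_KEY in the environment.
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")  # illustrative model name
response = llm.invoke("What is LangChain?")
print(response.content)

# The Vertex AI route is analogous: langchain_google_vertexai.ChatVertexAI.
```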
7 changes: 6 additions & 1 deletion docs/docs/integrations/providers/localai.mdx
@@ -10,6 +10,11 @@
 For proper compatibility, please ensure you are using the `openai` SDK at version **0.x**.
 :::
 
+:::info
+`langchain-localai` is a third-party integration package for LocalAI. It provides a simple way to use LocalAI services in LangChain.
+The source code is available on [GitHub](https://github.com/mkhludnev/langchain-localai).
+:::
+
 ## Installation and Setup
 
 We need to install several Python packages:
@@ -24,5 +29,5 @@ pip install tenacity openai
 See a [usage example](/docs/integrations/text_embedding/localai).
 
 ```python
-from langchain_community.embeddings import LocalAIEmbeddings
+from langchain_localai import LocalAIEmbeddings
 ```
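A hedged usage sketch to go with the new import, assuming `langchain-localai` mirrors the community `LocalAIEmbeddings` constructor; the server URL and model name are placeholders for your own LocalAI deployment:

```python
# Sketch only: server URL and model name are placeholders for your deployment.
# Assumes `pip install langchain-localai "openai<1"` (the 0.x SDK noted above).
from langchain_localai import LocalAIEmbeddings

embeddings = LocalAIEmbeddings(
    openai_api_base="http://localhost:8080",  # your LocalAI server
    model="text-embedding-ada-002",           # an embedding model it serves
)
vector = embeddings.embed_query("Hello LocalAI")
print(len(vector))
```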
127 changes: 53 additions & 74 deletions docs/docs/integrations/providers/predictionguard.mdx
@@ -3,100 +3,79 @@
 This page covers how to use the Prediction Guard ecosystem within LangChain.
 It is broken into two parts: installation and setup, and then references to specific Prediction Guard wrappers.
 
-## Installation and Setup
-- Install the Python SDK with `pip install predictionguard`
-- Get a Prediction Guard access token (as described [here](https://docs.predictionguard.com/)) and set it as an environment variable (`PREDICTIONGUARD_TOKEN`)
-
-## LLM Wrapper
-
-There exists a Prediction Guard LLM wrapper, which you can access with
-```python
-from langchain_community.llms import PredictionGuard
-```
-
-You can provide the name of the Prediction Guard model as an argument when initializing the LLM:
-```python
-pgllm = PredictionGuard(model="MPT-7B-Instruct")
-```
-
-You can also provide your access token directly as an argument:
-```python
-pgllm = PredictionGuard(model="MPT-7B-Instruct", token="<your access token>")
-```
-
-Finally, you can provide an "output" argument that is used to structure/control the output of the LLM:
-```python
-pgllm = PredictionGuard(model="MPT-7B-Instruct", output={"type": "boolean"})
-```
-
-## Example usage
-
-Basic usage of the controlled or guarded LLM wrapper:
-```python
-import os
-
-import predictionguard as pg
-from langchain_community.llms import PredictionGuard
-from langchain_core.prompts import PromptTemplate
-from langchain.chains import LLMChain
-
-# Your Prediction Guard API key. Get one at predictionguard.com
-os.environ["PREDICTIONGUARD_TOKEN"] = "<your Prediction Guard access token>"
-
-# Define a prompt template
-template = """Respond to the following query based on the context.
-Context: EVERY comment, DM + email suggestion has led us to this EXCITING announcement! 🎉 We have officially added TWO new candle subscription box options! 📦
-Exclusive Candle Box - $80
-Monthly Candle Box - $45 (NEW!)
-Scent of The Month Box - $28 (NEW!)
-Head to stories to get ALL the deets on each box! 👆 BONUS: Save 50% on your first box with code 50OFF! 🎉
-Query: {query}
-Result: """
-prompt = PromptTemplate.from_template(template)
-
-# With "guarding" or controlling the output of the LLM. See the
-# Prediction Guard docs (https://docs.predictionguard.com) to learn how to
-# control the output with integer, float, boolean, JSON, and other types and
-# structures.
-pgllm = PredictionGuard(model="MPT-7B-Instruct",
-                        output={
-                            "type": "categorical",
-                            "categories": [
-                                "product announcement",
-                                "apology",
-                                "relational"
-                            ]
-                        })
-pgllm(prompt.format(query="What kind of post is this?"))
-```
-
-Basic LLM chaining with the Prediction Guard wrapper:
-```python
-import os
-
-from langchain_core.prompts import PromptTemplate
-from langchain.chains import LLMChain
-from langchain_community.llms import PredictionGuard
-
-# Optional: add your OpenAI API key. Prediction Guard also gives you access to
-# all the latest open-access models (see https://docs.predictionguard.com).
-os.environ["OPENAI_API_KEY"] = "<your OpenAI API key>"
-
-# Your Prediction Guard API key. Get one at predictionguard.com
-os.environ["PREDICTIONGUARD_TOKEN"] = "<your Prediction Guard access token>"
-
-pgllm = PredictionGuard(model="OpenAI-gpt-3.5-turbo-instruct")
-
-template = """Question: {question}
-
-Answer: Let's think step by step."""
-prompt = PromptTemplate.from_template(template)
-llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)
-
-question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
-
-llm_chain.predict(question=question)
-```
+This integration is maintained in the [langchain-predictionguard](https://github.com/predictionguard/langchain-predictionguard)
+package.
+
+## Installation and Setup
+
+- Install the Prediction Guard LangChain partner package:
+```
+pip install langchain-predictionguard
+```
+
+- Get a Prediction Guard API key (as described [here](https://docs.predictionguard.com/)) and set it as an environment variable (`PREDICTIONGUARD_API_KEY`)
+
+## Prediction Guard LangChain Integrations
+
+| API | Description | Endpoint Docs | Import | Example Usage |
+|---|---|---|---|---|
+| Chat | Build chat bots | [Chat](https://docs.predictionguard.com/api-reference/api-reference/chat-completions) | `from langchain_predictionguard import ChatPredictionGuard` | [ChatPredictionGuard.ipynb](/docs/integrations/chat/predictionguard) |
+| Completions | Generate text | [Completions](https://docs.predictionguard.com/api-reference/api-reference/completions) | `from langchain_predictionguard import PredictionGuard` | [PredictionGuard.ipynb](/docs/integrations/llms/predictionguard) |
+| Text Embedding | Embed strings to vectors | [Embeddings](https://docs.predictionguard.com/api-reference/api-reference/embeddings) | `from langchain_predictionguard import PredictionGuardEmbeddings` | [PredictionGuardEmbeddings.ipynb](/docs/integrations/text_embedding/predictionguard) |
+
+## Getting Started
+
+## Chat Models
+
+### Prediction Guard Chat
+
+See a [usage example](/docs/integrations/chat/predictionguard)
+
+```python
+from langchain_predictionguard import ChatPredictionGuard
+```
+
+#### Usage
+
+```python
+# If predictionguard_api_key is not passed, the default behavior is to use the
+# `PREDICTIONGUARD_API_KEY` environment variable.
+chat = ChatPredictionGuard(model="Hermes-3-Llama-3.1-8B")
+
+chat.invoke("Tell me a joke")
+```
+
+## Embedding Models
+
+### Prediction Guard Embeddings
+
+See a [usage example](/docs/integrations/text_embedding/predictionguard)
+
+```python
+from langchain_predictionguard import PredictionGuardEmbeddings
+```
+
+#### Usage
+
+```python
+# If predictionguard_api_key is not passed, the default behavior is to use the
+# `PREDICTIONGUARD_API_KEY` environment variable.
+embeddings = PredictionGuardEmbeddings(model="bridgetower-large-itm-mlm-itc")
+
+text = "This is an embedding example."
+output = embeddings.embed_query(text)
+```
+
+## LLMs
+
+### Prediction Guard LLM
+
+See a [usage example](/docs/integrations/llms/predictionguard)
+
+```python
+from langchain_predictionguard import PredictionGuard
+```
+
+#### Usage
+
+```python
+# If predictionguard_api_key is not passed, the default behavior is to use the
+# `PREDICTIONGUARD_API_KEY` environment variable.
+llm = PredictionGuard(model="Hermes-2-Pro-Llama-3-8B")
+
+llm.invoke("Tell me a joke about bears")
+```
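One supplementary sketch beyond the diff: batch embedding. It assumes `PredictionGuardEmbeddings` implements LangChain's standard `Embeddings` interface, so `embed_documents` is available alongside the `embed_query` call shown above:

```python
from langchain_predictionguard import PredictionGuardEmbeddings

# Assumes PREDICTIONGUARD_API_KEY is set, as in the usage examples above.
embeddings = PredictionGuardEmbeddings(model="bridgetower-large-itm-mlm-itc")

# embed_documents batches several texts in one call (standard Embeddings API).
vectors = embeddings.embed_documents(["First document.", "Second document."])
print(len(vectors), len(vectors[0]))  # document count, embedding dimension
```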