diff --git a/docs/docs/integrations/providers/couchbase.mdx b/docs/docs/integrations/providers/couchbase.mdx
index 906fbda6b28b3..35ef674806967 100644
--- a/docs/docs/integrations/providers/couchbase.mdx
+++ b/docs/docs/integrations/providers/couchbase.mdx
@@ -33,7 +33,7 @@ from langchain_community.document_loaders.couchbase import CouchbaseLoader
 ### CouchbaseCache
 
 Use Couchbase as a cache for prompts and responses.
-See a [usage example](/docs/integrations/llm_caching/#couchbase-cache).
+See a [usage example](/docs/integrations/llm_caching/#couchbase-caches).
 
 To import this cache:
 ```python
@@ -61,7 +61,7 @@ set_llm_cache(
 Semantic caching allows users to retrieve cached prompts based on the semantic similarity between the user input and previously cached inputs. Under the hood it uses Couchbase as both a cache and a vectorstore.
 The CouchbaseSemanticCache needs a Search Index defined to work. Please look at the [usage example](/docs/integrations/vectorstores/couchbase) on how to set up the index.
 
-See a [usage example](/docs/integrations/llm_caching/#couchbase-semantic-cache).
+See a [usage example](/docs/integrations/llm_caching/#couchbase-caches).
 
 To import this cache:
 ```python
diff --git a/docs/docs/integrations/providers/elasticsearch.mdx b/docs/docs/integrations/providers/elasticsearch.mdx
index c3b123d47b80f..734ef9d46ce8e 100644
--- a/docs/docs/integrations/providers/elasticsearch.mdx
+++ b/docs/docs/integrations/providers/elasticsearch.mdx
@@ -84,7 +84,7 @@ from langchain_elasticsearch import ElasticsearchChatMessageHistory
 
 ## LLM cache
 
-See a [usage example](/docs/integrations/llm_caching/#elasticsearch-cache).
+See a [usage example](/docs/integrations/llm_caching/#elasticsearch-caches).
 
 ```python
 from langchain_elasticsearch import ElasticsearchCache