chore: added why use LCEL doc (#2900)
* chore: added why use LCEL doc

* Update docs/extras/expression_language/why.mdx
bracesproul authored Oct 13, 2023
1 parent 92e609f commit 5019bb4
Showing 2 changed files with 11 additions and 0 deletions.
3 changes: 3 additions & 0 deletions docs/docs_skeleton/docs/expression_language/index.mdx
@@ -13,3 +13,6 @@ The base interface shared by all LCEL objects

#### [Cookbook](/docs/expression_language/cookbook)
Examples of common LCEL usage patterns

#### [Why use LCEL](/docs/expression_language/why)
A deeper dive into the benefits of LCEL
8 changes: 8 additions & 0 deletions docs/extras/expression_language/why.mdx
@@ -0,0 +1,8 @@
# Why use LCEL?

The LangChain Expression Language was designed from day 1 to **support putting prototypes in production, with no code changes**, from the simplest “prompt + LLM” chain to the most complex chains (we’ve seen folks successfully run LCEL chains with hundreds of steps in production). To highlight a few of the reasons you might want to use LCEL:

- optimised parallel execution: whenever your LCEL chain has steps that can be executed in parallel (e.g. fetching documents from multiple retrievers), we automatically run them concurrently, for the smallest possible latency (see the first sketch after this list).
- support for retries and fallbacks: more recently we’ve added support for configuring retries and fallbacks for any part of your LCEL chain. This is a great way to make your chains more reliable at scale. We’re currently working on adding streaming support for retries/fallbacks, so you can get the added reliability without any latency cost.
- accessing intermediate results: for more complex chains it’s often very useful to access the results of intermediate steps even before the final output is produced. This can be used to let end-users know something is happening, or simply to debug your chain. We’ve added support for [streaming intermediate results](https://x.com/LangChainAI/status/1711806009097044193?s=20), and it’s available on every LangServe server (see the second sketch after this list).
- tracing with LangSmith: all chains built with LCEL have first-class tracing support, which can be used to debug your chains or to understand what’s happening in production. To enable this, all you have to do is add your [LangSmith](https://www.langchain.com/langsmith) API key as an environment variable.
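
To make the first two bullets concrete, here is a minimal sketch of parallel execution with retries and fallbacks. It is illustrative rather than canonical: the prompts, model names, and retry settings are assumptions, and it presumes an OpenAI API key in your environment.

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnableParallel

# Retry/fallback settings here are illustrative, not recommendations.
model = ChatOpenAI(model="gpt-3.5-turbo").with_retry(stop_after_attempt=2)
fallback_model = ChatOpenAI(model="gpt-4")

joke_chain = (
    ChatPromptTemplate.from_template("Tell me a joke about {topic}")
    | model.with_fallbacks([fallback_model])
    | StrOutputParser()
)
poem_chain = (
    ChatPromptTemplate.from_template("Write a two-line poem about {topic}")
    | model
    | StrOutputParser()
)

# RunnableParallel runs both branches concurrently, so total latency is
# roughly that of the slower branch rather than the sum of both.
combined = RunnableParallel({"joke": joke_chain, "poem": poem_chain})
print(combined.invoke({"topic": "bears"}))  # {"joke": "...", "poem": "..."}
```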

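A similar sketch for the last two bullets: streaming intermediate results with `astream_log`, and turning on LangSmith tracing purely through environment variables (the values below are placeholders).

```python
import asyncio
import os

from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser

# LangSmith tracing needs no code changes: set these before the chain runs.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your LangSmith API key>"

chain = (
    ChatPromptTemplate.from_template("Tell me a joke about {topic}")
    | ChatOpenAI()
    | StrOutputParser()
)

async def main():
    # astream_log yields JSONPatch chunks describing the run as it happens,
    # including outputs of intermediate steps before the final answer is done.
    async for patch in chain.astream_log({"topic": "bears"}):
        print(patch)

asyncio.run(main())
```

Each patch arrives as the corresponding step finishes, which is what lets a UI show progress long before the final output exists.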