LangChain has a large ecosystem of integrations with various external resources.

When building such applications, developers should remember to follow good security practices:

* [**Limit Permissions**](https://en.wikipedia.org/wiki/Principle_of_least_privilege): Scope permissions specifically to the application's need. Granting broad or excessive permissions can introduce significant security vulnerabilities. To avoid such vulnerabilities, consider using read-only credentials, disallowing access to sensitive resources, and using sandboxing techniques (such as running inside a container), as appropriate for your application.
* **Anticipate Potential Misuse**: Just as humans can err, so can Large Language Models (LLMs). Always assume that any system access or credentials may be used in any way allowed by the permissions they are assigned. For example, if a pair of database credentials allows deleting data, it's safest to assume that any LLM able to use those credentials may in fact delete data.
* [**Defense in Depth**](https://en.wikipedia.org/wiki/Defense_in_depth_(computing)): No security technique is perfect. Fine-tuning and good chain design can reduce, but not eliminate, the odds that an LLM may make a mistake. It's best to combine multiple layered security approaches rather than relying on any single layer of defense. For example, use both read-only permissions and sandboxing to ensure that LLMs can only access data that is explicitly meant for them to use.

Risks of not doing so include, but are not limited to:

* Data corruption or loss.
* Unauthorized access to confidential information.
* Compromised performance or availability of critical resources.

Example scenarios with mitigation strategies:

* A user may ask an agent with access to the file system to delete files that should not be deleted, or to read the content of files that contain sensitive information. To mitigate, limit the agent to a specific directory and only allow it to read or write files that are safe to read or write. Consider further sandboxing the agent by running it in a container.
* A user may ask an agent with write access to an external API to write malicious data to the API, or to delete data from that API. To mitigate, give the agent read-only API keys, or limit it to endpoints that are already resistant to such misuse.
* A user may ask an agent with access to a database to drop a table or mutate the schema. To mitigate, scope the credentials to only the tables that the agent needs to access, and consider issuing read-only credentials.

If you're building applications that access external resources like file systems, APIs, or databases, consider speaking with your company's security team to determine how to best design and secure your applications.

## Reporting a Vulnerability

Please report security vulnerabilities by email to [email protected]. This will ensure the issue is promptly triaged and acted upon as needed.