diff --git a/docs/python-api.md b/docs/python-api.md
index ae135a68..72011e84 100644
--- a/docs/python-api.md
+++ b/docs/python-api.md
@@ -54,7 +54,7 @@ response = model.prompt(
 
 ### Attachments
 
-Model that accept multi-modal input (images, audio, video etc) can be passed attachments using the `attachments=` keyword argument. This accepts a list of `llm.Attachment()` instances.
+Models that accept multi-modal input (images, audio, video etc) can be passed attachments using the `attachments=` keyword argument. This accepts a list of `llm.Attachment()` instances.
 This example shows two attachments - one from a file path and one from a URL:
 
 ```python
@@ -69,7 +69,18 @@ response = model.prompt(
     ]
 )
 ```
-Use `llm.Attachment(content=b"binary image content here")` to pass binary content directly.
+Use `llm.Attachment(content=b"binary image content here")` to pass binary content directly. You may optionally specify a content type - if you do not, it will be detected automatically:
+
+```python
+image = llm.Attachment(
+    content=b"...",
+    type="image/png"
+)
+response = model.prompt(
+    "extract text",
+    attachments=[image]
+)
+```
 
 ### Model options
 
@@ -206,4 +217,4 @@ Here the `default=` parameter specifies the value that should be returned if the
 
 ### set_default_embedding_model(alias) and get_default_embedding_model()
 
-These two methods work the same as `set_default_model()` and `get_default_model()` but for the default {ref}`embedding model ` instead.
\ No newline at end of file
+These two methods work the same as `set_default_model()` and `get_default_model()` but for the default {ref}`embedding model ` instead.
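
For reference, a minimal sketch of how the attachment API documented above fits together, combining a file-path attachment (content type detected automatically) with a binary attachment that carries an explicit content type. The model alias `gpt-4o-mini` and the file names are illustrative assumptions, not part of the patch:

```python
import llm

# Assumes a multi-modal model is installed under this alias (illustrative).
model = llm.get_model("gpt-4o-mini")

# Attachment from a file path - the content type is detected automatically.
photo = llm.Attachment(path="photo.jpg")

# Attachment from binary content with an explicit content type.
with open("scan.png", "rb") as f:
    scan = llm.Attachment(content=f.read(), type="image/png")

response = model.prompt(
    "Describe what is in these images",
    attachments=[photo, scan],
)
print(response.text())
```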