From bca2e367e6a98e2f333bad193e88a844a7d5b4dc Mon Sep 17 00:00:00 2001
From: Agnieszka Marzec <97166305+agnieszka-m@users.noreply.github.com>
Date: Mon, 9 Oct 2023 11:07:26 +0200
Subject: [PATCH] Docs: Lg updates (#213)

* update lg

Some suggestions to make it more concise and in line with our writing and voice guidelines.

* Add use cases for each installation method

* Replace chained with connected

---------

Co-authored-by: bilgeyucel
---
 content/overview/quick-start.md | 43 +++++++++++++++++++++------------
 1 file changed, 27 insertions(+), 16 deletions(-)

diff --git a/content/overview/quick-start.md b/content/overview/quick-start.md
index 9a134964..b4aea985 100644
--- a/content/overview/quick-start.md
+++ b/content/overview/quick-start.md
@@ -19,13 +19,13 @@ You can find the source code for Haystack on GitHub. This is also the main chann
 
 ## Installation
 
-These are the instructions for installing Haystack. The most straightforward way to install the latest release of Haystack is through [pip](https://github.com/pypa/pip).
+Use [pip](https://github.com/pypa/pip) to install the latest Haystack release:
 
-{{< tabs totalTabs="3">}}
+{{< tabs totalTabs="4">}}
 
 {{< tab tabName="Minimal" >}}
 
-This command installs everything needed for basic Pipelines that use an InMemoryDocumentStore and external LLM provider (e.g. OpenAI).
+This command installs everything needed for basic Pipelines using InMemoryDocumentStore and an external LLM provider (for example, OpenAI). Use this installation method for basic features such as keyword-based retrieval, web search, and text generation with LLMs, including generative question answering.
 
 ```python
 pip install farm-haystack
@@ -35,42 +35,53 @@ pip install farm-haystack
 {{< /tab >}}
 
 {{< tab tabName="Basic" >}}
 
-This command installs everything you need for basic Pipelines that use an InMemoryDocumentStore, as well as all necessary dependencies for model inference on local machine, including torch.
+This command installs everything needed for basic Pipelines using InMemoryDocumentStore, and the necessary dependencies for model inference on a local machine, including torch. Use this installation option for features such as document retrieval with semantic similarity and extractive question answering.
 
 ```python
-pip install farm-haystack[inference]
+pip install 'farm-haystack[inference]'
 ```
 
 {{< /tab >}}
 
+{{< tab tabName="Custom" >}}
+
+This command installs only the dependencies you specify. Use this installation option when you use various features of Haystack and want to keep the dependency list as small as possible.
+
+```python
+pip install 'farm-haystack[DEPENDENCY_OPTION_1, DEPENDENCY_OPTION_2, DEPENDENCY_OPTION_3...]'
+```
+
+For the full list of dependency options, read the [Custom Installation](https://docs.haystack.deepset.ai/docs/installation#custom-installation) section in the documentation.
+
+{{< /tab >}}
+
 {{< tab tabName="Full" >}}
 
-This command installs further dependencies for more advanced features, like certain DocumentStores, FileConverters, OCR, or Ray.
+This command installs all dependencies required for all document stores, file converters, OCR, Ray, and more. Use this installation option if you don't want to install dependencies separately or if you're still experimenting with Haystack and don't have a final list of features you want to use in your application.
 
 ```python
-pip install --upgrade pip
 pip install 'farm-haystack[all]' ## or 'all-gpu' for the GPU-enabled dependencies
 ```
 
 {{< /tab >}}
 {{< /tabs >}}
 
-For a more comprehensive installation guide, inlcuding methods for various operating systems, refer to our documentation.
+For a more comprehensive installation guide, including methods for various operating systems, refer to our documentation.
 
 {{< button url="https://docs.haystack.deepset.ai/docs/installation" text="Docs: Installation" color="green">}}
 
-## Build Your First RAG Pipeline
+## Build Your First Retrieval Augmented Generation (RAG) Pipeline
 
-Haystack is built around the concept of pipelines. A pipeline is a powerful structure made up of components that can be used to perform a task.
-For example, you can connect together a Retriever and a PromptNode to build a Generative Question Answering pipeline.
+Haystack is built around the concept of pipelines. A pipeline is a powerful structure that performs an NLP task. It's made up of components connected together.
+For example, you can connect a Retriever and a PromptNode to build a Generative Question Answering pipeline that uses your own data.
 
-Try out how Haystack answers questions about Game of Thrones using **Retrieval Augmented Generation (RAG)** approach 👇
+Try out how Haystack answers questions about Game of Thrones using the **RAG** approach 👇
 
-Install Haystack in the minimal form:
+Run the minimal Haystack installation:
 
 ```bash
 pip install farm-haystack
 ```
 
-Ask a question on your data after indexing your data to the DocumentStore and building a RAG pipeline:
+Index your data to the DocumentStore, build a RAG pipeline, and ask a question on your data:
 
 ```python
 from haystack.document_stores import InMemoryDocumentStore
 from haystack.utils import build_pipeline, add_example_data, print_answers
@@ -95,7 +106,7 @@ result = pipeline.run(query="Who is the father of Arya Stark?")
 # For details, like which documents were used to generate the answer, look into the <result> object
 print_answers(result, details="medium")
 ```
-The output of the pipeline will look like this, referencing the documents used to generate the answer:
+The output of the pipeline references the documents used to generate the answer:
 
 ```text
 'Query: Who is the father of Arya Stark?'
@@ -104,6 +115,6 @@ The output of the pipeline will look like this, referencing the documents used t
 'Winterfell. [Document 1, Document 4, Document 5]'}]
 ```
 
-For a hands-on guide to build your first RAG Pipeline, see our tutorial.
+For a hands-on guide on how to build your first RAG Pipeline, see our tutorial.
 
 {{< button url="https://haystack.deepset.ai/tutorials/22_pipeline_with_promptnode" text="Tutorial: Creating a RAG Pipeline" color="green">}}
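
The hunks above show only the first and last lines of the quick-start snippet that the patch touches. For orientation, here is a minimal sketch of how the intermediate steps typically fit together; the middle lines (provider choice, API key, `InMemoryDocumentStore` setup, and the `add_example_data`/`build_pipeline` calls) are not part of the diff, so treat them as assumptions rather than the file's exact content.

```python
from haystack.document_stores import InMemoryDocumentStore
from haystack.utils import build_pipeline, add_example_data, print_answers

# Assumed: pick any supported LLM provider and supply its API key.
provider = "openai"
API_KEY = "ADD_YOUR_API_KEY_HERE"

# Assumed: a lightweight in-memory store with BM25 enabled for retrieval.
document_store = InMemoryDocumentStore(use_bm25=True)

# Assumed: index the example Game of Thrones articles into the store.
add_example_data(document_store, "data/GoT_getting_started")

# Assumed call: connect a Retriever and a PromptNode into a RAG pipeline.
pipeline = build_pipeline(provider, API_KEY, document_store)

# These lines match the context shown in the patch.
result = pipeline.run(query="Who is the father of Arya Stark?")
print_answers(result, details="medium")
```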