diff --git a/.typo-ci.yml b/.typo-ci.yml
index c992f599e0d2..69ff4f1f6cf0 100644
--- a/.typo-ci.yml
+++ b/.typo-ci.yml
@@ -172,5 +172,6 @@ excluded_words:
   - crfentityextractor
   - Comerica
   - entitysynonymmapper
+  - memoizationpolicy
 
 spellcheck_filenames: false
diff --git a/docs/docs/business-logic.mdx b/docs/docs/business-logic.mdx
index 6bacf09e3a7f..0f0d507a4673 100644
--- a/docs/docs/business-logic.mdx
+++ b/docs/docs/business-logic.mdx
@@ -4,7 +4,7 @@ sidebar_label: Handling Business Logic
 title: Handling Business Logic
 ---
 
-Conversational assistants often suppor user goals that involve collecting required information
+Conversational assistants often support user goals that involve collecting required information
 from the user before doing something for them. For example, a restaurant search bot would
 need to gather a few pieces of information about the user's preferences to find them
 a suitable restaurant:
@@ -309,6 +309,6 @@ example above, this is a summary of what you'll need to do:
 
 To try out your newly defined form, retrain the bot and start `rasa shell`.
 If your final action is a custom action, you'll need to start the action server
-in a seperate terminal when running `rasa shell`. If you're using `DucklingEntityExtractor` to extract
+in a separate terminal when running `rasa shell`. If you're using `DucklingEntityExtractor` to extract
 entities, you'll need to start Duckling in the background as well (see the
 [instructions for running Duckling](entity-extractors.mdx#ducklingentityextractor)).
diff --git a/docs/docs/chitchat-faqs.mdx b/docs/docs/chitchat-faqs.mdx
index ee78d17bcbcd..01a7fd529d6b 100644
--- a/docs/docs/chitchat-faqs.mdx
+++ b/docs/docs/chitchat-faqs.mdx
@@ -80,7 +80,7 @@ pipeline:
 ```
 
 By default, the `ResponseSelector` will build a single retrieval model for all retrieval intents.
-If you want to retrieve responses for FAQs and chitchat seperately, you can use multiple `ResponseSelector` components
+If you want to retrieve responses for FAQs and chitchat separately, you can use multiple `ResponseSelector` components
 and specify the `retrieval_intent` key, for example:
 
 ```rasa-yaml
@@ -141,7 +141,7 @@ nlu:
     - what language does rasa support?
    - which language do you support?
     - which languages supports rasa
-    - can I use rasa also for another laguage?
+    - can I use rasa also for another language?
     - languages supported
 - intent: faq/ask_rasax
   examples: |
diff --git a/docs/docs/command-line-interface.mdx b/docs/docs/command-line-interface.mdx
index 4ece80398364..0325cb442960 100644
--- a/docs/docs/command-line-interface.mdx
+++ b/docs/docs/command-line-interface.mdx
@@ -250,7 +250,7 @@ the following arguments:
 ```
 
 This command will attempt to keep the proportions of intents the same in train and test.
-If you have NLG data for retrieval actions, this will be saved to seperate files:
+If you have NLG data for retrieval actions, this will be saved to separate files:
 
 ```bash
 ls train_test_split
diff --git a/docs/docs/contextual-conversations.mdx b/docs/docs/contextual-conversations.mdx
index 7f028a09c46e..d9cf1419f8f1 100644
--- a/docs/docs/contextual-conversations.mdx
+++ b/docs/docs/contextual-conversations.mdx
@@ -112,7 +112,7 @@ policies:
 You want to make sure `max_history` is set high enough to account
 for the most context your assistant will need to make an accurate
-prediction about awhat to do next.
+prediction about what to do next.
 
 For more details see the docs on [conversation featurization](policies.mdx#conversationfeaturization).
 
 ### `TEDPolicy`
diff --git a/docs/docs/fallback-handoff.mdx b/docs/docs/fallback-handoff.mdx
index c06b9b2c623e..1d8135da8f1a 100644
--- a/docs/docs/fallback-handoff.mdx
+++ b/docs/docs/fallback-handoff.mdx
@@ -231,7 +231,7 @@ starterpacks.
 You'll need to define rules for fallback and out-of-scope situations.
 You'll only need NLU training data for out-of-scope intents.
 For human handoff, no training data is needed unless you want
-a seperate rule for handing off to a human vs. Two-Stage-Fallback,
+a separate rule for handing off to a human vs. Two-Stage-Fallback,
 or if you want to create a `human_handoff` intent that can be predicted directly.
 
 ### NLU Training Data
@@ -278,20 +278,13 @@ rules:
 
 Using [Rules](./rules.mdx) or [Stories](./stories.mdx) you can implement any
 desired fallback behavior.
 
-<<<<<<< HEAD
-* `data/nlu.yml`:
-=======
 #### Rules for Two-Stage Fallback
->>>>>>> 9a6687ebb2b... Conversation patterns
 
 To use the `Two-Stage-Fallback` for messages with low NLU confidence, add the
 following [Rule](./rules.mdx) to your training data. This rule will make sure that the `Two-Stage-Fallback` will be
 activated whenever a message is received with low classification confidence.
 
-<<<<<<< HEAD
-* `data/stories.yml`:
-=======
 ```yaml
 rules:
 - rule: Implementation of the Two-Stage-Fallback
@@ -300,7 +293,6 @@ rules:
   - action: two_stage_fallback
   - form: two_stage_fallback
 ```
->>>>>>> 9a6687ebb2b... Conversation patterns
 
 #### Rules for out-of-scope requests