Commit 5873b18: typo fixes

indam23 committed Sep 1, 2020
1 parent 661b6d1 commit 5873b18
Showing 6 changed files with 8 additions and 15 deletions.
1 change: 1 addition & 0 deletions .typo-ci.yml
@@ -172,5 +172,6 @@ excluded_words:
- crfentityextractor
- Comerica
- entitysynonymmapper
+- memoizationpolicy

spellcheck_filenames: false
4 changes: 2 additions & 2 deletions docs/docs/business-logic.mdx
@@ -4,7 +4,7 @@ sidebar_label: Handling Business Logic
title: Handling Business Logic
---

-Conversational assistants often suppor user goals that involve collecting required information
+Conversational assistants often support user goals that involve collecting required information
from the user before doing something for them. For example, a restaurant search bot would need to gather a few pieces of information
about the user's preferences to find them
a suitable restaurant:
@@ -309,6 +309,6 @@ example above, this is a summary of what you'll need to do:

To try out your newly defined form, retrain the bot and start `rasa shell`.
If your final action is a custom action, you'll need to start the action server
-in a seperate terminal when running `rasa shell`. If you're using `DucklingEntityExtractor` to extract
+in a separate terminal when running `rasa shell`. If you're using `DucklingEntityExtractor` to extract
entities, you'll need to start Duckling in the background as well
(see the [instructions for running Duckling](entity-extractors.mdx#ducklingentityextractor)).
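For reference, trying the form end to end involves a few separate processes. A minimal sketch of the commands, assuming the standard Rasa 2.x CLI and the `rasa/duckling` Docker image described in the entity extractor docs:

```bash
# Retrain the assistant so it picks up the newly defined form
rasa train

# In a separate terminal: start the action server for custom actions
rasa run actions

# If DucklingEntityExtractor is configured: start Duckling in the background
docker run -d -p 8000:8000 rasa/duckling

# Finally, talk to the assistant
rasa shell
```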
4 changes: 2 additions & 2 deletions docs/docs/chitchat-faqs.mdx
@@ -80,7 +80,7 @@ pipeline:
```

By default, the `ResponseSelector` will build a single retrieval model for all retrieval intents.
-If you want to retrieve responses for FAQs and chitchat seperately, you can use multiple `ResponseSelector` components
+If you want to retrieve responses for FAQs and chitchat separately, you can use multiple `ResponseSelector` components
and specify the `retrieval_intent` key, for example:

```rasa-yaml
@@ -141,7 +141,7 @@ nlu:
- what language does rasa support?
- which language do you support?
- which languages supports rasa
-- can I use rasa also for another laguage?
+- can I use rasa also for another language?
- languages supported
- intent: faq/ask_rasax
examples: |
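For context, the multiple-`ResponseSelector` setup referenced above builds one retrieval model per retrieval intent. A minimal sketch of such a pipeline snippet, assuming the `retrieval_intent` key takes the intent name:

```rasa-yaml
pipeline:
# ... featurizers and intent classifier ...
- name: ResponseSelector
  retrieval_intent: faq
- name: ResponseSelector
  retrieval_intent: chitchat
```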
2 changes: 1 addition & 1 deletion docs/docs/command-line-interface.mdx
@@ -250,7 +250,7 @@ the following arguments:
```

This command will attempt to keep the proportions of intents the same in train and test.
-If you have NLG data for retrieval actions, this will be saved to seperate files:
+If you have NLG data for retrieval actions, this will be saved to separate files:

```bash
ls train_test_split
```
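For context, the split above is produced by the `rasa data split` command. A minimal usage sketch, assuming the default training fraction of 0.8:

```bash
# Split NLU (and NLG) data while keeping intent proportions stable
rasa data split nlu --training-fraction 0.8

# Inspect the resulting files
ls train_test_split
```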
2 changes: 1 addition & 1 deletion docs/docs/contextual-conversations.mdx
@@ -112,7 +112,7 @@ policies:

You want to make sure `max_history` is set high enough
to account for the most context your assistant will need to make an accurate
-prediction about awhat to do next.
+prediction about what to do next.
For more details see the docs on [conversation featurization](policies.mdx#conversationfeaturization).
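For illustration, raising `max_history` on a policy that supports it looks like this; a sketch, with `MemoizationPolicy` and the value 7 chosen only as an example:

```rasa-yaml
policies:
- name: MemoizationPolicy
  max_history: 7
```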

### `TEDPolicy`
10 changes: 1 addition & 9 deletions docs/docs/fallback-handoff.mdx
@@ -231,7 +231,7 @@ starterpacks.
You'll need to define rules for fallback and out-of-scope situations.
You'll only need NLU training data for out-of-scope intents.
For human handoff, no training data is needed unless you want
-a seperate rule for handing off to a human vs. Two-Stage-Fallback,
+a separate rule for handing off to a human vs. Two-Stage-Fallback,
or if you want to create a `human_handoff` intent that can be predicted directly.

### NLU Training Data
@@ -278,20 +278,13 @@ rules:
Using [Rules](./rules.mdx) or [Stories](./stories.mdx) you can implement any desired
fallback behavior.

-<<<<<<< HEAD
-* `data/nlu.yml`:
-=======
#### Rules for Two-Stage Fallback
->>>>>>> 9a6687ebb2b... Conversation patterns

To use the `Two-Stage-Fallback` for messages with low NLU confidence, add the
following [Rule](./rules.mdx) to your training data. This rule will make sure that the
`Two-Stage-Fallback` will be activated whenever a message is received with
low classification confidence.

-<<<<<<< HEAD
-* `data/stories.yml`:
-=======
```yaml
rules:
- rule: Implementation of the Two-Stage-Fallback
@@ -300,7 +293,6 @@ rules:
- action: two_stage_fallback
- form: two_stage_fallback
```
->>>>>>> 9a6687ebb2b... Conversation patterns


#### Rules for out-of-scope requests
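The heading above introduces rules for out-of-scope requests, which follow the same shape as the Two-Stage-Fallback rule shown earlier. A sketch, assuming an `out_of_scope` intent and an `utter_out_of_scope` response as described earlier in the section:

```rasa-yaml
rules:
- rule: Respond to out-of-scope requests
  steps:
  - intent: out_of_scope
  - action: utter_out_of_scope
```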
