Commit

typos (#9314)
Signed-off-by: Nithin Rao Koluguri <nithinraok>
Co-authored-by: Nithin Rao Koluguri <nithinraok>
nithinraok authored May 25, 2024
1 parent 7235f2b commit 0411b7c
Showing 5 changed files with 7 additions and 7 deletions.
2 changes: 1 addition & 1 deletion tutorials/00_NeMo_Primer.ipynb
@@ -588,7 +588,7 @@
"id": "U7Eezf_sAVS0"
},
"source": [
-"You might wonder why we didnt explicitly set `citrinet.cfg.optim = cfg.optim`. \n",
+"You might wonder why we didn't explicitly set `citrinet.cfg.optim = cfg.optim`. \n",
"\n",
"This is because the `setup_optimization()` method does it for you! You can still update the config manually."
]
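The cell above notes that `setup_optimization()` attaches the optimizer config to the model, so an explicit `citrinet.cfg.optim = cfg.optim` assignment is unnecessary. A toy illustration of that pattern (a hypothetical class, not the actual NeMo implementation):

```python
from types import SimpleNamespace

class ToyModel:
    """Hypothetical stand-in mimicking the behavior described in the cell:
    the setup method stores the optimizer config on the model for you."""

    def __init__(self):
        self.cfg = SimpleNamespace(optim=None)

    def setup_optimization(self, optim_cfg):
        # Record the config on the model's cfg (the real method would
        # also build the optimizer and scheduler from it).
        self.cfg.optim = optim_cfg

optim_cfg = {"name": "adamw", "lr": 1e-3}
model = ToyModel()
model.setup_optimization(optim_cfg)
print(model.cfg.optim)  # attached without a manual assignment
```

The config can still be updated manually afterwards, e.g. `model.cfg.optim = new_cfg`, as the cell says.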
4 changes: 2 additions & 2 deletions tutorials/asr/ASR_Confidence_Estimation.ipynb
@@ -284,7 +284,7 @@
" eps_padded_hyp, labels, padded_labels, fill_confidence_deletions(confidence_scores, labels)\n",
" ):\n",
" word_len = len(word)\n",
-" # shield angle brakets for <eps>\n",
+" # shield angle brackets for <eps>\n",
" if html and word == \"<eps>\":\n",
" word = \"&lt;eps&gt;\"\n",
" if current_line_len + word_len + 1 <= terminal_width:\n",
@@ -307,7 +307,7 @@
" current_word_line = \"\"\n",
" for word, score in zip(transcript_list, confidence_scores):\n",
" word_len = len(word)\n",
-" # shield angle brakets for <eps>\n",
+" # shield angle brackets for <eps>\n",
" if html and word == \"<eps>\":\n",
" word = \"&lt;eps&gt;\"\n",
" if current_line_len + word_len + 1 <= terminal_width:\n",
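The snippet around these hunks greedily wraps words to a terminal width and escapes `<eps>` (the deletion marker) for HTML output. A self-contained sketch of that logic (simplified; the real notebook also interleaves labels and confidence scores):

```python
import html

def wrap_words(words, terminal_width, escape_html=False):
    """Greedy word wrap as in the tutorial snippet: append a word to the
    current line if it fits, otherwise start a new line. `<eps>` is
    escaped for HTML so its angle brackets are not parsed as a tag."""
    lines, current_line, current_len = [], [], 0
    for word in words:
        # Width accounting uses the raw word length, as in the snippet,
        # even when the displayed form is the longer escaped string.
        word_len = len(word)
        shown = html.escape(word) if escape_html and word == "<eps>" else word
        if current_len + word_len + 1 <= terminal_width:
            current_line.append(shown)
            current_len += word_len + 1
        else:
            lines.append(" ".join(current_line))
            current_line, current_len = [shown], word_len + 1
    if current_line:
        lines.append(" ".join(current_line))
    return lines

print(wrap_words(["hello", "<eps>", "world"], 12, escape_html=True))
```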
2 changes: 1 addition & 1 deletion tutorials/asr/ASR_Context_Biasing.ipynb
@@ -361,7 +361,7 @@
"source": [
"## Create a context-biasing list\n",
"\n",
-"Now, we need to select the words, recognition of wich we want to improve by CTC-WS context-biasing.\n",
+"Now, we need to select the words, recognition of which we want to improve by CTC-WS context-biasing.\n",
"Usually, we select only nontrivial words with the lowest recognition accuracy.\n",
"Such words should have a character length >= 3 because short words in a context-biasing list may produce high false-positive recognition.\n",
"In this toy example, we will select all the words that look like names with a recognition accuracy less than 1.0.\n",
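The selection rule described in this cell (keep words of length >= 3 whose recognition accuracy is below 1.0) can be sketched as a simple filter. The accuracy values below are made up for illustration; in the tutorial, per-word accuracy comes from comparing reference and hypothesis transcripts:

```python
# Hypothetical per-word recognition accuracies.
word_accuracy = {
    "nvidia": 0.8,
    "riva": 0.5,
    "to": 0.9,     # excluded: shorter than 3 characters
    "hello": 1.0,  # excluded: already recognized perfectly
}

# Keep nontrivial words (len >= 3) with imperfect recognition.
cb_words = sorted(
    word for word, acc in word_accuracy.items()
    if len(word) >= 3 and acc < 1.0
)
print(cb_words)  # ['nvidia', 'riva']
```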
4 changes: 2 additions & 2 deletions tutorials/asr/Speech_Commands.ipynb
@@ -1431,10 +1431,10 @@
"# Lets change the scheduler\n",
"optim_sched_cfg.sched.name = \"CosineAnnealing\"\n",
"\n",
-"# \"power\" isnt applicable to CosineAnnealing so let's remove it\n",
+"# \"power\" isn't applicable to CosineAnnealing so let's remove it\n",
"optim_sched_cfg.sched.pop('power')\n",
"\n",
-"# \"hold_ratio\" isnt applicable to CosineAnnealing, so let's remove it\n",
+"# \"hold_ratio\" isn't applicable to CosineAnnealing, so let's remove it\n",
"optim_sched_cfg.sched.pop('hold_ratio')\n",
"\n",
"# Set \"min_lr\" to lower value\n",
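The config surgery in this cell can be shown with a plain dict (the notebook operates on an OmegaConf object, which exposes the same `pop` interface; the starting keys below are assumed for illustration):

```python
# Assumed starting scheduler config with keys specific to a
# polynomial-style schedule.
sched_cfg = {
    "name": "PolynomialHoldDecayAnnealing",
    "power": 2.0,
    "hold_ratio": 0.1,
    "min_lr": 1e-4,
}

# Switch to CosineAnnealing and drop the keys that do not apply to it,
# then lower the floor learning rate.
sched_cfg["name"] = "CosineAnnealing"
sched_cfg.pop("power")
sched_cfg.pop("hold_ratio")
sched_cfg["min_lr"] = 1e-6

print(sched_cfg)
```

Popping inapplicable keys matters because a scheduler constructor typically rejects unexpected keyword arguments.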
2 changes: 1 addition & 1 deletion tutorials/nlp/Joint_Intent_and_Slot_Classification.ipynb
@@ -749,7 +749,7 @@
"source": [
"### Optimizing Threshold\n",
"\n",
-"As mentioned above, when classifiying a given query such as `show all flights and fares from denver to san francisco`, our model checks whether each individual intent would be suitable. Before assigning the final labels for a query, the model assigns a probability an intent matches the query. For example, if our `dict.intents.csv` had 5 different intents, then the model could output for a given query \[0.52, 0.38, 0.21, 0.67. 0.80\] where each value represents the probability that query matches that particular intent. \n",
+"As mentioned above, when classifying a given query such as `show all flights and fares from denver to san francisco`, our model checks whether each individual intent would be suitable. Before assigning the final labels for a query, the model assigns a probability an intent matches the query. For example, if our `dict.intents.csv` had 5 different intents, then the model could output for a given query \[0.52, 0.38, 0.21, 0.67. 0.80\] where each value represents the probability that query matches that particular intent. \n",
"\n",
"We need to use these probabilities to generate final label predictions of 0 or 1 for each label. While we can use 0.5 as the probability threshold, it is usually the case that there is a better threshold to use depending on the metric we want to optimize. For this tutorial, we will be finding the threshold that gives us the best micro-F1 score on the validation set. After running the `optimize_threshold` method, the threshold attribute for our model will be updated."
]
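The thresholding step this cell describes (turning per-intent probabilities into 0/1 labels) can be sketched directly, using the example probabilities from the paragraph. The tutorial's `optimize_threshold` then searches for the threshold that maximizes micro-F1 on the validation set instead of fixing it at 0.5:

```python
def predict_labels(probabilities, threshold=0.5):
    """Assign label 1 to every intent whose probability meets the
    threshold, 0 otherwise (multi-label, so several intents may fire)."""
    return [1 if p >= threshold else 0 for p in probabilities]

# Example per-intent probabilities from the cell above.
probs = [0.52, 0.38, 0.21, 0.67, 0.80]
print(predict_labels(probs))                 # default 0.5 threshold
print(predict_labels(probs, threshold=0.7))  # stricter threshold keeps fewer intents
```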
