diff --git a/tutorials/finetune.md b/tutorials/finetune.md
index 2bfae2a013..e2d2d5b321 100644
--- a/tutorials/finetune.md
+++ b/tutorials/finetune.md
@@ -62,7 +62,7 @@ or
 litgpt finetune adapter_v2
 ```
 
-Simillar to LoRA, adapter finetuning is a parameter-efficient finetuning technique that only requires training a small subset of weight parameters, making this finetuning method more memory-efficient than full-parameter finetuning.
+Similar to LoRA, adapter finetuning is a parameter-efficient finetuning technique that only requires training a small subset of weight parameters, making this finetuning method more memory-efficient than full-parameter finetuning.
 
 **More information and resources:**
 