From bbae7af5e3ef20db615b1429529cee3f75abde83 Mon Sep 17 00:00:00 2001
From: Sebastian Raschka
Date: Mon, 25 Mar 2024 16:51:04 -0500
Subject: [PATCH] Fix link in Readme (#1191)

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 57b28b4108..d6be0e3625 100644
--- a/README.md
+++ b/README.md
@@ -27,7 +27,7 @@
 
 ✅  Optimized and efficient code: Flash Attention v2, multi-GPU support via fully-sharded data parallelism, [optional CPU offloading](tutorials/oom.md#do-sharding-across-multiple-gpus), and [TPU and XLA support](extensions/xla).
 
-✅  [Pretraining](tutorials/pretraining.md), [finetuning](tutorials/finetune.md), and [inference](tutorials/inference.md) in various precision settings: FP32, FP16, BF16, and FP16/FP32 mixed.
+✅  [Pretraining](tutorials/pretrain_tinyllama.md), [finetuning](tutorials/finetune.md), and [inference](tutorials/inference.md) in various precision settings: FP32, FP16, BF16, and FP16/FP32 mixed.
 
 ✅  [Configuration files](config_hub) for great out-of-the-box performance.