
Commit
Extensions directory (#1169)
carmocca authored Mar 20, 2024
1 parent 95cc753 commit 12da7e7
Showing 7 changed files with 11 additions and 11 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -25,7 +25,7 @@

 [The latest model weights](tutorials/download_model_weights.md): Gemma, Mistral, Mixtral, Phi 2, Llama 2, Falcon, CodeLlama, and [many more](tutorials/download_model_weights.md).

-Optimized and efficient code: Flash Attention v2, multi-GPU support via fully-sharded data parallelism, [optional CPU offloading](tutorials/oom.md#do-sharding-across-multiple-gpus), and [TPU and XLA support](./xla).
+Optimized and efficient code: Flash Attention v2, multi-GPU support via fully-sharded data parallelism, [optional CPU offloading](tutorials/oom.md#do-sharding-across-multiple-gpus), and [TPU and XLA support](extensions/xla).

 [Pretraining](tutorials/pretraining.md), [finetuning](tutorials/finetune.md), and [inference](tutorials/inference.md) in various precision settings: FP32, FP16, BF16, and FP16/FP32 mixed.
@@ -344,7 +344,7 @@ helping democratize AI for millions of developers and researchers worldwide.
 Using TPUs with Lightning is as straightforward as changing one line of code.
-We provide scripts fully optimized for TPUs in the [XLA directory](xla).
+We provide scripts fully optimized for TPUs in the [XLA directory](extensions/xla).
File renamed without changes.
@@ -19,12 +19,12 @@
 from litgpt.utils import check_valid_checkpoint_dir, chunked_cross_entropy, estimate_flops, lazy_load, num_parameters

 # support running without installing as a package
-wd = Path(__file__).parent.parent.parent.resolve()
+wd = Path(__file__).parents[3].resolve()
 sys.path.append(str(wd))

-from xla.generate.base import generate
-from xla.scripts.prepare_alpaca import generate_prompt
-from xla.utils import rank_print, sequential_load_and_fsdp_wrap
+from extensions.xla.generate.base import generate
+from extensions.xla.scripts.prepare_alpaca import generate_prompt
+from extensions.xla.utils import rank_print, sequential_load_and_fsdp_wrap

 eval_interval = 200
 save_interval = 200
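The `wd` lines in these hunks swap `parent.parent.parent` for `parents[3]` because each script moved one directory deeper (from `xla/…` to `extensions/xla/…`), so the repository root is now one more level up. A minimal sketch of the `Path.parents` indexing, assuming a hypothetical checkout at `/repo`:

```python
from pathlib import Path

# Hypothetical path mirroring the new layout after the move.
p = Path("/repo/extensions/xla/finetune/adapter.py")

# `parent.parent.parent` is the same as `parents[2]` (indexing is 0-based).
assert p.parent.parent.parent == p.parents[2] == Path("/repo/extensions")

# With the extra `extensions/` level, the repo root is parents[3].
assert p.parents[3] == Path("/repo")
```

This is why the index is 3 in the new code, not 2: the count is taken from the file's new, deeper location.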
@@ -15,11 +15,11 @@
 from litgpt.utils import check_valid_checkpoint_dir, lazy_load

 # support running without installing as a package
-wd = Path(__file__).parent.parent.parent.resolve()
+wd = Path(__file__).parents[3].resolve()
 sys.path.append(str(wd))

-from xla.generate.base import generate
-from xla.utils import rank_print
+from extensions.xla.generate.base import generate
+from extensions.xla.utils import rank_print


 def setup(
4 changes: 2 additions & 2 deletions xla/generate/base.py → extensions/xla/generate/base.py
@@ -16,10 +16,10 @@
 from litgpt.utils import check_valid_checkpoint_dir, lazy_load

 # support running without installing as a package
-wd = Path(__file__).parent.parent.parent.resolve()
+wd = Path(__file__).parents[3].resolve()
 sys.path.append(str(wd))

-from xla.utils import rank_print
+from extensions.xla.utils import rank_print


 # xla does not support `inference_mode`: RuntimeError: Cannot set version_counter for inference tensor
File renamed without changes.
File renamed without changes.
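Every moved script repeats the same "support running without installing as a package" idiom: compute the repository root relative to the current file and put it on `sys.path` so the `extensions.xla.*` imports resolve without a `pip install`. A sketch of that idiom as a reusable helper; the function name, signature, and example path below are illustrative, not part of the repository:

```python
import sys
from pathlib import Path


def add_repo_root(file: str, depth: int = 3) -> str:
    """Append the directory `depth` levels above `file` to sys.path.

    Hypothetical helper mirroring the diff's idiom; in the moved scripts
    the equivalent inline code uses depth 3, since they now live at
    extensions/xla/<subdir>/<script>.py relative to the repo root.
    """
    wd = Path(file).resolve().parents[depth]
    if str(wd) not in sys.path:
        sys.path.append(str(wd))
    return str(wd)


# Example with a hypothetical checkout at /repo.
root = add_repo_root("/repo/extensions/xla/generate/base.py", depth=3)
```

One caveat of this pattern: `sys.path.append` puts the root at the *end* of the search path, so an installed package of the same name would still win; the original scripts accept that trade-off for simplicity.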
