How to fuse adapter_model.bin file with base model? #410
Replies: 1 comment
Hi, it seems you are trying to combine adapter_model.bin with the base model using the script from the Alpaca-LoRA repository. To make sure you're doing it correctly, go through these steps:
1. Clone the Alpaca-LoRA repository to your local machine or server.
2. Run the export_hf_checkpoint.py script with the base model specified, as sketched below.
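A minimal sketch of those two steps, assuming the script reads the base model name from the BASE_MODEL environment variable; the checkpoint name below is a placeholder and must match the base your adapter was trained on:

```
!git clone https://github.com/tloen/alpaca-lora.git
%cd alpaca-lora
# BASE_MODEL is read by export_hf_checkpoint.py; decapoda-research/llama-7b-hf
# is a placeholder -- use the base checkpoint your adapter was trained on.
!BASE_MODEL=decapoda-research/llama-7b-hf python export_hf_checkpoint.py
```

If it runs to completion, the merged full-size checkpoint should land in ./hf_ckpt (the script's default output directory), sharded across several files.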
Hi,
I am trying to combine adapter_model.bin with the base model. The repo says to run
https://github.com/tloen/alpaca-lora/blob/main/export_hf_checkpoint.py,
but when I run that file I again get only a small file in ./lora-alpaca.
Am I doing this the wrong way?
I would appreciate some help. For the base model we are using this command:
!python export_hf_checkpoint.py
This is our output:
2023-04-27 12:57:42.419059: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run
python -m bitsandbytes
and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
bin /usr/local/lib/python3.9/dist-packages/bitsandbytes/libbitsandbytes_cuda118.so
/usr/local/lib/python3.9/dist-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: /usr/lib64-nvidia did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
warn(msg)
/usr/local/lib/python3.9/dist-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/sys/fs/cgroup/memory.events /var/colab/cgroup/jupyter-children/memory.events')}
warn(msg)
/usr/local/lib/python3.9/dist-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('8013'), PosixPath('http'), PosixPath('//172.28.0.1')}
warn(msg)
/usr/local/lib/python3.9/dist-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('//colab.research.google.com/tun/m/cc48301118ce562b961b3c22d803539adc1e0c19/gpu-t4-hm-1hhertlphrzkm --tunnel_background_save_delay=10s --tunnel_periodic_background_save_frequency=30m0s --enable_output_coalescing=true --output_coalescing_required=true'), PosixPath('--logtostderr --listen_host=172.28.0.12 --target_host=172.28.0.12 --tunnel_background_save_url=https')}
warn(msg)
/usr/local/lib/python3.9/dist-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/env/python')}
warn(msg)
/usr/local/lib/python3.9/dist-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('//ipykernel.pylab.backend_inline'), PosixPath('module')}
warn(msg)
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
/usr/local/lib/python3.9/dist-packages/bitsandbytes/cuda_setup/main.py:145: UserWarning: Found duplicate ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] files: {PosixPath('/usr/local/cuda/lib64/libcudart.so.11.0'), PosixPath('/usr/local/cuda/lib64/libcudart.so')}.. We'll flip a coin and try one of these, in order to fail forward.
Either way, this might cause trouble in the future:
If you get
CUDA error: invalid device function
errors, the above might be the cause and the solution is to make sure only one ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] in the paths that we search based on your env.
warn(msg)
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so.11.0
CUDA SETUP: Highest compute capability among GPUs detected: 7.5
CUDA SETUP: Detected CUDA version 118
CUDA SETUP: Loading binary /usr/local/lib/python3.9/dist-packages/bitsandbytes/libbitsandbytes_cuda118.so...
Downloading (…)lve/main/config.json: 100% 427/427 [00:00<00:00, 66.9kB/s]
Downloading (…)model.bin.index.json: 100% 25.5k/25.5k [00:00<00:00, 3.82MB/s]
Downloading (…)l-00001-of-00033.bin: 100% 405M/405M [00:06<00:00, 65.8MB/s]
Downloading (…)l-00002-of-00033.bin: 100% 405M/405M [00:02<00:00, 135MB/s]
Downloading (…)l-00003-of-00033.bin: 100% 405M/405M [00:03<00:00, 103MB/s]
Downloading (…)l-00004-of-00033.bin: 100% 405M/405M [00:02<00:00, 136MB/s]
Downloading (…)l-00005-of-00033.bin: 100% 405M/405M [00:03<00:00, 134MB/s]
Downloading (…)l-00006-of-00033.bin: 100% 405M/405M [00:03<00:00, 130MB/s]
Downloading (…)l-00007-of-00033.bin: 100% 405M/405M [00:02<00:00, 135MB/s]
Downloading (…)l-00008-of-00033.bin: 100% 405M/405M [00:03<00:00, 129MB/s]
Downloading (…)l-00009-of-00033.bin: 100% 405M/405M [00:03<00:00, 133MB/s]
Downloading (…)l-00010-of-00033.bin: 100% 405M/405M [00:03<00:00, 134MB/s]
Downloading (…)l-00011-of-00033.bin: 100% 405M/405M [00:02<00:00, 135MB/s]
Downloading (…)l-00012-of-00033.bin: 100% 405M/405M [00:02<00:00, 135MB/s]
Downloading (…)l-00013-of-00033.bin: 100% 405M/405M [00:02<00:00, 135MB/s]
Downloading (…)l-00014-of-00033.bin: 100% 405M/405M [00:03<00:00, 133MB/s]
Downloading (…)l-00015-of-00033.bin: 100% 405M/405M [00:02<00:00, 136MB/s]
Downloading (…)l-00016-of-00033.bin: 100% 405M/405M [00:02<00:00, 136MB/s]
Downloading (…)l-00017-of-00033.bin: 100% 405M/405M [00:03<00:00, 135MB/s]
Downloading (…)l-00018-of-00033.bin: 100% 405M/405M [00:03<00:00, 134MB/s]
Downloading (…)l-00019-of-00033.bin: 100% 405M/405M [00:03<00:00, 132MB/s]
Downloading (…)l-00020-of-00033.bin: 100% 405M/405M [00:02<00:00, 136MB/s]
Downloading (…)l-00021-of-00033.bin: 100% 405M/405M [00:02<00:00, 136MB/s]
Downloading (…)l-00022-of-00033.bin: 100% 405M/405M [00:02<00:00, 137MB/s]
Downloading (…)l-00023-of-00033.bin: 100% 405M/405M [00:03<00:00, 132MB/s]
Downloading (…)l-00024-of-00033.bin: 100% 405M/405M [00:03<00:00, 133MB/s]
Downloading (…)l-00025-of-00033.bin: 100% 405M/405M [00:04<00:00, 97.8MB/s]
Downloading (…)l-00026-of-00033.bin: 100% 405M/405M [00:04<00:00, 99.5MB/s]
Downloading (…)l-00027-of-00033.bin: 100% 405M/405M [00:03<00:00, 132MB/s]
Downloading (…)l-00028-of-00033.bin: 100% 405M/405M [00:02<00:00, 135MB/s]
Downloading (…)l-00029-of-00033.bin: 100% 405M/405M [00:03<00:00, 134MB/s]
Downloading (…)l-00030-of-00033.bin: 100% 405M/405M [00:04<00:00, 92.4MB/s]
Downloading (…)l-00031-of-00033.bin: 100% 405M/405M [00:03<00:00, 104MB/s]
Downloading (…)l-00032-of-00033.bin: 100% 405M/405M [00:04<00:00, 101MB/s]
Downloading (…)l-00033-of-00033.bin: 100% 524M/524M [00:03<00:00, 136MB/s]
Loading checkpoint shards: 100% 33/33 [01:14<00:00, 2.26s/it]
Downloading (…)neration_config.json: 100% 124/124 [00:00<00:00, 22.0kB/s]
Downloading (…)/adapter_config.json: 100% 350/350 [00:00<00:00, 191kB/s]
Downloading adapter_model.bin: 100% 8.43M/8.43M [00:00<00:00, 36.0MB/s]
As per my understanding, if we are combining the adapter with the base model we should get a bigger file, but we again get a small file. Am I wrong here?
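For context, a small adapter file is expected: a LoRA adapter stores only the low-rank update matrices, not the full model weights. A back-of-the-envelope check, assuming the alpaca-lora default LoRA config (r=8, target modules q_proj and v_proj) and LLaMA-7B dimensions, lands close to the 8.43M adapter_model.bin in the log above:

```python
# Rough size estimate for a LoRA adapter. All hyperparameters below are
# assumptions (alpaca-lora defaults); check adapter_config.json for the
# actual values used for your adapter.
layers, hidden = 32, 4096        # LLaMA-7B: 32 decoder layers, hidden size 4096
r, n_targets = 8, 2              # lora_r=8; target modules q_proj and v_proj
params_per_module = 2 * r * hidden             # A (r x hidden) plus B (hidden x r)
total_params = layers * n_targets * params_per_module
print(f"{total_params:,} params")              # 4,194,304
print(f"{total_params * 2 / 1e6:.1f} MB")      # ~8.4 MB at 2 bytes/param (fp16)
```

The full-size merged model is a separate artifact: in the linked version of export_hf_checkpoint.py it is saved, sharded, to ./hf_ckpt rather than to ./lora-alpaca. Note also that the log shows adapter_config.json and adapter_model.bin being downloaded from the Hub, which suggests the script is loading a hosted adapter (it ships pointing at tloen/alpaca-lora-7b); to merge your own adapter from ./lora-alpaca, change the path passed to PeftModel.from_pretrained in the script.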