
Model conversion from HF to GGUF crashes due to lack of memory #120

Open
philtomson opened this issue Nov 21, 2024 · 3 comments

Comments


philtomson commented Nov 21, 2024

This was for the 8B-param model in the instructions; it quickly ran through the 32 GB of RAM on my Linux PC. Is there someplace that pre-converted GGUF versions of these models might be hosted, so that this conversion wouldn't need to be done locally?


kth8 commented Nov 22, 2024

@lfoppiano (Contributor) commented:

Same issue here. On my M2 MacBook Air with 16 GB of RAM it worked, but on Linux with 32 GB the process gets killed by the OS 🤔

@lfoppiano (Contributor) commented:

On Linux, I solved it by using a 32 GB RAM machine plus 10 GB of swap.
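For anyone hitting the same OOM kill, the workaround above can be reproduced with the standard Linux swap-file commands. This is only a sketch of one way to do it: the 10 GB size and the `/swapfile` path are assumptions (match the size to how far the conversion overshoots your RAM), and the swap only persists until reboot unless you also add an `/etc/fstab` entry.

```shell
# Create and enable a 10 GB swap file (requires root).
sudo fallocate -l 10G /swapfile   # allocate the backing file
sudo chmod 600 /swapfile          # swap files must not be world-readable
sudo mkswap /swapfile             # write the swap signature
sudo swapon /swapfile             # enable it for the current boot
free -h                           # verify the extra swap shows up
```

After the conversion finishes, the swap can be removed again with `sudo swapoff /swapfile && sudo rm /swapfile`.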


3 participants