
Build fails: Failed to copy server binary to output directory: No such file or directory (os error 2) #3399

Open
yurivict opened this issue Nov 11, 2024 · 3 comments


@yurivict

Describe the bug

 -- Installing: /wrkdirs/usr/ports/devel/tabby/work/target/release/build/llama-cpp-server-8837603d1835d022/out/bin/llama-tokenize
  cargo:root=/wrkdirs/usr/ports/devel/tabby/work/target/release/build/llama-cpp-server-8837603d1835d022/out

  --- stderr
  CMake Warning at cmake/build-info.cmake:14 (message):
    Git not found.  Build info will not be accurate.
  Call Stack (most recent call first):
    CMakeLists.txt:77 (include)


  CMake Warning at ggml/src/CMakeLists.txt:274 (message):
    AMX requires gcc version > 11.0.  Turning off GGML_AMX.


  CMake Warning at common/CMakeLists.txt:30 (message):
    Git repository not found; to enable automatic generation of build info,
    make sure Git is installed and the project is a Git repository.


  CMake Warning:
    Manually-specified variables were not used by the project:

      CMAKE_ASM_COMPILER
      CMAKE_ASM_FLAGS


  thread 'main' panicked at crates/llama-cpp-server/build.rs:66:36:
  Failed to copy server binary to output directory: No such file or directory (os error 2)

Information about your version
0.20.0

Additional context
FreeBSD 14.1
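
For context on the panic: the crate's build script compiles llama.cpp via CMake and then copies the resulting llama-server binary into cargo's output directory. A minimal sketch of that pattern (paths and names here are illustrative, not Tabby's actual crates/llama-cpp-server/build.rs) shows why a failed CMake step surfaces as os error 2:

  // Illustrative sketch only, not Tabby's actual build script.
  use std::{env, fs, path::Path};

  fn main() {
      let out_dir = env::var("OUT_DIR").expect("cargo sets OUT_DIR");
      // Hypothetical path where the CMake step installs the server binary.
      let built = Path::new(&out_dir).join("bin").join("llama-server");
      // If the CMake build exited without producing llama-server, `built`
      // does not exist, fs::copy fails with ENOENT, and the panic carries
      // "No such file or directory (os error 2)" as seen in the log above.
      fs::copy(&built, Path::new(&out_dir).join("llama-server"))
          .unwrap_or_else(|e| panic!("Failed to copy server binary to output directory: {e}"));
  }

So the copy failure is a symptom: the real error is whatever stopped CMake from producing llama-server earlier in the build.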

@zwpaper
Member

zwpaper commented Nov 11, 2024

Hi @yurivict, it looks like llama-server was not built successfully. Could you provide some more details, such as

  1. the command you used
  2. the full build log

so that we can dig deeper into it?
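
For example (illustrative only; the paths in the report suggest the FreeBSD ports tree, so adapt to however the port is actually built), rerunning the build while capturing stdout and stderr together:

  # hypothetical: run from the port directory and keep the full log
  make 2>&1 | tee build.log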

@yurivict
Author

Here is a complete log.

Also: we already have a llama-cpp package available, so there should be no need to bundle it in tabby.
Is it possible to use the external llama-cpp package?

@wsxiaoys
Member

Also: we already have a llama-cpp package available, so there should be no need to bundle it in tabby.

Yes, you can turn off the llama-cpp binary build by disabling this feature:
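
Assuming the bundled llama.cpp build is gated behind a default cargo feature (the exact feature name lives in the snippet this comment points to; the flag below is only a placeholder), the opt-out would be along these lines:

  # hypothetical invocation: take the real feature list from the referenced code
  cargo build --release --no-default-features --features <features-you-still-want>

With the bundled build disabled, tabby would then have to find an externally installed llama-server at runtime, which the FreeBSD llama-cpp package could presumably provide.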
