
Torchchat on Android crashes on second prompt #1395

Open
infil00p opened this issue Nov 25, 2024 · 4 comments
Labels: bug (Something isn't working), Mobile - Android (Issues Related to the Android Workflow)

Comments

@infil00p (Contributor)

🐛 Describe the bug

Device Info:
Device: Google Pixel 9
Android Version: 15
API Level: 35

Steps to reproduce the bug:

  • Follow the steps in the documentation for llama-3.2-3b-instruct and copy both the llama model and the associated tokenizer model into the temp directory
  • Load torchchat in Android Studio
  • Type a random prompt for the first prompt
  • After the first prompt completes, write a second prompt

Expected:
The Llama model should produce output for the second prompt

What happened:

2024-11-25 14:52:50.659 19932-20110 ExecuTorch              org.pytorch.torchchat                I  RSS after loading model: 2391.855469 MiB (0 if unsupported)
2024-11-25 14:52:50.660 19932-20110 ExecuTorch              org.pytorch.torchchat                A  In function generate(), assert failed (num_prompt_tokens < metadata_.at(kMaxSeqLen)): num_prompt_tokens 140 >= max_seq_len_ 128, Max seq length exceeded - please increase max seq len value in .../llama2/model.py
2024-11-25 14:52:50.661 19932-20110 libc                    org.pytorch.torchchat                A  Fatal signal 6 (SIGABRT), code -1 (SI_QUEUE) in tid 20110 (pool-3-thread-1), pid 19932 (torch.torchchat)
2024-11-25 14:52:50.782 19932-20004 HWUI                    org.pytorch.torchchat                I  Davey! duration=3084ms; Flags=0, FrameTimelineVsyncId=8776715, IntendedVsync=648189722387971, Vsync=648192674316622, InputEventId=276502475, HandleInputStart=648192688975798, AnimationStart=648192689011239, PerformTraversalsStart=648192689012013, DrawStart=648192795535247, FrameDeadline=648189738987971, FrameInterval=648192688396331, FrameStartTime=16677563, SyncQueued=648192798785491, SyncStart=648192799176116, IssueDrawCommandsStart=648192799894540, SwapBuffers=648192802982390, FrameCompleted=648192807580860, DequeueBufferDuration=332927, QueueBufferDuration=514974, GpuCompleted=648192807580860, SwapBuffersCompleted=648192803706715, DisplayPresentTime=648178625308026, CommandSubmissionCompleted=648192802982390, 
2024-11-25 14:52:51.146 20135-20135 DEBUG                                                        A  Cmdline: org.pytorch.torchchat
2024-11-25 14:52:51.146 20135-20135 DEBUG                                                        A  pid: 19932, tid: 20110, name: pool-3-thread-1  >>> org.pytorch.torchchat <<<
2024-11-25 14:52:51.146 20135-20135 DEBUG                                                        A        #01 pc 00000000015fdf54  /data/app/~~vQVnC2iQW4Ws7d2zTIrIEQ==/org.pytorch.torchchat-ta0utv2u-lYvdiOSyxOdLA==/lib/arm64/libexecutorch.so (et_pal_abort+8) (BuildId: 87abca08e486390fd661b9f8676b8b0c40ba5d04)
2024-11-25 14:52:51.146 20135-20135 DEBUG                                                        A        #02 pc 00000000015fdd80  /data/app/~~vQVnC2iQW4Ws7d2zTIrIEQ==/org.pytorch.torchchat-ta0utv2u-lYvdiOSyxOdLA==/lib/arm64/libexecutorch.so (executorch::runtime::runtime_abort()+8) (BuildId: 87abca08e486390fd661b9f8676b8b0c40ba5d04)
2024-11-25 14:52:51.146 20135-20135 DEBUG                                                        A        #03 pc 0000000001583d9c  /data/app/~~vQVnC2iQW4Ws7d2zTIrIEQ==/org.pytorch.torchchat-ta0utv2u-lYvdiOSyxOdLA==/lib/arm64/libexecutorch.so (example::Runner::generate(std::__ndk1::basic_string<char, std::__ndk1::char_traits<char>, std::__ndk1::allocator<char>> const&, int, std::__ndk1::function<void (std::__ndk1::basic_string<char, std::__ndk1::char_traits<char>, std::__ndk1::allocator<char>> const&)>, std::__ndk1::function<void (executorch::extension::llm::Stats const&)>, bool, bool)+3748) (BuildId: 87abca08e486390fd661b9f8676b8b0c40ba5d04)
2024-11-25 14:52:51.146 20135-20135 DEBUG                                                        A        #04 pc 00000000001e8b18  /data/app/~~vQVnC2iQW4Ws7d2zTIrIEQ==/org.pytorch.torchchat-ta0utv2u-lYvdiOSyxOdLA==/lib/arm64/libexecutorch.so (executorch_jni::ExecuTorchLlamaJni::generate(facebook::jni::alias_ref<_jintArray*>, int, int, int, facebook::jni::alias_ref<_jstring*>, int, facebook::jni::alias_ref<executorch_jni::ExecuTorchLlamaCallbackJni>, unsigned char)+408) (BuildId: 87abca08e486390fd661b9f8676b8b0c40ba5d04)
2024-11-25 14:52:51.146 20135-20135 DEBUG                                                        A        #05 pc 00000000001e9438  /data/app/~~vQVnC2iQW4Ws7d2zTIrIEQ==/org.pytorch.torchchat-ta0utv2u-lYvdiOSyxOdLA==/lib/arm64/libexecutorch.so (facebook::jni::detail::MethodWrapper<int (executorch_jni::ExecuTorchLlamaJni::*)(facebook::jni::alias_ref<_jintArray*>, int, int, int, facebook::jni::alias_ref<_jstring*>, int, facebook::jni::alias_ref<executorch_jni::ExecuTorchLlamaCallbackJni>, unsigned char), &executorch_jni::ExecuTorchLlamaJni::generate(facebook::jni::alias_ref<_jintArray*>, int, int, int, facebook::jni::alias_ref<_jstring*>, int, facebook::jni::alias_ref<executorch_jni::ExecuTorchLlamaCallbackJni>, unsigned char), executorch_jni::ExecuTorchLlamaJni, int, facebook::jni::alias_ref<_jintArray*>, int, int, int, facebook::jni::alias_ref<_jstring*>, int, facebook::jni::alias_ref<executorch_jni::ExecuTorchLlamaCallbackJni>, unsigned char>::dispatch(facebook::jni::alias_ref<facebook::jni::detail::JTypeFor<facebook::jni::HybridClass<executorch_jni::ExecuTorchLlamaJni, facebook::jni::detail::BaseHybridClass>::JavaPart, facebook::jni::JObject, void>::_javaobject*>, facebook::jni::alias_ref<_jintArray*>&&, int&&, int&&, int&&, facebook::jni::alias_ref<_jstring*>&&, int&&, facebook::jni::alias_ref<executorch_jni::ExecuTorchLlamaCallbackJni>&&, unsigned char&&)+156) (BuildId: 87abca08e486390fd661b9f8676b8b0c40ba5d04)
2024-11-25 14:52:51.146 20135-20135 DEBUG                                                        A        #06 pc 00000000001e9304  /data/app/~~vQVnC2iQW4Ws7d2zTIrIEQ==/org.pytorch.torchchat-ta0utv2u-lYvdiOSyxOdLA==/lib/arm64/libexecutorch.so (facebook::jni::detail::FunctionWrapper<int (*)(facebook::jni::alias_ref<facebook::jni::detail::JTypeFor<facebook::jni::HybridClass<executorch_jni::ExecuTorchLlamaJni, facebook::jni::detail::BaseHybridClass>::JavaPart, facebook::jni::JObject, void>::_javaobject*>, facebook::jni::alias_ref<_jintArray*>&&, int&&, int&&, int&&, facebook::jni::alias_ref<_jstring*>&&, int&&, facebook::jni::alias_ref<executorch_jni::ExecuTorchLlamaCallbackJni>&&, unsigned char&&), facebook::jni::detail::JTypeFor<facebook::jni::HybridClass<executorch_jni::ExecuTorchLlamaJni, facebook::jni::detail::BaseHybridClass>::JavaPart, facebook::jni::JObject, void>::_javaobject*, int, facebook::jni::alias_ref<_jintArray*>, int, int, int, facebook::jni::alias_ref<_jstring*>, int, facebook::jni::alias_ref<executorch_jni::ExecuTorchLlamaCallbackJni>, unsigned char>::call(_JNIEnv*, _jobject*, _jintArray*, int, int, int, _jstring*, int, facebook::jni::detail::JTypeFor<executorch_jni::ExecuTorchLlamaCallbackJni, facebook::jni::JObject, void>::_javaobject*, unsigned char, int (*)(facebook::jni::alias_ref<facebook::jni::detail::JTypeFor<facebook::jni::HybridClass<executorch_jni::ExecuTorchLlamaJni, facebook::jni::detail::BaseHybridClass>::JavaPart, facebook::jni::JObject, void>::_javaobject*>, facebook::jni::alias_ref<_jintArray*>&&, int&&, int&&, int&&, facebook::jni::alias_ref<_jstring*>&&, int&&, facebook::jni::alias_ref<executorch_jni::ExecuTorchLlamaCallbackJni>&&, unsigned char&&))+164) (BuildId: 87abca08e486390fd661b9f8676b8b0c40ba5d04)
2024-11-25 14:52:51.146 20135-20135 DEBUG                                                        A        #07 pc 00000000001e794c  /data/app/~~vQVnC2iQW4Ws7d2zTIrIEQ==/org.pytorch.torchchat-ta0utv2u-lYvdiOSyxOdLA==/lib/arm64/libexecutorch.so (facebook::jni::detail::MethodWrapper<int (executorch_jni::ExecuTorchLlamaJni::*)(facebook::jni::alias_ref<_jintArray*>, int, int, int, facebook::jni::alias_ref<_jstring*>, int, facebook::jni::alias_ref<executorch_jni::ExecuTorchLlamaCallbackJni>, unsigned char), &executorch_jni::ExecuTorchLlamaJni::generate(facebook::jni::alias_ref<_jintArray*>, int, int, int, facebook::jni::alias_ref<_jstring*>, int, facebook::jni::alias_ref<executorch_jni::ExecuTorchLlamaCallbackJni>, unsigned char), executorch_jni::ExecuTorchLlamaJni, int, facebook::jni::alias_ref<_jintArray*>, int, int, int, facebook::jni::alias_ref<_jstring*>, int, facebook::jni::alias_ref<executorch_jni::ExecuTorchLlamaCallbackJni>, unsigned char>::call(_JNIEnv*, _jobject*, _jintArray*, int, int, int, _jstring*, int, facebook::jni::detail::JTypeFor<executorch_jni::ExecuTorchLlamaCallbackJni, facebook::jni::JObject, void>::_javaobject*, unsigned char)+40) (BuildId: 87abca08e486390fd661b9f8676b8b0c40ba5d04)
2024-11-25 14:52:51.146 20135-20135 DEBUG                                                        A        #14 pc 0000000000357504  /data/app/~~vQVnC2iQW4Ws7d2zTIrIEQ==/org.pytorch.torchchat-ta0utv2u-lYvdiOSyxOdLA==/base.apk (org.pytorch.executorch.LlamaModule.generate+0)
2024-11-25 14:52:51.146 20135-20135 DEBUG                                                        A        #19 pc 0000000000005d08  /data/app/~~vQVnC2iQW4Ws7d2zTIrIEQ==/org.pytorch.torchchat-ta0utv2u-lYvdiOSyxOdLA==/base.apk (org.pytorch.torchchat.MainActivity$4.run+0)

It seems that the tokens are miscounted on the second call. This isn't the case for the iOS application. I haven't looked at the Android version of the demo located in the executorch repo. I haven't tested on other models yet, but I can start moving other llama models over to the Android device to see if I can reproduce this tokenizer bug.
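The assert fires because the prompt passed to generate() grows with each turn (the full conversation history is re-sent), so the token count eventually exceeds the exported model's max_seq_len of 128. A minimal standalone sketch of a client-side mitigation is below: trim the oldest turns until the estimated token count fits the budget before calling generate(). All names here (PromptBudget, estimateTokens, the whitespace token estimate, the reserved-reply margin) are hypothetical illustrations, not the torchchat app's actual API; a real fix would use the model's tokenizer for counting.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class PromptBudget {
    static final int MAX_SEQ_LEN = 128;       // matches the exported model
    static final int RESERVED_FOR_REPLY = 32; // leave room for generated tokens

    // Stand-in for the real tokenizer: roughly one token per word.
    static int estimateTokens(String text) {
        return text.isEmpty() ? 0 : text.trim().split("\\s+").length;
    }

    static int totalTokens(Deque<String> history) {
        int total = 0;
        for (String turn : history) total += estimateTokens(turn);
        return total;
    }

    // Append the new turn, then drop the oldest turns until the
    // combined prompt fits within the budget.
    static String buildPrompt(Deque<String> history, String newTurn) {
        history.addLast(newTurn);
        int budget = MAX_SEQ_LEN - RESERVED_FOR_REPLY;
        while (history.size() > 1 && totalTokens(history) > budget) {
            history.removeFirst();
        }
        return String.join("\n", history);
    }

    public static void main(String[] args) {
        Deque<String> history = new ArrayDeque<>();
        // Two 70-word turns: together they would exceed max_seq_len (140 >= 128),
        // which is exactly the second-prompt crash seen in the log above.
        String longTurn = "word ".repeat(70).trim();
        buildPrompt(history, longTurn);
        String prompt = buildPrompt(history, longTurn);
        System.out.println("tokens=" + estimateTokens(prompt));
        System.out.println("fits=" + (estimateTokens(prompt) <= MAX_SEQ_LEN));
    }
}
```

This only avoids the abort; the better fix is still exporting the model with a larger max sequence length, as the assert message suggests.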

Versions

Here's the info on my MBP.

Collecting environment information...
PyTorch version: 2.6.0.dev20241002
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: macOS 14.6.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.4)
CMake version: version 3.30.4
Libc version: N/A

Python version: 3.10.15 | packaged by conda-forge | (main, Sep 30 2024, 17:48:38) [Clang 17.0.6 ] (64-bit runtime)
Python platform: macOS-14.6.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Apple M1 Max

Versions of relevant libraries:
[pip3] executorch==0.5.0a0+72b3bb3
[pip3] numpy==1.26.4
[pip3] torch==2.6.0.dev20241002
[pip3] torchao==0.5.0
[pip3] torchaudio==2.5.0.dev20241007
[pip3] torchsr==1.0.4
[pip3] torchtune==0.4.0.dev20241010+cpu
[pip3] torchvision==0.20.0.dev20241002
[conda] numpy                     1.26.4          py312h7f4fdc5_0  
[conda] numpy-base                1.26.4          py312he047099_0  
[conda] numpydoc                  1.7.0           py312hca03da5_0  
@HeresMyGit

I'm seeing the same issue. I get this crash log on the device:

Abort message: 'In function generate(), assert failed (num_prompt_tokens < metadata_.at(kMaxSeqLen)): num_prompt_tokens 138 >= max_seq_len_ 128, Max seq length exceeded - please increase max seq len value in .../llama2/model.py'

@infil00p infil00p changed the title Torchchat on Android crashes on second prompt with Llama-3.2-3b-instruct Torchchat on Android crashes on second prompt Nov 26, 2024
@infil00p (Contributor, Author)

I tested with the Llama-3.1-8b model that was used in the instructions, and I'm getting the same behaviour.

@Jack-Khuu Jack-Khuu added Mobile - Android Issues Related to the Android Workflow bug Something isn't working labels Nov 26, 2024
@Jack-Khuu (Contributor)

@kirklandsign Can you take a look at this? Might be related to the tps workaround that Scott looked into a while back

@infil00p (Contributor, Author) commented Nov 28, 2024

After testing the Software Maison React Native code and adding ExecuTorch to my own project, I believe the issue is in the Android app written for Torchchat and shared with the example app in the ExecuTorch repo.
