[Question] failed to call OrtRun(). error code = 1. When I try to load Xenova/pygmalion-350m #334
Comments
I'm using a Mac with the Brave browser and Chrome 117.

Same on Linux / Chrome 117, so it's not browser related. We had these errors before, and sometimes it's "easy" to fix them; sometimes they crash the entire browser 🙈 Error code = 1 means ORT_FAIL, so it couldn't be more descriptive than that 😅 Sometimes I question the WASM overhead: it makes everything so much more difficult to debug, and I had a project where WASM was actually slower than V8 jitted/optimized JS (which would still be easy to debug). Here is a full error:

Hi there 👋 thanks for making the report. Indeed, this is a known issue (see #276, the PR which introduced the OPT models). For now, the only way I've found to get it working is by using the unquantized versions of the models. Example code:

```js
import { pipeline } from '@xenova/transformers';

const generator = await pipeline('text-generation', 'Xenova/opt-125m', {
  quantized: false, // NOTE: quantized models are currently broken
});
const prompt = 'Once upon a';
const output = await generator(prompt);
// [{ generated_text: 'Once upon a time, I was a student at the University of California, Berkeley. I was' }]
```

@kungfooman do you maybe know if this still happens in onnxruntime-web 1.16? cc @fs-eire @fxmarty too

I didn't spend the time yet to figure out how to use 1.16; for some reason it just doesn't find any backends. Whereas using this works like a charm: https://cdnjs.cloudflare.com/ajax/libs/onnxruntime-web/1.14.0/ort.es6.min.js

Ah yes, it defaults to using WASM files served via jsdelivr, which is what the errors indicate. No worries then, I can do further testing for an upcoming release.

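A note for anyone debugging these missing-backend errors: onnxruntime-web lets you point the WASM backend at explicit .wasm files via `ort.env.wasm.wasmPaths` instead of the default jsdelivr location. A minimal sketch (the CDN URL and model path below are placeholders):

```js
import * as ort from 'onnxruntime-web';

// Serve the *.wasm binaries from an explicit location instead of the
// default jsdelivr-hosted files.
ort.env.wasm.wasmPaths = 'https://cdnjs.cloudflare.com/ajax/libs/onnxruntime-web/1.16.0/';

const session = await ort.InferenceSession.create('./model.onnx', {
  executionProviders: ['wasm'],
});
console.log('inputs:', session.inputNames);
```
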
For DynamicQuantizeMatMul, sorry, but I don't know the details; whoever wrote this kernel may need to take a look. https://cdnjs.cloudflare.com/ajax/libs/onnxruntime-web/1.16.0/ort.es6.min.js <-- this seems to work in my HTML when I use it.

@xenova I haven't tried 1.16.0, but when I try the unquantized version, it gives an error saying:

Let me know if you think the issue is related to the export.

Yep, I have a simple example project here: https://github.com/kungfooman/transformers-object-detection/ It's a bit hacky because there doesn't seem to be any published ESM build yet. Because I like to use repos as "source of truth", I also converted what I needed once from TS to ESM here: microsoft/onnxruntime@main...kungfooman:onnxruntime:main (but I don't have the time to maintain it right now). In general the future is ESM + importmap; every browser aims for that nowadays and npm packages follow, for example: https://babeljs.io/docs/v8-migration (only ships as ESM now). It would be nice if ONNX shipped browser-compatible/working ESM files too (i.e. don't drop the file extensions). Thank you for looking into this @fs-eire 👍

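For context, an ESM + importmap setup looks roughly like this; the file path is hypothetical and depends on where the ESM build is hosted:

```html
<script type="importmap">
  {
    "imports": {
      "onnxruntime-web": "./vendor/onnxruntime-web/ort.mjs"
    }
  }
</script>
<script type="module">
  // The bare specifier resolves through the import map above,
  // with no bundler required.
  import * as ort from 'onnxruntime-web';
  console.log('onnxruntime-web loaded:', typeof ort.InferenceSession);
</script>
```
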
I'm getting the same error when trying to run a GPT-2 model on the latest 2.11.0.

@uahmad235 The repo you linked to does not include ONNX weights in a subfolder called "onnx". You can do the conversion yourself by installing these requirements and then following the tutorial here. Please ensure the repo structure is the same as this one.

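For reference, the conversion is typically run from a clone of the transformers.js repo; a sketch (the model id is a placeholder, and the flags follow the repo's conversion docs):

```bash
# Install the conversion requirements first (scripts/requirements.txt),
# then export the model. --quantize also writes the quantized variants
# into the model's onnx/ subfolder.
python -m scripts.convert --quantize --model_id <your_model_id>
```
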
Apologies @xenova, my bad. I forgot to mention that I have already converted the weights using the given instructions. The converted weights do have a file called:

@uahmad235 No worries :) In that case, could you try running the unquantized version?

```js
const generator = await pipeline('text-generation', '<your_model_id>', {
  quantized: false, // <-- HERE
});
```

This may be due to missing op support in v1.14.0 of onnxruntime-web.

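If you'd rather not hard-code that workaround, one option (a sketch, not a pattern from the library docs) is to fall back to the unquantized weights when the quantized model fails at generation time, which is where "failed to call OrtRun()" surfaces:

```js
import { pipeline } from '@xenova/transformers';

// Sketch: retry with quantized: false when the quantized model throws.
async function generateWithFallback(modelId, prompt) {
  try {
    const generator = await pipeline('text-generation', modelId);
    return await generator(prompt);
  } catch (err) {
    console.warn(`Quantized model failed (${err}); retrying unquantized.`);
    const generator = await pipeline('text-generation', modelId, {
      quantized: false,
    });
    return await generator(prompt);
  }
}

const output = await generateWithFallback('<your_model_id>', 'Once upon a');
console.log(output);
```
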
@uahmad235 Feel free to open up a new issue with code that allows me to reproduce this. It might just be a configuration issue. 😇

Sure, let me give it another try, and I'll open a separate issue in case it does not work. Thanks for the prompt response :)

I'm getting the error

failed to call OrtRun(). error code = 1.

when I try to load Xenova/pygmalion-350m. The error is as follows, and my code for running it is this:
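(A minimal sketch of the typical call, an assumption rather than the OP's exact snippet; the prompt is a placeholder:)

```js
import { pipeline } from '@xenova/transformers';

// Loading the default (quantized) weights is what triggers
// "failed to call OrtRun(). error code = 1." in this thread.
const generator = await pipeline('text-generation', 'Xenova/pygmalion-350m');
const output = await generator('Hello, my name is');
console.log(output);
```
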
I see that OrtRun is something returned by ONNX Runtime on a failure, but have you had success in running the pygmalion-350m model?