[Feature request] Logging Level and Progress Bar for Model Downloads #117
I agree - that would be a great improvement! It was not originally considered since the library was designed for browsers, but since there has been a lot of interest in Node.js-like environments, it's definitely something to consider. I think this would be a good first issue for someone who wants to contribute :)
That is exactly my use case, that would be for a …
The screenshot log comes from WASM; you have to specify the log verbosity level in `constructSession`:

```js
async function constructSession(pretrained_model_name_or_path, fileName, options) {
    // TODO: add an option for the user to force their desired execution provider
    let modelFileName = `onnx/${fileName}${options.quantized ? '_quantized' : ''}.onnx`;
    let buffer = await getModelFile(pretrained_model_name_or_path, modelFileName, true, options);

    /** @type {InferenceSession.SessionOptions} */
    const extraSessionOptions = {
        logVerbosityLevel: 4,
        logSeverityLevel: 4,
    }

    try {
        return await InferenceSession.create(buffer, {
            executionProviders,
            ...extraSessionOptions
        });
    } catch (err) {
        // If the only execution provider was wasm, rethrow the error
        if (executionProviders.length === 1 && executionProviders[0] === 'wasm') {
            throw err;
        }

        console.warn(err);
        console.warn(
            'Something went wrong during model construction (most likely a missing operation). ' +
            'Using `wasm` as a fallback. '
        )
        return await InferenceSession.create(buffer, {
            executionProviders: ['wasm'],
            ...extraSessionOptions
        });
    }
}
```

Requires a bit of discussion on how the PR should then expose the session options to transformers.js users 🤔
A global …
I kinda lean toward extending the pipeline options. We could just make it a passthrough ONNX-session-options object, in case we need to use other session options later as well, inside the existing pipeline options object. Something like:

```js
/**
 * Utility factory method to build a [`Pipeline`] object.
 *
 * @param {string} task The task of the pipeline.
 * @param {string} [model=null] The name of the pre-trained model to use. If not specified, the default model for the task will be used.
 * @param {PretrainedOptions} [options] Optional parameters for the pipeline.
 * @returns {Promise<Pipeline>} A Pipeline object for the specified task.
 * @throws {Error} If an unsupported pipeline is requested.
 */
export async function pipeline(
    task,
    model = null,
    {
        quantized = true,
        progress_callback = null,
        config = null,
        cache_dir = null,
        local_files_only = false,
        revision = 'main',
        ortExtraSessionOptions = {},
    } = {}
) {
    // ...
}
```

But we could make the API easier to use as well (e.g. using …). Do we even need both options? I have a slight feeling that other devs are also a bit confused:
My thought exactly. This is why a global logging level might be okay. It will also be better for when we add more backends (not just ONNX). If someone REALLY wants those log levels, they can set them manually with …

I'm not too keen on adding that to the pipeline function, just because it's not something users will modify often. It would also then have to be used in …
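For illustration, a sketch of what a single global logging level could look like. All names here (`env.logLevel`, `SEVERITY`, `onnxLogOptions`) are hypothetical, not transformers.js API; the numeric scale is ONNX Runtime's documented severity enum (0=verbose, 1=info, 2=warning, 3=error, 4=fatal):

```javascript
// Hedged sketch of one global log level mapped onto a backend's numeric scale.
// All names are hypothetical; the enum values follow ONNX Runtime's severity levels.
const SEVERITY = { verbose: 0, info: 1, warning: 2, error: 3, fatal: 4 };

const env = { logLevel: 'error' }; // one global knob for all backends

function onnxLogOptions() {
    // Fall back to 'warning' if an unknown level string was set.
    const level = SEVERITY[env.logLevel] ?? SEVERITY.warning;
    return { logSeverityLevel: level, logVerbosityLevel: level };
}

console.log(onnxLogOptions()); // with logLevel 'error': both levels are 3
```

Each backend would translate the one global setting into whatever its own API expects, so users never touch backend-specific options unless they want to.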
For people looking for the final answer / code: add a Node.js snippet that sets the log level to 3 (warning levels) before your code.
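A minimal sketch of such a snippet, assuming the `logSeverityLevel`/`logVerbosityLevel` session options shown earlier in the thread (note that in ONNX Runtime's severity enum, 3 corresponds to error; 2 is warning). The `InferenceSession.create` call is left commented out so the sketch stays self-contained:

```javascript
// Hedged sketch: raise the ONNX Runtime log threshold via session options.
// ONNX Runtime severity enum: 0=verbose, 1=info, 2=warning, 3=error, 4=fatal.
const quietSessionOptions = {
    logSeverityLevel: 3, // suppress everything below error
    logVerbosityLevel: 3,
};

// Then pass these when the session is created, e.g. (not run here):
// const session = await InferenceSession.create(buffer, quietSessionOptions);

console.log(quietSessionOptions.logSeverityLevel); // → 3
```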
Is there anything else to work on for this feature? I would like to contribute, but it is not clear to me whether this needs further work. Thank you!
Right now, the output can be quite lengthy and verbose, see:
Would it be possible to expose logger options to granularly control the output, as well as to offer visual feedback on the status of the model download from Hugging Face?