Hi again and thanks for the magnificent project!
So when trying to feed a high bitrate for best quality (e.g. Full HD 50p at 100 Mbit/s), the browser appeared to hang for a pretty long time before it even tried to work on the next frame. By then it is already too late, since I am feeding live video. Once I limited the bitrate/resolution to about 720p50 at 30 Mbit/s, as you recommend in several places, I had no problems.
I tried to debug it a little and it appears to get stuck in the decodeSlice part, in the loop that calls `this.decodeMacroblock()` until `this.bits.nextBytesAreStartCode()` returns true.
What I did was log the offset in every loop iteration, and I believe what I saw was that it slowly but surely crawled through about 200k bytes or more, which ate all the CPU.
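For reference, this is roughly what my debug logging looked like. It is only a sketch; `this.bits.index` as the current bit position is my assumption about the bit reader's internals, the real property may be named differently:

```js
// decodeSlice loop with per-iteration logging added. On a corrupted frame
// this printed steadily growing offsets, crawling through ~200k bytes
// before a start code was found again.
do {
    console.log('slice offset (bytes):', this.bits.index >> 3); // bit index -> byte offset (assumed)
    this.decodeMacroblock();
} while (!this.bits.nextBytesAreStartCode());
```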
So it appears to me that for some reason I did not receive the full data from the server, which I think is kind of an expected case for a live transmission. I would not mind one defective frame every x seconds, but a browser that hangs for a long time is not good.
I am not quite sure whether the error resilience is implemented correctly this way or whether it can be optimized.
My thinking was "I want to replace this decoder with the ffmpeg emscripten version", but I guess that would be a pretty complex task taking some weeks.
In ffmpeg's mpeg12dec.c, the slice decoding function contains something like this:

```c
for (;;) { /* until there are no more MBs */
    if ((ret = mpeg_decode_mb(s, s->block)) < 0)
        return ret;
    /* ... continue processing the MB ... */
}
```
If I understand it correctly (95% chance I don't), this skips the whole slice on error, in contrast to your code, which appears to just keep trying to find the next MB until all of the frame's remaining input data is exhausted. But then again it should find some MB somewhere in the remaining 200 kB... hm, I am a little lost here.
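To make the question more concrete, here is a rough, untested sketch of what I mean, applying the ffmpeg-style bail-out to the decodeSlice loop. The boolean return value of decodeMacroblock() and the caller re-syncing at the next start code afterwards are my assumptions, not how the code actually works today:

```js
// Hypothetical ffmpeg-style bail-out in decodeSlice: on a corrupt macroblock
// (e.g. a too large macroblock_address_increment), give up on this slice
// instead of scanning byte by byte through the rest of the frame data.
do {
    if (!this.decodeMacroblock()) { // assumed: returns false on a decode error
        return; // abandon the slice; the caller looks for the next start code
    }
} while (!this.bits.nextBytesAreStartCode());
```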
What do you think, can the error resilience be optimized?