I have tested MT-RNN, MPRNet, MIMO-UNet, and MIMO-UNet+ on a single 2080Ti card (whose theoretical performance is higher than the Titan Xp's). The details are given in another issue, chosj95/MIMO-UNet#9.
For MT-RNN, the asynchronous and synchronized inference times are 46ms and 480ms, respectively. The time reported in the MT-RNN paper is 0.07s on a Titan V.
For MPRNet, the asynchronous and synchronized inference times are 150ms and >1500ms, respectively. The time reported in the MPRNet paper is 0.18s on a Titan Xp.
For MIMO-UNet, the asynchronous and synchronized inference times are 15ms and 209ms, respectively. The time reported in the MIMO-UNet paper is 8ms on a Titan Xp.
For MIMO-UNet+, the asynchronous and synchronized inference times are 30ms and 459ms, respectively. The time reported in the MIMO-UNet paper is 17ms on a Titan Xp.
To my surprise, the difference after CUDA synchronization is more than a factor of 10. As the experiments above show, these models actually run at less than 5 FPS and therefore cannot be used in practical applications with real-time requirements. I think the unsynchronized times reported by existing methods are misleading, especially times under 30ms, which appear to meet real-time requirements.
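The gap arises because CUDA kernels are launched asynchronously: without an explicit synchronization, a CPU-side timer only measures kernel-launch overhead, not the actual GPU work. A minimal PyTorch sketch of the two measurement styles (the model, input shape, and iteration counts here are illustrative assumptions, not taken from any of the papers above):

```python
import time
import torch

def _sync():
    # No-op on CPU-only machines so the sketch stays runnable anywhere.
    if torch.cuda.is_available():
        torch.cuda.synchronize()

def time_inference(model, x, warmup=10, iters=50):
    """Return (asynchronous_ms, synchronized_ms) average per-forward times."""
    model.eval()
    with torch.no_grad():
        # Warm up so lazy initialization and cuDNN autotuning don't skew results.
        for _ in range(warmup):
            model(x)
        _sync()

        # "Asynchronous" timing: the loop only enqueues kernels, so on a GPU
        # this mostly measures launch overhead, not real latency.
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        async_ms = (time.perf_counter() - start) / iters * 1000

        _sync()
        # Synchronized timing: wait for the GPU to finish each forward pass,
        # giving the true wall-clock inference time.
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
            _sync()
        sync_ms = (time.perf_counter() - start) / iters * 1000
    return async_ms, sync_ms
```

On a CPU the two numbers coincide because execution is synchronous; on a GPU, only the second number reflects latency a user would actually observe.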
This question is not aimed at your paper specifically, but at the whole image-deblurring community: in your opinion, which time should be reported in an academic paper?