Thank you for your great work and code release!
I found that the disparity maps of the KITTI scene flow dataset are defined with respect to frame one, so the disparity of pixels in the second frame cannot be obtained directly. How do you get them: from the raw LiDAR data, or by some other method?
If you use the raw LiDAR data, how do you get such dense depth values?
If you did not use the raw LiDAR data, why do you use 150 frames rather than 200?
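For context, one common workaround (a sketch of a possible approach, not necessarily what the authors did) is to forward-warp `disp_1`, which stores the second frame's disparity sampled at frame-1 pixel coordinates, into frame-2 coordinates using the ground-truth optical flow. The result is a sparse frame-2 disparity map; function and variable names below are illustrative:

```python
import numpy as np

def warp_disp_to_frame2(disp1, flow):
    """Forward-warp a frame-1-aligned disparity map of the second frame
    (KITTI's disp_1 convention) into frame-2 pixel coordinates.

    disp1: (H, W) disparity of the second frame, sampled at frame-1 pixels;
           0 marks invalid pixels.
    flow:  (H, W, 2) optical flow from frame 1 to frame 2, (dx, dy) order.
    Returns a sparse (H, W) disparity map in frame-2 coordinates.
    """
    h, w = disp1.shape
    out = np.zeros_like(disp1)
    ys, xs = np.nonzero(disp1 > 0)          # valid frame-1 pixels
    # Each frame-1 pixel lands at (x + dx, y + dy) in frame 2.
    xt = np.round(xs + flow[ys, xs, 0]).astype(int)
    yt = np.round(ys + flow[ys, xs, 1]).astype(int)
    inside = (xt >= 0) & (xt < w) & (yt >= 0) & (yt < h)
    out[yt[inside], xt[inside]] = disp1[ys[inside], xs[inside]]
    return out
```

Collisions (several source pixels mapping to one target) and disocclusions leave holes, which may be one reason a subset of frames ends up usable rather than all 200.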