extract of boxes #6
Comments
Hi, the ETH/UCY dataset comes with homography matrices, so we can use them to map world coordinates to pixel coordinates. Example pseudocode for Hotel:
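A minimal numpy sketch of that mapping. The matrix values below are toy placeholders, not the real Hotel homography; in practice you would load it from the dataset's H.txt, and depending on which direction the provided H goes (some sequences ship a pixel-to-world matrix), you may need np.linalg.inv(H) to go from meters to pixels:

```python
import numpy as np

# Toy homography standing in for the dataset's H.txt (hypothetical values).
# In practice: H = np.loadtxt("H.txt"); invert with np.linalg.inv(H) if the
# file maps pixel -> world instead of world -> pixel.
H = np.array([[100.0,   0.0, 10.0],
              [  0.0, 100.0, 20.0],
              [  0.0,   0.0,  1.0]])

def world_to_pixel(x_meter, y_meter, h):
    """Apply a 3x3 homography to a world point in meters.

    Equivalent to cv2.perspectiveTransform(np.array([[[x, y]]]), h).
    """
    p = h @ np.array([x_meter, y_meter, 1.0])
    return p[0] / p[2], p[1] / p[2]

px, py = world_to_pixel(1.5, 2.0, H)
```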
Then, we resize all videos (and the pixel coordinates) to 720x576. The person boxes are extracted based on the average size of people in the videos:
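A sketch of those two steps. The original resolution and the average person width/height in pixels are placeholders for illustration, not values from the repo:

```python
# Hypothetical original resolution; the target is the 720x576 mentioned above.
ORIG_W, ORIG_H = 640, 480
NEW_W, NEW_H = 720, 576

def rescale_point(x, y):
    """Scale a pixel coordinate to match the resized 720x576 video."""
    return x * NEW_W / ORIG_W, y * NEW_H / ORIG_H

def make_box(cx, cy, avg_w=50.0, avg_h=100.0):
    """Build an axis-aligned box [x1, y1, x2, y2] around a person's
    bottom-center (feet) point, using an assumed average person size
    in pixels (avg_w/avg_h are made-up defaults)."""
    return [cx - avg_w / 2.0, cy - avg_h, cx + avg_w / 2.0, cy]

x, y = rescale_point(320, 240)
box = make_box(x, y)
```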
Hi! I'm working with zara2; I understand it is part of the ETH/UCY dataset. You mentioned previously that homography matrices are available. Is there any place where I can find them? Regards!
They are in the rar files.
It is saved in H.txt.
I got some rar files, but they only contain splines :(
Hello @JunweiLiang, for the perspective transform function: pixel_points = cv2.perspectiveTransform(np.array([[[x_meter, y_meter]]]), h)
The x_meter and y_meter are the original coordinates of the ETH/UCY dataset files, which are world coordinates in meters. |
Where can I find them, please?
For my case, I want to visualize the observed, predicted and groundtruth trajectories after the test. |
In my case, it's different: I have x_obs and y_obs for time steps 0 to 7, and x_pred and y_pred for time steps 0 to 11. Thanks!
If your model is trained with pixel coordinates, then you do not need to convert them. |
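For visualizing observed, predicted, and ground-truth trajectories as asked above, a minimal matplotlib sketch; the arrays are toy data with the shapes described (8 observed steps, 12 predicted steps), not model output:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

# Toy trajectories: 8 observed steps, 12 predicted / ground-truth steps.
obs = np.stack([np.arange(8.0), np.arange(8.0)], axis=1)    # shape (8, 2)
pred = obs[-1] + np.cumsum(np.full((12, 2), 0.9), axis=0)   # shape (12, 2)
gt = obs[-1] + np.cumsum(np.ones((12, 2)), axis=0)          # shape (12, 2)

fig, ax = plt.subplots()
ax.plot(obs[:, 0], obs[:, 1], "b-o", label="observed")
# Prepend the last observed point so the curves connect visually.
ax.plot([obs[-1, 0], *pred[:, 0]], [obs[-1, 1], *pred[:, 1]],
        "r--x", label="predicted")
ax.plot([obs[-1, 0], *gt[:, 0]], [obs[-1, 1], *gt[:, 1]],
        "g-", label="ground truth")
ax.legend()
fig.savefig("trajectories.png")
```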
The Problem Formulation part of the paper says "Based on the coordinates, we can automatically extract their bounding boxes". I wonder how to get these for the ETH/UCY dataset, especially the box of the human target, since the coordinates provided by ETH/UCY are in the world plane.
Thanks for your repo!