*Commit by Agustín Castro, Feb 8, 2024*
# Multi-Camera Demo

In this example, we show how to associate trackers of different synchronized videos in Norfair.

Why would we want that?

- When subjects being tracked go out of frame in one video, you might still be able to track them, and to recognize that it is the same individual, as long as they remain visible in the other videos.
- To map footage from one or many videos into a common reference frame. For example, when watching a soccer match, you might want to combine the information from different cameras and show the position of the players from a top-down view.
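Mapping points into a common reference frame is typically done with a planar homography. The sketch below is illustrative only (it is not part of the demo, and the matrix values are invented); it shows how a 3x3 homography maps a pixel in one camera view to top-down coordinates:

```python
import numpy as np

# Hypothetical 3x3 homography mapping camera pixels to top-down
# pitch coordinates (values are made up for illustration).
H = np.array([
    [0.1, 0.0, -5.0],
    [0.0, 0.2, -10.0],
    [0.0, 0.0, 1.0],
])

def to_common_frame(point, H):
    """Apply a homography to a 2D point using homogeneous coordinates."""
    x, y = point
    p = H @ np.array([x, y, 1.0])
    return p[:2] / p[2]  # normalize by the homogeneous coordinate

print(to_common_frame((100.0, 200.0), H))  # -> [ 5. 30.]
```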

## Example 1: Associating different videos

This method will allow you to associate trackers from different footage of the same scene. You can use as many videos as you want.

```bash
python3 demo.py video1.mp4 video2.mp4 video3.mp4
```
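Once trackers from each video are expressed in a common frame, associating them reduces to matching nearby points. The following is a simplified greedy sketch of that idea; it is an assumption for illustration, not Norfair's actual association logic:

```python
import numpy as np

def associate(points_a, points_b, distance_threshold=20.0):
    """Greedy nearest-neighbor association between two sets of 2D points
    already expressed in the same (common) frame of reference."""
    matches = []
    used = set()
    for i, pa in enumerate(points_a):
        dists = [np.linalg.norm(np.asarray(pa) - np.asarray(pb)) for pb in points_b]
        j = int(np.argmin(dists))
        if j not in used and dists[j] < distance_threshold:
            matches.append((i, j))
            used.add(j)
    return matches

# Tracked positions from two videos, both mapped into video1's frame:
print(associate([(10, 10), (50, 50)], [(52, 49), (11, 9)]))  # -> [(0, 1), (1, 0)]
```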

A UI will appear to associate points in `video1.mp4` with points in the other videos, setting `video1.mp4` as the common frame of reference. You can save the transformation you have created in the UI with the `--save-transformation` flag, and load it later with the `--load-transformation` flag.

If the cameras in the videos move, you should also use the `--use-motion-estimator-footage` flag to account for camera motion.

## Example 2: Creating a new perspective

This method will allow you to associate trackers from different footage of the same scene, and to create a new perspective of the scene which didn't exist in those videos. You can use as many videos as you want, and you also need to provide one reference (either an image or a video) corresponding to the new perspective. In the soccer example, the reference could be a bird's-eye view of a soccer field.

```bash
python3 demo.py video1.mp4 video2.mp4 video3.mp4 --reference path_to_reference_file
```

As before, you will have to use the UI; or, if you have already done that and saved the transformation with the `--save-transformation` flag, you can load that same transformation with the `--load-transformation` flag.

If the videos in which you are tracking have camera movement, you should also use the `--use-motion-estimator-footage` flag to account for that movement.

If you are using a video as the reference file, and the camera moves in the reference, then you should use the `--use-motion-estimator-reference` flag.

For additional settings, you may display the instructions using `python demo.py --help`.
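The flags mentioned above can be pictured with a hypothetical `argparse` sketch. This is not the demo's real parser; in particular, whether `--save-transformation` and `--load-transformation` take a path argument is an assumption here:

```python
import argparse

# Hypothetical sketch of the CLI described in this README; only the flags
# mentioned above are included, and this is not the demo's actual parser.
parser = argparse.ArgumentParser(description="Multi-camera demo (sketch)")
parser.add_argument("videos", nargs="+", help="paths to the input videos")
parser.add_argument("--reference", help="image or video defining the new perspective")
# Assumption: these two take a file path; the README only says they save/load.
parser.add_argument("--save-transformation")
parser.add_argument("--load-transformation")
parser.add_argument("--use-motion-estimator-footage", action="store_true")
parser.add_argument("--use-motion-estimator-reference", action="store_true")

args = parser.parse_args(["video1.mp4", "video2.mp4", "--reference", "pitch.png"])
print(args.videos, args.reference)
```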

## UI usage

The purpose of the UI is to annotate pairs of matching points in the reference and the footage, in order to estimate a transformation.

To add a point, just click a pair of points (one in the footage window, and another in the reference window) and select `"Add"`.
To remove a point, just select the corresponding point at the bottom left corner, and select `"Remove"`.
You can also ignore points by clicking them and selecting `"Ignore"`; ignored points will not be used in the transformation.
To unignore points that have been previously ignored, just click them and select `"Unignore"`.
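Under the hood, the annotated pairs are enough to estimate a homography. Below is a self-contained sketch using the classic direct linear transform (DLT); it is an illustration of the technique, not the demo's actual implementation:

```python
import numpy as np

def estimate_homography(pairs):
    """Estimate a 3x3 homography from >= 4 ((x, y), (u, v)) point
    correspondences using the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in pairs:
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the right singular vector of A with the
    # smallest singular value, reshaped to 3x3.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Four annotated (footage, reference) pairs; ignored points would
# simply be left out of this list.
pairs = [((0, 0), (0, 0)), ((1, 0), (2, 0)), ((0, 1), (0, 2)), ((1, 1), (2, 2))]
H = estimate_homography(pairs)

# Map the footage point (1, 1) into the reference frame.
p = H @ np.array([1.0, 1.0, 1.0])
print(p[:2] / p[2])  # -> [2. 2.]
```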

If either the footage or the reference is a video, you can jump to future frames to pick matching points.
For example, to jump 215 frames ahead in the footage, just write that number next to `'Frames to skip (footage)'` and select `"Skip frames"`.

You can go back to the first frame of the video (in either the footage or the reference) by selecting `"Reset video"`.

Once a transformation has been estimated (you will know, because the `"Finished"` button turns green), you can test it:
select the `"Test"` mode, pick a point in either the reference or the footage, and see the associated point appear in the other window.
You can go back to `"Annotate"` mode and keep adding associated points until you are satisfied with the estimated transformation.
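What the `"Test"` mode does can be sketched as applying the estimated homography to a clicked point, and its inverse for the opposite direction. The matrix below is a made-up stand-in for a previously estimated transformation:

```python
import numpy as np

# Stand-in for a previously estimated transformation (values invented).
H = np.array([[2.0, 0.0, 10.0],
              [0.0, 2.0, 20.0],
              [0.0, 0.0, 1.0]])

def apply_homography(H, point):
    """Map a 2D point through a 3x3 homography."""
    p = H @ np.array([point[0], point[1], 1.0])
    return p[:2] / p[2]

clicked_in_footage = (15.0, 25.0)
in_reference = apply_homography(H, clicked_in_footage)              # footage -> reference
back_in_footage = apply_homography(np.linalg.inv(H), in_reference)  # reference -> footage
print(in_reference, back_in_footage)  # -> [40. 70.] [15. 25.]
```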