diff --git a/_pages/favor.md b/_pages/favor.md
index 4d61ba3..e54d631 100644
--- a/_pages/favor.md
+++ b/_pages/favor.md
@@ -5,15 +5,16 @@ subtitle: ""
 description: WACV (2025) paper on rendering feature descriptors from unseen views
 permalink: /favor/
 nav_order: 9984
-usemathjax: true
 nav_exclude: true
 ---
+
 [ arXiv pre-print ](https://arxiv.org/abs/2409.07571){: .btn .btn-blue }
+
 {::nomarkdown}
The video below shows the camera pose relocalization computed using FaVoR. The purple frame indicates the starting camera position provided by the first DenseVLAD result, while the blue frame represents the ground truth camera pose of the query image. The estimated camera pose is shown in black, connected to the initial pose by a green line.
@@ -284,7 +285,7 @@ Camera relocalization methods range from dense image alignment to direct camera
The video below shows the camera pose relocalization computed using FaVoR. The purple frame indicates the starting camera position provided by the first DenseVLAD result, while the blue frame represents the ground truth camera pose of the query image. The estimated camera pose is shown in black, connected to the initial pose by a green line.
@@ -314,7 +315,7 @@ Camera relocalization methods range from dense image alignment to direct camera
The video below shows the camera pose relocalization computed using FaVoR. The purple frame indicates the starting camera position provided by the first DenseVLAD result, while the blue frame represents the ground truth camera pose of the query image. The estimated camera pose is shown in black, connected to the initial pose by a green line.
@@ -344,7 +345,7 @@ Camera relocalization methods range from dense image alignment to direct camera
The video below shows the camera pose relocalization computed using FaVoR. The purple frame indicates the starting camera position provided by the first DenseVLAD result, while the blue frame represents the ground truth camera pose of the query image. The estimated camera pose is shown in black, connected to the initial pose by a green line.
@@ -374,7 +375,7 @@ Camera relocalization methods range from dense image alignment to direct camera
The video below shows the camera pose relocalization computed using FaVoR. The purple frame indicates the starting camera position provided by the first DenseVLAD result, while the blue frame represents the ground truth camera pose of the query image. The estimated camera pose is shown in black, connected to the initial pose by a green line.
@@ -410,20 +411,12 @@ Camera relocalization methods range from dense image alignment to direct camera
In the video below, we extract Alike-l features from a target image. We then match the target features with those extracted from a query image using standard feature matching. On the right side, we show the matches obtained over three iterations of the FaVoR method queried from the target image pose. The number of matches is noticeably higher in the third FaVoR iteration than with the standard matching approach. The text at the bottom left of the image reports the distance, in meters and degrees, between the target and query images, together with the number of matches for both methods; the text turns red when the number of standard feature matches exceeds the number of FaVoR matches.
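For reference, the "distance in meters and degrees" shown in the overlay corresponds to the usual translation and rotation error between two camera poses. The snippet below is a minimal, generic sketch of that metric, assuming 4x4 homogeneous camera-to-world pose matrices; the function name and conventions are illustrative and not taken from the FaVoR codebase.

```python
import numpy as np

def pose_error(T_est, T_gt):
    """Return (translation error in meters, rotation error in degrees)
    between two 4x4 homogeneous camera-to-world poses."""
    # Translation error: Euclidean distance between the camera centers.
    t_err = float(np.linalg.norm(T_est[:3, 3] - T_gt[:3, 3]))
    # Rotation error: angle of the relative rotation R_est^T @ R_gt,
    # recovered from its trace.
    R_rel = T_est[:3, :3].T @ T_gt[:3, :3]
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    r_err = float(np.degrees(np.arccos(cos_angle)))
    return t_err, r_err

# Example: a pose 0.5 m away and rotated 10 degrees about the z-axis.
theta = np.radians(10.0)
T_gt = np.eye(4)
T_gt[:3, :3] = [[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0,            0.0,           1.0]]
T_gt[:3, 3] = [0.5, 0.0, 0.0]
print(pose_error(np.eye(4), T_gt))  # -> (0.5, ~10.0)
```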