From 2b8ee53acc543249f92828580ece71a7776b0624 Mon Sep 17 00:00:00 2001 From: AIRLegend Date: Fri, 11 Sep 2020 22:58:31 +0200 Subject: [PATCH 1/8] Remove old docs --- Doc/DEVELOP.md | 37 ---------------------------- Doc/USER_GUIDE.md | 62 ----------------------------------------------- 2 files changed, 99 deletions(-) delete mode 100644 Doc/DEVELOP.md delete mode 100644 Doc/USER_GUIDE.md diff --git a/Doc/DEVELOP.md b/Doc/DEVELOP.md deleted file mode 100644 index ae7efde..0000000 --- a/Doc/DEVELOP.md +++ /dev/null @@ -1,37 +0,0 @@ -# Develop AiTrack - -This guide is aimed at developers who want to contribute to the project. - - -## Development environment -- **Visual Studio 2019** (at least). Or MSBuild tools for C++. -- **Qt framework**: AITrack has been developed using version 5.9.9 and also tested with the most recent version at the moment (5.15), so it should work with the latest one. -- **cmake**: For setting up the project/dependencies. - - -## Setting up the project - -1. Clone this repo: `git clone https://github.com/AIRLegend/aitracker` -2. Cd into the directory: `cd aitracker`. -3. Download and unpack dependencies: `cmake .` -4. Open `Camera.sln` with Visual Studio (or build it with `msbuild Camera.sln`). -5. Copy the `models/` dir to the same location as `Client.exe`. -6. Copy the necessary .dlls to the executable directory (`libusb.dll`, `onnxruntime.dll`, `opencv_world430.dll`). And then you can run `windeployqt <.exe path>` to let Qt auto-copy the remaining .dlls. -7. You should be good to go! - - -## How to contribute - -1. Create a fork of the repo and clone it on your local machine. -1. Once cloned, create a new branch for your feature/fix. -2. Try to document your code in order to debug/review it better. -3. Test your feature building and using the program. -4. If possible, squash all your commits into one with clear explanation of the changes. -5. Push your code to your fork. -6. Make a pull request. 
- - To /master if you consider it a fully functional feature. - - To /dev if it's part of something bigger that needs to be further developed. -
-You can start with anything (documentation, code cleaning...) and when you're more familiar with the codebase, try getting your hands dirty! -
-Thank you if you decide to contribute! \ No newline at end of file diff --git a/Doc/USER_GUIDE.md b/Doc/USER_GUIDE.md deleted file mode 100644 index da70da8..0000000 --- a/Doc/USER_GUIDE.md +++ /dev/null @@ -1,62 +0,0 @@ -# User guide: Setup, common problems and tips - -## Video version - -Thanks to Sims Smith for making a tutorial on how to set up this software to work with XPlane and Flight Simulator 2020. Although it's made for AITrack v0.4, the core process is almost the same. - -Although this tutorial covers pretty much everything, it's worth pointing out that: - -
- -[Video tutorial](https://youtu.be/LPlahUVPx4o) - -
- -1) You don't need to configure "Use remote client" anymore if you're running Opentrack on your local machine. -2) You should take your time for [tweaking your curves in Opentrack](https://www.youtube.com/watch?v=u0TBI7SoGkc) to your preferences. -3) Experiment with Opentrack's built-in filters. Accela is the recommended one at the moment. Configure its smoothing parameter to reduce camera shaking. - -## Configuring opentrack and AiTrack - -AiTrack sends data over UDP to opentrack, which, in turn, sends it to your game, so both of them need to be running. - -- You will need to create a new profile (under the "Game Data" section, click the "Profile" drop-down and create a new profile). - -- Then, under "Input", select "UDP over network". - -- In order to correct some of the noise AITrack has, it's recommended to use the Accela filter on Opentrack with pretty high smoothing (over 2 degrees for rotation and about 1mm for position). However, the Kalman filter also works okay (adjust its settings as you like). - -Example of Opentrack configuration: -![](../Images/OpentrackConfig.png) -![](../Images/OpentrackConfig1.png) - -Then, on AITrack, just click "Start tracking". The program will use the default configuration, which assumes that opentrack is on the same machine and its listening port is 4242 (Opentrack's default). In case you want to use another config, just change it on AITrack and save it. After that, just click "Start" in Opentrack. - - -## Common problems -- If you find your head movements are inverted on your game: - - Under `Options>Output` invert Pitch and Z axes. Also, swap X and Y axes if needed. - -- If you find your view on the game making "strange jumps": - - Look at the video preview on AITrack and confirm the facial landmarks are correctly positioned. - - If not, check your illumination and the angle at which you have your camera. - - If the landmarks are recognized correctly, try fine-tuning the distance parameter on AITrack.
(Don't forget to click **Apply** each time you make any change). - -- If the view makes "jumps" when you move your head, try adjusting the curves on Opentrack (`Mapping` button). - - See **Tips** section - - Here is a video with some tips https://www.youtube.com/watch?v=u0TBI7SoGkc on how to configure your curves. - -- If you move left/right and your view goes up/down: - - You have to flip your X and Y axes in Opentrack (see screenshot above: Options>Output). - -## Tips - -Based on the testing done so far, here are some recommendations for getting the best performance: - -- Configure your movement curves on Opentrack well. Leave a little "dead zone" at the beginning of each curve and use the asymmetric mapping feature for "Pitch". -- Position your camera at about 0.5-2 meters horizontally from your face. - * It's better if the camera is directly in front of you, but it doesn't matter if you have some lateral offset. -- The camera should be approximately at your nose level. Good positions are on top of your monitor, or at its base. -- You shouldn't need to touch "Initial parameters" on AITrack. If you find the tracking is not accurate, fine-tune only the "Distance" parameter to the actual value (distance from your face to the camera, in meters). -- Use the Accela filter with Opentrack and set the smoothing near the maximum values (both rotation and position). \ No newline at end of file From da8911b028aa3ff9a1d59fafee2b6a06a52c59c8 Mon Sep 17 00:00:00 2001 From: AIRLegend Date: Sat, 26 Sep 2020 07:28:25 +0200 Subject: [PATCH 2/8] Updated solver. Now, if the user selects the Medium or Heavy model, the solver will use 29 points instead of 18 to estimate the position. This should solve the problem some users have of their view jumping like crazy.
--- AITracker/src/PositionSolver.cpp | 107 +++++++++++++++++--------- AITracker/src/PositionSolver.h | 9 ++- Client/src/camera/OCVCamera.cpp | 2 +- Client/src/tracker/TrackerFactory.cpp | 5 +- Client/src/version.h | 2 +- aitrack.json | 6 +- 6 files changed, 85 insertions(+), 46 deletions(-) diff --git a/AITracker/src/PositionSolver.cpp b/AITracker/src/PositionSolver.cpp index f123bec..bcce056 100644 --- a/AITracker/src/PositionSolver.cpp +++ b/AITracker/src/PositionSolver.cpp @@ -2,9 +2,10 @@ PositionSolver::PositionSolver(int width, int height, - float prior_pitch, float prior_yaw, float prior_distance) : - contour_indices{ 0,1,8,15,16,27,28,29,30,31,32,33,34,35,36,39,42,45 }, - landmark_points_buffer(NB_CONTOUR_POINTS, 1, CV_32FC2), + float prior_pitch, float prior_yaw, float prior_distance, bool complex) : + //contour_indices{ 0,1,8,15,16,27,28,29,30,31,32,33,34,35,36,39,42,45 }, + //contour_indices{ 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,27,28,29,30,31,32,33,34,35,36,39,42,45 }, + landmark_points_buffer(complex ? 
NB_CONTOUR_POINTS_COMPLEX: NB_CONTOUR_POINTS_BASE, 1, CV_32FC2), rv({ 0, 0, 0 }), tv({ 0, 0, 0 }) { @@ -16,28 +17,67 @@ PositionSolver::PositionSolver(int width, int height, this->rv[1] = this->prior_yaw; this->tv[2] = this->prior_distance; - //std::cout << "PRIORS CALCULATED: \nPITCH: " <prior_pitch << " YAW: " << this->prior_yaw << " DISTANCE: " << this->prior_distance; - - mat3dcontour = (cv::Mat_(18, 3) << - 0.45517698, -0.30089578, 0.76442945, - 0.44899884, -0.16699584, 0.765143, - 0., 0.621079, 0.28729478, - -0.44899884, -0.16699584, 0.765143, - -0.45517698, -0.30089578, 0.76442945, - 0., -0.2933326, 0.1375821, - 0., -0.1948287, 0.06915811, - 0., -0.10384402, 0.00915182, - 0., 0., 0., - 0.08062635, 0.04127607, 0.13416104, - 0.04643935, 0.05767522, 0.10299063, - 0., 0.06875312, 0.09054535, - -0.04643935, 0.05767522, 0.10299063, - -0.08062635, 0.04127607, 0.13416104, - 0.31590518, -0.2983375, 0.2851074, - 0.13122973, -0.28444737, 0.23423915, - -0.13122973, -0.28444737, 0.23423915, - -0.31590518, -0.2983375, 0.2851074 - ); + + if (!complex) + { + contour_indices = { 0,1,8,15,16,27,28,29,30,31,32,33,34,35,36,39,42,45 }; + mat3dcontour = (cv::Mat_(NB_CONTOUR_POINTS_BASE, 3) << + 0.45517698, -0.30089578, 0.76442945, + 0.44899884, -0.16699584, 0.765143, + 0., 0.621079, 0.28729478, + -0.44899884, -0.16699584, 0.765143, + -0.45517698, -0.30089578, 0.76442945, + 0., -0.2933326, 0.1375821, + 0., -0.1948287, 0.06915811, + 0., -0.10384402, 0.00915182, + 0., 0., 0., + 0.08062635, 0.04127607, 0.13416104, + 0.04643935, 0.05767522, 0.10299063, + 0., 0.06875312, 0.09054535, + -0.04643935, 0.05767522, 0.10299063, + -0.08062635, 0.04127607, 0.13416104, + 0.31590518, -0.2983375, 0.2851074, + 0.13122973, -0.28444737, 0.23423915, + -0.13122973, -0.28444737, 0.23423915, + -0.31590518, -0.2983375, 0.2851074 + ); + } + else + { + contour_indices = { 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,27,28,29,30,31,32,33,34,35,36,39,42,45 }; + mat3dcontour = (cv::Mat_(NB_CONTOUR_POINTS_COMPLEX, 3) 
<< + 0.45517698, -0.30089578, 0.76442945, + 0.44899884, -0.16699584, 0.76514298, + 0.43743154, -0.02265548, 0.73926717, + 0.41503343, 0.08894145, 0.74794745, + 0.38912359, 0.23238003, 0.70478839, + 0.3346301, 0.36126539, 0.61558759, + 0.2637251, 0.46000972, 0.49147922, + 0.16241622, 0.55803716, 0.33944517, + 0., 0.62107903, 0.28729478, + -0.16241622, 0.55803716, 0.33944517, + -0.2637251, 0.46000972, 0.49147922, + -0.3346301, 0.36126539, 0.61558759, + -0.38912359, 0.23238003, 0.70478839, + -0.41503343, 0.08894145, 0.74794745, + -0.43743154, -0.02265548, 0.73926717, + -0.44899884, -0.16699584, 0.76514298, + 0., -0.29333261, 0.13758209, + 0., -0.1948287, 0.06915811, + 0., -0.10384402, 0.00915182, + 0., 0., 0., + 0.08062635, 0.04127607, 0.13416104, + 0.04643935, 0.05767522, 0.10299063, + 0., 0.06875312, 0.09054535, + -0.04643935, 0.05767522, 0.10299063, + -0.08062635, 0.04127607, 0.13416104, + 0.31590518, -0.29833749, 0.2851074, + 0.13122973, -0.28444737, 0.23423915, + -0.13122973, -0.28444737, 0.23423915, + -0.31590518, -0.29833749, 0.2851074 + ); + } + camera_matrix = (cv::Mat_(3, 3) << height, 0, height / 2, @@ -46,6 +86,8 @@ PositionSolver::PositionSolver(int width, int height, ); camera_distortion = (cv::Mat_(4, 1) << 0, 0, 0, 0); + + if(complex) std::cout << "Using complex solver" << std::endl; } void PositionSolver::solve_rotation(FaceData* face_data) @@ -53,7 +95,7 @@ void PositionSolver::solve_rotation(FaceData* face_data) int contour_idx = 0; for (int j = 0; j < 2; j++) { - for (int i = 0; i < NB_CONTOUR_POINTS; i++) + for (int i = 0; i < contour_indices.size(); i++) { contour_idx = contour_indices[i]; landmark_points_buffer.at(i, j) = (int)face_data->landmark_coords[2 * contour_idx + j]; @@ -63,17 +105,8 @@ void PositionSolver::solve_rotation(FaceData* face_data) cv::Mat rvec(rv, true), tvec(tv, true); - /*solvePnP(mat3dcontour, - landmark_points_buffer, - this->camera_matrix, - this->camera_distortion, - rvec, - tvec, - true, //extrinsic guess - 
cv::SOLVEPNP_ITERATIVE - );*/ - solvePnP(mat3dcontour, + solvePnP(mat3dcontour, landmark_points_buffer, this->camera_matrix, this->camera_distortion, @@ -93,6 +126,8 @@ void PositionSolver::solve_rotation(FaceData* face_data) face_data->translation[i] = tvec.at(i, 0) * 10; } + std::cout << face_data->to_string() << std::endl; + correct_rotation(*face_data); } diff --git a/AITracker/src/PositionSolver.h b/AITracker/src/PositionSolver.h index 53cd0d2..0b2fb93 100644 --- a/AITracker/src/PositionSolver.h +++ b/AITracker/src/PositionSolver.h @@ -20,7 +20,8 @@ class PositionSolver int im_height, float prior_pitch = -2.f, float prior_yaw = -2.f, - float prior_distance = -1.f); + float prior_distance = -1.f, + bool complex = false); /** Stores solved translation/rotation on the face_data object @@ -32,12 +33,14 @@ class PositionSolver void set_prior_distance(float new_distance); private: - static const int NB_CONTOUR_POINTS = 18; + static const int NB_CONTOUR_POINTS_COMPLEX = 29; + static const int NB_CONTOUR_POINTS_BASE = 18; const double TO_DEG = (180.0 / 3.14159265); cv::Mat mat3dface; cv::Mat mat3dcontour; - int contour_indices[NB_CONTOUR_POINTS]; // Facial landmarks that interest us + //int contour_indices[NB_CONTOUR_POINTS]; // Facial landmarks that interest us + std::vector contour_indices; //Buffer so we dont have to allocate a list on every solve_rotation call. 
cv::Mat landmark_points_buffer; diff --git a/Client/src/camera/OCVCamera.cpp b/Client/src/camera/OCVCamera.cpp index 20056ea..2563879 100644 --- a/Client/src/camera/OCVCamera.cpp +++ b/Client/src/camera/OCVCamera.cpp @@ -24,7 +24,7 @@ OCVCamera::OCVCamera(int width, int height, int fps, int index) : this->height = cam_native_height; } - if (fps < 0) + if (fps < 30) this->fps = cam_native_fps; exposure, gain = -1; diff --git a/Client/src/tracker/TrackerFactory.cpp b/Client/src/tracker/TrackerFactory.cpp index ce9db22..6ba2bda 100644 --- a/Client/src/tracker/TrackerFactory.cpp +++ b/Client/src/tracker/TrackerFactory.cpp @@ -10,11 +10,13 @@ std::unique_ptr TrackerFactory::buildTracker(int im_width, int { std::string landmark_path = model_dir; std::string detect_path = model_dir + "detection.onnx"; + bool complex_solver = true; switch(type) { case TRACKER_TYPE::TRACKER_FAST: landmark_path += "lm_f.onnx"; + complex_solver = false; break; case TRACKER_TYPE::TRACKER_MED: landmark_path += "lm_m.onnx"; @@ -30,8 +32,7 @@ std::unique_ptr TrackerFactory::buildTracker(int im_width, int std::wstring detect_wstr = std::wstring_convert>().from_bytes(detect_path); std::wstring landmark_wstr = std::wstring_convert>().from_bytes(landmark_path); - - auto solver = std::make_unique(im_width, im_height, 0, 0, distance); + auto solver = std::make_unique(im_width, im_height, -2, -2, distance, complex_solver); std::unique_ptr t; try diff --git a/Client/src/version.h b/Client/src/version.h index 094faa6..384cbf7 100644 --- a/Client/src/version.h +++ b/Client/src/version.h @@ -1,3 +1,3 @@ #pragma once -#define AITRACK_VERSION "v0.6.2-alpha" \ No newline at end of file +#define AITRACK_VERSION "v0.6.3-alpha" \ No newline at end of file diff --git a/aitrack.json b/aitrack.json index dee87c7..6b61812 100644 --- a/aitrack.json +++ b/aitrack.json @@ -1,5 +1,5 @@ { - "version": "0.6.2-alpha", - "url": "https://github.com/AIRLegend/aitrack/releases/download/v0.6.2-alpha/aitrack-v0.6.2-alpha.zip", - 
"bin": "aitrack-v0.6.2-alpha/AITrack.exe" + "version": "0.6.3-alpha", + "url": "https://github.com/AIRLegend/aitrack/releases/download/v0.6.3-alpha/aitrack-v0.6.3-alpha.zip", + "bin": "aitrack-v0.6.3-alpha/AITrack.exe" } \ No newline at end of file From c31a39774e6a665ca66985cafa9ddf7e59b67b97 Mon Sep 17 00:00:00 2001 From: AIRLegend Date: Sun, 4 Oct 2020 20:01:15 +0200 Subject: [PATCH 3/8] Log error when searching for cameras --- Client/src/presenter/presenter.cpp | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) diff --git a/Client/src/presenter/presenter.cpp b/Client/src/presenter/presenter.cpp index d199524..652ee79 100644 --- a/Client/src/presenter/presenter.cpp +++ b/Client/src/presenter/presenter.cpp @@ -29,7 +29,16 @@ Presenter::Presenter(IView& view, std::unique_ptr&& t_factory, s CameraFactory camfactory; CameraSettings camera_settings = build_camera_params(); logger->info("Searching for cameras..."); - all_cameras = camfactory.getCameras(camera_settings); + try + { + all_cameras = camfactory.getCameras(camera_settings); + } + catch (const std::exception& ex) + { + logger->error("Error querying for cameras"); + logger->error(ex.what()); + throw std::runtime_error("Error querying cameras"); + } logger->info("Number of recognized cameras: {}", all_cameras.size()); if (all_cameras.size() == 0) From 8e6e16936582dfdcfd08cd135d4b9044d5f49e41 Mon Sep 17 00:00:00 2001 From: AIRLegend Date: Sun, 4 Oct 2020 20:14:42 +0200 Subject: [PATCH 4/8] Remove old prior pitch/yaw and add camera FOV parameter --- Client/src/model/Config.cpp | 11 ++++------- Client/src/model/Config.h | 2 +- 2 files changed, 5 insertions(+), 8 deletions(-) diff --git a/Client/src/model/Config.cpp b/Client/src/model/Config.cpp index 326b1bd..898c607 100644 --- a/Client/src/model/Config.cpp +++ b/Client/src/model/Config.cpp @@ -9,9 +9,8 @@ ConfigData ConfigData::getGenericConfig() ConfigData conf = ConfigData(); conf.ip = ""; conf.port = 0; - conf.prior_pitch = 0.0; - conf.prior_yaw 
= 0.0; - conf.prior_distance = .6; + conf.camera_fov = 56.0; + conf.prior_distance = .7; conf.show_video_feed = true; conf.selected_model = 0; conf.selected_camera = 0; @@ -44,8 +43,7 @@ void ConfigMgr::updateConfig(const ConfigData& data) { conf.setValue("ip", data.ip.data()); conf.setValue("port", data.port); - conf.setValue("prior_pitch", data.prior_pitch); - conf.setValue("prior_yaw", data.prior_yaw); + conf.setValue("camera_fov", data.camera_fov); conf.setValue("prior_distance", data.prior_distance); conf.setValue("video_feed", data.show_video_feed); conf.setValue("model", data.selected_model); @@ -64,8 +62,7 @@ ConfigData ConfigMgr::getConfig() ConfigData c = ConfigData(); c.ip = conf.value("ip", "").toString().toStdString(); c.port = conf.value("port", 0).toInt(); - c.prior_pitch = conf.value("prior_pitch", 0.0).toDouble(); - c.prior_yaw = conf.value("prior_yaw", 0.0).toDouble(); + c.camera_fov = conf.value("camera_fov", 0.0).toDouble(); c.prior_distance = conf.value("prior_distance", 0.0).toDouble(); c.show_video_feed = conf.value("video_feed", true).toBool(); c.use_landmark_stab = conf.value("stabilize_landmarks", true).toBool(); diff --git a/Client/src/model/Config.h b/Client/src/model/Config.h index 91de2eb..50e4aad 100644 --- a/Client/src/model/Config.h +++ b/Client/src/model/Config.h @@ -15,7 +15,7 @@ struct ConfigData int video_height; int video_width; int video_fps; - double prior_pitch, prior_yaw, prior_distance; + double prior_distance, camera_fov; bool show_video_feed; bool use_landmark_stab; bool autocheck_updates; From 7cc1bae1972a918183baebc5a731353d04e8f4ce Mon Sep 17 00:00:00 2001 From: AIRLegend Date: Sun, 4 Oct 2020 21:30:57 +0200 Subject: [PATCH 5/8] Began improving the solver. Taking the camera FOV into account makes the tracking better and could solve the issues of random jumping.
--- AITracker/src/PositionSolver.cpp | 35 +++++++++++++++++++++++++----- AITracker/src/PositionSolver.h | 1 + Client/src/presenter/presenter.cpp | 6 ----- 3 files changed, 30 insertions(+), 12 deletions(-) diff --git a/AITracker/src/PositionSolver.cpp b/AITracker/src/PositionSolver.cpp index bcce056..e70b8f8 100644 --- a/AITracker/src/PositionSolver.cpp +++ b/AITracker/src/PositionSolver.cpp @@ -9,9 +9,13 @@ PositionSolver::PositionSolver(int width, int height, rv({ 0, 0, 0 }), tv({ 0, 0, 0 }) { - this->prior_pitch = (1.1f * (prior_pitch + 90.f) / 180.f) - (double)2.5f; - this->prior_distance = prior_distance * -2.; - this->prior_yaw = (1.84f * (prior_yaw + 90.f) / 180.f) - (double)3.14f; + //this->prior_pitch = (1.1f * (prior_pitch + 90.f) / 180.f) - (double)2.5f; + //this->prior_yaw = (1.84f * (prior_yaw + 90.f) / 180.f) - (double)3.14f; + //this->prior_distance = prior_distance * -2.; + + this->prior_pitch = -1.57; + this->prior_yaw = -1.57; + this->prior_distance = prior_distance * -1.; this->rv[0] = this->prior_pitch; this->rv[1] = this->prior_yaw; @@ -78,12 +82,31 @@ PositionSolver::PositionSolver(int width, int height, ); } + // Taken from + // https://github.com/opentrack/opentrack/blob/3cc3ef246ad71c463c8952bcc96984b25d85b516/tracker-aruco/ftnoir_tracker_aruco.cpp#L193 + // Taking into account the camera FOV instead of assuming raw image dims is more clever and + // will make the solver more camera-agnostic. + float diag_fov = 56 * TO_RAD; - camera_matrix = (cv::Mat_(3, 3) << + // Get expressed in sensor-size units + + double fov_w = 2. * atan(tan(diag_fov / 2.) / sqrt(1. + height / (double)width * height / (double)width)); + double fov_h = 2. * atan(tan(diag_fov / 2.) / sqrt(1. 
+ width / (double)height * width / (double)height)); + + float i_height = .5 * height / (tan(.5*fov_w)); + float i_width = .5* width / (tan(.5*fov_h)); + + /*camera_matrix = (cv::Mat_(3, 3) << height, 0, height / 2, 0, height, width / 2, 0, 0, 1 - ); + );*/ + + camera_matrix = (cv::Mat_(3, 3) << + i_width, 0, height / 2, + 0, i_height, width / 2, + 0, 0, 1 + ); camera_distortion = (cv::Mat_(4, 1) << 0, 0, 0, 0); @@ -112,7 +135,7 @@ void PositionSolver::solve_rotation(FaceData* face_data) this->camera_distortion, rvec, tvec, - true, //extrinsic guess + false, //extrinsic guess cv::SOLVEPNP_ITERATIVE ); diff --git a/AITracker/src/PositionSolver.h b/AITracker/src/PositionSolver.h index 0b2fb93..c900a9d 100644 --- a/AITracker/src/PositionSolver.h +++ b/AITracker/src/PositionSolver.h @@ -36,6 +36,7 @@ class PositionSolver static const int NB_CONTOUR_POINTS_COMPLEX = 29; static const int NB_CONTOUR_POINTS_BASE = 18; const double TO_DEG = (180.0 / 3.14159265); + const double TO_RAD = (3.14159265 / 180.0); cv::Mat mat3dface; cv::Mat mat3dcontour; diff --git a/Client/src/presenter/presenter.cpp b/Client/src/presenter/presenter.cpp index 652ee79..1a3ed4c 100644 --- a/Client/src/presenter/presenter.cpp +++ b/Client/src/presenter/presenter.cpp @@ -219,7 +219,6 @@ void Presenter::update_tracking_data(FaceData& facedata) this->state.roll = facedata.rotation[2]; //Roll } - void Presenter::update_stabilizer(const ConfigData& data) { // Right now, only enabling/disabling it is supported @@ -259,7 +258,6 @@ void Presenter::update_camera_params() this->logger->info("Updated camera parameters. 
{}x{}@{}", state.video_width, state.video_height, state.video_fps); } - void Presenter::send_data(double* buffer_data) { //Send data @@ -272,7 +270,6 @@ void Presenter::send_data(double* buffer_data) udp_sender->send_data(buffer_data); } - void Presenter::toggle_tracking() { run = !run; @@ -321,13 +318,11 @@ void Presenter::save_prefs(const ConfigData& data) this->logger->info("Prefs saved"); } - void Presenter::sync_ui_inputs() { this->view->update_view_state(state); } - void Presenter::close_program() { //Assure we stop tracking loop. @@ -337,7 +332,6 @@ void Presenter::close_program() cam->stop_camera(); } - void Presenter::on_update_check_completed(bool update_exists) { if (update_exists) From 0fbca11920e8c2e5901d70b2d9aae7dd0a2669f8 Mon Sep 17 00:00:00 2001 From: AIRLegend Date: Sun, 11 Oct 2020 21:00:45 +0200 Subject: [PATCH 6/8] Implemented configuration of the FOV setting. --- AITracker/src/PositionSolver.cpp | 11 +++++-- AITracker/src/PositionSolver.h | 6 ++-- Client/src/model/Config.cpp | 2 +- Client/src/presenter/presenter.cpp | 2 ++ Client/src/tracker/TrackerFactory.cpp | 2 +- Client/src/tracker/TrackerFactory.h | 2 +- Client/src/view/ConfigWindow.cpp | 7 +++-- Client/src/view/ConfigWindow.h | 2 +- Client/src/view/ConfigWindow.ui | 42 ++++++++++++++++++++++----- 9 files changed, 56 insertions(+), 20 deletions(-) diff --git a/AITracker/src/PositionSolver.cpp b/AITracker/src/PositionSolver.cpp index e70b8f8..261e26a 100644 --- a/AITracker/src/PositionSolver.cpp +++ b/AITracker/src/PositionSolver.cpp @@ -2,7 +2,7 @@ PositionSolver::PositionSolver(int width, int height, - float prior_pitch, float prior_yaw, float prior_distance, bool complex) : + float prior_pitch, float prior_yaw, float prior_distance, bool complex, float fov) : //contour_indices{ 0,1,8,15,16,27,28,29,30,31,32,33,34,35,36,39,42,45 }, //contour_indices{ 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,27,28,29,30,31,32,33,34,35,36,39,42,45 }, landmark_points_buffer(complex ? 
NB_CONTOUR_POINTS_COMPLEX: NB_CONTOUR_POINTS_BASE, 1, CV_32FC2), @@ -19,6 +19,7 @@ PositionSolver::PositionSolver(int width, int height, this->rv[0] = this->prior_pitch; this->rv[1] = this->prior_yaw; + this->rv[2] = -1.57; this->tv[2] = this->prior_distance; @@ -49,7 +50,10 @@ PositionSolver::PositionSolver(int width, int height, else { contour_indices = { 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,27,28,29,30,31,32,33,34,35,36,39,42,45 }; - mat3dcontour = (cv::Mat_(NB_CONTOUR_POINTS_COMPLEX, 3) << + + landmark_points_buffer = cv::Mat(contour_indices.size(), 1, CV_32FC2); + + mat3dcontour = (cv::Mat_(contour_indices.size(), 3) << 0.45517698, -0.30089578, 0.76442945, 0.44899884, -0.16699584, 0.76514298, 0.43743154, -0.02265548, 0.73926717, @@ -86,7 +90,8 @@ PositionSolver::PositionSolver(int width, int height, // https://github.com/opentrack/opentrack/blob/3cc3ef246ad71c463c8952bcc96984b25d85b516/tracker-aruco/ftnoir_tracker_aruco.cpp#L193 // Taking into account the camera FOV instead of assuming raw image dims is more clever and // will make the solver more camera-agnostic. - float diag_fov = 56 * TO_RAD; + //float diag_fov = 56 * TO_RAD; + float diag_fov = fov * TO_RAD; // Get expressed in sensor-size units diff --git a/AITracker/src/PositionSolver.h b/AITracker/src/PositionSolver.h index c900a9d..4682765 100644 --- a/AITracker/src/PositionSolver.h +++ b/AITracker/src/PositionSolver.h @@ -21,7 +21,8 @@ class PositionSolver float prior_pitch = -2.f, float prior_yaw = -2.f, float prior_distance = -1.f, - bool complex = false); + bool complex = false, + float fov = 56.0f ); /** Stores solved translation/rotation on the face_data object @@ -40,8 +41,7 @@ class PositionSolver cv::Mat mat3dface; cv::Mat mat3dcontour; - //int contour_indices[NB_CONTOUR_POINTS]; // Facial landmarks that interest us - std::vector contour_indices; + std::vector contour_indices; // Facial landmarks that interest us //Buffer so we dont have to allocate a list on every solve_rotation call. 
cv::Mat landmark_points_buffer; diff --git a/Client/src/model/Config.cpp b/Client/src/model/Config.cpp index 898c607..d5185ef 100644 --- a/Client/src/model/Config.cpp +++ b/Client/src/model/Config.cpp @@ -62,7 +62,7 @@ ConfigData ConfigMgr::getConfig() ConfigData c = ConfigData(); c.ip = conf.value("ip", "").toString().toStdString(); c.port = conf.value("port", 0).toInt(); - c.camera_fov = conf.value("camera_fov", 0.0).toDouble(); + c.camera_fov = conf.value("camera_fov", 56.0).toDouble(); c.prior_distance = conf.value("prior_distance", 0.0).toDouble(); c.show_video_feed = conf.value("video_feed", true).toBool(); c.use_landmark_stab = conf.value("stabilize_landmarks", true).toBool(); diff --git a/Client/src/presenter/presenter.cpp b/Client/src/presenter/presenter.cpp index 1a3ed4c..b7afcd7 100644 --- a/Client/src/presenter/presenter.cpp +++ b/Client/src/presenter/presenter.cpp @@ -129,6 +129,7 @@ void Presenter::init_tracker(int type) buildTracker(all_cameras[state.selected_camera]->width, all_cameras[state.selected_camera]->height, (float)state.prior_distance, + this->state.camera_fov, tracker_factory->get_type(type) ); } @@ -143,6 +144,7 @@ void Presenter::init_tracker(int type) this->t = tracker_factory->buildTracker(all_cameras[state.selected_camera]->width, all_cameras[state.selected_camera]->height, (float)state.prior_distance, + this->state.camera_fov, tracker_factory->get_type(type)); } state.selected_model = type; diff --git a/Client/src/tracker/TrackerFactory.cpp b/Client/src/tracker/TrackerFactory.cpp index 6ba2bda..8bee92a 100644 --- a/Client/src/tracker/TrackerFactory.cpp +++ b/Client/src/tracker/TrackerFactory.cpp @@ -6,7 +6,7 @@ #include "TrackerWrapper.h" -std::unique_ptr TrackerFactory::buildTracker(int im_width, int im_height, float distance, TRACKER_TYPE type) +std::unique_ptr TrackerFactory::buildTracker(int im_width, int im_height, float distance, float fov, TRACKER_TYPE type) { std::string landmark_path = model_dir; std::string detect_path = 
model_dir + "detection.onnx"; diff --git a/Client/src/tracker/TrackerFactory.h b/Client/src/tracker/TrackerFactory.h index cbbaaea..271acb2 100644 --- a/Client/src/tracker/TrackerFactory.h +++ b/Client/src/tracker/TrackerFactory.h @@ -15,7 +15,7 @@ class TrackerFactory std::string model_dir; public: TrackerFactory(std::string modeldir); - std::unique_ptr buildTracker(int im_width, int im_height, float distance, TRACKER_TYPE type= TRACKER_TYPE::TRACKER_FAST); + std::unique_ptr buildTracker(int im_width, int im_height, float distance, float fov, TRACKER_TYPE type= TRACKER_TYPE::TRACKER_FAST); /** * Set the list with the string identifiers of the available models. diff --git a/Client/src/view/ConfigWindow.cpp b/Client/src/view/ConfigWindow.cpp index 10eca48..eef16e0 100644 --- a/Client/src/view/ConfigWindow.cpp +++ b/Client/src/view/ConfigWindow.cpp @@ -30,7 +30,7 @@ ConfigWindow::ConfigWindow(IRootView *prev_window, QWidget *parent) check_stabilization_landmarks = gp_box_priors->findChild("landmarkStabChck"); cb_modelType = gp_box_priors->findChild("modeltypeSelect"); distance_param = gp_box_priors->findChild("distanceField"); - + fov_param = gp_box_priors->findChild("fovField"); connect(btn_apply, SIGNAL(released()), this, SLOT(onApplyClick())); @@ -66,6 +66,7 @@ ConfigData ConfigWindow::get_inputs() conf.video_height = height_selector->value(); conf.selected_camera = input_camera->currentIndex(); conf.prior_distance = distance_param->text().toDouble(); + conf.camera_fov = fov_param->text().toDouble(); conf.ip = ip_field->text().toStdString(); conf.port = port_field->text().toInt(); conf.use_landmark_stab = check_stabilization_landmarks->isChecked(); @@ -104,6 +105,8 @@ void ConfigWindow::update_view_state(ConfigData conf) check_auto_update->setChecked(conf.autocheck_updates); check_stabilization_landmarks->setChecked(conf.use_landmark_stab); distance_param->setText(QString::number(conf.prior_distance)); + fov_param->setText(QString::number(conf.camera_fov)); + if 
(conf.ip != "" || conf.port > 0) { @@ -117,7 +120,7 @@ void ConfigWindow::update_view_state(ConfigData conf) ip_field->setText(""); port_field->setText(""); } - + } void ConfigWindow::set_enabled(bool enabled) diff --git a/Client/src/view/ConfigWindow.h b/Client/src/view/ConfigWindow.h index 228ec7c..83d571e 100644 --- a/Client/src/view/ConfigWindow.h +++ b/Client/src/view/ConfigWindow.h @@ -37,7 +37,7 @@ class ConfigWindow : public QWidget, IView QComboBox *input_camera, *cb_modelType; QCheckBox *check_stabilization_landmarks, *check_auto_update; - QLineEdit* distance_param, * ip_field, * port_field; + QLineEdit *distance_param, *fov_param, * ip_field, * port_field; QGroupBox *gp_box_camera_prefs, *gp_box_image_prefs, *gp_box_address, *gp_box_priors;; diff --git a/Client/src/view/ConfigWindow.ui b/Client/src/view/ConfigWindow.ui index ea7c9ce..aa074a9 100644 --- a/Client/src/view/ConfigWindow.ui +++ b/Client/src/view/ConfigWindow.ui @@ -250,7 +250,7 @@ 210 - 30 + 21 191 91 @@ -336,9 +336,9 @@ 210 - 130 + 120 191 - 151 + 171 @@ -348,7 +348,7 @@ 10 - 26 + 21 61 18 @@ -361,7 +361,7 @@ 90 - 27 + 22 61 20 @@ -374,7 +374,7 @@ 10 - 62 + 76 121 16 @@ -387,7 +387,7 @@ 27 - 85 + 94 121 22 @@ -415,12 +415,38 @@ true + + + + 90 + 51 + 60 + 20 + + + + + + + + + + 10 + 50 + 71 + 18 + + + + Camera FOV + + 241 - 290 + 295 131 21 From 8e0b4ef3a6849ff5018758434c4764b78a13a80f Mon Sep 17 00:00:00 2001 From: AIRLegend Date: Fri, 16 Oct 2020 15:33:16 +0200 Subject: [PATCH 7/8] Add exception logging to presenter so we can know a little bit more about why some cameras crash. 
--- AITracker/src/PositionSolver.cpp | 8 +--- Client/src/presenter/presenter.cpp | 61 +++++++++++++++++------------- 2 files changed, 35 insertions(+), 34 deletions(-) diff --git a/AITracker/src/PositionSolver.cpp b/AITracker/src/PositionSolver.cpp index 261e26a..7f50530 100644 --- a/AITracker/src/PositionSolver.cpp +++ b/AITracker/src/PositionSolver.cpp @@ -3,16 +3,11 @@ PositionSolver::PositionSolver(int width, int height, float prior_pitch, float prior_yaw, float prior_distance, bool complex, float fov) : - //contour_indices{ 0,1,8,15,16,27,28,29,30,31,32,33,34,35,36,39,42,45 }, - //contour_indices{ 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,27,28,29,30,31,32,33,34,35,36,39,42,45 }, + //TODO: Refactor removing prior_yaw/pitch parameters landmark_points_buffer(complex ? NB_CONTOUR_POINTS_COMPLEX: NB_CONTOUR_POINTS_BASE, 1, CV_32FC2), rv({ 0, 0, 0 }), tv({ 0, 0, 0 }) { - //this->prior_pitch = (1.1f * (prior_pitch + 90.f) / 180.f) - (double)2.5f; - //this->prior_yaw = (1.84f * (prior_yaw + 90.f) / 180.f) - (double)3.14f; - //this->prior_distance = prior_distance * -2.; - this->prior_pitch = -1.57; this->prior_yaw = -1.57; this->prior_distance = prior_distance * -1.; @@ -90,7 +85,6 @@ PositionSolver::PositionSolver(int width, int height, // https://github.com/opentrack/opentrack/blob/3cc3ef246ad71c463c8952bcc96984b25d85b516/tracker-aruco/ftnoir_tracker_aruco.cpp#L193 // Taking into account the camera FOV instead of assuming raw image dims is more clever and // will make the solver more camera-agnostic. 
- //float diag_fov = 56 * TO_RAD; float diag_fov = fov * TO_RAD; // Get expressed in sensor-size units diff --git a/Client/src/presenter/presenter.cpp b/Client/src/presenter/presenter.cpp index b7afcd7..f0eb2d0 100644 --- a/Client/src/presenter/presenter.cpp +++ b/Client/src/presenter/presenter.cpp @@ -168,46 +168,53 @@ void Presenter::run_loop() double buffer_data[6]; this->logger->info("Starting camera {} capture", state.selected_camera); - cam->start_camera(); - this->logger->info("Camera {} started capturing", state.selected_camera); - while(run) + try { - cam->get_frame(video_tex_pixels.get()); - cv::Mat mat(cam->height, cam->width, CV_8UC3, video_tex_pixels.get()); + cam->start_camera(); + this->logger->info("Camera {} started capturing", state.selected_camera); - t->predict(mat, d, this->filter); - - if (d.face_detected) + while(run) { - if (paint) + cam->get_frame(video_tex_pixels.get()); + cv::Mat mat(cam->height, cam->width, CV_8UC3, video_tex_pixels.get()); + + t->predict(mat, d, this->filter); + + if (d.face_detected) { - // Paint landmarks - for (int i = 0; i < 66; i++) + if (paint) { - cv::Point p(d.landmark_coords[2 * i + 1], d.landmark_coords[2 * i]); - cv::circle(mat, p, 2, color_magenta, 3); + // Paint landmarks + for (int i = 0; i < 66; i++) + { + cv::Point p(d.landmark_coords[2 * i + 1], d.landmark_coords[2 * i]); + cv::circle(mat, p, 2, color_magenta, 3); + } + cv::Point p1(d.face_coords[0], d.face_coords[1]); + cv::Point p2(d.face_coords[2], d.face_coords[3]); + cv::rectangle(mat, p1, p2, color_blue, 1); } - cv::Point p1(d.face_coords[0], d.face_coords[1]); - cv::Point p2(d.face_coords[2], d.face_coords[3]); - cv::rectangle(mat, p1, p2, color_blue, 1); + + update_tracking_data(d); + send_data(buffer_data); } - update_tracking_data(d); - send_data(buffer_data); - } + if (paint) + { + cv::cvtColor(mat, mat, cv::COLOR_BGR2RGB); + view->paint_video_frame(mat); + } - if (paint) - { - cv::cvtColor(mat, mat, cv::COLOR_BGR2RGB); - 
view->paint_video_frame(mat); + cv::waitKey(1000/state.video_fps); } - cv::waitKey(1000/state.video_fps); + cam->stop_camera(); + this->logger->info("Stop camera {} capture", state.selected_camera); + } + catch (std::exception& ex) { + this->logger->error(ex.what()); } - - cam->stop_camera(); - this->logger->info("Stop camera {} capture", state.selected_camera); } From 6a53771dcdfba0b5a4af50f1219f08caff5183c0 Mon Sep 17 00:00:00 2001 From: AIRLegend Date: Fri, 16 Oct 2020 15:41:45 +0200 Subject: [PATCH 8/8] Fix link to CI status --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 204a220..4a186bd 100644 --- a/README.md +++ b/README.md @@ -2,7 +2,7 @@

- The open head tracker -

-[![Build status](https://ci.appveyor.com/api/projects/status/18wa4pqqsge9m0x3?svg=true)](https://ci.appveyor.com/project/AIRLegend/aitracker) +[![Build status](https://ci.appveyor.com/api/projects/status/18wa4pqqsge9m0x3?svg=true)](https://ci.appveyor.com/project/AIRLegend/aitrack) ## What is this?