face landmark tracking over video #1
Not a big problem, all you have to do is to use . I'll update a new version, you can check it out :) BTW, I get every frame in real time from . If you want to read a saved video, that's another story, but you can do the same.
Yeah, I'm already doing that. But I run the request on every new frame, so the tracking is jittery from frame to frame because we aren't using any data from the previous frame.
Doing a new detection every frame is different from detecting once and then tracking subsequent frames. I can already do the former; I'm trying to see if the latter is possible in the Vision framework, because it would perform more smoothly.
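For what it's worth, the "detect once, then track" split is possible with Vision's object tracker. The sketch below is hypothetical code, not from this repo: it runs the expensive `VNDetectFaceRectanglesRequest` only when there is no face being tracked, and otherwise feeds the previous observation into a `VNTrackObjectRequest` via a `VNSequenceRequestHandler`. The 0.3 confidence threshold in `isStillTracking` is an arbitrary assumption.

```swift
import CoreVideo
import Vision

/// Whether a tracking observation is still trustworthy enough to keep
/// feeding forward (the 0.3 threshold is a guess, tune it yourself).
func isStillTracking(confidence: Float) -> Bool { confidence > 0.3 }

final class FaceTracker {
    private let sequenceHandler = VNSequenceRequestHandler()
    private var trackingRequest: VNTrackObjectRequest?

    func process(_ pixelBuffer: CVPixelBuffer) {
        if let tracking = trackingRequest {
            // Cheap path: track the face found earlier instead of re-detecting.
            try? sequenceHandler.perform([tracking], on: pixelBuffer)
            if let observation = tracking.results?.first as? VNDetectedObjectObservation,
               isStillTracking(confidence: observation.confidence) {
                tracking.inputObservation = observation  // feed forward to the next frame
            } else {
                trackingRequest = nil                    // lost the face; re-detect next frame
            }
        } else {
            // Expensive path: full detection, only when nothing is being tracked.
            let detect = VNDetectFaceRectanglesRequest()
            let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
            try? handler.perform([detect])
            if let face = detect.results?.first as? VNFaceObservation {
                trackingRequest = VNTrackObjectRequest(detectedObjectObservation: face)
            }
        }
    }
}
```

Note this only tracks the face rectangle; to get landmarks you would still run `VNDetectFaceLandmarksRequest`, but you can seed it with the tracked box (via `inputFaceObservations`) so the landmark solver starts from a stable region.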
Are you doing detection on a saved video?
No, real time.
Oh, I got it. You want to do something like motion tracking (the smooth way) instead of just detecting every single frame.
Yeah :)
I think we want something like https://github.com/hrastnik/face_detect_n_track, but with face landmark detection included, based on https://developer.apple.com/documentation/vision/vndetectfacelandmarksrequest
There are a lot of vision libraries, and also the Google Vision API. I don't know exactly what the difference is between the Vision framework and these libs, but these APIs are all based on single-image input: you have to feed in an image every single time/frame. Also, I tested the demo video from dlib C++ with my app (updated, with landmarks). It works pretty well.
In my project, I just take the first option. You can definitely run detection on a background thread. However, if you want to update the UI (draw landmarks on screen), you have to do it on the main thread.
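That threading split can be sketched roughly as below. This is illustrative code, not from this repo: `drawLandmarks` is a hypothetical UI callback, and the queue label and QoS are arbitrary choices.

```swift
import CoreVideo
import Dispatch
import Vision

// A serial background queue for the expensive Vision work,
// so the camera/UI thread is never blocked by detection.
let visionQueue = DispatchQueue(label: "vision.processing", qos: .userInitiated)

func handle(_ pixelBuffer: CVPixelBuffer,
            drawLandmarks: @escaping ([VNFaceObservation]) -> Void) {
    visionQueue.async {
        // Heavy lifting off the main thread.
        let request = VNDetectFaceLandmarksRequest()
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        try? handler.perform([request])
        let faces = (request.results as? [VNFaceObservation]) ?? []
        DispatchQueue.main.async {
            drawLandmarks(faces)  // UIKit work must happen on the main thread
        }
    }
}
```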
For real-time video you also need to take into account the difference between the video frame rate and the Vision request sample rate.
Hi shaibt,
Hi hanoi2018, to be clear, what I meant is that your device has to perform face detection in under 1/30 s for it to run on all frames at 30 fps. That mainly depends on your device's processing power and Apple's SW/HW optimisations; there is little you could do yourself to achieve it.
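One way to decouple the two rates when the device can't keep up is simply to throttle: run detection at most every N milliseconds and keep drawing the last result in between. A minimal sketch (assumed helper, not part of any Apple API):

```swift
import Foundation

/// Decides whether to run a Vision request for the frame at `timestamp`,
/// enforcing a minimum interval between runs. Frames that arrive too soon
/// are skipped, and the caller reuses the previous detection result.
struct FrameThrottle {
    let minInterval: TimeInterval              // e.g. 1/15 to sample at ~15 fps
    private(set) var lastRun: TimeInterval = -.infinity

    mutating func shouldProcess(at timestamp: TimeInterval) -> Bool {
        guard timestamp - lastRun >= minInterval else { return false }
        lastRun = timestamp
        return true
    }
}
```

With a 30 fps camera feed and `minInterval` of 1/15 s, detection runs on roughly every other frame; the timestamps would come from the sample buffer's presentation time.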
Hi shaibt, thanks for responding.
Have you achieved your goal of dealing with the jittering? I hope you will share your method. Thank you.
Hi ailias, I haven't achieved it yet.
@stanchiang
Same issue here, I'm processing every CVPixelBuffer with
Hi, I'm playing with the Vision framework and can use the face landmark feature to get the position of facial features in real time. However, I have to run the detector on every frame, which makes the real-time face mask jittery.
Any ideas on how we could optimize the landmark detection in a real-time feed with only the iOS frameworks?
FYI I tried the object tracker, but it wasn't as impressive as it could be. Maybe you've had better luck?
Thanks
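A common fix for exactly this kind of jitter, independent of how detection is run, is to low-pass filter each landmark across frames instead of drawing the raw per-frame output. A minimal sketch (hypothetical helper; the points mirror Vision's normalized landmark coordinates, and `alpha` is a guessed tuning value):

```swift
import Foundation

/// Exponentially smooths landmark positions over time:
/// output = alpha * new + (1 - alpha) * previous output.
/// Lower alpha = smoother but laggier; higher alpha = more responsive.
struct LandmarkSmoother {
    let alpha: Double                                   // e.g. 0.4
    private var previous: [(x: Double, y: Double)]? = nil

    mutating func smooth(_ points: [(x: Double, y: Double)]) -> [(x: Double, y: Double)] {
        // First frame, or landmark count changed (e.g. face lost and re-found):
        // reset and pass the raw points through.
        guard let prev = previous, prev.count == points.count else {
            previous = points
            return points
        }
        let blended = zip(points, prev).map { (new, old) in
            (x: alpha * new.x + (1 - alpha) * old.x,
             y: alpha * new.y + (1 - alpha) * old.y)
        }
        previous = blended
        return blended
    }
}
```

You would feed it the points from each `VNFaceObservation`'s landmarks and draw the smoothed output, which damps frame-to-frame noise even while re-detecting every frame.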