Python 3 version specifics #3
Comments
Hi Lee,
Are you trying to run it via the python code directly or via the executable linked here? https://drive.google.com/file/d/1RSHuZtHB_VTBN37-PuapUriQn49aI6jJ/view?usp=sharing
Hi Robert
Currently running the 0.8 executable and just got it running today. Managed to place the facial markers so far. Thanks for the newer version. Looking forward to seeing first-hand what it can do.
A bit about my filmmaking style. I want to put together a simple scene for now, drama driven. Take a look at a short film I produced and directed 4 years ago: https://www.youtube.com/watch?v=Ro4rUIPKB4U The scene between the two Agents is a good example of the type of drama I would like to test in a cg-based world. Also take a look at the drama style in The Last of Us 1 or 2. It has a very tempered, real-world feel, more like a feature film or television series. Hoping I can get the nuances of the actor to come across in a drama-driven test with strongtrack. With an open-source tool like yours, maybe one day I can get actors to take part in scenes from the comfort of their laptop with a webcam, from anywhere in the world. It would be useful for remote auditioning, scene tests, proofs of concept and short films.
Thanks for the feedback.
Regards
Lee
Hi Robert
I have a question that you may be able to answer. I need some guidance
as to whether an idea I have is doable considering your experience in
facial tracking and unreal 4.
I am a screenwriter and indie filmmaker who wants to produce cg versions
of my film scripts as a labour of love. No studios, no paychecks or writer's fee, no selling out my original vision to some bigwig corp that will simply corrupt my stories, so it means I need to do most of the work myself. With unreal4 I believe it's doable, but I have a concept to revisit the idea of facial motion capture, which I will try to explain clearly even though I am not that good at programming. Your advice may
help me find someone who can build this concept under GNU GPL and just
give it to the people for free. I would love to send the finished tool
to any actor who wants to try out "webcam acting". This gives me access
to any person from any country in the world without having to shoot on
location. Big money saver for indie cg films.
Here goes:
Take the front half of the default mannequin's face and use a live webcam material on its face. That's the proof of concept. I tried this with a sphere in ue4 and it works. The next step is to add a primitive jaw to follow the actor's jaw, plus simple tilt, yaw etc. I have heard of opentrack, which can track how the head moves; that may be useful and a timesaver to avoid reinventing the wheel.
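As a rough illustration of the head-tracking half of this idea (not StrongTrack code, and opentrack already does this job more robustly): the sketch below estimates head yaw/pitch/roll from a webcam with OpenCV and dlib and streams it over UDP, where an engine-side listener could rotate the mannequin's head or jaw each tick. The dlib landmark model path and the UDP target are assumptions.

```python
# Hedged sketch, not StrongTrack code: webcam head pose (yaw/pitch/roll) sent over UDP.
# Assumes the dlib 68-landmark model file and a UDP listener inside the engine.
import json
import socket

import cv2
import dlib
import numpy as np

PREDICTOR_PATH = "shape_predictor_68_face_landmarks.dat"  # assumed model file
UDP_TARGET = ("127.0.0.1", 5005)                           # assumed engine-side listener

# Generic 3D face points (nose tip, chin, eye corners, mouth corners)
# paired with their dlib 68-point landmark indices.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),        # 30 nose tip
    (0.0, -63.6, -12.5),    # 8  chin
    (-43.3, 32.7, -26.0),   # 36 left eye outer corner
    (43.3, 32.7, -26.0),    # 45 right eye outer corner
    (-28.9, -28.9, -24.1),  # 48 left mouth corner
    (28.9, -28.9, -24.1),   # 54 right mouth corner
], dtype=np.float64)
LANDMARK_IDS = [30, 8, 36, 45, 48, 54]

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(PREDICTOR_PATH)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cap = cv2.VideoCapture(0)  # default laptop webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if faces:
        shape = predictor(gray, faces[0])
        image_points = np.array(
            [(shape.part(i).x, shape.part(i).y) for i in LANDMARK_IDS], dtype=np.float64)
        h, w = frame.shape[:2]
        camera_matrix = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)
        found, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points, camera_matrix, None)
        if found:
            # Rotation vector -> matrix -> rough Euler angles in degrees.
            rmat, _ = cv2.Rodrigues(rvec)
            pitch = float(np.degrees(np.arctan2(rmat[2, 1], rmat[2, 2])))
            yaw = float(np.degrees(np.arcsin(-rmat[2, 0])))
            roll = float(np.degrees(np.arctan2(rmat[1, 0], rmat[0, 0])))
            sock.sendto(json.dumps({"yaw": yaw, "pitch": pitch, "roll": roll}).encode(),
                        UDP_TARGET)
    cv2.imshow("head pose", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

On the engine side this would just mean reading the UDP packets each frame and applying the rotation to the head or jaw bone; the specifics depend on how the mannequin is rigged.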
My reasoning is as follows: as a director I like the idea of compositing the actual actor's face onto a cg character. Maybe some post effects at a later date to blend the character's face into the scene.
Also, their original acting can be "seen" on the mannequin without having to use a cg facial rig, except for the jaw. It eliminates facial mocap for quick drafts of scenes and gets the storytelling engine moving in the right direction before facial mocap is used.
I am limited in my skill of using unreal in that I don't know how to map
a webcam to the mannequin. If I can find someone who can do this and
build a proof of concept, I will readily cede my ip concept to them in exchange for a usable tool, as long as it stays GNU GPL (or similar). My drive is to share my kind of stories, which are very drama-driven, but without studios and all the hell of production costs. In the screenshot attached you'll see I mapped my face to a rectangle, a cylinder and a sphere, so I know UE4 can do it. The rest I am not skilled enough to do.
Any pointers welcome, and I invite anyone who wants to be project lead on this concept to come to the table and talk about how we can build a tool anyone can use for free.
Regards
Lee
Hi Lee,
So if I understand correctly you'd be looking at a simple 3d model with a live video being continuously painted on as the texture? In my opinion this might work as a highly stylised solution - which it might be worth you pursuing - but since you've previously referenced something such as The Last of Us, I'm guessing you're aiming for something based in physically 'realistic' classic lighting.
Short answer:
Whatever I say below, keep experimenting! Maybe I'm totally wrong!
Medium answer:
In 2020 the easiest path is still to make, purchase or auto-generate a medium or high fidelity 3d model of the character, because it allows you to match the lighting to the scene and allows animators to tweak what needs tweaking later and blend together different takes. No amount of blending could really convincingly root a video performance onto scenes where the lighting is even a teeny bit different, because as humans we're very fussy about the lighting of a human face. Look at examples such as Rise of Skywalker or The Sopranos, where well paid teams toiled away on individual scenes trying to do this and it still looks weird. (You can see these and other examples in this Corridor Crew video: https://www.youtube.com/watch?v=2ZKPnuUFwOk)
Long answer:
The idea of compositing live video onto a 3d model sounds interesting, and perhaps similar in concept to how they captured the performances in LA Noire. That being said, the challenge would be matching the lighting of the original real life feed to the scene that is being rendered, or vice versa. The lighting of a rendered character's or filmed performer's face is overwhelmingly important to the final look of the film and to making it look coherent.
If you were to, for example, manage to have totally flat lighting on the real life performer then you might be able to pull it off - and this is what they did in LA Noire, I believe - but as you'll see in the behind the scenes of that, it was quite an expensive, complicated process and they still had to have a very accurate 3d model of the performer within the scene. Why? Because light will hit the 3d model and cast shadows from the nose, eye sockets etc. You'd also need to account for the turning of the head, which would quickly reveal the true, simple shape of the 3d model if it doesn't accurately match the performer (think of when people wear Richard Nixon masks or similar).
There are also a thousand tiny details that convey to a viewer that the performer is 'really' placed within the 3d scene, a prime example being the reflection of the scene's light sources - or at least a credible alternative - in the eye. In photography we call this the 'catch light'. It may sound small, but the human brain registers it subconsciously and it's tremendously important when trying to capture a performance and convince us that the actor is 'real'.
This is why the approach used by LA Noire was a one-off. With the more widely used conventional path, once you have a 3d model of a character which is correctly rigged and textured, you can focus on figuring out how to drive it from a performance captured in any number of ways, whether that be hand animated, driven by an AI analysis of the audio, or by webcams or head cams. The flexibility this affords shouldn't be overlooked. You can then place it within any scene, whether that be with a torch shining directly in someone's face or with totally diffuse lighting inside a cave. Then there is of course the thorny issue of things like hair, the concavity of the mouth, the extension of the eyelashes from the lids and so on. This ties back to the idea of physically based rendering, which almost all rendering solutions use nowadays: an asset that - once you have gone through the pain of making it - behaves believably under any and all lighting conditions.
Ultimately, LA Noire also illustrated another issue: even with all the millions of dollars behind that project, there was still a subtle mismatch between the physical movement of the character's body and the head that was being 'glued on'. Having a modelled head that is fully hooked up and rigged with the body means an animator or real time system can add subtle secondary motion to the head that implies it is truly attached to the body. Giving animators access to the curves is also something that is lost by relying on direct recorded video within the scene.
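To make the "drive a rigged model from a captured performance" point concrete, here is a tiny hedged sketch (not StrongTrack's internals) of the simplest possible mapping: one tracked lip distance turned into one normalised morph-target weight. The landmark indices assume the common 68-point layout, and the calibration numbers are placeholders you would measure per performer.

```python
# Illustrative only: map a tracked lip gap to a 0..1 "jaw open" morph-target weight.
import numpy as np

NEUTRAL_GAP = 4.0   # assumed lip gap (pixels) with the mouth closed
OPEN_GAP = 60.0     # assumed lip gap with the mouth fully open

def jaw_open_weight(landmarks: np.ndarray) -> float:
    """landmarks: (68, 2) array of x, y points for one tracked frame."""
    upper_lip = landmarks[62]   # inner upper lip in the 68-point layout
    lower_lip = landmarks[66]   # inner lower lip
    gap = float(np.linalg.norm(lower_lip - upper_lip))
    # Normalise against per-performer calibration and clamp to the morph-target range.
    weight = (gap - NEUTRAL_GAP) / (OPEN_GAP - NEUTRAL_GAP)
    return float(np.clip(weight, 0.0, 1.0))
```

The same pattern scales up: more measured distances and ratios feeding more blendshape weights, which the engine or an animator can then relight, retarget or retime, exactly the flexibility that is lost when the video itself is baked into the scene.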
Hi Robert
Very useful info. The video of Paul Walker really nailed it. Thank you.
Based on what you describe, I think it's better going for a mocap/facecap workflow with a cg representation of an actual actor.
So here is the question. You did a mocap/facecap video back in 2017.
https://www.youtube.com/watch?v=A65QedG8ouA. As an indie cg filmmaker I could really use a tool like that to get a story across. If you are open to it, could you discuss the tools and workflow, and whether there was any animation cleanup involved? I don't know how to animate, so I am
attempting to find a workflow that reduces post-animation and cleanup as
much as humanly possible.
What I am trying to achieve with a short cg film remake of Worthy is to do it without an animator (can't afford one), while getting as close as possible to an edit-free, life-like performance from an actor who has a computer and a webcam. Think of it as webcam auditioning, for want of a better word.
Thanks again for the pointers, really useful and a time saver.
Regards
Lee
Hi Robert
I just checked out the making of the characters for LA Noire. If I could get close to that kind of quality for a cg film it would indeed be very cool for getting the story across. Thanks for the tip. Will be watching strongtrack. I really like the direction it's going.
Regards
Lee
Hi Robert
Some previz (no sound) for the opening scene of the cg remake of Worthy
the mini-series. https://youtu.be/m7AY7umZvxU
The trailer https://www.youtube.com/watch?v=_pn-i44-xcI
The original pitch https://www.youtube.com/watch?v=Ro4rUIPKB4U
For now, Scene 1 will simply be a short film to get the proof of concept across as a diy, artist-centric cg film. I attach a reading sample to show you where I am going with Scene 1. I am not worried about IP protection since I have public proof dating back to 2017 on twitter, plus the revision history of the full two-part mini-series screenplay.
I welcome your input on how I can get a usable facial capture with a standard webcam. I am hoping to do webcam casting auditions to get home-bound actors to participate in this short. Body animation is with Mixamo for now, until I can find a free motion-capture solution, maybe OpenPose; not sure yet. As for the facial capture, you will see from the pitch on youtube how drama-centric my writing is; there are a lot of closeups on actors, and their head movements are very constrained when up close.
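For the OpenPose route mentioned above: OpenPose writes one JSON file of keypoints per frame, so pulling the data into Python for retargeting experiments is straightforward. A small sketch, assuming OpenPose was run with --write_json; the output folder name here is a placeholder.

```python
# Read OpenPose per-frame JSON output (first detected person only).
import json
from pathlib import Path

def load_openpose_keypoints(folder: str):
    """Yield (frame_name, list of [x, y, confidence] keypoints) per frame."""
    for path in sorted(Path(folder).glob("*_keypoints.json")):
        data = json.loads(path.read_text())
        people = data.get("people", [])
        if not people:
            continue  # no person detected in this frame
        flat = people[0]["pose_keypoints_2d"]
        yield path.stem, [flat[i:i + 3] for i in range(0, len(flat), 3)]

# Example: keypoint 0 is the nose in OpenPose's BODY_25/COCO layouts.
for name, keypoints in load_openpose_keypoints("openpose_output"):
    print(name, "nose:", keypoints[0])
```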
My vision is to tell unadulterated stories. No studios, no producers
twisting my stories into a mish mash of sellable junk, no directors who
will corrupt my original message. It is my hope to create this cg story
with only the help of fellow creators who do things like this as a
labour of love. When the film is done, I will be putting it online for
free. Storytelling is more important to me than making money.
Enjoy the writing sample. Please send me your email address so we may
correspond directly.
Regards
Lee
[email protected]
Hi Robert
FYI, I just posted an offer on the Unreal forum looking for a tech to take this over and run with it. I accept this is not in your scope, so I am posting it in the hope you know someone who might want to run with it.
https://forums.unrealengine.com/development-discussion/cinematics/1822506-paint-face-realtime-webcam-material-for-remote-actor-performance-facial-capture-onto-cg-character
Looking forward to 0.9. Please send your private email if you are open
to it.
Greensquare
Regards
Lee
Hi
I would love to showcase your app in a short film, but I really want to ask for more explicit, step-by-step instructions for installing strongtrack for a non-dev. I am using a regular laptop-based webcam with Windows 10.
Regards and please keep on going.
Lee