
Python 3 version specifics #3

Open
compusolve-rsa opened this issue Sep 25, 2020 · 8 comments

Comments

@compusolve-rsa

compusolve-rsa commented Sep 25, 2020

Hi

I would love to showcase your app in a short film, but I'd really like to ask for more explicit, step-by-step instructions for installing strongtrack as a non-dev. I am using a regular laptop-based webcam on Windows 10.

Regards and please keep on going.

Lee

@rwsarmstrong
Owner

Hi Lee,
Are you trying to run it via the Python code directly, or via the executable linked here? https://drive.google.com/file/d/1RSHuZtHB_VTBN37-PuapUriQn49aI6jJ/view?usp=sharing

@compusolve-rsa
Author

compusolve-rsa commented Sep 25, 2020 via email

@compusolve-rsa
Author

compusolve-rsa commented Sep 27, 2020 via email

@rwsarmstrong
Owner

Hi Lee,
So if I understand correctly, you'd be looking at a simple 3d model with live video being continuously painted onto it as the texture? In my opinion this might work as a highly stylised solution, which might be worth pursuing, but since you've previously referenced something such as The Last of Us, I'm guessing you're aiming for something grounded in physically 'realistic', classic lighting.

Short answer:
Whatever I say below, keep experimenting! Maybe I'm totally wrong!

Medium answer:
In 2020 the easiest path is still to make, purchase or auto-generate a medium- or high-fidelity 3d model of the character, because it allows you to match the lighting to the scene, lets animators tweak what needs tweaking later, and lets you blend together different takes. No amount of blending could really convincingly root a video performance onto scenes where the lighting is even a teeny bit different, because as humans we're very fussy about the lighting of a human face. Look at examples such as Rise of Skywalker or The Sopranos, where well-paid teams toiled away on individual scenes trying to do this and it still looks weird. (You can see this and other examples in this Corridor Crew video: https://www.youtube.com/watch?v=2ZKPnuUFwOk)

Long answer:
The idea of compositing live video on a 3d model sounds interesting, and perhaps similar in concept to how they captured the performances in LA Noire. That being said, the challenge would be matching the lighting from the original real-life feed to the scene that is being rendered, or vice versa. The lighting of a rendered character's or filmed real-life performer's face is overwhelmingly important to creating the final look of the film and making it look coherent.
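
Just to make that lighting-mismatch point more concrete, here's a minimal sketch (my own illustration, not anything from StrongTrack; it assumes OpenCV, NumPy and placeholder filenames) of roughly the best you can do with simple colour statistics: nudging the face crop's overall colour and brightness towards a frame of the rendered scene. It can't recreate direction-dependent shading such as nose shadows or eye-socket occlusion, which is exactly the part that makes this hard:

```python
# Crude Reinhard-style colour transfer: match the mean/spread of the live
# face crop to a reference frame from the rendered scene in LAB space.
# This only matches global colour statistics, not directional shading.
import cv2
import numpy as np

def match_lighting_stats(face_bgr, scene_bgr):
    """Crudely match the colour statistics of a face crop to a rendered scene."""
    face_lab = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    scene_lab = cv2.cvtColor(scene_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)

    for c in range(3):  # L, a, b channels
        f_mean, f_std = face_lab[..., c].mean(), face_lab[..., c].std() + 1e-6
        s_mean, s_std = scene_lab[..., c].mean(), scene_lab[..., c].std() + 1e-6
        face_lab[..., c] = (face_lab[..., c] - f_mean) * (s_std / f_std) + s_mean

    return cv2.cvtColor(np.clip(face_lab, 0, 255).astype(np.uint8), cv2.COLOR_LAB2BGR)

if __name__ == "__main__":
    face = cv2.imread("webcam_face_crop.png")       # live performer frame (placeholder)
    scene = cv2.imread("rendered_scene_frame.png")  # target render frame (placeholder)
    cv2.imwrite("face_relit_guess.png", match_lighting_stats(face, scene))
```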

If you were to, for example, manage to have totally flat lighting on the real-life performer then you might be able to pull it off - and this is what they did in LA Noire, I believe - but as you'll see in the behind-the-scenes material for that, it was quite an expensive, complicated process and they still had to have a very accurate 3d model of the performer within the scene. Why? Because light will hit the 3d model and cast a shadow from the nose, eye sockets etc. You'd also need to account for the turning of the head, which would quickly reveal the true, simple shape of the 3d model if it doesn't accurately match the performer (think of when people wear Richard Nixon masks or similar).

There are also a thousand tiny details going into a face that convey to a viewer that the performer is 'really' placed within the 3d scene, a prime example being the reflection of the scene's light sources - or at least a credible alternative - in the eye. In photography we call this the 'catch light'. It may sound small, but the human brain registers it subconsciously and it's tremendously important when trying to capture a performance and convince us that the actor is 'real'.

This is why the approach used by LA Noire was a one-off. With the more widely used conventional path, once you have a 3d model of the character which is correctly rigged and textured, you can focus on figuring out how to drive it from a performance that can be captured in any number of ways, whether that be hand animated, driven by an AI analysis of the audio, or by webcams or head cams. The flexibility this affords shouldn't be overlooked. You can then place the character within any scene, whether that be with a torch shining directly in someone's face or with totally diffuse lighting inside a cave. Then there is of course the thorny issue of things like hair, the concavity of the mouth, the extension of the eyelashes from the lids and so on. This ties back to the idea of physically based rendering that almost all rendering solutions use nowadays, where you can have an asset that, once you have gone through the pain of making it, behaves believably under any and all lighting conditions.
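
To make the 'drive a rigged model from a performance' idea a little more concrete, here's a minimal sketch (the 68-point dlib/iBUG landmark layout and the calibration numbers are assumptions for illustration, not StrongTrack specifics) of turning tracked landmarks into a blendshape weight that any rig or engine could consume regardless of scene lighting:

```python
# Map mouth aperture from tracked landmarks to a 0..1 'jaw_open' blendshape
# weight. Assumes the dlib/iBUG 68-point ordering, where index 62 is the
# middle of the upper inner lip and 66 the middle of the lower inner lip.
import numpy as np

def jaw_open_weight(landmarks, neutral_gap, max_gap):
    """Return a 0..1 jaw-open weight from a (68, 2) landmark array."""
    gap = np.linalg.norm(landmarks[66] - landmarks[62])
    # Normalise against calibrated neutral and fully-open mouth distances.
    weight = (gap - neutral_gap) / (max_gap - neutral_gap)
    return float(np.clip(weight, 0.0, 1.0))

# Example: feed the weight to whatever rig or engine you use, once per frame.
frame_landmarks = np.random.rand(68, 2) * 100  # stand-in for real tracker output
print({"jaw_open": jaw_open_weight(frame_landmarks, neutral_gap=2.0, max_gap=25.0)})
```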

Ultimately, LA Noire also illustrated another issue, which is that even with all the millions of dollars behind that project, there was still a subtle mismatch between the physical movement of the character's body and the head that was being 'glued on'. Having a modelled head that is fully hooked up and rigged with the body means an animator or real-time system can add subtle secondary motion onto the head that implies it is truly attached to the body. Being able to give animators access to the animation curves is also something that is lost by relying on directly recorded video within the scene.
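
As a toy illustration of that secondary-motion point (a sketch that assumes a single yaw angle and hand-picked spring gains, nothing engine-specific), a damped spring is one common way to make the head lag and settle slightly behind the body it's rigged to:

```python
# The head's yaw chases the body's yaw through a damped spring, so a sudden
# body turn produces a small lag and overshoot on the head - the kind of
# secondary motion a 'glued on' video head can't have.
def simulate_head_follow(body_yaw_frames, stiffness=120.0, damping=18.0, dt=1 / 30):
    head_yaw, head_vel = body_yaw_frames[0], 0.0
    out = []
    for target in body_yaw_frames:
        accel = stiffness * (target - head_yaw) - damping * head_vel
        head_vel += accel * dt
        head_yaw += head_vel * dt
        out.append(head_yaw)
    return out

# Body snaps 20 degrees at frame 10; the head follows with a slight overshoot.
body_yaw = [0.0] * 10 + [20.0] * 30
print([round(a, 1) for a in simulate_head_follow(body_yaw)])
```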

@compusolve-rsa
Author

compusolve-rsa commented Oct 2, 2020 via email

@compusolve-rsa
Author

compusolve-rsa commented Oct 2, 2020 via email

@compusolve-rsa
Author

compusolve-rsa commented Oct 4, 2020 via email

@compusolve-rsa
Author

compusolve-rsa commented Oct 15, 2020 via email
