
Implement a high-level data-driven rendering pipeline #5980

Draft
wants to merge 7 commits into master

Conversation

sturnclaw
Member

So I've been productively procrastinating for the last two days. Now my fingers (metaphorically) hurt, but at least I have this PR to prove that my procrastination is productive. 😄

This PR finally implements what I've been talking about for years: a high-level "renderer" which can be driven from a data file rather than hardcoding every part of the rendering pipeline into Camera.cpp.

It is heavily inspired by the Bitsquid/Stingray renderer design, especially the concept of conditional execution of render passes: the entire render chain is driven by injecting the user's graphics settings into the RenderSetup's parameters, rather than exporting individual "techniques" and making external code responsible for conditionally executing each one, in order, by hand.

This (in my opinion) solves a major problem we've had for a while: managing the high-level render flow of a "scene" in an ad-hoc manner is a non-trivial amount of work. Trying to implement any kind of post-processing in our current rendering design would essentially mean throwing a bunch of Graphics::RenderTarget pointers into the Camera object and then piping graphics settings from some engine object into each Camera used to render.

This PR is not finished yet. I suspect it's about 70% done, but there is some grunt work left to do and a few thorny problems to solve. I've deferred a few "details" to be finished once I come back to the PR, and accumulated a stack of longer-range TODO items (mostly documented in header doc comments) to implement post-merge. The new SceneRenderer is not fully integrated into our existing render pipe, and only the WorldView is currently taking advantage of it.

The final vision of the API is that each "viewport" that wants to render (WorldView, DeathView, Cockpit, Editor, etc.) will kick off a rendering function, passing a named RenderLayer to be executed during the rendering call. The rendering function can render a "Scene" (currently represented by passing in a Space and Camera pointer), with render passes operating on the bodies in that scene, or might be just a single Model, or perhaps even just a stack of fullscreen passes.
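To make that vision concrete, here is a minimal sketch of what a viewport-side call into the renderer could look like. Everything here is illustrative: the class shape, method names (`RenderScene`, `RenderModel`), and the stand-in `Space`/`Camera`/`Model` types are assumptions for the example, not the actual signatures in this PR.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Minimal stand-ins for engine types; purely illustrative.
struct Space {};
struct Camera {};
struct Model { std::string name; };

// Sketch of the envisioned entry point: each view kicks off a named
// RenderLayer, either with a full scene (Space + Camera) or just a model.
class SceneRenderer {
public:
	std::vector<std::string> executedLayers; // recorded for demonstration only

	void RenderScene(const std::string &layer, Space *, Camera *)
	{
		// The passes registered for this layer would execute here.
		executedLayers.push_back(layer);
	}

	void RenderModel(const std::string &layer, Model *)
	{
		executedLayers.push_back(layer);
	}
};
```

A WorldView-style caller would then do something like `renderer.RenderScene("world", &space, &camera)`, while a cockpit or editor view might only need `renderer.RenderModel("cockpit", &model)`.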

The renderer is written in an "intentionally expandable" style; it is meant to be modified and grown to respond to improvements in other parts of the codebase. The list of RenderGenerators is by no means complete, and will need to be expanded to implement further post-processing effects.

On that note, there are some hacked-in design "anti-decisions" in the name of getting an MVP running quickly without major changes to the rest of the code. For example, each RenderPass is meant to have a parameters block which can be read by a RenderGenerator and bound to shaders, rather than using the input and steps keys currently implemented; and shaders are meant to have their RenderStateDescriptor specified in the shaderdef file rather than manually configured by each RenderGenerator.
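For readers who haven't opened the diff, here is a rough sketch of what a render_setup TOML file might look like under the current scheme. The key names (`layer`, `pass`, `generator`, `input`, `steps`) are inferred from the description above and may not match the exact schema in the PR.

```toml
# Hypothetical render_setup sketch; key names are illustrative.
[[layer]]
name = "world"

[[layer.pass]]
name = "lighting"
generator = "LightingGenerator"

[[layer.pass]]
name = "bloom-downsample"
generator = "FullscreenDownsample"
input = "scene-color"   # 'input'/'steps' are the current stopgap keys;
steps = 4               # the plan is a generic parameters block instead
```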


Despite that wall of text and the draft status of the PR, I would definitely like feedback on the design of the renderer and welcome questions - if you're uncertain about specific aspects, that's a good indicator I need to add extra documentation.

Finally, don't be scared of the diff... 95% of it is just a TOML parsing backend added to contrib/. 😄

- Also clean up the naming of the oldLightIntensities member
- Fairly heavyweight dependency, but it is significantly nicer to consume TOML files for hand-written renderer setups than raw JSON
- SceneRenderer is responsible for doing most of what Camera did in terms of organizing the scene and queuing bodies to be rendered.
- RenderSetup allows defining a render pipeline through a flexible data file.
- RenderPasses defined in a render_setup file are consumed by SceneRenderer to quickly compose high-level rendering.
- Each RenderPass is associated with a Generator which is responsible for submitting the actual draw calls to the graphics API.
- ResourceManager provides a convenient API for Generators to allocate render targets to support post-processing effects and advanced rendering.
- BackgroundGenerator handles computing visibility and rendering the Background object
- BillboardGenerator renders all billboarded Bodies in the list of bodies
- LightingGenerator computes scene-wide lighting and fills the lighting UBO
- FullscreenGenerator implements a shader-based full-screen pass
- FullscreenResolve performs an MSAA resolve (or a regular blit) to a render target
- FullscreenDownsample does a multi-step downsample+upsample suitable for non-separable bloom/blur techniques
- Implementation based on "Bandwidth-Efficient Rendering" by Marius Bjorge (SIGGRAPH 2015)
- Based on Kawase Bloom by Masaki Kawase (GDC 2003)
- Temporary migration from Camera to SceneRenderer
- DeathView still uses Camera for rendering, need to eventually move the SceneRenderer ownership to a higher level (Pi::App or similar)
@sturnclaw sturnclaw added New feature C++ code Rendering Everything related to rendering labels Nov 21, 2024
@impaktor
Member

Very cool. Bonus for linking the article on web.archive, then you know you're reading forbidden secrets hidden by CIA and the lizard overlords. (Quite the difference between added / removed lines: +19k vs -31.)

<sturnclaw[m]> Behold: blur! https://i.imgur.com/tDz5Ijs.png [23:54:37]

@fluffyfreak
Contributor

I am looking at this, it's just taking me a LONG time to understand even though most of it seems perfectly logical 👍

@sturnclaw
Member Author

If you have any specific questions about how parts of the PR work, I can definitely try to answer them. It's a bit more confusing to understand right now compared to the "final" version, as parts of the system are currently only implied rather than implemented.


// Pick up to four suitable system light sources (stars)
lightSources.clear();
lightSources.reserve(4);
Contributor


I'd like to bump the number of allowed lights to 8. I was sure we had a define for it, but it looks like we need one; avoiding hardcoded values would be good.

Contributor


ahha! TOTAL_NUM_LIGHTS in light.h

Member Author


I'll convert this to use TOTAL_NUM_LIGHTS, but the utility is severely limited - I'd rather get the "lighting" code out of Renderer.h and implement a clustered-forward lighting scheme using the render_setup interface instead.

static void position_system_lights(Frame *camFrame, Frame *frame, std::vector<SceneRenderer::Light> &lights)
{
PROFILE_SCOPED()
if (lights.size() > 3) return;
Contributor


ditto about hardcoded limits

@fluffyfreak
Contributor

OK, I haven't got any real questions so far, just a couple of minor nitpicks.

I thought it was going to be really confusing and hard to understand but it actually seems quite simple to follow once I was stepping through the code itself. Good job on keeping it readable and making sense 👍

As for the whole idea, I like it. I really like breaking it down into configurable passes using swappable generators.

@fluffyfreak
Contributor

Could we use one of these Generator passes to add support for alpha-blended rendering too? It's something we have hacked in in a limited way at the moment, but since this is a forward rendering engine it's sorely lacking.

@bszlrd
Contributor

bszlrd commented Dec 2, 2024

I would love that. Could make decals possible.

@sturnclaw
Member Author

Could we use one of these Generator passes to add support for alpha-blended rendering too? It's something we have hacked in in a limited way at the moment, but since this is a forward rendering engine it's sorely lacking.

"Yes, but" - the intent is to migrate to a lit transparent pass using the generator mechanism, but it's going to require quite a lot of refactoring of the SceneGraph::Model architecture to support binning "sub-meshes" to different passes. This is a longer-term plan, and will probably be implemented on the same time scale as shadows.

The problem is that we have a very strongly "top-down" method of rendering individual "objects" - we call Body::Render(), which calls Model::Render(), which calls Group::Render(), and so on. This means the top-level rendering code has a very limited ability to reason about what's actually in a render pass, and no way to bin individual draw calls to different passes - instead we have to render the entire thing all over again for a new pass.

I have some plans to change this model entirely, but they involve a non-trivial amount of work.
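The binning idea above can be sketched in a few lines. This is not engine code: the `PassId`/`DrawCall` types are invented for illustration. The contrast is with the current top-down chain (Body::Render() calling Model::Render() calling Group::Render(), issuing draws immediately), where the renderer never sees individual draws; if models instead emitted draw calls tagged with a pass id, the scene renderer could sort them itself:

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Illustrative pass ids; a real scheme would be data-driven.
enum class PassId : uint8_t { Opaque, Transparent };

// A draw call tagged with the pass it belongs to.
struct DrawCall { std::string mesh; PassId pass; };

// Instead of each Model issuing its draws immediately during Render(),
// models emit tagged draw calls and the renderer bins them per pass,
// so a transparent sub-mesh doesn't force re-rendering the whole model.
std::map<PassId, std::vector<DrawCall>> bin_draws(const std::vector<DrawCall> &draws)
{
	std::map<PassId, std::vector<DrawCall>> bins;
	for (const DrawCall &d : draws)
		bins[d.pass].push_back(d);
	return bins;
}
```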
