Gamma correctness with VSG #1291
Replies: 12 comments 6 replies
-
Nice writeup about the issues.
For reference, here's what vsgCs does to try to approach gamma correctness:
I think this is the goal to shoot for in VSG.
-
I've opened #1302 and vsg-dev/vsgXchange#204, which fix some of the problems and lay some groundwork for fixing some of the others. They include:
They notably don't include any CPU-side changes to textures, as it's easier to tell which textures are probably sRGB versus probably linear from the shader code that's about to use them, but I can look into that next.
-
I've still not thought of a great solution for osg2vsg, as OSG more or less ignored gamma correctness as a concept, so lots of things won't look identical unless things are deliberately done incorrectly, but lots of things will look better. It's generally just scenes where things like object colours and light positions were fine-tuned to look a certain way, or setups with non-gamma-correctness mitigations (e.g. point lights with physically implausible linear attenuation), that won't look better. I don't know to what extent the priority of osg2vsg is faithfulness versus just being a way to quick-start VSG-based projects - I know it mostly started as the latter, but that's not necessarily its long-term goal. I also don't know how prevalent OSG-based data is that was carefully crafted to look perfect, rather than taking the easier route of making the app itself gamma-correct.
-
With the batch of PRs I've collectively labelled Phase One, I think I've made a big dent in the problem. The biggest remaining problems are:
-
@robertosfield and I had a discussion about this problem, and one of the things that came out of it was that it would be a good idea to have visitors that can convert the colour spaces of vertex colours and materials in the scene graph, and switch image textures (but not data textures) between the SRGB and UNORM formats.

However, I've been exploring how to do that, and everything I've come up with is a nasty mix of overengineered, backwards-incompatible and incapable of dealing with data that already exists in the wild. If the requirement is only that the VSG's built-in shadersets are supported, it's relatively straightforward to find descriptor buffers and then inspect everything they contain for PBR and Phong materials, as they both have RTTI that identifies them and contain the actual values we'd want to modify, and it's a similar story for

However, there's an even bigger problem with vertex colours. Currently, there's nothing that remains in the scenegraph that identifies what any vertex attributes mean beyond their format (which normally just says whether they're floats or ints and whether they're scalars or vectors) and binding number. Parts of the VSG like the graphics pipeline configurator do know the attribute names at the same time as having access to the buffers, so do in principle have the ability to label new fields in classes like

There's an existing annoyance that a

If users have to provide their own implementations of the colour space conversion visitor(s), it would seemingly defeat the purpose of providing any. So after thinking about this for quite a long time, I still think something roughly along the lines of my original approach, where the loaders (which necessarily need to know what they're loading, except for the
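For the texture-switching half, the core of such a visitor is just a table pairing each UNORM format with its sRGB twin. A minimal sketch (the format names are Vulkan's standard `VkFormat` enumerants, written as strings here purely for illustration; a real implementation would use the enum values and cover the compressed formats too):

```python
# Hypothetical sketch of the UNORM <-> SRGB format pairing a
# texture-converting visitor would consult. Only a few common pairs
# are listed; Vulkan defines many more, including compressed ones.
UNORM_TO_SRGB = {
    "VK_FORMAT_R8_UNORM": "VK_FORMAT_R8_SRGB",
    "VK_FORMAT_R8G8B8_UNORM": "VK_FORMAT_R8G8B8_SRGB",
    "VK_FORMAT_R8G8B8A8_UNORM": "VK_FORMAT_R8G8B8A8_SRGB",
    "VK_FORMAT_B8G8R8A8_UNORM": "VK_FORMAT_B8G8R8A8_SRGB",
    "VK_FORMAT_BC7_UNORM_BLOCK": "VK_FORMAT_BC7_SRGB_BLOCK",
}
SRGB_TO_UNORM = {v: k for k, v in UNORM_TO_SRGB.items()}

def to_srgb(fmt):
    """Return the sRGB twin of a UNORM format, or the format unchanged."""
    return UNORM_TO_SRGB.get(fmt, fmt)

def to_unorm(fmt):
    """Return the UNORM twin of an sRGB format, or the format unchanged."""
    return SRGB_TO_UNORM.get(fmt, fmt)

print(to_srgb("VK_FORMAT_R8G8B8A8_UNORM"))  # VK_FORMAT_R8G8B8A8_SRGB
```

The hard part, as described above, isn't the table - it's knowing which images should be run through it, which is exactly the labelling problem with vertex attributes and data textures.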
-
One of the things I've been working on is an example which updates the example models that come with vsgExamples, as:
I've noticed a couple of things, though:
A good example of the difference is the teapot:

Before (incorrect gamma) / After (correct gamma)

The hue varies across the model in both images, but it's a much smaller range in the gamma-correct version. That's a good thing, and makes sense - when going through a non-linear function, the different brightnesses of the different colour channels are affected differently, so the ratio of photons of different wavelengths gets altered, and without correct gamma, different parts of the yellow teapot pick up a red or green tinge. Also, the mid tones are more realistic, so that's a nice bonus, but that's a fairly obvious consequence of being gamma-correct, as changing the mid tones is all it does.
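That hue shift can be reproduced numerically. A sketch using a hypothetical yellowish material and the gamma-2.2 approximation: shading the colour in linear space and then encoding preserves the red/green ratio the screen displays, while skipping the encode exaggerates it as the surface darkens.

```python
def linear_to_srgb(c):
    """Gamma-2.2 approximation of the sRGB encode."""
    return c ** (1.0 / 2.2)

# A yellowish linear-light material colour (hypothetical values).
r, g, b = 0.9, 0.7, 0.1
shade = 0.25  # lighting attenuation on a sloped part of the surface

# Gamma-correct: attenuate in linear space, then encode for display.
correct = [linear_to_srgb(shade * c) for c in (r, g, b)]

# Gamma-ignorant: treat the attenuated linear values as display values.
wrong = [shade * c for c in (r, g, b)]

# Red/green ratio as actually displayed (the screen applies ~x**2.2):
ratio_correct = correct[0] ** 2.2 / correct[1] ** 2.2  # ~1.29, same as r/g
ratio_wrong = wrong[0] ** 2.2 / wrong[1] ** 2.2        # ~1.74, red tinge
print(ratio_correct, ratio_wrong)
```

The gamma-correct path keeps the ratio at r/g regardless of the shading factor; the incorrect path effectively raises that ratio to the 2.2 power, which is the red-or-green tinge visible on the teapot.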
-
The 3D models that we have in vsgExamples/data really don't show off Vulkan/VSG capabilities: no PBR, no animation, and they don't all integrate well with all the examples. So I'd be inclined to find replacements that provide a better illustration of what the VSG can do out of the box. Open to suggestions on this.
-
I'm thinking of perhaps one or two glTF samples. I haven't recently spent time thinking about the needs of vsgExamples/vsgTutorial, but as we start looking at making a VSG-1.2 stable release I think it would make sense to look at what we can refine a bit with vsgExamples/vsgTutorials.
-
An even clearer example of the effect on midtones is the OSM example. This one doesn't need any alteration, as it doesn't directly specify the shaders or textures and lets the VSG set them up itself, but it still ends up wildly different except where the light hits it straight on:

Before (not gamma-correct) / After (gamma-correct)

For apps that have their scenes set up in other software, this will help make things look more realistic and more consistent with other renderers, but for scenes created for a specific VSG app, things will get brighter. If people have already compensated for things being unnaturally dark by boosting lighting etc., they may end up too bright and need tweaking to be suitably dim again. There's not much we can do about that except mention it in the release notes, as the amount of compensation required will depend both on artistic taste and on the typical angles surfaces make with any lights, so it's not a fixed brightness scale. It'll be better once this is behind us, but it's just a shame that the VSG wasn't gamma-correct from the start. At least I've got some good screenshots to show OpenMW people when they suggest that we can simply make Morrowind gamma-correct and it'll always look better and no one will need to manually adjust the placement, brightness or existence of any lights.
-
I've got all the examples in the vsgExamples repo looking how they're supposed to with the colour space changes. I still don't have access to push to the ColorSpace branches on the upstream VSG repos, so this is all available in the ColorSpace2 branches on my forks. Most of the changes were just switching colour literals to go via

A few notable inconsistencies are:
-
Most of the time with the VSG, vertex colours are three or four component vectors of floats, so the obvious right choice is to convert them to linear space upfront so there's no cost to the vertex shader. In fact, there's already room for optimisation here, as a 16-bit UNORM or half float would be plenty of precision for colour whether or not it's linear. However, vsgPoints looks like it's designed to deal with a lot of data, and already opts into using four-component 8-bit UNORM vectors for vertex colours. The whole point of sRGB is to allow colours to be quantised to eight bits per channel without ruining them, so that's been fine until now, but simply using the same approach as everywhere else and making them linear won't be good as linear colour needs around eleven bits per channel to get the same precision across the human gamut. In an ideal world, we could just set the
It looks like there's enough flexibility that whichever choice is made, user applications will be able to opt out of it and use one of the others, e.g. by replacing the stock shadersets and providing their own vertex data, so it's not a disaster if we choose badly, but obviously it's better to have a good out-of-the-box experience, so it would be better to make the decision with some information about who's doing what with vsgPoints so far.
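The precision claim above is easy to check numerically. A sketch under the gamma-2.2 approximation, comparing the relative error of quantising a dark colour value to 8 bits directly in linear space versus via sRGB encoding:

```python
def srgb_to_linear(c):
    """Gamma-2.2 approximation of the sRGB decode."""
    return c ** 2.2

dark = 0.01  # a dark value, in linear light

# Direct 8-bit linear quantisation: the step size is a fixed 1/255
# in linear space, which is huge relative to dark values.
q_linear = round(dark * 255) / 255
err_linear = abs(q_linear - dark) / dark   # ~18% relative error

# 8-bit sRGB quantisation: encode, quantise, decode. The encode
# spends far more of the 8-bit range on dark values.
encoded = dark ** (1.0 / 2.2)
q_srgb = srgb_to_linear(round(encoded * 255) / 255)
err_srgb = abs(q_srgb - dark) / dark       # ~3% relative error

print(err_linear, err_srgb)
```

This is why keeping vsgPoints' 8-bit vertex colours sRGB-encoded (and decoding in the shader) loses much less precision than naively storing the same bytes as linear.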
-
By the way, there are a few non-colour-related fixes that might want cherry-picking onto master before the colour stuff is merged:

- 66a8adb fixes some memory corruption caused by mixing and matching glslang builds with different ABIs.
- e066d18 prevents doubling line breaks when loading and resaving a
- bca81d4 fixes recompiling SPIR-V shaders - previously the new bytecode would be appended onto the existing bytecode, which would give invalid opcode errors.
-
There are several problems with colour management in the VSG and ancillary projects, and so I thought it would be sensible to have a thread about it.
Basic background
The numbers you see representing a colour on a computer usually don't directly represent the number of photons that come out of the monitor to make that colour. Originally, CRT screens were built so that increasing the signal voltage had an effect on brightness that was as close to perceptually uniform as was feasible, but human vision isn't linearly sensitive to brightness. Early computers mapped the numbers for colours directly to the voltage in the cable to the screen, and then when things turned digital, screens mapped the numbers to the brightness a CRT would give for those numbers. If you're trying to do lighting for a 3D scene to present as an image, you therefore need to know what the screen's going to do with the numbers it's given, and the colour space that tells you this for a typical screen is called sRGB. The maths can be approximated pretty closely with $C_{sRGB} = {C_{linear}}^{\frac{1}{2.2}}$. If you ignore this, you get problems when adding light from different sources together, $(C_1 + C_2)^{\frac{1}{2.2}} \neq {C_1}^{\frac{1}{2.2}} + {C_2}^{\frac{1}{2.2}}$, or attenuating light with distance, $a\left(C^{\frac{1}{2.2}}\right) \neq (aC)^{\frac{1}{2.2}}$, and if you mix things up and only sometimes do the conversion, or do it when it doesn't need to happen, colours end up brighter or darker than they should.
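That asymmetry is easy to demonstrate with the pure power-law approximation (a sketch; the real sRGB transfer function also has a small linear toe segment near black, ignored here):

```python
def linear_to_srgb(c):
    """Encode a linear-light value in [0, 1] for display (gamma-2.2 approximation)."""
    return c ** (1.0 / 2.2)

def srgb_to_linear(c):
    """Decode an sRGB-encoded value back to linear light."""
    return c ** 2.2

# Adding light has to happen in linear space: encoding each source
# first and then adding overshoots badly compared with adding the
# linear values and encoding once.
a, b = 0.2, 0.3
wrong = linear_to_srgb(a) + linear_to_srgb(b)  # > 1.0, already "overexposed"
right = linear_to_srgb(a + b)                  # ~0.73, the correct result
print(wrong, right)
```

The same applies in reverse: bytes from an image file are sRGB-encoded, so they need decoding before any lighting arithmetic touches them.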
Some history
Around the early 90s, it was typical for realtime graphics to totally ignore gamma correctness, as what you could do on a contemporary machine would never leave it as the most noticeable problem, but by the end of the nineties, extra tricks were needed to get away with ignoring it. A common one was giving point lights linear attenuation instead of the quadratic attenuation they have in real life, as $\left(d^{-2}C\right)^{\frac{1}{2.2}} = d^{\frac{-1}{1.1}}{C}^{\frac{1}{2.2}} \approx d^{-1}\left(C^{\frac{1}{2.2}}\right)$.
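The approximation can be checked numerically under the same gamma-2.2 power law: encoding an inverse-square-attenuated intensity gives $d^{-2/2.2} \approx d^{-0.91}$, which is close to the $d^{-1}$ of linear attenuation applied to the already-encoded value.

```python
def encode(c):
    """Gamma-2.2 approximation of the sRGB encode."""
    return c ** (1.0 / 2.2)

C = 0.8  # linear-light intensity at distance 1

for d in (1.0, 2.0, 4.0):
    correct = encode(C / d**2)  # physically based attenuation, then encoded
    trick = encode(C) / d       # linear attenuation of the encoded value
    print(d, round(correct, 3), round(trick, 3))
```

At d = 2 the two differ by only a few percent of full brightness, which is why the trick passed for decades in renderers that never decoded or encoded anything.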
Eventually, GPUs started getting built-in support for sRGB conversions: you could enable `GL_FRAMEBUFFER_SRGB` to have the fragment colour automatically converted before being written to the framebuffer (with correct blending), and use sRGB variants of texture formats which would convert the colours from sRGB to linear when they were accessed (again, with correct filtering). You could then usually get all the convenience of ignoring gamma correctness with all the correctness.

More recently, things have got more complicated: first with post-process shader effects which may or may not want to consume linear colour; then with tonemapping (sometimes an image will look better if you intentionally do the colour space correction incorrectly, as it can preserve details that would otherwise be lost to a screen's inability to be perfectly dark, ludicrously bright, or to show small brightness differences a human could see); and most recently with the wide availability of HDR monitors whose preferred colour spaces aren't sRGB.
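The "correct filtering" point is worth spelling out: texel interpolation has to happen on linear values, which is what the hardware sRGB formats guarantee by decoding before the filter runs. A sketch using the gamma-2.2 approximation, averaging a black texel and a white texel:

```python
def srgb_to_linear(c):
    """Gamma-2.2 approximation of the sRGB decode."""
    return c ** 2.2

def linear_to_srgb(c):
    """Gamma-2.2 approximation of the sRGB encode."""
    return c ** (1.0 / 2.2)

# Two adjacent texels as stored in the file: sRGB-encoded black and white.
t0, t1 = 0.0, 1.0

# Naive: filter the encoded values directly (what a UNORM format does).
naive = (t0 + t1) / 2  # 0.5 encoded -> displayed far darker than half-bright

# Correct: decode to linear, filter, re-encode (what an SRGB format does).
correct = linear_to_srgb((srgb_to_linear(t0) + srgb_to_linear(t1)) / 2)

print(naive, correct)  # the naive midpoint comes out noticeably darker
```

Filtering in the encoded space darkens every blended edge, which is one reason switching a texture between UNORM and SRGB formats changes more than just overall brightness.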
Out of the box, OpenSceneGraph didn't do much about this: you could manually enable `GL_FRAMEBUFFER_SRGB`, and visit any loaded textures to switch them to the sRGB variant of their previous format, but the image loaders loaded things as the non-sRGB versions.

VulkanSceneGraph does more, e.g. some of the loaders have options controlling whether images are assumed to be sRGB or not, and the built-in PBR shader does some colour space conversions. However, it's not perfect.
Current problems
This list might not be exhaustive, but it's what I've noticed so far.