Add 16-bit buffers and 8/16 conversion to LXLayeredComponent. #2
`LXLayeredComponent` is the base class for `LXEffect` and `LXPattern`. It contains a color buffer called `buffer`, which exposes its array of ints as `colors`; that's the array that effects and patterns write to in their `run()` method.

It isn't practical to port all effects and patterns to 16-bit colour in one huge swoop, so it is a major goal to support existing effects and patterns unchanged alongside new 16-bit effects and patterns.
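For reference, here is the shape of an existing 8-bit pattern (a minimal sketch; exact package paths and the `run()` access modifier vary across LX versions):

```java
import heronarts.lx.LX;
import heronarts.lx.LXPattern;
import heronarts.lx.color.LXColor;

// Minimal sketch of a current 8-bit pattern: run() writes packed
// 8-bit-per-channel ARGB ints into the colors array inherited from
// LXLayeredComponent.
public class SolidPattern extends LXPattern {
  public SolidPattern(LX lx) {
    super(lx);
  }

  @Override
  public void run(double deltaMs) {
    for (int i = 0; i < this.colors.length; ++i) {
      this.colors[i] = LXColor.hsb(200, 100, 50);  // a solid blue
    }
  }
}
```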
So the first step in adding 16-bit colour support is to make `LXLayeredComponent` manage buffers of both bit depths, `buffer` and `buffer16`, and convert between them as needed whenever anyone requests a buffer of a different depth than the one that was last written. The `buffersInSync` flag lets us convert when needed and avoid wasting time on conversions that have already been done.
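Here's a sketch of the lazy-conversion idea. The field `lastWritten16`, the accessor names, and the packing of a 16-bit-per-channel color into a `long` are all illustrative assumptions, not necessarily what this diff does:

```java
// Sketch of lazy 8/16 conversion inside LXLayeredComponent.
// Assumes 16-bit colors are packed as AAAARRRRGGGGBBBB in a long.
private int[] colors;     // 8-bit ARGB, one int per point (set up in the constructor)
private long[] colors16;  // 16-bit ARGB, one long per point
private boolean buffersInSync = false;  // do both depths hold the same frame?
private boolean lastWritten16 = false;  // which depth holds the newest frame

int[] getColors() {
  if (!this.buffersInSync && this.lastWritten16) {
    for (int i = 0; i < this.colors.length; ++i) {
      this.colors[i] = to8(this.colors16[i]);
    }
    this.buffersInSync = true;
  }
  return this.colors;
}

long[] getColors16() {
  if (!this.buffersInSync && !this.lastWritten16) {
    for (int i = 0; i < this.colors16.length; ++i) {
      this.colors16[i] = to16(this.colors[i]);
    }
    this.buffersInSync = true;
  }
  return this.colors16;
}

private static long to16(int c) {
  // Widen each 8-bit channel by replicating its byte (0xAB -> 0xABAB).
  long a = (c >>> 24) & 0xFF, r = (c >>> 16) & 0xFF;
  long g = (c >>> 8) & 0xFF, b = c & 0xFF;
  return (a * 0x101L) << 48 | (r * 0x101L) << 32 | (g * 0x101L) << 16 | (b * 0x101L);
}

private static int to8(long c) {
  // Keep the high byte of each 16-bit channel.
  return (int) ((c >>> 56) & 0xFF) << 24 | (int) ((c >>> 40) & 0xFF) << 16
       | (int) ((c >>> 24) & 0xFF) << 8  | (int) ((c >>> 8) & 0xFF);
}
```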
An added twist is that `LXLayeredComponent` already supports two modes of operation: it can either allocate and own its own `buffer` (which is what patterns do), or it can write to and modify an externally provided `LXBuffer` object (which is what effects do). The `Buffered` marker interface is used to select between these two modes.
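For context, the mode selection happens at construction time, roughly like this (a sketch; the actual constructor signatures and the `ModelBuffer` allocation may differ):

```java
// Sketch: the Buffered marker interface decides buffer ownership.
protected LXLayeredComponent(LX lx, LXBuffer externalBuffer) {
  super(lx);
  if (this instanceof Buffered) {
    // Patterns: allocate and own a private buffer.
    this.buffer = new ModelBuffer(lx);
  } else {
    // Effects: write in place to the buffer the engine hands in.
    this.buffer = externalBuffer;
  }
}
```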
We need to support both bit depths across both modes and across all subclasses, which is hard to do with inheritance. So there's a new `Uses16` marker interface that indicates whether an effect or pattern operates on the 16-bit buffer or the 8-bit buffer; you'll see this marker interface used in various places to decide whether a conversion between buffers is needed and in which direction to convert.
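The resulting dispatch, wherever a component is run, looks roughly like this (the `run16()` entry point and the sync bookkeeping are illustrative, building on the sketch above):

```java
// Sketch: dispatch on the Uses16 marker when running a component.
void loop(double deltaMs) {
  if (this instanceof Uses16) {
    // Bring buffer16 up to date (converting from 8-bit if stale),
    // then let the 16-bit subclass write into it.
    run16(deltaMs, getColors16());  // hypothetical 16-bit run method
    this.lastWritten16 = true;
  } else {
    getColors();   // bring the 8-bit buffer up to date if stale
    run(deltaMs);  // existing 8-bit path writes into colors
    this.lastWritten16 = false;
  }
  this.buffersInSync = false;  // the other depth is now stale
}
```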