Speed up decoding of large enums #2
Conversation
```diff
 genericUnsafeDecodeEnum opts =
-  let constructors = Object.fromFoldable (enumConstructors opts :: Array (Tuple String rep))
+  let constructors = Object.fromFoldable (enumConstructors to opts :: Array (Tuple String a))
   in \value ->
     let tag = decodeString value in
     case Object.lookup tag constructors of
```
So we are returning a lambda here, but the lambda doesn't need to call `to` anymore, because `constructors` already holds values of the final type and not the `rep`.
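A sketch of the two shapes being compared (the `Fruit` datatype and the function names are illustrative, not the library's API; the `Maybe` result is also an assumption, the real unsafe decoder may handle the miss case differently):

```purescript
module LookupShape where

import Prelude

import Data.Generic.Rep (class Generic, to)
import Data.Maybe (Maybe)
import Foreign.Object (Object)
import Foreign.Object as Object

-- Illustrative enum, standing in for a large generated one
data Fruit = Apple | Banana

derive instance genericFruit :: Generic Fruit _

-- Before: the table held reps, so every decoded value paid for `to`
decodeBefore :: forall rep. Generic Fruit rep => Object rep -> String -> Maybe Fruit
decodeBefore constructors tag = to <$> Object.lookup tag constructors

-- After: `to` already ran while the table was built, so each decode
-- is only a hashtable lookup
decodeAfter :: Object Fruit -> String -> Maybe Fruit
decodeAfter constructors tag = Object.lookup tag constructors
```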
Yes (assuming JS `Object` is a hashtable, which it will probably be in this case in most implementations).

To clarify: the …

Also, the complexity I provided in the PR description is wrong. The generic tree seems to be unbalanced, so it's …
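To make the unbalanced-tree point concrete, here is a hypothetical 4-constructor enum; the right-nested shape shown in the comment is what the compiler's `Generic` deriving produces:

```purescript
import Data.Generic.Rep (class Generic)

data Dir = N | E | S | W

derive instance genericDir :: Generic Dir _

-- The derived rep is a right-nested chain, not a balanced tree:
--
--   Sum (Constructor "N" NoArguments)
--       (Sum (Constructor "E" NoArguments)
--            (Sum (Constructor "S" NoArguments)
--                 (Constructor "W" NoArguments)))
--
-- so `to` for the last constructor has to unwrap n - 1 `Inr` layers:
-- O(n) for the deepest constructor, not O(log n).
```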
If my understanding (in the comment) is ok, then lg to me :)
Generic `to` is quite slow on datatypes with many constructors, because it runs O(log^2(n)) `instanceof` operations, where `n` is the number of constructors.

This change moves that cost from the decoding loop to initialization time: we now store values of the target type in the lookup table instead of their Generic representations. This makes initialization slower, but if there are many enum values to decode, we save time overall.
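A minimal self-contained sketch of the idea, under assumptions: `EnumRep`, `enumRep`, and `mkEnumDecoder` are hypothetical names standing in for the library's `enumConstructors` machinery, and the imports assume a recent compiler where `Data.Generic.Rep` ships with the prelude and `reflectSymbol` accepts `Proxy` (older versions use `SProxy`):

```purescript
module EnumDecodeSketch where

import Prelude

import Data.Generic.Rep (class Generic, Constructor(..), NoArguments(..), Sum(..), to)
import Data.Maybe (Maybe)
import Data.Symbol (class IsSymbol, reflectSymbol)
import Data.Tuple (Tuple(..))
import Foreign.Object as Object
import Type.Proxy (Proxy(..))

-- Hypothetical stand-in for `enumConstructors`: enumerate every
-- constructor of a rep together with its tag.
class EnumRep rep where
  enumRep :: Array (Tuple String rep)

instance enumRepSum :: (EnumRep a, EnumRep b) => EnumRep (Sum a b) where
  enumRep = map (map Inl) enumRep <> map (map Inr) enumRep

instance enumRepCtor :: IsSymbol name => EnumRep (Constructor name NoArguments) where
  enumRep = [ Tuple (reflectSymbol (Proxy :: Proxy name)) (Constructor NoArguments) ]

-- The table is built once: `to` runs once per constructor at
-- initialization, and the stored values already have the target type.
-- The returned lambda does no Generic traversal at all.
mkEnumDecoder
  :: forall a rep
   . Generic a rep
  => EnumRep rep
  => Unit
  -> (String -> Maybe a)
mkEnumDecoder _ =
  let
    table :: Object.Object a
    table = Object.fromFoldable (map (map to) (enumRep :: Array (Tuple String rep)))
  in
    \tag -> Object.lookup tag table
```

Applying `to` while building the table, rather than inside the returned lambda, is what moves the Generic traversal out of the per-value path.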
Before:
After:
As the benchmark results above show, after this change enum decoding time is almost independent of enum size. This makes sense: decoding is now just a hashtable lookup, as it should be.
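For completeness, a usage sketch with the hypothetical `mkEnumDecoder` above (`Color` is likewise illustrative): defined at the top level, the table is built once per module load, and every subsequent call is a single lookup.

```purescript
data Color = Red | Green | Blue

derive instance genericColor :: Generic Color _

-- The table is built once, here; each call is then a single lookup.
decodeColor :: String -> Maybe Color
decodeColor = mkEnumDecoder unit
```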