This repository has been archived by the owner on Aug 3, 2024. It is now read-only.
Haddock allocates 800GB when rendering HTML for our codebase. The big culprits appear to be some of our "prelude" modules, which re-export a huge amount of stuff. The biggest offender is 249MB of HTML on disk.
What's worse is that we actually spend a huge amount of time just escaping HTML strings in the `xhtml` library.
One potential solution is to store the HTML fragments for a documented item on disk. Then, instead of spending a massive amount of time rebuilding the same docs over and over again, we'd be able to re-use the HTML that's already been generated and just splice it in. This could work by hashing the `ExportItem DocNameI` that's provided to `processExport` and using that as a lookup key.
One downside with this is that the instances table currently shows all the instances in scope for a type, which can potentially change at each "re-export" site. Generating these tables is also a huge cost, so saving that work would be nice, particularly if we can do so incrementally.
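The cache described above could be sketched roughly as follows. This is a hypothetical illustration, not Haddock's actual API: `ExportItem DocNameI` and `processExport` are real Haddock internals, but the cache directory, the `cachedRender` helper, and the use of a plain `String` for HTML are all assumptions made for the sketch. A real implementation would need a hash that's stable across runs (unlike `hashable`'s default salt) and would have to account for the instances-table problem.

```haskell
-- Minimal sketch of an on-disk fragment cache, keyed by a hash of the
-- export item. Everything here except the Haddock names mentioned in the
-- lead-in is hypothetical.
import System.Directory (createDirectoryIfMissing, doesFileExist)
import System.FilePath  ((</>))

type Html = String  -- stand-in for the xhtml library's Html type

cacheDir :: FilePath
cacheDir = ".haddock-fragment-cache"  -- assumed location

-- | Look up a previously rendered fragment by key; on a miss, run the
-- expensive render (the processExport-style work) and store the result.
cachedRender :: Int -> IO Html -> IO Html
cachedRender key renderFresh = do
  createDirectoryIfMissing True cacheDir
  let path = cacheDir </> show key <> ".html"
  hit <- doesFileExist path
  if hit
    then readFile path      -- splice in HTML we already generated
    else do
      html <- renderFresh   -- fall back to rendering from scratch
      writeFile path html
      pure html
```

The caller would compute the `Int` key by hashing the `ExportItem DocNameI` before invoking `cachedRender`, so identical re-exports across modules hit the same cache entry instead of being re-rendered.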