Header compression, while very effective in some contexts, has a CPU cost on both encode and decode. In other contexts that overhead is not worth paying, for example when bandwidth and data-transfer times are non-issues.
Other HTTP/2 implementations, such as Envoy's, allow effectively disabling header compression by setting the HPACK table size to 0.
Right now, it's possible to set a table size on h2's client, and to set it to 0, but that does not necessarily lead to performance gains, because entries are simply evicted when `max_size` is reached. In fact, a table with a `max_size` of 0 would most likely hit the "large header special case" and return `NotIndexed`. This means `encode_str` is still called (with Huffman) through `encode_not_indexed`.
I'd like to hear your thoughts on how/if it should be possible to disable compression (as Envoy does), either through a `table_size` of 0 or an explicit configuration option. Then, if possible, we should find a way to avoid Huffman encoding in those cases and write header bytes without compression.
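To make the current situation concrete, here is a sketch of what the issue describes as possible today. The builder method name (`header_table_size`) and the handshake shape are assumed from h2's `client::Builder` API, and `tcp_stream` is a hypothetical already-established connection; this is a non-runnable fragment, not a recommendation:

```
// Assumed h2 client::Builder API; `tcp_stream` is hypothetical.
let (send_request, connection) = h2::client::Builder::new()
    // Shrinks the dynamic table to nothing, so nothing gets indexed...
    .header_table_size(0)
    // ...but, per the issue, the encoder still Huffman-codes literals
    // through `encode_not_indexed`, so CPU work is not avoided.
    .handshake(tcp_stream)
    .await?;
```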
Quoting RFC 7541 (HPACK):

> The literal representation of a header field name or of a header field value can encode the sequence of octets either directly or using a static Huffman code (see Section 5.2).
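Since a decoder must accept either form, an encoder is free to skip Huffman entirely. As a minimal sketch in plain Rust (not h2's internals), a header can be emitted as a "Literal Header Field without Indexing, New Name" (RFC 7541, Section 6.2.2) with both string literals sent as raw octets, i.e. the H bit left at 0:

```rust
/// HPACK prefixed-integer encoding (RFC 7541, Section 5.1).
fn encode_int(value: usize, prefix_bits: u8, flags: u8, out: &mut Vec<u8>) {
    let max_prefix = (1usize << prefix_bits) - 1;
    if value < max_prefix {
        out.push(flags | value as u8);
    } else {
        out.push(flags | max_prefix as u8);
        let mut rem = value - max_prefix;
        while rem >= 128 {
            out.push((rem % 128) as u8 | 0x80);
            rem /= 128;
        }
        out.push(rem as u8);
    }
}

/// String literal with H = 0: a 7-bit-prefix length, then raw octets.
fn encode_string_raw(s: &[u8], out: &mut Vec<u8>) {
    encode_int(s.len(), 7, 0x00, out); // top bit 0 => not Huffman-coded
    out.extend_from_slice(s);
}

/// Literal header field without indexing, new name (first byte 0b0000_0000):
/// nothing is added to the dynamic table and nothing is Huffman-coded.
fn encode_header_raw(name: &str, value: &str) -> Vec<u8> {
    let mut out = vec![0x00];
    encode_string_raw(name.as_bytes(), &mut out);
    encode_string_raw(value.as_bytes(), &mut out);
    out
}

fn main() {
    let bytes = encode_header_raw("x-id", "abc");
    // 0x00, then len=4 + "x-id", then len=3 + "abc"
    assert_eq!(
        bytes,
        [0x00, 0x04, b'x', b'-', b'i', b'd', 0x03, b'a', b'b', b'c']
    );
    println!("{:02x?}", bytes);
}
```

The hot path here is just a length byte plus a `memcpy`, which is the kind of "write header bytes without compression" mode the issue asks about.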
I imagine people could want to take advantage of the Huffman compression even if it wouldn't be stored in the dynamic table, if reducing bytes is important. So then, it would seem to me that having the table size and whether compression is used be separate configuration would be more useful to people.
Does that seem right? Or did I miss something somewhere else?
Yeah, that seems right to me too @seanmonstar; we can probably be more explicit if you don't mind the addition of a new knob.
In our context, both the encoding (the majority of it) and the table indexing seem to consume cycles that we don't necessarily need. Even a table size of 0 appears to have some overhead from table operations, so a "don't compress headers" setting would probably be best in that scenario.