I have some similar allocators:

- One gives out whole pages only. On Linux this is commonly 4 KiB, and on Apple silicon MacBook Pros it is commonly 16 KiB.
- One that has one big static chunk. That's it, no subdivision (think `no_std`).
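To illustrate the first kind, here is a minimal sketch (the function name is hypothetical) of how a page-granular allocator rounds every request up to whole pages, so the usable size can far exceed what was asked for:

```rust
// Hypothetical sketch: a page-granular allocator can only hand out whole
// pages, so every request is rounded up to a multiple of the page size.
fn usable_size(requested: usize, page_size: usize) -> usize {
    // div_ceil rounds the quotient up, so a partial page becomes a full page.
    requested.div_ceil(page_size) * page_size
}

fn main() {
    // Linux-style 4 KiB pages: a 100-byte request really consumes 4096 bytes.
    assert_eq!(usable_size(100, 4096), 4096);
    // Apple-silicon-style 16 KiB pages magnify the effect further.
    assert_eq!(usable_size(100, 16384), 16384);
}
```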
They are similar in that if I call `HashMap::with_capacity_in` or another function with a capacity of `c`:

- Knowing that `HashMap` can and will often round `c` up to a bigger size.
- Knowing that allocators in general are allowed to over-allocate and often will.
- Knowing that my allocators in particular are going to significantly over-allocate in bytes.
I would hope that `HashMap` would attempt to use this extra allocation space, but it never inspects the returned size.
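To make the mismatch concrete, here is a back-of-the-envelope sketch. The formulas are simplified approximations of hashbrown's sizing, and the 16-bytes-per-bucket figure is an assumption for illustration, not the real layout: a small table's byte request gets rounded up to a whole page, and the slack in that page could have held many more buckets than the table thinks it has.

```rust
// Simplified model (not hashbrown's exact layout): the table keeps itself at
// most 7/8 full and uses a power-of-two number of buckets; assume each bucket
// costs 16 bytes including control-byte overhead.
fn requested_buckets(capacity: usize) -> usize {
    (capacity * 8 / 7).next_power_of_two()
}

// A page-granular allocator rounds the byte request up to whole pages.
fn page_rounded(bytes: usize, page_size: usize) -> usize {
    bytes.div_ceil(page_size) * page_size
}

fn main() {
    let page = 4096;
    let buckets = requested_buckets(20);    // 32 buckets for capacity 20
    let bytes = buckets * 16;               // 512 bytes requested
    let actual = page_rounded(bytes, page); // allocator hands back 4096 bytes
    let possible_buckets = actual / 16;     // 256 buckets would have fit
    assert_eq!(buckets, 32);
    assert_eq!(actual, 4096);
    assert_eq!(possible_buckets, 256);
    // If the table inspected the returned size, usable capacity could be
    // 7/8 * 256 = 224 instead of 28, with zero extra memory consumed.
}
```

Under this toy model the allocation is eight times larger than the table believes it is, which is the "extreme improvement" case described below.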
Are there any reservations about doing this? It adds a little work on every allocation, but that's the only cost I can think of. The benefit is that some allocators will better utilize the allocation, and for some allocators this would be an extreme improvement. With that said, I am not a hash table expert in general, and certainly a noob in `hashbrown`'s internals, so there may be things I am unaware of.
I'm open to the idea, but of course we'll have to see how much this impacts common use cases (the global allocator). I don't expect much impact, so I'm pretty optimistic.
@Amanieu I made a PR for this: #523. It depends on #524. Could you please review it when you have the time? These are my first contributions to rust-lang/* and I welcome feedback, including nits. In particular, I'm not sure how ZSTs (zero-sized types) are expected to behave. There is a ZST in the test suite for `test_set::collect` which passes, but I'm not sure what I've done is necessarily logical.