CoW: Use exponential backoff when clearing dead references #55518
Conversation
pandas/_libs/internals.pyx (outdated)
if nr_of_refs < self.clear_counter // 2:
    self.clear_counter = self.clear_counter // 2
Do we think it is needed to also reduce this? Or, I assume this is mostly to reduce the counter again in case it has become very large, not necessarily to let it become smaller than 500
Very good point, I intended to add a max here, e.g. max(..., 500)
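For illustration, a minimal plain-Python sketch of the shrink rule being discussed, assuming a starting threshold of 500 (the names here are illustrative, not the actual Cython in `internals.pyx`):

```python
MIN_CLEAR_COUNTER = 500  # assumed initial threshold, per the discussion above

def shrink_threshold(clear_counter: int, nr_of_refs: int) -> int:
    # Halve the threshold once the live references drop well below it,
    # but never let it fall under the initial value of 500.
    if nr_of_refs < clear_counter // 2:
        return max(clear_counter // 2, MIN_CLEAR_COUNTER)
    return clear_counter
```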
pandas/_libs/internals.pyx (outdated)
if nr_of_refs < self.clear_counter // 2:
    self.clear_counter = self.clear_counter // 2
elif nr_of_refs > self.clear_counter:
    self.clear_counter = min(self.clear_counter * 2, nr_of_refs)
Could we also just do x2 instead of the min(..) calculation?
I am wondering: if you repeatedly add a reference (for an object that doesn't go out of scope), doesn't that end up increasing the counter by only +1 every time? For example, you have 501 refs, hitting the threshold; at that moment you clear the refs, but nr_of_refs is still 501 after doing that, and then here we set the new threshold to min(500 * 2, 501), i.e. 501?
I must be missing something, because otherwise I don't understand how this fixes the perf issue of the list(df.items()) example.
Yeah you are correct, already added a max here because I came to the same conclusion
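To make the difference concrete, a small hypothetical comparison of the two growth rules (not the committed code) in the df.items()-style scenario where every reference stays alive:

```python
def grow_threshold_min(clear_counter: int, nr_of_refs: int) -> int:
    # min(): with 501 live refs and a threshold of 500, the new threshold
    # becomes 501, so the very next insertion triggers another full scan.
    return min(clear_counter * 2, nr_of_refs)

def grow_threshold_max(clear_counter: int, nr_of_refs: int) -> int:
    # max(): the threshold at least doubles, so full scans happen only
    # O(log n) times while the reference list grows.
    return max(clear_counter * 2, nr_of_refs)

assert grow_threshold_min(500, 501) == 501    # +1 growth -> still O(n^2)
assert grow_threshold_max(500, 501) == 1000   # exponential backoff
```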
doc/source/whatsnew/v2.1.2.rst (outdated)
@@ -17,6 +17,7 @@ Fixed regressions
- Fixed regression in :meth:`DataFrame.join` where result has missing values and dtype is arrow backed string (:issue:`55348`)
- Fixed regression in :meth:`DataFrame.resample` which was extrapolating back to ``origin`` when ``origin`` was outside its bounds (:issue:`55064`)
- Fixed regression in :meth:`DataFrame.sort_index` which was not sorting correctly when the index was a sliced :class:`MultiIndex` (:issue:`55379`)
- Fixed performance regression in Copy-on-Write mechanism (:issue:`55256`, :issue:`55245`)
The regression occurs without Copy-on-Write too. I think we should mention that here.
Yeah I struggled a bit with the wording, any suggestions?
Maybe:
Fixed performance regression in DataFrame copying, DataFrame iteration, and groupby methods taking user defined functions.
?
I think it's better to leave the Copy-on-Write part out - I personally couldn't find a way to word it without making it seem like the issue was with Copy-on-Write only.
Yeah I am not really happy with listing methods, since this affects all kinds of things with wide data frames
Can you add a test that just adds blocks to a
Tests are a good idea, added the relevant cases
Co-authored-by: Joris Van den Bossche <[email protected]>
# Use exponential backoff to decide when we want to clear references
# if force=False. Clearing for every insertion causes slowdowns if
# all these objects stay alive, e.g. df.items() for wide DataFrames
# see GH#55245 and GH#55008
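Putting the pieces from this thread together, here is a self-contained, pure-Python sketch of the backoff scheme; this is hedged: the real implementation is the Cython code in `pandas/_libs/internals.pyx` and may differ in details, and the class name `BlockRefsSketch` is made up for illustration, while the attribute names mirror those visible in the diff:

```python
import weakref

class BlockRefsSketch:
    """Pure-Python sketch of the exponential-backoff clearing scheme."""

    def __init__(self) -> None:
        self.referenced_blocks: list[weakref.ref] = []
        self.clear_counter = 500  # initial threshold from the discussion

    def _clear_dead_references(self, force: bool = False) -> None:
        # Only pay for a full O(n) scan when forced or when the list has
        # grown past the current threshold.
        if force or len(self.referenced_blocks) > self.clear_counter:
            self.referenced_blocks = [
                ref for ref in self.referenced_blocks if ref() is not None
            ]
            nr_of_refs = len(self.referenced_blocks)
            if nr_of_refs < self.clear_counter // 2:
                # Mostly-dead references: shrink the threshold again,
                # but never below the initial 500.
                self.clear_counter = max(self.clear_counter // 2, 500)
            elif nr_of_refs > self.clear_counter:
                # Mostly-alive references: at least double the threshold
                # so that full scans stay O(log n) in the number of adds.
                self.clear_counter = max(self.clear_counter * 2, nr_of_refs)

    def add_reference(self, blk: object) -> None:
        self._clear_dead_references()
        self.referenced_blocks.append(weakref.ref(blk))
```

In the df.items() pattern every Series keeps its block alive, so the doubling branch is the one that fires and only a handful of full scans happen.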
Do we have a reference issue to eventually change this to a WeakSet-like implementation? IIRC I saw that discussed somewhere
I'll open an issue about that as a follow up when this is in
Edit: Lots of false positives on the first run, reran individual ASVs. No perf regressions. ASVs for d4c159b
From #55256 I'm now getting 1.34s with no CoW, 1.63s with.
Thx @rhshadrach appreciate you running the asvs. So we are good here? I want to look into the groupby problem independently to see if there is something we can do
]
nr_of_refs = len(self.referenced_blocks)
if nr_of_refs < self.clear_counter // 2:
    self.clear_counter = max(self.clear_counter // 2, 500)
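For intuition on why this fixes the regression, a back-of-the-envelope simulation added here for illustration (not one of the ASV benchmarks mentioned above): count how many full scans happen while inserting N references that all stay alive.

```python
def count_full_scans(n_insertions: int, start: int = 500) -> int:
    # Model only the bookkeeping: refs counts live references (none die),
    # clear_counter is the backoff threshold, scans counts full O(n) passes.
    clear_counter = start
    refs = 0
    scans = 0
    for _ in range(n_insertions):
        if refs > clear_counter:
            scans += 1  # full scan; nothing is removed since all refs live
            clear_counter = max(clear_counter * 2, refs)
        refs += 1
    return scans

print(count_full_scans(100_000))  # 8 scans vs ~100,000 with clear-on-every-insert
```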
I would suggest a shrink factor of 4 or more. If it's the same as the growth factor, it can create a few corner cases that still have O(n^2) behavior, e.g. the length going back and forth between (500*2^n)-1 and (500*2^n)+1.
Couldn't this happen as well for a shrink factor of 4? And this would only happen if we have this interleaved with in-place modifications, e.g. if force=True, correct? Merging for now, but happy to follow up.
Merging for now, but happy to follow up
+1
For a factor of 4 you would need to change the length between the extremes of the range [500*2^(n-1), 500*2^n], which is at least 500 (and more for larger n); this is much better than triggering the slow operation on just adding and removing 3 references.
Looks good enough to address the regression for now
lgtm
Thanks @phofl
Backport PR #55518: CoW: Use exponential backoff when clearing dead references (#55625) Co-authored-by: Patrick Hoefler <[email protected]>
I'd rather keep @rhshadrach's issue open, we might be able to find a better fix there.
The runtime is back to what he had initially on this branch, e.g. 3ms on Joris' examples and 1s on Richard's issue.