CoW: Use exponential backoff when clearing dead references #55518

Merged: 9 commits, Oct 22, 2023

Changes from 2 commits
1 change: 1 addition & 0 deletions doc/source/whatsnew/v2.1.2.rst
@@ -17,6 +17,7 @@ Fixed regressions
- Fixed regression in :meth:`DataFrame.join` where result has missing values and dtype is arrow backed string (:issue:`55348`)
- Fixed regression in :meth:`DataFrame.resample` which was extrapolating back to ``origin`` when ``origin`` was outside its bounds (:issue:`55064`)
- Fixed regression in :meth:`DataFrame.sort_index` which was not sorting correctly when the index was a sliced :class:`MultiIndex` (:issue:`55379`)
- Fixed performance regression in Copy-on-Write mechanism (:issue:`55256`, :issue:`55245`)
Member commented:

The regression occurs without Copy-on-Write too. I think we should mention that here.

Member Author replied:

Yeah, I struggled a bit with the wording; any suggestions?

lithomas1 (Member) commented on Oct 14, 2023:

Maybe

Fixed performance regression in DataFrame copying, DataFrame iteration, and groupby methods taking user-defined functions.

?

I think it's better to leave the Copy-on-Write part out; I personally couldn't find a way to word it without making it seem like the issue was with Copy-on-Write only.

Member Author replied:

Yeah, I am not really happy with listing methods, since this affects all kinds of things with wide DataFrames.

phofl marked this conversation as resolved.

.. ---------------------------------------------------------------------------
.. _whatsnew_212.bug_fixes:
18 changes: 13 additions & 5 deletions pandas/_libs/internals.pyx
@@ -890,17 +890,25 @@ cdef class BlockValuesRefs:
"""
cdef:
public list referenced_blocks
public int clear_counter

def __cinit__(self, blk: Block | None = None) -> None:
if blk is not None:
self.referenced_blocks = [weakref.ref(blk)]
else:
self.referenced_blocks = []
self.clear_counter = 500 # set reasonably high

def _clear_dead_references(self) -> None:
self.referenced_blocks = [
ref for ref in self.referenced_blocks if ref() is not None
]
def _clear_dead_references(self, force=False) -> None:
jreback marked this conversation as resolved.
+        if force or len(self.referenced_blocks) > self.clear_counter:
+            self.referenced_blocks = [
+                ref for ref in self.referenced_blocks if ref() is not None
+            ]
+            nr_of_refs = len(self.referenced_blocks)
+            if nr_of_refs < self.clear_counter // 2:
+                self.clear_counter = self.clear_counter // 2
Member commented:

Do we think it is needed to also reduce this? Or I assume this is mostly to reduce the counter again in case it has become very large, not necessarily to let it become smaller than 500.

Member Author replied:

Very good point, I intended to add a max here, e.g. max(..., 500)
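
For concreteness, the shrink branch with that floor folded in would look something like the following minimal sketch (illustrative values, not necessarily the merged code):

```python
# Hypothetical state: many references died since the last prune.
clear_counter = 2000
nr_of_refs = 100

if nr_of_refs < clear_counter // 2:
    # Halve the threshold, but never drop below the initial 500.
    clear_counter = max(clear_counter // 2, 500)

print(clear_counter)  # 1000
```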

+            elif nr_of_refs > self.clear_counter:
+                self.clear_counter = min(self.clear_counter * 2, nr_of_refs)
jorisvandenbossche (Member) commented on Oct 14, 2023:

Could also just do x2 instead of the min(..) calculation?

I am wondering: if you repeatedly add a reference (for an object that doesn't go out of scope), doesn't that end up increasing the counter by only 1 every time? For example, you have 501 refs, hitting the threshold; at that moment you clear the refs, but nr_of_refs is still 501 afterwards, and then we set the new threshold to min(500 * 2, 501), i.e. 501?
I must be missing something, because otherwise I don't understand how this fixes the perf issue of the list(df.items()) example.

Member Author replied:

Yeah, you are correct; I already added a max here because I came to the same conclusion.
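
To make the concern concrete, here is a small, hypothetical simulation (not pandas code) of how often the full O(n) scan runs when every reference stays alive, comparing the `min(...)` rule shown in this diff with a `max(...)`/doubling rule; the 500 starting threshold mirrors the value above:

```python
def scans_needed(n_adds, bump):
    """Count full list scans when references keep being added and none ever die."""
    clear_counter, n_refs, scans = 500, 0, 0
    for _ in range(n_adds):
        n_refs += 1
        if n_refs > clear_counter:  # threshold exceeded: prune (finds nothing dead)
            scans += 1
            clear_counter = bump(clear_counter, n_refs)
    return scans

# With min(counter * 2, nr_of_refs) the threshold creeps up by 1, so nearly
# every add past 500 rescans the whole list (quadratic work overall):
print(scans_needed(10_000, lambda c, n: min(c * 2, n)))  # 9500
# With max(counter * 2, nr_of_refs) the threshold doubles, so only a handful
# of scans happen (logarithmic in the number of adds):
print(scans_needed(10_000, lambda c, n: max(c * 2, n)))  # 5
```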


    def add_reference(self, blk: Block) -> None:
        """Adds a new reference to our reference collection.
@@ -934,6 +942,6 @@ cdef class BlockValuesRefs:
        -------
        bool
        """
-        self._clear_dead_references()
+        self._clear_dead_references(force=True)
        # Checking for more references than block pointing to itself
        return len(self.referenced_blocks) > 1
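
Putting the pieces together, here is a plain-Python sketch of the backoff scheme this diff introduces, with the `max(...)` adjustments from the discussion folded in. The class name `_RefTracker` and the use of plain Python instead of Cython are illustrative only; this is not the pandas implementation, and the merged version may differ in detail:

```python
import weakref


class _RefTracker:
    """Illustrative stand-in for BlockValuesRefs: prune dead weakrefs lazily."""

    def __init__(self, obj=None):
        self.referenced_blocks = [weakref.ref(obj)] if obj is not None else []
        self.clear_counter = 500  # prune only once the list outgrows this threshold

    def _clear_dead_references(self, force=False):
        # Skip the O(n) scan unless forced or the list has outgrown the threshold.
        if force or len(self.referenced_blocks) > self.clear_counter:
            self.referenced_blocks = [
                ref for ref in self.referenced_blocks if ref() is not None
            ]
            nr_of_refs = len(self.referenced_blocks)
            if nr_of_refs < self.clear_counter // 2:
                # Mostly dead: shrink the threshold again, keeping the 500 floor.
                self.clear_counter = max(self.clear_counter // 2, 500)
            elif nr_of_refs > self.clear_counter:
                # Mostly alive: back off exponentially so repeated adds stay
                # amortised O(1) instead of rescanning on every call.
                self.clear_counter = max(self.clear_counter * 2, nr_of_refs)

    def add_reference(self, obj):
        self._clear_dead_references()
        self.referenced_blocks.append(weakref.ref(obj))

    def has_reference(self):
        # Correctness requires an exact count here, so force a full prune.
        self._clear_dead_references(force=True)
        return len(self.referenced_blocks) > 1
```

The trade-off is that adding references stays cheap for wide DataFrames (the `list(df.items())` case from the discussion creates one reference per column), while `has_reference`, which needs an exact answer, pays for a full scan.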