PERF: groupby.nunique #56061
Conversation
I've removed the fastpath.
Nice. I think having one path is adequate here.
Thanks @rhshadrach!
I think it would be a good idea to document in the source code the tradeoffs and your design choice. You should not underestimate the performance penalty for people working on very large datasets. The larger the dataset, the more important the performance improvement: 17 ms vs. 26 ms does not make much difference, but 17 hours vs. 26 hours makes a big difference. I agree that it is not a good idea for the general-purpose nunique to impose such a limit. I am not sure what the argument is against another datapath if the fastpath is not an option (besides the maintenance cost, which is an argument I can buy). In any case, having comments in the source code indicating the design choice and explaining the alternative would be great. From the source code, finding this PR and this discussion is quite complex.
Thanks for the feedback here!
In general, I do not think this is sustainable. A lot of design choices are based in part on other parts of the code outside the immediate function, and even on upstream or downstream dependencies. Having this in code seems to me to be at great risk of becoming outdated and incorrect, and would require quite a bit of maintenance to keep up to date. In this particular case, I think a comment like
Maintenance is exactly the reason. In general, having multiple code paths is hard to test and maintain in parallel. It also often leads to very surprising inconsistencies where perhaps both outputs are reasonable, but the fact that they disagree under circumstances that are hard for users to detect has an awful impact. As such, I think multiple paths should only be used when the impact is deemed sufficiently significant, and in my opinion this case doesn't come close. I also don't see a way we can unit test the slowpath without a large amount of data, and as a result a very slow test. But maybe I'm just missing how.
Richard, let me add a bit more to my point. From my experience, comments must document your current implementation. Surely the context might change in the future, but a possibly outdated comment will still explain why, in the context of the change, the decision was made. It conveys the intent of the developer and vastly helps new developers get their hands on the code, take appropriate decisions, and build on past experience. There is no need to update a comment if the code is not updated, and updating a comment when you change the code is not a significant overhead.
IMHO pandas will be used more and more for large datasets, but the test suite is not adapted to finding regressions that affect performance (maybe I am wrong, but I don't remember seeing systematic performance regression tests, otherwise #55606 should have been caught before being merged) or large datasets. For instance, the bug I reported recently (#55845) and this unique vs. nunique issue affect very large datasets only. In my code, which requires days to weeks to run, a 50% slowdown is a show-stopper. I have the same issue with memory usage. Performance tests should be quite easy to create, but will require a lot of resources to run, so a dedicated infrastructure. I would be pleased to continue this discussion and see whether I can help. What would be the best medium to have such discussions?
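For context, pandas does track performance with asv-style benchmarks (classes with a `setup` method and `time_*` methods that asv times). Below is a minimal sketch of what a benchmark for this operation could look like; the class name and data sizes are illustrative, not taken from the actual pandas benchmark suite.

```python
import numpy as np
import pandas as pd

class GroupbyNunique:
    # asv-style benchmark sketch: asv calls setup() once per run,
    # then times each time_* method. Names and sizes here are
    # hypothetical, chosen only to exercise groupby.nunique.
    def setup(self):
        n = 1_000_000
        rng = np.random.default_rng(0)
        self.df = pd.DataFrame({
            "key": rng.integers(0, 10_000, n),  # ~100 rows per group
            "val": rng.integers(0, 1_000, n),
        })

    def time_nunique(self):
        # the operation under test
        self.df.groupby("key")["val"].nunique()
```

Runners like asv detect regressions by comparing the timings of such methods across commits.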
The test suite (in
Sure! #55007
Performance test suites can go a long way toward detecting regressions, but I think your expectations are too high if you think they should catch any performance regression, especially for a package as large and complex as pandas.
Closes #55972: nunique is slower than unique.apply(len) on a groupby

I tried two approaches: using a hash table in Cython, and using the existing groupby internals. The hash table approach here is faster, but is only guaranteed to work when (number of groups) * (number of distinct values) is less than 2**64. It appears to me the slowpath is fast enough and we should prefer it over two branches, but I wanted to get others' thoughts. Also, any ideas to make the hash table approach work with pairs of integers are welcome.
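To illustrate the constraint above, here is a minimal sketch (not the PR's actual Cython implementation) of the combined-key idea: each (group id, value code) pair is encoded as a single integer, so distinct pairs can be counted in one unique/hash pass. The encoding is injective only while `n_groups * n_values` fits in the key's integer range, which is where the 2**64 limit comes from. The array values are made up for the example.

```python
import numpy as np

groups = np.array([0, 0, 0, 1, 1, 2])  # group id per row
codes = np.array([0, 0, 1, 1, 1, 0])   # factorized value code per row
n_values = codes.max() + 1

# Encode each (group, value) pair as one integer key. This is
# collision-free only while n_groups * n_values < 2**63 for int64.
keys = groups.astype(np.int64) * n_values + codes

# One pass over the combined keys counts distinct values per group.
unique_keys = np.unique(keys)
nunique_per_group = np.bincount(unique_keys // n_values)
print(nunique_per_group.tolist())  # [2, 1, 1]
```

When the product overflows, the encoding is no longer unique per pair, so a slower path operating on the pairs directly is needed.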
Timings from #55913:
cc @jorisvandenbossche