PERF: groupby.nunique #56061

Merged (7 commits, Nov 29, 2023)

Conversation

rhshadrach
Member

I tried two approaches: a hash table in Cython, and the existing groupby internals. The hash table approach is faster, but it is only guaranteed to work when (number of groups) * (number of distinct values) is less than 2**64.

It appears to me that the slowpath is fast enough and we should prefer it over maintaining two branches, but I wanted to get others' thoughts. Any ideas for making the hash table approach work with pairs of integers are also welcome.
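To make the 2**64 constraint concrete, here is a NumPy sketch of the packing trick (an illustration only; the PR's actual fastpath was a Cython hash table, for which np.unique stands in here). Each (group id, value code) pair is packed into a single uint64 key, which is exactly why the approach needs (number of groups) * (number of distinct values) < 2**64.

import numpy as np
import pandas as pd

def nunique_packed(values: np.ndarray, group_ids: np.ndarray, n_groups: int) -> np.ndarray:
    # Factorize values into dense non-negative codes (-1 marks missing).
    codes, uniques = pd.factorize(values)
    n_values = len(uniques)
    # The packed key is injective only under this bound.
    assert n_groups * n_values < 2**64, "packed key would overflow"
    mask = codes >= 0  # drop missing values, as nunique(dropna=True) does
    keys = group_ids[mask].astype(np.uint64) * np.uint64(n_values) + codes[mask].astype(np.uint64)
    # Each distinct key corresponds to exactly one distinct (group, value) pair.
    distinct = np.unique(keys)
    # Tally distinct pairs per group.
    counts = np.zeros(n_groups, dtype=np.int64)
    np.add.at(counts, (distinct // np.uint64(n_values)).astype(np.int64), 1)
    return counts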

Timings from #55913:

unique_values = np.arange(30, dtype=np.int64)
data = np.random.choice(unique_values, size=1_000_000)
s = pd.Series(data)

%timeit s.groupby(s).nunique()
85.9 ms ± 358 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) <-- main
17.5 ms ± 154 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)  <-- PR, fastpath
26 ms ± 159 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)  <-- PR, slowpath

%timeit s.groupby(s).unique().apply(len)
34.7 ms ± 211 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

unique_values = np.arange(30000, dtype=np.int64)
data = np.random.choice(unique_values, size=1_000_000)
s = pd.Series(data)

%timeit s.groupby(s).nunique()
167 ms ± 1.68 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) <-- main
43.5 ms ± 636 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)  <-- PR, fastpath
53.4 ms ± 249 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)  <-- PR, slowpath

%timeit s.groupby(s).unique().apply(len)
1.37 s ± 8.76 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
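For intuition about the slowpath numbers, here is a rough NumPy sketch of a sort-based approach (an illustration, not pandas' actual groupby internals): lexsort the (group, value-code) pairs, then count the positions where a new pair starts.

import numpy as np

def nunique_sorted(codes: np.ndarray, group_ids: np.ndarray, n_groups: int) -> np.ndarray:
    # Assumes codes are non-negative (missing values already dropped)
    # and group_ids take values in range(n_groups).
    order = np.lexsort((codes, group_ids))  # sort by group, then by code
    g, c = group_ids[order], codes[order]
    # A (group, code) pair is new wherever it differs from its predecessor.
    new_pair = np.ones(len(g), dtype=bool)
    new_pair[1:] = (g[1:] != g[:-1]) | (c[1:] != c[:-1])
    # Tally the new pairs per group to get nunique for each group.
    counts = np.zeros(n_groups, dtype=np.int64)
    np.add.at(counts, g[new_pair], 1)
    return counts

The O(n log n) sort is the main cost separating this from an O(n) hash-table pass; per the timings above, that gap is modest in practice.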

cc @jorisvandenbossche

@rhshadrach added the Groupby, Performance, and Reduction Operations labels Nov 19, 2023
@rhshadrach rhshadrach requested a review from mroeschke November 22, 2023 03:07
@rhshadrach rhshadrach marked this pull request as ready for review November 26, 2023 20:14
@rhshadrach
Member Author

I've removed the fastpath.

@mroeschke mroeschke added this to the 2.2 milestone Nov 29, 2023
@mroeschke left a comment
Member

Nice. I think having one path is adequate here.

@mroeschke mroeschke merged commit d377cc9 into pandas-dev:main Nov 29, 2023
40 of 42 checks passed
@mroeschke
Member

Thanks @rhshadrach

@rhshadrach rhshadrach deleted the perf_gb_nunique branch November 29, 2023 18:20
@arnaudlegout
Contributor

I think it would be a good idea to document in the source code the tradeoffs and your design choice. You should not underestimate the performance penalty for people working on very large datasets: the larger the dataset, the more important the performance improvement. 17 ms vs. 26 ms does not make much difference, but 17 hours vs. 26 hours makes a big difference. I agree that it is not a good idea for the general-purpose nunique to impose the limit (number of groups) * (number of distinct values) < 2**64, but having another datapath for when you exceed 2**64 does not look that bad (the test to select the datapath is trivial, so it has no significant performance penalty; see the sketch below).
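Concretely, the dispatch being proposed could look something like the following sketch (both helper names are hypothetical; they stand in for the Cython hash-table fastpath and the slowpath that was merged):

def group_nunique(codes, group_ids, n_groups, n_values):
    # Python integers do not overflow, so this guard is exact.
    if n_groups * n_values < 2**64:
        # _nunique_hashtable is hypothetical: the packed-key fastpath.
        return _nunique_hashtable(codes, group_ids, n_groups, n_values)
    # _nunique_sort is hypothetical: the always-correct slowpath.
    return _nunique_sort(codes, group_ids, n_groups)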

I am not sure what the argument is against another datapath if the fastpath alone is not an option (besides the maintenance cost, which is an argument I can buy).

In any case, having comments in the source code indicating the design choice and explaining the alternatives would be great.

Finding this PR and this discussion from the source code alone is quite difficult.

@rhshadrach
Member Author

rhshadrach commented Nov 30, 2023

Thanks for the feedback here!

I think it would be a good idea to document in the source code the tradeoffs and your design choice.

In general I do not think this is sustainable. A lot of design choices are based in part on other parts of the code outside the immediate function, and even on upstream or downstream dependencies. Recording all of this in comments seems to me at great risk of becoming outdated and incorrect, and it would require quite a bit of maintenance to keep up to date.

In this particular case, I think a comment like "a hashmap might be faster, but would need to work on tuples of integers" would be okay.

I am not sure what the argument is against another datapath if the fastpath alone is not an option (besides the maintenance cost, which is an argument I can buy).

Maintenance is exactly the reason. In general, having multiple code paths is hard to test and maintain in parallel. It also often leads to very surprising inconsistencies where perhaps both outputs are reasonable, but the fact that they disagree under circumstances that are hard for users to detect has an awful impact. As such, I think multiple paths should only be used when the impact is deemed sufficiently significant; in my opinion, this case doesn't come close.

I also don't see a way we can unit test the slowpath without a large amount of data, and as a result a very slow test. But maybe I'm just missing how.

@arnaudlegout
Contributor

Richard, let me add a bit more to my point.

From my experience, comments must document the current implementation. Certainly the context might change in the future, but even a comment that later becomes outdated explains why, at the time of the change, the decision was made. It clarifies the developer's intent and makes it much easier for new developers to get their hands on the code, take appropriate decisions, and build on past experience.

There is no need to update a comment if the code is not updated, and updating a comment when you change the code is not a significant overhead.

I also don't see a way we can unit test the slowpath without a large amount of data, and as a result a very slow test. But maybe I'm just missing how.

IMHO pandas will be used more and more on large datasets, but the test suite is not suited to finding regressions that affect performance (maybe I am wrong, but I don't remember seeing systematic performance regression tests; otherwise #55606 should have been caught before being merged) or that only show up on large datasets. For instance, the bug I recently reported in #55845, and this unique vs. nunique issue, only appear on very large datasets.

In my code, which takes days to weeks to run, a 50% slowdown is a showstopper. I have the same issue with memory usage.

Performance tests should be quite easy to create, but they will require a lot of resources to run, and thus a dedicated infrastructure. I would be pleased to continue this discussion and see whether I can help. What would be the best medium for such discussions?

@rhshadrach
Member Author

rhshadrach commented Dec 1, 2023

but the test suite is not suited to finding regressions that affect performance

The test suite (in pandas.tests) is certainly not, but our ASVs are. See here: https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#running-the-performance-test-suite
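For instance, an ASV benchmark covering this case might look like the sketch below (the class name is illustrative; pandas' real groupby benchmarks live in asv_bench/benchmarks/groupby.py):

import numpy as np
import pandas as pd

class GroupbyNunique:
    def setup(self):
        # Mirror the 30,000-unique-values case timed above.
        unique_values = np.arange(30_000, dtype=np.int64)
        data = np.random.choice(unique_values, size=1_000_000)
        self.s = pd.Series(data)

    def time_nunique(self):
        self.s.groupby(self.s).nunique()

Per the linked docs, a branch can then be compared against main with something like: asv continuous -f 1.1 upstream/main HEAD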

I would be pleased to continue this discussion and see whether I can help. What would be the best medium for such discussions?

Sure! #55007

I don't remember seeing systematic performance regression tests; otherwise #55606 should have been caught before being merged ... For instance, the bug I recently reported in #55845, and this unique vs. nunique issue, only appear on very large datasets.

Performance test suites can go a long way toward helping detect regressions, but I think your expectations are too high if you think they should catch every performance regression, especially for a package as large and complex as pandas.

Successfully merging this pull request may close these issues.

PERF: nunique is slower than unique.apply(len) on a groupby