BUG/ENH: Use pyarrow grouped aggregation functions for pyarrow-backed groupby ops #55234

Closed
lukemanley wants to merge 6 commits into main from arrow-groupby

Conversation

lukemanley
Member

Use pyarrow's TableGroupBy functionality for pyarrow-backed groupby aggregations.

This has a few benefits over the current approach of round-tripping through non-arrow dtypes:

  1. proper support for groupby operations with pyarrow decimal types (see the sketch below)
  2. removes value-dependent behavior (e.g. BUG: groupby.var() does not return arrow types with arrow backed series as input #54627 (comment))
  3. aligns grouped aggregation behavior with non-grouped aggregation behavior (existing implementation has a number of subtle differences)
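
For benefit (1), here is a minimal standalone sketch of the pyarrow path this routes through (illustrative pyarrow usage, not the PR's internal code):

import decimal
import pyarrow as pa

# Grouped aggregation computed directly by pyarrow's TableGroupBy, including
# decimal types that the current non-arrow round-trip cannot handle.
table = pa.table(
    {
        "key": ["a", "a", "b"],
        "val": pa.array(
            [decimal.Decimal("1.500"), decimal.Decimal("2.250"), decimal.Decimal("3.000")],
            type=pa.decimal128(25, 3),
        ),
    }
)
result = table.group_by("key").aggregate([("val", "sum")])
# result contains a "val_sum" column (still decimal) alongside the "key" column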

Performance impact is mixed. I think the consistency benefits mentioned above outweigh the performance impact here. If others disagree, we could continue using the pandas implementation for specific aggregations.

Note: These timings are from a slow laptop. It would be great if someone could run this ASV to confirm (@rhshadrach possibly?)

asv continuous -f 1.1 upstream/main arrow-groupby -b groupby.GroupByAggregateArrowDtypes

           before            after    ratio
+      32.3±0.9ms         54.8±3ms     1.70  groupby.GroupByAggregateArrowDtypes.time_frame_agg(int32[pyarrow], 'all')
+        35.3±1ms         51.4±2ms     1.46  groupby.GroupByAggregateArrowDtypes.time_frame_agg(duration[ms][pyarrow], 'any')
+        36.1±1ms       50.2±0.9ms     1.39  groupby.GroupByAggregateArrowDtypes.time_frame_agg(duration[ms][pyarrow], 'all')
+        39.8±6ms         54.6±2ms     1.37  groupby.GroupByAggregateArrowDtypes.time_frame_agg(int32[pyarrow], 'any')
+        42.6±2ms         57.1±3ms     1.34  groupby.GroupByAggregateArrowDtypes.time_frame_agg(bool[pyarrow], 'min')
+      42.5±0.9ms         54.2±6ms     1.28  groupby.GroupByAggregateArrowDtypes.time_frame_agg(timestamp[s, tz=UTC][pyarrow], 'min')
+        40.7±3ms         51.4±3ms     1.26  groupby.GroupByAggregateArrowDtypes.time_frame_agg(duration[ms][pyarrow], 'max')
+        40.9±2ms         50.8±4ms     1.24  groupby.GroupByAggregateArrowDtypes.time_frame_agg(timestamp[s, tz=UTC][pyarrow], 'max')
+        46.2±3ms         56.4±2ms     1.22  groupby.GroupByAggregateArrowDtypes.time_frame_agg(bool[pyarrow], 'std')
+        41.4±2ms         49.3±1ms     1.19  groupby.GroupByAggregateArrowDtypes.time_frame_agg(int32[pyarrow], 'std')
+        43.3±2ms         50.7±1ms     1.17  groupby.GroupByAggregateArrowDtypes.time_frame_agg(duration[ms][pyarrow], 'min')
-        427±80ms          128±4ms     0.30  groupby.GroupByAggregateArrowDtypes.time_frame_agg(decimal128(25, 3)[pyarrow], 'prod')
-        395±40ms         91.3±6ms     0.23  groupby.GroupByAggregateArrowDtypes.time_frame_agg(string[pyarrow], 'max')
-         401±9ms         89.3±3ms     0.22  groupby.GroupByAggregateArrowDtypes.time_frame_agg(string[pyarrow], 'min')
-        429±50ms         89.0±8ms     0.21  groupby.GroupByAggregateArrowDtypes.time_frame_agg(decimal128(25, 3)[pyarrow], 'max')
-        449±80ms         89.1±2ms     0.20  groupby.GroupByAggregateArrowDtypes.time_frame_agg(decimal128(25, 3)[pyarrow], 'min')
-        384±60ms         56.1±2ms     0.15  groupby.GroupByAggregateArrowDtypes.time_frame_agg(decimal128(25, 3)[pyarrow], 'sum')
-        442±80ms         56.6±2ms     0.13  groupby.GroupByAggregateArrowDtypes.time_frame_agg(decimal128(25, 3)[pyarrow], 'mean')

Longer term, there are potentially big performance gains if all arrow-backed aggregations can be passed to pyarrow at once. At the moment, we're iterating column by column. Here is a small example showing the impact:

import pandas as pd
import numpy as np 
import pyarrow as pa

N = 100_000
columns = list("abcdefghij")

data = np.random.randn(N, len(columns))
df = pd.DataFrame(data, columns=columns, dtype="float64[pyarrow]")
df["key"] = np.random.randint(0, 100, N)
table = pa.table(df)

# column-by-column:
%timeit [table.group_by("key").aggregate([(col, "sum")]) for col in columns]

# all at once:
%timeit table.group_by("key").aggregate([(col, "sum") for col in columns])

# 24.7 ms ± 1.17 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)  <- column-by-column
# 4.54 ms ± 152 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)  <- all at once

@lukemanley lukemanley added Bug Groupby Arrow pyarrow functionality labels Sep 22, 2023
@lukemanley lukemanley added this to the 2.2 milestone Sep 22, 2023
Member

@rhshadrach rhshadrach left a comment

generally looks good, will run ASVs tomorrow.

pa.Scalar or pa.ChunkedArray
"""
pa_type = self._pa_array.type
cast_kwargs = {"safe": False}
Member

Why not just pass safe=False?

Member

I see - more or less a carryover from the previous implementation. Seems a little odd to do in the new one.

Member Author

yep, it was carryover - I've updated it.
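
For reference, the difference being discussed, as a hypothetical sketch (the actual cast call in the PR may look different):

import pyarrow as pa
import pyarrow.compute as pc

arr = pa.chunked_array([[1.5, 2.5, 3.5]])

# carryover style: build a kwargs dict and unpack it
cast_kwargs = {"safe": False}
res = pc.cast(arr, pa.int64(), **cast_kwargs)

# suggested style: pass the keyword directly
res = pc.cast(arr, pa.int64(), safe=False)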

@rhshadrach
Member

I'm seeing a bit more drastic performance regression:

| Change   | Before [2d168637] <perf_indexing_time_loc_dups>   | After [87581d4f] <arrow-groupby>   |   Ratio | Benchmark (Parameter)                                                                     |
|----------|---------------------------------------------------|------------------------------------|---------|-------------------------------------------------------------------------------------------|
| +        | 8.53±0.3ms                                        | 27.8±3ms                           |    3.26 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(duration[ms][pyarrow], 'all')          |
| +        | 9.35±0.2ms                                        | 26.7±3ms                           |    2.85 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(duration[ms][pyarrow], 'any')          |
| +        | 8.64±0.2ms                                        | 24.7±2ms                           |    2.85 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(timestamp[s, tz=UTC][pyarrow], 'min')  |
| +        | 8.81±0.2ms                                        | 25.1±2ms                           |    2.84 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(double[pyarrow], 'all')                |
| +        | 7.95±0.02ms                                       | 22.6±4ms                           |    2.84 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(int32[pyarrow], 'all')                 |
| +        | 8.67±0.2ms                                        | 24.6±2ms                           |    2.83 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(timestamp[s, tz=UTC][pyarrow], 'max')  |
| +        | 8.79±0.2ms                                        | 24.5±2ms                           |    2.79 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(int32[pyarrow], 'max')                 |
| +        | 8.97±0.1ms                                        | 24.9±2ms                           |    2.77 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(int32[pyarrow], 'any')                 |
| +        | 8.09±0.2ms                                        | 22.3±0.3ms                         |    2.76 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(double[pyarrow], 'max')                |
| +        | 11.1±0.06ms                                       | 29.5±2ms                           |    2.66 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(bool[pyarrow], 'max')                  |
| +        | 9.43±0.2ms                                        | 25.1±2ms                           |    2.66 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(duration[ms][pyarrow], 'max')          |
| +        | 10.7±0.2ms                                        | 28.1±4ms                           |    2.62 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(bool[pyarrow], 'min')                  |
| +        | 9.49±0.2ms                                        | 24.4±2ms                           |    2.57 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(duration[ms][pyarrow], 'min')          |
| +        | 8.18±0.04ms                                       | 20.9±1ms                           |    2.55 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(double[pyarrow], 'min')                |
| +        | 8.25±0.2ms                                        | 20.8±0.3ms                         |    2.52 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(double[pyarrow], 'prod')               |
| +        | 9.10±0.1ms                                        | 22.7±2ms                           |    2.5  | groupby.GroupByAggregateArrowDtypes.time_frame_agg(int32[pyarrow], 'prod')                |
| +        | 10.4±0.3ms                                        | 25.1±2ms                           |    2.41 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(double[pyarrow], 'std')                |
| +        | 12.5±0.1ms                                        | 29.9±3ms                           |    2.4  | groupby.GroupByAggregateArrowDtypes.time_frame_agg(duration[ms][pyarrow], 'std')          |
| +        | 10.2±0.2ms                                        | 24.0±2ms                           |    2.34 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(int32[pyarrow], 'std')                 |
| +        | 8.38±0.02ms                                       | 19.5±2ms                           |    2.33 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(int32[pyarrow], 'min')                 |
| +        | 9.63±0.2ms                                        | 22.4±2ms                           |    2.32 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(double[pyarrow], 'sum')                |
| +        | 10.6±0.3ms                                        | 24.1±2ms                           |    2.26 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(bool[pyarrow], 'prod')                 |
| +        | 9.48±0.09ms                                       | 21.3±2ms                           |    2.25 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(double[pyarrow], 'mean')               |
| +        | 10.6±0.2ms                                        | 23.6±2ms                           |    2.24 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(duration[ms][pyarrow], 'sum')          |
| +        | 11.9±0.07ms                                       | 26.4±2ms                           |    2.22 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(bool[pyarrow], 'std')                  |
| +        | 9.47±0.1ms                                        | 20.9±3ms                           |    2.21 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(int32[pyarrow], 'mean')                |
| +        | 12.1±0.3ms                                        | 26.0±2ms                           |    2.15 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(timestamp[s, tz=UTC][pyarrow], 'std')  |
| +        | 10.7±0.06ms                                       | 22.9±2ms                           |    2.14 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(int32[pyarrow], 'sum')                 |
| +        | 10.0±0.05ms                                       | 21.4±1ms                           |    2.13 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(double[pyarrow], 'any')                |
| +        | 11.1±0.09ms                                       | 22.9±1ms                           |    2.06 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(bool[pyarrow], 'mean')                 |
| +        | 11.7±0.6ms                                        | 23.5±2ms                           |    2.01 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(bool[pyarrow], 'sum')                  |
| +        | 14.7±0.1ms                                        | 29.2±1ms                           |    1.99 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(bool[pyarrow], 'all')                  |
| +        | 11.5±0.3ms                                        | 21.2±3ms                           |    1.84 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(timestamp[s, tz=UTC][pyarrow], 'mean') |
| +        | 12.6±0.3ms                                        | 22.1±1ms                           |    1.76 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(duration[ms][pyarrow], 'mean')         |
| +        | 14.7±0.2ms                                        | 25.4±1ms                           |    1.73 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(bool[pyarrow], 'any')                  |
| -        | 122±0.7ms                                         | 85.6±20ms                          |    0.7  | groupby.GroupByAggregateArrowDtypes.time_frame_agg(decimal128(25, 3)[pyarrow], 'prod')    |
| -        | 97.9±2ms                                          | 52.2±5ms                           |    0.53 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(decimal128(25, 3)[pyarrow], 'min')     |
| -        | 95.9±0.4ms                                        | 48.8±0.5ms                         |    0.51 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(decimal128(25, 3)[pyarrow], 'max')     |
| -        | 79.4±1ms                                          | 39.1±2ms                           |    0.49 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(decimal128(25, 3)[pyarrow], 'sum')     |
| -        | 112±4ms                                           | 48.1±4ms                           |    0.43 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(string[pyarrow], 'max')                |
| -        | 87.7±1ms                                          | 37.0±2ms                           |    0.42 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(decimal128(25, 3)[pyarrow], 'mean')    |
| -        | 112±4ms                                           | 46.3±6ms                           |    0.41 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(string[pyarrow], 'min')                |

@lukemanley
Member Author

I'm seeing a bit more drastic performance regression

hmmm, that's too bad.

The test added here (test_groupby_reductions) compares the behavior of grouped and non-grouped aggregations and expects the same result for a given group. If we run that test on main, we see a number of failures, which are the subtle differences I was referring to. Most are different result dtypes and/or small floating-point differences. Personally, I think these are bugs worth fixing, but I agree the performance is unfortunate.

Here are some of the grouped reductions that fail that test on main:

FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[uint8-sum] - AssertionError: Attributes of Series are different
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[uint8-prod] - AssertionError: Attributes of Series are different
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[uint16-sum] - AssertionError: Attributes of Series are different
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[uint16-prod] - AssertionError: Attributes of Series are different
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[uint32-sum] - AssertionError: Attributes of Series are different
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[uint32-prod] - AssertionError: Attributes of Series are different
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[int8-sum] - AssertionError: Attributes of Series are different
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[int8-prod] - AssertionError: Attributes of Series are different
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[int16-sum] - AssertionError: Attributes of Series are different
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[int16-prod] - AssertionError: Attributes of Series are different
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[int32-sum] - AssertionError: Attributes of Series are different
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[int32-prod] - AssertionError: Attributes of Series are different
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[float-sum] - AssertionError: Attributes of Series are different
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[float-prod] - AssertionError: Attributes of Series are different
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[float-mean] - AssertionError: Attributes of Series are different
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[float-sem] - AssertionError: Attributes of Series are different
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[float-std] - AssertionError: Attributes of Series are different
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[float-var] - AssertionError: Attributes of Series are different
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[decimal128(7, 3)-any] - NotImplementedError
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[decimal128(7, 3)-all] - NotImplementedError
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[decimal128(7, 3)-sem] - NotImplementedError
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[decimal128(7, 3)-std] - NotImplementedError
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[decimal128(7, 3)-var] - AssertionError: Attributes of Series are different
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[duration[s]-any] - AssertionError: Attributes of Series are different
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[duration[s]-all] - AssertionError: Attributes of Series are different
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[duration[ms]-any] - AssertionError: Attributes of Series are different
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[duration[ms]-all] - AssertionError: Attributes of Series are different
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[duration[us]-any] - AssertionError: Attributes of Series are different
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[duration[us]-all] - AssertionError: Attributes of Series are different
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[duration[us]-sem] - AssertionError: Series are different
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[duration[ns]-any] - AssertionError: Attributes of Series are different
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[duration[ns]-all] - AssertionError: Attributes of Series are different
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[duration[ns]-sem] - AssertionError: Series are different
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[bool-sum] - AssertionError: Attributes of Series are different
FAILED pandas/tests/extension/test_arrow.py::test_groupby_reductions[bool-prod] - AssertionError: Attributes of Series are different
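
For context, a rough sketch of the property test_groupby_reductions checks (illustrative only, not the actual test code; the real test also compares result dtypes):

import pandas as pd

ser = pd.Series([1, 2, 3, 4], dtype="int8[pyarrow]")
keys = pd.Series(["a", "a", "b", "b"])

grouped = ser.groupby(keys).sum()  # grouped aggregation

for key, sub in ser.groupby(keys):
    # the non-grouped reduction over each group's values should match the
    # grouped result for that group (on main, dtypes and some float results differ)
    assert grouped[key] == sub.sum()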


res = (
    pa.Table.from_pydict({"id": ids, "val": arr})
    .group_by("id")
Member

IIUC this is re-calculating the codes we already have in our GroupBy object. I see the appeal, but am wary of this pattern. I think the long-term solution is to get pyarrow to expose something to which we can pass the already-computed codes?

Member Author

Yep, that's fair. Also, it would be nice to pass all aggregations at once to pyarrow - see the last comment in the OP.

Member

Longer term, there are potentially big performance gains if all arrow-backed aggregations can be passed to pyarrow at once. At the moment, we're iterating column by column. Here is a small example showing the impact:

Any idea what is driving the perf impact? Does the pyarrow version do something in parallel?

Member Author

I can't tell from reading the docs whether anything is done in parallel. It could simply be (as you've pointed out) that pyarrow computes the group codes once when given a single batch of aggregations and recomputes them when called iteratively.

@lukemanley
Member Author

cc @rhshadrach, @jbrockmendel, @mroeschke

Should I close this given the performance concerns, or do you think we should retain some of it, e.g. use this only for pyarrow decimal types?

@rhshadrach
Member

I'm going to rerun my benchmarks in the next day or so to be sure. Do we have a good understanding of where the perf regressions are specifically coming from?

@lukemanley
Member Author

I'm going to rerun my benchmarks in the next day or so to be sure. Do we have a good understanding of where the perf regressions are specifically coming from?

Here are two suspicions:

  1. @jbrockmendel pointed out that pyarrow is re-calculating the groupby codes for every column since we're iterating column by column. It seems this could be solved by either passing all aggregations to pyarrow in one call or seeing whether pyarrow could expose an interface for "pre-computed codes".

  2. The masked numpy dtypes have value-dependent behavior that keeps the result dtype as small as possible, whereas pyarrow has consistent, non-value-dependent output types. The larger integer output types may have a performance impact.

This is the behavior on this branch (not main) - note the output dtype:

In [1]: import pandas as pd

In [2]: pd.Series([1, 2], index=["A", "A"], dtype="Int8").groupby(level=0).sum()
Out[2]: 
A    3
dtype: Int8

In [3]: pd.Series([1, 2], index=["A", "A"], dtype="int8[pyarrow]").groupby(level=0).sum()
Out[3]: 
A    3
dtype: int64[pyarrow]

This is consistent with pyarrow non-grouped behavior:

In [4]: import pyarrow as pa

In [5]: import pyarrow.compute as pc

In [6]: pc.sum(pa.array([1, 2], type=pa.int8()))
Out[6]: <pyarrow.Int64Scalar: 3>

@jbrockmendel
Member

It seems this could be solved by either passing all aggregations to pyarrow in one call

#53158 would get us part of the way there for that. Getting the rest of the way there would require more gymnastics than I think we should do.

The masked numpy dtypes have value-dependent behavior

Might be worth discussing separately, but my knee-jerk reaction is to avoid that value-dependent behavior.

The main perf upside seems to be for decimal, which is a pretty low priority. I'd like to look for ways to address the bugs identified here without calling the pa.Table methods, and longer run try to get pyarrow to expose something that lets us pass the codes.


return data_to_reduce

def _maybe_cast_reduction_result(
Member

IIUC at least some of the bugfix comes from adding this casting to _groupby_op. Could we split off from this PR one that implements _values_for_reduction and _maybe_cast_reduction_result?

@rhshadrach
Member

I'm going to rerun my benchmarks in the next day or so to be sure.

Same results approximately.

| Change   | Before [824a2738] <main>   | After [adb0d321] <arrow-groupby>   |   Ratio | Benchmark (Parameter)                                                                     |
|----------|----------------------------|------------------------------------|---------|-------------------------------------------------------------------------------------------|
| +        | 9.03±0.2ms                 | 29.2±3ms                           |    3.24 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(duration[ms][pyarrow], 'any')          |
| +        | 9.28±0.4ms                 | 28.8±1ms                           |    3.1  | groupby.GroupByAggregateArrowDtypes.time_frame_agg(duration[ms][pyarrow], 'max')          |
| +        | 8.83±0.1ms                 | 26.1±0.7ms                         |    2.95 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(timestamp[s, tz=UTC][pyarrow], 'min')  |
| +        | 8.31±0.07ms                | 24.4±2ms                           |    2.93 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(int32[pyarrow], 'min')                 |
| +        | 7.78±0.1ms                 | 22.7±0.2ms                         |    2.92 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(int32[pyarrow], 'all')                 |
| +        | 8.86±0.09ms                | 24.8±1ms                           |    2.8  | groupby.GroupByAggregateArrowDtypes.time_frame_agg(timestamp[s, tz=UTC][pyarrow], 'max')  |
| +        | 9.27±0.3ms                 | 25.9±3ms                           |    2.79 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(duration[ms][pyarrow], 'min')          |
| +        | 9.08±0.03ms                | 25.0±2ms                           |    2.75 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(int32[pyarrow], 'any')                 |
| +        | 8.13±0.09ms                | 22.3±0.1ms                         |    2.74 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(double[pyarrow], 'min')                |
| +        | 8.31±0.04ms                | 22.2±0.2ms                         |    2.67 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(double[pyarrow], 'max')                |
| +        | 8.66±0.2ms                 | 23.1±2ms                           |    2.67 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(duration[ms][pyarrow], 'all')          |
| +        | 10.9±0.2ms                 | 29.1±3ms                           |    2.66 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(bool[pyarrow], 'max')                  |
| +        | 8.21±0.1ms                 | 21.2±0.8ms                         |    2.58 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(double[pyarrow], 'prod')               |
| +        | 8.82±0.2ms                 | 22.8±2ms                           |    2.58 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(int32[pyarrow], 'max')                 |
| +        | 10.4±0.4ms                 | 26.0±3ms                           |    2.51 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(duration[ms][pyarrow], 'sum')          |
| +        | 8.81±0.08ms                | 21.7±0.2ms                         |    2.46 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(double[pyarrow], 'all')                |
| +        | 10.9±0.06ms                | 26.6±5ms                           |    2.44 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(bool[pyarrow], 'min')                  |
| +        | 9.60±0.4ms                 | 22.3±2ms                           |    2.32 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(int32[pyarrow], 'mean')                |
| +        | 10.2±0.1ms                 | 23.3±0.2ms                         |    2.28 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(double[pyarrow], 'std')                |
| +        | 9.94±0.1ms                 | 22.2±1ms                           |    2.23 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(double[pyarrow], 'any')                |
| +        | 11.9±0.03ms                | 26.2±3ms                           |    2.19 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(bool[pyarrow], 'std')                  |
| +        | 12.4±0.1ms                 | 27.2±2ms                           |    2.19 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(timestamp[s, tz=UTC][pyarrow], 'std')  |
| +        | 9.52±0.2ms                 | 20.7±0.2ms                         |    2.18 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(double[pyarrow], 'mean')               |
| +        | 9.79±0.03ms                | 21.1±2ms                           |    2.16 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(double[pyarrow], 'sum')                |
| +        | 10.2±0.2ms                 | 22.1±0.2ms                         |    2.16 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(int32[pyarrow], 'std')                 |
| +        | 12.3±0.5ms                 | 26.4±2ms                           |    2.15 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(duration[ms][pyarrow], 'mean')         |
| +        | 10.2±1ms                   | 21.2±0.9ms                         |    2.09 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(int32[pyarrow], 'prod')                |
| +        | 10.7±0.09ms                | 22.4±2ms                           |    2.09 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(int32[pyarrow], 'sum')                 |
| +        | 11.0±0.2ms                 | 22.8±1ms                           |    2.06 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(bool[pyarrow], 'mean')                 |
| +        | 12.6±0.4ms                 | 25.0±0.2ms                         |    1.99 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(duration[ms][pyarrow], 'std')          |
| +        | 11.6±0.2ms                 | 22.8±2ms                           |    1.97 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(timestamp[s, tz=UTC][pyarrow], 'mean') |
| +        | 10.5±0.1ms                 | 20.7±1ms                           |    1.96 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(bool[pyarrow], 'prod')                 |
| +        | 11.8±0.06ms                | 21.3±0.5ms                         |    1.81 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(bool[pyarrow], 'sum')                  |
| +        | 14.6±0.2ms                 | 24.3±2ms                           |    1.67 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(bool[pyarrow], 'any')                  |
| +        | 14.9±0.1ms                 | 24.3±3ms                           |    1.63 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(bool[pyarrow], 'all')                  |
| -        | 122±2ms                    | 85.1±10ms                          |    0.7  | groupby.GroupByAggregateArrowDtypes.time_frame_agg(decimal128(25, 3)[pyarrow], 'prod')    |
| -        | 94.6±0.6ms                 | 48.8±0.4ms                         |    0.52 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(decimal128(25, 3)[pyarrow], 'max')     |
| -        | 93.4±1ms                   | 48.5±0.3ms                         |    0.52 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(decimal128(25, 3)[pyarrow], 'min')     |
| -        | 80.3±0.9ms                 | 36.7±0.4ms                         |    0.46 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(decimal128(25, 3)[pyarrow], 'sum')     |
| -        | 106±1ms                    | 45.6±5ms                           |    0.43 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(string[pyarrow], 'max')                |
| -        | 88.1±5ms                   | 37.0±0.09ms                        |    0.42 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(decimal128(25, 3)[pyarrow], 'mean')    |
| -        | 108±2ms                    | 45.6±5ms                           |    0.42 | groupby.GroupByAggregateArrowDtypes.time_frame_agg(string[pyarrow], 'min')                |

@lukemanley
Member Author

Thanks for the reviews. I'll close this and maybe split off some of the smaller pieces.

@lukemanley lukemanley closed this Oct 1, 2023
@lukemanley lukemanley deleted the arrow-groupby branch November 16, 2023 12:56
Successfully merging this pull request may close these issues.

BUG: groupby.var() does not return arrow types with arrow backed series as input.