
BUG: read_parquet doesn't work for ArrowDtype dictionary types. #54392

Closed
3 tasks done
randolf-scholz opened this issue Aug 3, 2023 · 4 comments
Labels
Arrow pyarrow functionality Bug IO Parquet parquet, feather Upstream issue Issue related to pandas dependency

Comments

@randolf-scholz
Contributor

Pandas version checks

  • I have checked that this issue has not already been reported.

  • I have confirmed this bug exists on the latest version of pandas.

  • I have confirmed this bug exists on the main branch of pandas.

Reproducible Example

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

pa_dict = pa.dictionary(pa.int32(), pa.string())
pd_dict = pd.ArrowDtype(pa_dict)

# create table in pyarrow
data = {"foo": [1, 2], "bar": ["a", "b"]}
schema = pa.schema({"foo": pa.int32(), "bar": pa_dict})
table = pa.table(data, schema=schema)

# serializing with pyarrow and loading into pandas works
pq.write_table(table, "demo.parquet")
df = pd.read_parquet("demo.parquet", dtype_backend="pyarrow")

assert df.bar.dtype == pd_dict  # ✔ the dtype is dictionary[int32,string]

# saving and re-loading with pandas doesn't
df.to_parquet("demo2.parquet")
pd.read_parquet("demo2.parquet", dtype_backend="pyarrow")  # raises ValueError

Issue Description

When attempting to load a DataFrame that was serialized to a parquet file with pandas, the error

ValueError: format number 1 of "dictionary<values=string, indices=int32, ordered=0>[pyarrow]" is not recognized

is raised if the table contains a column of type pd.ArrowDtype(pa.dictionary(pa.int32(), pa.string())).

Surprisingly, the same error does not occur when reading the same table if it was serialized to parquet with pyarrow directly.

Expected Behavior

It should load the DataFrame.

Installed Versions

INSTALLED VERSIONS
------------------
commit           : eaddc1d4815b96f2cdee038c6e26fe7fc9084f13
python           : 3.10.11.final.0
python-bits      : 64
OS               : Linux
OS-release       : 5.19.0-50-generic
Version          : #50-Ubuntu SMP PREEMPT_DYNAMIC Mon Jul 10 18:24:29 UTC 2023
machine          : x86_64
processor        : x86_64
byteorder        : little
LC_ALL           : None
LANG             : en_US.UTF-8
LOCALE           : en_US.UTF-8

pandas           : 2.1.0.dev0+1378.geaddc1d481
numpy            : 1.24.4
pytz             : 2023.3
dateutil         : 2.8.2
setuptools       : 65.5.0
pip              : 23.2.1
Cython           : 0.29.33
pytest           : 7.4.0
hypothesis       : 6.82.0
sphinx           : 6.2.1
blosc            : 1.11.1
feather          : None
xlsxwriter       : 3.1.2
lxml.etree       : 4.9.3
html5lib         : 1.1
pymysql          : 1.4.6
psycopg2         : 2.9.6
jinja2           : 3.1.2
IPython          : 8.14.0
pandas_datareader: None
bs4              : 4.12.2
bottleneck       : 1.3.7
brotli           : 
fastparquet      : 2023.7.0
fsspec           : 2023.6.0
gcsfs            : 2023.6.0
matplotlib       : 3.7.2
numba            : 0.57.1
numexpr          : 2.8.4
odfpy            : None
openpyxl         : 3.1.2
pandas_gbq       : None
pyarrow          : 12.0.1
pyreadstat       : 1.2.2
pyxlsb           : 1.0.10
s3fs             : 2023.6.0
scipy            : 1.11.1
snappy           : 
sqlalchemy       : 2.0.19
tables           : 3.8.0
tabulate         : 0.9.0
xarray           : 2023.7.0
xlrd             : 2.0.1
zstandard        : 0.21.0
tzdata           : 2023.3
qtpy             : None
pyqt5            : None
@randolf-scholz randolf-scholz added Bug Needs Triage Issue that has not been reviewed by a pandas team member labels Aug 3, 2023
@mroeschke
Member

This looks like an upstream issue in pyarrow. We pass types_mapper=pd.ArrowDtype to the read, but it looks like pyarrow tries inferring the dtype from the stored metadata string first, which causes the error:

File /opt/miniconda3/envs/pandas-dev/lib/python3.10/site-packages/pyarrow/pandas_compat.py:812, in table_to_blockmanager(options, table, categories, ignore_metadata, types_mapper)
    809     table = _add_any_metadata(table, pandas_metadata)
    810     table, index = _reconstruct_index(table, index_descriptors,
    811                                       all_columns, types_mapper)
--> 812     ext_columns_dtypes = _get_extension_dtypes(
    813         table, all_columns, types_mapper)
    814 else:
    815     index = _pandas_api.pd.RangeIndex(table.num_rows)

File /opt/miniconda3/envs/pandas-dev/lib/python3.10/site-packages/pyarrow/pandas_compat.py:865, in _get_extension_dtypes(table, columns_metadata, types_mapper)
    860 dtype = col_meta['numpy_type']
    862 if dtype not in _pandas_supported_numpy_types:
    863     # pandas_dtype is expensive, so avoid doing this for types
    864     # that are certainly numpy dtypes
--> 865     pandas_dtype = _pandas_api.pandas_dtype(dtype)
    866     if isinstance(pandas_dtype, _pandas_api.extension_dtype):
    867         if hasattr(pandas_dtype, "__from_arrow__"):

File /opt/miniconda3/envs/pandas-dev/lib/python3.10/site-packages/pyarrow/pandas-shim.pxi:136, in pyarrow.lib._PandasAPIShim.pandas_dtype()

File /opt/miniconda3/envs/pandas-dev/lib/python3.10/site-packages/pyarrow/pandas-shim.pxi:139, in pyarrow.lib._PandasAPIShim.pandas_dtype()

File ~/pandas-mroeschke/pandas/core/dtypes/common.py:1627, in pandas_dtype(dtype)
   1622     with warnings.catch_warnings():
   1623         # GH#51523 - Series.astype(np.integer) doesn't show
   1624         # numpy deprecation warning of np.integer
   1625         # Hence enabling DeprecationWarning
   1626         warnings.simplefilter("always", DeprecationWarning)
-> 1627         npdtype = np.dtype(dtype)
   1628 except SyntaxError as err:
   1629     # np.dtype uses `eval` which can raise SyntaxError
   1630     raise TypeError(f"data type '{dtype}' not understood") from err

@mroeschke mroeschke added IO Parquet parquet, feather Upstream issue Issue related to pandas dependency Arrow pyarrow functionality and removed Needs Triage Issue that has not been reviewed by a pandas team member labels Aug 3, 2023
@HariharPadhi1412

import pandas as pd
import pyarrow as pa

pa_dict = pa.dictionary(pa.int32(), pa.string())
pd_dict = pd.ArrowDtype(pa_dict)

# Save the DataFrame to Parquet
df.to_parquet("demo2.parquet")

# Read the Parquet file
df_loaded = pd.read_parquet("demo2.parquet", dtype_backend="pyarrow")

# Manually decode the dictionary column
df_loaded['bar'] = df_loaded['bar'].apply(lambda x: pa_dict.decode(x))

assert df_loaded.bar.dtype == pd_dict  # Confirm the dtype after decoding

@takacsd

takacsd commented Sep 11, 2023

I think it is the same underlying problem that we ran into here: #53011
Basically, pandas saves the type info in the metadata as a string, dictionary<values=string, indices=int32, ordered=0>[pyarrow], but it cannot parse that string back. If you drop the metadata (or never save it, as when you create the parquet file with pyarrow directly), you can load the file, because pyarrow can make sense of the Arrow schema saved in the file.

@mroeschke
Member

Closing as a duplicate of #53011
