chunks (#56)
Signed-off-by: RichardScottOZ <[email protected]>
RichardScottOZ authored Aug 4, 2024
1 parent 11dc7bf commit f06df9f
Showing 1 changed file with 2 additions and 2 deletions.
README.md
@@ -71,7 +71,7 @@ That's it!
## Why does this work?

Underneath Xarray, Dask, and Pandas, there are NumPy arrays. These are paged in
-chuncks and represented contiguously in memory. It is only a matter of metadata
+chunks and represented contiguously in memory. It is only a matter of metadata
that breaks them up into ndarrays. `to_dataframe()`
just changes this metadata (via a `ravel()`/`reshape()`), back into a column
amenable to a DataFrame.
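The paragraph patched above claims that `to_dataframe()` is essentially a metadata operation: the underlying buffer is untouched, and `ravel()`/`reshape()` only rewrite shape and stride information. A minimal NumPy sketch of that claim (the array values are illustrative, not from the project):

```python
import numpy as np

# A small 2-D array standing in for one raster chunk.
arr = np.arange(12.0).reshape(3, 4)

# ravel() on a C-contiguous array returns a *view*: the memory buffer is
# shared, and only the shape/strides metadata changes.
col = arr.ravel()
col[0] = 99.0

# The write through the 1-D "column" view is visible in the 2-D array,
# showing the two share one buffer.
print(arr[0, 0])
```

Because no data is copied, flattening an n-dimensional chunk into a DataFrame column can be cheap regardless of chunk size.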
@@ -100,7 +100,7 @@ Xarray Datasets. This approach is being pursued
Deeper still: I was thinking we could make
a [virtual](https://fsspec.github.io/kerchunk/)
filesystem for parquet that would internally map to Zarr. Raster-backed virtual
-parquet would open up integrations to numeroustools like dask, pyarrow, duckdb,
+parquet would open up integrations to numerous tools like dask, pyarrow, duckdb,
and BigQuery. More thoughts on this
in [#4](https://github.com/alxmrs/xarray-sql/issues/4).

