ARROW-881: [Python] Reconstruct Pandas DataFrame indexes using metadata (#612)
cpcloud wants to merge 10 commits into apache:master from cpcloud/ARROW-881
Conversation
cpp/src/arrow/ipc/metadata.cc
I'm going to revert these.
cpp/src/arrow/type.h
This is a rebase artifact.
python/pyarrow/_parquet.pyx
Let me add a small test, doing it now.
python/pyarrow/tests/test_parquet.py
This limits us to MultiIndexes with <= 255 levels (because we're using string -> string for metadata). I think that's reasonable for now. We can always come up with a more complex encoding if we want to support more levels than that. I'd be surprised if this ever comes up in practice.
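One hypothetical flat encoding along these lines (illustrative only, not the actual Arrow code) shows why a string -> string map invites a per-level key scheme; the key names and the single-byte level count here are assumptions:

```python
# Illustrative sketch: store each MultiIndex level's name under its own
# indexed key inside a flat bytes -> bytes metadata map. If the level
# index were packed into a single byte, at most 255 levels would fit.
def encode_index_levels(level_names):
    meta = {b'num_index_levels': str(len(level_names)).encode('utf8')}
    for i, name in enumerate(level_names):
        meta[b'index_level_%d' % i] = name.encode('utf8')
    return meta

meta = encode_index_levels(['year', 'month'])
assert meta[b'num_index_levels'] == b'2'
assert meta[b'index_level_0'] == b'year'
assert meta[b'index_level_1'] == b'month'
```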
python/pyarrow/__init__.py
Maybe serialize_pandas and deserialize_pandas?
python/pyarrow/_table.pyx
This function is getting "chubby" enough that we should probably move it to a pandas utility module in pure Python.
python/pyarrow/_table.pyx
Same comment as above re: doing this in pure Python. It would also encourage adding appropriate public APIs to pyarrow.Table. We already have Table.remove_column, so it is probably better to use that if possible.
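A toy stand-in for pyarrow's Table illustrates the remove_column-style usage suggested above; the class here is a plain-Python mock, not the real Arrow-backed Table:

```python
# Toy model of an immutable remove_column API: return a new table
# without the i-th column instead of mutating in place, mirroring the
# pyarrow.Table.remove_column style mentioned above.
class ToyTable:
    def __init__(self, columns):
        self.columns = list(columns)  # list of (name, values) pairs

    def remove_column(self, i):
        return ToyTable(self.columns[:i] + self.columns[i + 1:])

t = ToyTable([('a', [1, 2]), ('__index_level_0__', [0, 1])])
names = [name for name, _ in t.columns]
t2 = t.remove_column(names.index('__index_level_0__'))
assert [name for name, _ in t2.columns] == ['a']
assert len(t.columns) == 2  # original table is untouched
```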
python/pyarrow/ipc.py
This DEFAULT_INDEX_FIELD is a slight nuisance. Perhaps add an argument to from_pandas controlling whether to ingest the index (the default could be True or False, I guess)?
python/pyarrow/tests/test_parquet.py
I think <= 255 levels is OK. I would actually rather see this metadata stored as a JSON blob under a single pandas key, otherwise we are possibly muddying the metadata namespace.
metadata = {b'pandas': json.dumps(pandas_meta).encode('utf8')}
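A self-contained sketch of the suggestion above: collect all pandas-specific fields into one JSON document under a single b'pandas' key, since Arrow schema metadata is a flat string -> string (bytes) mapping. The field names inside the blob are illustrative, not the finalized spec:

```python
import json

# Hypothetical shape of the pandas metadata blob (field names are
# illustrative): which columns are index levels, plus per-column info.
pandas_meta = {
    'index_columns': ['__index_level_0__'],
    'columns': [
        {'name': 'a', 'pandas_type': 'int64'},
        {'name': '__index_level_0__', 'pandas_type': 'int64'},
    ],
}

# One JSON value under one key keeps the metadata namespace clean.
metadata = {b'pandas': json.dumps(pandas_meta).encode('utf8')}

# Readers decode the single blob instead of scanning many keys.
decoded = json.loads(metadata[b'pandas'].decode('utf8'))
assert decoded['index_columns'] == ['__index_level_0__']
assert decoded['columns'][0]['name'] == 'a'
```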
PARQUET-595 is merged
@wesm This is ready for another round of review when you get a chance.
OK, taking a look now. Minor rebase conflict from #679.
Fixed the conflict and addressed the comments.
wesm left a comment:
Overall this looks fine; this will be very nice to have! I would say we should start factoring out code from pyarrow.lib that doesn't need to be Cythonized, which will make iterative development a little easier in some cases, too.
cpp/src/arrow/type.h
You should be able to call this and the one above in a const context. You'll have to mark name_to_index_ as mutable to make this work.
python/pyarrow/array.pxi
Maybe we should factor this out into a pandas_compat.py module, along with the rest of the stuff below
This is pretty awkward to factor out because of the TimeUnit_* enum values. We'd have to make pandas_compat.pxi if we wanted to keep those available to Cython but not Python (which would seem to defeat part of the purpose of factoring out) or expose the enum values to Python. This doesn't seem worth it for something that will never be seen by a user. Still, if you feel strongly about it I can spend some more time on it.
True true, no worries, this is fine as is.
python/pyarrow/parquet.py
I think we need an explicit read_pandas function in this class so that the user must express intent to use the additional pandas metadata
I think this is the last thing in this patch. I would like to have the option to ignore the metadata and read the file as-is as an Arrow table (without having the index columns tacked on against my will). So we can either add a read_pandas method that enables the metadata wrangling logic, or an option to read that does the same thing.
Yep, fully on board here. Just trying to iron out pandas_compat stuff, then moving on to this.
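A toy model of the opt-in behavior being discussed: index columns are only reattached when the caller explicitly asks for the pandas metadata handling. The function signature and metadata structure here are illustrative, not the final pyarrow API:

```python
# Toy sketch: reading a file "as-is" ignores any pandas index metadata;
# an explicit flag opts in to reconstructing the index columns.
def read_table(file_meta, use_pandas_metadata=False):
    columns = list(file_meta['columns'])
    index_columns = (file_meta.get('index_columns', [])
                     if use_pandas_metadata else [])
    return columns, index_columns

meta = {'columns': ['a', '__index_level_0__'],
        'index_columns': ['__index_level_0__']}

# Default: plain Arrow-style read, no index columns tacked on.
assert read_table(meta)[1] == []
# Opt-in: the pandas index metadata is honored.
assert read_table(meta, use_pandas_metadata=True)[1] == ['__index_level_0__']
```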
python/pyarrow/table.pxi
This is a regression, since pandas is not a hard dependency.
python/pyarrow/table.pxi
Move some of this code to pyarrow.pandas_compat?
python/pyarrow/table.pxi
Maybe make check_index default to False?
python/pyarrow/tests/test_ipc.py
Test a MultiIndex here?
What is the behavior when the columns are not strings?
This now raises a TypeError alerting the user to the fact that column names cannot be anything other than strings.
Also added a multiindex test.
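A sketch of the validation described above: since the schema metadata is a string -> string mapping, non-string column names cannot round-trip, so they are rejected up front. The function name is illustrative:

```python
# Reject non-string column names early with a descriptive TypeError,
# as discussed above (string -> string metadata cannot encode them).
def check_column_names(columns):
    for name in columns:
        if not isinstance(name, str):
            raise TypeError(
                'Column names must be strings; got {!r} of type {}'
                .format(name, type(name).__name__))

check_column_names(['a', 'b'])  # fine: all strings
try:
    check_column_names(['a', 0])
except TypeError:
    pass  # non-string name is rejected, as expected
else:
    raise AssertionError('expected TypeError for non-string name')
```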
I think this and #602 are the last things I'd like to get in before cutting 0.4.0 (outside some cleanup patches).
Sounds good!
python/pyarrow/ipc.py
Can you add nthreads=None here and pass it through to to_pandas (single-threaded by default)?
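A minimal sketch of threading the nthreads keyword through, defaulting to single-threaded as requested; FakeTable stands in for pyarrow's real Table and only records the value it receives:

```python
# Stand-in for a pyarrow Table whose to_pandas accepts nthreads;
# here it just echoes the value so the pass-through is observable.
class FakeTable:
    def to_pandas(self, nthreads=1):
        return {'nthreads': nthreads}

def deserialize_pandas(table, nthreads=None):
    # None means "single-threaded by default", per the comment above.
    return table.to_pandas(nthreads=1 if nthreads is None else nthreads)

t = FakeTable()
assert deserialize_pandas(t)['nthreads'] == 1
assert deserialize_pandas(t, nthreads=4)['nthreads'] == 4
```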
Made a last comment in #612 (comment), but outside of that I think this is about good to go.
python/pyarrow/parquet.py
Need to call _get_column_indices on these?
Ah crap. Yep. Will also add a test since this wasn't failing for me locally.
Here's the AppVeyor build: https://ci.appveyor.com/project/cpcloud/arrow/build/1.0.158

+1, thanks for doing this!
cc @mrocklin

Author: Phillip Cloud <cpcloud@gmail.com>

Closes apache#612 from cpcloud/ARROW-881 and squashes the following commits:

4fa679d [Phillip Cloud] Add metadata test
60f71aa [Phillip Cloud] More doc
de616e8 [Phillip Cloud] Add doc
a42a084 [Phillip Cloud] Decode metadata to utf8 because JSON
2198dc5 [Phillip Cloud] Call column_name_idx on index_columns
32c5e64 [Phillip Cloud] Add test for read_pandas subset
2fa1f16 [Phillip Cloud] Do not write index_column metadata if not requested
21a8829 [Phillip Cloud] Add docs to pq.read_pandas
c35970c [Phillip Cloud] Add test for no index written and pq.read_pandas
59477b5 [Phillip Cloud] ARROW-881: [Python] Reconstruct Pandas DataFrame indexes using custom_metadata