Partitioned Append on Identity Transform #555
Conversation
pyiceberg/table/__init__.py
Outdated
    table_metadata=table_metadata,
    tasks=iter([WriteTask(write_uuid, next(counter), batches) for batches in bin_pack_arrow_table(df, target_file_size)]),  # type: ignore
)
if any(len(spec.fields) > 0 for spec in table_metadata.partition_specs):
It seems the old line was not checking whether the table is partitioned, but rather whether the partition spec had evolved? The old check was:
if len([spec for spec in table_metadata.partition_specs if spec.spec_id != 0]) > 0:
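To illustrate the distinction, here is a minimal sketch using PyIceberg's partitioning classes; the two spec objects are made up for the example:

from pyiceberg.partitioning import UNPARTITIONED_PARTITION_SPEC, PartitionField, PartitionSpec
from pyiceberg.transforms import IdentityTransform

# Hypothetical specs, purely to contrast the two checks.
unpartitioned_spec = UNPARTITIONED_PARTITION_SPEC  # spec_id=0, no fields
evolved_spec = PartitionSpec(
    PartitionField(source_id=1, field_id=1000, transform=IdentityTransform(), name="id"),
    spec_id=1,
)
partition_specs = [unpartitioned_spec, evolved_spec]

# Old check: True only when a non-default spec id exists, i.e. the spec has evolved.
has_spec_evolution = len([spec for spec in partition_specs if spec.spec_id != 0]) > 0
# New check: True whenever any spec actually carries partition fields.
is_partitioned = any(len(spec.fields) > 0 for spec in partition_specs)

A table created partitioned from the start keeps spec_id 0, so only the new check reports it as partitioned.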
pyiceberg/table/__init__.py
Outdated
)
if target_file_size is None:
    raise ValueError(
        "Fail to get neither TableProperties.WRITE_TARGET_FILE_SIZE_BYTES nor WRITE_TARGET_FILE_SIZE_BYTES_DEFAULT for writing target data file."
I have mixed feelings about this exception check, because we set the default value of target_file_size to TableProperties.WRITE_TARGET_FILE_SIZE_BYTES_DEFAULT on the previous line, so the check feels redundant.
I understand why we are doing it though:
PropertyUtil.property_as_int returns Optional[int], and bin-packing expects an int, so we need to narrow the type.
If we run into more of these type-checking redundancies in the code base, where we are using property values that are always expected to have a non-null default, maybe we should refactor PropertyUtil instead. Maybe we can have two methods: property_as_int, which returns Optional[int], and property_as_int_with_default, which returns an int?
property_as_int_with_default sounds better to me, because all the exceptions raised due to a missing default property could be centralized in that function. How do you feel about it?
I like that as well, the ValueError is misleading and it is not directly obvious why we would raise it.
I just found that the default value itself can be None:
PARQUET_COMPRESSION_LEVEL_DEFAULT = None
so this None check is not unnecessary? The original code for this target_file_size check just silences it with # type: ignore.
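A rough sketch of the proposed split (the method names come from the suggestion above; the signatures are assumptions, not the current PyIceberg API):

from typing import Dict, Optional


class PropertyUtil:
    @staticmethod
    def property_as_int(properties: Dict[str, str], property_name: str, default: Optional[int] = None) -> Optional[int]:
        # May legitimately return None, e.g. for properties such as PARQUET_COMPRESSION_LEVEL
        # whose default is None.
        value = properties.get(property_name)
        return int(value) if value is not None else default

    @staticmethod
    def property_as_int_with_default(properties: Dict[str, str], property_name: str, default: int) -> int:
        # The default is required and non-null, so callers always get an int and no longer
        # need their own None checks or `# type: ignore` comments.
        value = properties.get(property_name)
        return int(value) if value is not None else default

With the second method, the target_file_size lookup would return a plain int and the ValueError above could be dropped.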
pyiceberg/table/__init__.py
Outdated
| """ | ||
| import pyarrow as pa | ||
|
|
||
| partition_columns = get_partition_columns(iceberg_table_metadata, arrow_table) |
How do you feel about this suggestion? Most of this function's responsibility seems to lie in making sure that the partition field is provided in the arrow_table, but we seem to already be checking the schema in the write functions now.
Suggested change:
-partition_columns = get_partition_columns(iceberg_table_metadata, arrow_table)
+partition_columns = [iceberg_table_metadata.schema().find_column_name(partition_field.source_id) for partition_field in iceberg_table_metadata.spec().fields]
It will be more useful when there are hidden partition columns. And the None check is also there for mypy, because find_column_name returns Optional[str].
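A hedged sketch of what that narrowing looks like inside a helper like get_partition_columns (the signature is illustrative; the real helper may also validate the arrow table):

from typing import List

import pyarrow as pa

from pyiceberg.table.metadata import TableMetadata


def get_partition_columns(iceberg_table_metadata: TableMetadata, arrow_table: pa.Table) -> List[str]:
    partition_columns: List[str] = []
    for partition_field in iceberg_table_metadata.spec().fields:
        column_name = iceberg_table_metadata.schema().find_column_name(partition_field.source_id)
        # find_column_name returns Optional[str]; the explicit check narrows the type for mypy
        # and gives a clear error when the source column cannot be resolved.
        if not column_name:
            raise ValueError(f"Cannot find source column for partition field: {partition_field}")
        partition_columns.append(column_name)
    return partition_columns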
jqin61 left a comment:
@syun64 Please give another round of review, thank you!
Fokko left a comment:
Left some small comments, apart from that it looks good to me 👍
@partition_field_to_data_file_partition_field.register(LongType)
@partition_field_to_data_file_partition_field.register(DateType)
This single-dispatch seems to be there only for the TimeType. Probably we should also convert those into a native type.
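For reference, a hedged sketch of the single-dispatch shape being discussed (the registered types and the TimeType conversion in the actual PR may differ):

from functools import singledispatch
from typing import Any

from pyiceberg.types import DateType, IcebergType, LongType, TimeType


@singledispatch
def partition_field_to_data_file_partition_field(partition_field_type: IcebergType, value: Any) -> Any:
    raise ValueError(f"Unsupported partition field type: {partition_field_type}")


@partition_field_to_data_file_partition_field.register(LongType)
@partition_field_to_data_file_partition_field.register(DateType)
def _(partition_field_type: IcebergType, value: Any) -> Any:
    # Values for these types are already in a native representation, so they pass through untouched.
    return value


@partition_field_to_data_file_partition_field.register(TimeType)
def _(partition_field_type: IcebergType, value: Any) -> Any:
    # The TimeType case is the only one that needs real work, which is what the comment
    # above suggests converting into a native type as well.
    return value  # placeholder for the actual time conversion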
pyiceberg/table/__init__.py
Outdated
if len(self.spec().fields) > 0:
    raise ValueError("Cannot write to partitioned tables")
supported = {IdentityTransform}
Nit:
Suggested change:
-supported = {IdentityTransform}
+supported_transforms = {IdentityTransform}
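A hedged sketch of the validation this line feeds into, using the renamed variable (the helper name and error message here are made up; the PR's in-method check may read differently):

from typing import List

from pyiceberg.partitioning import PartitionField, PartitionSpec
from pyiceberg.transforms import IdentityTransform

supported_transforms = {IdentityTransform}


def _check_partition_transforms_supported(spec: PartitionSpec) -> None:
    # Instead of rejecting every partitioned table outright, only reject specs that
    # use a transform outside the supported set (identity only, for this PR).
    unsupported: List[PartitionField] = [
        field for field in spec.fields if type(field.transform) not in supported_transforms
    ]
    if unsupported:
        raise ValueError(f"Only {supported_transforms} are supported for partitioned writes, got: {unsupported}")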
pyiceberg/table/__init__.py
Outdated
partition_key: Optional[PartitionKey] = None

# Later to be extended with partition information
def generate_data_file_partition_path(self) -> str:
Nit: This function looks redundant. The check is being done in generate_data_file_path() as well. I would merge those two.
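A hedged sketch of the suggested merge, using a trimmed-down stand-in for WriteTask and assuming PartitionKey exposes to_path() as in the PR; only the folded-in partition-path branch is the point, the filename logic is a placeholder:

from dataclasses import dataclass
from typing import Optional

from pyiceberg.partitioning import PartitionKey


@dataclass
class WriteTaskSketch:
    partition_key: Optional[PartitionKey] = None

    def generate_data_file_filename(self, extension: str) -> str:
        return f"00000-0-task.{extension}"  # placeholder naming scheme

    def generate_data_file_path(self, extension: str) -> str:
        # Former generate_data_file_partition_path() folded in: prefix with the partition
        # path only when a partition key is present, otherwise fall back to the plain filename.
        if self.partition_key:
            return f"{self.partition_key.to_path()}/{self.generate_data_file_filename(extension)}"
        return self.generate_data_file_filename(extension)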
pyiceberg/table/__init__.py
Outdated
| ) | ||
| if target_file_size is None: | ||
| raise ValueError( | ||
| "Fail to get neither TableProperties.WRITE_TARGET_FILE_SIZE_BYTES nor WRITE_TARGET_FILE_SIZE_BYTES_DEFAULT for writing target data file." |
There was a problem hiding this comment.
I like that as well, the ValueError is misleading and it is not directly obvious why we would raise it.
pyiceberg/table/__init__.py
Outdated
    return table_partitions


def partition(spec: PartitionSpec, schema: Schema, arrow_table: pa.Table) -> Iterable[TablePartition]:
It would be good to have a bit longer, more descriptive name. I also think we should hide this from the outside user.
Suggested change:
-def partition(spec: PartitionSpec, schema: Schema, arrow_table: pa.Table) -> Iterable[TablePartition]:
+def _determine_partitions(spec: PartitionSpec, schema: Schema, arrow_table: pa.Table) -> List[TablePartition]:
I think we can also return a list, so folks know that it is already materialized.
        schema=table_metadata.schema(),
    )
    for partition in partitions
    for batches in bin_pack_arrow_table(partition.arrow_table_partition, target_file_size)
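For readability, a hedged expansion of the nested comprehension this fragment belongs to; the import locations and WriteTask keyword names are inferred from the surrounding diff and may not match the PR exactly:

from typing import Any, Iterable, Iterator, List
from uuid import UUID

from pyiceberg.io.pyarrow import bin_pack_arrow_table  # assumed location
from pyiceberg.table import WriteTask  # assumed location


def _expand_write_tasks(
    partitions: Iterable[Any],  # TablePartition objects, each holding one partition's arrow slice
    write_uuid: UUID,
    counter: Iterator[int],
    target_file_size: int,
    table_metadata: Any,
) -> List[WriteTask]:
    tasks = []
    for partition in partitions:
        # Each partition slice is bin-packed into batches that target the configured file size.
        for batches in bin_pack_arrow_table(partition.arrow_table_partition, target_file_size):
            tasks.append(
                WriteTask(
                    write_uuid=write_uuid,
                    task_id=next(counter),
                    record_batches=batches,
                    partition_key=partition.partition_key,
                    schema=table_metadata.schema(),
                )
            )
    return tasks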
'double': [0.0, None, 0.9],
'timestamp': [datetime(2023, 1, 1, 19, 25, 00), None, datetime(2023, 3, 1, 19, 25, 00)],
'timestamptz': [datetime(2023, 1, 1, 19, 25, 00), None, datetime(2023, 3, 1, 19, 25, 00)],
'timestamptz': [
tests/conftest.py
Outdated
import pyarrow as pa

"""PyArrow table with all kinds of columns."""
Suggested change:
-import pyarrow as pa
-"""PyArrow table with all kinds of columns."""
+"""PyArrow table with all kinds of columns."""
+import pyarrow as pa
pyiceberg/manifest.py
Outdated
-def data_file_with_partition(partition_type: StructType, format_version: TableVersion) -> StructType:
+def data_file_with_partition(partition_type: StructType, format_version: Literal[1, 2]) -> StructType:
Nit:
Suggested change:
-def data_file_with_partition(partition_type: StructType, format_version: Literal[1, 2]) -> StructType:
+def data_file_with_partition(partition_type: StructType, format_version: TableVersion) -> StructType:
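For context, TableVersion is an alias for that same literal, so the nit is about reusing the shared alias rather than repeating it (a minimal sketch; in PyIceberg the alias lives in pyiceberg.typedef):

from typing import Literal

from pyiceberg.types import StructType

TableVersion = Literal[1, 2]  # mirrors the alias in pyiceberg.typedef


def data_file_with_partition(partition_type: StructType, format_version: TableVersion) -> StructType:
    ...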
As discussed in the monthly meeting, this is the first of the 4 PRs that break #353 down:
1. Partitioned append with identity transform
The other three are:
2. Dynamic overwrite using delete + append, 2 snapshots in one commit
3. Hidden partitioning support (for slicing the arrow table, the manifest file entry.partition, and the data file path)
4. Static overwrite using delete + append, 2 snapshots in one commit