
Update sql-using-python-udf.py #1

Closed
Ritinikhil wants to merge 2 commits into timsaucer:main from Ritinikhil:main

Conversation

@Ritinikhil

enhance sql-using-python-udf example

  • Add comprehensive comments and documentation
  • Implement multiple data registration methods for API compatibility
  • Add version information printing for debugging
  • Improve error handling with informative messages
  • Add formatted table output for better readability
  • Include input validation through PyArrow schema
  • Add results verification with assertions

Author: Ritinikhil
Date: 2025-03-06

Which issue does this PR close?

Closes #.

Rationale for this change

What changes are included in this PR?

Are there any user-facing changes?

improve path handling in substrait example

- Add cross-platform path handling using os.path
- Add error handling for CSV file registration
- Improve code documentation
- Remove hard-coded Windows paths
- Keep Apache license header intact

Author: Ritinikhil
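The cross-platform path handling described above can be done with the standard library alone, resolving files relative to the script instead of hard-coding a Windows path. A sketch; the `data/example.csv` location is illustrative:

```python
import os

# Resolve the CSV relative to this script's directory so the example
# works on Windows, macOS, and Linux alike. os.path.join inserts the
# correct separator for the current platform.
script_dir = os.path.dirname(os.path.abspath(__file__))
csv_path = os.path.join(script_dir, "data", "example.csv")
```

Registering the file can then be wrapped in a try/except that reports a missing file instead of raising an opaque error.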
@timsaucer
Owner

I just noticed this - can you change your upstream to the main repo rather than my fork?

@Ritinikhil Ritinikhil closed this by deleting the head repository Mar 9, 2025
timsaucer added a commit that referenced this pull request Apr 18, 2026
- Wrap CASE/WHEN method-chain examples in parentheses and assign to a
  variable so they are valid Python as shown (Copilot #1, #2).
- Fix INTERSECT/EXCEPT mapping: the default distinct=False corresponds to
  INTERSECT ALL / EXCEPT ALL, not the distinct forms. Updated both the
  Set Operations section and the SQL reference table to show both the
  ALL and distinct variants (Copilot apache#4).
- Change write_parquet / write_csv / write_json examples to file-style
  paths (output.parquet, etc.) to match the convention used in existing
  tests and examples. Note that a directory path is also valid for
  partitioned output (Copilot apache#5).

Verified INTERSECT/EXCEPT semantics with a script:
  df1.intersect(df2)                -> [1, 1, 2]  (= INTERSECT ALL)
  df1.intersect(df2, distinct=True) -> [1, 2]     (= INTERSECT)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
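The parenthesization fix is a general Python syntax point, not DataFusion-specific: a method chain split across lines is only valid inside parentheses (or with line-continuation backslashes). A stdlib-only illustration of the pattern:

```python
# Without the surrounding parentheses, each line after the first would be
# a syntax error; with them, the chain parses as one expression.
expr = (
    "  case when  "
    .strip()
    .upper()
    .replace(" ", "_")
)
```

The same wrapping makes the guide's CASE/WHEN chains valid Python as shown.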
timsaucer added a commit that referenced this pull request Apr 23, 2026
* Add AGENTS.md and enrich __init__.py module docstring

Add python/datafusion/AGENTS.md as a comprehensive DataFrame API guide
for AI agents and users. It ships with pip automatically (Maturin includes
everything under python-source = "python"). Covers core abstractions,
import conventions, data loading, all DataFrame operations, expression
building, a SQL-to-DataFrame reference table, common pitfalls, idiomatic
patterns, and a categorized function index.

Enrich the __init__.py module docstring from 2 lines to a full overview
with core abstractions, a quick-start example, and a pointer to AGENTS.md.

Closes apache#1394 (PR 1a)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Clarify audience of root vs package AGENTS.md

The root AGENTS.md (symlinked as CLAUDE.md) is for contributors working
on the project. Add a pointer to python/datafusion/AGENTS.md which is
the user-facing DataFrame API guide shipped with the package. Also add
the Apache license header to the package AGENTS.md.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Add PR template and pre-commit check guidance to AGENTS.md

Document that all PRs must follow .github/pull_request_template.md and
that pre-commit hooks must pass before committing. List all configured
hooks (actionlint, ruff, ruff-format, cargo fmt, cargo clippy, codespell,
uv-lock) and the command to run them manually.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Remove duplicated hook list from AGENTS.md

Let the hooks be discoverable from .pre-commit-config.yaml rather than
maintaining a separate list that can drift.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Fix AGENTS.md: Arrow C Data Interface, aggregate filter, fluent example

- Clarify that DataFusion works with any Arrow C Data Interface
  implementation, not just PyArrow.
- Show the filter keyword argument on aggregate functions (the idiomatic
  HAVING equivalent) instead of the post-aggregate .filter() pattern.
- Update the SQL reference table to show FILTER (WHERE ...) syntax.
- Remove the now-incorrect "Aggregate then filter for HAVING" pitfall.
- Add .collect() to the fluent chaining example so the result is clearly
  materialized.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Update agents file after working through the first tpc-h query using only the text description

* Add feedback from working through each of the TPC-H queries

* Address Copilot review feedback on AGENTS.md

- Wrap CASE/WHEN method-chain examples in parentheses and assign to a
  variable so they are valid Python as shown (Copilot #1, #2).
- Fix INTERSECT/EXCEPT mapping: the default distinct=False corresponds to
  INTERSECT ALL / EXCEPT ALL, not the distinct forms. Updated both the
  Set Operations section and the SQL reference table to show both the
  ALL and distinct variants (Copilot apache#4).
- Change write_parquet / write_csv / write_json examples to file-style
  paths (output.parquet, etc.) to match the convention used in existing
  tests and examples. Note that a directory path is also valid for
  partitioned output (Copilot apache#5).

Verified INTERSECT/EXCEPT semantics with a script:
  df1.intersect(df2)                -> [1, 1, 2]  (= INTERSECT ALL)
  df1.intersect(df2, distinct=True) -> [1, 2]     (= INTERSECT)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
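The verified multiset semantics can be illustrated in plain Python (this is not the DataFusion API, just the counting rule, with illustrative data): INTERSECT ALL keeps each common value up to its minimum count on either side, while plain INTERSECT keeps it once.

```python
from collections import Counter

df1 = [1, 1, 2, 3]
df2 = [1, 1, 1, 2]

# Counter & Counter takes the per-element minimum count: INTERSECT ALL.
intersect_all = sorted((Counter(df1) & Counter(df2)).elements())
# set & set keeps each common value once: INTERSECT (distinct).
intersect_distinct = sorted(set(df1) & set(df2))
```

Here `intersect_all` is `[1, 1, 2]` and `intersect_distinct` is `[1, 2]`, matching the `distinct=False` default and `distinct=True` behavior described above.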

* Use short-form comparisons in AGENTS.md examples

Drop lit() on the RHS of comparison operators since Expr auto-wraps raw
Python values, matching the style the guide recommends (Copilot apache#3, apache#6).

Updates examples in the Aggregation, CASE/WHEN, SQL reference table,
Common Pitfalls, Fluent Chaining, and Variables-as-CTEs sections, plus
the __init__.py quick-start snippet. Prose explanations of the rule
(which cite the long form as the thing to avoid) are left unchanged.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
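The auto-wrapping that makes the short form possible is a common operator-overloading pattern. A toy sketch of the mechanism (not DataFusion's actual implementation, and the `Col`/`Lit` classes here are hypothetical stand-ins for illustration):

```python
class Lit:
    """Wraps a raw Python value as a literal expression."""
    def __init__(self, value):
        self.value = value

class Col:
    """A column reference whose comparison operators build expressions."""
    def __init__(self, name):
        self.name = name

    def __gt__(self, other):
        # Auto-wrap raw Python values, so `col > 5` and
        # `col > Lit(5)` produce the same expression.
        if not isinstance(other, Lit):
            other = Lit(other)
        return ("gt", self.name, other.value)

short_form = Col("x") > 5
long_form = Col("x") > Lit(5)
```

Because the RHS is promoted inside `__gt__`, dropping `lit()` changes nothing semantically, which is why the guide can recommend the shorter style throughout.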

* Move user guide from python/datafusion/AGENTS.md to SKILL.md

The in-wheel AGENTS.md was not a real distribution channel -- no shipping
agent walks site-packages for AGENTS.md files. Moving to SKILL.md at the
repo root, with YAML frontmatter, lets the skill ecosystems (npx skills,
Claude Code plugin marketplaces, community aggregators) discover it.

Update the pointers in the contributor AGENTS.md and the __init__.py
module docstring accordingly. The docstring now references the GitHub
URL since the file no longer ships with the wheel.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* Address review feedback: doctest, streaming, date/timestamp

- Convert the __init__.py quick-start block to doctest format so it is
  picked up by `pytest --doctest-modules` (already the project default),
  preventing silent rot.
- Extract streaming into its own SKILL.md subsection with guidance on
  when to prefer execute_stream() over collect(), sync and async
  iteration, and execute_stream_partitioned() for per-partition streams.
- Generalize the date-arithmetic rule from Date32 to both Date32 and
  Date64 (both reject Duration at any precision, both accept
  month_day_nano_interval), and note that Timestamp columns differ and
  do accept Duration.
- Document the PyArrow-inherited type mapping returned by
  to_pydict()/to_pylist(), including the nanosecond fallback to
  pandas.Timestamp / pandas.Timedelta and the to_pandas() footgun where
  date columns come back as an object dtype.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* Distinguish user guide from agent reference in module docstring

The docstring pointed readers at SKILL.md as a "comprehensive guide," but
SKILL.md is written in a dense, skill-oriented format for agents — humans
are better served by the online user guide. Put the online docs first as
the primary reference and label the SKILL.md link as the agent reference.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>