[SPARK-55600][PYTHON] Fix pandas to arrow loses row count when schema has 0 columns on classic #54382

Open

Yicong-Huang wants to merge 3 commits into apache:master from Yicong-Huang:SPARK-55600/fix/pandas-arrow-zero-columns-row-count

Conversation

@Yicong-Huang (Contributor) commented on Feb 19, 2026

What changes were proposed in this pull request?

This PR fixes a row-count loss issue when creating a Spark DataFrame from a pandas DataFrame with 0 columns on classic Spark.

The issue occurs because of a PyArrow limitation: when RecordBatches or Tables are created with 0 columns, the row count information is lost.
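The key observation behind a fix like this is that a zero-column pandas DataFrame still knows its own length through its index, so the row count can be captured on the pandas side before any Arrow conversion. A minimal sketch of that idea (the helper name is illustrative, not the PR's actual code):

```python
import pandas as pd

def preserved_row_count(pdf: pd.DataFrame) -> int:
    # A zero-column DataFrame has no data arrays for Arrow to measure,
    # but its index still records how many rows it has.
    return len(pdf.index)

pdf = pd.DataFrame(index=range(5))  # 5 rows, 0 columns
print(pdf.shape)                    # (5, 0)
print(preserved_row_count(pdf))     # 5
```

Carrying this count alongside the (column-less) Arrow data is one way the row count can survive the round trip.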

Why are the changes needed?

Before this fix:

```python
import pandas as pd
from pyspark.sql.types import StructType

# assumes an active SparkSession bound to `spark`
pdf = pd.DataFrame(index=range(5))  # 5 rows, 0 columns
df = spark.createDataFrame(pdf, schema=StructType([]))
df.count()  # Returns 0 (wrong!)
```

After this fix:

```python
df.count()  # Returns 5 (correct!)
```

Does this PR introduce any user-facing change?

Yes. Creating a DataFrame from a pandas DataFrame with 0 columns now correctly preserves the row count in Classic Spark.

How was this patch tested?

Added a unit test, `test_from_pandas_dataframe_with_zero_columns`, in `test_creation.py` that covers both the Arrow-enabled and Arrow-disabled paths.

Was this patch authored or co-authored using generative AI tooling?

No
