Conversation

@Shekharrajak (Contributor)

Which issue does this PR close?

Closes #3422

Rationale for this change

Comet bypasses Spark's logicalPlanOutputWithNames() entirely. It must explicitly use cmd.outputColumnNames (the table's actual column names) to achieve the same result.
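To illustrate the idea behind `logicalPlanOutputWithNames()`, here is a minimal plain-Scala sketch (no Spark dependency; `Attr` is a hypothetical stand-in for Spark's `Attribute`): the query's output attributes are re-aliased positionally to the table's declared column names before the write.

```scala
// Hypothetical model of the rename step: zip the query's output
// attributes with the table's outputColumnNames and re-alias any
// attribute whose name differs from the table's declared column name.
case class Attr(name: String)

def withTableNames(queryOutput: Seq[Attr], outputColumnNames: Seq[String]): Seq[Attr] = {
  require(queryOutput.length == outputColumnNames.length,
    "query output and table columns must line up positionally")
  queryOutput.zip(outputColumnNames).map { case (attr, tableName) =>
    if (attr.name == tableName) attr else Attr(tableName) // re-alias to the table's name
  }
}
```

For example, `withTableNames(Seq(Attr("col1"), Attr("col2")), Seq("i", "s"))` yields attributes named `i` and `s`, matching the table schema rather than the query plan.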

What changes are included in this PR?

Renames the attributes in the logical plan (outputColumns) to the table's column names before passing it to FileFormatWriter.

How are these changes tested?

Unit tests.

@Shekharrajak changed the title from "Fix/spark 38811 insert alter table add columns" to "fix: spark 38811 insert alter table add columns" on Feb 10, 2026
}

// Refresh the catalog table cache so subsequent reads see the new data
catalogTable.foreach { ct =>
Comment from @Shekharrajak (author):

This was a different issue: while running the test, I realised the catalog table needs to be refreshed so subsequent reads see the new data.

  .setOutputPath(outputPath)
  .setCompression(codec)
- .addAllColumnNames(cmd.query.output.map(_.name).asJava)
+ .addAllColumnNames(cmd.outputColumnNames.asJava)
Comment from @Shekharrajak (author):

Operator {
      plan_id: 42
      parquet_writer: ParquetWriter {
        output_path: "file:/.../spark-warehouse/t"
        compression: SNAPPY
        column_names: ["i", "s"]       
      }
......
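To make the change above concrete, here is a plain-Scala sketch (hypothetical attribute names, no Spark dependency) of the scenario from SPARK-38811: after ALTER TABLE ADD COLUMNS, the INSERT's query plan can carry attribute names that differ from the table's declared columns, and the writer must use the latter.

```scala
// Hypothetical repro modeled on SPARK-38811: table `t` has declared
// columns (i, s) after ALTER TABLE ADD COLUMNS, but the INSERT's query
// plan produces attributes with different names.
val queryOutputNames  = Seq("col1", "col2") // from cmd.query.output (assumed names)
val outputColumnNames = Seq("i", "s")       // from cmd.outputColumnNames

// Before this PR the writer was handed the query's names; with the fix
// it is handed the table's declared column names, so the written file's
// schema matches the table (column_names: ["i", "s"] in the proto above).
val columnNamesForWriter = outputColumnNames
```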

Successfully merging this pull request may close these issues: [COMET NATIVE WRITER] SPARK-38811 INSERT INTO on ALTER TABLE ADD COLUMNS