
Spark SQL job does not terminate after successful execution, continues to spawn new jobs repeatedly #2202

@Riefu

Description


When executing a simple Spark SQL query that should submit only a single job, the job completes successfully but the application fails to terminate. Instead, it continues to spawn new jobs in a repetitive cycle without exiting.

Reproduction Steps

  1. Submit a simple Spark SQL query (e.g., SELECT * FROM table WHERE condition = '432')
  2. Observe the job execution in the Aurora interface
  3. Note that after the initial job succeeds, new jobs are continuously spawned

Expected Behavior

  • The Spark SQL job should terminate normally after successful execution
  • No additional jobs should be spawned after the initial job completes

Actual Behavior

  • The job continues to run indefinitely, repeatedly spawning new jobs
  • The system does not exit or terminate as expected

Screenshot/Logs

Job execution screenshot

  • Shows job ID 3 with a list of successful job IDs, while job ID [576] is still shown as running
  • The growing list of successful job IDs indicates the same work is being executed repeatedly

Environment Information

  • Apache Aurora version: 7.0.0
  • Spark version: 3.2.4
  • Hadoop version: 3.0.0

Additional Context

This issue occurs with simple queries that should only generate a single job, suggesting a potential termination or cleanup problem in the job lifecycle management.
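To help narrow down whether the lifecycle problem is inside the application or in the surrounding platform, here is a minimal PySpark reproduction sketch. The app name and the table/condition in the query are placeholders taken from the example above, not real objects; the point is the explicit `spark.stop()` in a `finally` block, which rules out a session that was simply never closed:

```python
# Hypothetical minimal reproduction: run one simple query, then stop
# the session explicitly. If new jobs are still spawned after stop()
# returns, the repeated submissions originate outside this script's
# own lifecycle (e.g. in the scheduling layer), not from a forgotten
# SparkSession.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("repro-2202").getOrCreate()
try:
    # A query that should submit exactly one job.
    spark.sql("SELECT * FROM table WHERE condition = '432'").show()
finally:
    # Explicitly tear down the session and its underlying SparkContext.
    spark.stop()
```

If the process exits cleanly with this sketch but hangs when launched through the platform, that would point at the job lifecycle management described above rather than at Spark itself.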
