Description
When executing a simple Spark SQL query that should submit only a single job, the job completes successfully but the application fails to terminate. Instead, it continuously spawns new jobs in a repetitive cycle without exiting.
Reproduction Steps
- Submit a simple Spark SQL query (e.g., SELECT * FROM table WHERE condition = '432')
- Observe the job execution in the Aurora interface
- Note that after the initial job succeeds, new jobs are continuously spawned
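The query above can be submitted from the command line with the spark-sql CLI. This is a hypothetical reproduction sketch: the master URL, table name, and filter value are illustrative, not taken from the actual environment.

```shell
# Hypothetical reproduction; master URL, table name, and filter value
# are illustrative placeholders.
spark-sql --master yarn -e "SELECT * FROM table WHERE condition = '432'"
# Expected: the query returns its rows and the process exits.
# Observed: new jobs keep being submitted and the process never terminates.
```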
Expected Behavior
- The Spark SQL job should terminate normally after successful execution
- No additional jobs should be spawned after the initial job completes
Actual Behavior
- The job continues to run indefinitely, repeatedly spawning new jobs
- The system does not exit or terminate as expected
Screenshot/Logs
(Screenshot not included.)
- The screenshot shows job ID 3 with a list of successful job IDs, while job ID [576] is still running
- Multiple successful job IDs indicate repeated execution
Environment Information
- Apache Aurora version: 7.0.0
- Spark version: 3.2.4
- Hadoop version: 3.0.0
Additional Context
This issue occurs with simple queries that should only generate a single job, suggesting a potential termination or cleanup problem in the job lifecycle management.