We should be able to support concurrent runs of a Spark program.
Currently, we only allow concurrent runs for Workflow and MapReduce programs.
A workaround is to run the Spark program within a Workflow, which does support concurrent runs.
Relevant documentation will need updating as part of this JIRA:
Feature to support concurrent runs of a Spark program
That JIRA is about supporting more than one Spark job at a time (i.e., two different Spark programs running concurrently).
This JIRA is about supporting two concurrent runs of the same Spark program.
The restriction is artificial, enforced in ProgramLifecycleService, and a single-line change would fix it.
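As a rough illustration of what such a single-line change might look like, here is a minimal sketch. It assumes the restriction is implemented as a whitelist of program types allowed to run concurrently; the class, enum, and set names below are hypothetical, not the actual CDAP source.

```java
import java.util.EnumSet;
import java.util.Set;

public class ConcurrentRunCheck {

    // Hypothetical stand-in for CDAP's ProgramType enum.
    public enum ProgramType { WORKFLOW, MAPREDUCE, SPARK, SERVICE }

    // Assumption: ProgramLifecycleService keeps a whitelist like this.
    // The "single-line change" would be adding SPARK to the set,
    // as done here.
    private static final Set<ProgramType> CONCURRENT_RUN_ALLOWED =
            EnumSet.of(ProgramType.WORKFLOW, ProgramType.MAPREDUCE, ProgramType.SPARK);

    // Returns true if a new run may start while another run
    // of the same program is already active.
    public static boolean isConcurrentRunAllowed(ProgramType type) {
        return CONCURRENT_RUN_ALLOWED.contains(type);
    }
}
```

With SPARK in the whitelist, a second run request for an already-running Spark program would pass the check instead of being rejected.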