Currently there are plugins that get the Hadoop Job object and use it to set properties. This makes those plugins non-reusable when the execution engine is switched to Spark. We should use InputFormatProvider / OutputFormatProvider instead.
The getHadoopJob() method is already deprecated.
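The declarative alternative can be sketched as follows. This is a minimal illustration, not the actual CDAP API: the `InputFormatProvider` interface is reproduced here in simplified form so the example compiles standalone, and the `ExampleTableSource` plugin and its `example.table.name` property are hypothetical. The point is that the plugin returns its input format class name and configuration as plain data, which either a MapReduce or a Spark engine can consume, instead of mutating a Hadoop Job directly.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for CDAP's InputFormatProvider interface,
// inlined here so the sketch compiles without CDAP on the classpath.
interface InputFormatProvider {
    String getInputFormatClassName();
    Map<String, String> getInputFormatConfiguration();
}

// Hypothetical source plugin: instead of calling getHadoopJob() and setting
// properties on the Job, it declares its input format and configuration.
// The engine copies the returned properties into its own job configuration.
public class ExampleTableSource implements InputFormatProvider {
    private final String tableName;

    public ExampleTableSource(String tableName) {
        this.tableName = tableName;
    }

    @Override
    public String getInputFormatClassName() {
        // Fully qualified name of the InputFormat the engine should instantiate.
        return "org.apache.hadoop.mapreduce.lib.input.TextInputFormat";
    }

    @Override
    public Map<String, String> getInputFormatConfiguration() {
        // Plain properties replace direct Job mutation, so the same plugin
        // works regardless of which execution engine runs the pipeline.
        Map<String, String> conf = new HashMap<>();
        conf.put("example.table.name", tableName);
        return conf;
    }

    public static void main(String[] args) {
        InputFormatProvider provider = new ExampleTableSource("events");
        System.out.println(provider.getInputFormatClassName());
        System.out.println(provider.getInputFormatConfiguration());
    }
}
```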
Need to verify that Hive in secure mode still works after this change.
We may need to introduce a new method in the data pipeline API that allows a plugin to add delegation tokens for the execution.
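One possible shape for that new method is sketched below. Everything here is an assumption, not existing API: the `DelegationTokenProvider` interface, the `addDelegationTokens` method name, and the `SecureHiveSource` plugin are hypothetical, and `Credentials` is a minimal placeholder for Hadoop's `org.apache.hadoop.security.Credentials` so the sketch compiles standalone.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal placeholder for Hadoop's org.apache.hadoop.security.Credentials,
// stubbed so this sketch compiles without Hadoop on the classpath.
class Credentials {
    private final List<String> tokenAliases = new ArrayList<>();
    void addToken(String alias) { tokenAliases.add(alias); }
    int numberOfTokens() { return tokenAliases.size(); }
}

// Hypothetical extension point: a plugin that needs extra delegation tokens
// (e.g. for Hive in secure mode) implements this interface rather than
// reaching for the Hadoop Job object.
interface DelegationTokenProvider {
    void addDelegationTokens(Credentials credentials);
}

public class SecureHiveSource implements DelegationTokenProvider {
    @Override
    public void addDelegationTokens(Credentials credentials) {
        // A real plugin would obtain a Hive metastore delegation token here;
        // we only record an alias to illustrate the contract.
        credentials.addToken("hive.metastore.token");
    }

    public static void main(String[] args) {
        Credentials creds = new Credentials();
        new SecureHiveSource().addDelegationTokens(creds);
        System.out.println(creds.numberOfTokens()); // prints 1
    }
}
```

The engine would collect credentials from every plugin implementing this interface before launching the job, so token acquisition stays engine-agnostic.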