...

  1. A developer should be able to create pipelines that contain aggregations (GROUP BY -> count/sum/unique; see the sketch after this list)
  2. A developer should be able to create a pipeline with multiple sources
  3. A developer should be able to use a Spark ML job as a pipeline stage
  4. A developer should be able to rerun failed pipeline runs without reconfiguring the pipeline
  5. A developer should be able to de-duplicate records in a pipeline
  6. A developer should be able to join multiple branches of a pipeline
  7. A developer should be able to use an Explore action as a pipeline stage
  8. A developer should be able to create pipelines that contain Spark Streaming jobs
  9. A developer should be able to create pipelines that run based on various conditions, including input data availability and Kafka events
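
The sketch below is a minimal, hypothetical illustration of requirements 1, 5, and 6 expressed as plain Spark (Scala) operations rather than the pipeline framework's own stage API. The dataset paths and column names (orders.json, customers.json, order_id, customer_id, amount, product_id) are assumptions made only for this example.

```scala
// Illustrative only: approximates an aggregation (GROUP BY -> count/sum/unique),
// de-duplication, and a join of two branches using plain Spark SQL operations.
// All paths and column names are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, count, countDistinct, sum}

object PipelineSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("pipeline-sketch").getOrCreate()

    // Two sources feeding separate branches of the pipeline (requirement 2).
    val orders    = spark.read.json("orders.json")
    val customers = spark.read.json("customers.json")

    // De-duplicate records on a key column (requirement 5).
    val dedupedOrders = orders.dropDuplicates("order_id")

    // Aggregation stage: GROUP BY with count, sum, and unique (distinct) counts (requirement 1).
    val perCustomer = dedupedOrders
      .groupBy("customer_id")
      .agg(
        count(col("order_id")).as("order_count"),
        sum(col("amount")).as("total_amount"),
        countDistinct(col("product_id")).as("unique_products"))

    // Join the aggregated branch with the customers branch (requirement 6).
    val enriched = perCustomer.join(customers, Seq("customer_id"))

    enriched.write.mode("overwrite").parquet("customer_summary")
    spark.stop()
  }
}
```

In the pipeline framework itself these steps would presumably be configured as separate stages (sources, deduplicator, group-by aggregator, joiner, sink) rather than written by hand; the sketch only shows the data-flow semantics the requirements describe.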

...