NOTE: This is a working draft, to replace another page. Do not edit or delete.

This document summarizes basic CDAP use cases. These basic tests are to be performed in the CDAP UI before creating a PR.

Updated as of CDAP 3.4.0.

Functional Tests

Use Case 1: How a Purchase is tracked and processed

This use case walks through the developer section of the CDAP UI to test how a purchase history app is supposed to be used.

These tests check that Apps, Flows, MapReduce and Spark Programs, Services, Workflows, Datasets, Streams, and Explorer work fine for the base use-cases.

Objective: In summary, we are testing the following: we have a flow through which we can inject events; the flow writes them to a dataset; a Workflow/MapReduce reads from that dataset, processes it, and writes the results to another dataset; and a Service helps us view the data (we could do the same thing with Explorer, too). Here, the purchases dataset stores all purchases made by the user, and the history dataset stores the history of purchases made by the user.

Testing a Flow:

  1. Deploy the PurchaseHistory app.
  2. Go to the app's detailed view.
  3. Go to PurchaseFlow.
  4. Start the flow.
  5. Inject events into the flow's stream from the CDAP UI: it should show the count of events on the stream flowlet (see the sketch after this list for a terminal alternative).
  6. See if the events flow through all the flowlets and reach the collector.
  7. Stop the flow.
  8. Go to the Datasets tab: it should show the datasets.
  9. Go to the History tab: it should show the run we just started.
  10. Go to the purchases dataset: the schema page should show storage as a few bytes, since we just added some events to the stream.
  11. Go to the Explore tab and execute the default "select *" query: the results table in the bottom section should show the events we injected.
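As an alternative to the UI-based injection in step 5, events can be sent from a terminal. A minimal sketch using the CDAP v3 REST API, assuming a local standalone CDAP at localhost:10000 (the default router port) and the example's purchaseStream:

   # Inject a purchase event into the stream feeding PurchaseFlow
   curl -w"\n" -X POST \
     "http://localhost:10000/v3/namespaces/default/streams/purchaseStream" \
     -d "Alice bought 3 apples for \$30"

The event count on the stream flowlet should increment just as it does with UI-based injection.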

GIF demonstrating these steps: TestingFlow

Testing a Workflow/MapReduce:

  1. Go to PurchaseHistoryWorkflow.
  2. Start the workflow: this should pick up the events injected into the stream (see the sketch after this list for a terminal alternative).
  3. The MapReduce program should run fine: it initially has a green border and, once completed, is shaded green to indicate success.
  4. Click the MapReduce program PurchaseHistoryBuilder to go to the program and check its status: it should show the status as completed, and switching between mappers and reducers should show proper metrics (Distributed CDAP only).
  5. Hit back; it should return to the workflow run view.
  6. Go to the history dataset: the status page should show storage as a few bytes.
  7. Exploring the dataset should show the history of purchases made by the user (Explore tab; execute a query on the dataset).
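Step 2 can also be driven over REST. A minimal sketch, assuming the same local instance and the app deployed as PurchaseHistory:

   # Start PurchaseHistoryWorkflow programmatically
   curl -X POST \
     "http://localhost:10000/v3/namespaces/default/apps/PurchaseHistory/workflows/PurchaseHistoryWorkflow/start"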


GIF demonstrating these steps: TestingWorkflowMR.gif

Testing a Service:

Service Use Case 1:

  1. Go to PurchaseHistoryService and start it.
  2. Make a request to the "/history/{customer}" endpoint, using the same customer that we referred to in our stream injection (a curl sketch follows this list).
  3. It should show the list of purchases the user has made.
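A minimal curl sketch of step 2, assuming a local instance at localhost:10000 and the customer "Alice" from the injected events:

   # Query the purchase history for customer Alice
   curl -w"\n" \
     "http://localhost:10000/v3/namespaces/default/apps/PurchaseHistory/services/PurchaseHistoryService/methods/history/Alice"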

GIF demonstrating these steps: TestingService.gif

Service Use Case 2:

  1. Go to UserProfileService and start it.
  2. Make a POST call to the "/user/{id}" endpoint with the following JSON (a curl sketch follows this list):

      {
       "id":"Alice",
       "firstName":"Alice",
       "lastName":"Bernard",
       "categories":["fruits"]
      }

  3. Go to the flow and inject events in the name of Alice.
  4. Go to PurchaseHistoryWorkflow, start it, and wait until it completes successfully.
  5. Go to PurchaseHistoryService again and make the same GET request as we did above, "/history/{customer}", using the customer "Alice".
  6. We should be able to see the user profile in addition to the purchase history information in the response.
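The POST in step 2 can be made with curl. A minimal sketch, assuming a local instance at localhost:10000:

   # Create a user profile for Alice via UserProfileService
   curl -w"\n" -X POST \
     "http://localhost:10000/v3/namespaces/default/apps/PurchaseHistory/services/UserProfileService/methods/user/Alice" \
     -d '{ "id":"Alice", "firstName":"Alice", "lastName":"Bernard", "categories":["fruits"] }'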

GIF demonstrating these steps: TestingService.gif

Testing a Spark Program:

  1. Deploy the SparkPageRank app.
  2. Start SparkPageRankService.
  3. Inject data by running: bin/cdap-cli.sh load stream backlinkURLStream examples/SparkPageRank/resources/urlpairs.txt
  4. Start RanksService and TotalPagesPR.
  5. Click PageRankWorkflow to get to the workflow detail page, set the runtime arguments using spark.SparkPageRankProgram.args as the key and 3 as the value, then click the Start button (see the sketch after this list for a terminal alternative).
  6. Go to the PageRankSpark program.
  7. You should see the metrics ("Storage", "Stages") being updated on the page.
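The runtime arguments in step 5 can also be supplied over REST when starting the workflow. A minimal sketch, assuming a local instance at localhost:10000 and the app deployed under the name SparkPageRank:

   # Start PageRankWorkflow, passing the iteration count as a runtime argument
   curl -X POST \
     "http://localhost:10000/v3/namespaces/default/apps/SparkPageRank/workflows/PageRankWorkflow/start" \
     -d '{ "spark.SparkPageRankProgram.args": "3" }'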




Use Case 2: How a Pipeline works

These base cases should work. If not, something is wrong, and the UI should say what the error is.

Objective: See if a pipeline can convert a stream in CSV format to a TPFSAvro dataset that we can use internally anywhere.

Testing Pipeline Creation

  1. Click "Add Application" in CDAP UI Home page and select "Hydrator Pipeline"
  2. Choose the pipeline type, "Batch" or "Realtime"
  3. "Batch" pipeline
    1. Give pipeline a name: "BatchTest" 
    2. Setup Source: a Stream source, click in left sidebar
      1. Give Stream a name: "BatchTestStream"
      2. Set Duration to 1m
      3. Set Delay to "0"
      4. Set Format to "text"
      5. Set Schema:
        1. Remove all existing
        2. Add a "body" of  type string
    3. Setup a Transform: Projection transform
      1. Fields to Drop: 
        1. headers
    4. Setup Sink: a TPFSAvro sink
      1. Give Dataset a name: "BatchTestDataset"
      2. Set Schema:
        1. ts (type long)
        2. body (type string)
  4. Setup a Transform - Projection transform
    1. Fields to Drop: 
      1. headers
    1. Schedule it for every 5 mins
    (If its ETLBatch adapter)Publish the adapter.
    1. : enter in the "Pipeline Configuration": "Cron Expression", under "Min": "0/5"
    2. Save, Validate, and then Publish the pipeline
    3. This base case should work. If not, something is wrong

    7. Once the pipeline is created, send one or more events to the stream using the CDAP UI
    8. Either start the pipeline manually or wait until the pipeline runs on its schedule
    9. Every 5 minutes, the dataset associated with the pipeline should be injected with the data you sent through the stream:
      1. Send some events to the stream you created
      2. Wait for 5 minutes
      3. Explore the dataset sink; you should see the events you sent to the stream (a terminal sketch follows this list)
  4. "Realtime" pipeline

 

GIFs explaining the above steps: AdapterTest1.gif and TestingAdapter2.gif


TODO: For metrics, we need a basic test case.

Once the above-mentioned steps work, push the code to two different clusters, a "secure" and a "non-secure" cluster (beamer software install cluster_id cdap-ui: it should take about 5 minutes to beam the code to a cluster).

Once the cluster is up and running, we should provide the cluster URL and a GIF of our test. This helps the reviewer trust that the feature/bug fix works, and they can then start reviewing the code.

Behavioral Tests


This is more of an open-ended section: it depends on the user/developer to test their UI extensively. This needs more thought and automated tests to run.