Lifecycle Microservices

Use the CDAP Lifecycle Microservices to deploy or delete applications and manage the lifecycle of MapReduce and Spark programs, custom services, workers, and workflows.

For more information about CDAP components, see CDAP Components.

All methods or endpoints described in this API have a base URL (typically http://<host>:11015 or https://<host>:10443) that precedes the resource identifier, as described in the Microservices Conventions. These methods return a status code, as listed in the Microservices Status Codes.

Application Lifecycle

Create an Application

To create an application, submit an HTTP PUT request:

PUT /v3/namespaces/<namespace-id>/apps/<app-id>

To create an application with a non-default version, submit an HTTP POST request with the version specified:

POST /v3/namespaces/<namespace-id>/apps/<app-id>/versions/<version-id>/create

Parameter      Description
namespace-id   Namespace ID
app-id         Name of the application
version-id     Version of the application, typically following semantic versioning. Defaults to -SNAPSHOT when you don't specify a version-id.

The request body is a JSON object specifying the artifact to use to create the application, and an optional application configuration. For example:

PUT /v3/namespaces/default/apps/purchaseWordCount
{
  "artifact": {
    "name": "WordCount",
    "version": "6.3.0",
    "scope": "USER"
  },
  "config": {
    "datasetName": "purchases"
  },
  "principal": "user/example.net@EXAMPLEKDC.NET",
  "app.deploy.update.schedules": "true"
}

will create an application named purchaseWordCount from the example WordCount artifact. The application will receive the specified config, which will configure the application to create a dataset named purchases instead of using the default dataset name.

Optionally, you can specify a Kerberos principal with which the application should be deployed. If a Kerberos principal is specified, then all the datasets created by the application will be created with the application's Kerberos principal.

Optionally, you can set or reset the flag app.deploy.update.schedules. If true, redeploying an application will modify any schedules that currently exist for the application; if false, redeploying an application does not create any new schedules and existing schedules are neither deleted nor updated.
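As a sketch, the request above can be assembled programmatically. The endpoint and body fields come from this section; the helper name `create_app_request` is illustrative, and sending the request with an HTTP client is left to you:

```python
import json

def create_app_request(namespace, app, artifact, config=None,
                       principal=None, update_schedules=None):
    """Build the PUT request for creating an application from an artifact.

    Returns (method, path, body) without sending anything.
    """
    body = {"artifact": artifact}
    if config is not None:
        body["config"] = config
    if principal is not None:
        # Datasets created by the app will use this Kerberos principal
        body["principal"] = principal
    if update_schedules is not None:
        # "true" modifies existing schedules on redeploy; "false" leaves them as-is
        body["app.deploy.update.schedules"] = "true" if update_schedules else "false"
    return "PUT", f"/v3/namespaces/{namespace}/apps/{app}", json.dumps(body)

method, path, body = create_app_request(
    "default", "purchaseWordCount",
    artifact={"name": "WordCount", "version": "6.3.0", "scope": "USER"},
    config={"datasetName": "purchases"},
    principal="user/example.net@EXAMPLEKDC.NET",
    update_schedules=True)
```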

Update an Application

To update an application, submit an HTTP POST request:

POST /v3/namespaces/<namespace-id>/apps/<app-id>/update

Parameter      Description
namespace-id   Namespace ID
app-id         Name of the application

The request body is a JSON object specifying the updated artifact version and the updated application config. For example, a request body of:

POST /v3/namespaces/default/apps/purchaseWordCount/update
{
  "artifact": {
    "name": "WordCount",
    "version": "6.3.0",
    "scope": "USER"
  },
  "config": {
    "datasetName": "logs"
  },
  "principal": "user/example.net@EXAMPLEKDC.NET"
}

will update the purchaseWordCount application to use version 6.3.0 of the WordCount artifact, and update the name of the dataset to logs. If no artifact is given, the current artifact will be used.

Only changes to artifact version are supported; changes to the artifact name are not allowed. If no config is given, the current config will be used. If the config key is present, the current config will be overwritten by the config specified in the request. As the principal of an application cannot be updated, during an update the principal should either be the same or absent.
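The update rules above can be captured in a small validation sketch. The helper `validate_update` and its dict shapes are illustrative, not part of the CDAP API:

```python
def validate_update(current, requested):
    """Check an update request against the rules for POST .../apps/<app-id>/update.

    `current` and `requested` are dicts with optional "artifact", "config",
    and "principal" keys. Returns the effective artifact and config.
    """
    cur_art = current.get("artifact")
    req_art = requested.get("artifact")
    # Only the artifact version may change; the artifact name may not
    if req_art is not None and req_art["name"] != cur_art["name"]:
        raise ValueError("changes to the artifact name are not allowed")
    # The principal cannot be updated: it must be absent or identical
    if "principal" in requested and requested["principal"] != current.get("principal"):
        raise ValueError("the principal of an application cannot be updated")
    # If no artifact is given, the current artifact is used
    effective_artifact = req_art if req_art is not None else cur_art
    # If "config" is present, it overwrites the current config entirely
    effective_config = requested["config"] if "config" in requested else current.get("config")
    return effective_artifact, effective_config

art, cfg = validate_update(
    {"artifact": {"name": "WordCount", "version": "6.2.0"},
     "config": {"datasetName": "purchases"}},
    {"artifact": {"name": "WordCount", "version": "6.3.0"},
     "config": {"datasetName": "logs"}})
```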

Deploy an Artifact and Application

To deploy an application from your local file system into the namespace namespace-id, submit an HTTP POST request:

POST /v3/namespaces/<namespace-id>/apps

with the name of the JAR file as a header:

X-Archive-Name: <JAR filename>

and Kerberos principal with which the application is to be deployed (if required):

X-Principal: <Kerberos Principal>

and enable or disable updating schedules of the existing workflows using the header:

X-App-Deploy-Update-Schedules: <Update Schedules>

This will add the JAR file as an artifact and then create an application from that artifact. The archive name must be in the form <artifact-name>-<artifact-version>.jar. An optional header can supply a configuration object as a serialized JSON string:

X-App-Config: <JSON Serialization String of the Configuration Object>

The application's content is the body of the request:

<JAR binary content>

Invoke the same command to update an application to a newer version. However, be sure to stop all of its Spark and MapReduce programs before updating the application.

For an application that has a configuration class such as:

public static class MyAppConfig extends Config {
  String datasetName;
}

We can deploy it with this call:

POST /v3/namespaces/<namespace-id>/apps \
  -H "X-Archive-Name: <jar-name>" \
  -H "X-Principal: <kerberos-principal>" \
  -H "X-App-Deploy-Update-Schedules: true" \
  -H 'X-App-Config: {"datasetName": "<dataset-name>"}' \
  --data-binary "@<jar-location>"

Note: The X-App-Config header contains the JSON serialization string of the MyAppConfig object.
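A sketch of assembling the deployment headers; only the header names shown in this section are used, and the helper `deploy_headers` is illustrative:

```python
def deploy_headers(jar_name, principal=None, update_schedules=None, app_config=None):
    """Assemble the headers for POST /v3/namespaces/<namespace-id>/apps.

    The JAR's binary content goes in the request body; `app_config` is a dict
    serialized to the JSON string that the X-App-Config header expects.
    """
    import json
    # Archive name must follow <artifact-name>-<artifact-version>.jar
    headers = {"X-Archive-Name": jar_name}
    if principal is not None:
        headers["X-Principal"] = principal
    if update_schedules is not None:
        headers["X-App-Deploy-Update-Schedules"] = "true" if update_schedules else "false"
    if app_config is not None:
        headers["X-App-Config"] = json.dumps(app_config)
    return headers

headers = deploy_headers("WordCount-6.3.0.jar",
                         update_schedules=True,
                         app_config={"datasetName": "purchases"})
```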

List Applications

To list all of the applications in the namespace namespace-id, issue an HTTP GET request:

GET /v3/namespaces/<namespace-id>/apps[?artifactName=<artifact-names>[&artifactVersion=<artifact-version>]]

Parameter        Description
namespace-id     Namespace ID
artifactName     Optional filter to list all applications that use the specified artifact name. Valid values are cdap-data-pipeline, cdap-data-streams, and delta-app.
artifactVersion  Optional filter. The version of the artifact given in the artifactName parameter; it differs depending on the artifact. To get the list of versions, use the Artifact Microservices.

This will return a JSON String map that lists each application with its name, description, and artifact. The list can optionally be filtered by one or more artifact names. It can also be filtered by artifact version. For example:

GET /v3/namespaces/<namespace-id>/apps?artifactName=cdap-data-pipeline,cdap-data-streams,delta-app

will return all applications that use either the cdap-data-pipeline, cdap-data-streams, or delta-app artifacts.
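A sketch of building the list-applications path with the optional filters; the helper name is illustrative:

```python
from urllib.parse import urlencode

def list_apps_path(namespace, artifact_names=None, artifact_version=None):
    """Build the GET path for listing applications, with optional artifact filters."""
    path = f"/v3/namespaces/{namespace}/apps"
    params = {}
    if artifact_names:
        # Multiple artifact names are passed as a comma-separated list
        params["artifactName"] = ",".join(artifact_names)
    if artifact_version:
        params["artifactVersion"] = artifact_version
    if params:
        # safe="," keeps the comma-separated artifact names readable
        path += "?" + urlencode(params, safe=",")
    return path

path = list_apps_path("default",
                      ["cdap-data-pipeline", "cdap-data-streams", "delta-app"])
```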

Details of an Application

For detailed information on an application in a namespace namespace-id, use:

GET /v3/namespaces/<namespace-id>/apps/<app-id>

Parameter      Description
namespace-id   Namespace ID
app-id         Name of the application

Note: To get the creation time of an application and other types of metadata, see Metadata Microservices.

The information will be returned in the body of the response. It includes the name and description of the application; the artifact and datasets that it uses; all of its programs; and the Kerberos principal, if one was provided during deployment. For example:

{ "name": "POS_Sales_per_Region", "appVersion": "-SNAPSHOT", "description": "Data Pipeline Application", "configuration": "{\"resources\":{\"memoryMB\":2048.0,\"virtualCores\":1.0},\"driverResources\":{\"memoryMB\":2048.0,\"virtualCores\":1.0},\"connections\":[{\"from\":\"GCS - POS Sales\",\"to\":\"Wrangler\"},{\"from\":\"Wrangler\",\"to\":\"GCS2\"}],\"comments\":[],\"postActions\":[],\"properties\":{},\"processTimingEnabled\":true,\"stageLoggingEnabled\":false,\"stages\":[{\"name\":\"GCS - POS Sales\",\"plugin\":{\"name\":\"GCSFile\",\"type\":\"batchsource\",\"label\":\"GCS - POS Sales\",\"artifact\":{\"name\":\"google-cloud\",\"version\":\"0.15.3\",\"scope\":\"SYSTEM\"},\"properties\":{\"project\":\"auto-detect\",\"format\":\"text\",\"skipHeader\":\"false\",\"serviceFilePath\":\"auto-detect\",\"filenameOnly\":\"false\",\"recursive\":\"false\",\"encrypted\":\"false\",\"schema\":\"{\\\"type\\\":\\\"record\\\",\\\"name\\\":\\\"etlSchemaBody\\\",\\\"fields\\\":[{\\\"name\\\":\\\"offset\\\",\\\"type\\\":\\\"long\\\"},{\\\"name\\\":\\\"body\\\",\\\"type\\\":\\\"string\\\"}]}\",\"referenceName\":\"pos-sales\",\"path\":\"gs://flat-files-1/POS-r01.txt\"}},\"outputSchema\":\"{\\\"type\\\":\\\"record\\\",\\\"name\\\":\\\"etlSchemaBody\\\",\\\"fields\\\":[{\\\"name\\\":\\\"offset\\\",\\\"type\\\":\\\"long\\\"},{\\\"name\\\":\\\"body\\\",\\\"type\\\":\\\"string\\\"}]}\",\"id\":\"GCS---POS-Sales\"},{\"name\":\"Wrangler\",\"plugin\":{\"name\":\"Wrangler\",\"type\":\"transform\",\"label\":\"Wrangler\",\"artifact\":{\"name\":\"wrangler-transform\",\"version\":\"4.2.3\",\"scope\":\"SYSTEM\"},\"properties\":{\"field\":\"*\",\"precondition\":\"false\",\"threshold\":\"1\",\"schema\":\"{\\\"type\\\":\\\"record\\\",\\\"name\\\":\\\"etlSchemaBody\\\",\\\"fields\\\":[{\\\"name\\\":\\\"Store_Nbr\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]}
,{\\\"name\\\":\\\"Item_Nbr\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"WM_Week\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Daily\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Whse_Nbr\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Whse_Name\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"POS_Sales\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"POS_Qty\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"POS_Cost\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Net_Ship_Qty\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Sales_Type\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Sales_Description\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Max_Shelf_Qty\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Store_Specific_Retail\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Store_Specific_Cost\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Current_HO_Retail\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]}]}\",\"workspaceId\":\"af95f757-a2d8-4efb-90b0-fad0ff2a543b\",\"directives\":\"parse-as-csv :body \\u0027,\\u0027 true\\ndrop 
body\"}},\"outputSchema\":\"{\\\"type\\\":\\\"record\\\",\\\"name\\\":\\\"etlSchemaBody\\\",\\\"fields\\\":[{\\\"name\\\":\\\"Store_Nbr\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Item_Nbr\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"WM_Week\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Daily\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Whse_Nbr\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Whse_Name\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"POS_Sales\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"POS_Qty\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"POS_Cost\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Net_Ship_Qty\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Sales_Type\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Sales_Description\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Max_Shelf_Qty\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Store_Specific_Retail\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Store_Specific_Cost\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Current_HO_Retail\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]}]}\",\"inputSchema\":[{\"name\":\"GCS - POS 
Sales\",\"schema\":\"{\\\"type\\\":\\\"record\\\",\\\"name\\\":\\\"etlSchemaBody\\\",\\\"fields\\\":[{\\\"name\\\":\\\"offset\\\",\\\"type\\\":\\\"long\\\"},{\\\"name\\\":\\\"body\\\",\\\"type\\\":\\\"string\\\"}]}\"}],\"id\":\"Wrangler\"},{\"name\":\"GCS2\",\"plugin\":{\"name\":\"GCS\",\"type\":\"batchsink\",\"label\":\"GCS2\",\"artifact\":{\"name\":\"google-cloud\",\"version\":\"0.15.3\",\"scope\":\"SYSTEM\"},\"properties\":{\"project\":\"auto-detect\",\"suffix\":\"yyyy-MM-dd-HH-mm\",\"format\":\"csv\",\"serviceFilePath\":\"auto-detect\",\"location\":\"us\",\"referenceName\":\"pos-sales-per-region\",\"path\":\"gs://flat-files-1\"}},\"inputSchema\":[{\"name\":\"Wrangler\",\"schema\":\"{\\\"type\\\":\\\"record\\\",\\\"name\\\":\\\"etlSchemaBody\\\",\\\"fields\\\":[{\\\"name\\\":\\\"Store_Nbr\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Item_Nbr\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"WM_Week\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Daily\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Whse_Nbr\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Whse_Name\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"POS_Sales\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"POS_Qty\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"POS_Cost\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Net_Ship_Qty\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Sales_Type\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Sales_Description\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Max_Shelf_Qty\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Store_Specific_Retail\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Store_Specific_Cost\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Current_HO_Retail\\\",\\\"type\\\":[\\\"
string\\\",\\\"null\\\"]}]}\"}],\"id\":\"GCS2\"}],\"schedule\":\"0 * * * *\",\"engine\":\"spark\",\"numOfRecordsPreview\":100.0,\"description\":\"Data Pipeline Application\",\"maxConcurrentRuns\":1.0}", "datasets": [], "programs": [ { "type": "Spark", "app": "POS_Sales_per_Region", "name": "phase-1", "description": "Sources 'GCS - POS Sales' to sinks 'GCS2'." }, { "type": "Workflow", "app": "POS_Sales_per_Region", "name": "DataPipelineWorkflow", "description": "Data Pipeline Workflow" } ], "plugins": [ { "id": "GCS - POS Sales", "name": "GCSFile", "type": "batchsource" }, { "id": "GCS2:csv", "name": "csv", "type": "validatingOutputFormat" }, { "id": "GCS - POS Sales:text", "name": "text", "type": "validatingInputFormat" }, { "id": "GCS2", "name": "GCS", "type": "batchsink" }, { "id": "Wrangler", "name": "Wrangler", "type": "transform" } ], "artifact": { "name": "cdap-data-pipeline", "version": "6.2.3", "scope": "SYSTEM" } }

HTTP Responses

Status Codes   Description
200 OK         The event successfully called the method, and the body contains the results

Details of a List of Applications

For detailed information about multiple applications that have been deployed in a namespace, use:

GET /v3/namespaces/<namespace-id>/apps

Parameter      Description
namespace-id   Namespace ID

The response will contain a list of application details containing name, application version, description, configuration, datasets, plugins, programs, and artifacts. For example:

[
  {
    "type": "App",
    "name": "POS_Sales_per_Region",
    "version": "1.0.0",
    "description": "Data Pipeline Application",
    "artifact": {
      "name": "cdap-data-pipeline",
      "version": "6.2.3",
      "scope": "SYSTEM"
    }
  },
  {
    "type": "App",
    "name": "POS_Sales_per_Region_v1",
    "version": "1.0.1",
    "description": "Data Pipeline Application",
    "artifact": {
      "name": "cdap-data-pipeline",
      "version": "6.2.3",
      "scope": "SYSTEM"
    }
  }
]

Upgrade an Application

Notes:

  • Upgrading real-time pipelines (cdap-data-streams) to use the latest version of application artifacts is not supported.

  • Back up all applications before performing the upgrade.

  • To get the name of the application you want to upgrade, use the GET request listed in “List Applications”.

To upgrade an application in a namespace to use the latest version of application artifacts and plugin artifacts, run the following POST request:

POST /v3/namespaces/<namespace-id>/apps/<app-id>/upgrade

Parameter      Description
namespace-id   Namespace ID
app-id         Name of the application
artifactScope  Optional scope filter. If not specified, artifacts in the USER and SYSTEM scopes are upgraded. Otherwise, only artifacts in the specified scope are upgraded.
allowSnapshot  Optional filter that allows SNAPSHOT versions of artifacts for the upgrade. Set to true to allow SNAPSHOT versions; set to false to ignore them. Default is false.

The response will contain a list of application details containing name, application version, namespace, and entity. For example:

POST /v3/namespaces/default/apps/purchaseWordCount/upgrade
{
  "statusCode": 200,
  "appId": {
    "application": "purchaseWordCount",
    "version": "-SNAPSHOT",
    "namespace": "default",
    "entity": "APPLICATION"
  }
}
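Assuming artifactScope and allowSnapshot are supplied as query parameters on the POST, the upgrade path can be sketched as follows; the helper name is illustrative:

```python
def upgrade_app_path(namespace, app, artifact_scope=None, allow_snapshot=None):
    """Build the POST path for upgrading one application, with optional filters."""
    path = f"/v3/namespaces/{namespace}/apps/{app}/upgrade"
    params = []
    if artifact_scope is not None:
        # Restricts the upgrade to artifacts in one scope (USER or SYSTEM)
        params.append(f"artifactScope={artifact_scope}")
    if allow_snapshot is not None:
        # Defaults to false on the server when omitted
        params.append("allowSnapshot=" + ("true" if allow_snapshot else "false"))
    return path + ("?" + "&".join(params) if params else "")

path = upgrade_app_path("default", "purchaseWordCount",
                        artifact_scope="SYSTEM", allow_snapshot=False)
```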

Upgrade a List of Applications

Notes:

  • Upgrading real-time pipelines to use the latest version of application artifacts is not supported.

  • Back up all applications before performing the upgrade.

  • To get a list of all the applications you want to upgrade to use the latest version of application artifacts and artifact plugins, use the GET request listed in “Details of a List of Applications”.

To upgrade a list of existing applications in a namespace to use the latest version of application artifacts and plugin artifacts, run the following POST request:

POST /v3/namespaces/<namespace-id>/upgrade

Parameter      Description
namespace-id   Namespace ID
artifactScope  Optional scope filter. If not specified, artifacts in the USER and SYSTEM scopes are upgraded. Otherwise, only artifacts in the specified scope are upgraded.
allowSnapshot  Optional filter that allows SNAPSHOT versions of artifacts for the upgrade. Set to true to allow SNAPSHOT versions; set to false to ignore them. Default is false.

The request body is a JSON array listing the applications to upgrade.

For example, the following request body will upgrade the listed applications in the default namespace to use the latest version of application artifacts and plugin artifacts.

POST /v3/namespaces/default/upgrade
[
  {
    "type": "App",
    "name": "POS_Sales_per_Region",
    "version": "-SNAPSHOT",
    "description": "POS Sales per Region",
    "artifact": {
      "name": "cdap-data-pipeline",
      "version": "6.1.2",
      "scope": "SYSTEM"
    }
  },
  {
    "type": "App",
    "name": "POS_Daily_Sales_per_Region",
    "version": "-SNAPSHOT",
    "description": "POS Daily Sales per Region",
    "artifact": {
      "name": "cdap-data-pipeline",
      "version": "6.1.2",
      "scope": "SYSTEM"
    }
  }
]

List Versions of an Application

To list all the versions of an application, submit an HTTP GET:

GET /v3/namespaces/<namespace-id>/apps/<app-id>/versions

Parameter      Description
namespace-id   Namespace ID
app-id         Name of the application being called

The response will be a JSON array containing details about the application. The details returned depend on the application.

For example, depending on the versions deployed:

GET /v3/namespaces/default/apps/SportResults/versions

could return in a JSON array a list of the versions of the application:

["1.0.1", "2.0.3"]

Delete an Application

To delete an application, together with all of its MapReduce or Spark programs, schedules, custom services, and workflows, submit an HTTP DELETE:

DELETE /v3/namespaces/<namespace-id>/apps/<app-id>

To delete a specific version of an application, submit an HTTP DELETE that includes the version:

DELETE /v3/namespaces/<namespace-id>/apps/<app-id>/versions/<version-id>

Parameter      Description
namespace-id   Namespace ID
app-id         Name of the application to be deleted
version-id     Version of the application to be deleted

Note: The app-id in this URL is the name of the application as configured by the application specification, and not necessarily the same as the name of the JAR file that was used to deploy the application.

This does not delete the datasets associated with the application because they belong to the namespace, not the application. Also, this does not delete the artifact used to create the application.
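The two DELETE forms can be sketched with one illustrative helper:

```python
def delete_app_path(namespace, app, version=None):
    """Build the DELETE path for an application.

    With `version`, only that version of the application is removed.
    Datasets and the artifact the app was created from are not deleted.
    """
    path = f"/v3/namespaces/{namespace}/apps/{app}"
    if version is not None:
        path += f"/versions/{version}"
    return path

# Delete the whole application, then a specific version
whole = delete_app_path("default", "purchaseWordCount")
one_version = delete_app_path("default", "purchaseWordCount", version="1.0.1")
```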

Delete All Applications

To delete all the applications in a namespace, use:

DELETE /v3/namespaces/<namespace-id>/apps

Export All Application Details

You can export all application details for all namespaces as a ZIP archive file with the following request:

GET /v3/export/apps

If you’re running Linux or Mac, you can use the curl command to get the output and write it to a file:

curl http://localhost:11015/v3/export/apps > outfile.zip

If you’re running Windows and have PowerShell, you can use this command:

powershell -c Invoke-WebRequest http://localhost:11015/v3/export/apps -OutFile ./outfile.zip

These commands write the output to a file called outfile.zip in the directory where you ran the command. outfile.zip contains the JSON files for all of the applications in all namespaces in the CDAP instance.

Program Lifecycle

Details of a Program

After an application is deployed, you can retrieve the details of its MapReduce and Spark programs, custom services, schedules, workers, and workflows by submitting an HTTP GET request:

GET /v3/namespaces/<namespace-id>/apps/<app-id>/<program-type>/<program-id>

To retrieve information about the schedules of the program's workflows, use:

GET /v3/namespaces/<namespace-id>/apps/<app-id>/workflows/<workflow-id>/schedules

Parameter      Description
namespace-id   Namespace ID
app-id         Name of the application being called
program-type   One of mapreduce, services, spark, workers, or workflows
program-id     Name of the MapReduce, custom service, Spark, worker, or workflow being called
workflow-id    Name of the workflow being called, when retrieving schedules

The response will be a JSON array containing details about the program. The details returned depend on the program type.

For example:

GET /v3/namespaces/default/apps/SportResults/services/UploadService

will return in a JSON array information about the UploadService of the application SportResults. The results will be similar to this (pretty-printed and portions deleted to fit):

{
  "className": "io.cdap.cdap.examples.sportresults.UploadService",
  "description": "A service for uploading sport results for a given league and season.",
  "handlers": {
    "UploadHandler": {
      "className": "io.cdap.cdap.examples.sportresults.UploadService$UploadHandler",
      "datasets": [ "results" ],
      "description": "",
      "endpoints": [
        {
          "method": "PUT",
          "path": "/leagues/{league}/seasons/{season}"
        },
        ...
      ],
      "name": "UploadHandler",
      "plugins": {},
      "properties": {}
    }
  },
  "instances": 1,
  "name": "UploadService",
  "plugins": {},
  "resources": {
    "memoryMB": 512,
    "virtualCores": 1
  }
}

MapReduce Jobs Associated with a Namespace

To get a list of MapReduce jobs associated with a namespace, use:

GET /v3/namespaces/<namespace-id>/mapreduce

Parameter      Description
namespace-id   Namespace ID

The response will be a JSON array containing details about the MapReduce program:

Parameter     Description
program-type  One of mapreduce, services, spark, workers, or workflows
app-id        Name of the application being called
program-id    Name of the MapReduce, custom service, Spark, worker, or workflow being called
description   Description of the program

Spark Jobs Associated with a Namespace

To get a list of Spark jobs associated with a namespace, use:

GET /v3/namespaces/<namespace-id>/spark

Parameter      Description
namespace-id   Namespace ID

The response will be a JSON array containing details about the Spark program:

Parameter     Description
program-type  One of mapreduce, services, spark, workers, or workflows
app-id        Name of the application being called
program-id    Name of the MapReduce, custom service, Spark, worker, or workflow being called
description   Description of the program

Workflows Associated with a Namespace

To get a list of workflows associated with a namespace, use:

GET /v3/namespaces/<namespace-id>/workflows

Parameter      Description
namespace-id   Namespace ID

The response will be a JSON array containing details about the workflows:

Parameter     Description
program-type  One of mapreduce, services, spark, workers, or workflows
app-id        Name of the application being called
program-id    Name of the MapReduce, custom service, Spark, worker, or workflow being called
description   Description of the program

Services Associated with a Namespace

To get a list of services associated with a namespace, use:

GET /v3/namespaces/<namespace-id>/services

Parameter      Description
namespace-id   Namespace ID

The response will be a JSON array containing details about the services:

Parameter     Description
program-type  One of mapreduce, services, spark, workers, or workflows
app-id        Name of the application being called
program-id    Name of the MapReduce, custom service, Spark, worker, or workflow being called
description   Description of the program

Workers Associated with a Namespace

To get a list of workers associated with a namespace, use:

GET /v3/namespaces/<namespace-id>/workers

The response will be a JSON array containing details about the workers:

Parameter     Description
program-type  One of mapreduce, services, spark, workers, or workflows
app-id        Name of the application being called
program-id    Name of the MapReduce, custom service, Spark, worker, or workflow being called
description   Description of the program

Start a Program

After an application is deployed, you can start its MapReduce and Spark programs, custom services, workers, or workflows by submitting an HTTP POST request:

POST /v3/namespaces/<namespace-id>/apps/<app-id>/<program-type>/<program-id>/start

You can start a program of a particular version of the application by submitting an HTTP POST request that includes the version:

POST /v3/namespaces/<namespace-id>/apps/<app-id>/versions/<version-id>/<program-type>/<program-id>/start

Note: Concurrent runs of workers across multiple versions of the same application are not allowed.

When starting a program, you can optionally specify runtime arguments as a JSON map in the request body. CDAP will use these runtime arguments only for this single invocation of the program.

Parameter      Description
namespace-id   Namespace ID
app-id         Name of the application being called
version-id     Version of the application being called
program-type   One of mapreduce, services, spark, workers, or workflows
program-id     Name of the MapReduce, custom service, Spark, worker, or workflow being called

Service, Spark, and Worker programs do not allow concurrent program runs. Programs of these types cannot be started unless the program is in the STOPPED state. MapReduce and Workflow programs support concurrent runs. If you start one of these programs, a new run will be started even if other runs of the program have not finished yet.

For example:

POST /v3/namespaces/default/apps/SportResults/services/UploadService/start
'{ "foo":"bar", "this":"that" }'

will start the UploadService of the SportResults application with two runtime arguments.
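A sketch of assembling a start request, including the optional runtime-arguments body; the helper name is illustrative:

```python
import json

def start_program_request(namespace, app, program_type, program,
                          version=None, runtime_args=None):
    """Build the POST request for starting a program.

    Runtime arguments go in the body as a JSON map and apply only to this run.
    Returns (method, path, body); body is None when no arguments are given.
    """
    base = f"/v3/namespaces/{namespace}/apps/{app}"
    if version is not None:
        # Target a program of a particular application version
        base += f"/versions/{version}"
    path = f"{base}/{program_type}/{program}/start"
    body = json.dumps(runtime_args) if runtime_args else None
    return "POST", path, body

method, path, body = start_program_request(
    "default", "SportResults", "services", "UploadService",
    runtime_args={"foo": "bar", "this": "that"})
```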

Start Multiple Programs

You can start multiple programs from different applications and program types by submitting an HTTP POST request:

POST /v3/namespaces/<namespace-id>/start

with a JSON array in the request body consisting of multiple JSON objects with these parameters:

Parameter    Description
appId        Name of the application being called
programType  One of mapreduce, service, spark, worker, or workflow
programId    Name of the MapReduce, custom service, Spark, worker, or workflow being started
runtimeargs  Optional JSON object containing a string-to-string mapping of runtime arguments to start the program with

The response will be a JSON array containing a JSON object for each object in the input. Each JSON object will contain these parameters:

Parameter    Description
appId        Name of the application being called
programType  One of mapreduce, service, spark, worker, or workflow
programId    Name of the MapReduce, custom service, Spark, worker, or workflow being started
statusCode   The status code from starting the individual program
error        If an error, a description of why the program could not be started (for example, the specified program was not found)

For example:

POST /v3/namespaces/default/start
[
  {"appId": "App1", "programType": "Service", "programId": "Service1"},
  {"appId": "App1", "programType": "Spark", "programId": "Spark2"},
  {"appId": "App2", "programType": "Spark", "programId": "Spark1", "runtimeargs": { "arg1": "val1" }}
]

will attempt to start the three programs listed in the request body. It will return a response such as:

[
  {"appId": "App1", "programType": "Service", "programId": "Service1", "statusCode": 200},
  {"appId": "App1", "programType": "Spark", "programId": "Spark2", "statusCode": 200},
  {"appId": "App2", "programType": "Spark", "programId": "Spark1", "statusCode": 404, "error": "App: App2 not found"}
]

In this particular example, the service and Spark programs in the App1 application were successfully started, and there was an error starting the last program because the App2 application does not exist.
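Because each element of the response carries its own statusCode, a client can separate successes from failures without treating the whole batch as failed. An illustrative sketch:

```python
def summarize_batch(results):
    """Split a batch start/stop response into successes and failures.

    `results` is the JSON array returned by the batch endpoint; each element
    has its own statusCode, so one failed program does not mask the others.
    """
    ok = [r["programId"] for r in results if r["statusCode"] == 200]
    failed = {r["programId"]: r.get("error", "")
              for r in results if r["statusCode"] != 200}
    return ok, failed

ok, failed = summarize_batch([
    {"appId": "App1", "programType": "Service", "programId": "Service1", "statusCode": 200},
    {"appId": "App1", "programType": "Spark", "programId": "Spark2", "statusCode": 200},
    {"appId": "App2", "programType": "Spark", "programId": "Spark1",
     "statusCode": 404, "error": "App: App2 not found"},
])
```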

Stop a Program

You can stop the MapReduce and Spark programs, custom services, workers, and workflows of an application by submitting an HTTP POST request:

POST /v3/namespaces/<namespace-id>/apps/<app-id>/<program-type>/<program-id>/stop

You can stop the programs of a particular application version by submitting an HTTP POST request that includes the version:

POST /v3/namespaces/<namespace-id>/apps/<app-id>/versions/<version-id>/<program-type>/<program-id>/stop

Parameter      Description
namespace-id   Namespace ID
app-id         Name of the application being called
version-id     Version of the application being called
program-type   One of mapreduce, services, spark, workers, or workflows
program-id     Name of the MapReduce, custom service, Spark, worker, or workflow being stopped

A program that is in the STOPPED state cannot be stopped. If there are multiple runs of the program in the RUNNING state, this call will stop one of the runs, but not all of the runs.

For example:

POST /v3/namespaces/default/apps/SportResults/services/UploadService/stop

will stop the UploadService service in the SportResults application.

Stop a Program Run

You can stop a specific run of a program by submitting an HTTP POST request:

POST /v3/namespaces/<namespace-id>/apps/<app-id>/<program-type>/<program-id>/runs/<run-id>/stop

Parameter      Description
namespace-id   Namespace ID
app-id         Name of the application being called
program-type   One of mapreduce, services, spark, workers, or workflows
program-id     Name of the MapReduce, custom service, Spark, worker, or workflow being called
run-id         Run ID of the run being called

For example:

POST /v3/namespaces/default/apps/PurchaseHistory/mapreduce/PurchaseHistoryBuilder/runs/631bc459-a9dd-4218-9ea0-d46fb1991f82/stop

will stop a specific run of the PurchaseHistoryBuilder MapReduce program in the PurchaseHistory application.

Stop Multiple Programs

You can stop multiple programs from different applications and program types by submitting an HTTP POST request:

POST /v3/namespaces/<namespace-id>/stop

with a JSON array in the request body consisting of multiple JSON objects with these parameters:

| Parameter | Description |
| --- | --- |
| appId | Name of the application being called |
| programType | One of mapreduce, service, spark, worker, or workflow |
| programId | Name of the MapReduce, custom service, Spark, worker, or workflow being stopped |

The response will be a JSON array containing a JSON object corresponding to each object in the input. Each JSON object will contain these parameters:

| Parameter | Description |
| --- | --- |
| appId | Name of the application being called |
| programType | One of mapreduce, service, spark, worker, or workflow |
| programId | Name of the MapReduce, custom service, Spark, worker, or workflow being stopped |
| statusCode | The status code from stopping the individual program |
| error | If an error occurred, a description of why the program could not be stopped (for example, the specified program was not found) |

For example:

POST /v3/namespaces/default/stop

[
  {"appId": "App1", "programType": "Service", "programId": "Service1"},
  {"appId": "App1", "programType": "Mapreduce", "programId": "MapReduce2"},
  {"appId": "App2", "programType": "Spark", "programId": "Spark2"}
]

will attempt to stop the three programs listed in the request body. It will return a response such as:

[
  {"appId": "App1", "programType": "Service", "programId": "Service1", "statusCode": 200},
  {"appId": "App1", "programType": "Mapreduce", "programId": "MapReduce2", "statusCode": 200},
  {"appId": "App2", "programType": "Spark", "programId": "Spark2", "statusCode": 404, "error": "App: App2 not found"}
]

In this particular example, the service and MapReduce programs in the App1 application were successfully stopped, and there was an error stopping the last program because the App2 application does not exist.
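As a sketch of a client for this batch endpoint (helper names are illustrative, and the response shape is assumed to match the example above), you can build the request body and then split the per-program results into successes and failures:

```python
import json

# Programs to stop, in the shape the batch endpoint expects.
programs = [
    {"appId": "App1", "programType": "Service", "programId": "Service1"},
    {"appId": "App2", "programType": "Spark", "programId": "Spark2"},
]
body = json.dumps(programs)  # POST this to /v3/namespaces/<namespace-id>/stop

def summarize(results):
    """Split batch-stop results into stopped program IDs and a map of
    program ID to error message for entries that failed."""
    stopped = [r["programId"] for r in results if r["statusCode"] == 200]
    failed = {r["programId"]: r["error"] for r in results if "error" in r}
    return stopped, failed
```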

Status of a Program

To retrieve the status of a program, submit an HTTP GET request:

GET /v3/namespaces/<namespace-id>/apps/<app-id>/<program-type>/<program-id>/status

| Parameter | Description |
| --- | --- |
| namespace-id | Namespace ID |
| app-id | Name of the application being called |
| program-type | One of mapreduce, schedules, services, spark, workers, or workflows |
| program-id | Name of the MapReduce, schedule, custom service, Spark, worker, or workflow being called |

The response will be a JSON object with the status of the program. For example, retrieving the status of the UploadService of the program SportResults:

GET /v3/namespaces/default/apps/SportResults/services/UploadService/status

will return (pretty-printed) a response such as:

{
  "status": "STOPPED"
}

Status of Multiple Programs

You can retrieve the status of multiple programs from different applications and program types by submitting an HTTP POST request:

POST /v3/namespaces/<namespace-id>/status

with a JSON array in the request body consisting of multiple JSON objects with these parameters:

| Parameter | Description |
| --- | --- |
| appId | Name of the application being called |
| programType | One of mapreduce, schedule, service, spark, worker, or workflow |
| programId | Name of the MapReduce, schedule, custom service, Spark, worker, or workflow being called |

The response will be the same JSON array as submitted with additional parameters for each of the underlying JSON objects:

| Parameter | Description |
| --- | --- |
| status | The status of the individual queried program, if the query is valid and the program was found |
| statusCode | The status code from retrieving the status of the individual program |
| error | If an error occurred, a description of why the status was not retrieved (for example, the specified program was not found) |

The status and error fields are mutuallyexclusive: if there is an error, there will never be a status, and vice versa.

For example:

POST /v3/namespaces/default/status

[
  { "appId": "MyApp", "programType": "workflow", "programId": "MyWorkflow" },
  { "appId": "MyApp2", "programType": "service", "programId": "MyService" }
]

will retrieve the status of two programs. It will return a response such as:

[
  { "appId": "MyApp", "programType": "workflow", "programId": "MyWorkflow", "status": "RUNNING", "statusCode": 200 },
  { "appId": "MyApp2", "programType": "service", "programId": "MyService", "error": "Program not found", "statusCode": 404 }
]
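Because status and error are mutually exclusive per entry, a client can partition the batch response into a status map and an error map. A minimal sketch (function name is illustrative; the response shape is taken from the example above):

```python
def split_statuses(results):
    """Partition batch-status results: program statuses on one side,
    (statusCode, error) pairs on the other."""
    statuses, errors = {}, {}
    for r in results:
        key = (r["appId"], r["programId"])
        if "error" in r:
            errors[key] = (r["statusCode"], r["error"])
        else:
            statuses[key] = r["status"]
    return statuses, errors
```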

Schedule Lifecycle

Schedules can only be created for workflows.

Add a Schedule

To add a schedule for a program to an application, submit an HTTP PUT request:

PUT /v3/namespaces/<namespace-id>/apps/<app-id>/schedules/<schedule-id>

To add the schedule to an application with a non-default version, submit an HTTP PUT request with the version specified:

PUT /v3/namespaces/<namespace-id>/apps/<app-id>/versions/<version-id>/schedules/<schedule-id>

| Parameter | Description |
| --- | --- |
| namespace-id | Namespace ID |
| app-id | Name of the application |
| schedule-id | Name of the schedule; it is unique to the application and, if specified, the application version |
| version-id | Version of the application, typically following semantic versioning |

The request body is a JSON object specifying the details of the schedule to be created:

{
  "name": "<name of the schedule>",
  "description": "<schedule description>",
  "namespace": "<namespace of the schedule>",
  "application": "<application of the schedule>",
  "applicationVersion": "<application version of the schedule>",
  "program": {
    "programName": "<name of the program>",
    "programType": "WORKFLOW"
  },
  "properties": {
    "<key>": "<value>",
    ...
  },
  "constraints": [
    {
      "type": "<constraint type>",
      "waitUntilMet": <boolean>,
      ...
    },
    ...
  ],
  "trigger": {
    "type": "<trigger type>",
    ...
  },
  "timeoutMillis": <timeout in milliseconds>
}

where a trigger is one of the trigger types. It can be a time trigger:

{
  "type": "TIME",
  "cronExpression": "<cron expression>"
}

or a partition trigger:

{
  "type": "PARTITION",
  "dataset": {
    "namespace": "<namespace of the dataset>",
    "dataset": "<name of the dataset>"
  },
  "numPartitions": <required number of partitions>
}

or a program status trigger:

{
  "programId": {
    "namespace": "<namespace of the program>",
    "application": "<application name of the program>",
    "version": "<application version of the program>",
    "type": "<type of the program>",
    "entity": "PROGRAM",
    "program": "<name of the program>"
  },
  "programStatuses": [ "COMPLETED", "FAILED", "KILLED" ],
  "type": "PROGRAM_STATUS"
}

or an and trigger, where "triggers" is a non-empty list of any type of triggers:

{
  "triggers": [
    {
      "type": "<trigger type>",
      ...
    },
    ...
  ],
  "type": "AND"
}

or an or trigger, where "triggers" is a non-empty list of any type of triggers:

{
  "triggers": [
    {
      "type": "<trigger type>",
      ...
    },
    ...
  ],
  "type": "OR"
}

and a constraint can be one of:

{
  "type": "CONCURRENCY",
  "maxConcurrency": <max number of runs>,
  "waitUntilMet": <boolean>
}

{
  "type": "DELAY",
  "millisAfterTrigger": <milliseconds to delay>,
  "waitUntilMet": <boolean>
}

{
  "type": "TIME_RANGE",
  "startTime": "<time in form HH:mm>",
  "endTime": "<time in form HH:mm>",
  "timeZone": "<name of the time zone, e.g., PST>",
  "waitUntilMet": <boolean>
}

{
  "type": "LAST_RUN",
  "millisSinceLastRun": <milliseconds since last run>,
  "waitUntilMet": <boolean>
}

Note: For any schedule, the program must be a workflow and the programType must be set to WORKFLOW.
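The pieces above can be assembled programmatically. As a sketch (the helper name and defaults are illustrative, not part of the CDAP API), here is a minimal schedule body with a TIME trigger and a CONCURRENCY constraint:

```python
import json

def time_schedule(name, namespace, app, workflow, cron, max_concurrency=1):
    """Assemble a minimal schedule body with a TIME trigger and a
    CONCURRENCY constraint, following the structure shown above."""
    return {
        "name": name,
        "description": f"Runs {workflow} on cron schedule {cron}",
        "namespace": namespace,
        "application": app,
        "program": {"programName": workflow, "programType": "WORKFLOW"},
        "constraints": [
            {"type": "CONCURRENCY", "maxConcurrency": max_concurrency,
             "waitUntilMet": False},
        ],
        "trigger": {"type": "TIME", "cronExpression": cron},
    }

# Serialize for the PUT request body:
body = json.dumps(time_schedule("DailySchedule", "default",
                                "PurchaseHistory", "PurchaseHistoryWorkflow",
                                "0 4 * * *"))
```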

HTTP Responses

| Status Code | Description |
| --- | --- |
| 409 Conflict | Schedule with the same name already exists |

Update a Schedule

To update a schedule, submit an HTTP POST request:

POST /v3/namespaces/<namespace-id>/apps/<app-id>/schedules/<schedule-id>/update

To update a schedule of an application with a non-default version, submit an HTTP POST request with the version specified:

POST /v3/namespaces/<namespace-id>/apps/<app-id>/versions/<version-id>/schedules/<schedule-id>/update

| Parameter | Description |
| --- | --- |
| namespace-id | Namespace ID |
| app-id | Name of the application |
| schedule-id | Name of the schedule; it is unique to the application and, if specified, the application version |
| version-id | Version of the application, typically following semantic versioning |

The request body is a JSON object specifying the details of the schedule to be updated, and follows the same form as documented in Add a Schedule.

Only changes to the schedule configuration are supported; changes to the schedule name or to the program associated with it are not allowed. If any properties are provided, they overwrite all existing properties. You must include all properties, even ones you are not altering.

HTTP Responses

| Status Code | Description |
| --- | --- |
| 400 Bad Request | If the new schedule type does not match the existing schedule type, or there are other client errors |

Retrieving a Schedule

To retrieve a schedule in an application, submit an HTTP GET request:

GET /v3/namespaces/<namespace-id>/apps/<app-id>/schedules/<schedule-name>

| Parameter | Description |
| --- | --- |
| namespace-id | Namespace ID |
| app-id | Name of the application |
| schedule-name | Name of the schedule |

The response will contain the schedule in the same form described in this topic in “Add a Schedule”.

List Schedules

To list all of the schedules for an application, use an HTTP GET request:

GET /v3/namespaces/<namespace-id>/apps/<app-id>/schedules

As schedules are created for a workflow, you can also list schedules for a workflow of an application. You can use the Details of a Deployed Application to obtain the workflows of an application.

Optionally, you can filter the schedules by trigger type and schedule status using the query parameters trigger-type and schedule-status. For more information, see Schedules.

To list all of the schedules of a workflow of an application, use an HTTP GET request:

GET /v3/namespaces/<namespace-id>/apps/<app-id>/workflows/<workflow-id>/schedules

The response will contain a list of schedules in the same form as described in “Add a Schedule”.

| Parameter | Description |
| --- | --- |
| namespace-id | Namespace ID |
| app-id | Name of the application |
| workflow-id | Name of the workflow |

Next Scheduled Run Time

To list the next time that the workflow will be scheduled by a time trigger, use the parameter nextruntime:

GET /v3/namespaces/<namespace-id>/apps/<app-id>/workflows/<workflow-id>/nextruntime

| Parameter | Description |
| --- | --- |
| namespace-id | Namespace ID |
| app-id | Name of the application |
| workflow-id | Name of the workflow |

Example: Retrieving The Next Runtime

HTTP Method

GET /v3/namespaces/default/apps/PurchaseHistory/workflows/PurchaseHistoryWorkflow/nextruntime

HTTP Response

[{"id":"DEFAULT.WORKFLOW:developer:PurchaseHistory:PurchaseHistoryWorkflow:0:DailySchedule","time":1415102400000}]

Description

Retrieves the next runtime of the workflow PurchaseHistoryWorkflow of the application PurchaseHistory
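The time field in the response is in milliseconds since the epoch. A short conversion (using only the Python standard library) turns the example value above into a readable UTC timestamp:

```python
from datetime import datetime, timezone

# "time" from the nextruntime example response above, in epoch milliseconds.
millis = 1415102400000
when = datetime.fromtimestamp(millis / 1000, tz=timezone.utc)
print(when.isoformat())  # → 2014-11-04T12:00:00+00:00
```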

Next Scheduled Run Time in Batch

To list the next time that all workflows in a namespace will be scheduled by a time trigger, use the parameter nextruntime:

POST /v3/namespaces/<namespace-id>/nextruntime

| Parameter | Description |
| --- | --- |
| namespace-id | Namespace ID |

The request body must be a JSON array of objects with the following parameters:

| Parameter | Description |
| --- | --- |
| appId | Name of the application being called |
| programType | Currently, only the Workflow type is supported |
| programId | Name of the program being called |

The response will be an array of JSON objects, each of which contains the three input parameters as well as a statusCode and one of two extra fields: schedules if the query succeeded, or error if an error occurred.

| Parameter | Description |
| --- | --- |
| schedules | The next scheduled runtimes for the program defined by the individual JSON object's parameters |
| statusCode | The status code from retrieving the program runs |
| error | If an error, a description of why the status was not retrieved (for example, the specified program was not found, or the requested JSON object was missing a parameter) |

Example

HTTP Method

POST /v3/namespaces/default/nextruntime

HTTP Body

[{"appId": "App1", "programType": "Service", "programId": "Service1"}, {"appId": "App1", "programType": "Workflow", "programId": "testWorkflow"}, {"appId": "App2", "programType": "Workflow", "programId": "DataPipelineWorkflow"}]

HTTP Response

[{"appId": "App1", "programType": "Service", "programId": "Service1", "statusCode": 200, "schedules": [...]}, {"appId": "App1", "programType": "Workflow", "programId": "testWorkflow", "statusCode": 404, "error": "Program 'testWorkflow' is not found"}, {"appId": "App2", "programType": "Workflow", "programId": "DataPipelineWorkflow", "statusCode": 200, "schedules": [...]]

Description

Attempt to retrieve the next scheduled run of the service Service1 in the application App1, the workflow testWorkflow in the application App1 and the workflow DataPipelineWorkflow in the application App2, all in the namespace default

Previous Run Time of All Schedules

To list the previous scheduled run time for all programs passed in the request body, use the parameter previousruntime:

POST /v3/namespaces/<namespace-id>/previousruntime

| Parameter | Description |
| --- | --- |
| namespace-id | Namespace ID |

The request body must be a JSON array of objects with the following parameters:

| Parameter | Description |
| --- | --- |
| appId | Name of the application being called |
| programType | Currently, only the Workflow type is supported |
| programId | Name of the program being called |

The response will be an array of JSON objects, each of which contains the three input parameters as well as a statusCode and one of two extra fields: schedules if the query succeeded, or error if an error occurred.

| Parameter | Description |
| --- | --- |
| schedules | The previous scheduled runtimes for the program defined by the individual JSON object's parameters |
| statusCode | The status code from retrieving the program runs |
| error | If an error, a description of why the status was not retrieved (for example, the specified program was not found, or the requested JSON object was missing a parameter) |

Example

HTTP Method

POST /v3/namespaces/default/previousruntime

HTTP Body

[{"appId": "App1", "programType": "Service", "programId": "Service1"}, {"appId": "App1", "programType": "Workflow", "programId": "testWorkflow"}, {"appId": "App2", "programType": "Workflow", "programId": "DataPipelineWorkflow"}]

HTTP Response

[{"appId": "App1", "programType": "Service", "programId": "Service1", "statusCode": 200, "schedules": [...]}, {"appId": "App1", "programType": "Workflow", "programId": "testWorkflow", "statusCode": 404, "error": "Program 'testWorkflow' is not found"}, {"appId": "App2", "programType": "Workflow", "programId": "DataPipelineWorkflow", "statusCode": 200, "schedules": [...]]

Description

Attempt to retrieve the previous scheduled run of the service Service1 in the application App1, the workflow testWorkflow in the application App1 and the workflow DataPipelineWorkflow in the application App2, all in the namespace default

Previous Run Time of a Schedule

To list the previous time that the scheduled workflow ran, use the parameter previousruntime:

GET /v3/namespaces/<namespace-id>/apps/<app-id>/workflows/<workflow-id>/previousruntime

| Parameter | Description |
| --- | --- |
| namespace-id | Namespace ID |
| app-id | Name of the application |
| workflow-id | Name of the workflow |

Example

HTTP Method

GET /v3/namespaces/default/apps/PurchaseHistory/workflows/PurchaseHistoryWorkflow/previousruntime

HTTP Response

[{"id":"DEFAULT.WORKFLOW:developer:PurchaseHistory:PurchaseHistoryWorkflow:0:DailySchedule","time":1415102400000}]

Description

Retrieves the previous runtime of the workflow PurchaseHistoryWorkflow of the application PurchaseHistory

Delete a Schedule

To delete a schedule, submit an HTTP DELETE:

DELETE /v3/namespaces/<namespace-id>/apps/<app-id>/schedules/<schedule-id>

To delete a schedule of an application with a non-default version, submit an HTTP DELETE request with the version specified:

DELETE /v3/namespaces/<namespace-id>/apps/<app-id>/versions/<version-id>/schedules/<schedule-id>

| Parameter | Description |
| --- | --- |
| namespace-id | Namespace ID |
| app-id | Name of the application |
| schedule-id | Name of the schedule to be deleted; it is unique to the application and, if specified, the application version |
| version-id | Version of the application |

HTTP Responses

| Status Code | Description |
| --- | --- |
| 404 Not Found | If the schedule given by schedule-id was not found |

Schedule: Disable and Enable

You can disable and enable a schedule using the Microservices.

Disable: To disable a schedule means that the program associated with that schedule will not trigger again until the schedule is enabled.

Enable: To enable a schedule means that the trigger is reset, and the program associated will run again at the next scheduled time.

As a schedule is initially deployed in a disabled state, a call to this API is needed to enable it.

To disable or enable a schedule, use:

POST /v3/namespaces/<namespace-id>/apps/<app-id>/schedules/<schedule-id>/disable
POST /v3/namespaces/<namespace-id>/apps/<app-id>/schedules/<schedule-id>/enable

Note: You can also use suspend and resume instead of disable and enable.

| Parameter | Description |
| --- | --- |
| namespace-id | Namespace ID |
| app-id | Name of the application |
| schedule-id | Name of the schedule |

Example: Disabling a Schedule

HTTP Method

POST /v3/namespaces/default/apps/PurchaseHistory/schedules/DailySchedule/disable

HTTP Response

OK if successfully set as disabled

Description

Disables the schedule DailySchedule of the application PurchaseHistory
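Because the two endpoints differ only in their final path segment, a client can derive both from one helper. A minimal sketch (the function name is illustrative, not part of the CDAP API):

```python
def schedule_state_path(namespace, app, schedule, enable):
    """Build the enable or disable path for a schedule. The CDAP API
    also accepts 'suspend'/'resume' as synonyms for these actions."""
    action = "enable" if enable else "disable"
    return f"/v3/namespaces/{namespace}/apps/{app}/schedules/{schedule}/{action}"

# Matches the DailySchedule example above:
print(schedule_state_path("default", "PurchaseHistory", "DailySchedule", enable=False))
# → /v3/namespaces/default/apps/PurchaseHistory/schedules/DailySchedule/disable
```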

Container Information

To find out the address of a program's container host and the container's debug port, you can query CDAP for a program's live info via an HTTP GET method:

GET /v3/namespaces/<namespace-id>/apps/<app-id>/<program-type>/<program-id>/live-info

| Parameter | Description |
| --- | --- |
| namespace-id | Namespace ID |
| app-id | Name of the application being called |
| program-type | One of services or workers |
| program-id | Name of the program (service or worker) |

Example:

GET /v3/namespaces/default/apps/WordCount/flows/WordCounter/live-info

The response is formatted in JSON; an example of this is shown in CDAP Testing and Debugging.

Scaling

You can retrieve the instance counts of programs from different applications and program types using an HTTP POST method:

POST /v3/namespaces/<namespace-id>/instances

| Parameter | Description |
| --- | --- |
| namespace-id | Namespace ID |

with a JSON array in the request body consisting of multiple JSON objects with these parameters:

| Parameter | Description |
| --- | --- |
| appId | Name of the application being called |
| programType | One of service or worker |
| programId | Name of the program (service or worker) being called |

The response will be the same JSON array as submitted with additional parameters for each of the underlying JSON objects:

| Parameter | Description |
| --- | --- |
| requested | Number of instances the user requested for the program defined by the individual JSON object's parameters |
| provisioned | Number of instances that are actually running for the program defined by the individual JSON object's parameters |
| statusCode | The status code from retrieving the instance count of an individual JSON object |
| error | If an error, a description of why the status was not retrieved (for example, the specified program was not found, or the requested JSON object was missing a parameter) |

Note: The requested and provisioned fields are mutually exclusive of the error field.

Example

HTTP Method

POST /v3/namespaces/default/instances

HTTP Body

[{"appId":"MyApp1","programType":"Worker","programId":"MyWorker1",}, {"appId":"MyApp3","programType":"Service","programId":"MySvc1,}]

HTTP Response

[{"appId":"MyApp1","programType":"Worker","programId":"MyWorker1", "provisioned":2,"requested":2,"statusCode":200}, {"appId":"MyApp3","programType":"Service","programId":"MySvc1,}]

Description

Attempt to retrieve the instance counts of the worker MyWorker1 in the application MyApp1 and the service MySvc1 in the application MyApp3, all in the namespace default

Scaling Services

You can query or change the number of instances of a service by using the instances parameter with HTTP GET or PUT methods:

GET /v3/namespaces/<namespace-id>/apps/<app-id>/services/<service-id>/instances
PUT /v3/namespaces/<namespace-id>/apps/<app-id>/services/<service-id>/instances

with the arguments as a JSON string in the body:

1 { "instances" : <quantity> }

| Parameter | Description |
| --- | --- |
| namespace-id | Namespace ID |
| app-id | Name of the application |
| service-id | Name of the service |
| quantity | Number of instances to be used |

Note: You can scale system services using the Monitor HTTP RESTful API Scaling System Services.

Examples

  • Retrieve the number of instances of the service CatalogLookup in the application PurchaseHistory in the namespace default:

    GET /v3/namespaces/default/apps/PurchaseHistory/services/CatalogLookup/instances
  • Set the number of handler instances of the service RetrieveCounts of the application WordCount:

    PUT /v3/namespaces/default/apps/WordCount/services/RetrieveCounts/instances

    with the arguments as a JSON string in the body:

    1 { "instances" : 2 }
  • Using curl and the CDAP Sandbox:

    • Linux
      $ curl -w"\n" -X PUT "http://localhost:11015/v3/namespaces/default/apps/WordCount/services/RetrieveCounts/instances" \
        -d '{ "instances" : 2 }'

    • Windows
      > curl -X PUT "http://localhost:11015/v3/namespaces/default/apps/WordCount/services/RetrieveCounts/instances" ^
        -d "{ \"instances\" : 2 }"

Scaling Workers

You can query or change the number of instances of a worker by using the instances parameter with HTTP GET or PUT methods:

GET /v3/namespaces/<namespace-id>/apps/<app-id>/workers/<worker-id>/instances
PUT /v3/namespaces/<namespace-id>/apps/<app-id>/workers/<worker-id>/instances

with the arguments as a JSON string in the body:

1 { "instances" : <quantity> }

| Parameter | Description |
| --- | --- |
| namespace-id | Namespace ID |
| app-id | Name of the application |
| worker-id | Name of the worker |
| quantity | Number of instances to be used |

Example

Retrieve the number of instances of the worker DataWorker in the application DemoApp in the namespace default:

GET /v3/namespaces/default/apps/DemoApp/workers/DataWorker/instances

Run Records

To see all the runs of a selected program (MapReduce programs, Spark programs, services, or workflows), issue an HTTP GET to the program’s URL with the runs parameter. This will return a JSON list of all runs for the program, each with a start time, end time, and program status:

GET /v3/namespaces/<namespace-id>/apps/<app-id>/<program-type>/<program-id>/runs

| Parameter | Description |
| --- | --- |
| namespace-id | Namespace ID |
| app-id | Name of the application |
| program-type | One of mapreduce, services, spark, or workflows |
| program-id | Name of the MapReduce, custom service, Spark, or workflow being called |

You can filter the runs by the status of a program, the start and end times, and can limit the number of returned records:

| Query Parameter | Description |
| --- | --- |
| status | One of running, completed, or failed |
| start | Start timestamp |
| end | End timestamp |
| limit | Maximum number of returned records |
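The filter parameters above are ordinary query-string parameters. A small sketch (the helper name is illustrative) builds a filtered runs URL with the standard library:

```python
from urllib.parse import urlencode

def runs_url(namespace, app, program_type, program, **filters):
    """Build a runs query path; filters may include status, start, end,
    and limit, as described above."""
    base = f"/v3/namespaces/{namespace}/apps/{app}/{program_type}/{program}/runs"
    return f"{base}?{urlencode(filters)}" if filters else base

print(runs_url("default", "PurchaseHistory", "services", "CatalogLookup",
               status="completed", limit=1))
# → /v3/namespaces/default/apps/PurchaseHistory/services/CatalogLookup/runs?status=completed&limit=1
```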

The response will be a JSON array containing a JSON object for each run record. Each JSON object will contain these parameters:

| Parameter | Description |
| --- | --- |
| runid | A UUID that uniquely identifies a run within CDAP; the start and end times are in seconds since the start of the Epoch (midnight 1/1/1970). Use this runid in subsequent calls to obtain additional information. |
| starting | The timestamp at which the program was requested to start by the user |
| start | The timestamp at which the program actually started |
| suspend | The timestamp at which this run was suspended (if it was suspended) |
| resume | The timestamp at which this run was resumed (if it was resumed after being suspended) |
| status | The status of the run in question |
| properties | A map of the properties of the run; has subfields |
| properties.runtimeArgs | The runtime arguments provided to the run, serialized as a JSON string |
| cluster | Information about the cluster on which the run was executed; has subfields |
| cluster.status | The current status of the cluster |
| cluster.numNodes | The number of nodes in the cluster |
| profile | The compute profile used for the run |
| profile.profileName | The name of the compute profile |
| profile.namespace | The namespace of the compute profile |
| profile.entity | The profile's entity type |

Example: Retrieving Run Records

HTTP Method

GET /v3/namespaces/default/apps/SportResults/mapreduce/ScoreCounter/runs

HTTP Response

{"runid":"...","start":1382567598,"status":"RUNNING"},

{"runid":"...","start":1382567447,"end":1382567492,"status":"STOPPED"},

{"runid":"...","start":1382567383,"end":1382567397,"status":"STOPPED"}

Description

Retrieve the run records of the MapReduce ScoreCounter of the application SportResults.

Retrieving Specific Run Information

To fetch the run record for a particular run of a program, use:

GET /v3/namespaces/<namespace-id>/apps/<app-id>/<program-type>/<program-id>/runs/<run-id>

| Parameter | Description |
| --- | --- |
| namespace-id | Namespace ID |
| app-id | Name of the application |
| program-type | One of mapreduce, services, spark, or workflows |
| program-id | Name of the MapReduce, custom service, Spark, or workflow being called |
| run-id | Run ID of the run |

The response will be a JSON object for the requested run, containing these parameters:

| Parameter | Description |
| --- | --- |
| runid | A UUID that uniquely identifies a run within CDAP; the start and end times are in seconds since the start of the Epoch (midnight 1/1/1970). Use this runid in subsequent calls to obtain additional information. |
| starting | The timestamp at which the program was requested to start by the user |
| start | The timestamp at which the program actually started |
| suspend | The timestamp at which this run was suspended (if it was suspended) |
| resume | The timestamp at which this run was resumed (if it was resumed after being suspended) |
| status | The status of the run in question |
| properties | A map of the properties of the run; has subfields |
| properties.runtimeArgs | The runtime arguments provided to the run, serialized as a JSON string |
| cluster | Information about the cluster on which the run was executed; has subfields |
| cluster.status | The current status of the cluster |
| cluster.numNodes | The number of nodes in the cluster |
| profile | The compute profile used for the run |
| profile.profileName | The name of the compute profile |
| profile.namespace | The namespace of the compute profile |
| profile.entity | The profile's entity type |

Example: Retrieving a Particular Run Record

HTTP Method

GET /v3/namespaces/default/apps/SportResults/mapreduce/ScoreCounter/runs/b78d0091-da42-11e4-878c-2217c18f435d

HTTP Response

{"runid":"...","start":1382567598,"status":"RUNNING"}

Description

Retrieve the run record of the MapReduce ScoreCounter of the application SportResults run b78d0091-da42-11e4-878c-2217c18f435d

For services, you can retrieve:

  • the history of successfully completed Apache Twill service runs using:

    GET /v3/namespaces/<namespace-id>/apps/<app-id>/services/<service-id>/runs?status=completed

For workflows, you can retrieve:

  • the information about the currently running node(s) in the workflow:

    GET /v3/namespaces/<namespace-id>/apps/<app-id>/workflows/<workflow-id>/runs/<run-id>/nodes/state

More information about the workflow endpoints can be found in Workflows.

  • the schedules defined for a workflow (using the parameter schedules):

    GET /v3/namespaces/<namespace-id>/apps/<app-id>/workflows/<workflow-id>/schedules
  • the next time that the workflow is scheduled to run (using the parameter nextruntime):

    GET /v3/namespaces/<namespace-id>/apps/<app-id>/workflows/<workflow-id>/nextruntime

Examples

Example: Retrieving The Most Recent Run

HTTP Method

GET /v3/namespaces/default/apps/PurchaseHistory/services/CatalogLookup/runs?status=completed&limit=1

HTTP Response

[{"runid":"cad83d45-ecfb-4bf8-8cdb-4928a5601b0e","start":1415051892,"end":1415057103,"status":"STOPPED"}]

Description

Retrieve the most recent successful completed run of the service CatalogLookup of the application PurchaseHistory

Retrieving Run Records in Batch

To retrieve the latest run records for multiple programs, use:

POST /v3/namespaces/<namespace-id>/runs

| Parameter | Description |
| --- | --- |
| namespace-id | Namespace ID |

The request body must be a JSON array of objects with the following parameters:

| Parameter | Description |
| --- | --- |
| appId | Name of the application being called |
| programType | One of mapreduce, spark, workflow, service, or worker |
| programId | Name of the program being called |

The response will be an array of JSON objects, each of which contains the three input parameters as well as a statusCode and one of two extra fields: runs, a list of the latest run records for that program, or error if there was an error retrieving runs for that program. The statusCode property will always be included; runs and error are mutually exclusive.

| Parameter | Description |
| --- | --- |
| runs | The latest run records for the program defined by the individual JSON object's parameters |
| statusCode | The status code from retrieving the program runs |
| error | If an error, a description of why the status was not retrieved (for example, the specified program was not found, or the requested JSON object was missing a parameter) |
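Since runs and error are mutually exclusive per entry, a client can pull out the latest run record for each successful entry and collect the failures separately. A minimal sketch (the helper name is illustrative; it assumes the response shape described above, with the newest run first in each runs list):

```python
def latest_runs(results):
    """From a batch /runs response, map each program to its latest run
    record (or None if it has no runs), collecting errors separately."""
    latest, errors = {}, {}
    for r in results:
        key = (r["appId"], r["programId"])
        if "runs" in r:
            latest[key] = r["runs"][0] if r["runs"] else None
        else:
            errors[key] = r["error"]
    return latest, errors
```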

Example

HTTP Method

POST /v3/namespaces/default/runs

HTTP Body

[{"appId": "App1", "programType": "Service", "programId": "Service1"}, {"appId": "App1", "programType": "Workflow", "programId": "testWorkflow"}, {"appId": "App2", "programType": "Workflow", "programId": "DataPipelineWorkflow"}]

HTTP Response

[{"appId": "App1", "programType": "Service", "programId": "Service1", "statusCode": 200, "runs": [...]}, {"appId": "App1", "programType": "Workflow", "programId": "testWorkflow", "statusCode": 404, "error": "Program 'testWorkflow' is not found"}, {"appId": "App2", "programType": "Workflow", "programId": "DataPipelineWorkflow", "statusCode": 200, "runs": [...]]

Description

Attempt to retrieve the latest run records of the service Service1 in the application App1, the workflow testWorkflow in the application App1 and the workflow DataPipelineWorkflow in the application App2, all in the namespace default
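Because statusCode is always present while runs and error are mutually exclusive, a client can partition the batch response by status code. The following is a minimal Python sketch of that handling; the sample data mirrors the response above (run records elided), and no live CDAP instance is assumed:

```python
import json

def split_batch_response(body: str):
    """Partition a batch /runs response into successful lookups
    (statusCode 200, 'runs' present) and failures ('error' present).
    statusCode is always set; 'runs' and 'error' never co-occur."""
    ok, failed = [], []
    for entry in json.loads(body):
        if entry["statusCode"] == 200:
            ok.append(entry)       # carries 'runs'
        else:
            failed.append(entry)   # carries 'error'
    return ok, failed

# Sample response shaped like the one above (run records elided):
sample = json.dumps([
    {"appId": "App1", "programType": "Service", "programId": "Service1",
     "statusCode": 200, "runs": []},
    {"appId": "App1", "programType": "Workflow", "programId": "testWorkflow",
     "statusCode": 404, "error": "Program 'testWorkflow' is not found"},
])
ok, failed = split_batch_response(sample)
print(len(ok), len(failed))  # 1 1
```

This keeps per-program failures (such as a 404 for a missing program) from masking the run records that were retrieved successfully.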

Retrieving Run Counts in Batch

To retrieve the run counts for multiple programs, use:

1 POST /v3/namespaces/<namespace-id>/runcount

Parameter

Description

Parameter

Description

namespace-id

Namespace ID

The request body must be a JSON array of objects with the following parameters:

Parameter

Description

Parameter

Description

appId

Name of the application being called

programType

One of mapreduce, spark, workflow, service, or worker

programId

Name of the program (mapreduce, spark, workflow, service, or worker) being called

The response will be an array of JSON objects, each containing the three input parameters plus two of three possible extra fields: runCount, the number of runs for that program; statusCode, the status code for retrieving the run count for that program; and error, a description of any error that occurred while retrieving the run count. The statusCode property is always included, but runCount and error are mutually exclusive.

Parameter

Description

Parameter

Description

runCount

The number of program runs for the program defined by the individual JSON object's parameters

statusCode

The status code from retrieving the program run count

error

If an error occurred, a description of why the run count could not be retrieved (for example, the specified program was not found, or the requested JSON object was missing a parameter)

Example

HTTP Method

POST /v3/namespaces/default/runcount

HTTP Body

[{"appId": "App1", "programType": "Service", "programId": "Service1"}, {"appId": "App1", "programType": "Workflow", "programId": "testWorkflow"}, {"appId": "App2", "programType": "Workflow", "programId": "DataPipelineWorkflow"}]

HTTP Response

[{"appId": "App1", "programType": "Service", "programId": "Service1", "statusCode": 200, "runCount": 20}, {"appId": "App1", "programType": "Workflow", "programId": "testWorkflow", "statusCode": 404, "error": "Program 'testWorkflow' is not found"}, {"appId": "App2", "programType": "Workflow", "programId": "DataPipelineWorkflow", "statusCode": 200, "runCount": 300}]

Description

Attempt to retrieve the run count of the service Service1 in the application App1, the workflow testWorkflow in the application App1 and the workflow DataPipelineWorkflow in the application App2, all in the namespace default
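A client typically builds the request body from (appId, programType, programId) triples and then aggregates the counts from successful entries. Here is a minimal Python sketch; the sample response mirrors the one above, and no live CDAP instance is assumed:

```python
import json

def runcount_request_body(programs):
    """Serialize (appId, programType, programId) triples into the JSON
    array expected by POST /v3/namespaces/<namespace-id>/runcount."""
    return json.dumps([
        {"appId": app, "programType": ptype, "programId": pid}
        for app, ptype, pid in programs
    ])

def total_run_count(body: str) -> int:
    """Sum runCount over successful entries; entries carrying an
    'error' field have no runCount and are skipped."""
    return sum(entry.get("runCount", 0) for entry in json.loads(body))

body = runcount_request_body([("App1", "Service", "Service1"),
                              ("App1", "Workflow", "testWorkflow")])
resp = json.dumps([
    {"appId": "App1", "programType": "Service", "programId": "Service1",
     "statusCode": 200, "runCount": 20},
    {"appId": "App1", "programType": "Workflow", "programId": "testWorkflow",
     "statusCode": 404, "error": "Program 'testWorkflow' is not found"},
])
print(total_run_count(resp))  # 20
```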

Retrieving Specific Run Count

To fetch the run count for a particular program, use:

1 GET /v3/namespaces/<namespace-id>/apps/<app-id>/<program-type>/<program-id>/runcount

Parameter

Description

Parameter

Description

namespace-id

Namespace ID

app-id

Name of the application

program-type

One of mapreduce, spark, workflow, service, or worker

program-id

Name of the program (mapreduce, spark, workflow, service, or worker) being called

Example

HTTP Method

GET /v3/namespaces/default/apps/myApp/workflows/DataPipelineWorkflow/runcount

HTTP Response

[2]

Description

Retrieve the run count of the workflow DataPipelineWorkflow of the application myApp

Workflow Runs: Suspend and Resume

For workflows, in addition to starting and stopping, you can suspend and resume individual runs of a workflow using the RESTful API.

Suspend: To suspend means that the currently running program will run to completion, but no further programs will be started. Barring errors, no program will be left partially completed.

In the case of a workflow with multiple MapReduce programs, if one of them is running (say, the first of three) and you suspend the workflow, that first MapReduce will run to completion but the following two will not be started.

Resume: To resume means that activity picks up where it left off, beginning with the next program in the sequence.

In the case of the workflow mentioned above, resuming it after suspension would start up with the second of the three MapReduce programs, which is where it would have left off when it was suspended.

With workflows, suspend and resume require a run-id, as the action applies to either a currently running or a suspended workflow run.

To suspend or resume a workflow, use:

1 2 POST /v3/namespaces/<namespace-id>/apps/<app-id>/workflows/<workflow-id>/runs/<run-id>/suspend POST /v3/namespaces/<namespace-id>/apps/<app-id>/workflows/<workflow-id>/runs/<run-id>/resume

Parameter

Description

Parameter

Description

namespace-id

Namespace ID

app-id

Name of the application

workflow-id

Name of the workflow

run-id

UUID of the workflow run

Example: Suspending a Workflow

HTTP Method

POST /v3/namespaces/default/apps/PurchaseHistory/workflows/PurchaseHistoryWorkflow/runs/0ce13912-e980-11e4-a7d7-8cae4cfd0e64/suspend

HTTP Response

Program run suspended. if the run was successfully suspended

Description

Suspends the run 0ce13912-e980-11e4-a7d7-8cae4cfd0e64 of the workflow PurchaseHistoryWorkflow of the application PurchaseHistory
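Since the suspend and resume endpoints differ only in their final path segment, a client can build both from one helper. The following is a minimal Python sketch; the base URL and all argument values are placeholders for your own CDAP instance, and no request is actually sent:

```python
def workflow_run_url(base, namespace, app, workflow, run_id, action):
    """Build the suspend or resume endpoint for a specific workflow run.
    `base` (e.g. http://<host>:11015) is a placeholder; substitute the
    base URL of your CDAP instance."""
    if action not in ("suspend", "resume"):
        raise ValueError("action must be 'suspend' or 'resume'")
    return (f"{base}/v3/namespaces/{namespace}/apps/{app}"
            f"/workflows/{workflow}/runs/{run_id}/{action}")

# Reproduces the suspend request from the example above:
url = workflow_run_url("http://localhost:11015", "default",
                       "PurchaseHistory", "PurchaseHistoryWorkflow",
                       "0ce13912-e980-11e4-a7d7-8cae4cfd0e64", "suspend")
print(url)
```

The resulting URL is used as the target of an HTTP POST with an empty body, as shown in the example above.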