Lifecycle Microservices
- 1 Application Lifecycle
- 1.1 Create an Application
- 1.2 Update an Application (DEPRECATED)
- 1.3 Deploy an Artifact and Application
- 1.4 List Applications
- 1.5 Details of an Application
- 1.6 Upgrade an Application
- 1.7 Upgrade a List of Applications
- 1.8 List Versions of an Application
- 1.9 Delete an Application
- 1.10 Delete All Applications
- 1.11 Export All Application Details
- 1.12 Delete a Streaming Application State (6.9.1+)
- 2 Program Lifecycle
- 2.1 Details of a Program
- 2.2 MapReduce Jobs Associated with a Namespace (Deprecated)
- 2.3 Spark Jobs Associated with a Namespace
- 2.4 Workflows Associated with a Namespace
- 2.5 Services Associated with a Namespace
- 2.6 Workers Associated with a Namespace
- 2.7 Spark Program Status for an Application
- 2.8 Start a Program
- 2.9 Start Multiple Programs
- 2.10 Stop a Program
- 2.11 Stop a Program Run
- 2.12 Stop Multiple Programs
- 2.13 Status of a Program
- 2.14 Status of Multiple Programs
- 3 Schedule Lifecycle
- 4 Container Information
- 5 Scaling
- 5.1 Scaling Services
- 5.2 Scaling Workers
- 6 Run Records
Use the CDAP Lifecycle Microservices to deploy or delete applications and manage the lifecycle of MapReduce (DEPRECATED) and Spark programs, custom services, workers, and workflows.
For more information about CDAP components, see CDAP Components.
All methods or endpoints described in this API have a base URL (typically http://<host>:11015 or https://<host>:10443) that precedes the resource identifier, as described in the Microservices Conventions. These methods return a status code, as listed in the Microservices Status Codes.
Application Lifecycle
Create an Application
To create an application, submit an HTTP PUT request:
PUT /v3/namespaces/<namespace-id>/apps/<app-id>
(DEPRECATED) To create an application with a non-default version, submit an HTTP POST request with the version specified:
POST /v3/namespaces/<namespace-id>/apps/<app-id>/versions/<version-id>/create
Parameter | Description |
---|---|
namespace-id | Namespace ID |
app-id | Name of the application |
version-id | (DEPRECATED) Version of the application, typically following semantic versioning |
The request body is a JSON object specifying the artifact to use to create the application, and an optional application configuration. For example:
PUT /v3/namespaces/default/apps/purchaseWordCount
{
"artifact": {
"name": "WordCount",
"version": "6.8.0",
"scope": "USER"
},
"config": {
"datasetName": "purchases"
},
"principal":"user/example.net@EXAMPLEKDC.NET",
"app.deploy.update.schedules":"true"
}
will create an application named purchaseWordCount from the example WordCount artifact. The application will receive the specified config, which will configure the application to create a dataset named purchases instead of using the default dataset name.
Optionally, you can specify a Kerberos principal with which the application should be deployed. If a Kerberos principal is specified, then all the datasets created by the application will be created with the application's Kerberos principal.
Optionally, you can set or reset the flag app.deploy.update.schedules. If true, redeploying an application will modify any schedules that currently exist for the application; if false, redeploying an application does not create any new schedules, and existing schedules are neither deleted nor updated.
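As a sketch, the create request above can be built with Python's standard library. The host, namespace, and application name below are placeholder values, and the line that actually sends the request is left commented out because it requires a running CDAP instance:

```python
import json
import urllib.request

# Placeholder values for a hypothetical deployment.
host = "http://localhost:11015"
namespace = "default"
app_id = "purchaseWordCount"
url = f"{host}/v3/namespaces/{namespace}/apps/{app_id}"

# Request body mirroring the example above.
body = {
    "artifact": {"name": "WordCount", "version": "6.8.0", "scope": "USER"},
    "config": {"datasetName": "purchases"},
    "principal": "user/example.net@EXAMPLEKDC.NET",
    "app.deploy.update.schedules": "true",
}

req = urllib.request.Request(
    url,
    data=json.dumps(body).encode("utf-8"),
    method="PUT",
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # uncomment to send against a live instance
print(req.get_method(), req.full_url)
```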
Update an Application (DEPRECATED)
To update an application, submit an HTTP POST request:
POST /v3/namespaces/<namespace-id>/apps/<app-id>/update
Parameter | Description |
---|---|
namespace-id | Namespace ID |
app-id | Name of the application |
The request body is a JSON object specifying the updated artifact version and the updated application config. For example, a request body of:
POST /v3/namespaces/default/apps/purchaseWordCount/update
{
"artifact": {
"name": "WordCount",
"version": "6.8.0",
"scope": "USER"
},
"config": {
"datasetName": "logs"
},
"principal":"user/example.net@EXAMPLEKDC.NET"
}
will update the purchaseWordCount application to use version 6.8.0 of the WordCount artifact, and update the name of the dataset to logs. If no artifact is given, the current artifact will be used.
Only changes to the artifact version are supported; changes to the artifact name are not allowed. If no config is given, the current config will be used. If the config key is present, the current config will be overwritten by the config specified in the request. Because the principal of an application cannot be updated, the principal supplied during an update must either be the same or absent.
Deploy an Artifact and Application
To deploy an application from your local file system into the namespace namespace-id, submit an HTTP POST request:
POST /v3/namespaces/<namespace-id>/apps
with the name of the JAR file as a header:
X-Archive-Name: <JAR filename>
and Kerberos principal with which the application is to be deployed (if required):
X-Principal: <Kerberos Principal>
and enable or disable updating schedules of the existing workflows using the header:
X-App-Deploy-Update-Schedules: <Update Schedules>
This will add the JAR file as an artifact and then create an application from that artifact. The archive name must be in the form <artifact-name>-<artifact-version>.jar. An optional header can supply a configuration object as a serialized JSON string:
X-App-Config: <JSON Serialization String of the Configuration Object>
The application's content is the body of the request:
<JAR binary content>
Invoke the same command to update an application to a newer version. However, be sure to stop all of its Spark and MapReduce programs before updating the application.
For an application that has a configuration class such as:
public static class MyAppConfig extends Config {
String datasetName;
}
we can deploy it with a call such as the following (the host, JAR location, and the X-App-Config value are placeholders):
curl -X POST http://localhost:11015/v3/namespaces/<namespace-id>/apps \
  -H "X-Archive-Name: <jar-name>" \
  -H "X-Principal: <kerberos-principal>" \
  -H "X-App-Deploy-Update-Schedules: true" \
  -H "X-App-Config: {\"datasetName\": \"myDataset\"}" \
  --data-binary "@<jar-location>"
Note: The X-App-Config header contains the JSON serialization string of the MyAppConfig object.
List Applications
To list all of the applications in the namespace namespace-id, issue an HTTP GET request:
GET /v3/namespaces/<namespace-id>/apps[?
[artifactName=<artifact-names>]
[&artifactVersion=<artifact-version>]
[&pageSize=<page-size>]
[&pageToken=<page-token>]
[&orderBy=<order-by>]
[&nameFilter=<name-filter>]
[&nameFilterType=<name-filter-type>]
[&sortCreationTime=<sort-creation-time>]
[&latestOnly=<latest-only>]]
Parameter | Version Introduced | Description |
---|---|---|
namespace-id | | Namespace ID. |
artifactName | | Optional filter to list all applications that use the specified artifact name. Multiple artifact names can be given as a comma-separated list. |
artifactVersion | | Optional filter. This is the version of the artifact given in artifactName. |
pageSize | 6.6.0 | Optional filter. The number of applications to be returned in the response. If pageSize is not specified, all applications are returned in a single response. |
pageToken | 6.6.0 | Optional filter. The token for the next page of results, taken from the nextPageToken field of the previous response. |
orderBy | 6.6.0 | Optional filter. Specifies the sorting order. The sorting is by Application Name and then Application Version. Values can be ASC or DESC. |
nameFilter | 6.8.0 | Optional filter. Filters the application name based on nameFilterType. |
nameFilterType | 6.8.0 | Optional filter. The type of matching applied to nameFilter. |
sortCreationTime | 6.8.0 | Optional Boolean. Values can be true or false. If true, results are sorted by application creation time. Default is false. |
latestOnly | 6.8.0 | Optional filter. Values can be true or false. If false, all versions of each application are returned. Default is true. |
Note: When upgrading the instance from versions < 6.8 to versions >= 6.8, it’s important to follow the “upgrade applications” process in order to have the UI properly render all existing applications.
If pageSize is not specified, this returns an array of JSON objects that lists each application with its name, description, and artifact. The list can optionally be filtered by one or more artifact names. It can also be filtered by artifact version. For example:
GET /v3/namespaces/<namespace-id>/apps?artifactName=cdap-data-pipeline,cdap-data-streams,delta-app
will return all applications that use either the cdap-data-pipeline, cdap-data-streams, or delta-app artifacts.
The following is an example response when pageSize is not specified:
[
{
"type": "App",
"name": "POS_Sales_per_Region",
"version": "1.0.0",
"description": "Data Pipeline Application",
"artifact": {
"name": "cdap-data-pipeline",
"version": "6.8.0",
"scope": "SYSTEM"
},
"change": {
"author": "joe",
"creationTimeMillis": 1668540944833,
"latest": true
}
},
{
"type": "App",
"name": "POS_Sales_per_Country",
"version": "1.0.0",
"description": "Data Pipeline Application",
"artifact": {
"name": "cdap-data-pipeline",
"version": "6.8.0",
"scope": "SYSTEM"
},
"change": {
"author": "joe",
"creationTimeMillis": 1668543617736,
"latest": true
}
}
]
If pageSize is specified, the result is a JSON object that returns the applications as a JSON array under the applications key. The page token identifier for the next page of results is specified under the nextPageToken key. The absence of nextPageToken in the response denotes that it was the last page in the results.
GET /v3/namespaces/default/apps?pageSize=2&orderBy=ASC
{
"applications": [
{
"type": "App",
"name": "POS_Sales_per_Region",
"version": "1.0.0",
"description": "Data Pipeline Application",
"artifact": {
"name": "cdap-data-pipeline",
"version": "6.8.0",
"scope": "SYSTEM"
},
"change": {
"author": "joe",
"creationTimeMillis": 1668540944833,
"latest": true
}
},
{
"type": "App",
"name": "POS_Sales_per_Country",
"version": "1.0.0",
"description": "Data Pipeline Application",
"artifact": {
"name": "cdap-data-pipeline",
"version": "6.8.0",
"scope": "SYSTEM"
},
"change": {
"author": "joe",
"creationTimeMillis": 1668543617736,
"latest": true
}
}
],
"nextPageToken": "POS_Sales_per_Country"
}
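A minimal paging sketch follows: it walks successive pages until nextPageToken disappears from the response. The fetch_page function stands in for an HTTP GET against a live CDAP instance; here it is faked with canned responses (shaped like the example above) so the loop logic can be shown end to end:

```python
def list_all_apps(fetch_page, page_size=25):
    """Collect applications across pages until nextPageToken is absent."""
    apps, token = [], None
    while True:
        resp = fetch_page(page_size, token)
        apps.extend(resp.get("applications", []))
        token = resp.get("nextPageToken")
        if token is None:  # last page: response has no nextPageToken key
            return apps

# Fake two-page responses shaped like the example response above.
pages = {
    None: {"applications": [{"name": "A"}, {"name": "B"}], "nextPageToken": "B"},
    "B": {"applications": [{"name": "C"}]},  # final page: no token
}

def fake_fetch(page_size, token):
    return pages[token]

print([a["name"] for a in list_all_apps(fake_fetch, page_size=2)])
# → ['A', 'B', 'C']
```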
Details of an Application
For detailed information on an application in a namespace namespace-id, use:
GET /v3/namespaces/<namespace-id>/apps/<app-id>
Parameter | Description |
---|---|
namespace-id | Namespace ID. |
app-id | Name of the application. |
Note: To get the creation time of an application and other types of metadata, see Metadata Microservices.
The information will be returned in the body of the response. It includes the name and description of the application; the artifact and datasets that it uses; all of its programs; and the Kerberos principal, if one was provided during deployment. For example:
{
"name": "POS_Sales_per_Region",
"appVersion": "-SNAPSHOT",
"description": "Data Pipeline Application",
"change": {
"author": "joe",
"creationTimeMillis": 1668540944833,
"latest": true
},
"configuration": "{\"resources\":{\"memoryMB\":2048.0,\"virtualCores\":1.0},\"driverResources\":{\"memoryMB\":2048.0,\"virtualCores\":1.0},\"connections\":[{\"from\":\"GCS - POS Sales\",\"to\":\"Wrangler\"},{\"from\":\"Wrangler\",\"to\":\"GCS2\"}],\"comments\":[],\"postActions\":[],\"properties\":{},\"processTimingEnabled\":true,\"stageLoggingEnabled\":false,\"stages\":[{\"name\":\"GCS - POS Sales\",\"plugin\":{\"name\":\"GCSFile\",\"type\":\"batchsource\",\"label\":\"GCS - POS Sales\",\"artifact\":{\"name\":\"google-cloud\",\"version\":\"0.15.3\",\"scope\":\"SYSTEM\"},\"properties\":{\"project\":\"auto-detect\",\"format\":\"text\",\"skipHeader\":\"false\",\"serviceFilePath\":\"auto-detect\",\"filenameOnly\":\"false\",\"recursive\":\"false\",\"encrypted\":\"false\",\"schema\":\"{\\\"type\\\":\\\"record\\\",\\\"name\\\":\\\"etlSchemaBody\\\",\\\"fields\\\":[{\\\"name\\\":\\\"offset\\\",\\\"type\\\":\\\"long\\\"},{\\\"name\\\":\\\"body\\\",\\\"type\\\":\\\"string\\\"}]}\",\"referenceName\":\"pos-sales\",\"path\":\"gs://flat-files-1/POS-r01.txt\"}},\"outputSchema\":\"{\\\"type\\\":\\\"record\\\",\\\"name\\\":\\\"etlSchemaBody\\\",\\\"fields\\\":[{\\\"name\\\":\\\"offset\\\",\\\"type\\\":\\\"long\\\"},{\\\"name\\\":\\\"body\\\",\\\"type\\\":\\\"string\\\"}]}\",\"id\":\"GCS---POS-Sales\"},{\"name\":\"Wrangler\",\"plugin\":{\"name\":\"Wrangler\",\"type\":\"transform\",\"label\":\"Wrangler\",\"artifact\":{\"name\":\"wrangler-transform\",\"version\":\"4.2.3\",\"scope\":\"SYSTEM\"},\"properties\":{\"field\":\"*\",\"precondition\":\"false\",\"threshold\":\"1\",\"schema\":\"{\\\"type\\\":\\\"record\\\",\\\"name\\\":\\\"etlSchemaBody\\\",\\\"fields\\\":[{\\\"name\\\":\\\"Store_Nbr\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Item_Nbr\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"WM_Week\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Daily\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Whse_Nbr\\\",\\
\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Whse_Name\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"POS_Sales\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"POS_Qty\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"POS_Cost\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Net_Ship_Qty\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Sales_Type\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Sales_Description\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Max_Shelf_Qty\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Store_Specific_Retail\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Store_Specific_Cost\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Current_HO_Retail\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]}]}\",\"workspaceId\":\"af95f757-a2d8-4efb-90b0-fad0ff2a543b\",\"directives\":\"parse-as-csv :body \\u0027,\\u0027 true\\ndrop 
body\"}},\"outputSchema\":\"{\\\"type\\\":\\\"record\\\",\\\"name\\\":\\\"etlSchemaBody\\\",\\\"fields\\\":[{\\\"name\\\":\\\"Store_Nbr\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Item_Nbr\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"WM_Week\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Daily\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Whse_Nbr\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Whse_Name\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"POS_Sales\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"POS_Qty\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"POS_Cost\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Net_Ship_Qty\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Sales_Type\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Sales_Description\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Max_Shelf_Qty\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Store_Specific_Retail\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Store_Specific_Cost\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Current_HO_Retail\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]}]}\",\"inputSchema\":[{\"name\":\"GCS - POS 
Sales\",\"schema\":\"{\\\"type\\\":\\\"record\\\",\\\"name\\\":\\\"etlSchemaBody\\\",\\\"fields\\\":[{\\\"name\\\":\\\"offset\\\",\\\"type\\\":\\\"long\\\"},{\\\"name\\\":\\\"body\\\",\\\"type\\\":\\\"string\\\"}]}\"}],\"id\":\"Wrangler\"},{\"name\":\"GCS2\",\"plugin\":{\"name\":\"GCS\",\"type\":\"batchsink\",\"label\":\"GCS2\",\"artifact\":{\"name\":\"google-cloud\",\"version\":\"0.15.3\",\"scope\":\"SYSTEM\"},\"properties\":{\"project\":\"auto-detect\",\"suffix\":\"yyyy-MM-dd-HH-mm\",\"format\":\"csv\",\"serviceFilePath\":\"auto-detect\",\"location\":\"us\",\"referenceName\":\"pos-sales-per-region\",\"path\":\"gs://flat-files-1\"}},\"inputSchema\":[{\"name\":\"Wrangler\",\"schema\":\"{\\\"type\\\":\\\"record\\\",\\\"name\\\":\\\"etlSchemaBody\\\",\\\"fields\\\":[{\\\"name\\\":\\\"Store_Nbr\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Item_Nbr\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"WM_Week\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Daily\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Whse_Nbr\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Whse_Name\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"POS_Sales\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"POS_Qty\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"POS_Cost\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Net_Ship_Qty\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Sales_Type\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Sales_Description\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Max_Shelf_Qty\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Store_Specific_Retail\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Store_Specific_Cost\\\",\\\"type\\\":[\\\"string\\\",\\\"null\\\"]},{\\\"name\\\":\\\"Current_HO_Retail\\\",\\\"type\\\":[\\\"
string\\\",\\\"null\\\"]}]}\"}],\"id\":\"GCS2\"}],\"schedule\":\"0 * * * *\",\"engine\":\"spark\",\"numOfRecordsPreview\":100.0,\"description\":\"Data Pipeline Application\",\"maxConcurrentRuns\":1.0}",
"datasets": [],
"programs": [
{
"type": "Spark",
"app": "POS_Sales_per_Region",
"name": "phase-1",
"description": "Sources 'GCS - POS Sales' to sinks 'GCS2'."
},
{
"type": "Workflow",
"app": "POS_Sales_per_Region",
"name": "DataPipelineWorkflow",
"description": "Data Pipeline Workflow"
}
],
"plugins": [
{
"id": "GCS - POS Sales",
"name": "GCSFile",
"type": "batchsource"
},
{
"id": "GCS2:csv",
"name": "csv",
"type": "validatingOutputFormat"
},
{
"id": "GCS - POS Sales:text",
"name": "text",
"type": "validatingInputFormat"
},
{
"id": "GCS2",
"name": "GCS",
"type": "batchsink"
},
{
"id": "Wrangler",
"name": "Wrangler",
"type": "transform"
}
],
"artifact": {
"name": "cdap-data-pipeline",
"version": "6.8.0",
"scope": "SYSTEM"
}
}
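A common follow-up is to pull the programs list out of the details response, for example to decide which programs to start. The snippet below works on a trimmed response; the field names (name, programs, artifact) match the response above, while the values are illustrative:

```python
import json

# Trimmed details response shaped like the example above.
details = json.loads("""
{
  "name": "POS_Sales_per_Region",
  "programs": [
    {"type": "Spark", "app": "POS_Sales_per_Region", "name": "phase-1"},
    {"type": "Workflow", "app": "POS_Sales_per_Region", "name": "DataPipelineWorkflow"}
  ],
  "artifact": {"name": "cdap-data-pipeline", "version": "6.8.0", "scope": "SYSTEM"}
}
""")

# Group program names by type.
by_type = {}
for prog in details["programs"]:
    by_type.setdefault(prog["type"], []).append(prog["name"])
print(by_type)  # → {'Spark': ['phase-1'], 'Workflow': ['DataPipelineWorkflow']}
```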
HTTP Responses
Status Codes | Description |
---|---|
200 OK | The call was successful, and the body contains the results |
Upgrade an Application
Notes:
Upgrading realtime pipelines to use the latest version of application artifacts is not supported, except for pipelines created in CDAP 6.8.0 with a Kafka streaming source.
Back up all applications before performing the upgrade.
To get the name of the application you want to upgrade, use the GET request listed in “List Applications”.
If upgrading the instance from versions < 6.8 to versions >= 6.8, it’s important to set latestOnly=false for the above GET request.
To upgrade an application in a namespace to use the latest version of application artifacts and plugin artifacts, run the following POST request:
POST /v3/namespaces/<namespace-id>/apps/<app-id>/upgrade
Parameter | Description |
---|---|
namespace-id | Namespace ID. |
app-id | Name of the application. |
artifactScope | Optional scope filter. If not specified, artifacts in both the SYSTEM and USER scopes are used for the upgrade. |
allowSnapshot | Optional filter to allow SNAPSHOT versions of artifacts for the upgrade. Set to true to allow SNAPSHOT versions; set to false to ignore them. Default is false. |
The response contains the status code and the application identifier, including the application name, version, namespace, and entity type. For example:
POST /v3/namespaces/default/apps/purchaseWordCount/upgrade
{
"statusCode": 200,
"appId": {
"application": "purchaseWordCount",
"version": "-SNAPSHOT",
"namespace": "default",
"entity": "APPLICATION"
}
}
Upgrade a List of Applications
Notes:
Upgrading real-time pipelines to use the latest version of application artifacts is not supported.
Back up all applications before performing the upgrade.
To get a list of all the applications you want to upgrade to use the latest version of application artifacts and artifact plugins, use the GET request listed in “Details of a List of Applications”.
If upgrading the instance from versions < 6.8 to versions >= 6.8, it’s important to set latestOnly=false for the above GET request.
To upgrade a list of existing applications in a namespace to use the latest version of application artifacts and plugin artifacts, run the following POST request:
POST /v3/namespaces/<namespace-id>/upgrade
Parameter | Description |
---|---|
namespace-id | Namespace ID. |
artifactScope | Optional scope filter. If not specified, artifacts in both the USER and SYSTEM scopes are used for the upgrade. |
allowSnapshot | Optional filter to allow SNAPSHOT versions of artifacts for the upgrade. Set to true to allow SNAPSHOT versions; set to false to ignore them. Default is false. |
The request body is a JSON array of the applications to upgrade. For example, the following request body will upgrade the listed applications in the default namespace to use the latest version of application artifacts and plugin artifacts.
POST /v3/namespaces/default/upgrade
[
{
"type": "App",
"name": "POS_Sales_per_Region",
"version": "-SNAPSHOT",
"description": "POS Sales per Region",
"artifact": {
"name": "cdap-data-pipeline",
"version": "6.7.2",
"scope": "SYSTEM"
}
},
{
"type": "App",
"name": "POS_Daily_Sales_per_Region",
"version": "-SNAPSHOT",
"description": "POS Daily Sales per Region",
"artifact": {
"name": "cdap-data-pipeline",
"version": "6.7.2",
"scope": "SYSTEM"
}
}
]
List Versions of an Application
To list all the versions of an application, submit an HTTP GET:
GET /v3/namespaces/<namespace-id>/apps/<app-id>/versions
Parameter | Description |
---|---|
namespace-id | Namespace ID |
app-id | Name of the application being called |
The response will be a JSON array containing details about the application. The details returned depend on the application.
For example, depending on the versions deployed:
GET /v3/namespaces/default/apps/SportResults/versions
could return in a JSON array a list of the versions of the application:
["1.0.1", "2.0.3"]
Delete an Application
To delete an application, together with all of its MapReduce or Spark programs, schedules, custom services, and workflows, submit an HTTP DELETE:
DELETE /v3/namespaces/<namespace-id>/apps/<app-id>
(DEPRECATED) To delete a specific version of an application, submit an HTTP DELETE that includes the version:
DELETE /v3/namespaces/<namespace-id>/apps/<app-id>/versions/<version-id>
Parameter | Description |
---|---|
namespace-id | Namespace ID |
app-id | Name of the application to be deleted |
version-id | (DEPRECATED) Version of the application to be deleted |
Note: The app-id in this URL is the name of the application as configured by the application specification, and not necessarily the same as the name of the JAR file that was used to deploy the application.
This does not delete the datasets associated with the application because they belong to the namespace, not the application. Also, this does not delete the artifact used to create the application.
Delete All Applications
To delete all the applications in a namespace, use:
DELETE /v3/namespaces/<namespace-id>/apps
Export All Application Details
You can export all application details for all namespaces as a ZIP archive file with the following request:
GET /v3/export/apps
If you’re running Linux or Mac, you can use the curl command to get the output and write it to file using the command:
curl http://localhost:11015/v3/export/apps > outfile.zip
If you’re running Windows and have powershell, you can use this command:
powershell -c Invoke-WebRequest http://localhost:11015/v3/export/apps -OutFile ./outfile.zip
These commands write the output to a file called outfile.zip in the directory where you ran the command. outfile.zip contains the JSON files for all of the applications in all namespaces in the CDAP instance.
Delete a Streaming Application State (6.9.1+)
To delete a streaming application state, submit an HTTP DELETE:
DELETE /v3/namespaces/<namespace-name>/apps/<app-name>/state
Parameter | Description |
---|---|
namespace-name | Namespace name. |
app-name | Name of the application with the state to be deleted. |
You might use this endpoint after you upgrade a CDAP instance or stop a streaming pipeline to delete the state for the last processed record.
Note: This endpoint is supported for Kafka Consumer Streaming and Google Cloud Pub/Sub Streaming sources.
Program Lifecycle
Details of a Program
After an application is deployed, you can retrieve the details of its MapReduce and Spark programs, custom services, schedules, workers, and workflows by submitting an HTTP GET request:
GET /v3/namespaces/<namespace-id>/apps/<app-id>/<program-type>/<program-id>
To retrieve information about the schedules of the program's workflows, use:
GET /v3/namespaces/<namespace-id>/apps/<app-id>/workflows/<workflow-id>/schedules
Parameter | Description |
---|---|
namespace-id | Namespace ID |
app-id | Name of the application being called |
program-type | One of mapreduce, services, spark, workers, or workflows |
program-id | Name of the MapReduce, custom service, Spark, worker, or workflow being called |
workflow-id | Name of the workflow being called, when retrieving schedules |
The response will be a JSON array containing details about the program. The details returned depend on the program type.
For example:
GET /v3/namespaces/default/apps/SportResults/services/UploadService
will return in a JSON array information about the UploadService of the application SportResults. The results will be similar to this (pretty-printed and portions deleted to fit):
{
"className": "io.cdap.cdap.examples.sportresults.UploadService",
"description": "A service for uploading sport results for a given league and season.",
"handlers": {
"UploadHandler": {
"className": "io.cdap.cdap.examples.sportresults.UploadService$UploadHandler",
"datasets": [
"results"
],
"description": "",
"endpoints": [
{
"method": "PUT",
"path": "/leagues/{league}/seasons/{season}"
},
...
],
"name": "UploadHandler",
"plugins": {},
"properties": {}
}
},
"instances": 1,
"name": "UploadService",
"plugins": {},
"resources": {
"memoryMB": 512,
"virtualCores": 1
}
}
MapReduce Jobs Associated with a Namespace (Deprecated)
To get a list of MapReduce jobs associated with a namespace, use:
GET /v3/namespaces/<namespace-id>/mapreduce
Parameter | Description |
---|---|
namespace-id | Namespace ID |
The response will be a JSON array containing details about the MapReduce program:
Parameter | Description |
---|---|
type | One of Mapreduce, Spark, Service, Worker, or Workflow |
app | Name of the application being called |
name | Name of the MapReduce, custom service, Spark, worker, or workflow being called |
description | Description of the program |
Spark Jobs Associated with a Namespace
To get a list of Spark jobs associated with a namespace, use:
GET /v3/namespaces/<namespace-id>/spark
Parameter | Description |
---|---|
namespace-id | Namespace ID |
The response will be a JSON array containing details about the Spark program:
Parameter | Description |
---|---|
type | One of Mapreduce, Spark, Service, Worker, or Workflow |
app | Name of the application being called |
name | Name of the MapReduce, custom service, Spark, worker, or workflow being called |
description | Description of the program |
Workflows Associated with a Namespace
To get a list of workflows associated with a namespace, use:
GET /v3/namespaces/<namespace-id>/workflows
Parameter | Description |
---|---|
namespace-id | Namespace ID |
The response will be a JSON array containing details about the workflows:
Parameter | Description |
---|---|
type | One of Mapreduce, Spark, Service, Worker, or Workflow |
app | Name of the application being called |
name | Name of the MapReduce, custom service, Spark, worker, or workflow being called |
description | Description of the program |
Services Associated with a Namespace
To get a list of services associated with a namespace, use:
GET /v3/namespaces/<namespace-id>/services
Parameter | Description |
---|---|
namespace-id | Namespace ID |
The response will be a JSON array containing details about the services:
Parameter | Description |
---|---|
type | One of Mapreduce, Spark, Service, Worker, or Workflow |
app | Name of the application being called |
name | Name of the MapReduce, custom service, Spark, worker, or workflow being called |
description | Description of the program |
Workers Associated with a Namespace
To get a list of workers associated with a namespace, use:
GET /v3/namespaces/<namespace-id>/workers
The response will be a JSON array containing details about the workers:
Parameter | Description |
---|---|
type | One of Mapreduce, Spark, Service, Worker, or Workflow |
app | Name of the application being called |
name | Name of the MapReduce, custom service, Spark, worker, or workflow being called |
description | Description of the program |
Spark Program Status for an Application
To check if a Spark program is available for an application, use:
GET /v3/namespaces/<namespace-id>/apps/<app-name>/spark/<program-name>/available
Parameter | Description |
---|---|
namespace-id | Namespace ID |
app-name | Name of the application. |
program-name | Name of the program. |
Start a Program
After an application is deployed, you can start its MapReduce and Spark programs, custom services, workers, or workflows by submitting an HTTP POST request:
POST /v3/namespaces/<namespace-id>/apps/<app-id>/<program-type>/<program-id>/start
You can start a program of a particular version of the application by submitting an HTTP POST request that includes the version:
POST /v3/namespaces/<namespace-id>/apps/<app-id>/versions/<version-id>/<program-type>/<program-id>/start
Note: Concurrent runs of workers across multiple versions of the same application are not allowed.
When starting a program, you can optionally specify runtime arguments as a JSON map in the request body. CDAP will use these runtime arguments only for this single invocation of the program.
Parameter | Description |
---|---|
namespace-id | Namespace ID |
app-id | Name of the application being called |
version-id | Version of the application being called |
program-type | One of mapreduce, services, spark, workers, or workflows |
program-id | Name of the MapReduce, custom service, Spark, worker, or workflow being called |
Service, Spark, and Worker programs do not allow concurrent program runs. Programs of these types cannot be started unless the program is in the STOPPED state. MapReduce and Workflow programs support concurrent runs. If you start one of these programs, a new run will be started even if other runs of the program have not finished yet.
For example:
POST /v3/namespaces/default/apps/SportResults/services/UploadService/start
'{ "foo":"bar", "this":"that" }'
will start the UploadService of the SportResults application with two runtime arguments.
Start Multiple Programs
You can start multiple programs from different applications and program types by submitting an HTTP POST request:
POST /v3/namespaces/<namespace-id>/start
with a JSON array in the request body consisting of multiple JSON objects with these parameters:
Parameter | Description |
---|---|
appId | Name of the application being called |
programType | One of Service, Mapreduce, Spark, Worker, or Workflow |
programId | Name of the MapReduce, custom service, Spark, worker, or workflow being started |
runtimeargs | Optional JSON object containing a string to string mapping of runtime arguments to start the program with |
The response will be a JSON array containing a JSON object for each object in the input. Each JSON object will contain these parameters:
Parameter | Description |
---|---|
appId | Name of the application being called |
programType | One of Service, Mapreduce, Spark, Worker, or Workflow |
programId | Name of the MapReduce, custom service, Spark, worker, or workflow being started |
statusCode | The status code from starting an individual JSON object |
error | If an error, a description of why the program could not be started (for example, the specified program was not found) |
runId | A UUID that uniquely identifies a run with CDAP |
For example:
POST /v3/namespaces/default/start
[
{"appId": "App1", "programType": "Service", "programId": "Service1"},
{"appId": "App1", "programType": "Spark", "programId": "Spark2"},
{"appId": "App2", "programType": "Spark", "programId": "Spark1", "runtimeargs": { "arg1":"val1" }}
]
will attempt to start the three programs listed in the request body. It will return a response such as:
[
{"appId": "App1", "runId":"5f55fa1a-5700-11ed-a5d2-76b189bf0786", "programType": "Service", "programId": "Service1", "statusCode": 200},
{"appId": "App1", "runId":"5f55fa1a-5700-11ed-a5d2-76b189bf0786", "programType": "Spark", "programId": "Spark2", "statusCode": 200},
{"appId": "App2", "runId":"5f55fa1a-5700-11ed-a5d2-76b189bf0786", "programType":"Spark", "programId":"Spark1", "statusCode":404, "error": "App: App2 not found"}
]
In this particular example, the service and Spark programs in the App1 application were successfully started, and there was an error starting the last program because the App2 application does not exist.
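Because each entry in the bulk-start response carries its own statusCode, callers typically separate successes from failures rather than treating the whole call as one result. A sketch of that, using a hard-coded `responses` list that mirrors the example response above (in practice it would be the parsed JSON body returned by the POST):

```python
# Hard-coded stand-in for the parsed bulk-start response.
responses = [
    {"appId": "App1", "programType": "Service", "programId": "Service1",
     "statusCode": 200},
    {"appId": "App2", "programType": "Spark", "programId": "Spark1",
     "statusCode": 404, "error": "App: App2 not found"},
]

# Anything without a 200 status failed to start; report why.
failures = [r for r in responses if r["statusCode"] != 200]
for r in failures:
    print(f'{r["appId"]}/{r["programId"]}: {r.get("error", "unknown error")}')
# → App2/Spark1: App: App2 not found
```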
Stop a Program
You can stop the MapReduce and Spark programs, custom services, workers, and workflows of an application by submitting an HTTP POST request:
POST /v3/namespaces/<namespace-id>/apps/<app-id>/<program-type>/<program-id>/stop
You can stop the programs of a particular application version by submitting an HTTP POST request that includes the version:
POST /v3/namespaces/<namespace-id>/apps/<app-id>/versions/<version-id>/<program-type>/<program-id>/stop
Parameter | Description |
---|---|
namespace-id | Namespace ID |
app-id | Name of the application being called |
version-id | Version of the application being called |
program-type | One of mapreduce, services, spark, workers, or workflows |
program-id | Name of the MapReduce, custom service, Spark, worker, or workflow being stopped |
A program that is in the STOPPED state cannot be stopped. If there are multiple runs of the program in the RUNNING state, this call will stop one of the runs, but not all of the runs.
For example:
POST /v3/namespaces/default/apps/SportResults/services/UploadService/stop
will stop the UploadService service in the SportResults application.
Stop a Program Run
You can stop a specific run of a program by submitting an HTTP POST request:
POST /v3/namespaces/<namespace-id>/apps/<app-id>/<program-type>/<program-id>/runs/<run-id>/stop
Parameter | Description |
---|---|
namespace-id | Namespace ID |
app-id | Name of the application being called |
program-type | One of mapreduce, services, spark, workers, or workflows |
program-id | Name of the MapReduce, custom service, Spark, worker, or workflow being called |
run-id | Run ID of the run being called |
For example:
POST /v3/namespaces/default/apps/PurchaseHistory/mapreduce/PurchaseHistoryBuilder/runs/631bc459-a9dd-4218-9ea0-d46fb1991f82/stop
will stop a specific run of the PurchaseHistoryBuilder MapReduce program in the PurchaseHistory application.
Stop Multiple Programs
You can stop multiple programs from different applications and program types by submitting an HTTP POST request:
POST /v3/namespaces/<namespace-id>/stop
with a JSON array in the request body consisting of multiple JSON objects with these parameters:
Parameter | Description |
---|---|
appId | Name of the application being called |
programType | One of Mapreduce, Service, Spark, Worker, or Workflow |
programId | Name of the MapReduce, custom service, Spark, worker, or workflow being stopped |
The response will be a JSON array containing a JSON object corresponding to each object in the input. Each JSON object will contain these parameters:
Parameter | Description |
---|---|
appId | Name of the application being called |
programType | One of Mapreduce, Service, Spark, Worker, or Workflow |
programId | Name of the MapReduce, custom service, Spark, worker, or workflow being stopped |
statusCode | The status code from stopping the individual program |
error | If an error, a description of why the program could not be stopped (for example, the specified program was not found) |
For example:
POST /v3/namespaces/default/stop
[
{"appId": "App1", "programType": "Service", "programId": "Service1"},
{"appId": "App1", "programType": "Mapreduce", "programId": "MapReduce2"},
{"appId": "App2", "programType": "Spark", "programId": "Spark2"}
]
will attempt to stop the three programs listed in the request body. It will return a response such as:
[
{"appId": "App1", "programType": "Service", "programId": "Service1", "statusCode": 200},
{"appId": "App1", "programType": "Mapreduce", "programId": "MapReduce2", "statusCode": 200},
{"appId": "App2", "programType": "Spark", "programId": "Spark2", "statusCode": 404, "error": "App: App2 not found"}
]
In this particular example, the service and MapReduce programs in the App1 application were successfully stopped, and there was an error stopping the last program because the App2 application does not exist.
Status of a Program
To retrieve the status of a program, submit an HTTP GET request:
GET /v3/namespaces/<namespace-id>/apps/<app-id>/<program-type>/<program-id>/status
Parameter | Description |
---|---|
namespace-id | Namespace ID |
app-id | Name of the application being called |
program-type | One of mapreduce, schedules, services, spark, workers, or workflows |
program-id | Name of the MapReduce, schedule, custom service, Spark, worker, or workflow being called |
The response will be a JSON object with the status of the program. For example, retrieving the status of the service UploadService in the application SportResults:
GET /v3/namespaces/default/apps/SportResults/services/UploadService/status
will return (pretty-printed) a response such as:
{
"status": "STOPPED"
}
Status of Multiple Programs
You can retrieve the status of multiple programs from different applications and program types by submitting an HTTP POST request:
POST /v3/namespaces/<namespace-id>/status
with a JSON array in the request body consisting of multiple JSON objects with these parameters:
Parameter | Description |
---|---|
appId | Name of the application being called |
programType | One of Mapreduce, Schedule, Service, Spark, Worker, or Workflow |
programId | Name of the MapReduce, schedule, custom service, Spark, worker, or workflow being called |
The response will be the same JSON array as submitted with additional parameters for each of the underlying JSON objects:
Parameter | Description |
---|---|
status | Maps to the status of the queried program if the query is valid and the program was found |
statusCode | The status code from retrieving the status of the individual program |
error | If an error, a description of why the status was not retrieved (for example, the specified program was not found) |
The status and error fields are mutually exclusive: if there is an error, there will never be a status, and vice versa.
For example:
POST /v3/namespaces/default/status
[
{ "appId": "MyApp", "programType": "workflow", "programId": "MyWorkflow" },
{ "appId": "MyApp2", "programType": "service", "programId": "MyService" }
]
will retrieve the status of two programs. It will return a response such as:
[
{ "appId":"MyApp", "programType":"workflow", "programId":"MyWorkflow", "status":"RUNNING", "statusCode":200 },
{ "appId":"MyApp2", "programType":"service", "programId":"MyService", "error":"Program not found", "statusCode":404 }
]
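The request body is straightforward to assemble from (application, type, program) triples; a sketch using only the Python standard library (the helper name is illustrative):

```python
import json

def status_request_body(programs):
    """Build the JSON body for POST /v3/namespaces/<namespace-id>/status.

    `programs` is an iterable of (appId, programType, programId) triples.
    """
    return json.dumps(
        [{"appId": a, "programType": t, "programId": p} for a, t, p in programs]
    )

body = status_request_body([
    ("MyApp", "workflow", "MyWorkflow"),
    ("MyApp2", "service", "MyService"),
])
print(body)
```

The same body shape (minus runtimeargs) works for the batch start and stop endpoints described earlier.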
Schedule Lifecycle
Schedules can only be created for workflows.
Add a Schedule
To add a schedule for a program to an application, submit an HTTP PUT request:
PUT /v3/namespaces/<namespace-id>/apps/<app-id>/schedules/<schedule-id>
To add the schedule to an application with a non-default version, submit an HTTP PUT request with the version specified:
PUT /v3/namespaces/<namespace-id>/apps/<app-id>/versions/<version-id>/schedules/<schedule-id>
Parameter | Description |
---|---|
namespace-id | Namespace ID |
app-id | Name of the application |
schedule-id | Name of the schedule; it is unique to the application and, if specified, the application version |
version-id | Version of the application, typically following semantic versioning |
The request body is a JSON object specifying the details of the schedule to be created:
{
"name": "<name of the schedule>",
"description": "<schedule description>",
"namespace": "<namespace of the schedule>",
"application": "<application of the schedule>",
"applicationVersion": "<application version of the schedule>",
"program": {
"programName": "<name of the program>",
"programType": "WORKFLOW"
},
"properties": {
"<key>": "<value>",
...
},
"constraints": [
{
"type": "<constraint type>",
"waitUntilMet": <boolean>,
...
},
...
],
"trigger": {
"type": "<trigger type>",
...
},
"timeoutMillis": <timeout in milliseconds>
}
where a trigger is one of the trigger types. It can be a time trigger:
{
"type": "TIME",
"cronExpression": "<cron expression>"
}
or a partition trigger:
{
"type": "PARTITION",
"dataset": {
"namespace": "<namespace of the dataset>",
"dataset": "<name of the dataset>"
},
"numPartitions": <required number of partitions>
}
or a program status trigger:
{
"programId": {
"namespace": "<namespace of the program>",
"application": "<application name of the program>",
"version": "<application version of the program>",
"type": "<type of the program>",
"entity": "PROGRAM",
"program": "<name of the program>"
},
"programStatuses": [ "COMPLETED", "FAILED", "KILLED" ],
"type": "PROGRAM_STATUS"
}
or an AND trigger, where "triggers" is a non-empty list of any type of triggers:
{
"triggers" : [
{
"type": "<trigger type>",
...
},
...
],
"type": "AND"
}
or an OR trigger, where "triggers" is a non-empty list of any type of triggers:
{
"triggers" : [
{
"type": "<trigger type>",
...
},
...
],
"type": "OR"
}
and a constraint can be one of:
{
"type": "CONCURRENCY",
"maxConcurrency": <max number of runs>,
"waitUntilMet": <boolean>
}
{
"type": "DELAY",
"millisAfterTrigger": <milliseconds to delay>,
"waitUntilMet": <boolean>
}
{
"type": "TIME_RANGE",
"startTime": "<time in form HH:mm>",
"endTime": "<time in form HH:mm>",
"timeZone": "<name of the time zone, e.g., PST>",
"waitUntilMet": <boolean>
}
{
"type": "LAST_RUN",
"millisSinceLastRun": <milliseconds since last run>,
"waitUntilMet": <boolean>
}
Note: For any schedule, the program must be a workflow and the programType must be set to WORKFLOW.
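As a sketch of how the pieces fit together, the following builds the request body for a daily time-triggered schedule with a concurrency constraint (the application, workflow, and schedule names are illustrative; this shows one valid shape, not the only one):

```python
import json

def time_schedule(namespace, app, workflow, name, cron, max_concurrency=1):
    """Build the body for PUT .../apps/<app-id>/schedules/<schedule-id>."""
    return {
        "name": name,
        "description": "Runs %s on a cron schedule" % workflow,
        "namespace": namespace,
        "application": app,
        "program": {
            # For any schedule, the program must be a workflow.
            "programName": workflow,
            "programType": "WORKFLOW",
        },
        "properties": {},
        "constraints": [
            {"type": "CONCURRENCY", "maxConcurrency": max_concurrency,
             "waitUntilMet": False}
        ],
        "trigger": {"type": "TIME", "cronExpression": cron},
    }

payload = time_schedule("default", "PurchaseHistory", "PurchaseHistoryWorkflow",
                        "DailySchedule", "0 4 * * *")
print(json.dumps(payload, indent=2))
```

Swapping the trigger object for a PARTITION, PROGRAM_STATUS, AND, or OR trigger follows the same pattern.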
HTTP Responses
Status Codes | Description |
---|---|
409 Conflict | Schedule with the same name already exists |
Update a Schedule
To update a schedule, submit an HTTP POST request:
POST /v3/namespaces/<namespace-id>/apps/<app-id>/schedules/<schedule-id>/update
To update a schedule of an application with a non-default version, submit an HTTP POST request with the version specified:
POST /v3/namespaces/<namespace-id>/apps/<app-id>/versions/<version-id>/schedules/<schedule-id>/update
Parameter | Description |
---|---|
namespace-id | Namespace ID |
app-id | Name of the application |
schedule-id | Name of the schedule; it is unique to the application and, if specified, the application version |
version-id | Version of the application, typically following semantic versioning |
The request body is a JSON object specifying the details of the schedule to be updated, and follows the same form as documented in Add a Schedule.
Only changes to the schedule configuration are supported; changes to the schedule name, or to the program associated with it, are not allowed. If any properties are provided, they will overwrite all existing properties. You must include all properties, even ones you are not altering.
HTTP Responses
Status Codes | Description |
---|---|
400 Bad Request | If the new schedule type does not match the existing schedule type or there are other client errors |
Retrieving a Schedule
To retrieve a schedule in an application, submit an HTTP GET request:
GET /v3/namespaces/<namespace-id>/apps/<app-id>/schedules/<schedule-name>
Parameter | Description |
---|---|
namespace-id | Namespace ID |
app-id | Name of the application |
schedule-name | Name of the schedule |
The response will contain the schedule in the same form described in this topic in “Add a Schedule”.
List Schedules
To list all of the schedules for an application, use an HTTP GET request:
GET /v3/namespaces/<namespace-id>/apps/<app-id>/schedules
As schedules are created for a workflow, you can also list the schedules for a workflow of an application. You can use Details of an Application to obtain the workflows of an application.
Optionally, you can filter the schedules by trigger type and schedule status using the query parameters trigger-type and schedule-status. For more information, see Schedules.
To list all of the schedules of a workflow of an application, use an HTTP GET request:
GET /v3/namespaces/<namespace-id>/apps/<app-id>/workflows/<workflow-id>/schedules
The response will contain a list of schedules in the same form as described in “Add a Schedule”.
Parameter | Description |
---|---|
namespace-id | Namespace ID |
app-id | Name of the application |
workflow-id | Name of the workflow |
Next Scheduled Run Time
To list the next time that the workflow will be scheduled by a time trigger, use the parameter nextruntime:
GET /v3/namespaces/<namespace-id>/apps/<app-id>/workflows/<workflow-id>/nextruntime
Parameter | Description |
---|---|
namespace-id | Namespace ID |
app-id | Name of the application |
workflow-id | Name of the workflow |
Example: Retrieving The Next Runtime
HTTP Method |
|
---|---|
HTTP Response |
|
Description | Retrieves the next runtime of the workflow PurchaseHistoryWorkflow of the application PurchaseHistory |
Next Scheduled Run Time in Batch
To list the next time that all workflows in a namespace will be scheduled by a time trigger, use the parameter nextruntime:
POST /v3/namespaces/<namespace-id>/nextruntime
Parameter | Description |
---|---|
namespace-id | Namespace ID |
The request body must be a JSON array of objects with the following parameters:
Parameter | Description |
---|---|
appId | Name of the application being called |
programType | Currently, only the Workflow type is supported |
programId | Name of the program being called |
The response will be an array of JSON objects, each of which contains the three input parameters plus a statusCode field, and either a schedules field or, if an error occurred, an error field.
Parameter | Description |
---|---|
schedules | The next scheduled runtimes for the program defined by the individual JSON object's parameters |
statusCode | The status code from retrieving the program runs |
error | If an error, a description of why the status was not retrieved (for example, the specified program was not found, or the requested JSON object was missing a parameter) |
Example
HTTP Method |
|
---|---|
HTTP Body |
|
HTTP Response |
|
Description | Attempt to retrieve the next scheduled run of the service Service1 in the application App1, the workflow testWorkflow in the application App1 and the workflow DataPipelineWorkflow in the application App2, all in the namespace default |
Previous Run Time of All Schedules
To list the previous scheduled run time for all programs passed in the request body, use the parameter previousruntime:
POST /v3/namespaces/<namespace-id>/previousruntime
Parameter | Description |
---|---|
namespace-id | Namespace ID |
The request body must be a JSON array of objects with the following parameters:
Parameter | Description |
---|---|
appId | Name of the application being called |
programType | Currently, only the Workflow type is supported |
programId | Name of the program being called |
The response will be an array of JSON objects, each of which contains the three input parameters plus a statusCode field, and either a schedules field or, if an error occurred, an error field.
Parameter | Description |
---|---|
schedules | The previous scheduled runtimes for the program defined by the individual JSON object's parameters |
statusCode | The status code from retrieving the program runs |
error | If an error, a description of why the status was not retrieved (for example, the specified program was not found, or the requested JSON object was missing a parameter) |
Example
HTTP Method |
|
---|---|
HTTP Body |
|
HTTP Response |
|
Description | Attempt to retrieve the previous scheduled run of the service Service1 in the application App1, the workflow testWorkflow in the application App1 and the workflow DataPipelineWorkflow in the application App2, all in the namespace default |
Previous Run Time of a Schedule
To list the previous time that the scheduled program ran, use the parameter previousruntime:
GET /v3/namespaces/<namespace-id>/apps/<app-id>/workflows/<workflow-id>/previousruntime
Parameter | Description |
---|---|
namespace-id | Namespace ID |
app-id | Name of the application |
workflow-id | Name of the workflow |
Example
HTTP Method |
|
---|---|
HTTP Response |
|
Description | Retrieves the previous runtime of the workflow PurchaseHistoryWorkflow of the application PurchaseHistory |
Delete a Schedule
To delete a schedule, submit an HTTP DELETE request:
DELETE /v3/namespaces/<namespace-id>/apps/<app-id>/schedules/<schedule-id>
To delete a schedule of an application with a non-default version, submit an HTTP DELETE request with the version specified:
DELETE /v3/namespaces/<namespace-id>/apps/<app-id>/versions/<version-id>/schedules/<schedule-id>
Parameter | Description |
---|---|
namespace-id | Namespace ID |
app-id | Name of the application |
schedule-id | Name of the schedule to be deleted; it is unique to the application and, if specified, the application version |
version-id | Version of the application |
HTTP Responses
Status Codes | Description |
---|---|
404 Not Found | If the schedule given by schedule-id is not found |
Schedule: Disable and Enable
For a schedule, you can disable and enable it using the Microservices.
Disable: To disable a schedule means that the program associated with that schedule will not trigger again until the schedule is enabled.
Enable: To enable a schedule means that the trigger is reset, and the program associated will run again at the next scheduled time.
As a schedule is initially deployed in a disabled state, a call to this API is needed to enable it.
To disable or enable a schedule, use:
POST /v3/namespaces/<namespace-id>/apps/<app-id>/schedules/<schedule-id>/disable
POST /v3/namespaces/<namespace-id>/apps/<app-id>/schedules/<schedule-id>/enable
Note: You can also use suspend and resume instead of disable and enable.
Parameter | Description |
---|---|
namespace-id | Namespace ID |
app-id | Name of the application |
schedule-id | Name of the schedule |
Example: Disabling a Schedule
HTTP Method |
|
---|---|
HTTP Response |
|
Description | Disables the schedule DailySchedule of the application PurchaseHistory |
Container Information
To find out the address of a program's container host and the container's debug port, you can query CDAP for a service's live info via an HTTP GET method:
GET /v3/namespaces/<namespace-id>/apps/<app-id>/<program-type>/<program-id>/live-info
Parameter | Description |
---|---|
namespace-id | Namespace ID |
app-id | Name of the application being called |
program-type | One of services or workers |
program-id | Name of the program (service or worker) |
Example:
GET /v3/namespaces/default/apps/WordCount/services/RetrieveCounts/live-info
The response is formatted in JSON; an example of this is shown in CDAP Testing and Debugging.
Scaling
You can retrieve the number of instances running for different programs from various applications and program types using an HTTP POST method:
POST /v3/namespaces/<namespace-id>/instances
Parameter | Description |
---|---|
namespace-id | Namespace ID |
with a JSON array in the request body consisting of multiple JSON objects with these parameters:
Parameter | Description |
---|---|
appId | Name of the application being called |
programType | One of Service or Worker |
programId | Name of the program (service or worker) being called |
The response will be the same JSON array as submitted with additional parameters for each of the underlying JSON objects:
Parameter | Description |
---|---|
requested | Number of instances the user requested for the program defined by the individual JSON object's parameters |
provisioned | Number of instances that are actually running for the program defined by the individual JSON object's parameters |
statusCode | The status code from retrieving the instance count of the individual program |
error | If an error, a description of why the status was not retrieved (for example, the specified program was not found, or the requested JSON object was missing a parameter) |
Note: The requested and provisioned fields are mutually exclusive of the error field.
Example
HTTP Method |
|
---|---|
HTTP Body |
|
HTTP Response |
|
Description | Attempt to retrieve the instances of programType Worker in the application MyApp1, and the service handler MyHandler1 in the user service MySvc1 in the application MyApp3, all in the namespace default |
Scaling Services
You can query or change the number of instances of a service by using the instances parameter with HTTP GET or PUT methods:
GET /v3/namespaces/<namespace-id>/apps/<app-id>/services/<service-id>/instances
PUT /v3/namespaces/<namespace-id>/apps/<app-id>/services/<service-id>/instances
with the arguments as a JSON string in the body:
{ "instances" : <quantity> }
Parameter | Description |
---|---|
namespace-id | Namespace ID |
app-id | Name of the application |
service-id | Name of the service |
quantity | Number of instances to be used |
Note: You can scale system services using the Monitor HTTP RESTful API Scaling System Services.
Examples
Retrieve the number of instances of the service CatalogLookup in the application PurchaseHistory in the namespace default:
GET /v3/namespaces/default/apps/PurchaseHistory/services/CatalogLookup/instances
Set the number of handler instances of the service RetrieveCounts of the application WordCount:
PUT /v3/namespaces/default/apps/WordCount/services/RetrieveCounts/instances
with the arguments as a JSON string in the body:
{ "instances" : 2 }
Using curl and the CDAP Sandbox:
Linux:
$ curl -w"\n" -X PUT "http://localhost:11015/v3/namespaces/default/apps/WordCount/services/RetrieveCounts/instances" \
  -d '{ "instances" : 2 }'
Windows:
> curl -X PUT "http://localhost:11015/v3/namespaces/default/apps/WordCount/services/RetrieveCounts/instances" ^
  -d "{ \"instances\" : 2 }"
Scaling Workers
You can query or change the number of instances of a worker by using the instances parameter with HTTP GET or PUT methods:
GET /v3/namespaces/<namespace-id>/apps/<app-id>/workers/<worker-id>/instances
PUT /v3/namespaces/<namespace-id>/apps/<app-id>/workers/<worker-id>/instances
with the arguments as a JSON string in the body:
{ "instances" : <quantity> }
Parameter | Description |
---|---|
namespace-id | Namespace ID |
app-id | Name of the application |
worker-id | Name of the worker |
quantity | Number of instances to be used |
Example
Retrieve the number of instances of the worker DataWorker in the application DemoApp in the namespace default:
GET /v3/namespaces/default/apps/DemoApp/workers/DataWorker/instances
Run Records
To see all the runs of a selected program (MapReduce programs, Spark programs, services, or workflows), issue an HTTP GET to the program's URL with the runs parameter. This will return a JSON list of all runs for the program, each with a start time, end time, and program status:
GET /v3/namespaces/<namespace-id>/apps/<app-id>/<program-type>/<program-id>/runs
Parameter | Description |
---|---|
namespace-id | Namespace ID |
app-id | Name of the application |
program-type | One of mapreduce, services, spark, or workflows |
program-id | Name of the MapReduce, custom service, Spark, or workflow being called |
You can filter the runs by the status of a program, the start and end times, and can limit the number of returned records:
Query Parameter | Description |
---|---|
status | Status of the runs to return (for example, running, completed, failed, or killed) |
start | start timestamp |
end | end timestamp |
limit | maximum number of returned records |
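These filters are ordinary query parameters; a sketch that assembles the runs URL with Python's standard library (the helper name is illustrative):

```python
from urllib.parse import urlencode

def runs_url(namespace, app, program_type, program, **filters):
    """Build a runs URL with optional status/start/end/limit query filters."""
    base = "/v3/namespaces/%s/apps/%s/%s/%s/runs" % (
        namespace, app, program_type, program)
    # Drop filters the caller left unset, then encode the rest.
    query = {k: v for k, v in filters.items() if v is not None}
    return base + ("?" + urlencode(query) if query else "")

url = runs_url("default", "SportResults", "mapreduce", "ScoreCounter",
               status="completed", limit=10)
print(url)
```

Omitting all keyword arguments yields the unfiltered runs endpoint.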
The response will be a JSON array containing a JSON object for each run of the program. Each JSON object will contain these parameters:
Parameter | Description |
---|---|
| A UUID that uniquely identifies a run within CDAP, with the start and end times in seconds since the start of the Epoch (midnight 1/1/1970). Use that |
| The timestamp at which the program was requested to start by the user. |
| The timestamp at which the program actually started. |
| The timestamp at which this run was suspended (if it was suspended). |
| The timestamp at which this run was resumed (if it was resumed after being suspended). |
| The timestamp at which the request to stop this run was made. |
| The timestamp after which the run will be forcefully killed if it does not stop gracefully. |
| The status of the run in question. |
| A map of the properties of the run. Has subfields. |
| The runtime arguments provided to the run serialized as a JSON string. |
| provides information about the cluster on which the run was executed. Has subfields. |
| The current status of the cluster. |
| The number of nodes in the cluster. |
| The compute profile used for the run. |
| The name of the compute profile. |
| The namespace of the compute profile. |
| The profile’s entity type. |
Example: Retrieving Run Records
HTTP Method |
|
---|---|
HTTP Response |
|
Description | Retrieve the run records of the MapReduce ScoreCounter of the application SportResults. |
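Since the start and end times in a run record are reported in seconds since the Epoch, converting them for display takes one standard-library call; a sketch (the sample record values are illustrative):

```python
from datetime import datetime, timezone

def fmt_epoch(seconds):
    """Render a run-record timestamp (seconds since the Epoch) as UTC text."""
    return datetime.fromtimestamp(seconds, tz=timezone.utc).strftime(
        "%Y-%m-%d %H:%M:%S UTC")

# Illustrative run record with epoch-second timestamps.
record = {"start": 1428368400, "end": 1428368460, "status": "COMPLETED"}
print(fmt_epoch(record["start"]), "->", fmt_epoch(record["end"]))
```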
Retrieving Specific Run Information
To fetch the run record for a particular run of a program, use:
GET /v3/namespaces/<namespace-id>/apps/<app-id>/<program-type>/<program-id>/runs/<run-id>
Parameter | Description |
---|---|
| Namespace ID |
| Name of the application |
| One of |
| Name of the MapReduce, custom service, Spark, or workflow being called |
| Run id of the run |
The response will be a JSON object containing the run record for the requested run, with these parameters:
Parameter | Description |
---|---|
| A UUID that uniquely identifies a run within CDAP, with the start and end times in seconds since the start of the Epoch (midnight 1/1/1970). Use that |
| The timestamp at which the program was requested to start by the user. |
| The timestamp at which the program actually started. |
| The timestamp at which this run was suspended (if it was suspended). |
| The timestamp at which this run was resumed (if it was resumed after being suspended). |
| The timestamp at which the request to stop this run was made. |
| The timestamp after which the run will be forcefully killed if it does not stop gracefully. |
| The status of the run in question. |
| A map of the properties of the run. Has subfields. |
| The runtime arguments provided to the run serialized as a JSON string. |
| provides information about the cluster on which the run was executed. Has subfields. |
| The current status of the cluster. |
| The number of nodes in the cluster. |
| The compute profile used for the run. |
| The name of the compute profile. |
| The namespace of the compute profile. |
| The profile’s entity type. |
Example: Retrieving a Particular Run Record
HTTP Method |
|
---|---|
HTTP Response |
|
Description | Retrieve the run record of the MapReduce ScoreCounter of the application SportResults run b78d0091-da42-11e4-878c-2217c18f435d |
For services, you can retrieve:
the history of successfully completed Apache Twill service runs using:
GET /v3/namespaces/<namespace-id>/apps/<app-id>/services/<service-id>/runs?status=completed
For workflows, you can retrieve:
the information about the currently running node(s) in the workflow:
GET /v3/namespaces/<namespace-id>/apps/<app-id>/workflows/<workflow-id>/runs/<run-id>/nodes/state
More information about workflow endpoints can be found in Workflows
the schedules defined for a workflow (using the parameter schedules):
GET /v3/namespaces/<namespace-id>/apps/<app-id>/workflows/<workflow-id>/schedules
the next time that the workflow is scheduled to run (using the parameter nextruntime):
GET /v3/namespaces/<namespace-id>/apps/<app-id>/workflows/<workflow-id>/nextruntime
Examples
Example: Retrieving The Most Recent Run
HTTP Method |
|
---|---|
HTTP Response |
|
Description | Retrieve the most recent successfully completed run of the service CatalogLookup of the application PurchaseHistory |
Retrieving Run Records in Batch
To retrieve the latest run records for multiple programs, use:
POST /v3/namespaces/<namespace-id>/runs
Parameter | Description |
---|---|
| Namespace ID |
The request body must be a JSON array of objects with the following parameters:
Parameter | Description |
---|---|
appId | Name of the application being called |
programType | One of mapreduce, spark, workflow, service, or worker |
programId | Name of the program (mapreduce, spark, workflow, service, or worker) being called |
The response will be an array of JSON objects, each of which will contain the three input parameters as well as two of three possible extra fields: runs, which is a list of the latest run records for that program; statusCode, which maps to the status code for retrieving the runs for that program; and error, if there was an error retrieving runs for that program. The statusCode property will always be included, but runs and error are mutually exclusive.
Parameter | Description |
---|---|
runs | The latest run records for the program defined by the individual JSON object's parameters |
statusCode | The status code from retrieving the program runs |
error | If an error, a description of why the status was not retrieved (for example, the specified program was not found, or the requested JSON object was missing a parameter) |
Example
HTTP Method |
|
---|---|
HTTP Body |
|
HTTP Response |
|
Description | Attempt to retrieve the latest run records of the service Service1 in the application App1, the workflow testWorkflow in the application App1 and the workflow DataPipelineWorkflow in the application App2, all in the namespace default |
Retrieving Run Counts in Batch
To retrieve the run counts for multiple programs, use:
POST /v3/namespaces/<namespace-id>/runcount
Parameter | Description |
---|---|
| Namespace ID |
The request body must be a JSON array of objects with the following parameters:
Parameter | Description |
---|---|
appId | Name of the application being called |
programType | One of mapreduce, spark, workflow, service, or worker |
programId | Name of the program (mapreduce, spark, workflow, service, or worker) being called |
The response will be an array of JSON objects, each of which will contain the three input parameters as well as two of three possible extra fields: runCount, which is the run count for the program; statusCode, which maps to the status code for retrieving the run count for that program; and error, if there was an error retrieving the run count for that program. The statusCode property will always be included, but runCount and error are mutually exclusive.
Parameter | Description |
---|---|
runCount | The number of program runs for the program defined by the individual JSON object's parameters over the entire lifetime |
statusCode | The status code from retrieving the program run count |
error | If an error, a description of why the status was not retrieved (for example, the specified program was not found, or the requested JSON object was missing a parameter) |
Example
HTTP Method |
|
---|---|
HTTP Body |
|
HTTP Response |
|
Description | Attempt to retrieve the run count of the service Service1 in the application App1, the workflow testWorkflow in the application App1 and the workflow DataPipelineWorkflow in the application App2, all in the namespace default |
Retrieving Specific Run Count
To fetch the run count for a particular program, use:
GET /v3/namespaces/<namespace-id>/apps/<app-id>/<program-type>/<program-id>/runcount
Parameter | Description |
---|---|
namespace-id | Namespace ID |
app-id | Name of the application |
program-type | One of mapreduce, spark, workflows, services, or workers |
program-id | Name of the program (mapreduce, spark, workflow, service, or worker) being called |
Example
HTTP Method |
|
---|---|
HTTP Response |
|
Description | Retrieve the run count of the workflow DataPipelineWorkflow of the application myApp |
Workflow Runs: Suspend and Resume
For workflows, in addition to starting and stopping, you can suspend and resume individual runs of a workflow using the RESTful API.
Suspend: To suspend means that the current activity will be taken to completion, but no further programs will be initiated. Barring any errors, programs will not be left partially completed.
In the case of a workflow with multiple MapReduce programs, if one of them is running (first of three perhaps) and you suspend the workflow, that first MapReduce will be completed but the following two will not be started.
Resume: To resume means that activity will start up where it was left off, beginning with the start of the next program in the sequence.
In the case of the workflow mentioned above, resuming it after suspension would start up with the second of the three MapReduce programs, which is where it would have left off when it was suspended.
With workflows, suspend and resume require a run-id as the action takes place on either a currently running or suspended workflow.
To suspend or resume a workflow, use:
POST /v3/namespaces/<namespace-id>/apps/<app-id>/workflows/<workflow-id>/runs/<run-id>/suspend
POST /v3/namespaces/<namespace-id>/apps/<app-id>/workflows/<workflow-id>/runs/<run-id>/resume
Parameter | Description |
---|---|
| Namespace ID |
| Name of the application |
| Name of the workflow |
| UUID of the workflow run |
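A small sketch that builds the suspend or resume URL for a given run (the helper name is illustrative):

```python
def workflow_run_action(namespace, app, workflow, run_id, action):
    """Build the URL to suspend or resume a specific workflow run."""
    if action not in ("suspend", "resume"):
        raise ValueError("action must be 'suspend' or 'resume'")
    return "/v3/namespaces/%s/apps/%s/workflows/%s/runs/%s/%s" % (
        namespace, app, workflow, run_id, action)

print(workflow_run_action("default", "PurchaseHistory",
                          "PurchaseHistoryWorkflow",
                          "b78d0091-da42-11e4-878c-2217c18f435d", "suspend"))
```

The resulting path is then submitted as an HTTP POST; the run-id comes from the run records described above.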
Example: Suspending a Workflow
HTTP Method |
|
---|---|
HTTP Response |
|
Description | Suspends the run |
Created in 2020 by Google Inc.