Overview
The purpose of this page is to illustrate the plan for ApplicationTemplate and Application consolidation. This work is being tracked in Jira.
Motivation
Why do we want to consolidate templates and applications? In CDAP 3.0, an ApplicationTemplate is a way for somebody to write an Application that can be given some configuration to create an Adapter. The story is confusing; one would expect an ApplicationTemplate to create... Applications. Instead, we use the term Adapter because Application already means something else. In addition, an ApplicationTemplate can only include a single workflow or a single worker, giving people different experiences for templates and applications.
Really, the goal of templates was to be able to write one piece of Application code that could be used to create multiple Applications. This requires that an Application can be configured at creation time instead of at compile time. For example, a user should be able to set the name of their dataset through configuration instead of hardcoding it in the code. To support this, we plan to make a configuration object available from the ApplicationContext in the Application's configure() method. This allows somebody to pass in a config when creating an Application through the RESTful API, which can then be used to configure the Application. The relevant programmatic API changes are shown below, with an example of how they might be used. We will use this example to walk through some use cases.
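As a rough illustration of the injection step, the values supplied in the RESTful call conceptually populate the matching fields of the Config object. This is a hypothetical sketch using reflection and a plain map, not CDAP's actual deserialization code; all class and method names here are illustrative:

```java
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

public class ConfigInjection {
  // Hypothetical stand-in for a user-defined Config class with defaults.
  public static class MyConfig {
    String stream = "A"; // default, as in the example below
    String table = "X";  // default, as in the example below
  }

  // Copy each value supplied via the RESTful API into the config field of the same name.
  public static <T> T inject(T config, Map<String, String> values) {
    try {
      for (Map.Entry<String, String> entry : values.entrySet()) {
        Field f = config.getClass().getDeclaredField(entry.getKey());
        f.setAccessible(true);
        f.set(config, entry.getValue());
      }
      return config;
    } catch (ReflectiveOperationException e) {
      throw new RuntimeException(e);
    }
  }

  public static void main(String[] args) {
    Map<String, String> fromRest = new HashMap<>();
    fromRest.put("stream", "purchases"); // the user only overrides the stream name
    MyConfig config = inject(new MyConfig(), fromRest);
    System.out.println(config.stream + " " + config.table); // prints "purchases X"
  }
}
```

Fields the caller omits keep their compiled-in defaults, which is the behavior the @Nullable defaults in the examples below rely on.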
Definitions
Artifact - A jar file containing classes that can be used by CDAP.
Application Class - A Java class that implements the CDAP Application interface. Bundled in an artifact.
Application Config - Configuration given to CDAP to create an Application (can be empty).
Application - An instantiation of an Application Class, created by passing an Application Config to an Application Class.
Plugin - An extension to an Artifact. Usually implements an interface used by Application Classes in the Artifact.
old terminology | new terminology | description
---|---|---
ApplicationTemplate | Artifact |
Adapter | Application | in 3.0 and 3.1, you create an Adapter by specifying an ApplicationTemplate and optionally some config; in 3.2, you create an Application by specifying an Artifact and optionally some config
Application | Application |
Application jar | Artifact |
Use Case Walkthrough
1. Create an Application that uses config
1.1 Deploying the Artifact
A developer writes a configurable Application Class that uses a Flow to read from a stream and write to a Table.
Code Block |
---|
public class MyApp extends AbstractApplication<MyApp.MyConfig> {

  public static class MyConfig extends Config {
    @Nullable
    @Description("The name of the stream to read from. Defaults to 'A'.")
    private String stream;

    @Nullable
    @Description("The name of the table to write to. Defaults to 'X'.")
    private String table;

    private MyConfig() {
      this.stream = "A";
      this.table = "X";
    }
  }

  @Override
  public void configure() {
    // ApplicationContext now has a method to get a custom config object whose fields will
    // be injected using the values given in the RESTful API
    MyConfig config = getContext().getConfig();
    addStream(new Stream(config.stream));
    createDataset(config.table, Table.class);
    addFlow(new MyFlow(config.stream, config.table));
  }
}

public class MyFlow implements Flow {
  @Property
  private String stream;
  @Property
  private String table;

  MyFlow(String stream, String table) {
    this.stream = stream;
    this.table = table;
  }

  @Override
  public FlowSpecification configure() {
    return FlowSpecification.Builder.with()
      .setName("MyFlow")
      .setDescription("Reads from a stream and writes to a table")
      .withFlowlets()
        .add("reader", new Reader(table))
      .connect()
        .fromStream(stream).to("reader")
      .build();
  }
}

public class Reader extends AbstractFlowlet {
  @Property
  private String tableName;
  private Table table;

  Reader(String tableName) {
    this.tableName = tableName;
  }

  @Override
  public void initialize(FlowletContext context) throws Exception {
    table = context.getDataset(tableName);
  }

  @ProcessInput
  public void process(StreamEvent event) {
    Put put = new Put(Bytes.toBytes(event.getHeaders().get("rowkey")));
    put.add("timestamp", event.getTimestamp());
    put.add("body", Bytes.toBytes(event.getBody()));
    table.put(put);
  }
} |
A jar named 'myapp-1.0.0.jar' is built which contains the Application Class. The jar is deployed via the RESTful API:
Code Block |
---|
POST /namespaces/default/artifacts/myapp --data-binary @myapp-1.0.0.jar |
Version is determined from the Bundle-Version in the artifact Manifest. It can also be provided as a header. Artifact details are now visible through other RESTful API calls:
Code Block |
---|
GET /namespaces/default/artifacts
[
{
"name": "myapp",
"version": "1.0.0"
}
]
GET /namespaces/default/artifacts/myapp/versions/1.0.0
{
"name": "myapp",
"version": "1.0.0",
"classes": {
"apps": [
{
"className": "co.cask.cdap.examples.myapp.MyApp",
"properties": {
"stream": {
"name": "stream",
"description": "The name of the stream to read from. Defaults to 'A'.",
"type": "string",
"required": false
},
"table": {
"name": "table",
"description": "The name of the table to write to. Defaults to 'X'.",
"type": "string",
"required": false,
}
}
}
],
"flows": [ ... ],
"flowlets": [ ... ],
"datasetModules": [ ... ]
}
} |
In addition, a call can be made to get all Application Classes:
Code Block |
---|
GET /namespaces/default/classes/apps
[
{
"className": "co.cask.cdap.examples.myapp.MyApp",
"artifact": {
"name": "myapp",
"version": "1.0.0"
}
}
] |
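The version-resolution rule described above (Bundle-Version from the jar Manifest, with an explicitly provided header taking precedence) could be sketched as follows. The class and method names are illustrative, not CDAP's actual code:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.jar.Manifest;

public class VersionResolver {
  // An explicit Artifact-Version header wins; otherwise fall back to the
  // Bundle-Version attribute of the jar manifest.
  public static String resolve(String headerVersion, String manifestText) {
    if (headerVersion != null && !headerVersion.isEmpty()) {
      return headerVersion;
    }
    try {
      Manifest manifest = new Manifest(
          new ByteArrayInputStream(manifestText.getBytes(StandardCharsets.UTF_8)));
      String bundleVersion = manifest.getMainAttributes().getValue("Bundle-Version");
      if (bundleVersion == null) {
        throw new IllegalArgumentException("no version header and no Bundle-Version in manifest");
      }
      return bundleVersion;
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  public static void main(String[] args) {
    String manifest = "Manifest-Version: 1.0\nBundle-Version: 1.0.0\n\n";
    System.out.println(resolve(null, manifest));    // prints "1.0.0" (manifest value used)
    System.out.println(resolve("1.0.1", manifest)); // prints "1.0.1" (header wins)
  }
}
```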
1.2 Creating an Application
The user decides to create an application from the deployed artifact. From the calls above, the user gathers that input and output are both configurable. The user decides to create an Application that reads from the 'purchases' stream and writes to the 'events' table.
Code Block |
---|
PUT /namespaces/default/apps/purchaseDump -H 'Content-Type: application/json' -d '
{
"artifact": {
"name": "myapp",
"version": "1.0.0"
},
"config": {
"stream": "purchases",
"table": "events"
}
}' |
The Application now shows up in all the normal RESTful APIs, with all its programs, streams, and datasets.
1.3 Updating an Application
A bug is found in the code, a fix is provided, and a 'myapp-1.0.1.jar' release is made. The artifact is deployed:
Code Block |
---|
POST /namespaces/default/artifacts/myapp --data-binary @myapp-1.0.1.jar |
Note: Artifacts are immutable unless they are snapshot versions. Deploying again to version 1.0.0 would cause a conflict error.
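The immutability rule might be sketched like this (an illustrative check, not CDAP's actual code; the snapshot convention is assumed to be a '-SNAPSHOT' version suffix):

```java
public class ArtifactStore {
  // Re-deploying an artifact version that already exists is a conflict,
  // unless the version is a snapshot (snapshots stay mutable).
  public static boolean isDeployAllowed(boolean versionExists, String version) {
    return !versionExists || version.endsWith("-SNAPSHOT");
  }

  public static void main(String[] args) {
    System.out.println(isDeployAllowed(true, "1.0.0"));          // prints "false": conflict error
    System.out.println(isDeployAllowed(true, "0.1.0-SNAPSHOT")); // prints "true": snapshots may be replaced
    System.out.println(isDeployAllowed(false, "1.0.1"));         // prints "true": new version
  }
}
```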
A call can be made to determine if there are any Applications using the older artifact:
Code Block |
---|
GET /namespaces/default/apps?artifactName=myapp&artifactVersion=1.0.0
[
{
"name": "purchaseDump",
"description": "",
"artifactName": "myapp",
"version": "1.0.0"
}
] |
Calls are made to stop running programs. Another call is then made to update the app:
Code Block |
---|
POST /namespaces/default/apps/purchaseDump/update -d '
{
"artifact": {
"name": "myapp",
"version": "1.0.1"
},
"config": {
"stream": "purchases",
"table": "events"
}
}' |
The config section is optional. If none is given, the previous config will be used. If one is given, it entirely replaces the old config (no merging is done).
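The replace-not-merge semantics can be sketched in a few lines (illustrative names, not CDAP code; configs shown as raw JSON strings):

```java
public class UpdateConfigResolver {
  // On app update, a missing config means "keep the existing config";
  // a provided config fully replaces the old one, with no field-by-field merge.
  public static String resolveConfig(String existingConfig, String providedConfig) {
    return providedConfig == null ? existingConfig : providedConfig;
  }

  public static void main(String[] args) {
    String existing = "{\"stream\":\"purchases\",\"table\":\"events\"}";
    System.out.println(resolveConfig(existing, null));                      // keeps the existing config
    System.out.println(resolveConfig(existing, "{\"stream\":\"orders\"}")); // full replacement, 'table' is gone
  }
}
```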
1.4 Rolling Back an Application
Unfortunately, version 1.0.1 turns out to have an even worse bug and needs to be rolled back. The same update call can be made:
Code Block |
---|
POST /namespaces/default/apps/purchaseDump/update -d '
{
"artifact": {
"name": "myapp",
"version": "1.0.0"
},
"config": {
"stream": "purchases",
"table": "events"
}
}' |
1.5 Deploying an Artifact and Creating an App in one step
For backwards compatibility, the deploy app API will remain the same and will internally deploy an artifact and create the app in one call. An additional header will be supported specifying the Application Config.
Code Block |
---|
POST /namespaces/default/apps --data-binary @myapp-1.0.0.jar -H 'X-App-Config: { "stream": "purchases", "table": "events" }' |
2. Create an Application that uses plugins
2.1 Application Class changes
Now the user decides to update the MyApp Application Class to support pluggable ways of reading from a stream. This is done by introducing a 'StreamReader' interface in their project:
Code Block |
---|
public interface StreamReader {
Put read(StreamEvent event);
} |
The user wants this StreamReader interface to be pluggable. There can be many implementations of StreamReader, and which implementation to use should be configurable. The Flowlet code changes to use the new StreamReader interface through the plugin Java API:
Code Block |
---|
public class Reader extends AbstractFlowlet {
@Property
private String tableName;
private Table table;
private StreamReader streamReader;
Reader(String tableName) {
this.tableName = tableName;
}
@Override
public void initialize(FlowletContext context) throws Exception {
table = context.getDataset(tableName);
streamReader = context.newPluginInstance("readerPluginID");
}
@ProcessInput
public void process(StreamEvent event) {
table.put(streamReader.read(event));
}
} |
The Application Class is changed to register a "streamreader" plugin based on configuration:
Code Block |
---|
public class MyApp extends AbstractApplication<MyApp.MyConfig> {
public static class MyConfig extends Config {
@Nullable
@Description("The name of the stream to read from. Defaults to 'A'.")
private String stream;
@Nullable
@Description("The name of the table to write to. Defaults to 'X'.")
private String table;
@Description("The name of the streamreader plugin to use.")
private String readerPlugin;
@Nullable
@Description("Properties to send to the streamreader plugin.")
@PluginType("streamreader")
private PluginProperties readerPluginProperties;
private MyConfig() {
this.stream = "A";
this.table = "X";
}
}
@Override
public void configure() {
// ApplicationContext now has a method to get a custom config object whose fields will
// be injected using the values given in the RESTful API
MyConfig config = getContext().getConfig();
addStream(new Stream(config.stream));
createDataset(config.table, Table.class);
addFlow(new MyFlow(config.stream, config.table));
// arguments are: type, name, id, properties
usePlugin("streamreader", config.readerPlugin, "readerPluginID", config.readerPluginProperties);
}
} |
This becomes v2 of the Application Class. It is deployed via the same RESTful API:
Code Block |
---|
POST /namespaces/default/artifacts/myapp --data-binary @myapp-2.0.0.jar |
The metadata about this artifact now includes additional information about the config:
Code Block |
---|
GET /namespaces/default/artifacts/myapp/versions/2.0.0
{
  "name": "myapp",
  "version": "2.0.0",
  "classes": {
    "apps": [
      {
        "className": "co.cask.cdap.examples.myapp.MyApp",
        "properties": {
          "stream": {
            "name": "stream",
            "description": "The name of the stream to read from. Defaults to 'A'.",
            "type": "string",
            "required": false
          },
          "table": {
            "name": "table",
            "description": "The name of the table to write to. Defaults to 'X'.",
            "type": "string",
            "required": false
          },
          "readerPlugin": {
            "name": "readerPlugin",
            "description": "The name of the streamreader plugin to use.",
            "type": "string",
            "required": true
          },
          "readerPluginProperties": {
            "name": "readerPluginProperties",
            "description": "Properties to send to the streamreader plugin.",
            "type": "plugin",
            "plugintype": "streamreader",
            "required": false
          }
        }
      }
    ],
    "flows": [ ... ],
    "flowlets": [ ... ],
    "datasetModules": [ ... ]
  }
} |
2.2 Adding plugins
A default implementation of the streamreader plugin is created to implement the previous logic:
Code Block |
---|
@Plugin(type = "streamreader") @Name("default") @Description("Writes timestamp and body as two columns and expects the row key to write to. Defaults to 'rowkey'.", "type": "string", "required": false } }, come as a header in the stream event.") public class DefaultStreamReader implements StreamReader { private DefaultConfig config; public static class DefaultConfig extends PluginConfig { @Description("The header that should be used as the row key to write to. Defaults to 'rowkey'.") @Nullable private String rowkey; private DefaultConfig() { rowkey = "rowkey"; } } public Put read(StreamEvent event) { Put put = new Put(Bytes.toBytes(event.getHeaders().get(config.rowkey))); put.add("timestamp", event.getTimestamp()); put.add("body", Bytes.toBytes(event.getBody())); return put; } } |
The plugin is bundled into a 'streamreaders-1.0.0.jar' artifact. It is added as an extension to the myapp artifact:
Code Block |
---|
POST /namespaces/default/artifacts/streamreaders --data-binary @streamreaders-1.0.0.jar -H 'X-Extends-Artifacts: myapp-[2.0.0,3.0.0)' |
The plugin details can now be seen by querying for extensions to myapp:
Code Block |
---|
GET /namespaces/default/artifacts/myapp/versions/2.0.0/extensions
[ "streamreader" ]
GET /namespaces/default/artifacts/myapp/versions/2.0.0/extensions/streamreader
[
{
"name": "default",
"type": "reader",
"description": "Writes timestamp and body as two columns and expects the row key to come as a header in the stream event.",
"className": "co.cask.cdap.examples.myapp.plugins.DefaultStreamReader",
"artifact": {
"name": "streamreaders",
"version": "1.0.0"
}
}
]
GET /namespaces/default/artifacts/myapp/versions/2.0.0/extensions/streamreader/plugins/default
[
{
"name": "default",
"type": "reader",
"description": "Writes timestamp and body as two columns and expects the row key to come as a header in the stream event.",
"className": "co.cask.cdap.examples.myapp.plugins.DefaultStreamReader",
"properties": {
"rowkey": {
"name": "rowkey",
"description": "The header that should be used as the row key to write to. Defaults to 'rowkey'.",
"type": "string",
"required": false
}
},
"artifact": {
"name": "streamreaders",
"version": "1.0.0"
}
}
] |
2.3 Creating an Application that uses plugins
With this information a user is now able to create an Application that uses a Plugin:
Code Block |
---|
PUT /namespaces/default/apps/userDump -H 'Content-Type: application/json' -d '
{
"artifact": {
"name": "myapp",
"version": "2.0.0"
},
"config": {
"stream": "users",
"table": "events"
"readerPlugin": "default",
"readerPluginProperties": {
"rowkey": "user-id"
}
}
}' |
3. System Artifacts
System artifacts are special artifacts that can be accessed in other namespaces. They cannot be deployed through the RESTful API unless a conf setting is set. Instead, they are placed in a directory on the CDAP master host. When CDAP starts up, the directory will be scanned and those artifacts will be added to the system. Example uses for system artifacts are the ETLBatch and ETLRealtime applications that we want to include out of the box.
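The startup scan might look roughly like this. The directory location and class name are illustrative assumptions, not the actual CDAP implementation:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class SystemArtifactScanner {
  // On startup, scan a configured directory on the CDAP master host and
  // register every jar found there as a system artifact.
  public static List<String> scan(File artifactDir) {
    List<String> jars = new ArrayList<>();
    File[] files = artifactDir.listFiles();
    if (files == null) {
      return jars; // directory missing or unreadable: nothing to register
    }
    for (File f : files) {
      if (f.isFile() && f.getName().endsWith(".jar")) {
        jars.add(f.getName());
      }
    }
    return jars;
  }

  public static void main(String[] args) throws Exception {
    File dir = new File(System.getProperty("java.io.tmpdir"), "system-artifacts");
    dir.mkdirs();
    new File(dir, "cdap-etl-batch-3.1.0.jar").createNewFile();
    System.out.println(scan(dir)); // lists the jars that would become system artifacts
  }
}
```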
System artifacts are included in results by default and are indicated with a special flag.
Code Block |
---|
GET /namespaces/default/artifacts
[
{
"name": "ETLBatch",
"version": "3.1.0",
"isSystem": true
},
{
"name": "ETLRealtime",
"version": "3.1.0",
"isSystem": true
},
{
"name": "ETLPlugins",
"version": "3.1.0",
"isSystem": true
},
{
"name": "myapp",
"version": "1.0.0",
"isSystem": false
},
{
"name": "myapp",
"version": "1.0.1",
"isSystem": false
}
] |
System artifacts can be excluded from results using a filter:
Code Block |
---|
GET /namespaces/default/artifacts?includeSystem=false
[
{
"name": "myapp",
"version": "1.0.0",
"isSystem": false
},
{
"name": "myapp",
"version": "1.0.1",
"isSystem": false
}
] |
When a user wants to create an application from a system artifact, they make the same RESTful call as before, except adding a special flag to indicate it is a system artifact:
Code Block |
---|
PUT /namespaces/default/apps/somePipeline -d '
{
"artifact": {
"name":"ETLBatch",
"version":"3.1.0",
"isSystem": true
},
"config": { ... }
}' |
4. Deleting an Artifact
Non-snapshot artifacts will be immutable. Advanced users can delete an existing artifact, but the assumption will be that they know exactly what they are doing. Deleting an artifact may cause programs that are using it to fail.
5. CDAP Upgrade
The programmatic API changes are all backwards compatible, so existing apps will not need to be recompiled. They will, however, need to be added to the artifact repository by the upgrade tool (otherwise people will be forced to redeploy their existing apps).
Any existing adapters will need to be migrated. Ideally, the upgrade tool will create matching applications based on the adapter conf, but at a minimum we will simply delete existing adapters and templates.
6. Application Versioning
This was mentioned in stories 1 and 2, but versioning is now explicitly managed by CDAP.
Suppose a development team is working on a search application. There is a dev instance of CDAP running, and an initial version 0.1.0-SNAPSHOT of the artifact is deployed, and a corresponding application is created from it:
Code Block |
---|
POST /namespaces/default/artifacts/searchapp --data-binary @searchapp-0.1.0-SNAPSHOT.jar

PUT /namespaces/default/apps/search -H 'Content-Type: application/json' -d '
{
  "artifact": {
    "name": "searchapp",
    "version": "0.1.0-SNAPSHOT"
  },
  "config": {
    "stream": "docs"
  }
}' |
During development, every day, a new version of the artifact is built and deployed:
Code Block |
---|
POST /namespaces/default/artifacts/searchapp --data-binary @searchapp-0.1.0-SNAPSHOT.jar |
This replaces the version of the artifact that was there before. Any running programs will be using the old code, but any new programs started after the artifact is added will use the new code. Therefore, as part of the deployment process, application programs are restarted. After some time, the initial version of the application code is deemed ready for release. The project version is bumped to version 0.1.0, an artifact is built and deployed to CDAP:
Code Block |
---|
POST /namespaces/default/artifacts/searchapp --data-binary @searchapp-0.1.0.jar |
This adds a newer version of the artifact. This version is not a snapshot version and is therefore immutable. Attempts to re-deploy it will fail. Deploying the artifact has no impact on existing applications. The 'search' application will continue to use version 0.1.0-SNAPSHOT of the artifact until it is updated to the new version:
Code Block |
---|
POST /namespaces/default/apps/search/update -d '
{
  "artifact": {
    "name": "searchapp",
    "version": "0.1.0"
  }
}' |
If no config is given, the existing config will be used. Otherwise, if a config is given, it will entirely replace the existing config. Artifact version cannot be changed unless all running programs are stopped.
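The version-change precondition can be sketched as follows (illustrative names, not CDAP code):

```java
public class ArtifactVersionChange {
  // Changing the artifact version of an application is only allowed
  // when none of the application's programs are running.
  public static void checkVersionChange(String oldVersion, String newVersion,
                                        boolean programsRunning) {
    if (!oldVersion.equals(newVersion) && programsRunning) {
      throw new IllegalStateException("stop all programs before changing the artifact version");
    }
  }

  public static void main(String[] args) {
    checkVersionChange("0.1.0-SNAPSHOT", "0.1.0", false); // allowed: nothing running
    try {
      checkVersionChange("0.1.0-SNAPSHOT", "0.1.0", true);
    } catch (IllegalStateException e) {
      System.out.println("rejected: " + e.getMessage());
    }
  }
}
```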
After some time, some bugs are found and version 0.1.1 is developed and released. The jar is built and deployed to CDAP:
Code Block |
---|
POST /namespaces/default/artifacts/searchapp --data-binary @searchapp-0.1.1.jar |
The application is updated to use the new version of the artifact with the bug fixes:
Code Block |
---|
POST /namespaces/default/apps/search/update -d '
{
  "artifact": {
    "name": "searchapp",
    "version": "0.1.1"
  }
}' |
After some more time, additional features are added and version 0.2.0 is built and released, and the application is changed to use the new version of the artifact:
Code Block |
---|
POST /namespaces/default/artifacts/searchapp --data-binary @searchapp-0.2.0.jar

POST /namespaces/default/apps/search/update -d '
{
  "artifact": {
    "name": "searchapp",
    "version": "0.2.0"
  }
}' |
If there is any schema evolution or any other backwards-incompatible change, it must be handled correctly by the application logic. CDAP will not migrate data and makes no compatibility guarantees between artifact versions.
During the release, a serious bug is discovered and the application is rolled back to use the previous artifact version:
Code Block |
---|
POST /namespaces/default/apps/search/update -d '
{
  "artifact": {
    "name": "searchapp",
    "version": "0.1.1"
  }
}' |
Again, no compatibility guarantees are made by CDAP. This operation may not be safe if the application logic does not make it safe, for example if there is data written in a new format that the old code cannot understand.
RESTful API changes
Application APIs
Type | Path | Body | Headers | Description
---|---|---|---|---
GET | /v3/namespaces/<namespace-id>/apps?artifactName=<name>[&artifactVersion=<version>] | | | get all apps using the given artifact name and version (for example, to get all "ETLBatch" applications)
POST | /v3/namespaces/<namespace-id>/apps | application jar contents | Application-Config: <json of config> | same as the deploy API today, except allows passing the config as a header
PUT | /v3/namespaces/<namespace-id>/apps/<app-name> | application jar contents | Application-Config: <json of config> | same as the deploy API today, except allows passing the config as a header
PUT | /v3/namespaces/<namespace-id>/apps/<app-name> | | Content-Type: application/json | create an application from an existing artifact. Note: edits an existing API; behavior differs based on the Content-Type
POST | /v3/namespaces/<namespace-id>/apps/<app-name>/update | | | update an existing application. No programs can be running
Artifact APIs
Type | Path | Body | Headers | Description
---|---|---|---|---
GET | /v3/namespaces/<namespace-id>/artifacts | | |
GET | /v3/namespaces/<namespace-id>/artifacts/<artifact-name> | | | get data about all artifact versions
POST | /v3/namespaces/<namespace-id>/artifacts/<artifact-name> | jar contents | Artifact-Version: <version>, Artifact-Plugins: <json of plugins in the artifact> | add a new artifact. The version header is only needed if Bundle-Version is not in the jar Manifest; if both are present, the header wins. Artifact plugins can be explicitly given as a header to support 3rd-party classes used as plugins, such as JDBC drivers
GET | /v3/namespaces/<namespace-id>/artifacts/<artifact-name>/versions/<version> | | | get details about the artifact, such as what plugins and applications are in the artifact and the properties they support
GET | /v3/namespaces/<namespace-id>/artifacts/<artifact-name>/versions/<version>/extensions | | |
GET | /v3/namespaces/<namespace-id>/artifacts/<artifact-name>/versions/<version>/extensions/<plugin-type> | | |
GET | /v3/namespaces/<namespace-id>/artifacts/<artifact-name>/versions/<version>/extensions/<plugin-type>/plugins/<plugin-name> | | | get details about the plugin; config properties can be nested now
GET | /v3/namespaces/<namespace-id>/classes/apps | | |
GET | /v3/namespaces/<namespace-id>/classes/apps/<app-classname> | | |
Template APIs (will be removed)
Type | Path | Replaced By
---|---|---
GET | /v3/templates | /v3/namespaces/<namespace-id>/artifacts?scope=system
GET | /v3/templates/<template-name> | /v3/namespaces/<namespace-id>/artifacts/[cdap-etl-batch or cdap-etl-realtime]?scope=system
GET | /v3/templates/<template-name>/extensions/<plugin-type> | /v3/namespaces/<namespace-id>/artifacts/<artifact-name>/versions/<version>/extensions/<plugin-type>
GET | /v3/templates/<template-name>/extensions/<plugin-type>/plugins/<plugin-name> | /v3/namespaces/<namespace-id>/artifacts/<artifact-name>/versions/<version>/extensions/<plugin-type>/plugins/<plugin-name>
PUT | /v3/namespaces/<namespace-id>/templates/<template-id> | POST /v3/namespaces/system/artifacts
GET | /v3/namespaces/<namespace-id>/adapters | /v3/namespaces/<namespace-id>/apps?artifactName=cdap-etl-batch
GET | /v3/namespaces/<namespace-id>/adapters/<adapter-name> | /v3/namespaces/<namespace-id>/apps/<app-name>
POST | /v3/namespaces/<namespace-id>/adapters/<adapter-name>/start | resume workflow schedule API for etl-batch, start worker API for etl-realtime
POST | /v3/namespaces/<namespace-id>/adapters/<adapter-name>/stop | pause workflow schedule API for etl-batch, stop worker API for etl-realtime
GET | /v3/namespaces/<namespace-id>/adapters/<adapter-name>/status | workflow schedule status API for etl-batch, worker status API for etl-realtime
GET | /v3/namespaces/<namespace-id>/adapters/<adapter-name>/runs | workflow runs API for etl-batch, worker runs API for etl-realtime
GET | /v3/namespaces/<namespace-id>/adapters/<adapter-name>/runs/<run-id> | workflow runs API for etl-batch, worker runs API for etl-realtime
DELETE | /v3/namespaces/<namespace-id>/adapters/<adapter-name> | /v3/namespaces/<namespace-id>/apps/<app-name>