Hydrator Backend Application
We want to develop a back-end app that encapsulates business logic and acts as an intermediary between CDAP-UI and the CDAP backend. The back-end app simplifies developing new features in CDAP-UI, as it encapsulates the logic to translate a business-logic request/action into the appropriate CDAP backend requests/actions and returns the relevant information to the UI. This lets CDAP-UI focus more on the UI aspects and less on the business logic involved. Ideally, this back-end app will remove the "view in CDAP" step, as the UI will be able to get the required information from the back-end app.
Use-cases
Case #1
User adds a database plugin to the pipeline, clicks on the database plugin to fill in the configuration
User provides JDBC string, table name or SELECT query, username, password.
User then clicks on the button to populate the schema
UI makes the backend call to the Hydrator app to retrieve the associated schema, depending on whether the source is based on a table or a SELECT query.
User then has the choice to include the schema as the output schema of the database plugin.
The information of the schema associated with the database plugin is stored as spec in the exported pipeline.
Case #2
User adds a database plugin to the pipeline, clicks on the database plugin to fill in the configuration
User provides JDBC string (include database and other configurations), username and password
To select a table, the user clicks the button to list the tables.
UI makes the backend call to retrieve the list of tables and show it to the user
User then selects the table which automatically populates the schema as the output schema of the database plugin.
Case #3
Shankar is using the Hydrator Studio instance to build a pipeline; he is building a batch pipeline for processing data from a Stream.
Albert is using the same Hydrator Studio instance to build his pipeline; he is building a real-time pipeline for processing data from Twitter.
Both Albert and Shankar have complex pipelines to build and want to ensure that their work is not lost, so they periodically save it as a draft.
When both of them save drafts independently of each other, the drafts from each are visible to the other.
User Stories
There are Hydrator-specific functionalities which could leverage CDAP's features.
Drafts
User wants to add a new draft or save the pipeline they are working on as a draft
User can update an existing draft of a pipeline as a new version – previous versions of the pipeline are saved (up to 20 versions)
User can go back to the previous version of a draft, or to any earlier version
User wants to retrieve the latest version of a draft for a pipeline
User wants to view all available pipeline drafts across all users
User wants the ability to write a pipeline draft
User has access to only those pipelines that are available in the namespace the user is in.
Plugin Output Schema
User using DB-Source wants to enter the connection string and table name and automatically populate the table schema information.
List Field values
User provides connection string, username and password and expects the list of available tables in DB-Source to be returned.
Design
Option #1
Description
The Hydrator app needs to be able to read/write a dataset to store and retrieve drafts and other business-logic information. We can implement a Hydrator CDAP application with a service that exposes REST endpoints to serve the required Hydrator functionalities. Enabling Hydrator in a namespace will deploy this Hydrator app and start the service. The Hydrator UI would ping for this service to be available before coming up. The back-end business-logic actions that directly need to use the CDAP service endpoints can be made generic.
Pros
Everything (Drafts, etc) stored in the same namespace, proper cleanup when namespace is deleted.
Cons
Every namespace will have an extra app to support Hydrator if Hydrator is enabled. Running this service uses 2 containers per namespace. We can add an option to enable/disable Hydrator per namespace when it is not needed. It might also feel odd as a user app, since the user didn't write or create it.
Option #2
Description
We still use a Hydrator CDAP app, but we create an "extensions" namespace and deploy the "hydrator" app only there; this single app serves Hydrator requests for all namespaces.
It uses a single dataset to store the drafts, with row keys namespaced per CDAP namespace; when a namespace is deleted, the rows belonging to it are deleted from the dataset.
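The row-key scheme described above can be sketched as follows. This is a minimal in-memory illustration, assuming a `<namespace>:<draft-id>` key format and a `DraftStore` class name; it is not the actual dataset implementation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Sketch of namespacing row keys in a single shared drafts dataset.
// The key format "<namespace>:<draft-id>" is an assumption for illustration.
public class DraftStore {
  private final Map<String, String> table = new TreeMap<>();

  private static String rowKey(String namespace, String draftId) {
    return namespace + ":" + draftId;
  }

  public void save(String namespace, String draftId, String json) {
    table.put(rowKey(namespace, draftId), json);
  }

  public String get(String namespace, String draftId) {
    return table.get(rowKey(namespace, draftId));
  }

  // When a namespace is deleted, drop every row carrying its prefix.
  public void deleteNamespace(String namespace) {
    table.keySet().removeIf(key -> key.startsWith(namespace + ":"));
  }

  public List<String> listKeys() {
    return new ArrayList<>(table.keySet());
  }
}
```

Because the keys sort by namespace prefix, a namespace delete reduces to a prefix scan-and-delete, which maps naturally onto a range delete in a real table-backed dataset.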
Pros
Fewer resources used: only 2 containers total rather than 2 containers per namespace, and only one dataset.
Only one app for Hydrator across namespaces rather than one per namespace; less clutter.
New extensions could be added to the same namespace to support other use cases in future.
Cons
Using a single dataset to store all drafts across namespaces may be less secure.
User won't be able to create a namespace called "extensions", as that name will be reserved.
Open Questions
How to delete the drafts when the namespace is deleted?
When to stop this service?
Availability of the service?
Security
If we decide to add more capabilities to the Hydrator back-end app (e.g. pipeline validation/deploy), then in a secure environment:
Can the hydrator-service discover the appropriate cdap.service and call the appropriate endpoints?
Option #3
Story 1 - Schema and field value suggestions:
Plugin annotation @Endpoint:
Plugins can have custom plugin-specific methods annotated with @Endpoint.
UI can learn about the available endpoints for a plugin from the plugin properties.
UI calls the app-fabric endpoint identifying {artifact, plugin} with the method name, passing the method parameter as the request body; app-fabric then loads the corresponding plugin and calls the method identified by the method name, if that method is annotated with @Endpoint.
The response from this method call is sent as the response of the HTTP request.
REST API :
POST : /namespaces/{namespace-id}/artifacts/{artifact-name}/versions/{artifact-version}/types/{plugin-type}/plugins/{plugin-name}/methods/{plugin-method}?scope={artifact-scope}
Request-Body : JSON - fieldName to value mapping.
Response :
200, Successful Response JSON string
404, Not Found, Plugin Specific Error Message (Example : DB, Table not found)
500, Error, Plugin Specific Error Message (Example : JDBC Connection error)
Description : In the request we refer to the plugin artifact, not the parent artifact. We could use any one of the available parent artifacts.
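For illustration, the endpoint path above could be assembled like this; the helper class is hypothetical, and the parameter values in the usage note are example data, not real artifacts.

```java
// Sketch: building the plugin-method endpoint URL described above.
public class PluginMethodUrl {
  public static String build(String namespace, String artifactName, String artifactVersion,
                             String pluginType, String pluginName, String method, String scope) {
    return String.format(
        "/namespaces/%s/artifacts/%s/versions/%s/types/%s/plugins/%s/methods/%s?scope=%s",
        namespace, artifactName, artifactVersion, pluginType, pluginName, method, scope);
  }
}
```

For example, `build("default", "database-plugins", "1.4.0", "batchsource", "Database", "getSchema", "SYSTEM")` would produce the path for the getSchema call in Story 1 (artifact name and version here are made up for the example).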
Endpoint Annotation
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Endpoint {
  /**
   * Returns the endpoint name. Declared as {@code value()} so the
   * shorthand {@code @Endpoint("listTables")} compiles.
   */
  String value();
}
Example Methods in Plugin DBSource:
@Endpoint("listTables")
List<String> listTables(ListTableRequest request)

@Endpoint("getSchema")
Map<String, String> getSchema(SchemaRequest request)
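A minimal sketch of how app-fabric could locate and invoke a plugin method by its @Endpoint name via reflection. The `DBSource` stub and its hard-coded table list are illustrative stand-ins, not the real plugin, and the dispatcher is an assumption about the mechanism, not the actual implementation.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.List;

// Sketch of dispatching a request to a plugin method annotated with @Endpoint.
public class EndpointDispatcher {

  @Retention(RetentionPolicy.RUNTIME)
  public @interface Endpoint {
    String value();
  }

  // Stand-in for the real DBSource plugin; the table list is fake.
  public static class DBSource {
    @Endpoint("listTables")
    public List<String> listTables(String requestJson) {
      return Arrays.asList("customers", "orders");
    }
  }

  // Find the method whose @Endpoint name matches, then invoke it with
  // the deserialized request body as the single argument.
  public static Object call(Object plugin, String endpointName, Object arg) {
    try {
      for (Method m : plugin.getClass().getMethods()) {
        Endpoint e = m.getAnnotation(Endpoint.class);
        if (e != null && e.value().equals(endpointName)) {
          return m.invoke(plugin, arg);
        }
      }
    } catch (ReflectiveOperationException ex) {
      throw new RuntimeException(ex);
    }
    throw new IllegalArgumentException("No endpoint: " + endpointName);
  }
}
```

The result of the invoked method would then be serialized as the HTTP response body, matching the 200/404/500 behavior described for the REST API above.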
Story 2 - Drafts
Configurations HTTP Handler:
A single HTTP handler unifying the Console Settings handler and the Dashboards HTTP handler.
(The table assumes config-type = drafts.)

| HTTP Request Type | Endpoint | Request Body | Response Status | Response Body |
| --- | --- | --- | --- | --- |
| PUT | /namespaces/{namespace-id}/configurations/{config-type}/objects/{object-id} | content stored as is | 200 OK: config object saved successfully; 409 CONFLICT: config with object-id already exists; 500: error while saving the config | { "version" : "version-id" } |
| POST | /namespaces/{namespace-id}/configurations/{config-type}/objects/{object-id}/versions | content stored as is | 200 OK: config object updated successfully; 404 NOT FOUND: config object doesn't exist, so it cannot be updated; 500: error while updating the config | { "version" : "version-id" } |
| GET | /namespaces/{namespace-id}/configurations/{config-type}/objects/{object-id}/versions | | 200: returns all versions of the config identified by the object-id; 404: config object not found; 500: error while getting the config object | |
| GET | /namespaces/{namespace-id}/configurations/{config-type}/objects/{object-id}/versions/{version-number} | | 200: returns the specific version of the object; 404: config object with that version not found; 500: error while getting the config object | contents returned as is |
| GET | /namespaces/{namespace-id}/configurations/{config-type}/objects/{object-id} (latest version) | | 200: returns the latest version of the config object; 404: config object not found; 500: error while getting the latest config object | content returned as is |
| GET | /namespaces/{namespace-id}/configurations/{config-type}/objects | | 200: returns the list of metadata about config objects; 500: error | [ { "name" : "StreamToTPFS", "lastSaved" : "..", .. }, .. ] |
| DELETE | /namespaces/{namespace-id}/configurations/{config-type}/objects/{object-id} | | 200: successfully deleted the specified object; 404: object does not exist; 500: error while deleting | |
"Drafts", "Plugin Templates", "Default versions" and "Dashboards" are type of configurations specified as "config-type" in the REST call.
The individual JSON-config or object would be identified by "object-id".
JAVA API - Config Store:
Existing configstore methods
void create(String namespace, String type, Config config) throws ConfigExistsException;
void createOrUpdate(String namespace, String type, Config config);
void delete(String namespace, String type, String id) throws ConfigNotFoundException;
List<Config> list(String namespace, String type);
Config get(String namespace, String type, String id) throws ConfigNotFoundException;
void update(String namespace, String type, Config config) throws ConfigNotFoundException;
ConfigStore new methods
// get a particular version of an entry.
Config get(String namespace, String type, String id, int version) throws ConfigNotFoundException;
// get all the versions of an entry.
List<Config> getAllVersions(String namespace, String type, String id) throws ConfigNotFoundException;
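A sketch of the versioning behavior the new methods imply, assuming an in-memory layout and the 20-version retention limit mentioned in the Drafts user stories; the real ConfigStore is dataset-backed, and the class and key format here are invented for illustration.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of versioned config storage: each save appends a version and
// only the most recent 20 versions are retained.
public class VersionedConfigStore {
  private static final int MAX_VERSIONS = 20;
  private final Map<String, List<String>> versions = new HashMap<>();

  private static String key(String namespace, String type, String id) {
    return namespace + "/" + type + "/" + id;
  }

  // Returns the index of the newly written version.
  public int save(String namespace, String type, String id, String content) {
    List<String> list =
        versions.computeIfAbsent(key(namespace, type, id), k -> new ArrayList<>());
    list.add(content);
    if (list.size() > MAX_VERSIONS) {
      list.remove(0); // drop the oldest version
    }
    return list.size() - 1;
  }

  // Mirrors: Config get(namespace, type, id, version)
  public String get(String namespace, String type, String id, int version) {
    return versions.get(key(namespace, type, id)).get(version);
  }

  // Mirrors: Config get(namespace, type, id) for the latest version
  public String getLatest(String namespace, String type, String id) {
    List<String> list = versions.get(key(namespace, type, id));
    return list.get(list.size() - 1);
  }

  // Mirrors: List<Config> getAllVersions(namespace, type, id)
  public List<String> getAllVersions(String namespace, String type, String id) {
    return new ArrayList<>(versions.get(key(namespace, type, id)));
  }
}
```

The retention check on save keeps the dataset bounded per draft, matching the "up to 20 versions" requirement without a separate cleanup job.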
Schema Propagation and Validation through backend - DryRuns:
Currently, when a pipeline is published, configurePipeline is called on each plugin; we perform pipeline validation and plugin validation, and also deploy the application.
1. The goal of the dry-run endpoint is to validate a pipeline, then validate its plugins by calling the configure methods of the plugins in the pipeline, without creating datasets, generating programs, or performing the other steps usually done during deploy.
2. Using dry-run, we can catch issues in the pipeline earlier and fix them before deploying.
Dry-run can also be used by the UI for schema propagation, with some requirements on the UI:
If the plugin has a field "schema", the UI can mutate the output schema.
If the plugin doesn't have a field "schema", the UI cannot change the output schema and has to rely on the dry-run result for the output schema of that stage, which is set during plugin configuration.
We need to follow the above conditions for correctness: if the UI mutates the schema when there is no "schema" field, the backend would compute a different input schema for the next stage and the UI changes wouldn't be reflected.
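The rule above can be sketched as a small decision function; the class, parameter names, and schema strings are illustrative, not part of the actual dry-run implementation.

```java
import java.util.Map;

// Sketch of the schema-propagation rule: a stage's output schema can be
// edited by the UI only when the plugin exposes a "schema" field;
// otherwise the dry-run result wins.
public class SchemaPropagation {
  public static String outputSchema(Map<String, String> pluginFields,
                                    String uiEditedSchema,
                                    String dryRunSchema) {
    if (pluginFields.containsKey("schema") && uiEditedSchema != null) {
      return uiEditedSchema; // UI is allowed to mutate the output schema
    }
    return dryRunSchema; // otherwise the dry-run result is authoritative
  }
}
```

Applying this consistently on both sides keeps the UI's view of each stage's input schema in sync with what the backend computes during configuration.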
POST : /namespaces/{namespace-id}/dry-run
Request-Body : JSON config.
Response-Body : JSON config with additional fields in each plugin for the output schema, exceptions raised while configuring a pipeline stage, etc.
User Stories (3.5.0)
For the Hydrator use case, the backend app should be able to support the Hydrator-related functionalities listed below:
query for the plugins available for a given artifact and list them in the UI
obtain the output schema of plugins given the input configuration information
deploy a pipeline and start/stop it
query the status of a pipeline run and the current stage of execution if there are multiple stages
get the next scheduled run; query metrics and logs for the pipeline runs
create and save pipeline drafts
get the input/output streams/datasets of a pipeline run and list them in the UI
explore the data of streams/datasets used in the pipeline, if they are explorable
add new metadata about a pipeline and retrieve metadata by pipeline run, etc.
delete a Hydrator pipeline
The backend app's functionality should be limited to Hydrator; it shouldn't act as a proxy for CDAP.
Having these abilities will remove the logic in CDAP-UI that makes the corresponding CDAP REST calls; this encapsulation will simplify the UI's interaction with the back-end and help debug potential issues faster. In the future we could have more apps similar to the Hydrator app, so our back-end app should define and implement generic cases that can be shared across these apps, and it should allow extensibility to support new features.
Generic Endpoints
| HTTP Request Type | Endpoint | Request Body | Description | Response Body |
| --- | --- | --- | --- | --- |
| GET | /extensions/{back-end}/status | | 200 OK: platform service is available; 404: service unavailable | |
| GET | /extensions/{back-end}/program/{program-name}/runs | | 200 OK: runs of the program | .... |
| POST | /extensions/{back-end}/program/{program-name}/action | | 200: start/stop/status of the program | |
| POST | /extensions/{back-end}/program/{program-name}/metrics/query (query params: startTime, endTime, scope) | config: time-range, tags | 200: returns metrics | |
| GET | /extensions/{back-end}/program/{program-name}/logs/{log-level} (query params: startTime, endTime) | | 200: returns logs for a time range | |
| GET | /extensions/{back-end}/program/{program-name}/schedule | | 200: gets the next scheduled run-time | |
| GET | /extensions/{back-end}/program/{program-name}/datasets | | 200: gets all the input/output datasets used in the program | .... |
| POST | /extensions/{back-end}/program/{program-name}/datasets/{dataset-name}/explore/{action} | | performs action {preview, download, next} for explore on the dataset; 200: explore result | |
| POST | /extensions/{back-end}/program/{program-name}/metadata | { "key" : "...", "value" : "..." } | stores the supplied JSON metadata for this program; 200 OK | |
| GET | /extensions/{back-end}/program/{program-name}/metadata | | gets the metadata added for this program; 200: metadata result | { "key" : "...", "value" : "..." } |
| DELETE | /extensions/{back-end}/program/{program-name}/metadata | | 200: successfully deleted the metadata added for the program | |
Created in 2020 by Google Inc.