Introduction
Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database offered as a managed service. Users may need to read from or write to Aurora DB in their data pipelines.
Use case(s)
- Users would like to build a batch data pipeline that reads a complete table from an Amazon Aurora DB instance and writes it to BigTable.
- Users would like to build a batch data pipeline that performs upserts on Aurora DB tables (see the sketch after this list).
- Users should get relevant information from the tooltips while configuring the Aurora DB source and Aurora DB sink.
- The tooltip for the connection string should be customized to the specific database.
- The tooltip should accurately describe what each field is used for.
- Users should get field-level lineage for the sources and sinks being used.
- Reference documentation should be available from the source and sink plugins.
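As a concrete illustration of the upsert use case above, here is a minimal JDBC sketch. The cluster endpoint, credentials, and table are hypothetical, and Aurora MySQL upsert syntax (INSERT ... ON DUPLICATE KEY UPDATE) is assumed; the PostgreSQL flavor would use INSERT ... ON CONFLICT instead. A MySQL-compatible JDBC driver is assumed to be on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class AuroraUpsertSketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical endpoint, credentials, and table; Aurora MySQL assumed.
    String url = "jdbc:mysql://my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com:3306/mydb";
    String sql = "INSERT INTO users (id, name) VALUES (?, ?) "
        + "ON DUPLICATE KEY UPDATE name = VALUES(name)";
    try (Connection conn = DriverManager.getConnection(url, "admin", "secret");
         PreparedStatement ps = conn.prepareStatement(sql)) {
      // Batch several upserts so they are sent in one round trip.
      for (int i = 0; i < 3; i++) {
        ps.setInt(1, i);
        ps.setString(2, "user-" + i);
        ps.addBatch();
      }
      ps.executeBatch();
    }
  }
}
```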
Deliverables
- Source code in data integrations org
- Integration test code
- Relevant documentation in the source repo and in the plugin's reference documentation section
Relevant links
- Data-integrations org: https://github.com/data-integrations/
- Field level lineage: https://docs.cdap.io/cdap/6.0.0-SNAPSHOT/en/developer-manual/metadata/field-lineage.html
- Integration test repos: https://github.com/caskdata/cdap-integration-tests
Plugin Type
- Batch Source
- Batch Sink
- Real-time Source
- Real-time Sink
- Action
- Post-Run Action
- Aggregate
- Join
- Spark Model
- Spark Compute
Design / Implementation Tips
- Amazon's SDK for Relational Database Service (of which Aurora DB is a part) is only useful for programmatically managing database clusters and instances (creating and deleting databases, changing instance state, managing users, creating dumps to S3, etc.). From the client's perspective, all you need to work with Aurora DB is a JDBC driver.
- There are two ways to connect to Aurora DB: the JDBC driver and the RDS Data API client. According to the documentation, the Data API client 'provides an HTTP endpoint to query RDS databases'. This means data would be accessed via a REST API backed by the same JDBC calls, so in addition to the network delay, extra time would be spent serializing/deserializing classes from the SDK. HTTP also seems poorly suited to database batch operations. Another reason to stick with JDBC is that most of the CPU time in database plugins is spent on data type mapping and conversion (as discovered during performance testing), not on I/O operations.
- Taking the above arguments into account, connecting to the Aurora DB cluster using the JDBC driver would be the most rational choice (see the connection sketch after this list), so it might also be a good idea to reuse the existing database source and sink functionality from the database-plugins project.
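To make the JDBC-only point concrete, here is a minimal connection smoke test. The cluster endpoint and credentials are hypothetical, and a MySQL-compatible JDBC driver is assumed to be on the classpath; no AWS SDK dependency is needed.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class AuroraJdbcSmokeTest {
  public static void main(String[] args) throws Exception {
    // Hypothetical cluster endpoint and credentials.
    String url = "jdbc:mysql://my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com:3306/mydb";
    try (Connection conn = DriverManager.getConnection(url, "admin", "secret");
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery("SELECT 1")) {
      rs.next();
      System.out.println("Connected, SELECT 1 returned " + rs.getInt(1));
    }
  }
}
```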
Design
- For a better user experience, it is suggested to create separate plugins for Aurora DB MySQL and Aurora DB PostgreSQL (the per-engine connection string formats are sketched below).
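One concrete consequence of splitting the plugins is that the JDBC URL prefix, default port, and driver differ per engine. A minimal sketch of the two formats follows; the host and database names are illustrative, and the helper names are assumptions rather than the final plugin configuration.

```java
public class AuroraConnectionStrings {
  // Illustrative only: field names are not the final plugin configuration.
  static String mysqlUrl(String host, int port, String database) {
    return String.format("jdbc:mysql://%s:%d/%s", host, port, database);
  }

  static String postgresUrl(String host, int port, String database) {
    return String.format("jdbc:postgresql://%s:%d/%s", host, port, database);
  }

  public static void main(String[] args) {
    String host = "my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com";
    System.out.println(mysqlUrl(host, 3306, "mydb"));
    System.out.println(postgresUrl(host, 5432, "mydb"));
  }
}
```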
Future Work
Checklist
- User stories documented
- User stories reviewed
- Design documented
- Design reviewed
- Feature merged
- Examples and guides
- Integration tests
- Documentation for feature
- Short video demonstrating the feature