Also, the BigQuery Datetime type can be mapped directly to the CDAP datetime data type.
CDAP-17684, CDAP-17636: Added support for Datetime data type in Wrangler. You can now select Parse > Datetime to transform columns of strings to datetime values and Format > Datetime to change the date and time pattern of a column of datetime values.
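For reference, these transformations correspond to Wrangler recipe directives. A minimal sketch, assuming the 6.4 directive names parse-as-datetime and format-datetime and a hypothetical created_at column:

```python
# Hypothetical Wrangler recipe. Directive names follow the 6.4 datetime
# support; the column name and patterns are illustrative only.
recipe = [
    # Parse a string column into datetime values using a Java-style pattern.
    "parse-as-datetime :created_at 'yyyy-MM-dd HH:mm:ss'",
    # Reformat an existing datetime column with a new pattern.
    "format-datetime :created_at 'MM/dd/yyyy'",
]
```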
CDAP-17611: Updated Salesforce plugins to work with the new OAuth macro function
CDAP-17610: Implemented a new macro function for OAuth token exchange
CDAP-17609: Implemented new HTTP endpoints for OAuth management
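As an illustration, a plugin property can reference a stored OAuth credential through the new macro function, which CDAP exchanges for an access token at runtime. A minimal sketch, assuming the macro takes a provider name and a credential ID; the property and placeholder names here are hypothetical:

```python
# Hypothetical plugin properties referencing the OAuth macro function.
# "my-provider" and "my-credential" are placeholders for entries managed
# through the new OAuth endpoints.
plugin_properties = {
    "oAuthToken": "${oauth(my-provider, my-credential)}",
}
```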
CDAP-17674: Added support for a runtime argument, retain.staging.table, that retains the BigQuery staging table to help you debug issues
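For example, the argument can be passed when starting the pipeline through the Lifecycle Microservices. A minimal sketch, assuming a pipeline named bq-pipeline in the default namespace on a local CDAP instance:

```python
import requests

CDAP = "http://localhost:11015"  # assumed CDAP instance address

# Start the pipeline workflow with retain.staging.table set, so the
# BigQuery staging table is kept after the run for debugging.
resp = requests.post(
    f"{CDAP}/v3/namespaces/default/apps/bq-pipeline"
    "/workflows/DataPipelineWorkflow/start",
    json={"retain.staging.table": "true"},
)
resp.raise_for_status()
```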
CDAP-17595: Added upgrade support for replication jobs
CDAP-17471: Added the ability to duplicate, export, and import replication jobs
CDAP-17337: Added property to configure dataset name in the BigQuery replication target. By default, the dataset name is the same as the Replication source database name. For more information, see Google BigQuery Target.
CDAP-16755: Added the ability to set the runtime argument "event.queue.capacity" to specify the capacity of the event queue, in bytes, for Replication jobs. If the target plugin consumes events more slowly than the source plugin emits them, events remain in the queue and occupy memory. With this setting, you can control the maximum amount of memory the event queue can use.
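Because runtime arguments resolve from preferences, the cap can also be set once at the namespace level. A minimal sketch using the Preferences Microservices, assuming the default namespace and a 32 MB limit:

```python
import requests

CDAP = "http://localhost:11015"  # assumed CDAP instance address

# Cap the replication event queue at 32 MB (the value is in bytes).
resp = requests.put(
    f"{CDAP}/v3/namespaces/default/preferences",
    json={"event.queue.capacity": str(32 * 1024 * 1024)},
)
resp.raise_for_status()
```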
CDAP-17607: Added advanced join conditions to the Joiner plugin, which let users specify an arbitrary SQL condition to join on. These joins are typically much more costly to perform than basic joins on equality. For more information, see Join Condition Type.
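For example, an advanced condition can express logic that a basic equality join cannot. A sketch of the relevant Joiner properties, assuming the property names conditionType and condition and hypothetical input aliases users and purchases:

```python
# Hypothetical Joiner plugin configuration using an advanced join condition.
# The SQL expression references the join's input stages by alias.
joiner_properties = {
    "conditionType": "advanced",  # assumed property name
    "condition": (
        "users.id = purchases.user_id "
        "OR (users.id IS NULL AND users.email = purchases.email)"
    ),
}
```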
New System Plugins for Data Pipelines
PLUGIN-558: Added a new post-action plugin, GCS Done File Marker. This post-action plugin marks the end of a pipeline run by creating an empty DONE (or SUCCESS) file in the given GCS bucket upon pipeline completion, success, or failure, so that you can use it to orchestrate downstream/dependent processes.
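A downstream process can then poll for the marker before starting. A minimal sketch using the google-cloud-storage client, with a hypothetical bucket and marker path:

```python
from google.cloud import storage

# Check whether the pipeline's marker file exists before running a
# dependent job. The bucket and object names are placeholders.
client = storage.Client()
done = client.bucket("my-pipeline-bucket").blob("markers/__DONE__").exists()
if done:
    print("Pipeline run finished; safe to start downstream processing.")
```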
CDAP-16623: Removed multiple ways to collapse/expand the Connection menu
CDAP-16008: Added support for running pipelines on Hadoop clusters with Kerberos enabled.
CDAP-15552: Fixed Wrangler to highlight a new column generated by a directive
CDAP-16180: Macros now resolve to preferences during pipeline validation
In CDAP 6.4.0, when you validate a plugin, macros are resolved using system and namespace preferences. In previous releases, to validate a plugin's configuration, you had to edit the pipeline to remove the macros.
PLUGIN-470: Removed multi-sink runtime argument requirements, allowing users to add simple transformations in multi-source/multi-sink pipelines.
In version 6.4.0, CDAP determines the schema dynamically at runtime instead of requiring runtime arguments to be set. In previous releases, multi-sink plugins required the pipeline to set a runtime argument for each table, with the schema for each table.
PLUGIN-610: Fixed the Bigtable Batch Source plugin. Previously, all pipelines that included the Bigtable source failed.
PLUGIN-606: FTP batch source now works with empty File System Properties. See “Deprecations” below.
PLUGIN-545: Added support for strings in Min/Max aggregate functions (used in both Group By and Pivot plugins)
PLUGIN-539: Fixed the Salesforce plugin to correctly parse the schema as an Avro schema so that all field names are accepted by Avro
PLUGIN-517: Fixed data pipelines with a BigQuery sink that failed with an INVALID_ARGUMENT exception if the specified range was a macro
PLUGIN-222: Fixed Kinesis Spark Streaming source, which had a class conflict, so users can now run pipelines with this source.
CDAP-17746: Fixed an issue in the field validation logic in pipelines with a BigQuery sink that caused a NullPointerException
CDAP-17744: Fixed Schema editor to show UI validations
CDAP-17737: Fixed Conditions plugins to work with Spark 3
CDAP-17732: Fixed the Wrangler Generate UUID directive to correctly generate a universally unique identifier (UUID) for the record
CDAP-17718: Fixed advanced joins to recognize the auto broadcast setting
CDAP-17717: Fixed upgraded CDAP instances to include the arrow to the Error Collector
CDAP-17713: Fixed Pipeline Studio UI to send null instead of string for blank plugin properties
CDAP-17703: Fixed Pipeline Studio to use current namespace when it fetches data pipeline drafts
CDAP-17691: Fixed SecureStore API to support SYSTEM namespace
CDAP-17683: Fixed the million indicator on the Replication Monitoring page
CDAP-17680: Fixed Replication statistics to display on the dashboard for SQL Server
CDAP-17678: Fixed an issue where clicking the Delete button on Replication Assessment page resulted in an error for the replication job
CDAP-17653: Removed the use of the authorization token when generating the session token in the Node.js proxy.
CDAP-17641: The schema name is now shown when selecting tables to replicate
CDAP-17635: Fixed Replication to correctly insert rows that were previously deleted by a replication job
CDAP-17630: Data pipelines running on a Spark 3-enabled Dataproc cluster no longer fail with a class not found exception
CDAP-17617: Fixed Replication Overview page to display the label of the table status when you hover over the table status
CDAP-17598: Added ability to hover over metrics in the Pipeline Summary page
CDAP-17584: Fixed Replication with a SQL Server source to generate rows correctly in the BigQuery target table if the snapshot failed and restarted
CDAP-17570: Fixed an issue where SQL Server replication job stopped processing data when the connection was reset by the SQL Server
CDAP-17568: Fixed the Replication wizard to close without error when you click the X icon to exit
CDAP-17495: Fixed an error in Replication wizard Step 3 "Select tables, columns and events to replicate" where selecting no columns for a table caused the wizard to fetch all columns in a table
CDAP-17491: Using a macro for a password in a replication job no longer results in an error
CDAP-17483: Fixed logical type display for data pipeline preview runs
CDAP-17476: Fixed the Dashboard API to return programs that are running but were started before the startTime
CDAP-17450: Fixed deployed Replication jobs to show advanced configurations in the UI
CDAP-17347: Fixed data pipelines with a Python Evaluator transformation to run without stack trace errors
CDAP-17331: Suppressed verbose info logs from Debezium in Replication jobs
CDAP-17189: Added loading indicator while fetching logs in Log Viewer
CDAP-17028: Fixed Pipeline preview so logical start time function doesn’t display as a macro
CDAP-16804: Fixed fields with a drop-down list in the Replication wizard to default to “Select one”
CDAP-16726: Added a message in the Replication Assessment when there are tables that CDAP cannot access
CDAP-16609: Added an error message when an invalid expression is entered in Wrangler
CDAP-16316: Fixed RENAME directive in Wrangler so it’s case sensitive
CDAP-16233: Fixed the Pipeline Operations UI to stop showing the loading icon indefinitely when it gets an error from the backend
CDAP-15979: Fixed Wrangler to no longer generate invalid reference names
CDAP-15509: Fixed Wrangler to display logical types instead of Java types
CDAP-15465: Fixed Wrangler so that generated pipelines are no longer incorrect for XML files
CDAP-13907: Fixed an issue where a connection added in Wrangler hard-coded the name of the JDBC driver
CDAP-13281: Batch data pipelines with the Spark 2.2 engine and HDFS sinks no longer fail with a delegation token error
PLUGIN-678: Data pipelines that include BigQuery sinks version 0.17.0 fail or give incorrect results. This is fixed in BigQuery sink version 0.17.1, which is available for download in the Hub.
In the Hub, download Google Cloud Platform version 0.17.1. For each pipeline, replace BigQuery sink plugins version 0.17.0 with BigQuery sink plugins version 0.17.1. If a pipeline has a BigQuery sink and other Google Cloud Platform plugins, such as a BigQuery source, you must update all Google Cloud Platform plugins to version 0.17.1. Google Cloud Platform plugins in the same pipeline must be the same version.
To quickly update each plugin, export all pipelines that use BigQuery sinks. You can use the Pipeline Studio to export pipelines in Draft and Deploy states, or use the Lifecycle Microservices to export pipelines in Deploy state in batch. Then import them back into Pipeline Studio, which prompts you to update the plugins to version 0.17.1. Because imported pipelines are in Draft state, you’ll need to deploy each pipeline after you import it. Also, set version 0.17.1 as the default for all Google Cloud Platform plugins. For more information, see Working with multiple versions of the same plugin.
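For the batch export of deployed pipelines, the Lifecycle Microservices can list applications and retrieve each pipeline's JSON configuration. A minimal sketch, assuming the default namespace on a local CDAP instance:

```python
import json
import requests

CDAP = "http://localhost:11015"  # assumed CDAP instance address
NS = "default"

# List deployed apps built on the data pipeline artifact, then save each
# app's configuration so it can be re-imported into Pipeline Studio.
apps = requests.get(
    f"{CDAP}/v3/namespaces/{NS}/apps?artifactName=cdap-data-pipeline"
).json()
for app in apps:
    detail = requests.get(f"{CDAP}/v3/namespaces/{NS}/apps/{app['name']}").json()
    with open(f"{app['name']}.json", "w") as f:
        # The "configuration" field holds the pipeline JSON as a string.
        json.dump(json.loads(detail["configuration"]), f, indent=2)
```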
PLUGIN-669: Joiner plugin version 2.6.0 does not show join conditions
The following issue occurs in Joiner plugin version 2.6.0, which lets you toggle between basic and advanced join conditions. After you upgrade CDAP to 6.4.0 or import a pipeline from a previous version and open the Joiner properties page, the basic join condition for the configured pipeline does not appear. This issue doesn't affect how the pipeline runs; the join condition still exists.
To resolve this issue:
Click System Admin > Configuration > Make HTTP Calls.
In the HTTP calls executor fields, enter:
Paste the following JSON content in the Body field:
If your Pipeline page is open in another window, you might need to refresh the page to see the join conditions.
CDAP-17720: When you run a Replication job, if a source table has a column name that does not conform to BigQuery naming conventions, the job fails with an error similar to the following:
com.google.cloud.bigquery.BigQueryException: Invalid field name "SYS_NC00012$". Fields must contain only letters, numbers, and underscores, start with a letter or underscore, and be at most 128 characters long.
Note: In BigQuery, column names must contain only letters, numbers, and underscores, start with a letter or underscore, and be at most 128 characters long.
Workaround: Remove columns from the Replication job that do not conform to the BigQuery naming conventions.
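To find the offending columns ahead of time, the naming rule from the note above can be checked directly. A minimal sketch, assuming a list of source column names:

```python
import re

# BigQuery column names: letters, numbers, and underscores only, starting
# with a letter or underscore, and at most 128 characters long.
VALID = re.compile(r"[A-Za-z_][A-Za-z0-9_]{0,127}")

columns = ["id", "SYS_NC00012$", "updated_at"]  # placeholder source columns
bad = [c for c in columns if not VALID.fullmatch(c)]
print("Remove from the replication job:", bad)  # ['SYS_NC00012$']
```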
CDAP-17897: Due to a limitation in Microsoft SQL Server CDC, if your replication source table has a newly added column, it is not automatically added to CDC tables. You must manually add it to the underlying CDC table.
FTP Batch Source (System Plugin for Data Pipelines)
The FTP Batch Source plugin installed with CDAP is deprecated and will be removed in CDAP 7.0.0. This deprecation includes all versions of the FTP Batch Source prior to version 3.0.0. The supported version of the FTP Batch Source is version 3.0.0, which is available for download in the Hub.
FTP Batch Source version 3.0.0 is completely backward compatible, except that it uses a different artifact. This was done to ensure that updates to the plugin can be delivered independently of CDAP releases, through the Hub.
It’s recommended that you use the FTP Batch Source plugin version 3.0.0 or later in your data pipelines.