Parquet Snapshot Dataset Batch Source (Deprecated)

Note: Datasets and the Parquet Snapshot Dataset Batch Source are deprecated and will be removed in CDAP 7.0.0.

A batch source that reads from a corresponding Parquet Snapshot Dataset sink. The source will only read the most recent snapshot written to the sink.

Use this source whenever you want to read data written to the corresponding Parquet Snapshot Dataset sink. For example, you might create daily snapshots of a database by reading the entire contents of a table and writing it to a Parquet Snapshot Dataset sink. You could then use this source to read the most recent snapshot and run a data analysis on it.

Configuration

| Property | Macro Enabled? | Description |
| -------- | -------------- | ----------- |
| Dataset Name | Yes | Required. Name of the PartitionedFileSet to read from. If it doesn't exist, it will be created. |
| Snapshot Base Path | Yes | Optional. Base path for the PartitionedFileSet. Defaults to the name of the dataset. |
| FileSet Properties | Yes | Optional. Advanced feature to specify any additional properties that should be used with the source, specified as a JSON object of string to string (see the sketch below the table). These properties are set on the dataset if one is created. The properties are also passed to the dataset at runtime as arguments. |
| Output Schema | Yes | Required. The Parquet schema of the record being read from the source, as a JSON object. |
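As a rough illustration of the FileSet Properties format, the value is a JSON object of string-to-string pairs. The property names below (parquet.compression, parquet.block.size) are standard Hadoop Parquet settings used here only as examples; whether a given property has any effect when reading depends on the underlying PartitionedFileSet:

    {
        "parquet.compression": "SNAPPY",
        "parquet.block.size": "134217728"
    }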

Example

This example reads from a SnapshotFileSet named 'users'. It reads data in Parquet format using the given schema. Every time the pipeline runs, only the most recently added snapshot will be read:

{ "name": "SnapshotParquet", "type": "batchsource", "properties": { "name": "users", "schema": "{ \"type\":\"record\", \"name\":\"user\", \"fields\":[ {\"name\":\"id\",\"type\":\"long\"}, {\"name\":\"name\",\"type\":\"string\"}, {\"name\":\"birthyear\",\"type\":\"int\"} ] }" } }


