Goal
This is a source plugin that allows users to read and process mainframe files defined using a COBOL copybook. This is a basic first implementation.
...
Input Format implementation : here
Design
- Assumptions:
...
- For each "AbstractFieldValue" read from the data file, if the "AbstractFieldValue" (JRecord) type is binary, the data will be encoded to Base64 format and decoded using either
Integer.parseInt(new String(Base64.decodeBase64(Base64.encodeBase64(value.toString().getBytes()))));
or
Base64.decodeInteger(Base64.encodeInteger(value.asBigInteger()));
depending on the field data type (int or BigInteger). Note that Commons Codec's decodeBase64 returns a byte[], so the result must be wrapped in new String(...) before parsing.
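The Base64 round trip described above can be sketched as follows. This is a minimal illustration, not the plugin's actual code; it uses the JDK's java.util.Base64 rather than the Commons Codec Base64 class named in the design, and the method names are hypothetical:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64RoundTrip {

    // Encode a field's string form to Base64, decode it back, and parse as int.
    static int roundTripInt(String value) {
        byte[] encoded = Base64.getEncoder().encode(value.getBytes(StandardCharsets.UTF_8));
        byte[] decoded = Base64.getDecoder().decode(encoded);
        // decode returns byte[]; it must be wrapped in a String before parsing
        return Integer.parseInt(new String(decoded, StandardCharsets.UTF_8));
    }

    // Round-trip a BigInteger field through its two's-complement byte form.
    static BigInteger roundTripBigInteger(BigInteger value) {
        byte[] encoded = Base64.getEncoder().encode(value.toByteArray());
        byte[] decoded = Base64.getDecoder().decode(encoded);
        return new BigInteger(decoded);
    }

    public static void main(String[] args) {
        System.out.println(roundTripInt("42"));                                  // 42
        System.out.println(roundTripBigInteger(new BigInteger("123456789012"))); // 123456789012
    }
}
```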
- JRecord AbstractFieldValue type to Java primitive data type mappings used:
  - char, char just right, char null terminated, char null padded - java.lang.String
  - num left justified, num right justified, num zero padded - int
  - binary int, binary int positive, positive binary int fields - int
    - decode using
      Integer.parseInt(new String(Base64.decodeBase64(Base64.encodeBase64(fieldValue.toString().getBytes()))))
  - decimal, Mainframe Packed Decimal, Mainframe Zoned Numeric - java.math.BigDecimal
    - Since CDAP Schema.Type does not have a BigDecimal data type, everything is converted to Double
  - Binary Integer Big Endian (Mainframe, AIX etc.), Binary Integer Big Endian (Mainframe?), Binary Integer Big Endian (only +ve), Positive Integer Big Endian - BigInteger
    - decode using
      Base64.decodeInteger(Base64.encodeInteger(fieldValue.asBigInteger()))
    - Since CDAP Schema.Type does not have BigInteger, this is converted to long
  - Boolean / (Y/N) - Boolean
  - Default - String
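The mapping table above can be sketched as a simple lookup. This is a hypothetical helper for illustration only: it switches on human-readable type names, whereas the real plugin would dispatch on JRecord's numeric field-type constants, and the CDAP Schema.Type names are represented as plain strings here:

```java
// Hypothetical mapper from a JRecord field-type name to a CDAP Schema.Type name,
// following the mapping table in the design above.
public class CopybookTypeMapper {

    static String toCdapType(String jrecordType) {
        switch (jrecordType) {
            case "char":
            case "char just right":
            case "char null terminated":
            case "char null padded":
                return "STRING";
            case "num left justified":
            case "num right justified":
            case "num zero padded":
            case "binary int":
            case "binary int positive":
                return "INT";
            case "decimal":
            case "mainframe packed decimal":
            case "mainframe zoned numeric":
                return "DOUBLE";  // CDAP Schema.Type has no BigDecimal
            case "binary integer big endian":
            case "positive integer big endian":
                return "LONG";    // CDAP Schema.Type has no BigInteger
            case "boolean":
                return "BOOLEAN";
            default:
                return "STRING";  // fall back to String for anything unmapped
        }
    }

    public static void main(String[] args) {
        System.out.println(toCdapType("mainframe packed decimal")); // DOUBLE
        System.out.println(toCdapType("binary int"));               // INT
    }
}
```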
Examples
Properties :
referenceName : This will be used to uniquely identify this source for lineage, annotating metadata, etc.
copybookContents : Contents of the COBOL copybook file, which contains the data structure.
binaryFilePath : Complete path of the .bin file to be read. This must be a fixed-length binary-format file that matches the copybook.
drop : Comma-separated list of fields to drop. For example: 'field1,field2,field3'.
maxSplitSize : Maximum split size (in MB) for each mapper in the MapReduce job. Defaults to 1 MB.
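The `drop` property above takes a comma-separated field list; a minimal sketch of how it could be parsed (the helper name and lenient whitespace handling are assumptions, not the plugin's actual code):

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.stream.Collectors;

public class DropFields {

    // Parse the comma-separated 'drop' property into an ordered set of field
    // names, trimming whitespace and treating null/blank as "drop nothing".
    static Set<String> parseDrop(String drop) {
        if (drop == null || drop.trim().isEmpty()) {
            return new LinkedHashSet<>();
        }
        return Arrays.stream(drop.split(","))
                .map(String::trim)
                .filter(s -> !s.isEmpty())
                .collect(Collectors.toCollection(LinkedHashSet::new));
    }

    public static void main(String[] args) {
        System.out.println(parseDrop("field1, field2,field3")); // [field1, field2, field3]
    }
}
```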
Example :
This example reads data from the local binary file "file:///home/cdap/DTAR020_FB.bin" and parses it using the schema given in the "COBOL Copybook" text area.
...