Example: Consider a sample pipeline which performs a join between an HR dataset and a Person dataset. The joined data is then normalized (to rename some fields, drop some fields, and create an ID) before being stored into the data lake. The following diagram shows the sample pipeline, which reads from 2 file sources. Note that the 2D boxes represent the SCHEMA flowing through the pipeline, while the 3D boxes represent the pipeline stages.

 

In the lineage view, we show high-level information first, as shown below. Note that 'HR File', 'Person File', and 'Employee Data' are the names of the input and output datasets, as indicated by the Reference Name in the plugin properties.

The next level of detail contains the clickable fields from the input and output datasets. Note that the 2D boxes represent fields belonging to the datasets. Since the input datasets are files, which do not have a schema yet, the plugin can provide any String name for them. In this case we are using "HR Record" and "Person Record" as the names.

 

Once the user clicks on a particular field, the field level lineage graph can be displayed.

Example: Graph for the field ID, where circles represent the fields and edges represent operations, with the operation names shown in bubbles.

Note that the "body" field is generated from "HR Record" as well as "Person Record". To distinguish them while storing, we might need to prefix each with its stage name.

As additional information for the source and target datasets, we might want to show the associated properties, such as the file path, the regex used, etc.

Store: Based on the above example, we want the following pieces of information to be stored in the "FieldLevelLineage" dataset:

  1. Properties associated with the dataset. For example: the file path and directory name associated with the "HR File", or the broker id and topic name associated with a Kafka plugin. This will be a single row per dataset per namespace. If the same dataset is used in multiple pipelines with different configurations, the stored properties will be the union of both.
  2. Fields associated with the dataset. This will be a single row per dataset per namespace. We will store each field as a separate column in this row. The value of the column can be additional properties such as the creation time, the last update time, the run id responsible for the last update, etc.
  3. Lineage associated with each field of the target dataset. For each field belonging to each target dataset, and for each run of a pipeline writing to that dataset, there will be one row (a row-key construction sketch follows this list).
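
The row keys for these three cases can be derived mechanically from the namespace, dataset, field, and run information. Below is a minimal sketch in plain Java (not the actual CDAP API; the class and method names here are hypothetical) of how the three kinds of row keys could be built, including the inverted start time that keeps the latest run first in a lexicographic scan:

public class LineageRowKeys {

  // 1. Properties row: one row per namespace per dataset.
  static String propertiesRowKey(String namespace, String dataset) {
    return namespace + ":" + dataset;                      // e.g. "MyNamespace:HRFile"
  }

  // 2. Fields row: one row per namespace per dataset, one column per field.
  static String allFieldsRowKey(String namespace, String dataset) {
    return namespace + ":" + dataset + ":AllFields";       // e.g. "MyNamespace:EmployeeData:AllFields"
  }

  // 3. Lineage row: one row per target field per pipeline run. The inverted
  // start time makes the most recent run sort first.
  static String lineageRowKey(String namespace, String dataset, String field,
                              long startTimeSecs, String runId) {
    long invertedStartTime = Long.MAX_VALUE - startTimeSecs;
    return namespace + ":" + dataset + ":" + field + ":" + invertedStartTime + ":" + runId;
  }

  public static void main(String[] args) {
    System.out.println(propertiesRowKey("MyNamespace", "HRFile"));
    System.out.println(allFieldsRowKey("MyNamespace", "EmployeeData"));
    System.out.println(lineageRowKey("MyNamespace", "EmployeeData", "ID", 1500000000L, "runidX"));
  }
}

The printed keys correspond to the sample rows in the table below.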

Example: With one run of the pipeline shown above, the following will be the sample data in the store.

| Row Key | Column Key | Value | Note |
| --- | --- | --- | --- |
| MyNamespace:HRFile | Properties | inputDir=/data/2017/hr, regex=*.csv, failOnError=false | One row per namespace per dataset |
| MyNamespace:PersonFile | Properties | inputDir=/data/2017/person, regex=*.csv, failOnError=false | One row per namespace per dataset |
| MyNamespace:EmployeeData | Properties | rowid=ID /* should we store the schema too? what if it changes per run? */ | One row per namespace per dataset |
| MyNamespace:EmployeeData:AllFields | ID | created_time:12345678, updated_time:12345678, last_updated_by:runid_X /* we may not necessarily be required to store any value */ | One row per namespace per dataset |
| MyNamespace:EmployeeData:AllFields | Name | | |
| MyNamespace:EmployeeData:AllFields | Department | | |
| MyNamespace:EmployeeData:AllFields | ContactDetails | | |
| MyNamespace:EmployeeData:AllFields | JoiningDate | | |
| MyNamespace:EmployeeData:ID:<runidX-inverted-start-time>:runidX | Lineage | (JSON document below) | One row per target field per run |

The Lineage value stored for the last row is the following JSON document:
{
  "sources":[ {
      "name": "PersonFile",
      "properties" : {
         "inputPath": "/data/2017/persons",
         "regex": "*.csv"
       }
   }, {
      "name": "HRFile",
      "properties" : {
         "inputPath": "/data/2017/hr",
         "regex": "*.csv"
       }
  }],
  "targets":[ {
     "name": "Employee Data"
    }
  ], 
  "operations": [
     {
        "inputs": [{
           "name": "PersonRecord", 
           "source": "PersonFile" 
        }], "outputs": [{
           "name": "PersonRecord.body"  
        }],
        "name": "READ",
        "description": "Read Person file." 
     },
     {
        "inputs":[{
            "name": "PersonRecord.body"   
         }
        ],
        "outputs": [
           {
             "name": "SSN"
           }
        ],
        "name": "PARSE",
        "description": "Parse the body field"
     },
     {
        "inputs": [{
           "name": "HRRecord", 
           "source": "HRFile" 
        }], "outputs": [{
           "name": "HRRecord.body"  
        }],
        "name": "READ",
        "description": "Read HR file." 
     },
     {
        "inputs":[{
            "name": "HRRecord.body"
         }
        ],
        "outputs": [
           {
             "name": "Employee_Name"
           }, 
           {
             "name": "Dept_Name"
           }
        ],
        "name": "PARSE",
        "description": "Parse the body field"
     },
     {
        "inputs": [
           {
             "name": "Employee_Name"
           }, 
           {
             "name": "Dept_Name"
           },
           {
              "name": "SSN"
           }
        ],
        "outputs": [
           {
              "name": "ID",
              "target": "Employee Data"
           }
        ],
        "name": "GenerateID",
        "description": "Generate unique Employee Id"
     } 
  ] 
}
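
Given such a stored lineage document, the lineage graph for a single field (for example ID, as shown earlier) can be reconstructed by walking the "operations" list backwards from the target field to the sources. The following is a minimal sketch in plain Java (hypothetical types, not the actual CDAP API) that hard-codes the operations from the JSON above and collects every operation that transitively contributes to ID:

import java.util.*;

public class FieldLineageWalker {

  // Simplified stand-in for one entry of the "operations" array above.
  record Operation(String name, List<String> inputs, List<String> outputs) {}

  // Collects the operations that (transitively) produce targetField.
  static List<Operation> lineageOf(String targetField, List<Operation> ops) {
    Deque<String> pending = new ArrayDeque<>(List.of(targetField));
    Set<String> seen = new HashSet<>();
    List<Operation> result = new ArrayList<>();
    while (!pending.isEmpty()) {
      String field = pending.pop();
      for (Operation op : ops) {
        if (op.outputs().contains(field) && !result.contains(op)) {
          result.add(op);
          for (String input : op.inputs()) {
            if (seen.add(input)) {
              pending.push(input);
            }
          }
        }
      }
    }
    return result;
  }

  public static void main(String[] args) {
    // Operations taken from the sample lineage JSON above.
    List<Operation> ops = List.of(
        new Operation("READ",       List.of("PersonRecord"),      List.of("PersonRecord.body")),
        new Operation("PARSE",      List.of("PersonRecord.body"), List.of("SSN")),
        new Operation("READ",       List.of("HRRecord"),          List.of("HRRecord.body")),
        new Operation("PARSE",      List.of("HRRecord.body"),     List.of("Employee_Name", "Dept_Name")),
        new Operation("GenerateID", List.of("Employee_Name", "Dept_Name", "SSN"), List.of("ID")));
    // Walking back from "ID" reaches all five operations of this run.
    for (Operation op : lineageOf("ID", ops)) {
      System.out.println(op.name() + " " + op.inputs() + " -> " + op.outputs());
    }
  }
}

Walking backwards this way is what would let the UI render the per-field graph shown earlier without scanning operations unrelated to the selected field.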