Example: Consider a sample pipeline which performs a join between an HR dataset and a Person dataset. The joined data is then normalized (to rename some fields, drop some fields, and create an ID) before the data is stored in the data lake. The following diagram shows the sample pipeline, which reads from two file sources. Note that the 2D boxes represent the SCHEMA flowing through the pipeline, while the 3D boxes represent the pipeline stages.

[Diagram: sample pipeline reading from two file sources, joining, normalizing, and writing to the data lake]

 

In the lineage view, we show high level information first, as shown below. Note that 'HR File', 'Person File', and 'Employee Data' are the names of the input and output datasets, as indicated by the Reference Name in the plugin properties.

[Diagram: high level lineage view showing HR File, Person File, and Employee Data]

The next, more detailed view contains the clickable fields from the input and output datasets. Note that the 2D boxes represent fields belonging to the datasets. Since the input datasets are of type file, which does not have a schema yet, the plugin can provide any String name for them. In this case we are using "HR Record" and "Person Record" as the names.

[Diagram: detailed lineage view with clickable fields of the input and output datasets]

 

Once the user clicks on a particular field, the field level lineage graph can be displayed.

Example: Graph for the field ID, where circles represent fields and edges represent operations, with the operation names shown in bubbles.

[Diagram: field level lineage graph for the field ID]

Note that "body" field is generated from "HR Record" as well as "Person Record". To distinguish it while storing we might need to prefix it with the stage name. 

As additional information for the source and target datasets, we might want to show the associated properties such as the file path, the regex used, etc.

Store: Based on the above example, we want the following pieces of information to be stored in the "FieldLevelLineage" dataset:

  1. Properties associated with the dataset. For example: the file path and the name of the directory associated with "HR File", or the broker id and topic name associated with the Kafka plugin. This will be a single row per dataset per namespace. If the same dataset is used in multiple pipelines but with different configurations, the stored properties will be the union of both.
  2. Fields associated with the dataset. This will be a single row per dataset per namespace. We will store each field as a separate column in this row. The value of the column can hold additional properties such as the creation time, the last update time, the run id responsible for the last update, etc.
  3. Lineage associated with each field of the target dataset. For each field belonging to each target dataset, and for each run of a pipeline writing to that dataset, there will be one row.

Example: With one run of the pipeline shown above, the following will be the sample data in the store.

Row Key | Column Key | Value | Note
MyNamespace:HRFile | Properties | inputDir=/data/2017/hr, regex=*.csv, failOnError=false | One row per namespace per dataset
MyNamespace:PersonFile | Properties | inputDir=/data/2017/person, regex=*.csv, failOnError=false | One row per namespace per dataset
MyNamespace:EmployeeData | Properties | rowid=ID  /* should we store schema too? what if that changes per run? */ | One row per namespace per dataset
MyNamespace:EmployeeData:AllFields | ID | created_time:12345678, updated_time:12345678, last_updated_by:runid_X  /* we may not necessarily be required to store any value */ | One row per namespace per dataset
MyNamespace:EmployeeData:AllFields | Name | |
MyNamespace:EmployeeData:AllFields | Department | |
MyNamespace:EmployeeData:AllFields | ContactDetails | |
MyNamespace:EmployeeData:AllFields | JoiningDate | |
MyNamespace:EmployeeData:ID:<runidX-inverted-start-time>:runidX | Lineage | Please see the full JSON below. | One row per run if the field is part of the target
MyNamespace:EmployeeData:Name:<runidX-inverted-start-time>:runidX | Lineage | Similar JSON | One row per run if the field is part of the target
MyNamespace:EmployeeData:ContactDetails:<runidX-inverted-start-time>:runidX | Lineage | Similar JSON | One row per run if the field is part of the target
MyNamespace:EmployeeData:JoiningDate:<runidX-inverted-start-time>:runidX | Lineage | Similar JSON | One row per run if the field is part of the target

JSON stored for ID field:

...

The rest of this document explains the design for the storage and retrieval of the field level lineage information.

Access Pattern:

  1. For a given dataset, find the high level lineage (the field mapping between source and destination datasets, not the detailed operations which caused this conversion) going in the backward direction within a given time range. Note that the response should be multi-level. For example, consider a case where the "Employee" dataset is generated from the "Person", "HR", and "Skills" datasets. The response would contain the field mappings between the source datasets ("Person", "HR", and "Skills") and the "Employee" dataset. However, it is also possible that the source datasets were created or updated in the given time range, so the response should also include the field mappings between the datasets which created those source datasets and the source datasets themselves (a traversal sketch is shown after this list).
  2. For a given dataset, find the high level lineage (the field mapping between source and destination datasets, not the detailed operations which caused this conversion) going in the forward direction within a given time range. Similar to the above query, the response needs to be multi-level.
  3. Given a dataset and a field name, find the detailed lineage (the field mapping between the source and destination datasets along with the operations which caused this conversion) going in the backward direction. The response will only contain the operations belonging to a single level.
  4. Given a dataset and a field name, find the detailed lineage (the field mapping between the source and destination datasets along with the operations which caused this conversion) going in the forward direction. The response will only contain the operations belonging to a single level.
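
The multi-level expansion required by access patterns 1 and 2 amounts to a bounded breadth-first walk over single-level field mappings. The sketch below is illustrative only; the Mapping type and the lookup function that returns single-level mappings within the requested time range are assumptions, not part of the actual store API.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;
    import java.util.function.Function;

    // Sketch of the multi-level backward expansion described in access patterns 1 and 2.
    // "Mapping" and the "incoming" lookup are illustrative stand-ins for whatever the
    // FieldLevelLineage store returns for a single level within the requested time range.
    public class MultiLevelLineage {

      // One high-level mapping between a source dataset and a destination dataset.
      public static final class Mapping {
        final String sourceDataset;
        final String destinationDataset;

        Mapping(String sourceDataset, String destinationDataset) {
          this.sourceDataset = sourceDataset;
          this.destinationDataset = destinationDataset;
        }
      }

      // Walks backward from 'dataset' for up to 'levels' hops and collects all mappings.
      // 'incoming' returns the single-level mappings whose destination is the given dataset.
      public static List<Mapping> backward(String dataset, int levels,
                                           Function<String, List<Mapping>> incoming) {
        List<Mapping> result = new ArrayList<>();
        Set<String> frontier = new HashSet<>(Collections.singleton(dataset));
        Set<String> visited = new HashSet<>(frontier);
        for (int hop = 0; hop < levels && !frontier.isEmpty(); hop++) {
          Set<String> next = new HashSet<>();
          for (String destination : frontier) {
            for (Mapping m : incoming.apply(destination)) {
              result.add(m);
              // Sources become the destinations to expand in the next hop.
              if (visited.add(m.sourceDataset)) {
                next.add(m.sourceDataset);
              }
            }
          }
          frontier = next;
        }
        return result;
      }
    }

The forward direction (access pattern 2) is symmetric, expanding over outgoing mappings instead of incoming ones.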

REST API:

  1. Given a dataset and time range, get the high level lineage in both the forward and backward directions (a sample invocation is shown after this list).

    Code Block
    GET /v3/namespaces/<namespace-id>/endpoints/<endpoint-name>/fields/lineage?start=<start-ts>&end=<end-ts>&level=<level>
    
    
    Where:
    namespace-id: namespace name
    endpoint-name: name of the endpoint
    start-ts: starting timestamp (inclusive) in seconds
    end-ts: ending timestamp (exclusive) in seconds for lineage
    level: the number of hops to make in the backward/forward direction
    
    
    Sample response:
    [
      ...
      list of lineage mappings
      ...
    ]
    
    
    where each lineage mapping will be of the form:
    
    
    {
      "source": {
         "namespace": "ns",
         "name": "Person" 
      },
      "Destination": {
         "namespace": "ns",
         "name": "Employee"
      },
      "fieldmap": [
         { "from": "id", "to": "id" },
         { "from": "first_name", "to": "name"},
         { "from": "last_name", "to": "name"}
      ] 
    }
  2. Given a dataset and field, find out the detailed lineage.

    Code Block
    GET /v3/namespaces/<namespace-id>/endpoints/<endpoint-id>/fields/<field-name>/lineage?start=<start-ts>&end=<end-ts>&direction=<backward/forward>
     
    Where:
    namespace-id: namespace name
    endpoint-id: endpoint name
    field-name: name of the field for which lineage information is to be retrieved
    start-ts: starting timestamp (inclusive) in seconds
    end-ts: ending timestamp (exclusive) in seconds for lineage
    direction: backward or forward
    
    
    Sample response:
    {
      [
       ...
          list of nodes
       ...
      ],
      [
       ...
          list of operations
       ...
      ],
      [
       ...
          list of connections
       ...
      ]    
    }
    
    where each Node is an object representing a field. A Node has an id which uniquely identifies the Node (a combination of origin and name) and a label which is used for display on the UI. A Node can have optional sourceEndPoint and destinationEndPoint members which indicate whether this node is generated directly from a source EndPoint or written to a destination EndPoint.
     
    {
      "id": "origin.fieldname",
      "label": "fieldname",
      "sourceEndPoint": {
         "name": "file",
         "namespace": "ns"
      }
    }
    
    
    each Operation is represented as
    {
      "name": "IDENTITY",
      "description": "description associated with the operation"
    }
    
    
    each Connection represents a transformation between two nodes, with the name of the operation that caused it:
    {
      "from": "Node1.id",
      "to": "Node2.id",
      "operation": "opname"
    }
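
For illustration, the first endpoint could be invoked as follows. This is a minimal sketch rather than shipped client code: the host and port, the namespace and endpoint names, and the timestamps are placeholder values based on the example pipeline above.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class FieldLineageRequestExample {
      public static void main(String[] args) throws Exception {
        // Placeholder host/port and example values; adjust for the actual deployment.
        String url = "http://localhost:11015/v3/namespaces/MyNamespace/endpoints/EmployeeData"
            + "/fields/lineage?start=1514764800&end=1517443200&level=2";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // The body is the JSON array of lineage mappings described for endpoint 1.
        System.out.println(response.statusCode());
        System.out.println(response.body());
      }
    }

The second endpoint is called the same way, with the field name added to the path and the direction passed as a query parameter.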
...

Store:

Field level lineage information will be stored in the "FieldLevelLineage" dataset. 

This dataset will have the following types of row keys (a key-composition sketch follows the list):

  1. Data row: This row will store the actual operations data against the checksum of the operations.

     Row Key: c|<checksum-value>
     column 'd': FieldLineageInfo object

  2. Backward lineage row: From the perspective of the destination endpoints, the operations represent the backward lineage. A separate row will be created for each destination.

     Row Key: b | <endpoint_ns> | <endpoint_name> | <inverted-start-time> | <id.run>
     column 'c': <checksum>
     column 'p': <program-run-id>

  3. Forward lineage row: From the perspective of the source endpoints, the operations represent the forward lineage. A separate row will be created for each source.

     Row Key: f | <endpoint_ns> | <endpoint_name> | <inverted-start-time> | <id.run>
     column 'c': <checksum>
     column 'p': <program-run-id>
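
A minimal sketch of how such row keys could be composed is shown below. This is not the actual implementation; the separator character, the checksum function (CRC32 here), and the time inversion are illustrative assumptions.

    import java.nio.charset.StandardCharsets;
    import java.util.zip.CRC32;

    // Sketch of row-key composition for the FieldLevelLineage dataset.
    // Separator, checksum algorithm, and time inversion are illustrative assumptions.
    public class FieldLineageRowKeys {
      private static final String SEP = "|";

      // Checksum of the serialized operations; the real implementation may differ.
      static long checksum(String serializedOperations) {
        CRC32 crc = new CRC32();
        crc.update(serializedOperations.getBytes(StandardCharsets.UTF_8));
        return crc.getValue();
      }

      // Inverted start time so that newer runs sort first in a lexicographic scan.
      static long invert(long startTimeSeconds) {
        return Long.MAX_VALUE - startTimeSeconds;
      }

      // Data row: c|<checksum-value>
      static String dataRowKey(long checksum) {
        return "c" + SEP + checksum;
      }

      // Backward ("b") or forward ("f") lineage row:
      // <prefix>|<endpoint_ns>|<endpoint_name>|<inverted-start-time>|<id.run>
      static String lineageRowKey(String prefix, String endpointNamespace, String endpointName,
                                  long startTimeSeconds, String programRunId) {
        return prefix + SEP + endpointNamespace + SEP + endpointName + SEP
            + invert(startTimeSeconds) + SEP + programRunId;
      }
    }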

The FieldLineageInfo object will store the following information (an illustrative sketch follows the list):

  1. Collection<Operation>: the operations representing the field lineage.
  2. Checksum of the operations.
  3. Set of source endpoints.
  4. Set of destination endpoints.
  5. High level bi-directional mapping of fields from the source endpoints to the destination endpoints. This serves access patterns 1 and 2 described above.
  6. For each field of a source endpoint, a graph from that field to the destination fields.
  7. For each field of a destination endpoint, a graph resulting in that field from the different source fields.
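
For illustration, the object could be shaped roughly as follows. The field names and the nested placeholder types (Operation, EndPoint, EndPointField, FieldGraph) are assumptions for this sketch, not the final API.

    import java.util.Collection;
    import java.util.Map;
    import java.util.Set;

    // Illustrative sketch of the FieldLineageInfo contents described above.
    // The nested types are placeholders standing in for the real classes.
    public class FieldLineageInfo {
      private Collection<Operation> operations;            // 1. operations representing the field lineage
      private long checksum;                                // 2. checksum of the operations
      private Set<EndPoint> sourceEndpoints;                // 3. source endpoints
      private Set<EndPoint> destinationEndpoints;           // 4. destination endpoints

      // 5. high level bi-directional field mapping between source and destination endpoints
      private Map<EndPointField, Set<EndPointField>> outgoingSummary;
      private Map<EndPointField, Set<EndPointField>> incomingSummary;

      // 6. per source field: graph from that field to the destination fields
      private Map<EndPointField, FieldGraph> forwardGraphs;
      // 7. per destination field: graph resulting in that field from the source fields
      private Map<EndPointField, FieldGraph> backwardGraphs;

      // Placeholder types for illustration only.
      static final class Operation { }
      static final class EndPoint { }
      static final class EndPointField { }
      static final class FieldGraph { }
    }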

Open questions:

  1. Per the UI requirements, we also want to show the types of the fields, which we currently do not accept through the API.
  2. What constitutes the dataset schema? For example, for a fileset, should we assume that the fields generated by the READ operations are part of the schema?