It seems like when the log is huge and the browser is far from the nodejs process (e.g. Bangalore <-> US), the nodejs process can crash with an out-of-memory error, presumably because the high-latency client drains the response more slowly than nodejs fills its buffer.
Fixed a problem with NodeJS buffering the whole response before sending it to a client.
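The underlying pattern is to pipe the upstream log response to the browser instead of accumulating it in memory. Here is a minimal sketch, not the actual CDAP UI code; the backend host and port are placeholders:

```javascript
// Sketch: proxy log responses to the client as a stream instead of a buffer.
// 'log-backend.internal' and the ports are hypothetical placeholders.
const http = require('http');

http.createServer((clientReq, clientRes) => {
  const upstreamReq = http.request(
    { host: 'log-backend.internal', port: 10000, path: clientReq.url },
    (upstreamRes) => {
      clientRes.writeHead(upstreamRes.statusCode, upstreamRes.headers);
      // pipe() applies backpressure: chunks are only read from the log
      // backend as fast as the (possibly distant) client consumes them,
      // so memory use stays bounded regardless of log size.
      upstreamRes.pipe(clientRes);
    }
  );
  upstreamReq.on('error', (err) => {
    clientRes.writeHead(502);
    clientRes.end('log backend unavailable: ' + err.message);
  });
  upstreamReq.end();
}).listen(8080);
```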
It also seems related to file corruption: when a log avro file is corrupted (for whatever reason) and the log is fetched, the nodejs process crashes.
We can simulate this by replacing an existing log file with a 0-byte one.
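For reference, a quick way to do that from Node (the path below is purely illustrative, not an actual CDAP location):

```javascript
// Simulate a corrupted log by truncating an existing avro file to 0 bytes.
// Substitute a real log file path from your cluster.
const fs = require('fs');
fs.truncateSync('/path/to/logs/some-program.avro', 0);
```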
I believe there are four things we need to fix:
1. The log handler should skip corrupted files and keep going instead of responding with an error (probably a 500 in this case); a sketch of this skip-and-continue pattern follows the list.
2. The UI nodejs process shouldn't fail with an out-of-memory error when the log handler responds with an error.
3. The log saver process shouldn't process corrupted avro files.
4. We have to make sure the nodejs process won't run out of memory when Download All/View Raw is hit (the streaming sketch above applies here too).
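As an illustration of point 1, here is a minimal sketch of the skip-and-continue pattern, written in Node.js for consistency with the rest of this issue; the real log handler lives in the CDAP backend, and `decodeAvroFile` is a hypothetical helper, not an actual CDAP or avro-library API:

```javascript
// Sketch: iterate over the avro log files, and when one fails to decode
// (e.g. a 0-byte file), log a warning and move on rather than aborting
// the whole response with a 500.
const fs = require('fs');
const path = require('path');

function* readLogEvents(logDir, decodeAvroFile) {
  for (const name of fs.readdirSync(logDir)) {
    const file = path.join(logDir, name);
    try {
      // decodeAvroFile is assumed to return an iterable of log events
      // and to throw on a corrupted file.
      yield* decodeAvroFile(file);
    } catch (err) {
      // Skip the corrupted file and keep serving the rest of the log.
      console.warn('skipping corrupted log file %s: %s', file, err.message);
    }
  }
}
```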
This PR addresses this issue: https://github.com/caskdata/cdap/pull/6850