BigQuery source: fix handling of nested record with same name as parent record

Description

When the BQ source reads from a table that contains a nested record with the same name as its parent record, it generates a recursive schema: CDAP schemas resolve records by name, so the nested record is treated as a reference back to the parent rather than as a distinct type.
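The name collision can be illustrated with a minimal sketch (the `Record` class and `define` registry below are hypothetical stand-ins for name-based schema resolution, not the actual CDAP `Schema` API): defining a nested record under an already-registered name resolves to the existing record, making the schema self-referential.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical minimal model of name-based record resolution.
class NamedSchemaDemo {
    static final class Record {
        final String name;
        final Map<String, Record> fields = new LinkedHashMap<>();
        Record(String name) { this.name = name; }
    }

    // A nested record reusing an already-defined name resolves to the
    // existing record instead of creating a new one.
    static Record define(Map<String, Record> registry, String name) {
        return registry.computeIfAbsent(name, Record::new);
    }

    public static void main(String[] args) {
        Map<String, Record> registry = new HashMap<>();
        Record parent = define(registry, "record");
        // Nested record with the SAME name resolves to the parent itself.
        Record nested = define(registry, "record");
        parent.fields.put("record", nested);

        // The schema is now self-referential (recursive).
        System.out.println("recursive=" + (parent.fields.get("record") == parent));
    }
}
```

Any consumer that walks this schema recursively without a cycle check will never terminate.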

In the attached pipeline, there is a record named `record` that contains a nested record with the same name. When we attempt to write the flattened record to the BQ sink, schema translation recurses infinitely and throws a `StackOverflowError`:

```
java.lang.Exception: null
at io.cdap.cdap.internal.app.runtime.AbstractContext.lambda$initializeProgram$6(AbstractContext.java:605) ~[na:na]
at io.cdap.cdap.internal.app.runtime.AbstractContext.execute(AbstractContext.java:560) ~[na:na]
at io.cdap.cdap.internal.app.runtime.AbstractContext.initializeProgram(AbstractContext.java:597) ~[na:na]
at io.cdap.cdap.app.runtime.spark.SparkRuntimeService.initialize(SparkRuntimeService.java:433) ~[io.cdap.cdap.cdap-spark-core2_2.11-6.4.0-SNAPSHOT.jar:na]
at io.cdap.cdap.app.runtime.spark.SparkRuntimeService.startUp(SparkRuntimeService.java:208) ~[io.cdap.cdap.cdap-spark-core2_2.11-6.4.0-SNAPSHOT.jar:na]
at com.google.common.util.concurrent.AbstractExecutionThreadService$1$1.run(AbstractExecutionThreadService.java:47) ~[com.google.guava.guava-13.0.1.jar:na]
at io.cdap.cdap.app.runtime.spark.SparkRuntimeService$5$1.run(SparkRuntimeService.java:404) [io.cdap.cdap.cdap-spark-core2_2.11-6.4.0-SNAPSHOT.jar:na]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_282]
java.lang.StackOverflowError: null
at java.lang.System$2.getEnumConstantsShared(System.java:1252) ~[na:1.8.0_282]
at java.util.EnumSet.getUniverse(EnumSet.java:407) ~[na:1.8.0_282]
at java.util.EnumSet.noneOf(EnumSet.java:110) ~[na:1.8.0_282]
at com.google.api.client.util.GenericData.<init>(GenericData.java:55) ~[na:na]
at com.google.api.client.json.GenericJson.<init>(GenericJson.java:36) ~[na:na]
at com.google.api.services.bigquery.model.TableFieldSchema.<init>(TableFieldSchema.java:30) ~[na:na]
at com.google.cloud.hadoop.io.bigquery.output.BigQueryTableFieldSchema.<init>(BigQueryTableFieldSchema.java:34) ~[na:na]
at io.cdap.plugin.gcp.bigquery.sink.BigQuerySinkUtils.generateTableFieldSchema(BigQuerySinkUtils.java:106) ~[na:na]
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) ~[na:1.8.0_282]
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1384) ~[na:1.8.0_282]
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482) ~[na:1.8.0_282]
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472) ~[na:1.8.0_282]
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708) ~[na:1.8.0_282]
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[na:1.8.0_282]
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:566) ~[na:1.8.0_282]
at io.cdap.plugin.gcp.bigquery.sink.BigQuerySinkUtils.generateTableFieldSchema(BigQuerySinkUtils.java:122) ~[na:na]
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) ~[na:1.8.0_282]
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1384) ~[na:1.8.0_282]
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482) ~[na:1.8.0_282]
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472) ~[na:1.8.0_282]
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708) ~[na:1.8.0_282]
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[na:1.8.0_282]
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:566) ~[na:1.8.0_282]
at io.cdap.plugin.gcp.bigquery.sink.BigQuerySinkUtils.generateTableFieldSchema(BigQuerySinkUtils.java:122) ~[na:na]
...
```
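The trace shows `BigQuerySinkUtils.generateTableFieldSchema` calling itself without ever reaching a base case. One defensive option is to track the records on the current traversal path and fail fast on a cycle; the sketch below is a simplified illustration of that idea (the `Record` class and `translate` method are hypothetical, not the plugin's actual code).

```java
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of a cycle guard for recursive schema translation.
class CycleGuardDemo {
    static final class Record {
        final String name;
        final Map<String, Record> fields = new LinkedHashMap<>();
        Record(String name) { this.name = name; }
    }

    // Flattens the schema to a string, throwing instead of overflowing
    // the stack when a record references itself (directly or indirectly).
    static String translate(Record rec, Set<Record> path) {
        if (!path.add(rec)) {
            throw new IllegalStateException(
                "recursive schema at record '" + rec.name + "'");
        }
        StringBuilder sb = new StringBuilder(rec.name).append('{');
        for (Map.Entry<String, Record> e : rec.fields.entrySet()) {
            sb.append(e.getKey()).append(':')
              .append(translate(e.getValue(), path)).append(';');
        }
        path.remove(rec);  // the same record may legally recur on sibling branches
        return sb.append('}').toString();
    }

    public static void main(String[] args) {
        Record parent = new Record("record");
        parent.fields.put("record", parent);  // self-referential, as in the bug
        try {
            translate(parent, Collections.newSetFromMap(new IdentityHashMap<>()));
            System.out.println("no cycle detected");
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

An alternative fix on the source side is to avoid creating the collision in the first place, e.g. by assigning nested records unique generated names during schema inference.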

Release Notes

None

Assignee

Unassigned

Reporter

Prashant Jaikumar

Labels

None

Dev Complete Date

None

Publish Date

None

Reviewer

None

Sprint

Priority

Major