Kafka source plugin skips, rather than fails, on messages that are too large
Description
The Kafka source plugin seems to skip, rather than fail, on messages that are too large.
If we send messages larger than the default fetch limit (1 MB), the pipeline still completes successfully.
The Kafka batch source statistics show that no messages were read (0 in / 0 out), but the offset is still updated in the HBase table, so the next run skips these messages.
Instead, the pipeline should report an error and fail. I tried to read the same messages using a Flume agent; it threw exceptions saying that the message size was too large. After adding the property consumer.max.partition.fetch.bytes (e.g. 5 MB) to the consumer config, everything worked properly.
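For reference, a minimal standalone Java sketch of the same workaround applied directly through the Kafka consumer API: raising max.partition.fetch.bytes above the 1 MB default lets the consumer fetch the oversized records. The broker address, group id, and topic name below are placeholders, not values from the affected pipeline:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class LargeMessageConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "large-message-test");      // placeholder group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Raise the per-partition fetch limit from the 1 MB default to 5 MB,
        // mirroring the consumer.max.partition.fetch.bytes workaround above.
        props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 5 * 1024 * 1024);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events")); // placeholder topic
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(10));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d size=%d bytes%n",
                        record.offset(), record.serializedValueSize());
            }
        }
    }
}
```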