Kafka Producer Sink

Plugin version: 3.2.0

Kafka sink that writes events to a Kafka topic in CSV or JSON format. The sink can partition events based on a configurable key, operate in synchronous or asynchronous mode, and apply different compression types to events. This plugin uses the Kafka 2.6 Java APIs.
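For orientation, the sketch below shows the kind of producer logic this plugin wraps, written against the Kafka 2.6 Java client. The broker address, topic name, key, and CSV payload are illustrative assumptions, not the plugin's actual internals.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");                // Kafka Brokers
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("compression.type", "gzip");                           // Compression type

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The key drives partitioning; the value is the CSV-formatted event.
            ProducerRecord<String, String> record =
                new ProducerRecord<>("alarm", "event-key", "ts,severity,details");

            // Synchronous mode (Is Async? = FALSE): block until the broker acknowledges.
            producer.send(record).get();

            // Asynchronous mode (Is Async? = TRUE): hand off and handle the result in a callback.
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    exception.printStackTrace();
                }
            });
        }
    }
}
```

Synchronous mode pays a broker round trip per send; asynchronous mode trades that certainty for throughput.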

If the Kafka instance is SSL-enabled, see Reading from and writing to SSL enabled Apache Kafka for additional configuration information.
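As a rough illustration, SSL-enabled brokers are typically reached by adding the standard Kafka producer security properties shown below; the paths and passwords are placeholders, and the page linked above remains the authoritative reference.

```java
import java.util.Properties;

class SslProducerConfig {
    static Properties props() {
        Properties props = new Properties();
        props.put("security.protocol", "SSL");
        // Truststore holding the CA certificate that signed the broker certificates.
        props.put("ssl.truststore.location", "/path/to/truststore.jks");
        props.put("ssl.truststore.password", "changeit");
        // Keystore for client authentication, only if the broker requires it.
        props.put("ssl.keystore.location", "/path/to/keystore.jks");
        props.put("ssl.keystore.password", "changeit");
        props.put("ssl.key.password", "changeit");
        return props;
    }
}
```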

Configuration

| Property | Macro Enabled? | Version Introduced | Description |
| --- | --- | --- | --- |
| Use Connection | No | 2.8.0/6.7.0 | Optional. Whether to use a connection. If a connection is used, you do not need to provide the credentials. |
| Connection | Yes | 2.8.0/6.7.0 | Optional. Name of the connection to use. Project and service account information will be provided by the connection. You can also use the macro function ${conn(connection_name)}. |
| Kafka Brokers | Yes |  | Required. List of brokers to connect to. |
| Reference Name | No |  | Required. Uniquely identifies this sink for lineage, annotating metadata, etc. |
| Kafka Topic | Yes |  | Required. List of topics to which events are published. |
| Is Async? | Yes |  | Required. Whether events are written to the broker asynchronously or synchronously. Default is FALSE. |
| Compression type | Yes |  | Required. Compression type applied to messages: none, gzip, or snappy. Default is none. |
| Additional Kafka Producer Properties | Yes |  | Optional. Additional Kafka producer properties, such as acks and client.id, specified as key-value pairs. |
| Message Format | Yes |  | Required. Format, json or csv, of the events published to Kafka. Default is CSV. |
| Message Key field | Yes |  | Optional. Input field to use as the key for events published to Kafka. A string partitioner uses this key to decide which Kafka partition each event goes to. The key field must be of type string. |
| Kerberos Principal | Yes |  | Optional. The Kerberos principal used by the sink when Kerberos security is enabled for Kafka (see the sketch after this table). |
| Keytab Location | Yes |  | Optional. The keytab location for the Kerberos principal when Kerberos security is enabled for Kafka. |
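When Kerberos security is enabled, the principal and keytab above typically translate into SASL/GSSAPI producer settings along the following lines; the principal, keytab path, and service name are placeholder assumptions, not values the plugin prescribes.

```java
import java.util.Properties;

class KerberosProducerConfig {
    static Properties props() {
        Properties props = new Properties();
        props.put("security.protocol", "SASL_PLAINTEXT"); // use SASL_SSL on SSL-enabled clusters
        props.put("sasl.mechanism", "GSSAPI");
        props.put("sasl.kerberos.service.name", "kafka");
        props.put("sasl.jaas.config",
            "com.sun.security.auth.module.Krb5LoginModule required "
            + "useKeyTab=true storeKey=true "
            + "keyTab=\"/path/to/principal.keytab\" "         // Keytab Location
            + "principal=\"user/host@REALM.EXAMPLE.COM\";");  // Kerberos Principal
        return props;
    }
}
```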

Example

This example writes structured records to the Kafka topic 'alarm' synchronously, applying compression type 'gzip'. Events are serialized to CSV and written to a Kafka broker running at localhost:9092, and the Kafka partition for each event is determined by the key field 'message'. Additional producer properties, such as the number of acknowledgements and the client ID, can also be provided. A rough equivalent in raw producer configuration is sketched after the table.

| Property | Value |
| --- | --- |
| Reference Name | Kafka |
| Kafka Brokers | localhost:9092 |
| Kafka Topic | alarm |
| Is Async? | FALSE |
| Compression type | gzip |
| Additional Kafka Producer Properties | acks:2,client.id:myclient |
| Message Format | CSV |
| Message Key field | message |
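Assuming the sink passes these settings straight through to the Kafka producer (a plausible reading, not confirmed by this page), the example corresponds roughly to the following client configuration.

```java
import java.util.Properties;

class AlarmExampleConfig {
    static Properties props() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // Kafka Brokers
        props.put("compression.type", "gzip");            // Compression type
        props.put("client.id", "myclient");               // Additional Kafka Producer Properties
        // The example lists acks:2, a legacy Kafka 0.8 value; the Kafka 2.x
        // producer accepts only 0, 1, or all/-1, so "all" is used here instead.
        props.put("acks", "all");
        return props;
    }
}
```

With Is Async? set to FALSE, each send blocks until the broker acknowledges the write.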



 

Created in 2020 by Google Inc.