
Properties

| Property | Macro Enabled? | Description |
| --- | --- | --- |
| Host | No | Required. Hostname of the MySQL server to read from. |
| Port | No | Required. Port to use to connect to the MySQL server. |
| JDBC Plugin Name | No | Required. Identifier for the MySQL JDBC driver. This is the name that was used when the MySQL JDBC driver was uploaded. |
| Database Name | No | Required. Name of the database to replicate data from. |
| User | No | Required. Username to use to connect to the MySQL server. The actual account used by the source when connecting to the MySQL server has the form 'user_name'@'%', where user_name is the value of this field. See the sketch after this table for the privileges such an account typically needs. |
| Password | Yes | Required. Password to use to connect to the MySQL server. Note: If you use a macro for the password, it must be in the Secure Store. If it is not in the Secure Store, the replication job fails. For more information, see Using Secure Keys. |
| Consumer ID | No | Optional. Unique numeric ID that identifies this origin as an event consumer. This number cannot be the same as that of another replication job reading from the same server, and it cannot be the same as the server-id of any MySQL slave that is replicating from the server. By default, a random number is used. |
| Server Timezone | No | Optional. Time zone of the MySQL server. This is used when converting dates into timestamps. |
| Replicate Existing Data | No | Optional. Whether to replicate existing data from the source database. By default, the pipeline replicates the existing data from the source tables. If set to false, any existing data in the source tables is ignored, and only changes that happen after the pipeline starts are replicated. |
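Because the source connects as 'user_name'@'%', the MySQL account needs binlog replication privileges. The following is a minimal sketch, not a definitive setup: the account name cdap_repl and its password are placeholders, and the exact privilege set may differ for your environment.

```sql
-- Placeholder account name and password; substitute your own values.
CREATE USER 'cdap_repl'@'%' IDENTIFIED BY 'changeme';

-- Privileges a binlog-based replication source typically needs.
GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT
  ON *.* TO 'cdap_repl'@'%';
FLUSH PRIVILEGES;
```

If the Password field is supplied through a macro, it would use the secure macro syntax, for example ${secure(mysql-password)}, assuming a secure key named mysql-password has already been added to the Secure Store. To pick a safe Consumer ID, you can check the IDs already in use on the server with SHOW VARIABLES LIKE 'server_id'; and, for registered replicas, SHOW SLAVE HOSTS;.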

Schema Mapping

For information about data type conversions from supported source databases to the BigQuery destination, see https://cloud.google.com/data-fusion/docs/reference/replication-data-types.

Schema Evolution

| DDL Operation | Supported? |
| --- | --- |
| Create table | Yes (a new table is picked up dynamically when no tables are selected in the pipeline config) |
| Rename table | No |
| Truncate table | Yes |
| Drop table | Yes |
| Add nullable column | Yes |
| Add required column | No |
| Alter column to make it nullable | Yes |
| Alter column to make it required | No |
| Alter column type | No |
| Rename column | No |
| Drop column | No |
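To make the distinction concrete, here is a sketch of DDL statements against a hypothetical customers table; the table and column names are illustrative only.

```sql
-- Supported: adding a nullable column is picked up by the replication job.
ALTER TABLE customers ADD COLUMN middle_name VARCHAR(50) NULL;

-- Not supported: adding a required (NOT NULL) column.
ALTER TABLE customers ADD COLUMN account_status VARCHAR(20) NOT NULL;

-- Not supported: changing an existing column's type.
ALTER TABLE customers MODIFY COLUMN middle_name TEXT;
```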

Troubleshooting

If the replication job is able to start snapshotting the data, but fails when it switches over to reading from the binlog with errors in the log like:

...

the fix is typically to change the user to use a MySQL native password.
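Under the assumption that this is the common MySQL 8 authentication-plugin issue (accounts created with the default caching_sha2_password plugin), a sketch of switching an existing account to a native password follows; the account name and password are placeholders.

```sql
-- Placeholder account; substitute the user configured in the source.
ALTER USER 'cdap_repl'@'%' IDENTIFIED WITH mysql_native_password BY 'changeme';
FLUSH PRIVILEGES;
```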
