Background
- The table being archived must exist in both the source and the destination database.
- pt-archiver supports four archiving modes: keep source rows with row-by-row inserts into the destination; keep source rows with bulk inserts; row-by-row deletes from the source with row-by-row inserts; bulk deletes from the source with bulk inserts.
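The four modes above correspond to pt-archiver flag combinations. The mapping below is one plausible reading of the documented options (`--no-delete`, `--bulk-insert`, `--bulk-delete`); `$SRC`/`$DST` are placeholders for the full `--source`/`--dest` DSNs used later in this post:

```shell
# 1. Keep source rows, row-by-row insert into the destination:
#    pt-archiver --source "$SRC" --dest "$DST" --where "id<100000" --no-delete
# 2. Keep source rows, bulk insert (LOAD DATA LOCAL INFILE):
#    pt-archiver --source "$SRC" --dest "$DST" --where "id<100000" --no-delete --bulk-insert --limit 10000
# 3. Row-by-row delete from the source, row-by-row insert:
#    pt-archiver --source "$SRC" --dest "$DST" --where "id<100000"
# 4. Bulk delete from the source, bulk insert into the destination:
#    pt-archiver --source "$SRC" --dest "$DST" --where "id<100000" --bulk-delete --bulk-insert --limit 10000
```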
Version
pt-archiver --version
pt-archiver 3.0.12
select @@version;
+-----------+
| @@version |
+-----------+
| 8.0.12 |
+-----------+
Can inconsistency occur?
Generate 100,000 rows of test data
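The table name sbtest1 suggests the data was generated with sysbench; a plausible prepare step (connection details and workload name are assumptions, not taken from the original):

```shell
# Hypothetical sysbench prepare step; host/user/password are placeholders.
# sysbench oltp_read_only --mysql-host=192.168.137.133 --mysql-user=admin \
#   --mysql-password=admin --mysql-db=test_archiver \
#   --tables=1 --table-size=100000 prepare
```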
Consistency test: source and destination on different instances
Source
192.168.137.133:test_archiver
Destination
192.168.137.1:test_archiver
Enable the general log
set global general_log=on;
Commit every 5,000 rows, fetching 10,000 rows per batch.
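The exact command for this run is not shown in the original. Based on the batch sizes above and the invocation shown for the later same-instance test, it was presumably something like the following (connection details are copied from the later run, so this reconstruction is hypothetical):

```shell
# Hypothetical reconstruction of the first test's invocation:
# pt-archiver --source h=192.168.137.133,u=admin,p=admin,D=test_archiver,t=sbtest1 \
#   --dest h=192.168.137.1,u=admin,p=admin,D=test_archiver \
#   --where "id<100000" --progress 1000 --statistics --no-check-charset \
#   --limit 10000 --txn-size 5000 --bulk-delete --bulk-insert
```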
Killing the pt-archiver process mid-run did not leave the source and destination inconsistent.
ps -ef | grep pt-archiver | awk '{print $2}' | xargs kill -9
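The same pipeline can be exercised safely against a dummy process; a minimal sketch using sleep as a stand-in for pt-archiver:

```shell
# Start a dummy long-running process in place of pt-archiver.
sleep 300 &
pid=$!
# The same ps|grep|awk|xargs pattern used above; the [s]leep bracket trick
# keeps grep from matching its own process entry.
ps -ef | grep '[s]leep 300' | awk '{print $2}' | xargs kill -9
wait "$pid"
status=$?
echo "exit status: $status"   # 128 + 9 (SIGKILL) = 137
```

Because the signal is SIGKILL, the process gets no chance to commit or clean up, which is exactly the failure mode this test exercises.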
Destination
select id from sbtest1 order by id desc limit 1;
+-------+
| id |
+-------+
| 10000 |
+-------+
1 row in set (0.00 sec)
Source
select id from sbtest1 order by id limit 1;
+-------+
| id |
+-------+
| 10001 |
+-------+
1 row in set (0.00 sec)
Statements executed on the source
2019-08-21T07:02:58.600832Z 56 Connect [email protected] on test_archiver using TCP/IP
2019-08-21T07:02:58.601186Z 56 Query set autocommit=0
...
2019-08-21T07:02:58.966036Z 56 Query SELECT MAX(`id`) FROM `test_archiver`.`sbtest1`
2019-08-21T07:02:58.967807Z 56 Query SELECT CONCAT(@@hostname, @@port)
2019-08-21T07:02:58.989394Z 56 Query SELECT /*!40001 SQL_NO_CACHE */ `id`,`k`,`c`,`pad` FROM `test_archiver`.`sbtest1` FORCE INDEX(`PRIMARY`) WHERE (id<100000) AND (`id` < '100000') ORDER BY `id` LIMIT 10000
...
2019-08-21T07:02:59.275620Z 56 Query commit
...
2019-08-21T07:02:59.532682Z 56 Query commit
2019-08-21T07:02:59.834194Z 56 Query SELECT 'pt-archiver keepalive'
2019-08-21T07:02:59.834835Z 56 Query DELETE FROM `test_archiver`.`sbtest1` WHERE (((`id` >= '1'))) AND (((`id` <= '10000'))) AND (id<100000) LIMIT 10000
2019-08-21T07:03:09.958289Z 56 Query SELECT /*!40001 SQL_NO_CACHE */ `id`,`k`,`c`,`pad` FROM `test_archiver`.`sbtest1` FORCE INDEX(`PRIMARY`) WHERE (id<100000) AND (`id` < '100000') AND ((`id` >= '10000')) ORDER BY `id` LIMIT 10000
...
2019-08-21T07:03:10.215958Z 56 Query commit
...
2019-08-21T07:03:10.670937Z 56 Query commit
2019-08-21T07:03:10.904398Z 56 Query SELECT 'pt-archiver keepalive'
2019-08-21T07:03:10.904715Z 56 Query DELETE FROM `test_archiver`.`sbtest1` WHERE (((`id` >= '10001'))) AND (((`id` <= '20000'))) AND (id<100000) LIMIT 10000 ====> (this DELETE was never committed, so it rolled back)
Statements executed on the destination
2019-08-21T07:03:00.317343Z 33 Connect [email protected] on test_archiver using TCP/IP
2019-08-21T07:03:00.338390Z 33 Query set autocommit=0
...
2019-08-21T07:03:00.633938Z 33 Query SELECT CONCAT(@@hostname, @@port)
2019-08-21T07:03:00.920655Z 33 Query commit
2019-08-21T07:03:01.177267Z 33 Query commit
2019-08-21T07:03:01.199046Z 33 Query LOAD DATA LOCAL INFILE '/tmp/jaGuzZfjSept-archiver' INTO TABLE `test_archiver`.`sbtest1`(`id`,`k`,`c`,`pad`) (inserted the rows with 1 <= id <= 10000)
2019-08-21T07:03:11.850618Z 33 Query commit
2019-08-21T07:03:12.315829Z 33 Query commit
2019-08-21T07:03:12.337323Z 33 Query LOAD DATA LOCAL INFILE '/tmp/GQ2ybc3KCzpt-archiver' INTO TABLE `test_archiver`.`sbtest1`(`id`,`k`,`c`,`pad`) ====> (this LOAD DATA was never committed, so it rolled back, leaving a temp file behind in /tmp on that machine)
ll /tmp/GQ2ybc3KCzpt-archiver
-rw------- 1 root root 1920000 Aug 21 15:03 /tmp/GQ2ybc3KCzpt-archiver
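Leftover bulk-insert files can be located by their naming pattern; a sketch, with a dummy file standing in for a real leftover:

```shell
# Create a dummy file matching pt-archiver's bulk-insert temp-file pattern.
touch /tmp/DUMMY12345pt-archiver
# Locate leftover bulk-insert temp files so they can be reviewed and removed.
found=$(find /tmp -maxdepth 1 -name '*pt-archiver' -type f)
echo "$found"
rm -f /tmp/DUMMY12345pt-archiver
```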
Consistency test: source and destination on the same instance
Source
192.168.137.133:test_archiver
Destination
192.168.137.133:test_archiver2
Drop the test data and regenerate 100,000 rows.
Commit every 100,000 rows, fetching 100,000 rows per batch.
pt-archiver --source h=127.0.0.1,u=admin,p=admin,P=8013,D=test_archiver,t=sbtest1 --dest h=127.0.0.1,u=admin,p=admin,P=8013,D=test_archiver2 --progress 1000 --where "id<100000" --statistics --sleep 10 --limit 100000 --no-check-charset --txn-size 100000 --bulk-delete --bulk-insert
Statements executed on the source and the destination
2019-08-22T01:50:35.672490Z 9 Connect [email protected] on test_archiver using TCP/IP
2019-08-22T01:50:35.673125Z 9 Query set autocommit=0
...
2019-08-22T01:50:35.685987Z 10 Connect [email protected] on test_archiver2 using TCP/IP
2019-08-22T01:50:35.686278Z 10 Query set autocommit=0
...
2019-08-22T01:50:35.708866Z 9 Query SELECT /*!40001 SQL_NO_CACHE */ `id`,`k`,`c`,`pad` FROM `test_archiver`.`sbtest1` FORCE INDEX(`PRIMARY`) WHERE (id<100000) AND (`id` < '100000') ORDER BY `id` LIMIT 100000
...
2019-08-22T01:50:40.242371Z 10 Query LOAD DATA LOCAL INFILE '/tmp/X5W2UemPgDpt-archiver' INTO TABLE `test_archiver2`.`sbtest1`(`id`,`k`,`c`,`pad`)
2019-08-22T01:50:43.692914Z 9 Query SELECT 'pt-archiver keepalive'
2019-08-22T01:50:43.693411Z 9 Query DELETE FROM `test_archiver`.`sbtest1` WHERE (((`id` >= '1'))) AND (((`id` <= '99999'))) AND (id<100000) LIMIT 100000
2019-08-22T01:50:58.603351Z 9 Query SELECT /*!40001 SQL_NO_CACHE */ `id`,`k`,`c`,`pad` FROM `test_archiver`.`sbtest1` FORCE INDEX(`PRIMARY`) WHERE (id<100000) AND (`id` < '100000') AND ((`id` >= '99999')) ORDER BY `id` LIMIT 100000
2019-08-22T01:50:58.606390Z 10 Query commit
2019-08-22T01:50:58.717251Z 9 Query commit
2019-08-22T01:50:58.780614Z 10 Quit
2019-08-22T01:50:58.781480Z 9 Quit
Note
On MySQL 8.0, running LOAD DATA LOCAL INFILE requires not only setting secure_file_priv but also local-infile=1 in both the [client] and [mysqld] sections; otherwise the command fails with this error:
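A minimal my.cnf fragment covering the two sections named above (all other settings omitted):

```ini
# my.cnf: enable LOAD DATA LOCAL INFILE on both server and client side
[mysqld]
local-infile=1

[client]
local-infile=1
```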
DBD::mysql::st execute failed: The used command is not allowed with this MySQL version
pt-archiver commit
- The relevant code (around line 7068 of pt-archiver 3.0.12, below) shows that the source commits only after the destination's commit has completed.
- When a transaction covers a large number of rows, committing the source's DELETE can also take a long time. If pt-archiver terminates abnormally in that window (the destination has committed, but the source commit has not finished, so the DELETE rolls back), the result is an inconsistency: the rows already exist on the destination while the source has not yet deleted them.
7068 if ( $dst ) {
7069 trace('commit', sub {
7070 $dst->{dbh}->commit;
7071 });
7072 }
7073 trace('commit', sub {
7074 $src->{dbh}->commit;
7075 });
7076 $txn_cnt = 0;
7077 }
7078 }
Conclusion
- If pt-archiver terminates abnormally outside a commit, both the source and the destination roll back their open transactions, and no inconsistency occurs.
- If it terminates at commit time, inconsistency is possible: the destination has already inserted rows that the source has not yet deleted.
- After an abnormal termination (the run did not finish in time, the process was killed manually, etc.), manually verify the primary-key ranges on the source and destination before re-archiving; otherwise the next run can fail with duplicate-key errors.
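The primary-key check in the last point can be sketched as a small helper: compare the maximum archived id on the destination with the minimum remaining id on the source (fetching those two values via the mysql client is left out; the function only encodes the decision):

```shell
# Decide whether re-running the archive is safe, given:
#   $1 = MAX(id) already present on the destination
#   $2 = MIN(id) still present on the source
check_resume_safe() {
  if [ "$1" -lt "$2" ]; then
    echo "safe"      # the id ranges do not overlap
  else
    echo "overlap"   # same ids on both sides -> duplicate-key errors on re-run
  fi
}
check_resume_safe 10000 10001   # the state observed in the first test above
check_resume_safe 10000 9500
```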