We ran an ALTER TABLE today that took down the DB. We failed over to the slave, and in the post-mortem we discovered this in the MySQL error log:
InnoDB: ERROR: the age of the last checkpoint is 90608129,
InnoDB: which exceeds the log group capacity 90593280.
InnoDB: If you are using big BLOB or TEXT rows, you must set the
InnoDB: combined size of log files at least 10 times bigger than the
InnoDB: largest such row.
This error rings true because we were working on a very large table that contains BLOB data types.
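For what it's worth, the checkpoint age can also be read off a running server at any time rather than waiting for the error log; this is the stock 5.6 command, with the relevant fields noted in comments:

SHOW ENGINE INNODB STATUS\G
-- In the LOG section:
--   checkpoint age = (Log sequence number) - (Last checkpoint at)
-- Trouble begins as this approaches the combined size of the redo logs.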
The best answer we found online said:

To fix it, you need to stop MySQL cleanly (very important), delete the existing InnoDB log files (probably ib_logfile* in your MySQL data directory, unless you've moved them), then adjust innodb_log_file_size to suit your needs, and then start MySQL again. This article from the MySQL Performance Blog might be instructive.

and in the comments, which, I guess, is what happened, based on our current (default) innodb_log_file_size of 48 MB:
SHOW GLOBAL VARIABLES LIKE '%innodb_log%';
+-----------------------------+----------+
| Variable_name | Value |
+-----------------------------+----------+
| innodb_log_buffer_size | 8388608 |
| innodb_log_compressed_pages | ON |
| innodb_log_file_size | 50331648 |
| innodb_log_files_in_group | 2 |
| innodb_log_group_home_dir | ./ |
+-----------------------------+----------+
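Worth noting: on 5.6 the fix is simpler than the quote suggests. Provided the previous shutdown was clean, InnoDB discards and rebuilds ib_logfile* at startup whenever the configured size differs from the files on disk, so the manual deletion step is obsolete. The change itself is one line in my.cnf (512M below is a placeholder, not a recommendation):

[mysqld]
# after a clean shutdown, restart and 5.6 recreates the redo logs
# at the new size on its own
innodb_log_file_size = 512M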
So, this leads me to two pointed questions and one open-ended one:
- How do we determine the largest row so we can set our innodb_log_file_size to be bigger than that? (A measurement-based alternative is sketched right after this list.)
- What is the consequence of the action in step 1? I'd read about long recovery times with bigger logs.
- Is there anything else I should worry about regarding migrations, considering that we have a large table (650k rows, 6169.8 GB) with unrestrained, variable-length BLOB fields?
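On the first question, the Percona article referenced in the quote above takes a different tack: size the logs from observed redo volume rather than from the largest row. The sketch below samples Innodb_os_log_written a minute apart and extrapolates; the 60-second window is arbitrary and should cover representative load.

SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';
-- wait ~60 seconds, then sample again:
SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';
-- (second value - first value) * 60 = approx. redo bytes per hour;
-- a common rule of thumb sizes the combined logs to hold about an
-- hour of writes, divided by innodb_log_files_in_group per file.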
We're running MySQL 5.6, and here's our my.cnf:
[mysqld]
#defaults
basedir = /opt/mysql/server-5.6
datadir = /var/lib/mysql
port = 3306
socket = /var/run/mysqld/mysqld.sock
tmpdir = /tmp
bind-address = 0.0.0.0
#logs
log_error = /var/log/mysql/error.log
expire_logs_days = 4
slow_query_log = on
long_query_time = 1
innodb_buffer_pool_size = 11G
#http://stackoverflow.com/a/10866836/182484
collation-server = utf8_bin
init-connect = 'SET NAMES utf8'
init_connect = 'SET collation_connection = utf8_bin'
character-set-server = utf8
max_allowed_packet = 64M
skip-character-set-client-handshake
#cache
query_cache_size = 268435456
query_cache_type = 1
query_cache_limit = 1048576
As a follow-up to the suggestions listed below, I began investigating the file size of the table in question. I ran a script that wrote the combined byte size of the three BLOB fields to a table called pen_sizes. Here's the result of getting the largest byte sizes:
select pen_size as bytes,
pen_size / 1024 / 1024 as mb,
pen_id from pen_sizes
group by pen_id
order by bytes desc
limit 40
+---------+------------+--------+
| bytes | mb | pen_id |
+---------+------------+--------+
| 3542620 | 3.37850571 | 84816 |
| 3379107 | 3.22256756 | 74796 |
| 3019237 | 2.87936878 | 569726 |
| 3019237 | 2.87936878 | 576506 |
| 3019237 | 2.87936878 | 576507 |
| 2703177 | 2.57795048 | 346965 |
| 2703177 | 2.57795048 | 346964 |
| 2703177 | 2.57795048 | 93706 |
| 2064807 | 1.96915340 | 154627 |
| 2048592 | 1.95368958 | 237514 |
| 2000695 | 1.90801144 | 46798 |
| 1843034 | 1.75765419 | 231988 |
| 1843024 | 1.75764465 | 230423 |
| 1820514 | 1.73617744 | 76745 |
| 1795494 | 1.71231651 | 650208 |
| 1785353 | 1.70264530 | 74912 |
| 1754059 | 1.67280102 | 444932 |
| 1752609 | 1.67141819 | 76607 |
| 1711492 | 1.63220596 | 224574 |
| 1632405 | 1.55678272 | 76188 |
| 1500157 | 1.43066120 | 77256 |
| 1494572 | 1.42533493 | 137184 |
| 1478692 | 1.41019058 | 238547 |
| 1456973 | 1.38947773 | 181379 |
| 1433240 | 1.36684418 | 77631 |
| 1421452 | 1.35560226 | 102930 |
| 1383872 | 1.31976318 | 77627 |
| 1359317 | 1.29634571 | 454109 |
| 1355701 | 1.29289722 | 631811 |
| 1343621 | 1.28137684 | 75256 |
| 1343621 | 1.28137684 | 75257 |
| 1334071 | 1.27226925 | 77626 |
| 1327063 | 1.26558590 | 129731 |
| 1320627 | 1.25944805 | 636914 |
| 1231918 | 1.17484856 | 117269 |
| 1223975 | 1.16727352 | 75103 |
| 1220233 | 1.16370487 | 326462 |
| 1220233 | 1.16370487 | 326463 |
| 1203432 | 1.14768219 | 183967 |
| 1200373 | 1.14476490 | 420360 |
+---------+------------+--------+
This makes me believe that the average row size is closer to 1 MB than the 10 MB suggested. Maybe the table size I listed earlier includes the indexes, too?
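(Aside: a pen_sizes-style helper table can be built in a single statement with LENGTH(), which returns the byte length of a BLOB; the column names below are assumptions, since the actual schema isn't shown.)

CREATE TABLE pen_sizes AS
SELECT id AS pen_id,
       COALESCE(LENGTH(blob_a), 0)
     + COALESCE(LENGTH(blob_b), 0)
     + COALESCE(LENGTH(blob_c), 0) AS pen_size
FROM pens;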
To check, I ran:
SELECT table_name AS "Tables",
round(((data_length + index_length) / 1024 / 1024), 2) "Size in MB"
FROM information_schema.TABLES
WHERE table_schema = 'codepen'
+-------------------+------------+
| Tables | Size in MB |
+-------------------+------------+
...snip
| pens | 6287.89 |
...snip
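The data_length/index_length split answers the index question directly; the same view can be queried per table (a straightforward variation on the query above):

SELECT table_name,
       ROUND(data_length  / 1024 / 1024, 2) AS data_mb,
       ROUND(index_length / 1024 / 1024, 2) AS index_mb
FROM information_schema.TABLES
WHERE table_schema = 'codepen' AND table_name = 'pens';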
0. Preliminary information
Your settings:
innodb_log_file_size = 50331648
innodb_log_files_in_group = 2
Therefore your "log group capacity" = 2 x 50331648 = 96 MB
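The same figure can be confirmed on a live server (both are ordinary global system variables in 5.6):

SELECT @@innodb_log_file_size * @@innodb_log_files_in_group
       AS log_group_capacity_bytes;
-- 50331648 * 2 = 100663296 bytes = 96 MB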
1. How to determine the largest row
There is no direct method. But one can easily calculate the size of any given row from the data-type storage-requirement tables in the MySQL documentation (compression should not matter to us here if, as I assume, rows are not compressed in the log files).
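For a table dominated by BLOBs, though, the pen_sizes work above already gives a serviceable answer: the fixed-width columns contribute comparatively little, so the largest combined BLOB length is a good proxy for the largest row.

SELECT MAX(pen_size) AS max_blob_bytes FROM pen_sizes;
-- the error message asks for the combined log size to be
-- at least 10x this value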
2. Impact of innodb_log_file_size
Larger log files mean fewer checkpoint flushes to the buffer pool and therefore less disk I/O; the flip side is that crash recovery has more redo to scan and apply, so it can take longer. That trade-off is the "long recovery times with bigger logs" you read about.
3. Anything else to worry about
6169.8 GB / 650k rows = about 10 MB per row on average. This is a serious problem per se if you intend to use your database in a transactional, multi-user situation. Consider storing your BLOBs as files outside of the database. Or, at least, store them in a separate MyISAM (non-transactional) table.
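A minimal sketch of that last option, assuming the BLOBs move to a sibling table keyed by the same id (table and column names are illustrative only):

CREATE TABLE pen_blobs (
  pen_id INT UNSIGNED NOT NULL PRIMARY KEY,
  body   LONGBLOB NOT NULL
) ENGINE=MyISAM;

-- reads then join pens to pen_blobs on pen_id; writes to pen_blobs
-- bypass the redo log entirely, since MyISAM is non-transactional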