Incident:
Host A, which runs the database, raised an alert that the application could not connect to the database, and we could not ssh into the host. The first reaction was to telnet to the remote management console, where we saw that the database resources and hardware resources were being failed over to host B of the HA pair (the standby, whose hardware is normally weaker than host A's and cannot sustain the full application load). By then HA had already switched the database over to the standby, which was barely keeping up with the application.
Analysis:
1. The OS log and the Oracle alert log on the failed host (host A) contain a large number of errors like the following:
OS:
Mar 17 14:20:00 mktdb1 genunix: [ID 470503 kern.warning] WARNING: Sorry, no swap space to grow stack for pid 21868 (oracle)
Mar 17 14:20:13 mktdb1 Cluster.PMF.pmfd: [ID 972610 daemon.error] fork: Not enough space
Mar 17 14:20:13 mktdb1 Cluster.PMF.pmfd: [ID 837760 daemon.error] monitored processes forked failed (errno=12)
Mar 17 14:20:13 mktdb1 last message repeated 3 times
Mar 17 14:20:13 mktdb1 Cluster.PMF.pmfd: [ID 972610 daemon.error] fork: Not enough space
Mar 17 14:20:13 mktdb1 Cluster.PMF.pmfd: [ID 837760 daemon.error] monitored processes forked failed (errno=12)
Mar 17 14:20:13 mktdb1 last message repeated 3 times
Mar 17 14:20:13 mktdb1 Cluster.PMF.pmfd: [ID 972610 daemon.error] fork: Not enough space
Mar 17 14:20:13 mktdb1 Cluster.PMF.pmfd: [ID 837760 daemon.error] monitored processes forked failed (errno=12)
Mar 17 14:20:13 mktdb1 last message repeated 3 times
Mar 17 14:20:13 mktdb1 Cluster.PMF.pmfd: [ID 972610 daemon.error] fork: Not enough space
DB alert_log:
Errors in file /oracle/admin/mktdb/bdump/mktdb_psp0_15535.trc:
ORA-27300: OS system dependent operation:fork failed with status: 12
ORA-27301: OS failure message: Not enough space
ORA-27302: failure occurred at: skgpspawn3
ORA-27300: OS system dependent operation:fork failed with status: 12
ORA-27301: OS failure message: Not enough space
ORA-27302: failure occurred at: skgpspawn3
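For reference, the excerpts above were pulled from the Solaris system log and from the Oracle alert log in the bdump directory shown in the trace file path. A minimal sketch of tailing them, assuming the alert log follows the standard alert_<SID>.log naming:
root@mktdb1 # tail -100 /var/adm/messages
root@mktdb1 # tail -100 /oracle/admin/mktdb/bdump/alert_mktdb.log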
At this point we concluded that the system hang, and the resulting automatic HA failover, were caused by swap space exhaustion. The remaining question was what had exhausted swap, i.e., which process was consuming so much of it. For background on swap, see:
http://blog.csdn.net/yiyuf/article/details/21458957
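Before hunting for the culprit process, the swap exhaustion itself can be confirmed from the OS side with swap -s (overall allocated/reserved/available summary), swap -l (free blocks per swap device) and vmstat (the swap/free columns and the sr scan rate). A minimal sketch, run as root on host A:
root@mktdb1 # swap -s
root@mktdb1 # swap -l
root@mktdb1 # vmstat 5 3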
2. Next, look for the process consuming a large amount of swap:
root@mktdb1 # prstat
Please wait...
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
22879 oracle 11G 273M cpu8 0 0 0:13:47 3.6% extract/6
18091 oracle 48G 48G sleep 59 0 0:10:05 1.6% oracle/1
26397 oracle 48G 48G cpu529 59 0 0:00:05 1.0% oracle/1
25929 oracle 48G 48G sleep 51 0 0:00:27 0.7% oracle/11
21656 oracle 48G 48G sleep 40 0 0:00:31 0.7% oracle/11
19298 oracle 48G 48G cpu528 59 0 0:01:19 0.6% oracle/11
21610 oracle 48G 48G sleep 59 0 0:00:29 0.6% oracle/11
23695 oracle 48G 48G sleep 22 0 0:00:27 0.6% oracle/11
19583 oracle 48G 48G cpu512 11 0 0:01:29 0.6% oracle/11
18770 oracle 48G 48G sleep 31 0 0:01:56 0.6% oracle/11
21688 oracle 48G 48G sleep 22 0 0:00:42 0.6% oracle/11
For an explanation of the prstat command and its columns, see:
http://blog.csdn.net/yiyuf/article/details/21470073
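Rather than eyeballing the default view, prstat can also sort processes by memory directly (-s chooses the sort key, -n limits the number of lines shown); a minimal sketch:
root@mktdb1 # prstat -s size -n 10
root@mktdb1 # prstat -s rss -n 10
Sorting by size surfaces processes whose virtual address space, and hence swap reservation, is large even when little of it is resident.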
From the prstat output above, process 22879 is indeed the one consuming swap: its virtual size (SIZE 11G) dwarfs its resident set (RSS 273M), so the bulk of its anonymous memory sits in (or is reserved against) swap. Run ps -ef | grep 22879 to see what the process actually is:
root@mktdb1 # ps -ef | grep 22879
oracle 22879 19596 4 16:02:45 ? 22:53 /oracle/oradata/u2/gg/extract PARAMFILE /oracle/oradata/u2/gg/dirprm/extcf.prm
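To see how much of that 11G virtual size is anonymous memory (and therefore backed by swap) rather than mapped files, pmap can break the address space down; a minimal sketch using the PID from the output above:
root@mktdb1 # pmap -x 22879 | tail -3
The final line of pmap -x totals the Kbytes, RSS and Anon columns for the whole process.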
This makes the root cause clear. Checking the OGG (Oracle GoldenGate) status showed the extract had been stuck on one very large transaction, and parsing and reading the trail files (or archived logs) for that transaction consumed a huge amount of memory.
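The stuck transaction can also be inspected from GGSCI; a minimal sketch, assuming the extract group is named EXTCF (inferred from the parameter file dirprm/extcf.prm) and that ggsci lives in the GoldenGate home /oracle/oradata/u2/gg, run as the oracle user:
cd /oracle/oradata/u2/gg
./ggsci
GGSCI> info extract EXTCF
GGSCI> send extract EXTCF, showtrans
info extract reports the process status and lag; send ... showtrans lists the long-running open transactions the extract is still holding.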
Since the investigation showed this was not a hardware problem on host A, the database was immediately switched back to host A and service was restored.