1. Purpose of TFA:

TFA is a tool, introduced with version 11.2, for collecting diagnostic logs in Grid Infrastructure/RAC environments. With very simple commands it helps users gather the RAC logs needed for further diagnosis. TFA is an Oracle cluster log collector similar to diagcollection, but its centralized, automated diagnostic collection capabilities are considerably stronger than diagcollection's. TFA has the following features (a brief command sketch follows the list):

1.    TFA can package and bundle the logs from all nodes with a single simple command run on one machine;
2.    TFA can "trim" the log files during collection to reduce the amount of data gathered;
3.    TFA can collect diagnostic data restricted to a given time window;
4.    TFA can gather the logs from all nodes and place the packaged result on a single node for easy transfer and review;
5.    TFA can limit collection to specific components of the cluster, e.g. ASM, RDBMS, Clusterware;
6.    TFA can be configured to scan alert logs in real time (DB alert logs, ASM alert logs, Clusterware alert logs, etc.);
7.    TFA can automatically collect diagnostic logs based on the results of that real-time scanning;
8.    TFA can scan the alert logs for specified errors;
9.    TFA can collect diagnostic logs based on the results of those error scans.
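As a quick illustration of points 1 through 4, a single diagcollect command run as root on one node is enough to have every node trim and package its logs and ship them back to the invoking node. A minimal sketch, using options that are described in detail later in this note:
---------------------------------
# Run as root on any one cluster node; TFA contacts the other nodes,
# trims the logs, and copies each node's zip file back to this node.
$TFA_HOME/bin/tfactl diagcollect -all -since 1d
---------------------------------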

 

2.    TFA Installation Requirements:

Platforms:

TFA currently supports the following platforms:
Intel Linux(Enterprise Linux, RedHat Linux, SUSE Linux)
Linux Itanium
Oracle Solaris SPARC
Oracle Solaris x86-64
AIX (requires bash shell version 3.2 or higher installed)
HPUX Itanium
HPUX PA-RISC

3. Supported Database Versions:

TFA is designed to run independently of the RDBMS and CRS, so it is intended to work with all versions and is not restricted by the RDBMS or CRS version.

Download TFA Collector:

This version of TFA and the accompanying TFA user guide can be downloaded from the links below.

TFA Collector:

https://mosemp.us.oracle.com/epmos/main/downloadattachmentprocessor?attachid=1513912.2:TFA_NOJRE&clickstream=no

TFA User Guide:

https://mosemp.us.oracle.com/epmos/main/downloadattachmentprocessor?attachid=1513912.2:TFA_USER_GUIDE&clickstream=no

4.    TFA Quick Installation Guide:

Installation:

Note: before installing, make sure JRE 1.6 or a later JRE is already installed in your environment; if not, install JRE 1.6 first.
1.    Log in to the system as the root user.
2.    Prepare an installation location for TFA on every node; note that this location must not be on a cluster file system.
3.    Run installTFALite.sh on node 1 to start the installation process:
---------------------------------
[root@rac1 tmp]# ./installTFALite.sh
Starting TFA installation
---------------------------------

Note: in the latest TFA releases installTFALite.sh has been renamed installTFALite; you can run installTFALite directly during installation and specify the TFA base directory and JAVA_HOME.
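For example, assuming the -tfabase and -javahome switches described in MOS note 1513912.2 (the exact switch names may differ between TFA releases, so verify them against the note for your version), the newer installer could be invoked as:
---------------------------------
[root@rac1 tmp]# ./installTFALite -tfabase /opt/oracle/tfa -javahome /usr/java/jre1.7.0_11
---------------------------------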

4.    When prompted for the installation location, enter the location you chose in step 2:
---------------------------------
Enter a location for installing TFA [/opt/oracle/tfa]:/opt/oracle/tfa
Checking for available space in /opt/oracle/tfa/
---------------------------------
5.    Enter the JAVA_HOME of the JRE 1.6 (or later) installed earlier; note that this location must be the same on all nodes:
---------------------------------
Enter a Java Home that contains Java 1.6 or later : /usr/java/jre1.7.0_11
Running Auto Setup for TFA as user root...
---------------------------------
6.    Follow the prompts below to complete the installation:
------------------------------------------------------------------
Would you like to do a [L]ocal only or [C]lusterwide installation ? [L|l|C|c] [C] :
The following installation requires temporary use of SSH.
If SSH is not configured already then we will remove SSH
when complete.
  Do you wish to Continue ? [Y|y|N|n] [N] Y
Installing TFA at /opt/oracle/tfa in all hosts
Discovering Nodes and Oracle resources
Checking whether CRS is up and running

Getting list of nodes in cluster

Checking ssh user equivalency settings on all nodes in cluster

Node rac2 is configured for ssh user equivalency for root user

Searching for running databases . . . . .

.
List of running databases registered in OCR
1. ORCL
. .

Checking Status of Oracle Software Stack - Clusterware, ASM, RDBMS

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

TFA Will be Installed on the Following Nodes
++++++++++++++++++++++++++++++++++++++++++++

Install Nodes
=============
rac1
rac2
Do you wish to make changes to the Node List ? [Y/y/N/n] [N]

TFA will scan the following Directories
++++++++++++++++++++++++++++++++++++++++++++

.----------------------------------------------------------------.
|                             rac2                               |
+-----------------------------------------------------+----------+
| Trace Directory                                     | Resource |
+-----------------------------------------------------+----------+
| /u01/app/11.2.0/grid/cfgtoollogs                    | INSTALL  |
| /u01/app/11.2.0/grid/crs/log                        | CRS      |
| /u01/app/11.2.0/grid/css/log                        | CRS      |
| /u01/app/11.2.0/grid/cv/log                         | CRS      |
| /u01/app/11.2.0/grid/evm/admin/log                  | CRS      |
| /u01/app/11.2.0/grid/evm/admin/logger               | CRS      |
| /u01/app/11.2.0/grid/evm/log                        | CRS      |
| /u01/app/11.2.0/grid/install                        | INSTALL  |
| /u01/app/11.2.0/grid/log/                           | CRS      |
| /u01/app/11.2.0/grid/network/log                    | CRS      |
| /u01/app/11.2.0/grid/oc4j/j2ee/home/log             | CRSOC4J  |
| /u01/app/11.2.0/grid/opmn/logs                      | CRS      |
| /u01/app/11.2.0/grid/racg/log                       | CRS      |
| /u01/app/11.2.0/grid/rdbms/log                      | ASM      |
| /u01/app/11.2.0/grid/scheduler/log                  | CRS      |
| /u01/app/11.2.0/grid/srvm/log                       | CRS      |
| /u01/app/oraInventory/ContentsXML                   | INSTALL  |
| /u01/app/oraInventory/logs                          | INSTALL  |
| /u01/app/oracle/cfgtoollogs                         | CFGTOOLS |
| /u01/app/oracle/diag/asm/+asm/+ASM2/trace           | ASM      |
| /u01/app/oracle/diag/rdbms/orcl/ORCL2/trace         | RDBMS    |
| /u01/app/oracle/diag/tnslsnr                        | TNS      |
| /u01/app/oracle/diag/tnslsnr/rac2/listener/trace    | TNS      |
| /u01/app/oracle/product/11.2.0/dbhome_1/cfgtoollogs | INSTALL  |
| /u01/app/oracle/product/11.2.0/dbhome_1/install     | INSTALL  |
'-----------------------------------------------------+----------'

.----------------------------------------------------------------.
|                             rac1                               |
+-----------------------------------------------------+----------+
| Trace Directory                                     | Resource |
+-----------------------------------------------------+----------+
| /u01/app/11.2.0/grid/cfgtoollogs                    | INSTALL  |
| /u01/app/11.2.0/grid/crs/log                        | CRS      |
| /u01/app/11.2.0/grid/css/log                        | CRS      |
| /u01/app/11.2.0/grid/cv/log                         | CRS      |
| /u01/app/11.2.0/grid/evm/admin/log                  | CRS      |
| /u01/app/11.2.0/grid/evm/admin/logger               | CRS      |
| /u01/app/11.2.0/grid/evm/log                        | CRS      |
| /u01/app/11.2.0/grid/install                        | INSTALL  |
| /u01/app/11.2.0/grid/log/                           | CRS      |
| /u01/app/11.2.0/grid/network/log                    | CRS      |
| /u01/app/11.2.0/grid/oc4j/j2ee/home/log             | CRSOC4J  |
| /u01/app/11.2.0/grid/opmn/logs                      | CRS      |
| /u01/app/11.2.0/grid/racg/log                       | CRS      |
| /u01/app/11.2.0/grid/rdbms/log                      | ASM      |
| /u01/app/11.2.0/grid/scheduler/log                  | CRS      |
| /u01/app/11.2.0/grid/srvm/log                       | CRS      |
| /u01/app/oraInventory/ContentsXML                   | INSTALL  |
| /u01/app/oraInventory/logs                          | INSTALL  |
| /u01/app/oracle/cfgtoollogs                         | CFGTOOLS |
| /u01/app/oracle/diag/asm/+asm/+ASM1/trace           | ASM      |
| /u01/app/oracle/diag/rdbms/orcl/ORCL1/trace         | RDBMS    |
| /u01/app/oracle/diag/tnslsnr                        | TNS      |
| /u01/app/oracle/diag/tnslsnr/rac1/listener/trace    | TNS      |
| /u01/app/oracle/product/11.2.0/dbhome_1/cfgtoollogs | INSTALL  |
| /u01/app/oracle/product/11.2.0/dbhome_1/install     | INSTALL  |
'-----------------------------------------------------+----------'

Do you wish to change the Trace Directory List ? [Y/y/N/n] [N]
Installing TFA on rac1
Installing TFA on rac2
TFA is running
Successfully added host: rac2
.------------------------------.
| Host | Status of TFA | PID   |
+------+---------------+-------+
| rac1 | RUNNING       | 11685 |
| rac2 | RUNNING       |  5081 |
'------+---------------+-------'
Setting TFA cookie in all nodes
Successfully set cookie=77411b8fff446d2954d5c080225052ac
TFA Cookie: 77411b8fff446d2954d5c080225052ac
Summary of TFA Installation
.-----------------------------------------------------------.
|                           rac1                            |
+---------------------+-------------------------------------+
| Parameter           | Value                               |
+---------------------+-------------------------------------+
| Install location    | /opt/oracle/tfa/tfa_home            |
| Repository location | /opt/oracle/tfa/tfa_home/repository |
| Repository usage    | 0 MB out of 10240 MB                |
'---------------------+-------------------------------------'

.-----------------------------------------------------------.
|                           rac2                            |
+---------------------+-------------------------------------+
| Parameter           | Value                               |
+---------------------+-------------------------------------+
| Install location    | /opt/oracle/tfa/tfa_home            |
| Repository location | /opt/oracle/tfa/tfa_home/repository |
| Repository usage    | 0 MB out of 10240 MB                |
'---------------------+-------------------------------------'

TFA is successfully installed..
------------------------------------------------------------------

5. Starting and Stopping TFA:

On Linux and Unix platforms TFA runs under init, so it is started automatically when the server boots.
By default the script is named init.tfa.
Its location depends on the platform, for example:
Linux and Solaris: /etc/init.d/init.tfa
AIX: /etc/init.tfa
HP-UX: /sbin/init.d/init.tfa
The following commands use the Linux platform as an example:
Start:
# /etc/init.d/init.tfa start
Stop:
# /etc/init.d/init.tfa stop
Restart:
# /etc/init.d/init.tfa restart
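After a start or restart you can confirm that the TFAMain daemon is up, for example (the tfactl print status command is also shown later in this note):
Check the daemon process:
# ps -ef | grep TFAMain
Check the status of TFA on all nodes:
# $TFA_HOME/bin/tfactl print status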

6. Collecting Diagnostic Information Manually:

We control what diagnostic information TFA collects by invoking the tfactl command with the diagcollect verb. tfactl offers several collection modes, for example collecting only the logs from a given time window to reduce the amount of data gathered.
The available options can be displayed as follows:
--------------------------------------------------------------
#$TFA_HOME/bin/tfactl diagcollect -h
Usage: /u01/app/tfa/tfa_home/bin/tfactl diagcollect [-all | -database | -asm | -crs | -os | -install | -node | -tag ]
        [-since <n><h|d>| -from <time> -to <time> | -for <time>]
        [-copy | -nocopy] [-symlink][-notrim]

Options:
  -all       Collect logs of all types
  -crs        Collect only CRS logs
  -asm        Collect only ASM logs
  -database  Collect only database logs from databases specified
  -os         Collect only OS files
  -install    Collect only INSTALL files
  -node       Specify comma separated list of host names for collection.
  -copy       Copy back the zip files to master node from all nodes
  -nocopy    Does not copy back the zip files to master node from all nodes
  -notrim     Does not trim the files collected
  -symlink    This option should be used with -for.
              Creates symlinks for files which are updated during the input time.
  -since <n><h|d>   Files from past 'n' [d]ays or 'n' [h]ours
  -from <time>        From time
  -to <time>         To time
  -for <time>        Specify a incident time.
  -z <file>           Output file name
  -tag <description>  Enter a tag for the zip(s) created

--------------------------------------------------------------

In the example below we use -all to tell TFA to collect all types of diagnostic logs, from midnight on January 21 to 13:00 on January 21. The command starts the specified diagnostic collection in the background on all cluster nodes and places the compressed zip files under each node's TFA_HOME:

--------------------------------------------------------------

# $TFA_HOME/bin/tfactl diagcollect -all -from "Jan/21/2013 00:00:00" -to "Jan/21/2013 13:00:00"

time: Jan/21/2013
Valid pattern
Month : 1
time: Jan/21/2013 13:00:00
Valid pattern
Month : 1
rac1:startdiagcollection: -database -asm -crs -os -install -from Jan/21/2013 -to Jan/21/2013 13:00:00 -z Mon_Jan_21_11_52_20_EST_2013 -node all -copy
Logs are collected to:
/opt/oracle/tfa/tfa_home/repository/rac1.Mon_Jan_21_11_52_20_EST_2013.zip
/opt/oracle/tfa/tfa_home/repository/rac2.Mon_Jan_21_11_52_20_EST_2013.zip
--------------------------------------------------------------

7. Diagnosing Problems or Uploading Diagnostic Information to Oracle Support Engineers:

Whichever collection method is used, the log files are packaged and placed in the $TFA_HOME/repository directory so that you can upload them to an Oracle Support engineer.
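Before uploading, you can check which zip files the collection produced by listing the repository on the node where the command was run, for example:
# ls -lh $TFA_HOME/repository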

8. Recommended Reference Documents:

TFA Collector- The Preferred Tool for Automatic or ADHOC Diagnostic Gathering Across All Cluster Nodes [ID 1513912.2]

In this post we take a closer look at TFA's capabilities and how to take advantage of its convenience.

1. Trace File Analyzer: a Convenient Log Collection and Analysis Tool

When customers work with Support engineers on GI (RAC) problems, one of the biggest difficulties is collecting, in a timely manner, the logs and diagnostic data related to the problem from every node, especially since the data to be collected spans multiple nodes. In addition, the trace and log files in RAC are recycled, so if logs are not collected promptly after a problem occurs they may be overwritten. For single-instance environments ADR (Automatic Diagnostic Repository) largely avoids this problem by packaging the files generated for an incident after it occurs, but ADR does not collect RAC logs. For cluster log collection we used to rely on the diagcollection.pl script, but its drawback is that it does not filter the content of the logs: it collects every RAC log from beginning to end. If you have ever used diagcollection.pl you will know that the resulting collection is very large, and the script must be run separately as root on each node, which is inconvenient.
TFA largely overcomes these problems. It runs a Java process on every node to decide when a collection needs to be started, to compress the logs, and to determine which logs are actually needed to solve the problem. TFA runs outside of GI and the RDBMS, so it is essentially independent of the version and platform in use.
Therefore, when working on Oracle GI and RAC problems, TFA lets you collect all required logs with a single command while filtering out the logs that are not needed.
Some customers worry that using TFA will affect their systems. Once you understand the capabilities described above, you can see that it is only a log collection tool: it makes no changes to the system, and the load it puts on the OS is lightweight.

2. The TFA Process and Convenient Ways to Collect Logs:
TFA consists of a TFA daemon process and a command-line interface (CLI), and it can be installed and deployed in any environment. The TFA daemon is a Java process, as shown below:

On node 1:

[grid@host1 ~]$ ps -ef |grep java
root 3335 1 2 Feb26 ? 00:55:28 /u01/app/11.2.0.4/grid/jdk/jre/bin/java -Xms128m -Xmx512m -classpath /u01/app/11.2.0.4/grid/tfa/nascds10/tfa_home/jlib/RATFA.jar:/u01/app/11.2.0.4/grid/tfa/nascds10/tfa_home/jlib/je-5.0.84.jar:/u01/app/11.2.0.4/grid/tfa/nascds10/tfa_home/jlib/ojdbc5.jar:/u01/app/11.2.0.4/grid/tfa/nascds10/tfa_home/jlib/commons-io-2.1.jar oracle.rat.tfa.TFAMain /u01/app/11.2.0.4/grid/tfa/nascds10/tfa_home

On node 2:

[grid@host2 ~]$ ps -ef |grep TFA
root 3295 1 0 Feb25 ? 00:19:26 /u01/app/11.2.0.4/grid/jdk/jre/bin/java -Xms64m -Xmx256m -classpath /u01/app/11.2.0.4/grid/tfa/nascds11/tfa_home/jar/RATFA.jar:/u01/app/11.2.0.4/grid/tfa/nascds11/tfa_home/jar/je-4.0.103.jar:/u01/app/11.2.0.4/grid/tfa/nascds11/tfa_home/jar/ojdbc6.jar oracle.rat.tfa.TFAMain /u01/app/11.2.0.4/grid/tfa/nascds11/tfa_home

[grid@nascds11 ~]$

As shown above, TFAMain is started by the root user. It is a multi-threaded process that automatically drives both node-to-node communication and the CLI interface; the TFAMain processes on the different nodes listen to each other over secure sockets and exchange tasks.
Like RAC's ohasd, its startup is configured in /etc/init.d, e.g.:
/etc/init.d/init.tfa
For installing, upgrading, uninstalling and otherwise managing TFA in the various environments, refer to the following document:
TFA Collector- The Preferred Tool for Automatic or ADHOC Diagnostic Gathering Across All Cluster Nodes [ID 1513912.2]
as well as my earlier blog post introducing TFA, the Oracle GI log collection tool.

Here we look at more convenient ways of using TFA to filter and collect logs, along with its newly added features.
2.1 First, let's look at the nodes managed by TFA and their current status:
[root@host1 tmp]# tfactl print hosts
Host Name : host1
Host Name : host2
[root@host1 tmp]# tfactl print status
.---------------------------------------------------------------------------------------------.
| Host  | Status of TFA | PID   | Port | Version    | Build ID             | Inventory Status |
+-------+---------------+-------+------+------------+----------------------+------------------+
| host1 | RUNNING       | 18686 | 5000 | 12.1.2.6.3 | 12126320160104141621 | COMPLETE         |
| host2 | RUNNING       | 18030 | 5000 | 12.1.2.6.3 | 12126320160104141621 | COMPLETE         |
'-------+---------------+-------+------+------------+----------------------+------------------'

2.2 If we have installed other tools that collect their own logs and we want TFA to manage those logs as well, we can simply add the corresponding directories. To check the syntax, use the following command:
[root@host1 tmp]# tfactl directory -h

For example:
/u01/app/11.2.0/grid/bin/tfactl directory add /nmon/log/
[root@host1 oswbb]# mkdir -p /nmon/log
[root@host1 oswbb]# /u01/app/11.2.0/grid/bin/tfactl directory add /nmon/log
Unable to determine component for directory: /nmon/log
Please choose a component for this Directory [RDBMS|CRS|ASM|INSTALL|OS|CFGTOOLS|TNS|DBWLM|ACFS|ALL] : OS
Do you wish to assign more components to this Directory ? [Y/y/N/n] [N] n
Running Inventory ...
Successfully added directory to TFA
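To confirm that the new directory is now part of TFA's inventory, a quick check could look like the following (a sketch assuming the tfactl print directories sub-command available in this TFA release):
[root@host1 oswbb]# /u01/app/11.2.0/grid/bin/tfactl print directories | grep nmon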

2.3 Customers will often find that Support engineers ask for OS Watcher data. During installation, the latest TFA versions also bundle OSWatcher (OSW) and start it with default settings, for example:
[root@host1 tmp]# ps -ef |grep osw
grid 19047 1 0 12:20 ? 00:00:00 /bin/sh ./OSWatcher.sh 30 48 NONE /u01/app/grid/tfa/repository/suptools/host1/oswbb/grid/archive
grid 20169 19047 0 12:20 ? 00:00:00 /bin/sh ./OSWatcherFM.sh 48 /u01/app/grid/tfa/repository/suptools/host1/oswbb/grid/archive
However, for this feature to take full effect you still need to copy Exampleprivate.net to private.net and edit it with your private interconnect and platform information; only then will private-network data be collected.
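A minimal sketch of enabling private-network collection, assuming the oswbb scripts live under the TFA suptools repository shown above (adjust the path and the private hostnames to your own environment):
[grid@host1 ~]$ cd /u01/app/grid/tfa/repository/suptools/host1/oswbb    # assumed location of the oswbb scripts
[grid@host1 oswbb]$ cp Exampleprivate.net private.net
[grid@host1 oswbb]$ vi private.net    # keep the traceroute entries for your platform and replace the sample hosts with your private interconnect hostnames, e.g. host1-priv, host2-priv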

2.4 Collecting logs according to our own customized rules
The log-collection syntax can be displayed with the following command:
[root@host1 oswbb]# tfactl diagcollect -h
Here we explain several commonly used approaches:
2.4.1 Collect all TFA-managed logs from the past 2 hours:
#tfactl diagcollect -all -since 2h
2.4.2 Collect all TFA-managed logs from the past day, compressed locally into zip files with the suffix foo:
#tfactl diagcollect -since 1d -z foo
[root@host1 oswbb]# tfactl diagcollect -since 1d -z foo
Collecting data for all nodes
Collection Id : 20160228124457host1
Repository Location in host1 : /u01/app/grid/tfa/repository
Collection monitor will wait up to 30 seconds for collections to start
2016/02/28 12:45:01 CST : Collection Name : tfa_foo.zip
2016/02/28 12:45:01 CST : Sending diagcollect request to host : host2
2016/02/28 12:45:01 CST : Scanning of files for Collection in progress...
2016/02/28 12:45:01 CST : Collecting extra files...
2016/02/28 12:45:06 CST : Getting list of files satisfying time range [02/27/2016 12:45:01 CST, 02/28/2016 12:45:06 CST]
2016/02/28 12:45:06 CST : Starting Thread to identify stored files to collect
2016/02/28 12:45:06 CST : Getting List of Files to Collect
2016/02/28 12:45:07 CST : Trimming file : host1/u01/app/11.2.0/grid/log/host1/client/olsnodes.log with original file size : 2.7MB
2016/02/28 12:45:07 CST : Finished Getting List of Files to Collect
2016/02/28 12:45:07 CST : Collecting ADR incident files...
2016/02/28 12:45:07 CST : Waiting for collection of extra files

Logs are being collected to: /u01/app/grid/tfa/repository/collection_Sun_Feb_28_12_44_57_CST_2016_node_all
/u01/app/grid/tfa/repository/collection_Sun_Feb_28_12_44_57_CST_2016_node_all/host1.tfa_foo.zip
/u01/app/grid/tfa/repository/collection_Sun_Feb_28_12_44_57_CST_2016_node_all/host2.tfa_foo.zip
2.4.3 Collect the database-related logs from all nodes for the past hour, compressed locally with the suffix test:
tfactl diagcollect -database orcl -since 1h -z test
[root@host1 oswbb]# tfactl diagcollect -database orcl -since 1h -z test
Collecting data for all nodes
Collection Id : 20160228124936host1
Repository Location in host1 : /u01/app/grid/tfa/repository
Collection monitor will wait up to 30 seconds for collections to start
2016/02/28 12:49:39 CST : Collection Name : tfa_test.zip
2016/02/28 12:49:39 CST : Sending diagcollect request to host : host2
2016/02/28 12:49:40 CST : Scanning of files for Collection in progress...
……
2016/02/28 12:50:01 CST : Total time taken : 22s
2016/02/28 12:50:01 CST : Remote Collection in Progress...
2016/02/28 12:50:20 CST : host2:Completed Collection
2016/02/28 12:50:20 CST : Completed collection of zip files.
Logs are being collected to: /u01/app/grid/tfa/repository/collection_Sun_Feb_28_12_49_36_CST_2016_node_all
/u01/app/grid/tfa/repository/collection_Sun_Feb_28_12_49_36_CST_2016_node_all/host1.tfa_test.zip
/u01/app/grid/tfa/repository/collection_Sun_Feb_28_12_49_36_CST_2016_node_all/host2.tfa_test.zip
2.4.4 Collect the logs from node host1 for the past hour:
tfactl diagcollect -node host1 -since 1h
[root@host1 oswbb]# tfactl diagcollect -node host1 -since 1h
Collecting data for host1 node(s)
Collection Id : 20160228125644host1
Repository Location in host1 : /u01/app/grid/tfa/repository
Collection monitor will wait up to 30 seconds for collections to start
....
2016/02/28 12:56:48 CST : Collection Name : tfa_Sun_Feb_28_12_56_44_CST_2016.zip
2016/02/28 12:56:48 CST : Scanning of files for Collection in progress...
2016/02/28 12:56:48 CST : Collecting extra files...
.....
Logs are being collected to: /u01/app/grid/tfa/repository/collection_Sun_Feb_28_12_56_44_CST_2016_node_host1
/u01/app/grid/tfa/repository/collection_Sun_Feb_28_12_56_44_CST_2016_node_host1/host1.tfa_Sun_Feb_28_12_56_44_CST_2016.zip

2.4.5 Collect the logs generated on all nodes on "Feb/28/2016":
tfactl diagcollect -for "Feb/28/2016"
[root@host1 oswbb]# tfactl diagcollect -for "Feb/28/2016"
Collecting data for all nodes
Scanning files for Feb/28/2016 00:00:00
Collection Id : 20160228125814host1
Repository Location in host1 : /u01/app/grid/tfa/repository
Collection monitor will wait up to 30 seconds for collections to start
2016/02/28 12:58:20 CST : Collection Name : tfa_Sun_Feb_28_12_58_14_CST_2016.zip
2016/02/28 12:58:20 CST : Sending diagcollect request to host : host2
2016/02/28 12:58:20 CST : Scanning of files for Collection in progress...
2016/02/28 12:58:20 CST : Collecting extra files...
.....
Logs are being collected to: /u01/app/grid/tfa/repository/collection_Sun_Feb_28_12_58_14_CST_2016_node_all
/u01/app/grid/tfa/repository/collection_Sun_Feb_28_12_58_14_CST_2016_node_all/host1.tfa_Sun_Feb_28_12_58_14_CST_2016.zip

2.4.6 Collect the ASM logs from node 1 for a specified time range:

tfactl diagcollect -asm -node host1 -from "Feb/27/2016" -to "Feb/28/2016 01:00:00"
[root@host1 oswbb]# tfactl diagcollect -asm -node host1 -from "Feb/27/2016" -to "Feb/28/2016 01:00:00"
Collecting data for host1 node(s)
Scanning files from Feb/27/2016 00:00:00 to Feb/28/2016 01:00:00
Collection Id : 20160228130124host1
Repository Location in host1 : /u01/app/grid/tfa/repository
Collection monitor will wait up to 30 seconds for collections to start
2016/02/28 13:01:28 CST : Collection Name : tfa_Sun_Feb_28_13_01_24_CST_2016.zip
2016/02/28 13:01:28 CST : Scanning of files for Collection in progress...
2016/02/28 13:01:28 CST : Collecting extra files...
Logs are being collected to: /u01/app/grid/tfa/repository/collection_Sun_Feb_28_13_01_24_CST_2016_node_host1
/u01/app/grid/tfa/repository/collection_Sun_Feb_28_13_01_24_CST_2016_node_host1/host1.tfa_Sun_Feb_28_13_01_24_CST_2016.zip

The examples above show that by specifying just the parameters we need, we can filter and collect exactly the logs we want. This avoids the redundant, useless log files that diagcollection.pl would gather, improves the efficiency of communication with Support, and simplifies analysis.
Beyond what has been covered above, TFA has several other capabilities, such as collecting AWR data, collecting logs automatically, and access control for non-root users.
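As one example of the non-root access control mentioned above, here is a sketch assuming the tfactl access sub-commands available in 12.1.2.x TFA (check tfactl access -h for the exact syntax in your release):
[root@host1 ~]# tfactl access lsusers                 # list the users currently allowed to run tfactl
[root@host1 ~]# tfactl access add -user oracle        # allow the oracle user to run TFA collections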

3. Newly Bundled Features
Starting with release 12.1.2.3.0, TFA bundles many existing Oracle problem-analysis tools, including ORAchk, EXAchk, OSWatcher, Procwatcher, ORATOP, SQLT, DARDA, alertsummary and more. All of these tools can be invoked through the tfactl interface, and the bundled tools and their status can be listed as follows:
tfactl> toolstatus
.------------------------------------.
|       External Support Tools       |
+-------+--------------+-------------+
| Host  | Tool         | Status      |
+-------+--------------+-------------+
| host1 | alertsummary | DEPLOYED    |
| host1 | exachk       | DEPLOYED    |
| host1 | ls           | DEPLOYED    |
| host1 | pstack       | DEPLOYED    |
| host1 | orachk       | DEPLOYED    |
| host1 | sqlt         | DEPLOYED    |
| host1 | grep         | DEPLOYED    |
| host1 | summary      | DEPLOYED    |
| host1 | prw          | NOT RUNNING |
| host1 | vi           | DEPLOYED    |
| host1 | tail         | DEPLOYED    |
| host1 | param        | DEPLOYED    |
| host1 | dbglevel     | DEPLOYED    |
| host1 | darda        | DEPLOYED    |
| host1 | history      | DEPLOYED    |
| host1 | oratop       | DEPLOYED    |
| host1 | oswbb        | RUNNING     |
| host1 | changes      | DEPLOYED    |
| host1 | events       | DEPLOYED    |
| host1 | ps           | DEPLOYED    |
'-------+--------------+-------------'
Here are a few commonly used invocations:
3.1 Invoking orachk:
[root@host1 oswbb]# tfactl
tfactl> orachk
This computer is for [S]ingle instance database or part of a [C]luster to run RAC database [S|C] [C]:C
Unable to determine nodes in cluster. Do you want to enter manually.[y/n][y]y
Enter cluster node names delimited by comma.by defalut localhost will be printed. (eg. node2,node3,node4)
host1,host1,host2
Checking ssh user equivalency settings on all nodes in cluster
Node host2 is configured for ssh user equivalency for root user
CRS binaries found at /u01/app/11.2.0/grid. Do you want to set CRS_HOME to /u01/app/11.2.0/grid?[y/n][y]
Checking Status of Oracle Software Stack - Clusterware, ASM, RDBMS

3.2 Invoking oswbba (the OSWatcher analyzer):
Parsing file host1_iostat_16.02.28.1200.dat ...
Parsing file host1_iostat_16.02.28.1300.dat ...
Parsing file host1_vmstat_16.02.28.1200.dat ...
Parsing file host1_vmstat_16.02.28.1300.dat ...
Parsing file host1_netstat_16.02.28.1200.dat ...
Parsing file host1_netstat_16.02.28.1300.dat ...
Parsing file host1_top_16.02.28.1200.dat ...
Parsing file host1_top_16.02.28.1300.dat ...
Parsing file host1_ps_16.02.28.1200.dat ...
Parsing file host1_ps_16.02.28.1300.dat ...
Parsing Completed.
Enter 1 to Display CPU Process Queue Graphs
Enter 2 to Display CPU Utilization Graphs
Enter 3 to Display CPU Other Graphs
Enter 4 to Display Memory Graphs
Enter 5 to Display Disk IO Graphs
Enter 6 to Generate All CPU Gif Files
Enter 7 to Generate All Memory Gif Files
Enter 8 to Generate All Disk Gif Files
Enter L to Specify Alternate Location of Gif Directory
Enter T to Alter Graph Time Scale Only (Does not change analysis dataset)
Enter D to Return to Default Graph Time Scale
Enter R to Remove Currently Displayed Graphs
Enter A to Analyze Data
Enter S to Analyze Subset of Data(Changes analysis dataset including graph time scale)
Enter P to Generate A Profile
Enter X to Export Parsed Data to File
Enter Q to Quit Program
Please Select an Option:1

3.3 Invoking Procwatcher
tfactl> prw deploy
Sun Feb 28 13:26:15 CST 2016: Building default prwinit.ora at /u01/app/grid/tfa/repository/suptools/prw/root/prwinit.ora
Clusterware must be running with adequate permissions to deploy, exiting
tfactl> prw start
Sun Feb 28 13:27:00 CST 2016: Starting Procwatcher as user root
Sun Feb 28 13:27:00 CST 2016: Thank you for using Procwatcher. 
Sun Feb 28 13:27:00 CST 2016: Please add a comment to Oracle Support Note 459694.1
Sun Feb 28 13:27:00 CST 2016: if you have any comments, suggestions, or issues with this tool.
Procwatcher files will be written to: /u01/app/grid/tfa/repository/suptools/prw/root
Sun Feb 28 13:27:00 CST 2016: Started Procwatcher
tfactl> prw stop
Sun Feb 28 13:27:20 CST 2016: Stopping Procwatcher
Sun Feb 28 13:27:20 CST 2016: Checking for stray debugging sessions...(waiting 1 second)
Sun Feb 28 13:27:21 CST 2016: No debugging sessions found, all good, exiting...
Sun Feb 28 13:27:21 CST 2016: Thank you for using Procwatcher. 
Sun Feb 28 13:27:21 CST 2016: Please add a comment to Oracle Support Note 459694.1
Sun Feb 28 13:27:21 CST 2016: if you have any comments, suggestions, or issues with this tool.
Sun Feb 28 13:27:21 CST 2016: Procwatcher Stopped
Because many tools are bundled, and Oracle keeps enhancing them, we cannot list them all here; but all of the embedded tools above can be invoked simply through the tfactl interface. For detailed usage, refer to the explanations in note 1513912.2.
4. Appendix: Recommended Documents:
TFA Collector- The Preferred Tool for Automatic or ADHOC Diagnostic Gathering Across All Cluster Nodes [ID 1513912.2]
