Installation Environment
Host OS: Windows 10
VMs (VirtualBox): two Oracle Linux R6 U7 x86_64 guests
Oracle Database software: Oracle Database 11gR2
Cluster software: Oracle Grid Infrastructure 11gR2
The installation media I already had turned out to be faulty and cost a lot of time, so I re-downloaded the seven archive files; only the first three were actually needed.
Pay close attention to the SWAP size: 1.5x physical memory is recommended.
Shared storage: ASM
[root@rac1 ~]# lsb_release -a
LSB Version: :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
Distributor ID: OracleServer
Description: Oracle Linux Server release 6.5
Release: 6.5
Codename: n/a
[root@rac1 ~]# uname -r
3.8.13-16.2.1.el6uek.x86_64
Hardware requirements:
- Each server node needs at least two NICs: one public network interface and one private (heartbeat) interface.
- If you install Oracle Clusterware through the OUI, the interface name used for each network must be identical on every node. For example, if node1 uses eth0 as its public interface, node2 must also use eth0 (not eth1) as its public interface.
IP requirements:
DHCP is not used here; a static SCAN IP is assigned instead (the SCAN IP enables cluster load balancing: the clusterware assigns it to a node as conditions require).
Each node is assigned one public IP, one virtual IP (VIP), and one private IP.
The public IP, VIP, and SCAN IP must be on the same subnet.
Example of manual IP configuration without GNS:
Identity | Home Node | Host Node | Given Name | Type | Address |
RAC1 Public | RAC1 | RAC1 | rac1 | Public | 192.168.177.101 |
RAC1 VIP | RAC1 | RAC1 | rac1-vip | Public | 192.168.177.201 |
RAC1 Private | RAC1 | RAC1 | rac1-priv | Private | 192.168.139.101 |
RAC2 Public | RAC2 | RAC2 | rac2 | Public | 192.168.177.102 |
RAC2 VIP | RAC2 | RAC2 | rac2-vip | Public | 192.168.177.202 |
RAC2 Private | RAC2 | RAC2 | rac2-priv | Private | 192.168.139.102 |
SCAN IP | none | Selected by Oracle Clusterware | scan-ip | Virtual | 192.168.177.110 |
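The same-subnet rule stated above can be sanity-checked with a small shell sketch. The addresses are taken from the plan table; the helper name net_of is made up for illustration and assumes a /24 mask.

```shell
#!/bin/sh
# net_of: network part of a dotted-quad address, assuming a /24 mask
net_of() {
    echo "${1%.*}"    # strip the last octet
}

PUB_NET=$(net_of 192.168.177.101)    # rac1's public network
# The VIPs and the SCAN IP must land in the same /24 as the public IPs
for ip in 192.168.177.201 192.168.177.102 192.168.177.202 192.168.177.110; do
    if [ "$(net_of "$ip")" = "$PUB_NET" ]; then
        echo "$ip: same /24 as the public network"
    else
        echo "$ip: WRONG subnet"
    fi
done
```

Note that the private addresses (192.168.139.x) intentionally fail this test: they are supposed to be on their own subnet.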
II. Creating the Operating System
1. Oracle Linux 6.x
Disk partition of 20 GB; swap must be no smaller than RAM, ideally more than 1.5x RAM.
When selecting packages, in addition to the default Base packages, select the following:
- Compatibility libraries
- FTP server
- GNOME Desktop
- X Window System
- Development tools
- Chinese support
These packages can also be installed through the Oracle Linux preinstall RPM, which automatically tunes the Linux environment to satisfy Oracle's installation requirements.
2. Disable the firewall and SELinux (on both nodes, node1 and node2); otherwise the grid installation may hang at 65%.
[root@rac1 ~]# service iptables stop
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Flushing firewall rules: [ OK ]
iptables: Unloading modules: [ OK ]
[root@rac1 ~]# chkconfig iptables off
[root@rac1 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
[root@rac1 ~]# setenforce 0
[root@rac1 ~]#
3. Configure the installation DVD as a local YUM repository:
mv /etc/yum.repos.d/CentOS-Base.repo CentOS-Base.repo.bak
vim /etc/yum.repos.d/CentOS-Media.repo
[c6-media]
name=CentOS-$releasever - Media
baseurl=file:///media/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
yum clean all
yum makecache
On Oracle Linux:
[root@vmac6 ~]# cd /etc/yum.repos.d
[root@vmac6 yum.repos.d]# mv public-yum-ol6.repo public-yum-ol6.repo.bak
[root@vmac6 yum.repos.d]# touch public-yum-ol6.repo
[root@vmac6 yum.repos.d]# vim public-yum-ol6.repo
[oel6]
name = Enterprise Linux 6.3 DVD
baseurl=file:///media/OL6.3%20x86_64%20Disc%201%2020120626/Server
gpgcheck=0
enabled=1
4. Notes:
When installing Oracle Linux, give each VM two NICs: one Host-Only adapter for communication between the two virtual-machine nodes, and one NAT adapter for external access; static IPs are assigned manually later. Plan at least 2.5 GB each of memory and swap per host. Disk layout: /boot 500 MB, with the remaining space managed by LVM; of the LVM space, 2.5 GB goes to swap and the rest to /.
The two Oracle Linux hosts are named rac1 and rac2.
Ideally, place the two guest operating systems on different physical disks; otherwise I/O will struggle.
Check the memory and swap sizes:
[root@rac1 ~]# grep MemTotal /proc/meminfo
MemTotal: 2552560 kB
[root@rac1 ~]# grep SwapTotal /proc/meminfo
SwapTotal: 2621436 kB
If swap is too small, extend it as follows:
To extend swap with a swap file, first compute the number of blocks from the desired swap-file size in MB: blocks = size_in_MB * 1024. For example, to add a 64 MB swap file: blocks = 64 * 1024 = 65536.
Then run:
#dd if=/dev/zero of=/swapfile bs=1024 count=65536
#mkswap /swapfile
#swapon /swapfile
#vi /etc/fstab
Add the line: /swapfile swap swap defaults 0 0
# cat /proc/swaps    or    # free -m    // check the swap size
# swapoff /swapfile    // disable the extended swap area
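The block arithmetic above can be sketched as a small helper; the function name swap_blocks is made up for illustration, and the dd commands are only printed, not executed.

```shell
#!/bin/sh
# swap_blocks: dd block count for a swap file of $1 MB, given bs=1024
swap_blocks() {
    echo $(( $1 * 1024 ))
}

# Print the dd command that would create each swap file
for mb in 64 512 2048; do
    echo "dd if=/dev/zero of=/swapfile bs=1024 count=$(swap_blocks "$mb")   # ${mb} MB"
done
```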
5. Configure the network
(1) Configure IP addresses
// The gateway is determined by the hypervisor's network settings; eth0 connects to the external network, eth1 is the private heartbeat interface.
// On host rac1:
[root@rac1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
IPADDR=192.168.177.101
PREFIX=24
GATEWAY=192.168.177.1
DNS1=192.168.177.1
[root@rac1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
IPADDR=192.168.139.101
PREFIX=24
// On host rac2:
[root@rac2 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
IPADDR=192.168.177.102
PREFIX=24
GATEWAY=192.168.177.1
DNS1=192.168.177.1
[root@rac2 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
IPADDR=192.168.139.102
PREFIX=24
(2) Configure the hostname
// On host rac1:
[root@rac1 ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=rac1
GATEWAY=192.168.177.1
NOZEROCONF=yes
// On host rac2:
[root@rac2 ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=rac2
GATEWAY=192.168.177.1
NOZEROCONF=yes
(3) Configure /etc/hosts
Add the following entries on both rac1 and rac2:
[root@rac1 ~]# vi /etc/hosts
192.168.177.101 rac1
192.168.177.201 rac1-vip
192.168.139.101 rac1-priv
192.168.177.102 rac2
192.168.177.202 rac2-vip
192.168.139.102 rac2-priv
192.168.177.110 scan-ip
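Since the address plan is fully regular, the entries above can be generated rather than typed, which keeps the two nodes' files identical. A sketch; gen_hosts is a hypothetical helper, not part of the original procedure.

```shell
#!/bin/sh
# gen_hosts: print the RAC /etc/hosts entries from the address plan
gen_hosts() {
    for n in 1 2; do
        echo "192.168.177.10$n rac$n"          # public
        echo "192.168.177.20$n rac$n-vip"      # virtual
        echo "192.168.139.10$n rac$n-priv"     # private
    done
    echo "192.168.177.110 scan-ip"             # SCAN
}

gen_hosts    # on a node: gen_hosts >> /etc/hosts
```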
6. Create users, groups, and installation directories
/usr/sbin/groupadd -g 1000 oinstall
/usr/sbin/groupadd -g 1020 asmadmin
/usr/sbin/groupadd -g 1021 asmdba
/usr/sbin/groupadd -g 1022 asmoper
/usr/sbin/groupadd -g 1031 dba
/usr/sbin/groupadd -g 1032 oper
useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
useradd -u 1101 -g oinstall -G dba,asmdba,oper oracle
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/grid
mkdir /u01/app/oracle
chown -R grid:oinstall /u01
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/
[root@rac1 ~]# passwd grid
[root@rac1 ~]# passwd oracle
7. Modify kernel parameters and user limits
[root@rac1 ~]# vi /etc/sysctl.conf    # already adjusted by the preinstall package
[root@rac1 ~]# vim /etc/security/limits.conf    # the grid entries below must be added
# grid-rdbms-server-11gR2-preinstall setting for nofile soft limit is 1024
grid soft nofile 1024
# grid-rdbms-server-11gR2-preinstall setting for nofile hard limit is 65536
grid hard nofile 65536
# grid-rdbms-server-11gR2-preinstall setting for nproc soft limit is 2047
grid soft nproc 2047
# grid-rdbms-server-11gR2-preinstall setting for nproc hard limit is 16384
grid hard nproc 16384
# grid-rdbms-server-11gR2-preinstall setting for stack soft limit is 10240KB
grid soft stack 10240
# grid-rdbms-server-11gR2-preinstall setting for stack hard limit is 32768KB
grid hard stack 32768
[root@rac1 ~]# vi /etc/pam.d/login
session required pam_limits.so
8. Configure user environment variables
[root@rac1 ~]# su - grid
[grid@rac1 ~]$ vim .bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+ASM1    # on RAC2 use +ASM2; keep only the line for the local node
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
umask 022
[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ vim .bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=orcl1    # on RAC2 use orcl2; keep only the line for the local node
export ORACLE_UNQNAME=orcl
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export TNS_ADMIN=$ORACLE_HOME/network/admin
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
[oracle@rac1 ~]$ source .bash_profile    # apply the settings
9. Clone the second node and set up the shared storage
Update the network configuration on the second machine:
cd /etc/udev/rules.d
vim 70-persistent-net.rules
Manually add several shared disks in VirtualBox, and attach each disk to node 2 as well.
Create the storage devices:
cd /dev
ls -l sd*
Run the following script on both nodes to bind the shared disks:
for i in b c d e f g ;
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"" >> /etc/udev/rules.d/99-oracle-asmdevices.rules
done
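To make explicit what the loop above appends to 99-oracle-asmdevices.rules, here is a sketch that prints one generated rule with a placeholder WWID. FAKE_WWID stands in for the real /sbin/scsi_id output, and make_rule is a made-up helper.

```shell
#!/bin/sh
# make_rule: print one udev rule line as the loop above would generate it
# $1 = disk letter, $2 = WWID reported by scsi_id for /dev/sd$1
make_rule() {
    printf 'KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="%s", NAME="asm-disk%s", OWNER="grid", GROUP="asmadmin", MODE="0660"\n' "$2" "$1"
}

make_rule b FAKE_WWID    # one line of 99-oracle-asmdevices.rules
```

At boot, udev runs scsi_id against every sd* device; the device whose WWID matches RESULT gets the stable name asm-diskb owned by grid:asmadmin, which is what ASM needs.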
/sbin/start_udev
Verify:
[root@rac1 dev]# ls -l asm*
brw-rw---- 1 grid asmadmin 8, 16 Apr 27 12:00 asm-diskb
brw-rw---- 1 grid asmadmin 8, 32 Apr 27 12:00 asm-diskc
brw-rw---- 1 grid asmadmin 8, 48 Apr 27 12:00 asm-diskd
brw-rw---- 1 grid asmadmin 8, 64 Apr 27 12:00 asm-diske
brw-rw---- 1 grid asmadmin 8, 80 Apr 27 12:00 asm-diskf
brw-rw---- 1 grid asmadmin 8, 96 Apr 27 12:00 asm-diskg
For reference, creating the shared storage under VMware:
In the VMware installation directory, from a cmd prompt:
C:\Program Files (x86)\VMware\VMware Workstation>
vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 F:\VMware\RAC\Sharedisk\ocr.vmdk
vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 F:\VMware\RAC\Sharedisk\ocr2.vmdk
vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 F:\VMware\RAC\Sharedisk\votingdisk.vmdk
vmware-vdiskmanager.exe -c -s 20000Mb -a lsilogic -t 2 F:\VMware\RAC\Sharedisk\data.vmdk
vmware-vdiskmanager.exe -c -s 10000Mb -a lsilogic -t 2 F:\VMware\RAC\Sharedisk\backup.vmdk
Example:
C:\Program Files (x86)\VMware\VMware Workstation>vmware-vdiskmanager.exe -c -s 2g -a lsilogic -t 2 d:\vpc\rac\share\ocr.vmdk
Creating disk 'd:\vpc\rac\share\ocr.vmdk'
Create: 100% done.
Virtual disk creation successful.
C:\Program Files (x86)\VMware\VMware Workstation>vmware-vdiskmanager.exe -c -s 5g -a lsilogic -t 2 d:\vpc\rac\share\data.vmdk
Creating disk 'd:\vpc\rac\share\data.vmdk'
Create: 100% done.
Virtual disk creation successful.
C:\Program Files (x86)\VMware\VMware Workstation>vmware-vdiskmanager.exe -c -s 2g -a lsilogic -t 2 d:\vpc\rac\share\fra.vmdk
Creating disk 'd:\vpc\rac\share\fra.vmdk'
Create: 100% done.
Virtual disk creation successful.
Note: -a specifies the disk adapter type; -t 2 creates a preallocated flat virtual disk file.
This creates two 1 GB OCR disks, one 1 GB voting disk, one 20 GB data disk, and one 10 GB backup disk.
Environment Configuration
By default, the operations below must be performed on every node; all passwords are set to oracle.
1. Connect via SecureCRT
If Backspace produces ^H in sqlplus:
Options->Session Options->Terminal->Emulation->Mapped Keys->Other mappings
Check "Backspace sends delete".
If Delete and Home do not work in vi:
Options->Session Options->Terminal->Emulation
Set Terminal to Linux
Check "Select an alternate keyboard emulation" and choose Linux
3. Disable NTP and adjust the port-range parameter (configure on both nodes, node1 and node2)
Oracle recommends using the Oracle Cluster Time Synchronization Service instead, so stop and remove NTP:
[root@node1 ~]# service ntpd stop
[root@node1 ~]# chkconfig ntpd off
[root@node1 ~]# mv /etc/ntp.conf /etc/ntp.conf.old
[root@node1 ~]# rm -rf /var/run/ntpd.pid
4. Check the TCP/UDP port range
# cat /proc/sys/net/ipv4/ip_local_port_range
If this already shows 9000 65500, skip the steps below.
# echo 9000 65500 > /proc/sys/net/ipv4/ip_local_port_range
# vim /etc/sysctl.conf
# add this line:
# TCP/UDP port range
net.ipv4.ip_local_port_range = 9000 65500
# restart the network
# /etc/rc.d/init.d/network restart
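A sketch of the check being performed: covers is a made-up helper, and the default range 32768 61000 is a typical Linux value, not taken from this document.

```shell
#!/bin/sh
# covers: does the range $1-$2 already include Oracle's required 9000-65500?
covers() {
    if [ "$1" -le 9000 ] && [ "$2" -ge 65500 ]; then
        echo yes
    else
        echo no
    fi
}

covers 32768 61000    # typical default range: needs widening
covers 9000 65500     # required range: nothing to do
```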
Synchronize the clocks (on both nodes, node1 and node2):
[root@node1 ~]# date -s 23:29:00
[root@node1 ~]# ssh 192.168.7.12 date;date
[root@node1 ~]# clock -w
date -s 03/07/2017    # set the date to March 7, 2017
date -s 23:29:00      # set the time to 23:29:00
clock -w              # sync the BIOS clock: force the system time to be written to it
5. System file settings
(1) Kernel parameters:
[root@rac1 ~]# vi /etc/sysctl.conf
kernel.msgmnb = 65536
kernel.msgmax = 65536
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 1306910720
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.tcp_wmem = 262144 262144 262144
net.ipv4.tcp_rmem = 4194304 4194304 4194304
The installer's prerequisite check later requires this value to be raised:
kernel.shmmax = 68719476736
Apply the kernel changes:
[root@rac1 ~]# sysctl -p
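The original value 1306910720 is exactly half of the 2552560 kB MemTotal shown earlier, matching the common rule of sizing shmmax to half of physical memory. A sketch of that arithmetic; half_ram_bytes is a made-up helper.

```shell
#!/bin/sh
# half_ram_bytes: half of physical memory in bytes, given MemTotal in kB
half_ram_bytes() {
    echo $(( $1 * 1024 / 2 ))    # kB -> bytes, then halve
}

half_ram_bytes 2552560    # the MemTotal from /proc/meminfo above
```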
Alternatively, the preinstall package on the Oracle Linux media can make these adjustments:
[root@rac1 Packages]# pwd
/mnt/cdrom/Packages
[root@rac1 Packages]# ll | grep preinstall
-rw-r--r-- 1 root root 15524 Dec 25 2012 oracle-rdbms-server-11gR2-preinstall-1.0-7.el6.x86_64.rpm
(2) Configure shell limits for the oracle and grid users
[root@rac1 ~]# vi /etc/security/limits.conf
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
(3) Configure login
[root@rac1 ~]# vi /etc/pam.d/login
session required pam_limits.so
(4) Install the required packages
binutils-2.20.51.0.2-5.11.el6 (x86_64)
compat-libcap1-1.10-1 (x86_64)
compat-libstdc++-33-3.2.3-69.el6 (x86_64)
compat-libstdc++-33-3.2.3-69.el6.i686
gcc-4.4.4-13.el6 (x86_64)
gcc-c++-4.4.4-13.el6 (x86_64)
glibc-2.12-1.7.el6 (i686)
glibc-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6.i686
ksh
libgcc-4.4.4-13.el6 (i686)
libgcc-4.4.4-13.el6 (x86_64)
libstdc++-4.4.4-13.el6 (x86_64)
libstdc++-4.4.4-13.el6.i686
libstdc++-devel-4.4.4-13.el6 (x86_64)
libstdc++-devel-4.4.4-13.el6.i686
libaio-0.3.107-10.el6 (x86_64)
libaio-0.3.107-10.el6.i686
libaio-devel-0.3.107-10.el6 (x86_64)
libaio-devel-0.3.107-10.el6.i686
make-3.81-19.el6
sysstat-9.0.4-11.el6 (x86_64)
These are installed here from a local YUM repository, configured first as follows:
[root@rac1 ~]# mount /dev/cdrom /mnt/cdrom/
[root@rac1 ~]# vi /etc/yum.repos.d/dvd.repo
[dvd]
name=dvd
baseurl=file:///mnt/cdrom
gpgcheck=0
enabled=1
[root@rac1 ~]# yum clean all
[root@rac1 ~]# yum makecache
[root@rac1 ~]# yum install gcc gcc-c++ glibc* glibc-devel* ksh libgcc* libstdc++* libstdc++-devel* make sysstat
Under VirtualBox, run: yum install oracle-rdbms-server-11gR2-preinstall-1.0-6.el6
Reference: http://www.cnblogs.com/ld1977/articles/6767918.html
6. Configure the grid and oracle user environment variables
ORACLE_SID must be set differently on each node.
[root@rac1 ~]# su - grid
[grid@rac1 ~]$ vi .bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+ASM1    # on RAC2 use +ASM2; keep only the line for the local node
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
umask 022
Note that ORACLE_UNQNAME is the database name; when a database is created across multiple nodes, one instance is created per node, and ORACLE_SID is the name of the local database instance.
[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ vi .bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=orcl1    # on RAC2 use orcl2; keep only the line for the local node
export ORACLE_UNQNAME=orcl
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export TNS_ADMIN=$ORACLE_HOME/network/admin
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
$ source .bash_profile    # apply the settings
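Instead of editing .bash_profile differently on each node, the node-specific SID could be derived from the hostname. A sketch under the assumption that the hostnames are rac1/rac2 as used throughout; sid_for is a hypothetical helper.

```shell
#!/bin/sh
# sid_for: build the node-specific SID from a hostname and a SID prefix
# $1 = hostname (rac1 or rac2), $2 = prefix (+ASM for grid, orcl for oracle)
sid_for() {
    echo "$2${1#rac}"    # rac2 + orcl -> orcl2
}

sid_for rac1 +ASM    # grid user on node 1
sid_for rac2 orcl    # oracle user on node 2
```

In .bash_profile this would read: export ORACLE_SID=$(sid_for "$(hostname)" +ASM), making the file identical on both nodes.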
7. Configure SSH equivalence for the grid and oracle users
This is a critical step. Although the official documentation states that the OUI configures SSH automatically when installing GI and RAC, configuring the equivalence manually is better, so that the CVU checks can be run before the installation.
The procedure is as follows.
Generate keys on each node:
[root@node1 ~]# su - grid
mkdir ~/.ssh
chmod 700 ~/.ssh
ssh-keygen -t rsa
ssh-keygen -t dsa
[root@node1 ~]# su - oracle
mkdir ~/.ssh
chmod 700 ~/.ssh
ssh-keygen -t rsa
ssh-keygen -t dsa
On node 1, set up the mutual trust:
[root@node1 ~]# su - grid
touch ~/.ssh/authorized_keys
cd ~/.ssh
ssh rac1 cat ~/.ssh/id_rsa.pub >> authorized_keys
ssh rac2 cat ~/.ssh/id_rsa.pub >> authorized_keys
ssh rac1 cat ~/.ssh/id_dsa.pub >> authorized_keys
ssh rac2 cat ~/.ssh/id_dsa.pub >> authorized_keys
[root@node1 ~]# su - oracle
touch ~/.ssh/authorized_keys
cd ~/.ssh
ssh rac1 cat ~/.ssh/id_rsa.pub >> authorized_keys
ssh rac2 cat ~/.ssh/id_rsa.pub >> authorized_keys
ssh rac1 cat ~/.ssh/id_dsa.pub >> authorized_keys
ssh rac2 cat ~/.ssh/id_dsa.pub >> authorized_keys
From node1, copy the authorized_keys file holding the public keys to node2:
[root@node1 ~]# su - grid
scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys
[root@node1 ~]# su - oracle
scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys
Verify the SSH configuration as both the grid and oracle users, on both nodes, node1 and node2:
[root@node1 ~]# su - grid
Set the permissions on the authorized_keys file:
chmod 600 ~/.ssh/authorized_keys
Enable user equivalence for this session:
exec /usr/bin/ssh-agent $SHELL
/usr/bin/ssh-add
Verify the SSH configuration:
ssh rac1 date
ssh rac2 date
ssh rac1-priv date
ssh rac2-priv date
[root@node1 ~]# su - oracle
Set the permissions on the authorized_keys file:
chmod 600 ~/.ssh/authorized_keys
Enable user equivalence for this session:
exec /usr/bin/ssh-agent $SHELL
/usr/bin/ssh-add
Verify the SSH configuration:
ssh rac1 date
ssh rac2 date
ssh rac1-priv date
ssh rac2-priv date
If each command prints the date without asking for a password, the SSH setup is correct. These commands must be run on both nodes, and each command requires typing yes the first time it is executed.
If these commands are not run, then even with SSH equivalence configured, the clusterware installation will fail with the error:
The specified nodes are not clusterable
This is because, even after SSH is configured, each host must be accessed once (answering yes) before access to the other servers is truly unobstructed.
Remember: what SSH equivalence must achieve is passwordless SSH access between all nodes.
Also note: do not set a passphrase when generating the keys, the authorized_keys file must have mode 600, and the two nodes must each ssh to the other at least once.
8. Configure the disks
Managing storage with ASM requires raw or block devices; the shared disks were attached to both hosts earlier. There are three ways to bind the devices: (1) oracleasm; (2) the /etc/udev/rules.d/60-raw.rules configuration file (character-device binding through udev); (3) a udev rules script (block-device binding, faster than the character method and the most current approach; recommended).
Before binding, partition the disks:
fdisk /dev/sdb
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
Finally, save the changes with the w command.
Repeat these steps to partition the other disks, giving the following devices:
[root@rac1 ~]# ls /dev/sd*
/dev/sda /dev/sda1 /dev/sda2 /dev/sdb /dev/sdb1 /dev/sdc /dev/sdc1 /dev/sdd /dev/sdd1 /dev/sde /dev/sde1 /dev/sdf /dev/sdf1
Binding raw devices (method 2; not used in this install):
[root@rac1 ~]# vi /etc/udev/rules.d/60-raw.rules
ACTION=="add",KERNEL=="sdb1",RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add",ENV{MAJOR}=="8",ENV{MINOR}=="17",RUN+="/bin/raw /dev/raw/raw1 %M %m"
ACTION=="add",KERNEL=="sdc1",RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add",ENV{MAJOR}=="8",ENV{MINOR}=="33",RUN+="/bin/raw /dev/raw/raw2 %M %m"
ACTION=="add",KERNEL=="sdd1",RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add",ENV{MAJOR}=="8",ENV{MINOR}=="49",RUN+="/bin/raw /dev/raw/raw3 %M %m"
ACTION=="add",KERNEL=="sde1",RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add",ENV{MAJOR}=="8",ENV{MINOR}=="65",RUN+="/bin/raw /dev/raw/raw4 %M %m"
ACTION=="add",KERNEL=="sdf1",RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add",ENV{MAJOR}=="8",ENV{MINOR}=="81",RUN+="/bin/raw /dev/raw/raw5 %M %m"
KERNEL=="raw[1-5]",OWNER="grid",GROUP="asmadmin",MODE="660"
[root@rac1 ~]# start_udev
Starting udev: [ OK ]
[root@rac1 ~]# ll /dev/raw/
total 0
crw-rw---- 1 grid asmadmin 162, 1 Apr 13 13:51 raw1
crw-rw---- 1 grid asmadmin 162, 2 Apr 13 13:51 raw2
crw-rw---- 1 grid asmadmin 162, 3 Apr 13 13:51 raw3
crw-rw---- 1 grid asmadmin 162, 4 Apr 13 13:51 raw4
crw-rw---- 1 grid asmadmin 162, 5 Apr 13 13:51 raw5
crw-rw---- 1 root disk 162, 0 Apr 13 13:51 rawctl
Note that there must be no spaces around the commas in these rules, or udev will report errors. The resulting raw devices must end up owned by grid:asmadmin.
Method (3) (not used in this install):
[root@rac1 ~]# for i in b c d e f ;
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"">> /etc/udev/rules.d/99-oracle-asmdevices.rules
done
[root@rac1 ~]# start_udev
Starting udev: [ OK ]
[root@rac1 ~]# ll /dev/*asm*
brw-rw---- 1 grid asmadmin 8, 16 Apr 27 18:52 /dev/asm-diskb
brw-rw---- 1 grid asmadmin 8, 32 Apr 27 18:52 /dev/asm-diskc
brw-rw---- 1 grid asmadmin 8, 48 Apr 27 18:52 /dev/asm-diskd
brw-rw---- 1 grid asmadmin 8, 64 Apr 27 18:52 /dev/asm-diske
brw-rw---- 1 grid asmadmin 8, 80 Apr 27 18:52 /dev/asm-diskf
With this method, when the ASM disk groups are created later, the Disk Discovery Path must be set to /dev/*asm*.
Following this tutorial, the ASM setup did not succeed, so the following method was used instead:
[root@node1 ~]# cd /tmp/oracle
[root@node1 ~]# rpm -ivh kmod-oracleasm-2.0.6.rh1-3.el6.x86_64.rpm
[root@node1 ~]# rpm -ivh oracleasmlib-2.0.4-1.el6.x86_64.rpm
[root@node1 ~]# rpm -Uvh oracleasm-support-2.1.8-1.el6.x86_64.rpm
[root@node1 ~]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Installing kmod-oracleasm-2.0.6.rh1-3.el6.x86_64.rpm failed with a kernel mismatch. Version 2.0.8, found at http://rpm.pbone.net/index.php3/stat/4/idpl/30518374/dir/scientific_linux_6/com/kmod-oracleasm-2.0.8-5.el6_7.x86_64.rpm.html, installs on Oracle Linux 6.7.
Download mirrors:
mirror.switch.ch kmod-oracleasm-2.0.8-5.el6_7.x86_64.rpm
ftp.rediris.es kmod-oracleasm-2.0.8-5.el6_7.x86_64.rpm
ftp.pbone.net kmod-oracleasm-2.0.8-5.el6_7.x86_64.rpm
ftp.icm.edu.pl kmod-oracleasm-2.0.8-5.el6_7.x86_64.rpm
I added each disk manually in the VMware virtual machines; add them on both nodes.
After adding the disks, partition each new disk.
On node1:
[root@node1 ~]# fdisk /dev/sdb
m (help)
p (print the partition table)
n (new partition), then p, 1, accepting the defaults for the first and last cylinders
p (print)
w (write and exit)
q (quit)
fdisk /dev/sdc
fdisk /dev/sdd
fdisk /dev/sde
fdisk /dev/sdf
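The keystrokes above can also be fed to fdisk non-interactively. This sketch only prints the answer sequence; fdisk_answers is a made-up helper, and the blank lines accept the default first and last cylinders.

```shell
#!/bin/sh
# fdisk_answers: the keystrokes for one whole-disk primary partition
fdisk_answers() {
    printf 'n\np\n1\n\n\nw\n'
}

fdisk_answers    # usage on a node: fdisk_answers | fdisk /dev/sdb
```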
Configure the ASM Library (must be done on both nodes, node1 and node2):
[root@node1 ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
[root@node1 ~]# /usr/sbin/oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm
Create the ASM disks (only needed on node1):
/usr/sbin/oracleasm createdisk VDK001 /dev/sdb1
/usr/sbin/oracleasm createdisk VDK002 /dev/sdc1
/usr/sbin/oracleasm createdisk VDK003 /dev/sdd1
/usr/sbin/oracleasm createdisk VDK004 /dev/sde1
/usr/sbin/oracleasm createdisk VDK005 /dev/sdf1
[root@node1 oracle]# /usr/sbin/oracleasm createdisk VDK001 /dev/sdb1
Writing disk header: done
Instantiating disk: done
[root@node1 oracle]# /usr/sbin/oracleasm createdisk VDK002 /dev/sdc1
Writing disk header: done
Instantiating disk: done
[root@node1 oracle]# /usr/sbin/oracleasm createdisk VDK003 /dev/sdd1
Writing disk header: done
Instantiating disk: done
[root@node1 oracle]# /usr/sbin/oracleasm createdisk VDK004 /dev/sde1
Writing disk header: done
Instantiating disk: done
[root@node1 oracle]# /usr/sbin/oracleasm createdisk VDK005 /dev/sdf1
Writing disk header: done
Instantiating disk: done
Troubleshooting:
Marking disk "VOL5" as an ASM disk: [FAILED]
The failure occurs because the new partition was not yet recognized; run /sbin/partprobe first.
/etc/init.d/oracleasm createdisk VDK001 /dev/sdb1
/etc/init.d/oracleasm createdisk VDK002 /dev/sdc1
/etc/init.d/oracleasm createdisk VDK003 /dev/sdd1
/etc/init.d/oracleasm createdisk VDK004 /dev/sde1
/etc/init.d/oracleasm createdisk VDK005 /dev/sdf1
To delete an existing disk:
/etc/init.d/oracleasm deletedisk vdk001
Scan for and list the ASM disks (must be done on both nodes, node1 and node2):
/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks
[root@node1 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@node1 ~]# /etc/init.d/oracleasm listdisks
VDK001
VDK002
VDK003
VDK004
VDK005
[root@node2 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@node2 ~]# /etc/init.d/oracleasm listdisks
VDK001
VDK002
VDK003
VDK004
VDK005
Note: the "Device ... is already labeled for ASM disk ..." error:
[root@node1 oracle]# /usr/sbin/oracleasm createdisk VDK004 /dev/sde1
Device "/dev/sde1" is already labeled for ASM disk "VDK001"
[root@node1 oracle]# /usr/sbin/oracleasm renamedisk -f /dev/sde1 VDK004
Writing disk header: done
Instantiating disk "VDK004": done
Installing the Grid Software
The packages libaio-0.3.105 (i386), compat-libstdc++-33-3.2.3 (i386), libaio-devel (i386), libgcc (i386), libstdc++ (i386), unixODBC (i386), unixODBC-devel (i386), and pdksh fail the prerequisite check; these failures were ignored and the installation proceeded.
Run the full pre-installation check (DNS failures can be ignored; only on node1):
[root@node1 ~]# su - grid
[grid@node1 ~]$ cd /tmp/oracle/grid
[grid@oraclerac1 grid]$ ./runcluvfy.sh comp nodecon -n oraclerac1,oraclerac2 -verbose
Verifying node connectivity
Checking node connectivity...
Checking hosts config file...
Node Name Status
------------------------------------ ------------------------
oraclerac1 passed
oraclerac2 passed
Verification of the hosts config file successful
Interface information for node "oraclerac1"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.7.11 192.168.7.0 0.0.0.0 192.168.7.1 08:00:27:17:68:C7 1500
eth2 172.16.16.1 172.16.16.0 0.0.0.0 192.168.7.1 08:00:27:00:27:D9 1500
Interface information for node "oraclerac2"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.7.12 192.168.7.0 0.0.0.0 192.168.7.1 08:00:27:AB:1D:38 1500
eth2 172.16.16.2 172.16.16.0 0.0.0.0 192.168.7.1 08:00:27:7A:74:50 1500
Check: Node connectivity of subnet "192.168.7.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
oraclerac1[192.168.7.11] oraclerac2[192.168.7.12] yes
Result: Node connectivity passed for subnet "192.168.7.0" with node(s) oraclerac1,oraclerac2
Check: TCP connectivity of subnet "192.168.7.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
oraclerac1:192.168.7.11 oraclerac2:192.168.7.12 passed
Result: TCP connectivity check passed for subnet "192.168.7.0"
Check: Node connectivity of subnet "172.16.16.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
oraclerac1[172.16.16.1] oraclerac2[172.16.16.2] yes
Result: Node connectivity passed for subnet "172.16.16.0" with node(s) oraclerac1,oraclerac2
Check: TCP connectivity of subnet "172.16.16.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
oraclerac1:172.16.16.1 oraclerac2:172.16.16.2 passed
Result: TCP connectivity check passed for subnet "172.16.16.0"
Interfaces found on subnet "192.168.7.0" that are likely candidates for VIP are:
oraclerac1 eth0:192.168.7.11
oraclerac2 eth0:192.168.7.12
Interfaces found on subnet "172.16.16.0" that are likely candidates for a private interconnect are:
oraclerac1 eth2:172.16.16.1
oraclerac2 eth2:172.16.16.2
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.7.0".
Subnet mask consistency check passed for subnet "172.16.16.0".
Subnet mask consistency check passed.
Result: Node connectivity check passed
Verification of node connectivity was successful.
21. Install the grid software
[root@node1 ~]# export DISPLAY=:0.0
[root@node1 ~]# xhost +
access control disabled, clients can connect from any host
[root@node1 ~]# su - grid
[grid@node1 ~]$ xhost +
access control disabled, clients can connect from any host
[grid@oraclerac1 grid]$ ./runInstaller
Define the cluster name; set the SCAN Name to the scan-ip defined in /etc/hosts and uncheck GNS.
The node list initially shows only the first node, rac1; click "Add" to add the second node, rac2.
Configure ASM: select the raw devices raw1, raw2, and raw3 configured earlier, with External redundancy (no redundancy). Since redundancy is not used, a single device would also be enough. These devices hold the OCR registry and the voting disk.
While installing grid, running /u01/app/11.2.0/grid/root.sh produced an ohasd failure with the following errors:
Adding daemon to inittab
CRS-4124: Oracle High Availability Services startup failed.
CRS-4000: Command Start failed, or completed with errors.
ohasd failed to start: Inappropriate ioctl for device
ohasd failed to start at /u01/app/11.2.0/grid/crs/install/rootcrs.pl line 443.
For the fix, see http://www.cnblogs.com/ld1977/articles/6765341.html
Check the log as prompted:
[grid@rac1 grid]$ vi /u01/app/oraInventory/logs/installActions2016-04-10_04-57-29PM.log
In command mode, search for errors with /ERROR
WARNING:
INFO: Completed Plugin named: Oracle Cluster Verification Utility
INFO: Checking name resolution setup for "scan-ip"...
INFO: ERROR:
INFO: PRVG-1101 : SCAN name "scan-ip" failed to resolve
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "scan-ip" (IP address: 192.168.248.110) failed
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "scan-ip"
INFO: Verification of SCAN VIP and Listener setup failed
The error log shows the failure is caused by resolv.conf not being configured; it can be ignored.
Grid installation inventory location
At this point the grid clusterware installation is complete.
2. Post-installation resource checks
Run the following commands as the grid user.
[root@rac1 ~]# su - grid
Check the CRS status:
[grid@rac1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
Check the Clusterware resources:
[grid@rac1 ~]$ crs_stat -t -v
Name Type R/RA F/FT Target State Host
----------------------------------------------------------------------
ora....ER.lsnr ora....er.type 0/5 0/ ONLINE ONLINE rac1
ora....N1.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE rac1
ora.OCR.dg ora....up.type 0/5 0/ ONLINE ONLINE rac1
ora.asm ora.asm.type 0/5 0/ ONLINE ONLINE rac1
ora.cvu ora.cvu.type 0/5 0/0 ONLINE ONLINE rac1
ora.gsd ora.gsd.type 0/5 0/ OFFLINE OFFLINE
ora....network ora....rk.type 0/5 0/ ONLINE ONLINE rac1
ora.oc4j ora.oc4j.type 0/1 0/2 ONLINE ONLINE rac1
ora.ons ora.ons.type 0/3 0/ ONLINE ONLINE rac1
ora....SM1.asm application 0/5 0/0 ONLINE ONLINE rac1
ora....C1.lsnr application 0/5 0/0 ONLINE ONLINE rac1
ora.rac1.gsd application 0/5 0/0 OFFLINE OFFLINE
ora.rac1.ons application 0/3 0/0 ONLINE ONLINE rac1
ora.rac1.vip ora....t1.type 0/0 0/0 ONLINE ONLINE rac1
ora....SM2.asm application 0/5 0/0 ONLINE ONLINE rac2
ora....C2.lsnr application 0/5 0/0 ONLINE ONLINE rac2
ora.rac2.gsd application 0/5 0/0 OFFLINE OFFLINE
ora.rac2.ons application 0/3 0/0 ONLINE ONLINE rac2
ora.rac2.vip ora....t1.type 0/0 0/0 ONLINE ONLINE rac2
ora.scan1.vip ora....ip.type 0/0 0/0 ONLINE ONLINE rac1
Check the cluster nodes:
[grid@rac1 ~]$ olsnodes -n
rac1 1
rac2 2
Check the Oracle TNS listener processes on both nodes:
[grid@rac1 ~]$ ps -ef|grep lsnr|grep -v 'grep'|grep -v 'ocfs'|awk '{print$9}'
LISTENER_SCAN1
LISTENER
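The pipeline above prints field 9 of the ps -ef output, which is the listener name passed to the tnslsnr binary. A self-contained sketch with a canned (hypothetical) ps line makes the field positions explicit:

```shell
#!/bin/sh
# Feed one fabricated `ps -ef` style line through the same awk filter used
# above; fields 1-7 are UID..TIME, field 8 is the tnslsnr binary path,
# and field 9 is the listener name argument.
sample_ps() {
    echo 'grid 4321 1 0 12:00 ? 00:00:01 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER -inherit'
}

sample_ps | awk '{print $9}'    # prints: LISTENER
```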
Confirm that Oracle ASM is serving the Oracle Clusterware files:
If the OCR and voting disk files were installed on Oracle ASM, then as the Grid Infrastructure installation owner, use the following command syntax to confirm that the installed Oracle ASM instance is running:
[grid@rac1 ~]$ srvctl status asm -a
ASM is running on rac2,rac1
ASM is enabled.