Installing an Oracle 19c Cluster on Oracle Linux R7 U9

owner
2024-01-25

1 Installation Plan

1.1 Test Environment Plan
Item            Software
Virtualization  VMware® Workstation 17 Pro 17.0.0 build-20800274
OS              OracleLinux-R7-U9-Server-x86_64-dvd.iso
Oracle GI       LINUX.X64_193000_grid_home.zip
Oracle DB       LINUX.X64_193000_db_home.zip
1.2 Virtual Machine Plan

Hardware plan

Item      Configuration
CPU       2 cores
Memory    4 GB
Disk      40 GB
NIC       two NICs
Image     -
1.3 Network Plan
Node    Public IP        Private IP    Virtual IP       SCAN IP
rac01   192.168.110.221  10.10.10.221  192.168.110.223  192.168.110.220
rac02   192.168.110.222  10.10.10.222  192.168.110.224  192.168.110.220
1.4 System Partition Plan
Partition   Size             Purpose
/boot       1 G              boot partition
/boot/efi   1 G              EFI partition
swap        4 G              swap space (sized to match RAM)
/           remaining space  root partition
1.5 Shared Storage Plan
Virtual disk   Size
OCR1           1 G
OCR2           1 G
OCR3           1 G
FRA            10 G
DATA           40 G
1.6 User and Group Plan
Group name   GID     Description
oinstall     54321   Oracle inventory and software owner
dba          54322   database administrators
oper         54323   DBA operator group
backupdba    54324   backup administrators
dgdba        54325   Data Guard administrators
kmdba        54326   key management administrators
asmdba       54327   ASM database administrator group
asmoper      54328   ASM operator group
asmadmin     54329   Oracle Automatic Storage Management group
racdba       54330   RAC administrators
1.7 Software Directory Plan
Name                   Path                                      Description
ORACLE_BASE (oracle)   /u01/app/oracle                           oracle base directory
ORACLE_HOME (oracle)   /u01/app/oracle/product/19.3.0/dbhome_1   oracle software home
ORACLE_BASE (grid)     /u01/app/grid                             grid base directory
ORACLE_HOME (grid)     /u01/app/19.3.0/grid                      grid software home

2 OS Installation

Note: installation steps omitted.

3 Shared Storage Configuration

3.1 Create Virtual Disks (Windows Host)
# From a CMD prompt in the VMware installation directory, run the commands below to create the vmdk files
vmware-vdiskmanager.exe -c -s 1g -a lsilogic -t 2 "C:\Users\syspn\Documents\Virtual Machines\ShareDisk\ocr1.vmdk"
vmware-vdiskmanager.exe -c -s 1g -a lsilogic -t 2 "C:\Users\syspn\Documents\Virtual Machines\ShareDisk\ocr2.vmdk"
vmware-vdiskmanager.exe -c -s 1g -a lsilogic -t 2 "C:\Users\syspn\Documents\Virtual Machines\ShareDisk\ocr3.vmdk"
vmware-vdiskmanager.exe -c -s 10GB -a lsilogic -t 2 "C:\Users\syspn\Documents\Virtual Machines\ShareDisk\fra.vmdk"
vmware-vdiskmanager.exe -c -s 40GB -a lsilogic -t 2 "C:\Users\syspn\Documents\Virtual Machines\ShareDisk\data.vmdk"
3.2 Modify the VMX Files (Windows Host)

Note: shut down both virtual machines, open the vmx file in each VM's directory, and append the content below, changing the paths to the actual path and file names you created. Reopen the virtual machines when done.

# shared disk configuration
diskLib.dataCacheMaxSize=0
diskLib.dataCacheMaxReadAheadSize=0
diskLib.dataCacheMinReadAheadSize=0
diskLib.dataCachePageSize=4096
diskLib.maxUnsyncedWrites = "0"

disk.locking = "FALSE"
scsi1.sharedBus = "virtual"
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"

scsi1:0.mode = "independent-persistent"
scsi1:0.deviceType = "disk"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "C:\Users\syspn\Documents\Virtual Machines\ShareDisk\ocr1.vmdk"
scsi1:0.redo = ""

scsi1:1.mode = "independent-persistent"
scsi1:1.deviceType = "disk"
scsi1:1.present = "TRUE"
scsi1:1.fileName = "C:\Users\syspn\Documents\Virtual Machines\ShareDisk\ocr2.vmdk"
scsi1:1.redo = ""

scsi1:2.mode = "independent-persistent"
scsi1:2.deviceType = "disk"
scsi1:2.present = "TRUE"
scsi1:2.fileName = "C:\Users\syspn\Documents\Virtual Machines\ShareDisk\ocr3.vmdk"
scsi1:2.redo = ""

scsi1:3.mode = "independent-persistent"
scsi1:3.deviceType = "disk"
scsi1:3.present = "TRUE"
scsi1:3.fileName = "C:\Users\syspn\Documents\Virtual Machines\ShareDisk\fra.vmdk"
scsi1:3.redo = ""

scsi1:4.mode = "independent-persistent"
scsi1:4.deviceType = "disk"
scsi1:4.present = "TRUE"
scsi1:4.fileName = "C:\Users\syspn\Documents\Virtual Machines\ShareDisk\data.vmdk"
scsi1:4.redo = ""
3.3 Disk Binding

There are two ways to bind the disks:

multipath + udev:

  • Multipath storage must use this method for the disks to be identified correctly: multipath binds devices that share the same scsi_id into a single dm device, and udev then binds the dm device as an ASM disk.

  • Single-path storage can also use this method.

udev only:

  • Bind the /dev/sd* devices directly as ASM disks.

For the concrete steps, see section 4.23 below.
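The grouping multipath performs can be sketched without real hardware: paths that report the same scsi_id (WWID) belong to one logical disk. A minimal illustration with made-up WWIDs:

```shell
# Toy illustration of multipath grouping: paths reporting the same
# scsi_id (WWID) collapse into one logical disk. The WWIDs are made up.
printf '%s\n' \
  'sdb 36589cfc000000000000000000000001' \
  'sdc 36589cfc000000000000000000000001' \
  'sdd 36589cfc000000000000000000000002' |
awk '{paths[$2] = paths[$2] " " $1} END {for (w in paths) print w ":" paths[w]}' |
sort
```

This is exactly why `multipath -ll` later shows one `dm-*` device per WWID, however many `/dev/sd*` paths lead to it.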

4 Pre-Installation System Preparation

4.1 Set the Hostname - Both Nodes
# Node 1
hostnamectl set-hostname rac01
# Node 2
hostnamectl set-hostname rac02
4.2 Adjust /etc/hosts - Both Nodes
echo "#public ip
192.168.110.221  rac01
192.168.110.222  rac02
#priv ip
10.10.10.221  rac01-pri
10.10.10.222  rac02-pri
#vip ip
192.168.110.223  rac01-vip
192.168.110.224  rac02-vip
#scan ip
192.168.110.220  rac-scan" >> /etc/hosts
4.3 NIC Configuration - Both Nodes

Note: verify the dual-NIC configuration.

NIC 1: configure the public IP, with a gateway.

NIC 2: configure the private IP, without a gateway.

Disable IPv6 on both NICs.
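As a sketch of the above for node 1 (the device names ens33/ens34 and the gateway 192.168.110.1 are assumptions; check your actual NIC names with `ip a` and substitute your own gateway), the ifcfg files could look like:

```shell
# Public NIC (ens33 assumed): public IP with a gateway, IPv6 off
cat > /etc/sysconfig/network-scripts/ifcfg-ens33 <<EOF
DEVICE=ens33
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.110.221
NETMASK=255.255.255.0
GATEWAY=192.168.110.1
IPV6INIT=no
EOF
# Private NIC (ens34 assumed): private IP, no gateway, IPv6 off
cat > /etc/sysconfig/network-scripts/ifcfg-ens34 <<EOF
DEVICE=ens34
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.10.10.221
NETMASK=255.255.255.0
IPV6INIT=no
EOF
systemctl restart network
```

Node 2 uses 192.168.110.222 and 10.10.10.222 per the network plan.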

4.4 Test Connectivity - Both Nodes
ping -c 1 rac01; ping -c 1 rac02; ping -c 1 rac01-pri; ping -c 1 rac02-pri;

If any test fails, find and fix the cause before continuing.

4.5 Adjust Network Settings - Both Nodes
echo "NOZEROCONF=yes"  >>/etc/sysconfig/network && cat /etc/sysconfig/network
4.6 Adjust /dev/shm - Both Nodes
# Resize shm
echo "tmpfs    /dev/shm    tmpfs    rw,exec,size=4G    0 0">>/etc/fstab
# Remount
mount -o remount /dev/shm
# Verify the new shm size
df -h
4.7 Disable the Firewall - Both Nodes
systemctl stop firewalld
systemctl disable firewalld
4.8 Disable SELinux - Both Nodes
sed -i 's/enforcing/disabled/g' /etc/selinux/config && grep "SELINUX=" /etc/selinux/config
setenforce 0
4.9 Disable Transparent Huge Pages - Both Nodes
cp /etc/default/grub /etc/default/grub.bak
sed -i 's/quiet/quiet transparent_hugepage=never/g' /etc/default/grub
# Regenerate the grub configuration
grub2-mkconfig -o /boot/grub2/grub.cfg
# Take effect without a reboot
echo never > /sys/kernel/mm/transparent_hugepage/enabled
4.10 Configure a Local Yum Repository - Both Nodes

Note: required in offline environments; optional with internet access.

mv /etc/yum.repos.d/* /tmp/
echo "[local_yum]" >> /etc/yum.repos.d/henry.repo
echo "name = henry_repo" >> /etc/yum.repos.d/henry.repo
echo "baseurl = file:///mnt/" >> /etc/yum.repos.d/henry.repo
echo "enabled = 1" >> /etc/yum.repos.d/henry.repo
echo "gpgcheck = 0" >> /etc/yum.repos.d/henry.repo
mount /dev/cdrom /mnt/
4.11 Install Packages and Tools - Both Nodes
4.11.1 Package Installation

Note: tigervnc is optional; it is not needed if you use X11 forwarding instead.

yum install -y bc*  ntp* binutils*  compat-libcap1*  compat-libstdc++*  dtrace-modules*  dtrace-modules-headers*  dtrace-modules-provider-headers*  dtrace-utils*  elfutils-libelf*  elfutils-libelf-devel* fontconfig-devel*  glibc*  glibc-devel*  ksh*  libaio*  libaio-devel*  libdtrace-ctf-devel*  libXrender*  libXrender-devel*  libX11*  libXau*  libXi*  libXtst*  libgcc*  librdmacm-devel*  libstdc++*  libstdc++-devel*  libxcb*  make*  net-tools*  nfs-utils*  python*  python-configshell*  python-rtslib*  python-six*  targetcli*  smartmontools*  sysstat* gcc* nscd* unixODBC* unzip readline xauth* nano net-tools wget curl tigervnc*
4.11.2 Additional Package

Note: upload compat-libstdc++-33-3.2.3-72.el7.x86_64.rpm to root's home directory first.

rpm -ivh compat-libstdc++-33-3.2.3-72.el7.x86_64.rpm
4.12 Configure Kernel Parameters - Both Nodes

Copy and run the code block below as-is.

# Derive shmall/shmmax from installed memory (/proc/meminfo values are in kB)
memTotal=$(grep MemTotal /proc/meminfo | awk '{print $2}')
# shmall is in 4 kB pages; Oracle's recommended floor is 2097152 pages
shmall=$((memTotal / 4))
if [ $shmall -lt 2097152 ]; then
  shmall=2097152
fi
# shmmax is in bytes; Oracle's recommended floor is 4294967295 bytes
shmmax=$((memTotal * 1024 - 1))
if [ "$shmmax" -lt 4294967295 ]; then
  shmmax=4294967295
fi
cat <<EOF>>/etc/sysctl.conf
fs.file-max = 6815744
kernel.shmall = $shmall
kernel.shmmax = $shmmax
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 16777216
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.wmem_default = 16777216
fs.aio-max-nr = 6194304
vm.dirty_ratio=20
vm.dirty_background_ratio=3
vm.dirty_writeback_centisecs=100
vm.dirty_expire_centisecs=500
vm.swappiness=10
vm.min_free_kbytes=524288
net.core.netdev_max_backlog = 30000
net.core.netdev_budget = 600
#vm.nr_hugepages = 
net.ipv4.conf.all.rp_filter = 2
net.ipv4.conf.default.rp_filter = 2
net.ipv4.ipfrag_time = 60
net.ipv4.ipfrag_low_thresh=6291456
net.ipv4.ipfrag_high_thresh = 8388608
EOF
sysctl -p
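As a sanity check of the arithmetic above, here is how the values work out on an assumed 4 GB host (MemTotal = 4194304 kB, 4 kB pages):

```shell
# shmall/shmmax arithmetic for an assumed 4 GB host
memTotal=4194304
shmall=$((memTotal / 4))            # 1048576 pages -> below the 2097152 floor
if [ "$shmall" -lt 2097152 ]; then shmall=2097152; fi
shmmax=$((memTotal * 1024 - 1))     # 4294967295 bytes -> exactly the floor
if [ "$shmmax" -lt 4294967295 ]; then shmmax=4294967295; fi
echo "kernel.shmall = $shmall"      # -> kernel.shmall = 2097152
echo "kernel.shmmax = $shmmax"      # -> kernel.shmmax = 4294967295
```

So on a 4 GB machine both Oracle floor values apply; on larger hosts the computed values take over.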
4.13 Disable the avahi Services - Both Nodes
systemctl stop avahi-daemon
systemctl disable avahi-daemon
systemctl stop avahi-dnsconfd
systemctl disable avahi-dnsconfd
4.14 Disable Other Services - Both Nodes
# Disable at boot
systemctl disable accounts-daemon.service 
systemctl disable atd.service 
systemctl disable avahi-daemon.service 
systemctl disable avahi-daemon.socket 
systemctl disable bluetooth.service 
systemctl disable brltty.service
systemctl disable chronyd.service
systemctl disable colord.service 
systemctl disable cups.service  
systemctl disable debug-shell.service 
systemctl disable firewalld.service 
systemctl disable gdm.service 
systemctl disable ksmtuned.service 
systemctl disable ktune.service   
systemctl disable libstoragemgmt.service  
systemctl disable mcelog.service 
systemctl disable ModemManager.service 
systemctl disable ntpd.service
systemctl disable postfix.service 
systemctl disable rhsmcertd.service  
systemctl disable rngd.service 
systemctl disable rpcbind.service 
systemctl disable rtkit-daemon.service 
systemctl disable tuned.service
systemctl disable upower.service 
systemctl disable wpa_supplicant.service
# Stop the services now
systemctl stop accounts-daemon.service 
systemctl stop atd.service 
systemctl stop avahi-daemon.service 
systemctl stop avahi-daemon.socket 
systemctl stop bluetooth.service 
systemctl stop brltty.service
systemctl stop chronyd.service
systemctl stop colord.service 
systemctl stop cups.service  
systemctl stop debug-shell.service 
systemctl stop firewalld.service 
systemctl stop gdm.service 
systemctl stop ksmtuned.service 
systemctl stop ktune.service   
systemctl stop libstoragemgmt.service  
systemctl stop mcelog.service 
systemctl stop ModemManager.service 
systemctl stop ntpd.service
systemctl stop postfix.service 
systemctl stop rhsmcertd.service  
systemctl stop rngd.service 
systemctl stop rpcbind.service 
systemctl stop rtkit-daemon.service 
systemctl stop tuned.service
systemctl stop upower.service 
systemctl stop wpa_supplicant.service 
4.15 Configure the SSH Service - Both Nodes
# Set LoginGraceTime to 0 (no login timeout)
sed -i '/#LoginGraceTime 2m/ s/#LoginGraceTime 2m/LoginGraceTime 0/' /etc/ssh/sshd_config && grep LoginGraceTime /etc/ssh/sshd_config
# Speed up SSH logins by disabling DNS lookups
sed -i '/#UseDNS yes/ s/#UseDNS yes/UseDNS no/' /etc/ssh/sshd_config && grep UseDNS /etc/ssh/sshd_config
4.16 Modify the Login PAM Configuration - Both Nodes
cat >> /etc/pam.d/login <<EOF
session required pam_limits.so
EOF
4.17 Configure User Limits - Both Nodes
echo "#ORACLE SETTING
grid soft nproc 16384
grid hard nproc 16384
grid soft nofile 16384
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid hard memlock 8192000
grid soft memlock 8192000

oracle soft nproc 16384
oracle hard nproc 16384
oracle soft nofile 16384
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
oracle hard memlock 8192000
oracle soft memlock 8192000
" >> /etc/security/limits.conf
ulimit -a
4.18 Time Synchronization - Both Nodes

Note: without time synchronization, significant clock drift between nodes can cause cluster split-brain; use a stable local time server as the sync source.

Available time services:

ntp

chrony

# Verify that the clocks match
date
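A minimal chrony client setup, assuming a local time server at 192.168.110.1 (hypothetical; substitute your real NTP source):

```shell
# Hypothetical chrony client setup; replace 192.168.110.1 with your
# actual time server.
yum install -y chrony
cat > /etc/chrony.conf <<EOF
server 192.168.110.1 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
EOF
systemctl enable chronyd
systemctl restart chronyd
# Verify the source and offset
chronyc sources -v
```

If you use chrony, leave chronyd out of the service-disable list in section 4.14.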
4.19 Create Users and Groups - Both Nodes

groupadd -g 54321 oinstall
groupadd -g 54322 dba
groupadd -g 54323 oper
groupadd -g 54324 backupdba
groupadd -g 54325 dgdba
groupadd -g 54326 kmdba
groupadd -g 54327 asmdba
groupadd -g 54328 asmoper
groupadd -g 54329 asmadmin
groupadd -g 54330 racdba
useradd -g oinstall -G dba,oper,backupdba,dgdba,kmdba,asmdba,racdba -u 10000 oracle
useradd -g oinstall -G dba,asmdba,asmoper,asmadmin,racdba -u 10001 grid
echo "oracle" | passwd --stdin oracle
echo "grid" | passwd --stdin grid
4.20 Create Directories - Both Nodes
mkdir -p /u01/app/19.3.0/grid
mkdir -p /u01/app/grid
mkdir -p /u01/app/oracle/product/19.3.0/dbhome_1
chown -R grid:oinstall /u01
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/
4.21 Configure User Environment Variables - Both Nodes

Note: run the code blocks below as root; if a directory differs from the plan, adjust it accordingly.

4.21.1 grid User
# Node 1
cat >> /home/grid/.bash_profile << "EOF"
umask 022
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/19.3.0/grid
export TNS_ADMIN=$ORACLE_HOME/network/admin
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export ORACLE_SID=+ASM1
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH
alias sas='sqlplus / as sysasm'
export PS1="[\`whoami\`@\`hostname\`:"'$PWD]\$ '
EOF

# Node 2
cat >> /home/grid/.bash_profile << "EOF"
umask 022
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/19.3.0/grid
export TNS_ADMIN=$ORACLE_HOME/network/admin
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export ORACLE_SID=+ASM2
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH
alias sas='sqlplus / as sysasm'
export PS1="[\`whoami\`@\`hostname\`:"'$PWD]\$ '
EOF
4.21.2 oracle User
# Node 1
cat >> /home/oracle/.bash_profile << "EOF"
umask 022
export TMP=/tmp
export TMPDIR=$TMP
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/19.3.0/dbhome_1
export ORACLE_HOSTNAME=rac01
export TNS_ADMIN=$ORACLE_HOME/network/admin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export ORACLE_SID=orcl1
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH
alias sas='sqlplus / as sysdba'
export PS1="[\`whoami\`@\`hostname\`:"'$PWD]\$ '
EOF

# Node 2
cat >> /home/oracle/.bash_profile << "EOF"
umask 022
export TMP=/tmp
export TMPDIR=$TMP
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/19.3.0/dbhome_1
export ORACLE_HOSTNAME=rac02
export TNS_ADMIN=$ORACLE_HOME/network/admin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export ORACLE_SID=orcl2
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH
alias sas='sqlplus / as sysdba'
export PS1="[\`whoami\`@\`hostname\`:"'$PWD]\$ '
EOF
4.22 Connect iSCSI Storage - Both Nodes (Optional)

Note: if you use iSCSI shared disks instead of the locally created vmdk files, connect to the shared storage first.

yum install -y iscsi-initiator-utils
systemctl start iscsi
systemctl enable iscsi
# Discover targets
iscsiadm -m discovery -t st -p 192.168.110.199:3260
# Log in to the discovered target
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:oracle19c-rac -p 192.168.110.199 -l
# List logged-in sessions (equivalent to iscsiadm -m session -P 0)
iscsiadm -m session
# Rescan all associated targets/sessions
iscsiadm -m node -R
iscsiadm -m session -R
# Enable automatic login at boot
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:oracle19c-rac -p 192.168.110.199:3260 --op update -n node.startup -v automatic
# Scan for newly added iSCSI storage devices
/usr/bin/scsi-rescan

# Show established iSCSI connections
iscsiadm -m session -P 3
# Log out of a target
iscsiadm --mode node --targetname <target_name> --portal <target_portal> --logout
# Log out of all targets
iscsiadm --mode node --logoutall=all
# Restart iscsid
systemctl restart iscsid
# Remove the automatic login record
iscsiadm --mode node --targetname iqn.2005-10.org.freenas.ctl:oracle19c-rac --portal 192.168.110.199:3260 --op delete
4.23 Configure Shared Storage - Both Nodes
4.23.1 multipath + udev

multipath configuration

# Install multipath
yum install -y device-mapper*
mpathconf --enable --with_multipathd y
# Get the scsi_id of each shared disk
/usr/lib/udev/scsi_id -g -u /dev/sdb
/usr/lib/udev/scsi_id -g -u /dev/sdc
/usr/lib/udev/scsi_id -g -u /dev/sdd
/usr/lib/udev/scsi_id -g -u /dev/sde
/usr/lib/udev/scsi_id -g -u /dev/sdf

mv /etc/multipath.conf /etc/multipath.conf_bak
# Substitute the scsi_id values into the code block below
cat <<EOF>> /etc/multipath.conf
defaults {
    user_friendly_names yes
}
blacklist {
  devnode "^sda"
}
multipaths {
  multipath {
  wwid "36589cfc00000044b6fe5709b1454743d"
  alias asm_ocr01
  }
  multipath {
  wwid "36589cfc000000bc28e3caeba02722b8b"
  alias asm_ocr02
  }
  multipath {
  wwid "36589cfc00000007e32793d447621f10a"
  alias asm_ocr03
  }
  multipath {
  wwid "36589cfc0000000b3db671f80dbce5e2f"
  alias asm_fra
  }  
  multipath {
  wwid "36589cfc000000e0e55b50afdddf54d4b"
  alias asm_data
  }
}
EOF

# Load the kernel module
modprobe dm_multipath
# Flush all unused multipath devices
multipath -F
# Create the multipath devices and print them
multipath -v2
# Show the multipath topology and status
multipath -ll

udev configuration

cd /dev/mapper
for i in asm_*; do
  printf "%s %s\n" "$i" "$(udevadm info --query=all --name=/dev/mapper/"$i" | grep -i dm_uuid)" >>/dev/mapper/udev_info
done
while read -r line; do
  dm_uuid=$(echo "$line" | awk -F'=' '{print $2}')
  disk_name=$(echo "$line" | awk '{print $1}')
  echo "KERNEL==\"dm-*\",ENV{DM_UUID}==\"${dm_uuid}\",SYMLINK+=\"${disk_name}\",OWNER=\"grid\",GROUP=\"asmadmin\",MODE=\"0660\"" >>/etc/udev/rules.d/99-oracle-asmdevices.rules
done < /dev/mapper/udev_info
# Reload the udev rules
udevadm control --reload-rules
udevadm trigger --type=devices
ll /dev/asm*
4.23.2 udev (without multipath)

Note: /dev/sd[b-f] corresponds to b c d e f below; if you have more disks, add them to the loop.

for i in b c d e f;
do
echo "KERNEL==\"sd*\", ENV{DEVTYPE}==\"disk\", SUBSYSTEM==\"block\", PROGRAM==\"/lib/udev/scsi_id -g -u -d \$devnode\", RESULT==\"`/lib/udev/scsi_id -g -u -d /dev/sd$i`\", SYMLINK+=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"" >> /etc/udev/rules.d/99-oracle-asmdevices.rules
done

# Reload the udev rules file
/sbin/udevadm control --reload
# Re-trigger device events and check the new device names
/sbin/udevadm trigger --type=devices --action=change
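To see the kind of rule line the loop above emits, here is the same echo with a fabricated WWID in place of the live scsi_id query:

```shell
# One generated rule line, shown with a made-up WWID instead of a real
# scsi_id query result.
wwid=36589cfc000000000000000000000abc
i=b
echo "KERNEL==\"sd*\", ENV{DEVTYPE}==\"disk\", SUBSYSTEM==\"block\", PROGRAM==\"/lib/udev/scsi_id -g -u -d \$devnode\", RESULT==\"$wwid\", SYMLINK+=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
```

The RESULT field is what ties the rule to one physical disk; the SYMLINK, OWNER, GROUP and MODE fields are what Grid Infrastructure needs to see on the ASM candidate devices.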
4.24 SSH User Equivalence

Note: on all nodes (rac01, rac02), generate key pairs as each user (oracle, grid, root).

# Generate the key pairs
rm -rf ~/.ssh
mkdir ~/.ssh
ssh-keygen -t rsa
ssh-keygen -t dsa

Note: on node 1 (rac01) only, as each user (oracle, grid, root), append the public keys to build the trust.

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh rac02 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh rac02 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys rac02:~/.ssh/authorized_keys

Note: on all nodes (rac01, rac02), as each user (oracle, grid, root), verify the equivalence; if none of the logins prompts for a password, the setup is complete.

ssh rac01 date; ssh rac02 date; ssh rac01-pri date; ssh rac02-pri date;
4.25 Preparation Complete; Reboot the System - Both Nodes

5 Install Oracle GI

5.1 Install Grid - Node 1

Note: upload LINUX.X64_193000_grid_home.zip to the /opt/ directory.

# As root, change the package ownership
chown grid:oinstall /opt/LINUX.X64_193000_grid_home.zip
# Switch to the grid user and unzip the GI package
su - grid
unzip /opt/LINUX.X64_193000_grid_home.zip -d $ORACLE_HOME
5.2 Install the cvuqdisk Package - Both Nodes
# Run as root
export CVUQDISK_GRP=oinstall
rpm -ivh /u01/app/19.3.0/grid/cv/rpm/cvuqdisk-1.0.10-1.rpm
# Copy cvuqdisk to node 2
scp /u01/app/19.3.0/grid/cv/rpm/cvuqdisk-1.0.10-1.rpm rac02:
# Install on node 2
export CVUQDISK_GRP=oinstall
rpm -ivh /root/cvuqdisk-1.0.10-1.rpm
5.3 Pre-Installation Checks - Node 1
# Run as the grid user
# Use CVU to validate the hardware and operating system setup
cd $ORACLE_HOME
# If the check asks for fixup scripts, log in as root on all nodes (rac01, rac02), run the indicated scripts, then press Enter to continue the check
./runcluvfy.sh stage -pre crsinst -n rac01,rac02 -fixup -verbose
./runcluvfy.sh stage -pre crsinst -n rac01,rac02 -verbose
./runcluvfy.sh stage -post hwos -n rac01,rac02 -verbose
5.4 Run the Installer - Node 1
su - grid
cd $ORACLE_HOME
./gridSetup.sh

Installer options:

Configuration Option > Configure Oracle Grid Infrastructure for a New Cluster
Cluster Configuration > Configure an Oracle Standalone Cluster
Grid Plug and Play > Cluster Name: rac
                     SCAN Name: rac-scan
Cluster Node Information > Add node 2, then check that each Public Hostname and Virtual Hostname in the list matches the names set in the hosts file
Network Interface Usage > set the private interface to ASM & Private
Storage Option > Use Oracle Flex ASM for storage
Grid Infrastructure Management Repository > No
Create ASM Disk Group > Disk Group Name: OCR; select asm_ocr01, asm_ocr02, asm_ocr03 below
ASM Password > Use same passwords for these accounts: set the password
Failure Isolation > Do not use IPMI
Management Options > leave unchecked
Operating System Groups > defaults
Installation Location > defaults
Create Inventory > defaults
Root Script Execution > to run the root scripts automatically, check "Automatically run ..." and enter the root password
Prerequisite Checks > wait for the checks to finish
Summary > defaults
Install Product > wait for the installation to finish; if error INS-20802 appears, click OK
Finish > Close
5.5 GI Installation Complete

Check the cluster status - both nodes:

crsctl stat res -t

[grid@rac01:/home/grid]$ crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac01                    STABLE
               ONLINE  ONLINE       rac02                    STABLE
ora.chad
               ONLINE  ONLINE       rac01                    STABLE
               ONLINE  ONLINE       rac02                    STABLE
ora.net1.network
               ONLINE  ONLINE       rac01                    STABLE
               ONLINE  ONLINE       rac02                    STABLE
ora.ons
               ONLINE  ONLINE       rac01                    STABLE
               ONLINE  ONLINE       rac02                    STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       rac01                    STABLE
      2        ONLINE  ONLINE       rac02                    STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac01                    STABLE
ora.OCR.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac01                    STABLE
      2        ONLINE  ONLINE       rac02                    STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       rac01                    Started,STABLE
      2        ONLINE  ONLINE       rac02                    Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       rac01                    STABLE
      2        ONLINE  ONLINE       rac02                    STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac01                    STABLE
ora.qosmserver
      1        ONLINE  ONLINE       rac01                    STABLE
ora.rac01.vip
      1        ONLINE  ONLINE       rac01                    STABLE
ora.rac02.vip
      1        ONLINE  ONLINE       rac02                    STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac01                    STABLE
--------------------------------------------------------------------------------
5.6 Create ASM Disk Groups - Any Node
# Run as the grid user
asmca
# Create the remaining disk groups as follows
Right-click Disk Groups > Create... > name it FRA - choose External redundancy - check /dev/asm_fra below > OK
Right-click Disk Groups > Create... > name it DATA - choose External redundancy - check /dev/asm_data below > OK
Wait for creation to finish and check the state; MOUNTED(2 of 2) means both nodes mounted the group. Click Exit to quit.
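Alternatively, the same disk groups can be created from SQL*Plus on the ASM instance instead of the asmca GUI. A sketch, run as grid on one node (the srvctl step assumes the group names above):

```shell
# Create the FRA and DATA disk groups from the ASM instance (sketch);
# /dev/asm_fra and /dev/asm_data are the udev-bound devices from 4.23.
sqlplus / as sysasm <<EOF
CREATE DISKGROUP FRA  EXTERNAL REDUNDANCY DISK '/dev/asm_fra';
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK '/dev/asm_data';
EOF
# Mount the new groups on the second node as well
srvctl start diskgroup -diskgroup FRA  -node rac02
srvctl start diskgroup -diskgroup DATA -node rac02
```

Either way, `asmcmd lsdg` afterwards should list OCR, FRA and DATA as MOUNTED.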

6 Install Oracle DB

6.1 Install the DB Software - Node 1

Note: upload LINUX.X64_193000_db_home.zip to the /opt/ directory.

# As root, change the package ownership
chown oracle:oinstall /opt/LINUX.X64_193000_db_home.zip
# Switch to the oracle user and unzip the DB package
su - oracle
unzip /opt/LINUX.X64_193000_db_home.zip -d $ORACLE_HOME
6.2 Run the Installer - Node 1
su - oracle
cd $ORACLE_HOME
./runInstaller

Installer options:

Configuration Option > Set Up Software Only
Database Installation Option > Oracle Real Application Clusters database installation
Nodes Selection > select all nodes
Database Edition > Enterprise Edition
Installation Location > defaults
Operating System Groups > defaults
Root Script Execution > to run the root scripts automatically, check "Automatically run ..." and enter the root password
Prerequisite Checks > wait for the checks to finish and decide whether fixes are needed; otherwise check Ignore All
Summary > Install
Install Product > wait for the installation to finish
Finish > Close
6.3 Verify the Installed Software - Any Node

Run the command below and confirm you can log in and that the version information is correct:

[oracle@rac01:/home/oracle]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Dec 8 19:58:16 2023
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to an idle instance.

SQL>

7 Create the Database

7.1 Database Plan
Item            Description
Memory plan     PGA, SGA
processes       1000
Character set   AL32UTF8
Archive mode    enabled
redo            5 groups
undo            2 G, autoextend up to 4 G
temp            4 G
Flashback area  4 G
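The plan items DBCA does not expose directly (undo/temp sizing, extra redo groups) can be applied after creation from SQL*Plus. A sketch only: the ASM file names and the redo group number below are hypothetical, so query v$datafile, v$tempfile and v$log for the real ones first.

```shell
# Post-creation adjustments matching the plan above (sketch; run as oracle)
sqlplus / as sysdba <<EOF
-- processes = 1000 (takes effect after a restart)
ALTER SYSTEM SET processes=1000 SCOPE=SPFILE SID='*';
-- flashback area: 4 G
ALTER SYSTEM SET db_recovery_file_dest_size=4G SCOPE=BOTH SID='*';
-- undo: 2 G now, autoextend up to 4 G (file name is hypothetical)
ALTER DATABASE DATAFILE '+DATA/ORCL/DATAFILE/undotbs01.dbf' RESIZE 2G;
ALTER DATABASE DATAFILE '+DATA/ORCL/DATAFILE/undotbs01.dbf' AUTOEXTEND ON NEXT 256M MAXSIZE 4G;
-- temp: 4 G (file name is hypothetical)
ALTER DATABASE TEMPFILE '+DATA/ORCL/TEMPFILE/temp01.dbf' RESIZE 4G;
-- redo: one extra group toward the 5-group target (group 9 assumed free)
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 9 SIZE 200M;
EOF
```

Repeat the ADD LOGFILE statement per thread until each thread has five groups.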
7.2 Create the Database with DBCA

Run the dbca command as the oracle user:

Database Operation > Create a database
Creation Mode > Advanced configuration
Deployment Type > Database type: Oracle Real Application Clusters (RAC) database
                  Configuration Type: Admin Managed
                  Select a template suited to your installation
Nodes Selection > select all nodes
Database Identification > fill in the instance information; decide whether to use a PDB
Storage Option > use ASM storage; verify the Database file location is correct
Fast Recovery Option > enable flashback and archiving as required
Data Vault Option > defaults, not configured
Configuration Options > configure for your environment; this test sets Use Automatic Shared Memory Management: 1488
                        Sizing tab: change Processes to 1000
                        Character sets: set to AL32UTF8
                        Sample schemas: check Add sample schemas to the database
Management Options > EM configuration; keep the defaults
User Credentials > Use the same administrative password for all accounts: set the password
Creation Option > keep the defaults
Prerequisite Checks > wait for the checks to finish, Ignore All
Summary > keep the defaults, Finish
Progress Page > wait for creation to finish
Finish > Close
7.3 Creation Complete
7.3.1 Check Instance Status - Any Node

Check the instances as the grid user:

su - grid
srvctl status database -d orcl
# Shows whether each instance is running

Check the instance as the oracle user:

su - oracle
# Check the listener status
lsnrctl status
# Log in to the instance
sqlplus / as sysdba
# Check the database open mode
select open_mode from v$database;

8 RAC Daily Management Commands

8.1 Cluster Resource Status
su - grid
crsctl status res -t
8.2 Cluster Service Status
su - grid
crsctl check cluster -all
8.3 Database Status
su - oracle
srvctl status database -d orcl
8.4 GI Listener Status
su - grid
lsnrctl status
srvctl status listener
8.5 SCAN Status
su - grid
srvctl status scan
srvctl status scan_listener
lsnrctl status LISTENER_SCAN1
8.6 Nodeapps Status
su - grid
srvctl status nodeapps
8.7 VIP Status
su - grid
srvctl status vip -node rac01
srvctl status vip -node rac02
8.8 Database Configuration
su - grid
srvctl config database -d orcl
crsctl status res ora.orcl.db -p |grep -i auto
8.9 OCR
su - grid
ocrcheck
8.10 VOTEDISK
su - grid
crsctl query css votedisk
8.11 GI Version
su - grid
crsctl query crs releaseversion
crsctl query crs activeversion
8.12 ASM
su - grid
asmcmd
> lsdg
> lsof
> lsdsk
8.13 Start/Stop the RAC
su - grid
# Stop/start a single instance
srvctl stop/start instance -d orcl -i orcl1
# Stop/start all instances
srvctl stop/start database -d orcl
# Stop/start CRS
crsctl stop/start crs
# Stop/start the cluster services
crsctl stop/start cluster -all
crsctl start/stop crs      # manages the local node only
crsctl start/stop cluster  # can manage multiple nodes ([-all] for all nodes)
crsctl start/stop crs      # manages CRS including the OHASD process
crsctl start/stop cluster  # does not include OHASD; OHASD must already be running
srvctl stop/start database # starts/stops all instances and their enabled services
8.14 Relocate the SCAN Listener
su - grid
srvctl relocate scan_listener -i 1 -n rac02
8.15 Relocate a VIP
su - grid
srvctl config network
srvctl relocate vip -vip rac02-vip -node rac02