Prerequisites
Document for implementing 11gR2
Server Hardware Requirements
• At least 2 GB of physical RAM
• Swap space proportional to the amount of available RAM, as shown in the following table:

Available RAM         | Swap Space Required
Between 1 GB and 2 GB | 1.5 times the size of RAM
More than 2 GB        | Equal to the size of RAM

• 400 MB of disk space in the /tmp directory
• Up to 4 GB of disk space for the Oracle software (quick verification commands follow this list)
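These requirements can be verified up front; a minimal sketch of the usual checks (run as root on each node):
# grep MemTotal /proc/meminfo
# grep SwapTotal /proc/meminfo
# df -h /tmp
# df -h /u01
The first two commands report physical RAM and configured swap in kB; the df commands show the free space in /tmp and in the location that will hold the Oracle software (assumed here to be /u01, as configured later in this document).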
Software Requirements:
• Minimum required RPMs for OEL 5 (on both RAC nodes; a verification command follows the list):
binutils-2.17.50.0.6-2.el5
compat-libstdc++-33-3.2.3-61
elfutils-libelf-0.125-3.el5
elfutils-libelf-devel-0.125
gcc-4.1.1-52
gcc-c++-4.1.1-52
glibc-2.5-12
glibc-common-2.5-12
glibc-devel-2.5-12
glibc-headers-2.5-12
libaio-0.3.106
libaio-devel-0.3.106
libgcc-4.1.1-52
libstdc++-4.1.1
libstdc++-devel-4.1.1-52.el5
make-3.81-1.1
sysstat-7.0.0
unixODBC-2.2.11
unixODBC-devel-2.2.11
libXp-1.0.0-8
oracleasmlib-2.0.4-1 (download from
http://www.oracle.com/technetwork/server-storage/linux/downloads/rhel5-084877.html)
cvuqdisk-1.0.1-1 (located on the Clusterware media, in the rpm directory)
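Whether these packages are present can be confirmed with rpm -q; a minimal check sketch (any package reported as "not installed" must be added before proceeding):
# rpm -q binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
  gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers libaio libaio-devel \
  libgcc libstdc++ libstdc++-devel make sysstat unixODBC unixODBC-devel libXp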
• Configuring the Kernel Parameters:
# more /etc/sysctl.conf
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.msgmni = 2878
kernel.msgmax = 8192
kernel.msgmnb = 65535
vm.lower_zone_protection = 500
vm.oom-kill = 0
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 256 32000 100 142
fs.file-max = 131072
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default=4194304
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=262144
#/sbin/sysctl -p
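After reloading with sysctl -p, individual values can be spot-checked, for example:
# /sbin/sysctl kernel.sem kernel.shmmax fs.file-max net.ipv4.ip_local_port_range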
• Setting Shell Limits for the Oracle User:
# more /etc/security/limits.conf
#more /etc/security/limits.conf
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
# more /etc/pam.d/login
session required /lib/security/pam_limits.so
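Oracle's installation guides also suggest raising the limits for the oracle user's login shells; a typical /etc/profile addition (a sketch; adjust if a different default shell is used):
if [ $USER = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi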
Each node in the cluster must meet the following requirements:
• Each node must have at least two network adapters: one for the public network interface and one for the private network interface (the interconnect).
• The public interface names associated with the network adapters for each network must be the same on all nodes, and the private interface names associated with the network adapters should be the same on all nodes.
- For increased reliability, configure redundant public and private network adapters for each node.
- For the public network, each network adapter must support TCP/IP.
- For the private network, the interconnect must support the user datagram protocol (UDP) using high-speed network adapters and switches that support TCP/IP (Gigabit Ethernet or better recommended).
- For the private network, the endpoints of all designated interconnect interfaces must be completely reachable on the network; every node must be connected to every private network. You can test whether an interconnect interface is reachable using a ping command, as shown below.
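For example, from rac1 the private interface of the other node can be checked with (hostnames here are taken from the example configuration later in this document):
# ping -c 3 rac2-priv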
Node Time Requirements:
Before starting the installation, ensure
that each member node of the cluster is set as closely as possible to the same
date and time. Oracle strongly recommends using the Network Time Protocol
feature of most operating systems for this purpose, with all nodes using the
same reference Network Time Protocol server.
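On OEL 5 this usually means running ntpd; Oracle additionally recommends the slewing option (-x) so the clock is never stepped backwards. A sketch of the typical configuration (the exact OPTIONS line may differ on your system):
# more /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
# service ntpd restart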
IP Requirements:
Before starting
the installation, you must have the following IP addresses available for each
node:
- An IP address with an associated network name registered in the domain name service (DNS) for the public interface. If you do not have an available DNS, then record the network name and IP address in the system hosts file, /etc/hosts.
- One virtual IP (VIP) address with an associated network name registered in DNS. If you do not have an available DNS, then record the network name and VIP address in the system hosts file, /etc/hosts. Select an address for your VIP that meets the following requirements:
- The IP address and network name are currently unused
- The VIP is on the same subnet as your public interface
• A private IP address with a host name for each private interface
Node | Interface Name | Type    | IP Address (example) | Registered In
rac1 | rac1           | Public  | 143.46.43.100        | DNS (if available, else the hosts file)
rac1 | rac1-vip       | Virtual | 143.46.43.104        | DNS (if available, else the hosts file)
rac1 | rac1-priv      | Private | 10.0.0.1             | Hosts file
rac2 | rac2           | Public  | 143.46.43.101        | DNS (if available, else the hosts file)
rac2 | rac2-vip       | Virtual | 143.46.43.105        | DNS (if available, else the hosts file)
rac2 | rac2-priv      | Private | 10.0.0.2             | Hosts file
Public IPs, VIPs, and SCAN VIPs are resolved through DNS. The private IPs for the cluster interconnect are resolved through /etc/hosts. The hostname, together with the public/private and NAS network settings, is configured during OEL network installation. The final network configuration files are listed here.
(a) hostname:
For node rac1:
[root@rac1 ~]# hostname
rac1.server.com
rac1.server.com: /etc/sysconfig/network (example)
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=rac1.server.com
For node rac2:
[root@rac2 ~]# hostname
rac2.server.com
rac2.server.com: /etc/sysconfig/network (example)
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=rac2.server.com
(b) Private Network for Cluster Interconnect:
rac1.server.com: /etc/sysconfig/network-scripts/ifcfg-eth0 (example)
# Linksys Gigabit Network Adapter
DEVICE=eth0
BOOTPROTO=static
BROADCAST=192.168.0.255
HWADDR=00:22:6B:BF:4E:60
IPADDR=192.168.0.1
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.0.0
ONBOOT=yes
rac2.server.com: /etc/sysconfig/network-scripts/ifcfg-eth0 (example)
# Linksys Gigabit Network Adapter
DEVICE=eth0
BOOTPROTO=static
BROADCAST=192.168.0.255
HWADDR=00:22:6B:BF:4E:4B
IPADDR=192.168.0.2
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.0.0
ONBOOT=yes
(c) Public Network:
rac1.server.com: /etc/sysconfig/network-scripts/ifcfg-eth2 (example)
# Broadcom Corporation NetXtreme
BCM5751 Gigabit Ethernet PCI Express
DEVICE=eth2
BOOTPROTO=static
BROADCAST=192.168.2.255
HWADDR=00:18:8B:04:6A:62
IPADDR=192.168.2.1
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.2.0
ONBOOT=yes
rac2.server.com: /etc/sysconfig/network-scripts/ifcfg-eth2 (example)
# Broadcom Corporation NetXtreme
BCM5751 Gigabit Ethernet PCI Express
DEVICE=eth2
BOOTPROTO=static
BROADCAST=192.168.2.255
HWADDR=00:18:8B:24:F8:58
IPADDR=192.168.2.2
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.2.0
ONBOOT=yes
(d) /etc/hosts file:
/etc/hosts (on all the RAC nodes) (example)
# Do not remove the following line,
# or various programs that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
##=======================================
# Public Network
##=======================================
192.168.2.1 rac1.server.com rac1
192.168.2.2 rac2.server.com rac2
##=======================================
# VIPs
##=======================================
192.168.2.51 rac1-vip.server.com rac1-vip
192.168.2.52 rac2-vip.server.com rac2-vip
##=======================================
# Private Network for Cluster Interconnect
##=======================================
192.168.0.1 rac1-priv
192.168.0.2 rac2-priv
##=======================================
Creating the Oracle Groups, User, and Inventory Location:
# groupadd -g 1000 oinstall (Oracle Inventory Group)
# groupadd -g 1031 dba (Oracle OSDBA Group)
# useradd -u 1101 -g oinstall -G dba oracle (Oracle Software Owner User)
# mkdir -p /u01/app/oracle
# chown -R oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01
# passwd oracle
# more /etc/oraInst.loc
inventory_loc=/u01/app/oracle/oraInventory
inst_group=oinstall
# id nobody
# /usr/sbin/useradd nobody (if the above command does not return any value)
Repeat this procedure on all the
cluster nodes.
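A quick way to confirm the user and groups were created identically on every node:
# id oracle
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1031(dba)
The uid and gid values shown correspond to the useradd/groupadd commands above and must match across all cluster nodes.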
Create Required Software Directories (a creation sketch follows this list):
Oracle Base Directory
Oracle Inventory Directory
Oracle Clusterware Home Directory
Oracle Home Directory
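A sketch of creating these directories, assuming an OFA-style layout under /u01 (the grid home and Oracle home paths below are assumptions; substitute the paths chosen for your installation):
# mkdir -p /u01/app/oracle (Oracle base)
# mkdir -p /u01/app/oracle/oraInventory (Oracle inventory, matching /etc/oraInst.loc above)
# mkdir -p /u01/app/11.2.0/grid (Clusterware/grid home - assumed path)
# mkdir -p /u01/app/oracle/product/11.2.0/db_1 (Oracle home - assumed path)
# chown -R oracle:oinstall /u01/app
# chmod -R 775 /u01/app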
Checking the Configuration of the Hangcheck-timer Module
Before installing Oracle Real Application Clusters on Linux systems, verify that the hangcheck-timer module is loaded and configured correctly. hangcheck-timer monitors the Linux kernel for extended operating system hangs that could affect the reliability of a RAC node and cause database corruption. If a hang occurs, the module restarts the node within seconds.
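A sketch of checking and loading the module (the hangcheck_tick and hangcheck_margin values shown are commonly used examples, not mandated values):
# /sbin/lsmod | grep hangcheck
# /sbin/modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
# echo "options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180" >> /etc/modprobe.conf
The last line makes the settings persistent across reboots on OEL 5.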
This package is located in the rpm directory on the Clusterware media and needs to be installed after the oinstall group is created. In my case, as this was a fresh install of 10g R2 on new machines, no older version of cvuqdisk was present; if one exists, the older version must be removed first.
# pwd
/home/oracle/10gr2/clusterware/rpm
# export CVUQDISK_GRP=oinstall
# echo $CVUQDISK_GRP
oinstall
# rpm -ivh cvuqdisk-1.0.1-1.rpm
Preparing...
########################################### [100%]
1:cvuqdisk
########################################### [100%]
# rpm -qa | grep cvuqdisk
cvuqdisk-1.0.1-1
On All the Cluster Nodes:
su - oracle
mkdir ~/.ssh
chmod 700 ~/.ssh
Generate the RSA and DSA keys:
/usr/bin/ssh-keygen -t rsa
/usr/bin/ssh-keygen -t dsa
On node rac1:
touch ~/.ssh/authorized_keys
cd ~/.ssh
(a) Add these keys to the authorized_keys file.
cat id_rsa.pub >> authorized_keys
cat id_dsa.pub >> authorized_keys
(b) Send this file to node2.
scp authorized_keys rac2:/home/oracle/.ssh/
On node rac2:
(a) Add these keys to the authorized_keys file.
cd ~/.ssh
cat id_rsa.pub >> authorized_keys
cat id_dsa.pub >> authorized_keys
(b) Send this file to node1.
scp authorized_keys rac1:/home/oracle/.ssh/
On All the Nodes:
chmod 600 ~/.ssh/authorized_keys
ssh rac1 date
ssh rac2 date
ssh rac1.server.com date
ssh rac2.server.com date
ssh rac1-priv date
ssh rac2-priv date
Enter 'yes' and continue when prompted.
Enabling SSH User Equivalency on Cluster Member Nodes
su - oracle
exec /usr/bin/ssh-agent $SHELL
/usr/bin/ssh-add
At the prompts, enter the pass phrase for each key that you generated.
Prepare the Shared Disks
Both Oracle Clusterware and Oracle RAC require access to disks that are shared by each node in the cluster.
For Clusterware:
1. OCFS (Release 1 or 2)
2. Raw devices
3. A third-party cluster filesystem such as GPFS or Veritas
For RAC database storage:
1. OCFS (Release 1 or 2)
2. ASM
3. Raw devices
4. A third-party cluster filesystem such as GPFS or Veritas
Create the partitions on the disks (according to our environment); a partitioning sketch follows the table:
File System | Size   | Comments
/dev/sdb    | 650 GB | Needed for DATA1
/dev/sdc    | 650 GB | Needed for DATA2
/dev/sdd    | 550 GB | Needed for RECO
/dev/sde    | 5 GB   | Needed for OCR1
/dev/sdf    | 5 GB   | Needed for OCR2
/dev/sdg    | 5 GB   | Needed for OCR3
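A sketch of partitioning one of these devices with fdisk, creating a single primary partition that spans the whole disk (repeat for each device in the table; device names are taken from the table above):
# fdisk /dev/sde
   n   (create a new partition)
   p   (primary)
   1   (partition number)
       (accept the default first and last cylinders)
   w   (write the partition table and exit)
# /sbin/partprobe /dev/sde (re-read the partition table; run on the other node as well)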
Mount points to be mounted on the servers (according to our environment):
Mount Point | Total Size | Comments
/u01        | 200 GB     | 100 GB on each server, for the Oracle home and grid home.
/Backups    | 535 GB     | 265 GB on one server and 270 GB on the other.
Installation and configuration of Automatic Storage Management
Running the rootpre.sh Script for Installations on x86 (64-bit) Systems
If you plan to use Oracle Clusterware on x86 (64-bit) systems, then you must run the rootpre.sh script as the root user on all the nodes of the cluster.
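For ASM storage using ASMLib (the oracleasmlib package listed earlier), a typical configuration sketch looks like the following; the disk labels and the use of the first partition on each device are assumptions based on the partition table above:
# /etc/init.d/oracleasm configure (answer oracle / oinstall, load on boot = y, scan on boot = y)
# /etc/init.d/oracleasm createdisk DATA1 /dev/sdb1
# /etc/init.d/oracleasm createdisk DATA2 /dev/sdc1
# /etc/init.d/oracleasm createdisk RECO /dev/sdd1
# /etc/init.d/oracleasm scandisks (run on the remaining cluster nodes)
# /etc/init.d/oracleasm listdisks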