Oracle Database 11g Release 2 RAC On Linux 5.4 Using NFS.
There are two types of 11gR2 RAC installation:
1. Using a shared file system (shared directory).
2. Using Automatic Storage Management (ASM).
This article covers installation type 1: using a shared file system (shared directory).
NFS is an abbreviation of Network File System, a platform-independent technology created by Sun Microsystems that allows shared access to files stored on computers via an interface called the Virtual File System (VFS), which runs on top of TCP/IP.
Computers that share files are considered NFS servers, while those that access shared files are considered NFS clients. An individual computer can be an NFS server, an NFS client or both. We can use NFS to provide shared storage for a RAC installation.
In a production environment we would expect the NFS server to be a NAS, but for testing it can just as easily be another server, or even one of the RAC nodes itself. To cut costs, this article uses one of the RAC nodes as the source of the shared storage. Obviously, this means that if that node goes down the whole database is lost, so it's not a sensible idea to do this if you are testing high availability.
If you have access to a NAS or a third server you can easily use that for the shared storage,
making the whole solution much more resilient. Whichever route you take, the fundamentals of the installation are the same.
The Single Client Access Name (SCAN) should really be defined in DNS or GNS and resolve, round-robin, to three addresses on the same subnet as the public and virtual IPs. In this article I've defined it as a single IP address in the "/etc/hosts" file, which is wrong and will cause the cluster verification to fail, but it allows me to complete the install without the presence of a DNS.
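If you take this shortcut, the SCAN only needs a single entry in the "/etc/hosts" file on each node. A minimal sketch is shown below, using the SCAN name and address that appear in the cluster verification output later in this article; treat the exact name and subnet as assumptions for this test setup.
192.168.2.201   rac-scan.localdomain   rac-scan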
Server Hardware Requirements:
Each node must meet the following minimum hardware requirements:
We have two nodes configured on VirtualBox.
Virtual machine names: RAC1 and RAC2
- At least 2 GB of physical RAM
- Swap space sized according to the available RAM:

  Available RAM            Swap Space Required
  Between 1 GB and 2 GB    1.5 times the size of RAM
  More than 2 GB           Equal to the size of RAM

- 400 MB of disk space in the /tmp directory
- Up to 4 GB of disk space for the Oracle software
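The available RAM, swap and /tmp space can be checked with standard commands before starting, for example:
[root@RAC1] # grep MemTotal /proc/meminfo
[root@RAC1] # grep SwapTotal /proc/meminfo
[root@RAC1] # df -h /tmp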
Software Requirements:
Before we begin, first download the required software.
- Download and install Oracle VirtualBox from https://www.virtualbox.org
- Download Oracle Linux from http://edelivery.oracle.com/linux
Minimum Required RPMs (on both RAC nodes, RAC1 and RAC2):
binutils-2.17.50.0.6-2.el5
compat-libstdc++-33-3.2.3-61
elfutils-libelf-0.125-3.el5
elfutils-libelf-devel-0.125
gcc-4.1.1-52
gcc-c++-4.1.1-52
glibc-2.5-12
glibc-common-2.5-12
glibc-devel-2.5-12
glibc-headers-2.5-12
libaio-0.3.106
libaio-devel-0.3.106
libgcc-4.1.1-52
libstdc++-4.1.1
libstdc++-devel-4.1.1-52.el5
make-3.81-1.1
sysstat-7.0.0
unixODBC-2.2.11
unixODBC-devel-2.2.11
libXp-1.0.0-8
oracleasmlib-2.0.4-1 (download from http://www.oracle.com/technetwork/server-storage/linux/downloads/rhel5-084877.html)
cvuqdisk-1.0.1-1 (located on the Clusterware media, under the rpm folder)
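As a quick pre-check, the packages can be queried in one go; this is only a convenience sketch, and any package reported as "not installed" must be added from the OEL media.
[root@RAC1] # rpm -q binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers libaio libaio-devel libgcc libstdc++ libstdc++-devel make sysstat unixODBC unixODBC-devel libXp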
Configuring the Kernel Parameters:
[root@sujeet] vi /etc/sysctl.conf
Add or amend the following lines in the "/etc/sysctl.conf" file.

fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 1054504960
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

Run the following command to apply the new kernel parameters.

[root@sujeet] /sbin/sysctl -p

Add the following lines to the "/etc/security/limits.conf" file.
[root@sujeet] vi /etc/security/limits.conf
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
Add the following line to the "/etc/pam.d/login" file, if it is not already present.
session required pam_limits.so
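After logging in again as the oracle user, the new limits can be verified with ulimit; the expected values below simply mirror the settings above.
[root@RAC1] # su - oracle
[oracle@RAC1] $ ulimit -Sn    # soft nofile, expected 1024
[oracle@RAC1] $ ulimit -Hn    # hard nofile, expected 65536
[oracle@RAC1] $ ulimit -Su    # soft nproc, expected 2047
[oracle@RAC1] $ ulimit -Hu    # hard nproc, expected 16384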
Network Requirements:
Each node in the cluster must meet the following requirements:
- Each node must have at least two network adapters: one for the public network interface, and one for the private network interface (the interconnect).
- The public interface names associated with the network adapters for each network must be the same on all nodes, and the private interface names associated with the network adapters should be the same on all nodes.
- For increased reliability, configure redundant public and private network adapters for each node.
- For the public network, each network adapter must support TCP/IP.
- For the private network, the interconnect must support the user datagram protocol (UDP) using high-speed network adapters and switches that support TCP/IP (Gigabit Ethernet or better recommended).
- For the private network, the endpoints of all designated interconnect interfaces must be completely reachable on the network. There should be no node that is not connected to every private network. You can test whether an interconnect interface is reachable using a ping command.
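For example, interconnect reachability can be checked with a simple ping from each node, using the private host names defined later in the hosts file:
[root@RAC1] # ping -c 3 RAC2-priv
[root@RAC2] # ping -c 3 RAC1-priv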
Node Time Requirements:
Before starting the installation, ensure
that each member node of the cluster is set as closely as possible to the same
date and time. Oracle strongly recommends using the Network Time Protocol
feature of most operating systems for this purpose, with all nodes using the
same reference Network Time Protocol server.
IP Requirements:
Before starting
the installation, you must have the following IP addresses available for each
node:
- An IP address with an associated network name registered in the domain name service (DNS) for the public interface. If you do not have an available DNS, then record the network name and IP address in the system hosts file, /etc/hosts.
- One virtual IP (VIP) address with an associated network name registered in DNS. If you do not have an available DNS, then record the network name and VIP address in the system hosts file, /etc/hosts. Select an address for your VIP that meets the following requirements:
- The IP address and network name are currently unused
- The VIP is on the same subnet as your public interface
- A private IP address with a host name for each private interface
Node   Interface Name   Type      IP Address (example)   Registered In
RAC1   RAC1             Public    192.168.1.167          DNS (if available, else the hosts file)
RAC1   RAC1-vip         Virtual   192.168.1.243          DNS (if available, else the hosts file)
RAC1   RAC1-priv        Private   192.168.2.167          Hosts file
RAC2   RAC2             Public    192.168.1.166          DNS (if available, else the hosts file)
RAC2   RAC2-vip         Virtual   192.168.1.202          DNS (if available, else the hosts file)
RAC2   RAC2-priv        Private   192.168.2.166          Hosts file
Public, VIP and SCAN addresses are resolved by DNS (when available); the private IPs for the cluster interconnect are resolved through /etc/hosts. The hostname, along with the public/private and NAS networks, is configured at the time of the OEL network installation. The final network configuration files are listed below.
(a) hostname:
For node RAC1:
[root@RAC1 ~]# hostname RAC1.oracle.com
RAC1.oracle.com : /etc/sysconfig/network (example)
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=RAC1.oracle.com
For node RAC2:
[root@RAC2 ~]# hostname RAC2.oracle.com
RAC2.oracle.com: /etc/sysconfig/network (example)
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=RAC2.oracle.com
(b) Private Network for Cluster Interconnect:
RAC1.oracle.com:
/etc/sysconfig/network-scripts/ifcfg-eth0 (example)
# Linksys Gigabit Network Adapter
DEVICE=eth0
BOOTPROTO=static
BROADCAST=192.168.0.255
HWADDR=00:22:6B:BF:4E:60
IPADDR=192.168.0.1
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.0.0
ONBOOT=yes
RAC2.oracle.com:
/etc/sysconfig/network-scripts/ifcfg-eth0 (example)
# Linksys Gigabit Network Adapter
DEVICE=eth0
BOOTPROTO=static
BROADCAST=192.168.0.255
HWADDR=00:22:6B:BF:4E:4B
IPADDR=192.168.0.2
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.0.0
ONBOOT=yes
(c) Public Network:
RAC1.oracle.com:
/etc/sysconfig/network-scripts/ifcfg-eth2 (example)
# Broadcom Corporation NetXtreme BCM5751 Gigabit Ethernet PCI Express
DEVICE=eth2
BOOTPROTO=static
BROADCAST=192.168.2.255
HWADDR=00:18:8B:04:6A:62
IPADDR=192.168.2.1
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.2.0
ONBOOT=yes
RAC2.oracle.com:
/etc/sysconfig/network-scripts/ifcfg-eth2 (example)
# Broadcom Corporation NetXtreme BCM5751 Gigabit Ethernet PCI Express
DEVICE=eth2
BOOTPROTO=static
BROADCAST=192.168.2.255
HWADDR=00:18:8B:24:F8:58
IPADDR=192.168.2.2
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.2.0
ONBOOT=yes
(e) /etc/hosts files:
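The final hosts file content is not reproduced here, but a sketch can be reconstructed from the IP table above (the domain suffixes and the SCAN entry are assumptions; the SCAN name and address are taken from the verification output near the end of this article). Note that the "/etc/fstab" entries later refer to the NFS server as "nas1", so that name must also resolve, for example to the node exporting the shares (RAC1 in this article).
127.0.0.1       localhost.localdomain   localhost
# Public
192.168.1.167   RAC1.oracle.com         RAC1
192.168.1.166   RAC2.oracle.com         RAC2
# Virtual
192.168.1.243   RAC1-vip.oracle.com     RAC1-vip
192.168.1.202   RAC2-vip.oracle.com     RAC2-vip
# Private
192.168.2.167   RAC1-priv.oracle.com    RAC1-priv
192.168.2.166   RAC2-priv.oracle.com    RAC2-priv
# SCAN (test setup only; should normally be resolved by DNS)
192.168.2.201   rac-scan.localdomain    rac-scan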
Creating Oracle Users/Groups/Permissions and Installation
Paths: (On all the RAC Nodes):
[root@RAC1] groupadd -g 1000 oinstall (Oracle Inventory Group)
[root@RAC1] groupadd -g 1031 dba (Oracle OSDBA Group)
[root@RAC1] useradd -u 1101 -g oinstall -G dba oracle (Oracle Software Owner User)
[root@RAC1] passwd oracle
Repeat this procedure on all the cluster nodes (RAC1 and RAC2).
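Once created, the account can be checked on each node; the output shown is what the useradd command above should produce.
[root@RAC1] # id oracle
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1031(dba)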
Create Required Software Directories:
Oracle Base Directory
Oracle Inventory Directory
Oracle Cluster-ware Home Directory
Oracle Home Directory
Create the directories in which the Oracle software will be installed.
[root@RAC1] mkdir -p /u01/app/11.2.0/grid
[root@RAC1] mkdir -p /u01/app/oracle/product/11.2.0/db_1
[root@RAC1] chown -R oracle:dba /u01
[root@RAC1] chmod -R 775 /u01/
Checking the Configuration of the Hangcheck-timer Module
Before installing Oracle Real Application Clusters on Linux systems, verify that the hangcheck-timer module is loaded and configured correctly. The hangcheck-timer module monitors the Linux kernel for extended operating system hangs that could affect the reliability of a RAC node and cause database corruption. If a hang occurs, the module restarts the node within seconds.
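A minimal way to check and load the module is shown below. The hangcheck_tick and hangcheck_margin values are commonly used example settings, not values taken from this article; adjust them to your environment.
[root@RAC1] # lsmod | grep hangcheck
[root@RAC1] # modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
To keep the same settings across reboots, add an options line to "/etc/modprobe.conf":
options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180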
Install cvuqdisk Package (on all the RAC nodes):
This package is located in the rpm directory on the Clusterware media and needs to be installed after the oinstall group has been created. In my case, as this was a fresh install on new machines, no old version of cvuqdisk was present. If an older version is present, it needs to be removed first.
[root@RAC1]# pwd
/home/oracle/10gr2/clusterware/rpm
[root@RAC1] # export CVUQDISK_GRP=oinstall
[root@RAC1] # echo $CVUQDISK_GRP
oinstall
[root@RAC1] # rpm -ivh cvuqdisk-1.0.1-1.rpm
Preparing...
########################################### [100%]
1:cvuqdisk
########################################### [100%]
[root@RAC1] # rpm -qa | grep cvuqdisk
cvuqdisk-1.0.1-1
Change the setting of SELinux to permissive by editing the "/etc/selinux/config" file, making sure the SELINUX flag is set as follows.
[root@RAC1] # vi /etc/selinux/config
SELINUX=permissive
Alternatively, this alteration can be done using the GUI tool (System > Administration > Security Level and Firewall). Click on the SELinux tab and disable the feature.
If you have the Linux firewall enabled, you will need to disable or configure it.
The following is an example of disabling the firewall.
[root@RAC1] # service iptables stop
[root@RAC1] # chkconfig iptables off
Either configure NTP, or make sure it is not configured so the Oracle Cluster Time Synchronization
Service (ctssd) can synchronize the times of the RAC nodes. If you want to reconfigure NTP do the following.
[root@RAC1] # service ntpd stop
Shutting down ntpd: [ OK ]
[root@RAC1] # chkconfig ntpd off
[root@RAC1] # mv /etc/ntp.conf /etc/ntp.conf.orig
[root@RAC1] # rm /var/run/ntpd.pid
If you want to use NTP, you must add the "-x" option into the following line in the "/etc/sysconfig/ntpd" file.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
Then restart NTP.
[root@RAC1] # service ntpd restart
Prepare the Shared Disks
Both Oracle Cluster-ware and Oracle RAC require access to disks that are
shared by each node in the cluster.
[root@RAC1] # mkdir /shared_config
[root@RAC1] # mkdir /shared_grid
[root@RAC1] # mkdir /shared_home
[root@RAC1] # mkdir /shared_data
Add the following lines to the "/etc/exports" file.
[root@RAC1] # vi /etc/exports
/shared_config *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/shared_grid *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/shared_home *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/shared_data *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
Run the following commands to enable and export the NFS shares.
[root@RAC1] # chkconfig nfs on
[root@RAC1] # service nfs restart
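If the NFS service is already running, the new entries can also be published and verified without a full restart, using the standard exportfs utility:
[root@RAC1] # exportfs -a
[root@RAC1] # exportfs -v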
On both RAC1 and RAC2 create the directories in which the Oracle software will be installed.
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/oracle/product/11.2.0/db_1
mkdir -p /u01/oradata
mkdir -p /u01/shared_config
chown -R oracle:dba /u01/app /u01/app/oracle /u01/oradata /u01/shared_config
chmod -R 775 /u01/app /u01/app/oracle /u01/oradata /u01/shared_config
Add the following lines to the "/etc/fstab" file.
[root@RAC1] # vi /etc/fstab
nas1:/shared_config /u01/shared_config nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
nas1:/shared_grid /u01/app/11.2.0/grid nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
nas1:/shared_home /u01/app/oracle/product/11.2.0/db_1 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
nas1:/shared_data /u01/oradata nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
Mount the NFS shares on both servers.
mount /u01/shared_config
mount /u01/app/11.2.0/grid
mount /u01/app/oracle/product/11.2.0/db_1
mount /u01/oradata
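To confirm the shares are mounted with the intended NFS options on each node, the mount table can be checked, for example:
[root@RAC1] # mount | grep /u01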
Make sure the permissions on the shared directories are correct.
chown -R oracle:dba /u01/shared_config
chown -R oracle:dba /u01/app/11.2.0/grid
chown -R oracle:dba /u01/app/oracle/product/11.2.0/db_1
chown -R oracle:dba /u01/oradata
We've made a lot of changes, so it's worth doing a reboot of the servers at this point to make sure all the changes have taken effect.
# shutdown -r now
Install the Grid Infrastructure
Start both RAC nodes, login to RAC1 as the oracle user and start the Oracle installer.
[root@RAC1] # xhost +
[oracle@RAC1] # cd /11gr2/grid
[oracle@RAC1] # ./runInstaller
Select the "Install and Configure Grid Infrastructure for a Cluster" option,
then click the "Next" button
Select the "Advanced Installation" option, then click the "Next" button.
Select the required language support, then click the "Next" button.
Enter cluster information and uncheck the "Configure GNS" option,
then click the "Next" button.
On the "Specify Node Information" screen, click the "Add" button.
Enter the details of the second node in the cluster, then click the "OK" button.
Click the "SSH Connectivity..." button and enter the password for the "oracle" user. Click the "Setup" button to to configure SSH connectivity, and the "Test" button to test it once it is complete. Click the "Next" button.
Check the public and private networks are specified correctly,
then click the "Next" button.
Select the "Shared File System" option, then click the "Next" button.
Select the required level of redundancy and enter the OCR File Location(s),
then click the "Next" button.
Select the required level of redundancy and enter the Voting Disk File Location(s), then click the "Next" button.
Accept the default failure isolation support by clicking the "Next" button.
Click the "OK" button.
Select the preferred OS groups for each option, then click the "Next" button. Click the "Yes" button on the subsequent message dialog.
Enter "/u01/app/oracle" as the Oracle Base and "/u01/app/11.2.0/grid" as the software location, then click the "Next" button.
Accept the default inventory directory by clicking the "Next" button.
Wait while the prerequisite checks complete. If you have any issues, either fix them or check the "Ignore All" checkbox and click the "Next" button. If there are no issues, you will move directly to the summary screen. If you are happy with the summary information, click the "Finish" button.
Wait while the setup takes place.
When prompted, run the configuration scripts on each node.
The output from the "orainstRoot.sh" file should look something like that listed below.
The output of the "root.sh" script will vary a little depending on the node it is run on.
Once the scripts have completed, return to the "Execute Configuration Scripts" screen on RAC1 and click the "OK" button.
Wait for the configuration assistants to complete.
We expect the verification phase to fail with an error relating to the SCAN, assuming you are not using DNS.
Click the "Close" button to exit the installer.
Provided this is the only error, it is safe to ignore this and continue by clicking the "Next" button.INFO: Checking Single Client Access Name (SCAN)... INFO: Checking name resolution setup for "rac-scan.localdomain"... INFO: ERROR: INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rac-scan.localdomain" INFO: ERROR: INFO: PRVF-4657 : Name resolution setup check for "rac-scan.localdomain" (IP address: 192.168.2.201) failed INFO: ERROR: INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "rac-scan.localdomain" INFO: Verification of SCAN VIP and Listener setup failed
Click the "Close" button to exit the installer.
The grid infrastructure installation is now complete.
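As a quick post-install sanity check, the cluster state can be queried from either node using the standard 11gR2 Clusterware commands (the path assumes the grid home chosen above):
[oracle@RAC1] # /u01/app/11.2.0/grid/bin/crsctl check cluster -all
[oracle@RAC1] # /u01/app/11.2.0/grid/bin/crsctl stat res -t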
Install the Database
Start all the RAC nodes, log in to RAC1 as the oracle user and start the Oracle installer.
[oracle@RAC1] # cd /11gr2/data/
[oracle@RAC1] # ./runInstaller