Oracle 19c RAC (Grid Infrastructure) Installation.

Oracle 19c has raised some of the minimum requirements, such as the operating system distribution, packages, and other software requirements, for Oracle Grid Infrastructure installation. A default Linux installation includes most of the required packages, which limits the manual verification of package dependencies. Oracle no longer supports Leaf nodes in 19c: in Oracle Grid Infrastructure 19c (19.1) and later releases, all nodes in a Flex Cluster function as Hub nodes, since the capabilities offered by Leaf nodes in the original Flex Cluster architecture can just as easily be served by Hub nodes.

The table below lists some of the requirements at a high level.

Item            | Description                                   | Comments
Linux x86-64    | Oracle Linux 7.4 (kernel 4.1.12 or higher)    | For details click here
RAM             | 8 GB minimum                                  | Oracle recommends at least 8 GB RAM for Oracle Grid Infrastructure installations. If the production system does not have enough RAM, you will often see instance evictions and node reboots that are hard to explain.
Swap space      | RAM between 1 GB and 2 GB: 1.5 times the RAM; RAM between 2 GB and 16 GB: equal to the RAM; RAM above 16 GB: 16 GB | Click here (a quick check is shown below the table)
CRS/Voting disk | ASM disk with 4 GB of storage                 | The MGMT DB can be created on a different disk
ASM Disk1       | MGMT DB, minimum 27 GB                        | Optional in 19c
VIP address     | One VIP address for each node                 | Make sure these addresses are not used anywhere else. They should not be pingable before CRS is up and running.
Private IP      | One private IP for each node                  | You can use network bonding to provide hardware redundancy for the private interconnect
Public IP       | One public IP address for each node           |
SCAN            | See #6 below                                  |
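A quick way to check a node against the RAM and swap figures above (standard Linux commands, nothing Oracle-specific):

# grep MemTotal /proc/meminfo
# grep SwapTotal /proc/meminfo
# free -h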

Note: I have excluded the complete checklist from this note. You can refer to the Grid Infrastructure Installation Guide (here) for more details.

The easiest way to make sure all prerequisites are met before installing GI is to use oracle-database-preinstall-18c-1.0-4.el7.x86_64.rpm. On Oracle Linux it automatically creates a standard (not role-allocated) Oracle installation owner and groups, and sets up other kernel configuration settings as required for Oracle installations. This has to be run on all nodes of the cluster. There is no separate RPM for 19c yet.

# yum install oracle-database-preinstall-18c
# sysctl -p

If you are not using the oracle-database-preinstall-18c-1.0-4.el7.x86_64.rpm package, you will have to configure all the prerequisites manually. I am not going into the details, as there are plenty of documents and articles on the internet explaining how it is done (a minimal sketch follows the list below). These are the main areas:

  1. OS requirements
  2. Limits
  3. Packages
  4. Groups and users
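For illustration only, a manual setup covers roughly the following; the values here are typical ones, so verify the exact figures against the 19c installation guide before using them:

# Kernel parameters, e.g. in /etc/sysctl.d/97-oracle-sysctl.conf (typical values, verify first)
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

# Shell limits for the grid user, e.g. in /etc/security/limits.d/99-grid-limits.conf
grid soft nofile 1024
grid hard nofile 65536
grid soft nproc 2047
grid hard nproc 16384
grid soft stack 10240
grid hard stack 32768

# Groups and users (role separation is optional)
groupadd oinstall
groupadd dba
groupadd asmadmin
groupadd asmdba
useradd -g oinstall -G dba,asmadmin,asmdba grid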

Additionally, make sure the following is done:

1. Set SELINUX to permissive or disabled.

Restart the server after changing the SELINUX setting.

# vi /etc/selinux/config
SELINUX=permissive
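To switch the running mode to permissive right away (the reboot will still pick up the config file change):

# getenforce
Enforcing
# setenforce 0
# getenforce
Permissive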

2. Disable Linux firewall.

If the firewall needs to stay enabled, you have to make sure it does not filter or block any traffic on the private interconnect.

# systemctl stop firewalld
# systemctl disable firewalld

3. Create the directories where the GI software will be installed (on all nodes).

mkdir -p /u02/app/grid/19c
chown -R grid:oinstall /u02
chmod -R 775 /u02

4. Install the cvuqdisk RPM for Linux on all nodes.

You may have to download the cvuqdisk-1.0.10-1.rpm package and install it on all nodes.

rpm -iv cvuqdisk-1.0.10-1.rpm
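If your Oracle Inventory group is not the default, the documented approach is to point the CVUQDISK_GRP environment variable at the right group before installing the RPM; oinstall is the group used in this install:

# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
# rpm -iv cvuqdisk-1.0.10-1.rpm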

5. Storage requirements

Oracle still allows either a file system or ASM to be used for Oracle Database files, recovery files, or both. Oracle Database files include data files, control files, redo log files, the server parameter file, and the password file. The cluster requires shared storage that is accessible from every server in the cluster. Oracle Clusterware supports Network File System (NFS), iSCSI, Direct Attached Storage (DAS), Storage Area Network (SAN) storage, and Network Attached Storage (NAS).

For this installation, I used the Linux device manager “udev” to configure the disks. I used this link to configure the SCSI devices.
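For reference, a udev rule for one ASM disk typically looks like the sketch below; the WWID, symlink name, and partition are placeholders, so substitute the values scsi_id returns on your own storage:

# /etc/udev/rules.d/99-oracle-asmdevices.rules (one line per disk)
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="36000c29xxxxxxxxxxxxxxxxxxxxxxxxx", SYMLINK+="asm-crs1", OWNER="grid", GROUP="asmadmin", MODE="0660"

# Reload the rules and re-trigger the block devices so the ownership takes effect
udevadm control --reload-rules
udevadm trigger --type=devices --action=change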

6. Single Client Access Name (SCAN)
The SCAN is a domain name registered to at least one, and up to three, IP addresses. I configured a DNS server on my Windows Server 2008 R2 machine and created the SCAN name ‘primcluster-scan.just.net’. You first need to create a forward lookup zone and then create New Host (A or AAAA) records with three IPs mapping to the same name.
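When the zone is set up correctly, the SCAN name should resolve to all three addresses; the output below is an abbreviated example with placeholder IPs:

$ nslookup primcluster-scan.just.net
Name:    primcluster-scan.just.net
Address: 192.168.1.110
Name:    primcluster-scan.just.net
Address: 192.168.1.111
Name:    primcluster-scan.just.net
Address: 192.168.1.112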

7. Update the hosts file. Here is my hosts file on all nodes, with the public IP, private IP, and virtual IP entries:

 
# Public
192.168.1.100   roll.localdomain        roll
192.168.1.101   stone.localdomain       stone
 
# Private
192.168.2.100   roll-priv.localdomain   roll-priv
192.168.2.101   stone-priv.localdomain  stone-priv
 
# Virtual
192.168.1.106   roll-vip.localdomain    roll-vip
192.168.1.107   stone-vip.localdomain    stone-vip

8. Run Cluvfy to verify the environment.

The Cluster Verification Utility (CVU) is a command-line utility that you use to verify a range of cluster and Oracle RAC-specific components.

/share/19csw/runcluvfy.sh stage -pre crsinst -n roll,stone -verbose

The output of this script has to be reviewed to make sure everything is OK. During the installation, gridSetup runs these checks to verify the environment anyway.
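cluvfy can also generate fixup scripts for the problems it knows how to correct (kernel parameters, missing groups, and so on); the same pre-check with the extra -fixup flag looks like this:

/share/19csw/runcluvfy.sh stage -pre crsinst -n roll,stone -fixup -verbose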

9. Set the environment variables for your grid user.

This includes the GRID_HOME and ORACLE_HOME settings for the grid user. This is optional and not really required for the grid install.
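If you do want them set, a minimal .bash_profile for the grid user on this system could look like the following (paths match this install, where the grid home is /u02/app/grid/19c and the base is /u02/app/grid/base):

# ~grid/.bash_profile (sketch)
export ORACLE_BASE=/u02/app/grid/base
export ORACLE_HOME=/u02/app/grid/19c
export GRID_HOME=$ORACLE_HOME
export PATH=$ORACLE_HOME/bin:$PATH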

Start the installation.

In 19c, the Grid installation is done with gridSetup.sh. You unzip the software zip file into the GRID_HOME and then invoke gridSetup.sh. This pops up a GUI, so you will need X11 forwarding enabled.
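Before launching the installer, it is worth confirming that X11 forwarding works for the session you will run it from; for example, over SSH (the DISPLAY value shown is just an example):

$ ssh -X grid@roll
$ echo $DISPLAY
localhost:10.0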

$su - grid 
$cd /u02/app/grid/19c
$cp /share/V981627-01.zip .
$unzip V981627-01.zip
$rm -rf V981627-01.zip
$./gridSetup.sh

Select “Configure Oracle Grid Infrastructure for a New Cluster”

Select Configure an Oracle standalone cluster

This is the screen where you configure the cluster name and the SCAN. You can also configure GNS, but since I am not using DHCP in this installation, I am not configuring GNS.

Add the second node and set up SSH. You can set up passwordless SSH in this step, which is very convenient. We no longer see an option for leaf nodes; it used to be there in 12c and 18c, but as mentioned above, Oracle has desupported the leaf node concept.
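Once the cluster is up, you can confirm that every node is configured as a hub node (the only role left in 19c); the output is along these lines:

$ crsctl get node role config
Node 'roll' configured role is 'hub'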

Change the private subnet (interface) usage to “ASM & Private”. With Flex ASM, the ASM network provides the capability to isolate ASM’s internal network traffic on its own dedicated private network. The ASM network is the communication path for all traffic between database instances and ASM instances; this traffic is mostly metadata, such as a file’s extent map.
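After the installation you can check how the interfaces were classified with oifcfg getif; with the subnets from the hosts file above, and assuming eth0/eth1 interface names, the output would look roughly like this:

$ oifcfg getif
eth0  192.168.1.0  global  public
eth1  192.168.2.0  global  cluster_interconnect,asm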

In 18c, Oracle started giving you the option to choose whether or not to create the MGMT DB; I believe in earlier releases the MGMT DB was mandatory. Select your MGMT database option. I chose NO, as this is just a test install.

Select the disk for the CRS and voting files. I got this option because I selected ASM for the CRS/voting files.

The remaining screens are self-explanatory, so I am not going to show them here and will go straight to the last few screens. This is where Oracle runs cluvfy and other utilities, and where you run the root scripts.

[root@roll ~]# /u02/app/grid/oraInventory/orainstRoot.sh 
Changing permissions of /u02/app/grid/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u02/app/grid/oraInventory to oinstall.
The execution of the script is complete.
[root@roll ~]# /u02/app/grid/19c/root.sh
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u02/app/grid/19c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
Copying oraenv to /usr/local/bin …
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
Copying coraenv to /usr/local/bin …
Creating /etc/oratab file…
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u02/app/grid/19c/crs/install/crsconfig_params
The log of current session can be found at:
/u02/app/grid/base/crsdata/roll/crsconfig/rootcrs_roll_2019-03-20_05-14-27PM.log
2019/03/20 17:15:10 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2019/03/20 17:15:10 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2019/03/20 17:15:10 CLSRSC-363: User ignored prerequisites during installation
2019/03/20 17:15:11 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2019/03/20 17:15:20 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2019/03/20 17:15:25 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.
2019/03/20 17:15:25 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.
2019/03/20 17:15:25 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.
2019/03/20 17:16:34 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2019/03/20 17:16:47 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.
2019/03/20 17:17:06 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2019/03/20 17:17:45 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2019/03/20 17:17:46 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2019/03/20 17:18:12 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2019/03/20 17:18:13 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2019/03/20 17:19:22 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2019/03/20 17:19:48 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2019/03/20 17:20:15 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2019/03/20 17:20:40 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
ASM has been created and started successfully.
[DBT-30001] Disk groups created successfully. Check /u02/app/grid/base/cfgtoollogs/asmca/asmca-190320PM052143.log for details.
2019/03/20 17:23:18 CLSRSC-482: Running command: '/u02/app/grid/19c/bin/ocrconfig -upgrade grid oinstall'
CRS-4256: Updating the profile
Successful addition of voting disk 2b551e6e3e484f92bfc3253f44576c21.
Successfully replaced voting disk group with +CRS.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
ONLINE 2b551e6e3e484f92bfc3253f44576c21 (/dev/sdi1) [CRS]
Located 1 voting disk(s).
2019/03/20 17:27:04 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2019/03/20 17:29:16 CLSRSC-343: Successfully started Oracle Clusterware stack
2019/03/20 17:29:17 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2019/03/20 17:34:58 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2019/03/20 17:36:43 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster … succeeded

On the second node

[root@stone share]# /u02/app/grid/19c/root.sh
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u02/app/grid/19c
Enter the full pathname of the local bin directory: [/usr/local/bin]: y
Creating y directory…
Copying dbhome to y …
Copying oraenv to y …
Copying coraenv to y …
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u02/app/grid/19c/crs/install/crsconfig_params
The log of current session can be found at:
/u02/app/grid/base/crsdata/stone/crsconfig/rootcrs_stone_2019-03-20_05-37-10PM.log
2019/03/20 17:37:32 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2019/03/20 17:37:33 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2019/03/20 17:37:33 CLSRSC-363: User ignored prerequisites during installation
2019/03/20 17:37:33 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2019/03/20 17:37:33 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2019/03/20 17:37:39 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2019/03/20 17:37:39 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.
2019/03/20 17:37:40 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.
2019/03/20 17:37:42 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.
2019/03/20 17:37:48 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.
2019/03/20 17:37:48 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2019/03/20 17:37:59 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2019/03/20 17:38:35 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2019/03/20 17:38:42 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2019/03/20 17:38:43 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2019/03/20 17:39:31 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2019/03/20 17:39:40 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2019/03/20 17:39:48 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2019/03/20 17:39:54 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
2019/03/20 17:40:18 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2019/03/20 17:42:20 CLSRSC-343: Successfully started Oracle Clusterware stack
2019/03/20 17:42:20 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2019/03/20 17:43:31 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2019/03/20 17:43:59 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster … succeeded
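Once root.sh has completed on both nodes, a quick sanity check is crsctl check cluster -all; on a healthy cluster each node should report something like this:

[root@roll ~]# /u02/app/grid/19c/bin/crsctl check cluster -all
**************************************************************
roll:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
stone:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************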
[root@roll rules.d]# /u02/app/grid/19c/bin/crsctl status res -t
Name Target State Server State details
Local Resources
ora.LISTENER.lsnr
ONLINE ONLINE roll STABLE
ONLINE ONLINE stone STABLE
ora.chad
ONLINE ONLINE roll STABLE
ONLINE ONLINE stone STABLE
ora.net1.network
ONLINE ONLINE roll STABLE
ONLINE ONLINE stone STABLE
ora.ons
ONLINE ONLINE roll STABLE
ONLINE ONLINE stone STABLE
Cluster Resources
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
1 ONLINE ONLINE roll STABLE
2 ONLINE ONLINE stone STABLE
3 OFFLINE OFFLINE STABLE
ora.CRS.dg(ora.asmgroup)
1 ONLINE ONLINE roll STABLE
2 ONLINE ONLINE stone STABLE
3 OFFLINE OFFLINE STABLE
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE stone STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE roll STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE roll STABLE
ora.asm(ora.asmgroup)
1 ONLINE ONLINE roll Started,STABLE
2 ONLINE ONLINE stone Started,STABLE
3 OFFLINE OFFLINE STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
1 ONLINE ONLINE roll STABLE
2 ONLINE ONLINE stone STABLE
3 OFFLINE OFFLINE STABLE
ora.cvu
1 ONLINE ONLINE roll STABLE
ora.qosmserver
1 ONLINE ONLINE roll STABLE
ora.roll.vip
1 ONLINE ONLINE roll STABLE
ora.scan1.vip
1 ONLINE ONLINE stone STABLE
ora.scan2.vip
1 ONLINE ONLINE roll STABLE
ora.scan3.vip
1 ONLINE ONLINE roll STABLE
ora.stone.vip
1 ONLINE ONLINE stone STABLE

A few words about some of the additional daemons, listeners, etc.:

ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
1 ONLINE ONLINE roll STABLE
2 ONLINE ONLINE stone STABLE
3 OFFLINE OFFLINE
ora.asm(ora.asmgroup)
1 ONLINE ONLINE roll Started,STABLE
2 ONLINE ONLINE stone Started,STABLE
3 OFFLINE OFFLINE STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
1 ONLINE ONLINE roll STABLE
2 ONLINE ONLINE stone STABLE
3 OFFLINE OFFLINE STABLE

With Flex ASM, the default number of ASM instances (the ASM cardinality) is 3, even on a two-node cluster, so what you see above is correct. Starting from 12c, Oracle has made some changes in the way ASM is implemented, mainly to introduce the Flex ASM feature. This allows the cluster to run ASM instances on only a subset of the nodes; a node may not even host a full ASM instance and can instead host only an ASM proxy instance.
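You can see and, if needed, change the configured cardinality with srvctl as the grid user; a minimal example:

$ srvctl config asm              # reports the configured ASM instance count (3 by default)
$ srvctl modify asm -count ALL   # or a number, to change how many nodes run an ASM instance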
