
How to deploy Oracle 18c Grid and Standalone Database on Red Hat Enterprise Linux 7.x

Summary: Step by step guide to deploy Oracle 18c Grid and Standalone Database on Red Hat Enterprise Linux 7.x


Article Content


Instructions

Software and Hardware Requirements and Prerequisites

   a. RAM and swap space requirements
Minimum RAM:
  • At least 1 GB RAM for Oracle Database installations; 2 GB RAM is recommended
  • At least 8 GB RAM for Oracle Grid Infrastructure installations
Swap space:
  • The minimum swap space recommended for Oracle 18c Database is 2 GB or twice the size of RAM, whichever is lesser
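The RAM and swap currently configured on a server can be checked against these minimums by reading /proc/meminfo (a Linux-specific sketch, not part of the original checklist):

```shell
# Print total RAM and swap in GB; /proc/meminfo reports values in kB.
awk '/^MemTotal:|^SwapTotal:/ {printf "%s %.1f GB\n", $1, $2/1024/1024}' /proc/meminfo
```

Compare the printed values against the minimums listed above before proceeding.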
   b. Storage Checklist
The following describes the disk space requirement for Linux x86-64:
  • At least 6.8 GB for an Oracle Grid Infrastructure for a standalone server installation
  • At least 7.5 GB for Oracle Database Enterprise Edition
  • At least 7.5 GB for Oracle Database Standard Edition 2
  c. Network Requirements
  • It is recommended that each node contain at least one network interface card for the public network
  • The hostname of each node must follow the RFC 952 standard (www.ietf.org/rfc/rfc952.txt). Hostnames that include an underscore ("_") are not permitted
  d. Operating System Requirements
  • Red Hat Enterprise Linux (RHEL) 7.x
Below are the recommended disk partitioning scheme entries for installing Red Hat Enterprise Linux 7 using a kickstart file on local HDDs with at least 1.2 TB of space available:
 
part /boot --asprimary --fstype="xfs" --ondisk=sda --size=1024
part pv.1 --size=1 --grow --ondisk=sda --asprimary
volgroup rhel7 pv.1
logvol / --name=root --fstype=xfs --vgname=rhel7 --size=51200
logvol swap --fstype swap --name=swap --vgname=rhel7 --size=17408
logvol /home --name=home --fstype=xfs --vgname=rhel7 --size=51200
logvol /var --name=var --fstype=xfs --vgname=rhel7 --size=20480
logvol /opt --name=opt --fstype=xfs --vgname=rhel7 --size=20480
logvol /tmp --name=tmp --fstype=xfs --vgname=rhel7 --size=5120
logvol /u01 --name=u01 --fstype=xfs --vgname=rhel7 --size=1 --grow
 

Preparing Servers for Oracle Installation

Before installing the Grid and database software, make sure to install the deployment scripts below from Dell EMC, which set up the environment for the Oracle database installation
2.1. Attaching Systems to the Red Hat Network (RHN)/Unbreakable Linux Network (ULN) Repository
Step 1: All the prerequisite RPMs need to be installed before any GRID/DB installation is performed
  • rhel-7-server-optional-rpms
  • rhel-7.x
Skip Step 2 if the repository setup is successful for all the channels mentioned in RHN/ULN

Step 2:

Most of the prerequisite RPMs for an Oracle GRID/DB install are available as part of the base ISO. However, a few RPMs, such as compat-libstdc++, are not available in the base Red Hat ISO file and need to be downloaded and installed manually before installing the preinstall RPMs provided by Dell for Red Hat.
Set up a local yum repository to automatically install the rest of the dependency RPMs for the GRID/DB install.

1. The recommended configuration is to serve the files over HTTP using an Apache server (package name: httpd), hosting the repository files from local file system storage. Other options for hosting repository files exist but are outside the scope of this document; local file system storage is recommended for speed and simplicity of maintenance.

  • To mount the DVD, insert the DVD into the server; it should auto-mount into the /media directory.
  • To mount an ISO image instead, run the following commands as root, substituting the path of your ISO image for myISO.iso:

           mkdir /media/myISO
           mount -o loop myISO.iso /media/myISO

 
2. To install and configure the http daemon, configure the machine that will host the repository to use the DVD image locally. Create the file /etc/yum.repos.d/local.repo with the following contents:

          [local]
           name=Local Repository
           baseurl=file:///media/myISO
           gpgcheck=0
           enabled=0 

3. Now we will install the Apache service daemon with the following command which will also temporarily enable the local repository for dependency resolution:

         yum -y install httpd --enablerepo=local

         After the Apache service daemon is installed, start the service and set it to start automatically on the next reboot. Run the following commands as root:

         systemctl start httpd
         systemctl enable httpd

4. To use Apache to serve out the repository, copy the contents of the DVD into a published web directory. Run the following commands as root (make sure to replace myISO with the name of your ISO):

         mkdir /var/www/html/myISO
         cp -R /media/myISO/* /var/www/html/myISO

5. This step is only necessary if you are running SELinux on the server that hosts the repository. The following command should be run as root and will restore the appropriate SELinux context to the copied files:
    
        restorecon -Rvv /var/www/html/     

6. The final step is to gather the DNS name or IP of the server that is hosting the repository. The DNS name or IP of the hosting server will be used to configure the yum .repo file on the client server. The following is an example configuration using the Red Hat Enterprise Linux 7.x Server media, held in the configuration file /etc/yum.repos.d/myRepo.repo:

       [myRepo]
       name=Red Hat Enterprise Linux 7.x Base ISO DVD
       baseurl=http://reposerver.mydomain.com/myISO
       enabled=1
       gpgcheck=0

Replace reposerver.mydomain.com with your server's DNS name or IP address. Copy the file to /etc/yum.repos.d in all the necessary servers where GRID/DB will be installed
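The client-side repo file described above can also be generated with a small script; REPO_HOST is a placeholder for your repository server, and the file is written to /tmp here for review before copying it to /etc/yum.repos.d on each server:

```shell
# Generate the client-side yum repo file; REPO_HOST is a placeholder value.
REPO_HOST="reposerver.mydomain.com"
cat > /tmp/myRepo.repo <<EOF
[myRepo]
name=Red Hat Enterprise Linux 7.x Base ISO DVD
baseurl=http://${REPO_HOST}/myISO
enabled=1
gpgcheck=0
EOF
cat /tmp/myRepo.repo
```

After review, copy the file into /etc/yum.repos.d as root on every server where GRID/DB will be installed.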

7. Install the compat-libstdc++ rpm manually using the rpm or yum command from the directory where the rpms were copied:

       rpm -ivh <compat-libstdc++ rpm>

       or

       yum localinstall -y <compat-libstdc++ rpm>

Step 3:

1. Install the compat-libstdc++ rpms by running the following commands

      yum install -y compat-libstdc++.i686

      yum install -y compat-libstdc++.x86_64

2. Download the latest Dell EMC Oracle Deployment tar file from DellEMC Deployment RPMs for RH to the servers where the GRID/DB installations will be performed

2.2. Setting up the Network

 2.2.1. Public Network

Ensure that the public IP address is a valid and routable IP address.

To configure the public network

  1. Login as root
  2. Navigate to /etc/sysconfig/network-scripts and edit the ifcfg-em# file

where # is the number of the network device

NAME="Oracle Public"
DEVICE="em3"
ONBOOT=yes
TYPE=Ethernet
BOOTPROTO=static
IPADDR=
NETMASK=
GATEWAY=

When configuring Red Hat Enterprise Linux 7 as a guest OS in a VMware ESXi environment, the network device enumeration might begin with 'ens#' instead of 'em#'.

 3. Set the hostname with the following command:
        hostnamectl set-hostname <hostname>
         where <hostname> is the hostname being used for the installation
4. Type service network restart to restart the network service
5. Type ifconfig to verify that the IP addresses are set correctly
6. To check your network configuration, ping the public IP address from a client on the LAN
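Since RFC 952 disallows underscores in hostnames (see the network requirements above), a quick pre-check before running hostnamectl can catch invalid names; valid_hostname is a hypothetical helper, not part of the original procedure:

```shell
# Hypothetical helper: reject hostnames containing an underscore (RFC 952).
valid_hostname() {
  case "$1" in
    *_*) echo "invalid: $1 contains an underscore"; return 1 ;;
    *)   echo "ok: $1"; return 0 ;;
  esac
}
valid_hostname "oradb01"
valid_hostname "ora_db01" || true
```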


Preparing Shared Storage for Oracle Standalone Installation

In this section, the terms disk(s), volume(s), virtual disk(s), and LUN(s) mean the same thing and are used interchangeably, unless specified otherwise

Oracle 18c Standalone Database installation requires LUNs for storing your Oracle Cluster Registry (OCR), Oracle Database files, and Flash Recovery Area (FRA). Additionally, if using a virtual environment, an OS volume is needed to store the OS of the VM running the Oracle 18c database. The following table shows the typical recommended storage volume design for an Oracle 18c database.

Database Volume Type/Purpose    No. of Volumes    Volume Size
OCR/VOTE                        3                 50 GB each
DATA                            4                 250 GB each (1)
REDO (2)                        2                 At least 50 GB each
FRA                             1                 100 GB (3)
TEMP                            1                 100 GB

(1) Adjust each volume size based on your database. (2) At least two REDO ASM disk groups are recommended, each with at least one storage volume. (3) Ideally, the size should be 1.5x the size of the database if usable storage capacity permits.
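Adding up the minimums in the table (3x50 GB OCR, 4x250 GB DATA, 2x50 GB REDO, 100 GB FRA, 100 GB TEMP) gives the smallest usable capacity to plan for; a quick sanity check:

```shell
# Minimum usable shared-storage capacity implied by the table above.
total_gb=$(( 3*50 + 4*250 + 2*50 + 100 + 100 ))
echo "Minimum total shared storage: ${total_gb} GB"
# prints: Minimum total shared storage: 1450 GB
```

Remember to scale the DATA figure to your actual database size per footnote (1).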

 3.1. Setting up Device Mapper Multipath
 
The purpose of Device Mapper Multipath is to enable multiple I/O paths to improve performance and provide consistent naming. Multipathing accomplishes this by combining your I/O paths into one device mapper path and properly load balancing the I/O. This section provides best practices for setting up device mapper multipathing on your Dell PowerEdge server.
Skip this section if Red Hat Enterprise Linux 7 is deployed as a guest OS in a virtual environment, as multipathing is handled at the bare-metal host level

   Setting Up Multipath on Bare-Metal OS

  • If you are deploying Red Hat Enterprise Linux on bare metal, verify that the device-mapper and multipath driver packages are installed:
        #> rpm -qa | grep device-mapper-multipath
  • Enable multipath:
        #> mpathconf --enable
  • Configure the multipath file /etc/multipath.conf with the correct storage settings. Here is an example of the multipath file configured for XtremIO storage:
device {
        vendor XtremIO
        product XtremApp
        path_grouping_policy multibus
        path_checker tur
        path_selector "queue-length 0"
        rr_min_io_rq 1
        user_friendly_names yes
        fast_io_fail_tmo 15
        failback immediate
}
  • Add appropriate user-friendly names to each volume with the corresponding scsi_id. Obtain the scsi_ids with the command below:

#>/usr/lib/udev/scsi_id -g -u -d /dev/sdX 

  • Locate the multipath section within your /etc/multipath.conf file. In this section you will provide the scsi_id of each volume and provide an alias in order to keep a consistent naming convention across all of your nodes. An example is shown below:

multipaths {
        multipath {
                wwid <scsi_id_of_volume1>
                alias alias_of_volume1
        }
        multipath {
                wwid <scsi_id_of_volume2>
                alias alias_of_volume2
        }
}

  • Restart your multipath daemon service:
#> service multipathd restart
  • Verify that your multipath volume aliases are displayed properly:
#> multipath -ll
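To keep the alias-to-wwid mapping consistent across nodes, the multipaths section can be generated from a list of "wwid alias" pairs rather than typed by hand; this is a sketch, and the wwids below are placeholders, not real values:

```shell
# Emit a multipaths{} stanza for /etc/multipath.conf from "wwid alias" pairs.
emit_multipaths() {
  echo "multipaths {"
  while read -r wwid alias; do
    printf '    multipath {\n        wwid %s\n        alias %s\n    }\n' "$wwid" "$alias"
  done
  echo "}"
}
# Placeholder wwids; substitute the scsi_id values gathered above.
printf '%s\n' "36000000000000001 C1_DATA1" "36000000000000002 C1_FRA" | emit_multipaths
```

Run the same input file through the helper on every node so the aliases match everywhere.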
Setting Up Multipath on ESXi Hypervisor
Configure multipathing on the ESXi 6.7 host according to the following best practices:
  • Use vSphere Native Multipathing (NMP) as the multipathing software.
  • Retain the default round-robin native path selection policy (PSP) on the PowerMax volumes that are presented to the ESXi hosts.
  • Change the NMP round-robin path switching frequency of I/O packets from the default value of 1,000 to 1. For information about how to set this parameter, see the Dell EMC Host Connectivity Guide for VMware ESX Server.
3.2 Partitioning the Shared Disk
 
This section describes how to use the parted utility to create a single partition on a volume/virtual disk that spans the entire disk.
1. Partition each database volume by running the following commands
* When running in a virtual environment:
$> parted -s /dev/sdX mklabel msdos

$> parted -s /dev/sdX mkpart primary 2048s 100%

* When Red Hat Enterprise Linux is running as a bare-metal OS, partition each database volume that was set up using device-mapper by running the following commands:
$> parted -s /dev/mapper/<multipath_alias> mklabel msdos

$> parted -s /dev/mapper/<multipath_alias> mkpart primary 2048s 100%
2. Repeat this for all the required volumes
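Rather than repeating the parted commands by hand for every volume, they can be generated from the list of multipath aliases and reviewed first; this dry-run sketch only prints the commands, and the alias names are examples:

```shell
# Print (do not run) the parted commands for each device-mapper alias.
gen_parted_cmds() {
  for vol in "$@"; do
    echo "parted -s /dev/mapper/${vol} mklabel msdos"
    echo "parted -s /dev/mapper/${vol} mkpart primary 2048s 100%"
  done
}
gen_parted_cmds C1_DATA1 C1_DATA2 C1_FRA
```

Once the printed commands look correct, they can be piped to sh as root to actually partition the volumes.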

3.3. Using udev Rules for disk permissions and persistence

Red Hat Enterprise Linux 7.x has the ability to use udev rules to ensure that the system properly manages permissions of device nodes. In this case, we are referring to properly setting permissions for the LUNs/volumes discovered by the OS. Note that udev rules are executed in enumerated order. When creating udev rules for setting permissions, include the prefix 60- and append .rules to the end of the filename.

  • Create a file 60-oracle-asmdevices.rules under /etc/udev/rules.d
  • Ensure each block device has an entry in the file as shown below

When Red Hat Enterprise Linux is running as a bare-metal OS:

           #---------------------start udev rule contents ------------------------#

KERNEL=="dm-*", ENV{DM_NAME}=="C1_OCR1p?", OWNER:="grid", GROUP:="asmadmin", MODE="0660"
KERNEL=="dm-*", ENV{DM_NAME}=="C1_OCR2p?", OWNER:="grid", GROUP:="asmadmin", MODE="0660"
KERNEL=="dm-*", ENV{DM_NAME}=="C1_OCR3p?", OWNER:="grid", GROUP:="asmadmin", MODE="0660"
KERNEL=="dm-*", ENV{DM_NAME}=="C1_DATA1p?", OWNER:="grid", GROUP:="asmadmin", MODE="0660"
KERNEL=="dm-*", ENV{DM_NAME}=="C1_DATA2p?", OWNER:="grid", GROUP:="asmadmin", MODE="0660"
KERNEL=="dm-*", ENV{DM_NAME}=="C1_DATA3p?", OWNER:="grid", GROUP:="asmadmin", MODE="0660"
KERNEL=="dm-*", ENV{DM_NAME}=="C1_DATA4p?", OWNER:="grid", GROUP:="asmadmin", MODE="0660"
KERNEL=="dm-*", ENV{DM_NAME}=="C1_REDO1p?", OWNER:="grid", GROUP:="asmadmin", MODE="0660"
KERNEL=="dm-*", ENV{DM_NAME}=="C1_REDO2p?", OWNER:="grid", GROUP:="asmadmin", MODE="0660"
KERNEL=="dm-*", ENV{DM_NAME}=="C1_FRA?", OWNER:="grid", GROUP:="asmadmin", MODE="0660"
KERNEL=="dm-*", ENV{DM_NAME}=="C1_TEMP?", OWNER:="grid", GROUP:="asmadmin", MODE="0660"

          #-------------------------- end udev rule contents ------------------#

     When Red Hat Enterprise Linux is running as a guest OS:

Obtain the unique scsi_ids by running the following command against each database volume and provide each value in the corresponding RESULT field below: /usr/lib/udev/scsi_id -g -u -d /dev/sdX

#---------------------start udev rule contents ------------------------#

KERNEL=="sd[a-z]*[1-9]", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="<scsi_id>", SYMLINK+="oracleasm/disks/ora-ocr1", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd[a-z]*[1-9]", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="<scsi_id>", SYMLINK+="oracleasm/disks/ora-ocr2", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd[a-z]*[1-9]", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="<scsi_id>", SYMLINK+="oracleasm/disks/ora-ocr3", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd[a-z]*[1-9]", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="<scsi_id>", SYMLINK+="oracleasm/disks/ora-fra", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd[a-z]*[1-9]", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="<scsi_id>", SYMLINK+="oracleasm/disks/ora-temp", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd[a-z]*[1-9]", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="<scsi_id>", SYMLINK+="oracleasm/disks/ora-data1", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd[a-z]*[1-9]", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="<scsi_id>", SYMLINK+="oracleasm/disks/ora-data2", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd[a-z]*[1-9]", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="<scsi_id>", SYMLINK+="oracleasm/disks/ora-data3", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd[a-z]*[1-9]", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="<scsi_id>", SYMLINK+="oracleasm/disks/ora-data4", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd[a-z]*[1-9]", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="<scsi_id>", SYMLINK+="oracleasm/disks/ora-redo1", OWNER="grid", GROUP="asmadmin", MODE="0660"

KERNEL=="sd[a-z]*[1-9]", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="<scsi_id>", SYMLINK+="oracleasm/disks/ora-redo2", OWNER="grid", GROUP="asmadmin", MODE="0660"

#-------------------------- end udev rule contents ------------------#

  • Run "udevadm trigger" to apply the rule.
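Because the bare-metal rule file follows one pattern per volume, it can be generated from the multipath alias list instead of typed by hand. This sketch writes to /tmp for review before copying the file to /etc/udev/rules.d/60-oracle-asmdevices.rules; ENV{DM_NAME} is the device-mapper name key assumed here, and the alias list mirrors the volumes above:

```shell
# Generate the bare-metal udev rules from the multipath alias list.
RULES=/tmp/60-oracle-asmdevices.rules
: > "$RULES"
for vol in C1_OCR1p C1_OCR2p C1_OCR3p C1_DATA1p C1_DATA2p C1_DATA3p C1_DATA4p \
           C1_REDO1p C1_REDO2p C1_FRA C1_TEMP; do
  printf 'KERNEL=="dm-*", ENV{DM_NAME}=="%s?", OWNER:="grid", GROUP:="asmadmin", MODE="0660"\n' \
    "$vol" >> "$RULES"
done
wc -l < "$RULES"
```

Review the generated file, copy it into /etc/udev/rules.d, then run "udevadm trigger" as above.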


Installing Oracle 18c Grid Infrastructure for a standalone database

This section describes the installation steps for Oracle 18c Grid Infrastructure for a standalone database
  • Open a terminal window and type: xhost + (or export DISPLAY=:0.0)
  • If the /u01/app/grid/18.3.0/ directory does not exist, create it manually as the grid user
  • Unzip the Grid installation files to /u01/app/grid/18.3.0/ as the grid user:
unzip -q /u01/app/grid/18.3.0/LINUX.X64_180000_grid_home.zip
  1. cd /u01/app/grid/18.3.0
  2. Run ./gridSetup.sh &
  3. In the Select Configuration Option window, select Configure Grid Infrastructure for a Standalone Server (Oracle Restart) and click Next
HOW16915_en_US__1image(9645)
  4. In the Create ASM Disk Group window, enter the Disk group name (OCR), Redundancy (Normal), and select appropriate candidate disks that are meant for OCR. Uncheck Configure Oracle ASM Filter Driver and click Next
HOW16915_en_US__2image(9646)
5. Set password for ASM users
HOW16915_en_US__3image(9647)
6. In the Specify Management Option window, proceed with default options and click Next
HOW16915_en_US__4image(9648)
 
7. In the Privileged Operating System Groups window, select default Operating System Groups and click Next. A popup window will appear. Click Yes to confirm the group settings.
HOW16915_en_US__5image(9649)
8. In the Specify Installation Location window, choose the Oracle Base location, and click Next
HOW16915_en_US__6image(9650)
 
9. In the Create Inventory window, choose the default and click Next
HOW16915_en_US__7image(9651)
10. In the Root script execution configuration window, uncheck Automatically Run Configuration Scripts and click Next
11. In the Perform Prerequisite Checks window, if there are issues, select Fix & Check Again
HOW16915_en_US__8image(9653)
 
12. After running the fixup script, review the summary in the Summary window and click Install
HOW16915_en_US__9image(9654)
 
13. Run the Root scripts whenever prompted and click Ok
HOW16915_en_US__10image(9655)
HOW16915_en_US__11image(9656)
14. In the Finish window, click Close after Grid installation is successful
HOW16915_en_US__12image(9657)
 

Oracle Standalone Database Software Installation

  1. Mount the Oracle Database 18c Media

  2. Log in as the oracle user, go to the directory where the Oracle Database media is located, and run the installer

    #> su - oracle
    #> ./runInstaller
  3. In the Configure Security Updates window, uncheck I wish to receive security updates via My Oracle Support and click Next
  4. In the Select Installation Option window, select Set Up Software Only and click Next
HOW16915_en_US__13image(9658)
5. In the Select Database Installation Option window, Select Single instance database installation and click Next
HOW16915_en_US__14image(9659)
6. In the Select Database Edition window select Enterprise Edition and click Next

HOW16915_en_US__15image(9660)

7. In the Specify Installation Location window, specify the location of Oracle base and click Next

Oracle base: /u01/app/oracle

Software Location: /u01/app/oracle/product/18.3.0/db

HOW16915_en_US__16image(9661)

 If you installed the Dell EMC Oracle preinstall deployment RPMs, the needed groups as shown in the screen below should already exist. If not, you may have to create the appropriate groups manually
HOW16915_en_US__17image(9662)

9. After Prerequisite Checks are completed, verify your settings in the Summary window and click Install
HOW16915_en_US__18image(9663)
HOW16915_en_US__19image(9664)
10. Upon completion of installation process, the Execute Configuration scripts window will appear. Follow the instructions in the window and click Ok
HOW16915_en_US__20image(9665)
11. Run the root.sh script to complete the installation
HOW16915_en_US__21image(9666)
12. In the Finish window, click Close after the Oracle Database installation is successful
HOW16915_en_US__22image(9667)
 

Database Installation

6.1. Creating Disk Groups Using ASM Configuration Assistant (ASMCA)
  1. Login as the grid user and start asmca
    #> /u01/app/grid/18.3.0/bin/asmca
  2. Create 'DATA' disk group with External Redundancy by selecting appropriate candidate disks

HOW16915_en_US__23image(9668)
3. Create two 'REDO' disk groups - REDO1 and REDO2 - with External Redundancy by selecting at least one candidate disk per REDO disk group
4. Create 'FRA' disk group with External Redundancy by selecting appropriate candidate disks
HOW16915_en_US__24image(9669)
5. Create 'TEMP' disk group with External Redundancy by selecting appropriate candidate disks
HOW16915_en_US__25image(9670)
6. Verify all required disk groups and click Exit to close from ASMCA utility
HOW16915_en_US__26image(9671)

7. Change ASM striping to fine-grained for the REDO, TEMP, and FRA disk groups as the grid user using the commands below
We must change to fine-grained striping before we run DBCA

SQL> ALTER DISKGROUP REDO ALTER TEMPLATE onlinelog ATTRIBUTES (fine);

SQL> ALTER DISKGROUP TEMP ALTER TEMPLATE tempfile ATTRIBUTES (fine);

SQL> ALTER DISKGROUP FRA ALTER TEMPLATE onlinelog ATTRIBUTES (fine);

6.2. Creating Database using DBCA
  1. Login as oracle user and run the dbca utility from ORACLE_HOME

#> /u01/app/oracle/product/18.3.0/db/bin/dbca

      2. In the Select Database Operation window, select Create a database and click Next
HOW16915_en_US__27image(9672)
3. In the Select Database Creation Mode window, select Advanced Configuration and click Next
HOW16915_en_US__28image(9673)

4. In the Select Database Deployment Type window, select Oracle Single Instance database as the Database type, select General Purpose or Transaction Processing as the template, and click Next

HOW16915_en_US__29image(9674)
 
5. In the Specify Database Identification Details window, enter an appropriate value for Global database name, select Create as Container database, specify the number of PDBs and the PDB name, and click Next
Creating a Container database is optional. If you would like to create a traditional Oracle database, uncheck the 'Create as Container database' option
  

HOW16915_en_US__30image(9675)

6. In the Select Database Storage Option window, select Database file location as +DATA and click Next

 HOW16915_en_US__31image(9676)

7.  In the Select Fast Recovery Option window, check Specify Fast Recovery Area, enter Fast Recovery Area as +FRA and specify the size and click Next

HOW16915_en_US__32image(9677)

8. In the Specify Network Configuration Details window, select the already created listener and click Next
HOW16915_en_US__33image(9678)
 
9. In the Select Oracle Data Vault Config Option window, leave it as default and click Next

HOW16915_en_US__34image(9679)

10. In the Specify Configuration Options window, specify appropriate SGA size and PGA size and click Next
HOW16915_en_US__35image(9680)
11. In the Specify Management Options window, check the EM box as needed and click Next. In our case, we left it as default

HOW16915_en_US__36image(9681)

12. In the Specify Database User Credentials window, enter password and click Next

HOW16915_en_US__37image(9682)

13. In the Select Database Creation Option window, click on Customize Storage Locations

HOW16915_en_US__38image(9683)

14. Create/modify the Redo Log Groups based on the following design recommendation

Redo Log Group Number    Thread Number    Disk Group Location    Redo Log File Size
1                        1                +REDO1                 5 GB
2                        1                +REDO2                 5 GB
3                        1                +REDO1                 5 GB
4                        1                +REDO2                 5 GB
15. In the Summary window, review summary and click Finish
HOW16915_en_US__39image(9684)
HOW16915_en_US__40image(9685)
16. In the Finish window, check for Database creation completion and click Close to exit the installer
HOW16915_en_US__41image(9686)
17. Check database status and Listener status
 
HOW16915_en_US__42image(9687)

Article Properties


Affected Product

Red Hat Enterprise Linux Version 7

Last Published Date

21 Feb 2021

Version

4

Article Type

How To