
Vagrant Oracle RAC Lab on OEL 8

Set up a two-node Oracle RAC lab with Vagrant on OEL 8: shared ASM disks, cluster networking, Grid Infrastructure, and RAC database provisioning.

Oracle Real Application Clusters (RAC) allows multiple server instances to access a single Oracle database — providing high availability and scalability. Setting up a RAC lab with Vagrant gives you a safe environment to learn Oracle Clusterware, ASM, and RAC database administration without real hardware.

⚠ Warning: Oracle RAC lab is resource-intensive. You need at least 16 GB RAM and 100 GB free disk on your host machine. Each node requires 6–8 GB RAM.
[Diagram: Oracle RAC Lab — RAC Node 1 (public 192.168.56.41, private/interconnect 192.168.56.51, VIP 192.168.56.61) and RAC Node 2 (public 192.168.56.42, private/interconnect 192.168.56.52, VIP 192.168.56.62) attached to shared ASM disks /dev/sdb, /dev/sdc, /dev/sdd via shared VDI; SCAN IP 192.168.56.70]
Interface      Node 1           Node 2           Purpose
Public         192.168.56.41    192.168.56.42    Client connections
Private (IC)   192.168.56.51    192.168.56.52    Cluster interconnect
VIP            192.168.56.61    192.168.56.62    Virtual IP for failover
SCAN           192.168.56.70 (cluster-wide)      Single Client Access Name
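The address plan above has to end up, identically, in the Vagrantfile, in /etc/hosts on both nodes, and in the table you just read. A small helper like the one below (purely illustrative — `rac_hosts` is not part of the lab scripts) keeps the list in one place so the copies can't drift apart:

```shell
#!/bin/bash
# Illustrative helper: one canonical copy of the lab's address plan.
rac_hosts() {
cat <<'EOF'
192.168.56.41  rac1
192.168.56.42  rac2
192.168.56.51  rac1-priv
192.168.56.52  rac2-priv
192.168.56.61  rac1-vip
192.168.56.62  rac2-vip
192.168.56.70  rac-scan
EOF
}

# Append to /etc/hosts during provisioning, or just list the names:
rac_hosts | awk '{print $2}'
```

During provisioning you would pipe `rac_hosts >> /etc/hosts` instead of duplicating the block in every script.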
Ruby — RAC Vagrantfile
# -*- mode: ruby -*-
Vagrant.configure("2") do |config|
  config.vm.box              = "generic/oracle8"
  config.vm.box_check_update = false

  # Shared disk path (must exist on host)
  SHARED_DISK_PATH = File.join(File.dirname(__FILE__), "shared_disks")

  [["rac1", "192.168.56.41", "192.168.56.51"],
   ["rac2", "192.168.56.42", "192.168.56.52"]].each_with_index do |(name, pub_ip, priv_ip), idx|

    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.network "private_network", ip: pub_ip,  name: "vboxnet0"
      node.vm.network "private_network", ip: priv_ip, name: "vboxnet1"

      node.vm.provider "virtualbox" do |vb|
        vb.name   = "Oracle-RAC-#{name.upcase}"
        vb.memory = 8192
        vb.cpus   = 2
        vb.customize ["modifyvm", :id, "--ioapic", "on"]

        # Shared storage — only create on first node
        if idx == 0
          Dir.mkdir(SHARED_DISK_PATH) unless Dir.exist?(SHARED_DISK_PATH)

          ["asm1", "asm2", "asm3"].each_with_index do |disk, port|
            disk_file = "#{SHARED_DISK_PATH}/#{disk}.vdi"
            unless File.exist?(disk_file)
              vb.customize ["createhd", "--filename", disk_file,
                           "--size", 10240, "--variant", "Fixed"]
              vb.customize ["modifyhd", disk_file, "--type", "shareable"]
            end
            # Controller name must match the box — check with
            # `VBoxManage showvminfo` if the attach fails
            vb.customize ["storageattach", :id,
              "--storagectl", "SATA Controller",
              "--port", port + 1, "--device", 0, "--type", "hdd",
              "--medium", disk_file, "--mtype", "shareable"]
          end
        else
          # Attach existing shared disks to second node
          ["asm1","asm2","asm3"].each_with_index do |disk, port|
            vb.customize ["storageattach", :id,
              "--storagectl", "SATA Controller",
              "--port", port+1, "--device", 0,
              "--type", "hdd",
              "--medium", "#{SHARED_DISK_PATH}/#{disk}.vdi",
              "--mtype", "shareable"]
          end
        end
      end

      node.vm.provision "shell", path: "scripts/rac_prereqs.sh",
        args: [name, pub_ip, priv_ip]
    end
  end
end
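One caveat with the Vagrantfile above: the `File.exist?` guard is evaluated when the Vagrantfile is parsed, while `createhd` runs later, during `vagrant up`, which can make re-runs fragile. An alternative sketch — assuming `VBoxManage` is on the host PATH; `createmedium`/`modifymedium` are the modern spellings of `createhd`/`modifyhd` — pre-creates the disks on the host so the Vagrantfile only ever attaches existing media:

```shell
#!/bin/bash
# Sketch: pre-create the shared ASM disks on the host before the first
# `vagrant up`. Sizes and paths mirror the Vagrantfile above.
mkdir -p shared_disks
if command -v VBoxManage >/dev/null 2>&1; then
  for d in asm1 asm2 asm3; do
    f="shared_disks/$d.vdi"
    if [ ! -f "$f" ]; then
      VBoxManage createmedium disk --filename "$f" --size 10240 --variant Fixed
      VBoxManage modifymedium disk "$f" --type shareable
    fi
  done
fi
```

With the media pre-created, both nodes can share the same `storageattach` loop and the `idx == 0` branch in the Vagrantfile becomes unnecessary.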
BASH — rac_prereqs.sh
#!/bin/bash
# scripts/rac_prereqs.sh
set -e

NODE_NAME=${1:-rac1}
PUBLIC_IP=${2:-192.168.56.41}
PRIVATE_IP=${3:-192.168.56.51}

echo "=== RAC Prerequisites for ${NODE_NAME} ==="

# Install Oracle Grid/RAC prerequisites
dnf install -y oracle-database-preinstall-19c
dnf install -y unzip libaio bc nfs-utils

# Create ASM groups and the grid user (the preinstall package creates
# oinstall, dba and racdba, but not the asm* groups)
groupadd -f asmadmin
groupadd -f asmdba
useradd -g oinstall -G dba,racdba,asmadmin,asmdba grid
echo "grid" | passwd --stdin grid

# Configure /etc/hosts for all nodes
cat >> /etc/hosts << EOF
192.168.56.41  rac1        # Node 1 public
192.168.56.42  rac2        # Node 2 public
192.168.56.51  rac1-priv   # Node 1 interconnect
192.168.56.52  rac2-priv   # Node 2 interconnect
192.168.56.61  rac1-vip    # Node 1 VIP
192.168.56.62  rac2-vip    # Node 2 VIP
192.168.56.70  rac-scan    # SCAN IP
EOF

# Configure ASMLib (the oracleasm kernel driver ships with UEK;
# oracleasm-support provides the userspace tools)
dnf install -y oracleasm-support
oracleasm init
# Driver owner grid:asmadmin, start on boot, scan on boot
oracleasm configure -i << ASMCFG
grid
asmadmin
y
y
ASMCFG

# Mark shared disks for ASM — ASMLib stamps partitions, not whole disks,
# and the disks must be stamped once only (on node 1); node 2 just rescans
if [ "${NODE_NAME}" = "rac1" ]; then
  for d in /dev/sdb /dev/sdc /dev/sdd; do
    parted -s "$d" mklabel gpt mkpart primary 1MiB 100%
  done
  oracleasm createdisk DATA1 /dev/sdb1
  oracleasm createdisk DATA2 /dev/sdc1
  oracleasm createdisk FRA1  /dev/sdd1
else
  oracleasm scandisks
fi

# Set up SSH equivalency between nodes (simplified — key exchange is manual)
mkdir -p /home/oracle/.ssh /home/grid/.ssh
chmod 700 /home/oracle/.ssh /home/grid/.ssh

echo "=== RAC Node ${NODE_NAME} prerequisites done ==="
echo "Next: Install Grid Infrastructure, then RAC database"
Next steps

  1. Run vagrant up --no-parallel to start both nodes
  2. Set up SSH equivalency between the oracle and grid users on both nodes
  3. Install Oracle Grid Infrastructure 19c using gridSetup.sh on rac1
  4. Add rac2 to the cluster using addNode.sh
  5. Install Oracle Database 19c software on both nodes
  6. Create the RAC database using DBCA in RAC mode
  7. Verify with srvctl status database and crsctl status res -t
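Step 2 is only stubbed out in rac_prereqs.sh, which just creates the .ssh directories. A minimal sketch of the key-generation half is below — the helper name and the demo path are illustrative; on the real nodes you run this per user (oracle and grid) and then copy each node's public key into the other node's authorized_keys, for example with ssh-copy-id or the sshsetup script shipped with the Grid installer:

```shell
#!/bin/bash
# Illustrative: generate a keypair and seed authorized_keys for one user.
setup_ssh_dir() {
  dir=$1
  mkdir -p "$dir" && chmod 700 "$dir"
  [ -f "$dir/id_rsa" ] || ssh-keygen -q -t rsa -b 2048 -N '' -f "$dir/id_rsa"
  cat "$dir/id_rsa.pub" >> "$dir/authorized_keys"
  chmod 600 "$dir/authorized_keys"
}

# Demo path; on the nodes this would be /home/grid/.ssh and /home/oracle/.ssh
setup_ssh_dir /tmp/ssh_equiv_demo
```

The Grid installer verifies equivalency in both directions (rac1→rac2 and rac2→rac1), so test both with a passwordless `ssh date` before launching it.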
BASH — Verify RAC
# On rac1 as grid user
crsctl status res -t

# Check database status
srvctl status database -d ORCL

# Check all RAC instances
srvctl status instance -d ORCL -i ORCL1
srvctl status instance -d ORCL -i ORCL2

# Connect to RAC via SCAN — quote the password, since the @ in it would
# otherwise be parsed as part of the connect string
sqlplus 'sys/"Oracle@123"@//rac-scan:1521/ORCL as sysdba'
SELECT instance_name, host_name, status FROM gv$instance;

# Check ASM — run as the grid user with the ASM environment set
# (e.g. ORACLE_SID=+ASM1)
sqlplus / as sysasm
SELECT name, state FROM v$asm_diskgroup;
💡 Note: RAC setup is complex and requires 3–5 hours for a full lab setup. Always take a snapshot after each major phase — Grid install, DB install, DB creation — so you can restore without starting over.
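The snapshot advice maps directly onto Vagrant's built-in snapshot subcommand. A tiny dry-run wrapper (the function name and phase labels are mine) prints the commands for both nodes — drop the echo to actually take the snapshots:

```shell
#!/bin/bash
# Illustrative dry-run: print the vagrant snapshot commands for each node.
snap_all() {
  phase=$1
  for node in rac1 rac2; do
    echo vagrant snapshot save "$node" "$phase"
  done
}

snap_all grid-installed
# Restore later with: vagrant snapshot restore rac1 grid-installed
```

Snapshot both nodes at the same phase — restoring only one node of a RAC cluster leaves the Clusterware registries out of sync.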