
Vagrant MySQL InnoDB Cluster Lab

A 3-node MySQL InnoDB Cluster lab built with Vagrant on OEL 8: MySQL Shell cluster creation, MySQL Router setup, automatic failover testing, and the rejoin procedure.

MySQL InnoDB Cluster is the official MySQL HA solution that combines MySQL Group Replication, MySQL Shell, and MySQL Router. Unlike raw Group Replication, InnoDB Cluster provides an easy management interface through MySQL Shell and built-in routing through MySQL Router — making it much simpler to operate.

Architecture: applications connect through MySQL Router, which exposes port 6446 for read/write and port 6447 for read-only traffic. Underneath, all three nodes run Group Replication: ic1 (192.168.56.71, PRIMARY), ic2 (192.168.56.72, SECONDARY), ic3 (192.168.56.73, SECONDARY).
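Once the cluster is created in the steps below, that Group Replication layer can be inspected directly from any node. A minimal check, assuming the root password set by the provisioning script:

BASH — Verify Group Replication membership
# Lists every group member with its role (PRIMARY/SECONDARY);
# MEMBER_STATE should be ONLINE for all three nodes
mysql -u root -p'Root@123!' -e "
SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_STATE, MEMBER_ROLE
FROM performance_schema.replication_group_members;"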
Ruby — InnoDB Cluster Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box              = "generic/oracle8"
  config.vm.box_check_update = false

  NODES = [
    { name: "ic1", ip: "192.168.56.71", id: 1, port: 13371 },
    { name: "ic2", ip: "192.168.56.72", id: 2, port: 13372 },
    { name: "ic3", ip: "192.168.56.73", id: 3, port: 13373 },
  ]

  NODES.each do |node|
    config.vm.define node[:name] do |vm|
      vm.vm.hostname = node[:name]
      vm.vm.network "private_network",  ip: node[:ip]
      vm.vm.network "forwarded_port",   guest: 3306, host: node[:port]

      vm.vm.provider "virtualbox" do |vb|
        vb.name   = "InnoDB-Cluster-#{node[:name]}"
        vb.memory = 2048
        vb.cpus   = 2
      end
      vm.vm.provider "parallels" do |prl|
        prl.name   = "InnoDB-Cluster-#{node[:name]}"
        prl.memory = 2048
        prl.cpus   = 2
        prl.update_guest_tools = false
      end

      vm.vm.provision "shell",
        path: "scripts/innodb_cluster_setup.sh",
        args: [node[:name], node[:ip], node[:id].to_s]
    end
  end
end
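With the Vagrantfile above and the provisioning script that follows, bringing the lab up is a single command from the project directory:

BASH — Bring up the lab
vagrant up                              # provisions ic1, ic2, ic3 in sequence
vagrant status                          # all three should report "running"
vagrant ssh ic1 -c "mysql --version"    # confirm MySQL 8 is installed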
BASH — innodb_cluster_setup.sh
#!/bin/bash
# scripts/innodb_cluster_setup.sh
set -e

NODE_NAME=${1:-ic1}
MY_IP=${2:-192.168.56.71}
SERVER_ID=${3:-1}
ROOT_PASS="Root@123!"

echo "=== Setting up MySQL InnoDB Cluster node: ${NODE_NAME} ==="

# Install MySQL 8 + MySQL Shell
rpm --import https://repo.mysql.com/RPM-GPG-KEY-mysql-2022
dnf install -y https://dev.mysql.com/get/mysql80-community-release-el8-9.noarch.rpm 2>/dev/null
dnf module disable -y mysql 2>/dev/null
dnf install -y mysql-community-server mysql-shell

systemctl enable --now mysqld
TEMP_PASS=$(grep "temporary password" /var/log/mysqld.log | tail -1 | awk '{print $NF}')

# Change root password
mysql --connect-expired-password -u root -p"$TEMP_PASS" \
  -e "ALTER USER 'root'@'localhost' IDENTIFIED BY '${ROOT_PASS}';"

# Write InnoDB Cluster compatible configuration
# The custom binlog location must exist and be owned by mysql,
# otherwise mysqld fails to restart
mkdir -p /var/log/mysql
chown mysql:mysql /var/log/mysql

cat > /etc/my.cnf.d/innodb_cluster.cnf << EOF
[mysqld]
server-id              = ${SERVER_ID}
bind-address           = 0.0.0.0
# Advertise the private-network IP instead of the NAT interface
report_host            = ${MY_IP}
gtid_mode              = ON
enforce_gtid_consistency = ON
log_bin                = /var/log/mysql/mysql-bin.log
binlog_format          = ROW
log_slave_updates      = ON
master_info_repository = TABLE
relay_log_info_repository = TABLE
transaction_write_set_extraction = XXHASH64
plugin-load-add        = group_replication.so
EOF

systemctl restart mysqld

# Setup remote root and cluster admin user
mysql -u root -p"$ROOT_PASS" << SQL
CREATE USER IF NOT EXISTS 'root'@'%' IDENTIFIED BY '${ROOT_PASS}';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
CREATE USER IF NOT EXISTS 'clusteradmin'@'%' IDENTIFIED BY 'Cluster@123!';
GRANT ALL PRIVILEGES ON *.* TO 'clusteradmin'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
SQL

echo "=== ${NODE_NAME} ready. Use MySQL Shell to create cluster. ==="
echo "MySQL Shell: mysqlsh root@${MY_IP}:3306"
BASH — MySQL Shell Create Cluster
# SSH into ic1
vagrant ssh ic1

# Launch MySQL Shell
mysqlsh

# Inside MySQL Shell — create cluster
shell.connect('root@ic1:3306')

# Check instance configuration
dba.checkInstanceConfiguration('root@ic1:3306')

# Configure all instances (run for each)
dba.configureInstance('root@ic1:3306', {restart: true})
dba.configureInstance('root@ic2:3306', {restart: true})
dba.configureInstance('root@ic3:3306', {restart: true})

# The restarts drop the current session — reconnect, then create
# the cluster on ic1
shell.connect('root@ic1:3306')
var cluster = dba.createCluster('MyCluster')

# Add ic2 and ic3 to cluster
cluster.addInstance('root@ic2:3306', {recoveryMethod: 'clone'})
cluster.addInstance('root@ic3:3306', {recoveryMethod: 'clone'})

# Check cluster status
cluster.status()

# Expected output shows:
# ic1: PRIMARY
# ic2: ONLINE (secondary)
# ic3: ONLINE (secondary)
# Leave MySQL Shell
\q
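The same steps can be scripted end-to-end, which is handy when rebuilding the lab. A sketch, run on ic1; the password Root@123! is percent-encoded (Root%40123%21) so it can be embedded in the connection URIs, which is acceptable for a throwaway lab:

BASH — Scripted cluster creation (one-shot)
# URI prefix with the embedded root password (@ = %40, ! = %21)
URI='root:Root%40123%21@'

# Prepare all three instances; restart applies the required settings
mysqlsh --js -e "
dba.configureInstance('${URI}ic1:3306', {restart: true});
dba.configureInstance('${URI}ic2:3306', {restart: true});
dba.configureInstance('${URI}ic3:3306', {restart: true});
"
sleep 30   # rough wait for the restarts to finish

# Create the cluster and add the secondaries via clone recovery
mysqlsh --js -e "
shell.connect('${URI}ic1:3306');
var cluster = dba.createCluster('MyCluster');
cluster.addInstance('${URI}ic2:3306', {recoveryMethod: 'clone'});
cluster.addInstance('${URI}ic3:3306', {recoveryMethod: 'clone'});
print(JSON.stringify(cluster.status(), null, 2));
"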
BASH — MySQL Router Setup
# On ic1 — install and bootstrap MySQL Router
vagrant ssh ic1

sudo dnf install -y mysql-router-community

sudo mysqlrouter --bootstrap root@ic1:3306 \
    --user=mysqlrouter \
    --directory=/var/lib/mysqlrouter

# Start MySQL Router
sudo mysqlrouter --config /var/lib/mysqlrouter/mysqlrouter.conf &

# Test connections via Router
# R/W port (goes to PRIMARY)
mysql -u root -p'Root@123!' -h 127.0.0.1 -P 6446 -e "SELECT @@hostname, @@read_only;"

# R/O port (goes to SECONDARY)
mysql -u root -p'Root@123!' -h 127.0.0.1 -P 6447 -e "SELECT @@hostname, @@read_only;"
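Started with &, Router dies with the SSH session and will not survive a reboot. The directory bootstrap also generates start.sh/stop.sh helpers in /var/lib/mysqlrouter; alternatively, wrap Router in a small systemd unit. A sketch — the unit name mysqlrouter-lab is ours, not something the package ships:

BASH — Keep Router running (optional)
sudo tee /etc/systemd/system/mysqlrouter-lab.service << 'EOF'
[Unit]
Description=MySQL Router (InnoDB Cluster lab)
After=network.target

[Service]
User=mysqlrouter
ExecStart=/usr/bin/mysqlrouter --config /var/lib/mysqlrouter/mysqlrouter.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now mysqlrouter-lab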
BASH — Failover Test
# Check cluster status — dba.getCluster() needs an open session,
# so pass a connection URI (mysqlsh prompts for the root password)
vagrant ssh ic1 -c "mysqlsh root@localhost:3306 --js -e \"
var c = dba.getCluster();
print(JSON.stringify(c.status(), null, 2));
\""

# Test failover — stop primary ic1
vagrant ssh ic1 -c "sudo systemctl stop mysqld"

# InnoDB Cluster auto-elects a new primary — check from ic2
vagrant ssh ic2 -c "mysqlsh root@localhost:3306 --js -e \"
var c = dba.getCluster();
print(JSON.stringify(c.status(), null, 2));
\""

# Rejoin ic1 after restart — with group_replication_start_on_boot=ON
# (set by configureInstance) it usually rejoins on its own; rejoinInstance
# covers the case where it comes back as (MISSING)
vagrant ssh ic1 -c "sudo systemctl start mysqld"
vagrant ssh ic2 -c "mysqlsh root@localhost:3306 --js -e \"
var c = dba.getCluster();
c.rejoinInstance('root@ic1:3306');
\""
💡 Note: InnoDB Cluster uses the MySQL clone plugin when adding new members — a physical copy of the donor's data directory, far faster than replaying binary logs for anything but tiny datasets. The entire database is cloned from an existing member automatically when you run addInstance() with recoveryMethod: 'clone'.
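Clone progress is observable while addInstance() runs. A quick check on the joining node, using MySQL 8.0's clone instrumentation table:

BASH — Monitor clone progress
# On the recipient, STATE moves from "In Progress" to "Completed"
mysql -u root -p'Root@123!' -e "
SELECT STATE, BEGIN_TIME, END_TIME, ERROR_NO
FROM performance_schema.clone_status;"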