Vagrant OEL 8 DevOps · Galera · PXC · Database Labs

Vagrant Galera Cluster (PXC) Lab on OEL 8

A 3-node Percona XtraDB Cluster (Galera) lab built with Vagrant on OEL 8, covering synchronous multi-master replication, wsrep status monitoring, and multi-master write testing.

Percona XtraDB Cluster (PXC) is a synchronous multi-master MySQL cluster based on Galera replication. Every node in the cluster can accept reads and writes simultaneously — there is no single point of write failure. This Vagrant lab creates a 3-node PXC cluster on OEL 8.

pxc1   192.168.56.61   Bootstrap node — first start
pxc2   192.168.56.62   Joins pxc1
pxc3   192.168.56.63   Joins cluster

All nodes: wsrep_cluster_size = 3 · wsrep_local_state = SYNCED (4) · READ + WRITE on any node
Ruby — PXC Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box              = "generic/oracle8"
  config.vm.box_check_update = false

  CLUSTER_NAME = "pxc-lab"
  NODES = [
    { name: "pxc1", ip: "192.168.56.61", port: 13361 },
    { name: "pxc2", ip: "192.168.56.62", port: 13362 },
    { name: "pxc3", ip: "192.168.56.63", port: 13363 },
  ]
  CLUSTER_IPS = NODES.map{|n| n[:ip]}.join(",")

  NODES.each do |node|
    config.vm.define node[:name] do |vm|
      vm.vm.hostname = node[:name]
      vm.vm.network "private_network",  ip: node[:ip]
      vm.vm.network "forwarded_port",   guest: 3306, host: node[:port]

      vm.vm.provider "virtualbox" do |vb|
        vb.name   = "PXC-#{node[:name]}"
        vb.memory = 2048
        vb.cpus   = 2
      end
      vm.vm.provider "parallels" do |prl|
        prl.name   = "PXC-#{node[:name]}"
        prl.memory = 2048
        prl.cpus   = 2
        prl.update_guest_tools = false
      end

      vm.vm.provision "shell",
        path: "scripts/pxc_setup.sh",
        args: [node[:name], node[:ip], CLUSTER_IPS, CLUSTER_NAME]
    end
  end
end
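Each node maps guest port 3306 to a distinct host port (13361–13363), so a mysql client installed on the host (an assumption — the lab does not install one there) can reach any node directly, without `vagrant ssh`. A small sketch that prints the per-node connection commands:

```shell
#!/bin/bash
# Print a host-side mysql command for each forwarded port defined in the Vagrantfile
for entry in pxc1:13361 pxc2:13362 pxc3:13363; do
  node=${entry%%:*}   # text before the colon
  port=${entry##*:}   # text after the colon
  echo "mysql -h 127.0.0.1 -P ${port} -u root -p  # -> ${node}"
done
```

Because PXC is multi-master, connecting to any of the three ports gives a fully writable endpoint.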
BASH — pxc_setup.sh
#!/bin/bash
# scripts/pxc_setup.sh
set -e

NODE_NAME=${1:-pxc1}
MY_IP=${2:-192.168.56.61}
CLUSTER_IPS=${3:-"192.168.56.61,192.168.56.62,192.168.56.63"}
CLUSTER_NAME=${4:-pxc-lab}
ROOT_PASS="Root@123!"
SST_PASS="SSTpass@123!"

echo "=== Setting up PXC node: ${NODE_NAME} ==="

# Add Percona repository
rpm --import https://repo.percona.com/yum/RPM-GPG-KEY-Percona
dnf install -y https://repo.percona.com/yum/percona-release-latest.noarch.rpm
percona-release setup pxc-80
dnf install -y percona-xtradb-cluster

# Open Galera replication/SST ports if firewalld is active (the OEL 8 default)
if systemctl is-active --quiet firewalld; then
    firewall-cmd --permanent --add-port={3306,4444,4567,4568}/tcp
    firewall-cmd --reload
fi

# Write PXC config
cat > /etc/my.cnf.d/pxc.cnf << EOF
[mysqld]
server-id                      = $(echo ${MY_IP} | awk -F. '{print $4}')
bind-address                   = 0.0.0.0
datadir                        = /var/lib/mysql
socket                         = /var/lib/mysql/mysql.sock

# Galera settings
wsrep_on                       = ON
wsrep_provider                 = /usr/lib64/galera4/libgalera_smm.so
wsrep_cluster_name             = "${CLUSTER_NAME}"
wsrep_cluster_address          = "gcomm://${CLUSTER_IPS}"
wsrep_node_name                = "${NODE_NAME}"
wsrep_node_address             = "${MY_IP}"
wsrep_sst_method               = xtrabackup-v2
# Note: wsrep_sst_auth was removed in PXC 8.0; SST authenticates via an internal user

# PXC settings
pxc_strict_mode                = ENFORCING
pxc-encrypt-cluster-traffic    = OFF

# InnoDB
innodb_buffer_pool_size        = 512M
innodb_flush_log_at_trx_commit = 0
innodb_flush_method            = O_DIRECT

# Binary log
log_bin                        = /var/log/mysql/mysql-bin.log
binlog_format                  = ROW
EOF

# Binary log directory must exist (and be writable by mysql) before mysqld starts
mkdir -p /var/log/mysql
chown mysql:mysql /var/log/mysql

# Bootstrap first node only
if [ "$NODE_NAME" = "pxc1" ]; then
    systemctl start mysql@bootstrap.service

    TEMP_PASS=$(grep "temporary password" /var/log/mysqld.log | tail -1 | awk '{print $NF}')
    mysql --connect-expired-password -u root -p"$TEMP_PASS" << EOF
ALTER USER 'root'@'localhost' IDENTIFIED BY '${ROOT_PASS}';
CREATE USER 'root'@'%' IDENTIFIED BY '${ROOT_PASS}';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
CREATE USER 'sst'@'localhost' IDENTIFIED BY '${SST_PASS}';
GRANT PROCESS, RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sst'@'localhost';
CREATE USER 'monitor'@'%' IDENTIFIED BY 'Monitor@123!';
GRANT SELECT, PROCESS ON *.* TO 'monitor'@'%';
FLUSH PRIVILEGES;
EOF
    echo "=== pxc1 bootstrapped ==="
else
    echo "Waiting 40s for pxc1 to be ready..."
    sleep 40
    systemctl start mysql
    echo "=== ${NODE_NAME} joined cluster ==="
fi

# Show cluster status
mysql -u root -p"$ROOT_PASS" -e "SHOW STATUS LIKE 'wsrep%';" | grep -E "wsrep_cluster_size|wsrep_local_state_comment|wsrep_connected"

echo "=== PXC ${NODE_NAME} ready ==="
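The server-id value written into pxc.cnf above comes from command substitution inside the heredoc: awk splits the node IP on dots and prints the last octet. The expression can be checked standalone:

```shell
#!/bin/bash
# Extract the last octet of an IPv4 address with awk, as pxc_setup.sh does
MY_IP=192.168.56.61
SERVER_ID=$(echo "${MY_IP}" | awk -F. '{print $4}')
echo "server-id = ${SERVER_ID}"   # prints: server-id = 61
```

This only produces unique ids because all three nodes sit in the same /24; with hosts spread across subnets you would need a different scheme.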
BASH — Verify PXC
# Start in order
vagrant up --no-parallel

# Check cluster status
vagrant ssh pxc1 -c "mysql -u root -pRoot@123! -e \"
  SHOW STATUS LIKE 'wsrep_cluster_size';
  SHOW STATUS LIKE 'wsrep_local_state_comment';
  SHOW STATUS LIKE 'wsrep_connected';\""

# Expected: wsrep_cluster_size = 3, wsrep_local_state_comment = Synced

# Test multi-master write (pxc_strict_mode = ENFORCING rejects DML on tables without a primary key)
vagrant ssh pxc1 -c "mysql -u root -pRoot@123! -e \"
  CREATE DATABASE IF NOT EXISTS testdb;
  CREATE TABLE IF NOT EXISTS testdb.t1 (id INT PRIMARY KEY, node VARCHAR(20));
  INSERT INTO testdb.t1 VALUES (1, 'from_pxc1');\""

vagrant ssh pxc2 -c "mysql -u root -pRoot@123! -e \"INSERT INTO testdb.t1 VALUES (2, 'from_pxc2');\""

# Verify all nodes see all data
vagrant ssh pxc3 -c "mysql -u root -pRoot@123! -e \"SELECT * FROM testdb.t1;\""
💡 Note: PXC uses wsrep_sst_method=xtrabackup-v2 for State Snapshot Transfer. When a new node joins, it receives a full backup from an existing node. First join of pxc2 and pxc3 may take 30–60 seconds depending on data size.
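💡 Note: if all three nodes are shut down, Galera refuses a plain restart — the node that left the cluster last must be bootstrapped again. That node is marked by safe_to_bootstrap: 1 in its Galera state file (illustrative content below; the uuid and seqno values are placeholders):

```
# /var/lib/mysql/grastate.dat
# GALERA saved state
version: 2.1
uuid:    <cluster-uuid>
seqno:   <last-committed-seqno>
safe_to_bootstrap: 1
```

On that node run systemctl start mysql@bootstrap.service, then start mysql normally on the other two — the same sequence pxc_setup.sh used during the first provisioning.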