ProxySQL · MySQL · DBA · High Availability

ProxySQL Cluster Setup & Config Sync

Configure a ProxySQL cluster for high availability. Set up cluster credentials, add nodes, verify sync, and understand checksum-based config propagation.

ProxySQL can be deployed as a cluster where multiple ProxySQL nodes automatically synchronize their configuration with each other. This provides high availability for the ProxySQL layer itself — if one ProxySQL node fails, others continue serving traffic with the same configuration.

  [ App Servers ] ──── Load Balancer ──── [ ProxySQL Node 1 :6033 ]
                                      └── [ ProxySQL Node 2 :6033 ]
                                      └── [ ProxySQL Node 3 :6033 ]
                                                │
                                          Config Sync (6032)
                                                │
                              ┌─────────────────┴──────────────────┐
                        [ MySQL Master ]               [ MySQL Replicas ]
SQL — Cluster Credentials (All Nodes)
-- On ALL ProxySQL nodes, set cluster username/password
UPDATE global_variables SET variable_value='proxycluster'
WHERE variable_name='admin-cluster_username';

UPDATE global_variables SET variable_value='ClusterPass123!'
WHERE variable_name='admin-cluster_password';

-- Set check intervals
UPDATE global_variables SET variable_value='1000'
WHERE variable_name='admin-cluster_check_interval_ms';

UPDATE global_variables SET variable_value='2'
WHERE variable_name='admin-cluster_check_status_frequency';

-- Auto-save configuration synced from other nodes to disk
UPDATE global_variables SET variable_value='1'
WHERE variable_name='admin-cluster_mysql_query_rules_save_to_disk';

UPDATE global_variables SET variable_value='1'
WHERE variable_name='admin-cluster_mysql_servers_save_to_disk';

UPDATE global_variables SET variable_value='1'
WHERE variable_name='admin-cluster_mysql_users_save_to_disk';

UPDATE global_variables SET variable_value='1'
WHERE variable_name='admin-cluster_mysql_variables_save_to_disk';

LOAD ADMIN VARIABLES TO RUNTIME;
SAVE ADMIN VARIABLES TO DISK;
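When provisioning several nodes, the statements above are usually generated rather than typed by hand. A minimal Python sketch (the variable names and values mirror the block above; the `cluster_config_sql` helper is hypothetical):

```python
# Sketch: generate the admin SQL above programmatically, e.g. for a
# provisioning script. The admin-cluster_* names are real ProxySQL
# variables; the helper function itself is a made-up example.

CLUSTER_VARS = {
    "admin-cluster_username": "proxycluster",
    "admin-cluster_password": "ClusterPass123!",
    "admin-cluster_check_interval_ms": "1000",
    "admin-cluster_check_status_frequency": "2",
    "admin-cluster_mysql_query_rules_save_to_disk": "1",
    "admin-cluster_mysql_servers_save_to_disk": "1",
    "admin-cluster_mysql_users_save_to_disk": "1",
    "admin-cluster_mysql_variables_save_to_disk": "1",
}

def cluster_config_sql(variables: dict) -> list:
    """Emit one UPDATE per variable, then load/persist the admin layer."""
    stmts = [
        f"UPDATE global_variables SET variable_value='{value}' "
        f"WHERE variable_name='{name}';"
        for name, value in variables.items()
    ]
    stmts.append("LOAD ADMIN VARIABLES TO RUNTIME;")
    stmts.append("SAVE ADMIN VARIABLES TO DISK;")
    return stmts

for stmt in cluster_config_sql(CLUSTER_VARS):
    print(stmt)
```

Feeding the output to each node's admin interface (port 6032) keeps the cluster settings identical everywhere.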
SQL — Add Cluster Nodes
-- On each ProxySQL node, add all other nodes to proxysql_servers
-- (Run this on ALL nodes)
INSERT INTO proxysql_servers (hostname, port, weight, comment)
VALUES ('192.168.1.10', 6032, 1, 'ProxySQL Node 1');

INSERT INTO proxysql_servers (hostname, port, weight, comment)
VALUES ('192.168.1.11', 6032, 1, 'ProxySQL Node 2');

INSERT INTO proxysql_servers (hostname, port, weight, comment)
VALUES ('192.168.1.12', 6032, 1, 'ProxySQL Node 3');

LOAD PROXYSQL SERVERS TO RUNTIME;
SAVE PROXYSQL SERVERS TO DISK;
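Since the same node list must be loaded on every node, it helps to render the INSERTs from one source of truth. A short Python sketch (hostnames and comments mirror the example above; the `proxysql_servers_sql` helper is hypothetical):

```python
# Sketch: render the proxysql_servers INSERTs from a single node list so
# the identical statement set can run on every node. Hostnames/comments
# are the example values from above; the helper is illustrative.

NODES = [
    ("192.168.1.10", "ProxySQL Node 1"),
    ("192.168.1.11", "ProxySQL Node 2"),
    ("192.168.1.12", "ProxySQL Node 3"),
]

def proxysql_servers_sql(nodes, port=6032, weight=1):
    """One INSERT per node, then load/persist the proxysql_servers layer."""
    stmts = [
        "INSERT INTO proxysql_servers (hostname, port, weight, comment) "
        f"VALUES ('{host}', {port}, {weight}, '{comment}');"
        for host, comment in nodes
    ]
    stmts.append("LOAD PROXYSQL SERVERS TO RUNTIME;")
    stmts.append("SAVE PROXYSQL SERVERS TO DISK;")
    return stmts

for stmt in proxysql_servers_sql(NODES):
    print(stmt)
```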
💡 Note: The cluster user must also be able to log in to each node's admin interface. Add it to admin-admin_credentials on every node (entries are semicolon-separated user:password pairs): UPDATE global_variables SET variable_value='admin:admin;proxycluster:ClusterPass123!' WHERE variable_name='admin-admin_credentials'; then LOAD ADMIN VARIABLES TO RUNTIME; SAVE ADMIN VARIABLES TO DISK;
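ProxySQL packs all admin logins into one semicolon-separated string of user:password pairs, which is easy to get wrong by hand. A tiny Python sketch of composing that value (the pair format is ProxySQL's real convention; the `admin_credentials` helper is hypothetical):

```python
# Sketch: compose the admin-admin_credentials value from (user, password)
# pairs. The 'user1:pass1;user2:pass2' format is ProxySQL's convention;
# the helper function is a made-up example.

def admin_credentials(pairs):
    return ";".join(f"{user}:{password}" for user, password in pairs)

value = admin_credentials([("admin", "admin"),
                           ("proxycluster", "ClusterPass123!")])
print(value)  # admin:admin;proxycluster:ClusterPass123!
```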
SQL — Verify Cluster
-- Check cluster node status
SELECT hostname, port, weight, active, comment
FROM proxysql_servers;

-- View runtime cluster nodes
SELECT hostname, port, weight
FROM runtime_proxysql_servers;

-- Check cluster metrics (sync status)
SELECT hostname, port, weight,
       response_time_ms, Uptime_s, last_check_ms,
       Queries, Client_Connections_connected
FROM stats.stats_proxysql_servers_metrics;

-- Check checksums (should match on all nodes when in sync)
SELECT name, version, epoch, checksum, changed_at, updated_at
FROM stats.stats_proxysql_servers_checksums;
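To check the last query's result across nodes automatically, compare each config module's checksum per node and flag any mismatch. A Python sketch with made-up sample rows (in practice you would fetch them from each node's admin interface; the checksum values are illustrative):

```python
# Sketch: detect config drift from stats_proxysql_servers_checksums-style
# rows. The rows and checksum values below are fabricated sample data.

from collections import defaultdict

# (hostname, module name, checksum) — one row per node per config module
ROWS = [
    ("192.168.1.10", "mysql_servers",     "0xAB11"),
    ("192.168.1.11", "mysql_servers",     "0xAB11"),
    ("192.168.1.12", "mysql_servers",     "0xAB11"),
    ("192.168.1.10", "mysql_query_rules", "0xCD22"),
    ("192.168.1.11", "mysql_query_rules", "0xCD22"),
    ("192.168.1.12", "mysql_query_rules", "0xEE33"),  # out of sync
]

def drifted_modules(rows):
    """Return module names whose checksum differs between nodes."""
    by_module = defaultdict(set)
    for _host, name, checksum in rows:
        by_module[name].add(checksum)
    return sorted(name for name, sums in by_module.items() if len(sums) > 1)

print(drifted_modules(ROWS))  # ['mysql_query_rules']
```

An empty result means every node agrees on every module, i.e. the cluster is in sync.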
Variable                                      Description
admin-cluster_username                        Username for cluster inter-node communication
admin-cluster_password                        Password for cluster communication
admin-cluster_check_interval_ms               How often nodes check each other (ms)
admin-cluster_check_status_frequency          Frequency of status checks
admin-cluster_mysql_query_rules_save_to_disk  Auto-save synced query rules to disk
admin-cluster_mysql_servers_save_to_disk      Auto-save synced servers to disk
admin-cluster_mysql_users_save_to_disk        Auto-save synced users to disk
admin-cluster_mysql_variables_save_to_disk    Auto-save synced variables to disk

ProxySQL cluster sync is checksum-based. Each node maintains checksums of its configuration sections. When a node detects that another node has a different (newer) checksum, it fetches the updated config from that node and applies it locally.

  • Changes are made on any one node
  • That node runs LOAD ... TO RUNTIME
  • Other nodes detect the checksum change within admin-cluster_check_interval_ms
  • Other nodes automatically pull and apply the change
  • If admin-cluster_*_save_to_disk=1, changes are also saved to disk on all nodes
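The propagation steps above can be sketched as a toy loop: a node pulls a module's config when a peer advertises a newer (version, epoch) for it. This is a simplified model, not ProxySQL's internals; the data structures and values are illustrative.

```python
# Toy model of checksum-based propagation: sync any module for which a
# peer advertises a higher (version, epoch). Structures are illustrative.

def maybe_sync(local, peer):
    """Pull and apply modules where the peer's (version, epoch) is newer."""
    synced = []
    for module, (version, epoch, config) in peer.items():
        l_version, l_epoch, _ = local.get(module, (0, 0, None))
        if (version, epoch) > (l_version, l_epoch):
            local[module] = (version, epoch, config)  # fetch + apply
            synced.append(module)
    return synced

local = {"mysql_servers":     (3, 1700000000, "old-servers"),
         "mysql_query_rules": (5, 1700000100, "rules-v5")}
peer  = {"mysql_servers":     (4, 1700000200, "new-servers"),
         "mysql_query_rules": (5, 1700000100, "rules-v5")}

print(maybe_sync(local, peer))    # ['mysql_servers']
print(local["mysql_servers"][2])  # new-servers
```

Only mysql_servers is pulled: its peer version is higher, while mysql_query_rules already matches, mirroring how ProxySQL syncs per config section rather than the whole config at once.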
⚠ Warning: ProxySQL Cluster does NOT automatically sync from disk on startup. If a node restarts, it loads its own disk config — make sure all nodes have the same config saved to disk.