A connection plugin is the layer that takes a "run this Python on the remote" request
and actually makes it happen. The default is OpenSSH. A few other plugins cover the
common alternatives: paramiko for pure-Python SSH, local for the control
node itself, and winrm / psrp for Windows.
The ssh plugin wraps the system's OpenSSH client. It's what you want 99% of the time for
Linux managed nodes: it's faster than paramiko (because OpenSSH supports
ControlPersist), and it handles every SSH feature, including agent forwarding and
ProxyJump.
[databases]
db1.example.com ansible_user=deploy
db2.example.com ansible_user=deploy
[databases:vars]
ansible_connection=ssh # default — can be omitted
ansible_port=22
ansible_ssh_private_key_file=~/.ssh/id_deploy
paramiko is a pure-Python SSH client. Use it when you're running Ansible from a Windows
control node without WSL, or from a container without an SSH client installed. Otherwise
stick with ssh: paramiko is slower because it lacks connection multiplexing.
[oldhosts]
legacy.example.com ansible_connection=paramiko
# Or globally in ansible.cfg if you have to
[defaults]
transport = paramiko
Use connection: local when the task should run on the same machine that's running
Ansible — no SSH, no remote user. Common cases: rendering a config to disk, calling a
cloud API, building artefacts before pushing them.
---
- hosts: localhost
  connection: local          # play-level
  tasks:
    - name: Render TLS cert
      ansible.builtin.template:
        src: cert.pem.j2
        dest: ./build/cert.pem

    - name: Push to AWS Secrets Manager
      community.aws.aws_secret:
        name: prod/tls
        secret: "{{ lookup('file', './build/cert.pem') }}"
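When only one task in an otherwise remote play needs to run on the control node, delegate_to: localhost does the same job at task level. A minimal sketch (the token file path is illustrative, not from the original):

```yaml
- hosts: databases
  tasks:
    - name: Read an API token on the control node, not the DB host
      ansible.builtin.command: cat ~/.config/api-token   # illustrative path
      delegate_to: localhost
      register: api_token
      changed_when: false
```

delegate_to redirects a single task's connection; connection: local redirects the whole play.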
become sits on top of the connection plugin. After SSH lands you on the host as
the connection user, become escalates to another user (usually root) via
sudo, su, doas, or a few corporate-grade alternatives:
| become_method | How it escalates | Most common on |
|---|---|---|
| sudo | Default; calls sudo | Linux |
| su | Calls su - | Older Unix |
| doas | OpenBSD's sudo replacement | OpenBSD |
| pbrun | BeyondTrust Privilege Manager | Enterprise |
| runas | Windows RunAs | Windows targets |
---
- hosts: databases
  remote_user: deploy        # connection user (SSH login)
  become: true               # escalate after SSH
  become_user: root          # to whom (default: root)
  become_method: sudo        # how (default: sudo)
  tasks:
    - name: Install MySQL
      ansible.builtin.dnf:
        name: mysql-server
        state: present

    - name: Run a query as the mysql account, not root
      ansible.builtin.command: mysql -e "SELECT 1"
      become_user: mysql     # task-level override
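Where sudo prompts for a password, either pass --ask-become-pass on the command line or set ansible_become_password per group, ideally in a vault-encrypted vars file. The variable names below are Ansible's real ones; the vaulted variable is a placeholder:

```yaml
# group_vars/databases.yml — vault-encrypt this file
ansible_become: true
ansible_become_method: sudo
ansible_become_password: "{{ vault_deploy_sudo_pass }}"   # placeholder vault variable
```

Keeping the password in vaulted group_vars means CI runs need no interactive prompt.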
Without tuning, every Ansible task does three SSH operations per host: connect, send the module, disconnect. On a 200-task playbook against 20 hosts that's 12,000 SSH operations. Two settings cut that by two-thirds or more:
- ControlPersist — keeps an SSH connection open between tasks (multiplexed via a UNIX socket). One connection per host for the whole playbook instead of one per task.
- Pipelining — sends the module over the existing SSH connection's stdin instead of SFTP'ing it then SSH'ing again to run it. Halves SSH ops per task.
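A back-of-envelope sketch of how those counts pencil out. The tallies are assumptions about what counts as one operation, not measurements:

```shell
tasks=200; hosts=20
ops_per_task=3                                  # connect, send module, disconnect
default_ops=$((tasks * hosts * ops_per_task))   # untuned total
# ControlPersist: one real connect per host for the whole run;
# pipelining: one exec over the shared connection per task per host
tuned_ops=$((hosts + tasks * hosts))
echo "untuned: $default_ops ops, tuned: $tuned_ops ops"
```

The exact percentage saved depends on how you count, but the shape is clear: per-task cost drops from three round trips to one.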
[defaults]
forks = 25 # parallel hosts (default: 5)
[ssh_connection]
# pipelining = ~50% fewer SSH ops per task
pipelining = True
# control multiplexing — keep SSH connection alive between tasks
ssh_args = -o ControlMaster=auto -o ControlPersist=300s -o PreferredAuthentications=publickey
# directory for the multiplex sockets (one per host)
control_path_dir = ~/.ansible/cp
# -O forces the classic SCP protocol; OpenSSH 9+ (RHEL 9 and newer) defaults to SFTP
scp_extra_args = -O
Pipelining requires requiretty to be off in /etc/sudoers on the managed nodes. OEL 8 is fine by default; older RHEL 6 / CentOS 6 nodes need Defaults !requiretty added, or pipelining will fail.

For hosts behind a bastion, pass ProxyJump via SSH args. Per-host config:
[bastion]
bastion.example.com ansible_user=admin ansible_port=2222
[behind_bastion]
db1 ansible_host=10.0.1.10
db2 ansible_host=10.0.1.11
db3 ansible_host=10.0.1.12
[behind_bastion:vars]
ansible_ssh_common_args="-o ProxyJump=admin@bastion.example.com:2222"
ansible_user=deploy
Or wire it once in ~/.ssh/config so every tool benefits, not just Ansible:
Host bastion
    HostName bastion.example.com
    User admin
    Port 2222

Host db*
    ProxyJump bastion
    User deploy
    StrictHostKeyChecking no       # convenient in labs; avoid in production
    UserKnownHostsFile /dev/null
# Increase verbosity progressively (-v through -vvvv; at -vvv Ansible prints the SSH command it runs)
ansible db1 -m ping -vvv
# Test SSH directly with the same args Ansible uses
ssh -o ControlMaster=auto -o ControlPersist=300s deploy@db1.example.com hostname
# Show the exact ssh invocation Ansible resolved (--list-hosts never connects, so run a real module)
ansible db1 -m ping -vvv 2>&1 | grep "EXEC ssh"
# Smoke-test the whole stack: connection + Python + sudo
ansible db1 -m ansible.builtin.command -a "whoami" --become
# Should print: root
- pipelining = True — ~50% speedup, free win on OEL 8
- ControlPersist = 300s — connection reuse across tasks
- forks = 25–50 — parallel host count (depends on control node CPU and SSH server limits)
- fact_caching = jsonfile — skip fact gathering on subsequent runs (page 4)
- strategy = mitogen_linear (optional plugin) — 2–6× speedup for heavy playbooks
- callbacks_enabled = profile_tasks — find the slow tasks first
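The fact-caching and profiling lines from that checklist translate into ansible.cfg roughly as follows (a sketch; the cache path and timeout are arbitrary choices, the option names are Ansible's real ones):

```ini
[defaults]
gathering = smart                               # skip fact gathering when the cache is warm
fact_caching = jsonfile
fact_caching_connection = ~/.ansible/factcache  # one JSON file per host
fact_caching_timeout = 86400                    # seconds (one day)
callbacks_enabled = profile_tasks               # per-task timing summary at playbook end
```

With smart gathering plus a warm cache, repeat runs skip the setup module entirely on unchanged hosts.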
You've covered every layer of how Ansible figures out where to go and how to get there: static INI/YAML inventory, dynamic plugins, transformation plugins (constructed, generator), Ansible Vault for encryption, vault discipline for git and CI/CD, and connection plugins with the SSH knobs that make large playbooks fly.
Next — Section 5: Database Labs (19 pages, 32–50). This is the heart of the series — actual playbooks for MySQL 8, master-replica replication, Group Replication, InnoDB Cluster, ProxySQL, Galera/PXC, PostgreSQL, Patroni, Oracle 19c, MongoDB. Real end-to-end automation against real labs.