Ansible OEL 8 DevOps · OEL 8 · Connections

Ansible Connection Plugins — SSH, become, ssh_args

The connection plugin layer — when to use ssh vs paramiko vs local, how Ansible reuses SSH connections via control sockets, the ssh_args knobs that make playbooks 5–10× faster on large fleets, and the per-host overrides for jump-box topologies.

A connection plugin is the layer that takes a "run this Python on the remote" request and actually makes it happen. The default is OpenSSH. Three other plugins cover the common alternatives — paramiko for pure-Python SSH, local for the control node itself, and winrm / psrp for Windows.

[Diagram — how Ansible reaches a managed node: the control node (your laptop / CI) feeds a playbook and inventory into a connection plugin — ssh (default, OpenSSH), paramiko (pure-Python SSH), local (no SSH, same machine), or winrm / psrp (Windows targets) — which reaches the managed node (db1.example.com), where Python 3 runs the module. Default is ssh; override per host with ansible_connection.]

The ssh plugin wraps the system's OpenSSH binary. This is what you want 99% of the time on Linux managed nodes: it's faster than paramiko (OpenSSH supports ControlPersist connection reuse), and it handles every SSH feature, including agent forwarding and ProxyJump.

YAML — connection variables in inventory
[databases]
db1.example.com ansible_user=deploy
db2.example.com ansible_user=deploy

[databases:vars]
ansible_connection=ssh                # default — can be omitted
ansible_port=22
ansible_ssh_private_key_file=~/.ssh/id_deploy

The paramiko plugin is a pure-Python SSH client. Use it when you're running Ansible from a Windows control node without WSL, or from a container without an SSH client installed. Otherwise stick with ssh — paramiko is slower because it lacks connection multiplexing.

INI — using paramiko
[oldhosts]
legacy.example.com ansible_connection=paramiko

# Or globally in ansible.cfg if you have to
[defaults]
transport = paramiko

Use connection: local when the task should run on the same machine that's running Ansible — no SSH, no remote user. Common cases: rendering a config to disk, calling a cloud API, building artefacts before pushing them.

YAML — task-level local connection
---
- hosts: localhost
  connection: local        # play-level
  tasks:
    - name: Render TLS cert
      ansible.builtin.template:
        src: cert.pem.j2
        dest: ./build/cert.pem

    - name: Push to AWS Secrets Manager
      community.aws.aws_secret:
        name: prod/tls
        secret: "{{ lookup('file', './build/cert.pem') }}"
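
connection: local applies to every task in the play. When only one task needs to run on the control node while the rest of the play targets remote hosts, delegate_to does the same thing per task — a minimal sketch (the zone-file task is a hypothetical example):

```yaml
# The play targets the databases group over SSH as usual …
- hosts: databases
  tasks:
    - name: Record each host in a local zone file (hypothetical example)
      ansible.builtin.lineinfile:
        path: ./zones/db.example.com.zone
        line: "{{ inventory_hostname }} IN A {{ ansible_host }}"
      delegate_to: localhost    # … but this one task runs on the control node
```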

become sits on top of the connection plugin. After SSH lands you on the host as the connection user, become escalates to another user (usually root) via sudo, su, doas, or a few corporate-grade alternatives:

| become_method | How it escalates               | Most common on  |
|---------------|--------------------------------|-----------------|
| sudo          | Default — calls sudo           | Linux           |
| su            | Calls su -                     | Older Unix      |
| doas          | OpenBSD's sudo replacement     | OpenBSD         |
| pbrun         | BeyondTrust Privilege Manager  | Enterprise      |
| runas         | Windows RunAs                  | Windows targets |

YAML — become syntax recap
---
- hosts: databases
  remote_user: deploy        # connection user (SSH login)
  become: true               # escalate after SSH
  become_user: root          # to whom (default: root)
  become_method: sudo        # how (default: sudo)
  tasks:
    - name: Install MySQL
      ansible.builtin.dnf:
        name: mysql-server
        state: present

    - name: Run a query as the mysql account, not root
      ansible.builtin.command: mysql -e "SELECT 1"
      become_user: mysql      # task-level override

Without tuning, every Ansible task performs three SSH operations: connect, copy the module over, execute it. On a 200-task playbook against 20 hosts that's 12,000 SSH operations. Two settings eliminate the bulk of them:

  1. ControlPersist — keeps an SSH connection open between tasks (multiplexed via a UNIX socket). One connection per host for the whole playbook instead of one per task.
  2. Pipelining — sends the module over the existing SSH connection's stdin instead of SFTP'ing it then SSH'ing again to run it. Halves SSH ops per task.
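
The arithmetic behind those claims, as a rough sketch (200 tasks × 20 hosts are the example figures from the text; the operation counts are a simplification):

```python
tasks, hosts = 200, 20

# Untuned: every task on every host pays connect + copy module + execute.
untuned_ops = tasks * hosts * 3
print("untuned SSH operations:", untuned_ops)   # 12000

# ControlPersist: one real connection per host for the whole run.
# Pipelining: one remote exec per task instead of copy-then-exec.
tuned_ops = hosts + tasks * hosts
print("tuned SSH operations:", tuned_ops)       # 4020
```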
INI — production ansible.cfg for speed
[defaults]
forks = 25                                  # parallel hosts (default: 5)

[ssh_connection]
# pipelining = ~50% fewer SSH ops per task
pipelining = True

# control multiplexing — keep SSH connection alive between tasks
ssh_args = -o ControlMaster=auto -o ControlPersist=300s -o PreferredAuthentications=publickey

# directory for the multiplex sockets (one per host)
control_path_dir = ~/.ansible/cp

# on OpenSSH 9.0+ control nodes, -O forces the original SCP protocol
# (scp now defaults to SFTP, which very old managed nodes may not serve)
scp_extra_args = -O
⚠ Warning: Pipelining requires requiretty to be off in /etc/sudoers on the managed nodes. OEL 8 is fine by default. Older RHEL 6 / CentOS 6 nodes need Defaults !requiretty added or pipelining will fail.

For hosts behind a bastion, pass ProxyJump via SSH args. Per-host config:

INI — bastion + private hosts
[bastion]
bastion.example.com ansible_user=admin ansible_port=2222

[behind_bastion]
db1 ansible_host=10.0.1.10
db2 ansible_host=10.0.1.11
db3 ansible_host=10.0.1.12

[behind_bastion:vars]
ansible_ssh_common_args="-o ProxyJump=admin@bastion.example.com:2222"
ansible_user=deploy
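
ProxyJump needs OpenSSH 7.3 or newer on the control node. On older control nodes, the equivalent ProxyCommand form (same bastion assumed) is:

```
[behind_bastion:vars]
ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -p 2222 admin@bastion.example.com"'
ansible_user=deploy
```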

Or wire it once in ~/.ssh/config so every tool benefits, not just Ansible:

INI — ~/.ssh/config
Host bastion
  HostName bastion.example.com
  User     admin
  Port     2222

Host db*
  ProxyJump bastion
  User      deploy
  # Lab convenience — disables host-key verification; avoid in production:
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
BASH — debug connection problems
# Increase verbosity progressively (-v through -vvvv shows SSH command itself)
ansible db1 -m ping -vvv

# Test SSH directly with the same args Ansible uses
ssh -o ControlMaster=auto -o ControlPersist=300s deploy@db1.example.com hostname

# Show the exact ssh invocation Ansible builds ("EXEC ssh ..." lines)
ansible db1 -m ping -vvv 2>&1 | grep "EXEC"

# Smoke-test the whole stack: connection + Python + sudo
ansible db1 -m ansible.builtin.command -a "whoami" --become
# Should print: root

Speed-tuning checklist:
  • pipelining = True — ~50% speedup, free win on OEL 8
  • ControlPersist = 300s — connection reuse across tasks
  • forks = 25–50 — parallel host count (depends on control node CPU and SSH server limits)
  • fact_caching = jsonfile — skip fact gathering on subsequent runs (page 4)
  • strategy = mitogen_linear (optional third-party plugin) — 2–6× speedup for heavy playbooks
  • callbacks_enabled = profile_tasks — find the slow tasks first
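
The last two checklist items as an ansible.cfg fragment — values are illustrative, not tuned recommendations:

```
[defaults]
# cache gathered facts on disk so later runs can skip gathering
fact_caching = jsonfile
fact_caching_connection = ~/.ansible/factcache
fact_caching_timeout = 86400        # seconds — re-gather after a day
callbacks_enabled = profile_tasks   # per-task timing summary at end of run
```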
✅ Tip: End of Section 4. Inventory + secrets is the foundation everything later builds on. With static + dynamic + plugins + Vault + tuned SSH, you can run any playbook from this series at scale.

You've covered every layer of how Ansible figures out where to go and how to get there: static INI/YAML inventory, dynamic plugins, transformation plugins (constructed, generator), Ansible Vault for encryption, vault discipline for git and CI/CD, and connection plugins with the SSH knobs that make large playbooks fly.

Next — Section 5: Database Labs (19 pages, 32–50). This is the heart of the series — actual playbooks for MySQL 8, master-replica replication, Group Replication, InnoDB Cluster, ProxySQL, Galera/PXC, PostgreSQL, Patroni, Oracle 19c, MongoDB. Real end-to-end automation against real labs.