
Ansible Dynamic Inventory

Generating the host list at runtime instead of maintaining a static file — the modern plugin approach for cloud providers, the legacy script approach, and how to combine static and dynamic inventory in the same project.

With cloud fleets, IPs and host counts change daily. A static hosts.ini goes stale the moment auto-scaling kicks in. Dynamic inventory queries the truth source (AWS, Azure, vSphere, Kubernetes) and builds the host list fresh on every run.

[Diagram: ansible-playbook -i aws_ec2.yml loads an inventory plugin (e.g. amazon.aws.aws_ec2), which makes an API call to the cloud source (AWS EC2 / Azure / GCP / k8s / vSphere, etc.) and resolves hosts and groups — such as [tag_Role_db] and [tag_Role_web] with their instance IDs — at runtime. The inventory is built fresh on every run and auto-tracks fleet changes.]

Install the AWS collection, then drop a config file telling the plugin what to query:

BASH — install & configure
# Install the collection
ansible-galaxy collection install amazon.aws

# Install the Python deps
pip install boto3 botocore

# Set credentials (AWS CLI conventions)
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_REGION="ap-south-1"
YAML — inventories/aws_ec2.yml
---
# The filename MUST end with .aws_ec2.yml or .aws_ec2.yaml
# OR include the plugin: line as below

plugin: amazon.aws.aws_ec2

regions:
  - ap-south-1
  - us-east-1

# Filter to live instances only
filters:
  instance-state-name: running
  tag:Environment: production

# Build groups from tags
keyed_groups:
  - prefix: tag
    key: tags                # creates tag_Role_db, tag_Environment_production, ...
  - prefix: az
    key: placement.availability_zone

# Name each host by its private DNS name (ansible_host below supplies the connect IP)
hostnames:
  - private-dns-name

# Compose extra vars from instance attributes
compose:
  ansible_host: private_ip_address
  instance_type: instance_type
  ec2_az: placement.availability_zone
BASH — use it
# Same -i flag, just point at the YAML config
ansible-playbook site.yml -i inventories/aws_ec2.yml

# Verify what came back
ansible-inventory -i inventories/aws_ec2.yml --graph

# Sample output
# @all:
#   |--@aws_ec2:
#   |  |--ip-10-0-1-12.ap-south-1.compute.internal
#   |  |--ip-10-0-1-15.ap-south-1.compute.internal
#   |--@tag_Role_db:
#   |  |--ip-10-0-1-12.ap-south-1.compute.internal
💡 Tip: Combine the EC2 plugin with group_vars/tag_Role_db.yml — Ansible auto-applies that file to every instance tagged Role=db. No manual mapping needed.
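A minimal sketch of what that group_vars file might hold — the variable names below are illustrative, not from this project:

```yaml
# inventories/group_vars/tag_Role_db.yml
# Ansible applies this file to every instance in the tag_Role_db group,
# i.e. every EC2 instance tagged Role=db discovered by the plugin.
ansible_user: ec2-user
postgres_version: "15"          # illustrative role variable
backup_window: "02:00-04:00"    # illustrative role variable
```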
Provider       Plugin                                   Collection
AWS EC2        amazon.aws.aws_ec2                       amazon.aws
Azure          azure.azcollection.azure_rm              azure.azcollection
GCP            google.cloud.gcp_compute                 google.cloud
OpenStack      openstack.cloud.openstack                openstack.cloud
vSphere        community.vmware.vmware_vm_inventory     community.vmware
Kubernetes     kubernetes.core.k8s                      kubernetes.core
DigitalOcean   community.digitalocean.digitalocean      community.digitalocean
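The same config pattern carries across providers. As a sketch, an Azure source might look like this — it assumes the azure.azcollection collection is installed, credentials are exported per Azure CLI conventions, and a resource group named production-rg exists:

```yaml
# inventories/azure_rm.yml — hypothetical Azure equivalent of the EC2 config
plugin: azure.azcollection.azure_rm

# Limit discovery to one resource group (name is an assumption)
include_vm_resource_groups:
  - production-rg

# Same keyed_groups mechanism as the EC2 plugin
keyed_groups:
  - prefix: tag
    key: tags
```

Only the plugin name and provider-specific query options change; keyed_groups, compose, and hostnames behave the same because all these plugins share Ansible's "constructed" machinery.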

Before inventory plugins existed (pre-Ansible 2.4), people wrote shell or Python scripts that printed JSON. Scripts still work — point -i at any executable file and Ansible runs it, expecting JSON output:

PYTHON — inventories/from_db.py
#!/usr/bin/env python3
"""
Reads our internal CMDB and prints inventory JSON Ansible can consume.
Make this file executable: chmod +x from_db.py
"""
import json
import sys

import requests

resp = requests.get("https://cmdb.example.com/api/hosts", timeout=10).json()

inv = {
    "_meta": {"hostvars": {}}
}

for h in resp["hosts"]:
    role = h["role"]
    if role not in inv:
        inv[role] = {"hosts": []}
    inv[role]["hosts"].append(h["fqdn"])
    inv["_meta"]["hostvars"][h["fqdn"]] = {
        "ansible_host": h["ip"],
        "datacenter":   h["dc"],
    }

# Required interface: --list prints the full inventory,
# --host <name> prints one host's vars (rarely called when _meta is present)
if "--list" in sys.argv:
    print(json.dumps(inv))
elif "--host" in sys.argv:
    name = sys.argv[sys.argv.index("--host") + 1]
    print(json.dumps(inv["_meta"]["hostvars"].get(name, {})))
⚠ Warning: Inventory plugins are the modern path — they cache, they integrate with group_vars properly, and they don't fork a subprocess on every run. Use scripts only when no plugin exists for your source.

Real projects often have a static bastion host plus dynamically discovered managed nodes. Point -i at a directory containing both — Ansible reads them all and merges:

DIR — mixed inventory directory
inventories/production/
├── hosts.ini             # static — bastions, jumphosts, fixed nodes
├── aws_ec2.yml           # dynamic — production EC2 fleet
├── group_vars/
│   ├── all.yml
│   ├── tag_Role_db.yml   # applies to dynamic EC2 group
│   └── bastion.yml       # applies to static group
└── host_vars/

ansible-playbook site.yml -i inventories/production/
# → loads BOTH static hosts AND EC2 instances, with merged groups
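The static half of that directory doesn't have to be INI. A YAML equivalent for the bastion group could look like this — the hostname and IP are placeholders:

```yaml
# inventories/production/hosts.yml — static YAML inventory (hypothetical hosts)
all:
  children:
    bastion:
      hosts:
        bastion01.example.com:
          ansible_host: 203.0.113.10
```

Either format merges with the dynamic EC2 source exactly the same way; pick whichever your team already maintains.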

Dynamic queries are slow — every ansible-playbook hits the cloud API. Enable caching to reuse the same query for a configurable time window:

INI — ansible.cfg with inventory cache
[defaults]
inventory = ./inventories/production/

[inventory]
cache = True
cache_plugin = jsonfile
cache_timeout = 3600
cache_connection = /tmp/ansible_inventory_cache
✅ Tip: With caching on, only the first run pays the full AWS query; every run within the next hour reads from disk in milliseconds. Pass --flush-cache on the CLI to bypass the cache when you need a fresh look.
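Caching can also be scoped to a single dynamic source instead of the whole project by putting the same keys inside the plugin config file itself — a sketch for the EC2 source (the timeout value is arbitrary):

```yaml
# inventories/aws_ec2.yml — per-source cache settings
plugin: amazon.aws.aws_ec2
regions:
  - ap-south-1

cache: true
cache_plugin: jsonfile
cache_timeout: 3600
cache_connection: /tmp/ansible_inventory_cache
```

This keeps the slow EC2 lookup cached while leaving any static sources in the same inventory directory untouched.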