Overview: Jinja2 Templating with REST APIs in Ansible Roles
What is it?
You use Jinja2 templates to dynamically build REST API payloads (JSON or XML) in your Ansible roles.
The Ansible uri module sends these payloads to create, check, or remove configurations on devices like F5, Cisco, or Palo Alto.
Why should you care?
Consistency: Templates ensure all configurations follow the same standards.
Speed: Automate what used to take hours in the GUI or CLI.
Flexibility: One role can manage many devices or environments just by changing variables.
Auditability: All changes are tracked as code.
How does it work?
Variables (from playbook or inventory) define what you want to configure.
Jinja2 templates use those variables to render the exact API payload.
The uri module sends the payload to the device’s REST API.
Tags let you control which tasks (create, check, remove) run.
When should you use this?
Anytime you want repeatable, version-controlled, and scalable automation for network, security, or cloud devices that have a REST API.
Why Use Ansible URI + Jinja2 Instead of Existing Modules/Collections?
While Ansible offers vendor-specific collections (e.g., cisco.ios, f5networks.f5_bigip), the URI + Jinja2 templating approach provides unique advantages:
1. Universal Compatibility
Works with ANY REST API: Use the same pattern for F5, Cisco, Palo Alto, AWS, or niche tools without waiting for module support.
2. Full Control Over Payloads
Custom JSON/XML: Jinja2 templates let you craft exact API payloads required by unique endpoints, including nested structures or dynamic fields.
Edge Cases Handled: Work around undocumented API quirks or non-standard endpoints that vendor modules might not support.
3. Lifecycle Simplicity
Single Workflow for CRUD: Use create.j2, check.j2, and remove.j2 templates in one role—no need to learn different modules for each operation.
4. Future-Proofing
API Changes? Update Templates, Not Code: When APIs evolve, adjust Jinja2 templates instead of waiting for collection updates.
Avoid Collection Abandonment Risk: Vendor modules may become outdated; URI + Jinja2 is maintainable long-term.
5. Debugging Transparency
See Exactly What’s Sent: Render templates locally to inspect payloads before execution—something opaque vendor modules make difficult.
Test with curl/Postman: Validate payloads independently of Ansible.
Key Takeaway
Choose URI + Jinja2 when:
You need to interact with unsupported/undocumented APIs.
You require granular control over payloads.
You want a consistent pattern across all vendors.
Core Components of the Ansible Roles REST-API Framework
This section outlines the essential building blocks of an Ansible automation framework that leverages roles with Jinja2 JSON templates, task files with tags for create, check, and remove operations, and the use of separate variables files for flexibility and maintainability.
1. Role Structure
templates/: Contains Jinja2 JSON templates to dynamically generate REST API payloads.
tasks/: Includes main.yml (entry point), and separate files for create.yml, check.yml, and remove.yml operations.
defaults/: Holds main.yml with default variable values, which can be overridden as needed.
vars/: (Optional) For variables that rarely change and shouldn't be easily overridden.
2. Task Files and Tags
create.yml: Tasks for creating or deploying configuration.
check.yml: Tasks for verifying the current state or existence of configuration.
remove.yml: Tasks for deleting or rolling back configuration.
main.yml: Includes the above files and leverages tags to control which tasks run.
Tags allow you to execute only the tasks relevant to your operation (e.g., --tags create).
3. Variables Management
defaults/main.yml: Default values for variables used in templates and tasks.
Playbook variables file: Environment or deployment-specific variables that override role defaults.
This separation allows roles to be reused across different environments by simply changing the playbook variables.
4. Playbook Integration
Playbooks call roles, specify which tags to run, and load the variables file that provides the necessary parameters for the role. This enables selective task execution and flexible configuration.
A clear and scalable directory structure is foundational for maintainable Ansible automation, especially when supporting multiple vendors and features. Following best practices ensures your roles are reusable, modular, and compatible with Ansible Galaxy and team collaboration.
Recommended directory structure:
roles/
  <vendor>/
    <feature>/
      defaults/
        main.yml
      tasks/
        main.yml
        check.yml
        create.yml
        remove.yml
      templates/
        config_payload.j2
      vars/
        main.yml
      handlers/
        main.yml          # Optional: for event-driven tasks
      files/              # Optional: for static files
      meta/
        main.yml          # Optional: for role metadata
defaults/ – Default variables, easily overridden.
vars/ – Variables less likely to change.
tasks/ – Main and supporting task files (e.g., check, create, remove).
templates/ – Jinja2 templates for dynamic configuration.
handlers/ – Event-driven tasks (optional).
files/ – Static files for copying to managed nodes (optional).
meta/ – Role metadata and dependencies (optional).
Manual Creation Example: Use this Bash script to scaffold a new role by vendor and feature:
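A minimal sketch of such a script (the directory names follow the structure above; the vendor/feature defaults are placeholders):

```shell
#!/usr/bin/env bash
# Scaffold roles/<vendor>/<feature> with the directories used in this post.
# Usage: ./create_role.sh <vendor> <feature>
set -euo pipefail

vendor="${1:-f5}"       # placeholder default
feature="${2:-pool}"    # placeholder default
role_dir="roles/${vendor}/${feature}"

# Create the standard role directories
for d in defaults tasks templates vars handlers files meta; do
  mkdir -p "${role_dir}/${d}"
done

# Seed the task files used by the tag-based workflow
for f in main check create remove; do
  touch "${role_dir}/tasks/${f}.yml"
done
touch "${role_dir}/defaults/main.yml" "${role_dir}/vars/main.yml"

echo "Scaffolded ${role_dir}"
```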
Quick Setup Alternative: Use ansible-galaxy role init for a standard role skeleton. This is recommended for beginners and ensures all best practice directories and files are created automatically.
Keep roles single-purposed and modular for easier maintenance and reuse.
Document your role’s intent and usage in README.md and meta/main.yml.
Use version control (such as Git) to track changes and collaborate.
Group related tasks in separate files for clarity.
Never store secrets in version-controlled YAML files—use Ansible Vault or environment variables.
Set API Provider Credentials
Managing API provider credentials securely is essential for safe and scalable automation. Ansible recommends storing sensitive information like usernames and passwords outside of playbooks and roles. The best practice is to keep them in group_vars (or host_vars) within your inventory directory, referencing environment variables for secrets.
Example: Store API credentials in inventory/group_vars/<vendor>.yml
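A sketch of what that file might contain. The f5_api_provider key and field names match the variables referenced in the task examples later in this post; the environment variable names are illustrative:

```yaml
# inventory/group_vars/f5.yml
f5_api_provider:
  server: "{{ inventory_hostname }}"
  server_port: 443
  user: "{{ lookup('env', 'F5_USER') }}"
  password: "{{ lookup('env', 'F5_PASSWORD') }}"
  validate_certs: false  # acceptable for lab/self-signed certs only
```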
All hosts in the f5, cisco, or paloalto group inherit these credentials automatically.
Environment variable lookup (lookup('env', ...)) keeps secrets out of version control.
Centralizes management and supports different credentials per environment.
validate_certs: false is useful for lab/self-signed certs (use with caution in production).
Add Devices to Static Inventory
Define your devices in your static inventory file under a logical group (such as [f5], [cisco], or [paloalto]). This allows you to target all vendor devices easily in your playbooks. Both INI and YAML formats are supported.
Use descriptive hostnames for clarity (e.g., f5-standalone-01, f5-pair-01-primary).
Group devices logically to simplify targeting in playbooks.
Host-specific variables can be set in inventory/host_vars/<hostname>.yml if needed.
Standard naming convention is typically inventory.ini or inventory.yml.
Choose INI or YAML based on complexity; for larger, more complex environments, YAML is often preferred.
If you have multiple environments, you might use names like inventory_prod.ini, inventory_stage.ini, etc.
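For example, an INI inventory grouped by vendor might look like this (hostnames and addresses are placeholders):

```ini
# inventory.ini
[f5]
f5-standalone-01 ansible_host=192.0.2.11
f5-pair-01-primary ansible_host=192.0.2.12
f5-pair-01-secondary ansible_host=192.0.2.13

[cisco]
core-sw-01 ansible_host=192.0.2.21

[paloalto]
edge-fw-01 ansible_host=192.0.2.31
```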
Define Default Variables
Defining default variables in your role's defaults/main.yml file is a best practice in Ansible. This approach ensures that roles are reusable and configurable, while keeping the main variables file minimal and focused only on variables that truly need to be overridden often.
List all variables that need default values in defaults/main.yml for each role.
Minimize the number of variables in vars/main.yml; prefer defaults/main.yml for most variables.
Use clear and descriptive variable names to avoid conflicts and improve readability.
Document the purpose of each variable in comments within the defaults file.
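A small illustrative defaults/main.yml (the variable names are examples, not a required schema):

```yaml
# defaults/main.yml
---
# Default load-balancing method when a pool does not set one
default_lb_method: round-robin
# Default health monitor path
default_monitor: /Common/tcp
# Default port for pool members
default_member_port: 80
```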
Define Variables
Variables files in the vars/ directory define required values that are not meant to be overridden by users. Unlike defaults/main.yml, variables here are considered essential for the role's operation and are used for testing or environment-specific configurations. Best practice is to keep these files minimal and focused on role-critical parameters.
Best Practices:
Store variables that must be set for the role to function (e.g., resource names, IDs, or environment-specific paths).
Avoid storing secrets here—use group_vars with environment variables or Ansible Vault instead.
Use lists/dictionaries to group related configurations (e.g., pools, interfaces, policies).
# Example 1: F5 Pool Configuration (vars/pool_vars.yml)
pools:
  - name: POOL-testing-web-servers
    partition: OSHS_NON_PROD
    lb_method: least-connections-member
    monitor: /OSHS_NON_PROD/test_tcp
    route_domain: 2411
    members:
      - name: web-server1
        address: 3.3.3.3
        port: 80
        description: "Web Server 1"
      - name: web-server2
        address: 3.3.3.4
        port: 80
        description: "Web Server 2"
    state: absent  # present/absent for idempotency

# Example 2: Cisco Interface Configuration (vars/interface_vars.yml)
interfaces:
  - name: GigabitEthernet0/1
    description: "Uplink to Core"
    enabled: true
    ipv4_address: 192.168.1.1/24
    mtu: 1500
    vlan: 100  # Optional: VLAN for L3 interfaces
Idempotency: Use state: present or state: absent to control resource creation/deletion (as seen in the F5 example).
Testing: These files allow you to simulate role execution without playbook integration.
Structure: Group related configurations under a top-level key (e.g., pools, interfaces).
Define Jinja2 Template
Creating robust Jinja2 templates requires deep alignment with vendor API specifications. The guidelines below cover template structure, error handling, and testing, along with key API documentation pointers and enforcement strategies.
Validation: Enforce naming and value constraints at the variable definition level (vars/main.yml), not in templates.
Template Structure: Mirror the API documentation exactly; use Postman exports as a reference.
Error Handling: Include {% if %}...{% endif %} blocks for optional fields.
Testing: Validate rendered JSON/XML with jq or xmllint before deployment.
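As an illustration, a pool-creation template for F5 might look like the sketch below. The field names follow the iControl REST pool schema, but verify them against the documentation for your TMOS version; the pool variable matches the vars file example shown earlier:

```jinja
{# templates/pool_create.j2 -- F5 LTM pool payload (illustrative sketch) #}
{
  "name": "{{ pool.name }}",
  "partition": "{{ pool.partition }}",
  "loadBalancingMode": "{{ pool.lb_method }}",
  "monitor": "{{ pool.monitor }}",
  "members": [
    {%- for member in pool.members %}
    {
      "name": "{{ member.name }}:{{ member.port }}",
      "address": "{{ member.address }}{% if pool.route_domain is defined %}%{{ pool.route_domain }}{% endif %}",
      "description": "{{ member.description | default('') }}"
    }{{ "," if not loop.last }}
    {%- endfor %}
  ]
}
```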
Critical API Documentation Elements:
Authentication method (Basic Auth vs API keys)
HTTP methods (POST/PUT/PATCH)
Response codes and error formats
Rate limits and throttling
Configure Check Task File
Check tasks verify resource existence before making changes, ensuring idempotency. Below is an F5 implementation using Ansible's uri module, with explanations of the key components.
1. F5 BIG-IP Pool Existence Check
- name: Check if LTM Pool exists
  uri:
    url: "https://{{ f5_api_provider.server }}:{{ f5_api_provider.server_port }}/mgmt/tm/ltm/pool/~{{ pool.partition }}~{{ pool.name }}"
    method: GET
    status_code: [200, 404]
    user: "{{ f5_api_provider.user }}"
    password: "{{ f5_api_provider.password }}"
    validate_certs: "{{ f5_api_provider.validate_certs }}"
    force_basic_auth: true
    headers:
      Content-Type: "application/json"
  delegate_to: localhost
  register: check_result
  failed_when: false

- name: Display pool status
  debug:
    msg: "Pool {{ pool.name }} in partition {{ pool.partition }} {{ 'exists' if check_result.status == 200 else 'does not exist' }}"
Always reference credentials from group_vars for security
Use failed_when: false to handle graceful "not found" states
Validate SSL certificates in production (validate_certs: true)
Test with curl/Postman before Ansible implementation
Use consistent naming for registered variables (check_result)
Configure Create Task File
The create.yml task file is where API-driven configuration happens. Below are its essential components, following Ansible best practices.
Key Components Explained
url: the API endpoint URL. Construct it dynamically using variables from group_vars.
status_code: the expected HTTP responses. Include 200 (success) and 409 (already exists) for idempotency.
Always use force_basic_auth: true for consistent authentication
Store credentials in group_vars with environment variable lookup
Validate templates with ansible -m debug before deployment
Use failed_when to handle expected API responses gracefully
Include post-task debugging with conditional messages
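Putting those pieces together, a create.yml for the F5 pool example might look like this sketch. It assumes the f5_api_provider credentials from your group_vars and a pool_create.j2 template; adjust names to your role:

```yaml
# tasks/create.yml
- name: Create LTM Pool
  uri:
    url: "https://{{ f5_api_provider.server }}:{{ f5_api_provider.server_port }}/mgmt/tm/ltm/pool"
    method: POST
    body: "{{ lookup('template', 'pool_create.j2') }}"
    body_format: json
    status_code: [200, 409]  # 200 = created, 409 = already exists
    user: "{{ f5_api_provider.user }}"
    password: "{{ f5_api_provider.password }}"
    validate_certs: "{{ f5_api_provider.validate_certs }}"
    force_basic_auth: true
    headers:
      Content-Type: "application/json"
  delegate_to: localhost
  register: create_result

- name: Display create result
  debug:
    msg: "Pool {{ pool.name }}: {{ 'created' if create_result.status == 200 else 'already exists' }}"
```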
Configure Remove Task File
The remove.yml task file handles resource cleanup using REST API DELETE methods. Below we break down its core components.
Core Components of Remove Tasks
uri module: sends HTTP requests to API endpoints. Always specify method: DELETE.
status_code: defines acceptable response codes. Include 200 (OK) and 404 (Not Found) for idempotency.
Always reference credentials from group_vars files
Use failed_when: false with debug tasks to prevent playbook failures
Include partition/context in resource paths (F5/Palo Alto)
Validate JSON/XML payload structure with API documentation
Encrypt sensitive data with Ansible Vault
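A matching remove.yml sketch for the F5 pool example, using the same assumed variables as the check and create tasks:

```yaml
# tasks/remove.yml
- name: Remove LTM Pool
  uri:
    url: "https://{{ f5_api_provider.server }}:{{ f5_api_provider.server_port }}/mgmt/tm/ltm/pool/~{{ pool.partition }}~{{ pool.name }}"
    method: DELETE
    status_code: [200, 404]  # 404 keeps re-runs idempotent
    user: "{{ f5_api_provider.user }}"
    password: "{{ f5_api_provider.password }}"
    validate_certs: "{{ f5_api_provider.validate_certs }}"
    force_basic_auth: true
  delegate_to: localhost
  register: remove_result

- name: Display remove result
  debug:
    msg: "Pool {{ pool.name }}: {{ 'removed' if remove_result.status == 200 else 'was not present' }}"
```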
Configure Main Task File
The tasks/main.yml file acts as the control center for your role, orchestrating task execution flow and enabling modular operations through tags. Follow this structure for enterprise-grade maintainability.
Best Practice: Use layered variables with this precedence chain:
Playbook vars (highest priority)
Inventory group_vars/
Role defaults/ (lowest priority)
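One possible shape for such a main.yml, looping over the pools list from the vars example (the apply: keyword ensures tags propagate into the dynamically included files):

```yaml
# tasks/main.yml
- name: Check pools
  include_tasks:
    file: check.yml
    apply:
      tags: [check]
  loop: "{{ pools }}"
  loop_control:
    loop_var: pool
    label: "{{ pool.name }}"
  tags: [check]

- name: Create pools
  include_tasks:
    file: create.yml
    apply:
      tags: [create]
  loop: "{{ pools | selectattr('state', 'equalto', 'present') | list }}"
  loop_control:
    loop_var: pool
    label: "{{ pool.name }}"
  tags: [create]

- name: Remove pools
  include_tasks:
    file: remove.yml
    apply:
      tags: [remove]
  loop: "{{ pools | selectattr('state', 'equalto', 'absent') | list }}"
  loop_control:
    loop_var: pool
    label: "{{ pool.name }}"
  tags: [remove]
```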
Pro Tips
Idempotency: Always check item.state before making changes
Loop Controls: Use label for readable output in verbose mode
Tag Hygiene: Maintain consistent tagging across all roles
Dry Runs: Combine check tags with --check mode
How to Reference Roles in Playbooks
Ansible roles are designed to be reusable components, but proper variable management is critical when integrating them into playbooks. Below is a detailed explanation of variable precedence, role execution, and practical examples.
Variable Precedence Hierarchy
Extra vars (-e) > Role Variables (vars/) > Playbook Variables > Role Defaults (defaults/)
Note that role vars/ intentionally rank above playbook vars in Ansible's precedence, which is why vars/ should hold only values users are not meant to override.
Control execution flow: ansible-playbook deploy.yml --tags "create"
Map to task files: tasks/create.yml
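A minimal playbook sketch tying these together (the role path and file names are illustrative):

```yaml
# deploy.yml
- name: Manage F5 pools
  hosts: f5
  gather_facts: false
  vars_files:
    - vars/pool_vars.yml
  roles:
    - role: f5/pool
```

Run it with, for example: ansible-playbook deploy.yml -i inventory.ini --tags create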
Best Practices
Variable Separation: never store playbook-specific vars in the role's vars/.
Tag Strategy: use always and never tags for mandatory/optional tasks.
Environment Isolation: use separate vars files for dev and prod.
Secrets Management: store credentials in group_vars/all/vault.yml.
Validation and Testing for Jinja2 REST API Roles
Proper validation ensures your Jinja2-templated roles work correctly with playbooks and variables. Follow this comprehensive verification process:
1. Template Rendering Validation
Verify payload structure before API calls:
# Render template with test variables (ansible prefixes the output with
# "localhost | SUCCESS =>"; strip that header before validating the payload)
ansible localhost -m debug -a "msg={{ lookup('template', 'templates/config.j2') }}" \
  -e "@test_vars.yml" > rendered_payload.json
# Validate JSON syntax
jq . rendered_payload.json
# Validate XML syntax
xmllint --format rendered_payload.xml
2. Variable Passing Verification
Confirm playbook → role variable flow:
# tasks/main.yml
- debug:
    msg: "Using pool_name: {{ pool_name }}"

# Playbook call with -vvv to show variable values
ansible-playbook deploy.yml -e "@playbook_vars.yml" -vvv
Check output for expected variable values
Verify precedence: extra vars > role vars > playbook vars > role defaults
3. Task Execution Testing
Validate tag-based operations:
Create (--tags create --check): dry run shows the correct POST payload.
Check (--tags check): returns the current state without changes.
Remove (--tags remove --check): shows the correct DELETE request.
4. Idempotency Verification
Ensure safe re-execution:
# First run (creates resource)
ansible-playbook deploy.yml --tags create
# Second run (should show "ok" not "changed")
ansible-playbook deploy.yml --tags create
To make re-runs safe, the create task should treat "already exists" as an acceptable response and fail only on genuinely unexpected status codes:

- name: Create resource
  uri:
    url: "{{ api_endpoint }}"
    method: POST
    body: "{{ payload }}"
    body_format: json
    status_code:
      - 200
      - 201
      - 409  # Handle already exists
  register: api_response
  failed_when: false  # defer failure handling to the explicit check below

- name: Fail on unexpected status
  fail:
    msg: "API failed with {{ api_response.status }}"
  when: api_response.status not in [200, 201, 409]
5. Naming Convention Compliance for Managed Objects
Ensure object naming standards are enforced in logic before API calls:
# Example: Set object name using convention in tasks/main.yml
- set_fact:
nat_object_name: "DNAT-{{ nat_ip }}"
- debug:
msg: "NAT object name will be {{ nat_object_name }}"
Object names (e.g., NAT, pool, policy) are constructed according to project naming standards (e.g., "DNAT-10.1.2.3")
Check debug output or rendered payload to verify correct naming
Validation Checklist
✅ Naming convention is applied to the object name (e.g., "DNAT-10.1.2.3" for NAT rules) regardless of input variable format
✅ Templates render valid JSON/XML with test data
✅ Playbook variables override role defaults
✅ Tags execute only designated tasks
✅ Second create run shows 0 changes (idempotent)
✅ API responses handled for success/conflict/errors
✅ --check mode shows correct payloads
Troubleshooting
Roles & Playbooks
Debugging Ansible roles and playbooks requires understanding how tags, variables, and task execution interact. Below is a structured approach to identify and resolve common issues when using roles with custom tags and playbook-specific variables.
Troubleshooting Flowchart
Start Troubleshooting
│
├─ 1. Are tags being applied?
│ ├─ No → Use --tags flag at runtime (not in playbook vars)
│ └─ Yes → Check task/role tagging alignment
│
├─ 2. Are variables overriding correctly?
│ ├─ No → Verify variable precedence:
│  │      Extra Vars > Role Vars > Playbook Vars > Role Defaults
│ └─ Yes → Check Jinja2 template rendering
│
├─ 3. Are tasks executing unexpectedly?
│ ├─ Yes → Check for always tags or missing when: conditions
│ └─ No → Use --start-at-task or --step
│
└─ 4. API calls failing?
├─ Validate payload with --check --diff
└─ Test rendered JSON/XML directly with curl/Postman
A tagged task that can be targeted with --tags create or --tags api:

---
- name: Create F5 Pool
  uri:
    url: "https://{{ f5_host }}/mgmt/tm/ltm/pool"
    method: POST
    body: "{{ lookup('template', 'pool_create.j2') }}"
    body_format: json
    status_code: 200
  tags:
    - create
    - api
Debugging Tools & Tips
Dry Run (--check): simulate changes without executing them.
Verbose Output (-vvv): show the detailed execution process.
Diff Mode (--diff): show file changes (for templates).
Start At Task (--start-at-task="Create Pool"): skip ahead to a specific task.
Best Practices:
Use ansible-lint to validate playbook structure
Separate environment-specific variables using group_vars/
Tag roles at the playbook level, not inside role definitions
Conclusion
In this blog post, we explored how to build scalable, reusable Ansible automation using roles with Jinja2 templating and the uri module to interact with REST APIs. Here are the key takeaways:
Jinja2 templating with REST APIs allows you to dynamically generate JSON or XML payloads for any device or system that exposes an API, making your automation flexible and vendor-agnostic.
Using the Ansible uri module gives you universal compatibility, full control over API payloads, and transparency for debugging and validation—advantages that vendor-specific modules sometimes lack.
Core framework components include a clear role structure (with tasks/, templates/, defaults/, and optional vars/), task files split by operation and tagged for selective execution, and separate variable files for environment-specific configuration.
Best practices covered:
Store API credentials securely in group_vars, using environment variables or Ansible Vault.
Enforce naming conventions for managed objects in your task logic, ensuring consistency across your environment.
Keep roles modular, single-purposed, and well-documented.
Use tags to control which tasks (create, check, remove) are executed in each playbook run.
Validation and troubleshooting steps ensure your roles and playbooks work as intended:
Validate Jinja2 template rendering and payload structure before API calls.
Confirm variable precedence and that playbook variables override role defaults.
Test each tag separately and check for idempotency.
Make sure naming conventions are applied to all managed objects.
Use verbose/debug output and dry-run (--check) modes to troubleshoot issues.
By following this approach, you gain a maintainable, auditable, and future-proof automation framework that works across vendors and adapts easily to new APIs or requirements. Whether you’re managing F5, Cisco, Palo Alto, or any other REST-capable device, this pattern puts you in control and accelerates your infrastructure-as-code journey.