Table of Contents
- Overview
- Job Structure
- Job Lifecycle
- Job Execution Details
- Advanced Capabilities
- Execution Environment
- Permissions and Security
- Best Practices
- Example Job: Interface Utilization Report
- Conclusion
Nautobot Job Execution Framework: Overview
What Is the Nautobot Job Execution Framework?
The Nautobot Job Execution Framework is a core automation feature of Nautobot that allows users to develop, schedule, and manage custom Python-based tasks within the Nautobot platform. Jobs are self-contained Python classes that automate IT tasks such as device provisioning, compliance checks, report generation, and integration workflows—all executed directly from the Nautobot UI or API.
Why You Need to Know About It
- Automation at Scale: The framework enables engineers to automate repetitive or complex network operations, reducing manual errors and freeing up valuable time for higher-level projects.
- Custom Workflows: Teams can encode business logic or compliance processes as reusable jobs, tailored to their unique infrastructure and policies.
- Consistency & Traceability: Every job run is logged, auditable, and follows a standardized lifecycle, ensuring accountability and detailed historical records.
- Integration & Extensibility: Jobs can interact with both Nautobot data models and external systems (APIs, devices, CMDBs), serving as the glue for workflow automation in hybrid network environments.
How It Works: The Nuts & Bolts
- Job Definition: Write jobs as simple Python classes that inherit from the Job base class. Define user inputs and describe the automation logic in a run() method.
- User Interaction: Inputs can be collected via dynamic web forms, supporting flexible, parameterized automation.
- Asynchronous Execution: When a job runs, it's queued and executed in the background by Nautobot’s job workers (using Celery), ensuring that long-running tasks don’t impact the main application.
- Logging & Results: Every step, message, and outcome is logged and accessible via Nautobot’s UI. Users can review job history, outputs, and errors for auditing or troubleshooting.
- Permissions & Security: Job execution is governed by fine-grained permissions, ensuring only authorized users can run, schedule, or approve jobs. Sensitive data is protected and integrated with Nautobot's secrets management.
- Scheduling & Advanced Features: Jobs can be scheduled in advance, require approval, triggered in response to events, or composed into complex workflows with dependencies.
In short, the Nautobot Job Execution Framework turns Nautobot into a powerful, programmable automation hub—enabling network teams to codify, orchestrate, and govern everything from day-to-day tasks to strategic workflows, all in a safe and repeatable fashion.
Job Structure
This section explains the structure of a Nautobot Job and how to define your own workflow automation tasks using Python:
- **Job Definition:** Every Nautobot Job is written as a Python class that inherits from `Job`. The job encapsulates its logic in a `run()` method.
- **Class Structure:**
  - Define a class derived from `Job`.
  - Add input fields using variable definitions like `StringVar`, `IntegerVar`, or `ObjectVar`.
  - Use the inner `Meta` class to add metadata such as the job's name and description.
- **User Inputs:** Assign parameters for user input via variable fields, which will render as forms in the UI.
  - For example, use `StringVar` for strings (hostnames, comments) and `ObjectVar` for selecting Nautobot objects (locations, devices).
- **Job Logic:** Write the actual automation logic within the `run()` method. Inputs from the user are passed as method arguments.
- **Example Job Structure:**

```python
from nautobot.apps.jobs import Job, ObjectVar, StringVar
from nautobot.dcim.models import Location


class ExampleJob(Job):
    device_name = StringVar(description="Name of the device")
    location = ObjectVar(model=Location)

    class Meta:
        name = "Example Job"
        description = "Provisions a new device"

    def run(self, device_name, location):
        # Custom automation logic goes here
        pass
```
- **Best Practices:**
  - Use descriptive names for variables and the job itself to enhance readability.
  - Validate all user input before proceeding with automation actions.
  - Leverage built-in validation methods for Nautobot model objects.
Job Lifecycle
The lifecycle of a Nautobot Job follows a structured process, from code definition to user execution and results. Understanding this lifecycle is key to building, testing, and maintaining reliable automation in Nautobot:
- **1. Creation:** Jobs are written as Python classes in job modules. These modules must be placed in one of the following:
  - The local jobs directory defined by the `JOBS_ROOT` setting (typically `jobs/` inside the Nautobot configuration directory)
  - A custom app (plugin) registered with Nautobot
  - A source-controlled repository (e.g., a Git repository loaded dynamically)

  Each job class must inherit from the `Job` base class and contain a defined `run()` method.
- **2. Discovery:** On application startup or when the job source changes, Nautobot scans all job modules and registers valid jobs.
  - Jobs are grouped and identified by their module path, class name, and optional grouping settings.
  - Invalid jobs (e.g., those with syntax errors or missing attributes) are skipped and logged for review.
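In Nautobot 2.x, discovery also requires each job module to register its classes explicitly via `register_jobs()`. A minimal sketch of a local job module (the filename and class name are illustrative):

```python
# jobs/hello.py -- a minimal local job module
from nautobot.apps.jobs import Job, register_jobs


class HelloJob(Job):
    class Meta:
        name = "Hello Job"
        description = "Smallest possible job: logs a single message."

    def run(self):
        self.logger.info("Hello from Nautobot!")


# Without this call, Nautobot 2.x will not discover the job.
register_jobs(HelloJob)
```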
- **3. Execution:** Jobs can be triggered manually through the web UI or API.
  - Users complete any required input fields defined in the job.
  - The job is then handed off to Celery to be executed asynchronously in the background.
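Triggering a job over the REST API is an authenticated POST. The sketch below only builds the request using the standard library; the endpoint path and payload shape follow Nautobot's documented `/api/extras/jobs/<uuid>/run/` route, and the URL, token, and UUID are placeholders:

```python
import json
from urllib import request


def build_job_run_request(base_url: str, token: str, job_id: str, data: dict) -> request.Request:
    """Build (but do not send) a POST request that triggers a Nautobot Job."""
    url = f"{base_url}/api/extras/jobs/{job_id}/run/"
    headers = {
        "Authorization": f"Token {token}",  # Nautobot uses token authentication
        "Content-Type": "application/json",
    }
    body = json.dumps({"data": data}).encode()  # user inputs for the job form
    return request.Request(url, data=body, headers=headers, method="POST")


req = build_job_run_request(
    "https://nautobot.example.com",
    "0123456789abcdef",
    "11111111-2222-3333-4444-555555555555",
    {"device_name": "sw01"},
)
# urllib.request.urlopen(req) would submit the job for asynchronous execution.
```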
- **4. Logging & Output:** During execution, jobs can log their activities using `self.logger` (or, in Nautobot 1.x, helper methods like `self.log_info()`).
  - All messages are stored in the database and visible on the job results page.
  - Use structured logging to help filter, search, and analyze historical job runs.
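In Nautobot 2.x, `self.logger` accepts an `extra` dictionary to group related log lines and link them to objects. A fragment, assuming it runs inside a Job's `run()` method with `device` already defined (the grouping name is arbitrary):

```python
# Inside a Job's run() method (Nautobot 2.x logger API):
self.logger.info(
    "Interface count validated.",
    extra={
        "grouping": "validation",  # groups related log lines in the result view
        "object": device,          # links the log entry to a Nautobot object
    },
)
```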
- **5. Result Status:** Once complete, a job is marked with one of the following statuses:
  - Success: No errors occurred, and the task completed as expected.
  - Failure: An error was raised during execution, or the logic returned a failure state.
  - Errored: The job crashed unexpectedly due to an unhandled exception.
- **6. Permissions and Approval:** Nautobot supports fine-grained permissions and job approval workflows.
  - Each job can be limited by user roles and permissions such as `extras.run_job` and `extras.approve_job`.
  - Admins can configure job approval policies to require validation before execution.
Job Execution Details
This section explores how Nautobot Jobs are executed—covering runtime behavior, input handling, logging, result reporting, and advanced execution features:
- **Asynchronous Execution:**
  - When a job is started (via the UI or API), Nautobot hands off execution to a background worker using Celery.
  - This allows long-running tasks to process safely in the background without impacting the user interface.
  - Users can monitor job progress or queue multiple jobs in parallel.
- **Input & Output:**
  - Inputs provided by users via the form are passed into the `run()` method as arguments (in Nautobot 1.x, within a `data` dictionary).
  - In Nautobot 1.x, the `commit` flag indicates whether changes should be saved (`True`) or simulated as a "dry run" (`False`); Nautobot 2.x replaced it with an explicit `dryrun` variable.
  - Output and action summaries generated during execution are saved in Nautobot's database and shown on the job results page.
- **Logging & Result Reporting:**
  - Use `self.logger` (or, in Nautobot 1.x, methods like `self.log_info()`, `self.log_success()`, and `self.log_failure()`) to record status and actions.
  - Logs are displayed in real time to the user and stored for auditing.
  - The job result includes execution status, who ran it, inputs used, duration, and all log messages.
  - Status can be Success, Failure, or Errored, based on code execution or raised exceptions.
- **Advanced Execution Features:**
  - Jobs can set a hard `time_limit` to enforce a maximum execution time (useful for long-running or scheduled jobs).
  - The `run()` method can utilize the `data` and `commit` parameters (Nautobot 1.x) or named input arguments (Nautobot 2.x) for custom behavior and integration with other systems.
  - Jobs may call external APIs, interact with other Nautobot Jobs, or be composed into more complex workflows.
  - Execution can be cancelled from the UI or CLI; results remain available for review after cancellation.
- **Sample Job Execution (Python):**

```python
from nautobot.apps.jobs import DryRunVar, Job, ObjectVar, register_jobs
from nautobot.dcim.models import Device


class ExampleJob(Job):
    device = ObjectVar(model=Device)
    dryrun = DryRunVar(description="If set, simulate without saving changes")

    class Meta:
        name = "Example Job"
        description = "Shows job execution info"

    def run(self, device, dryrun):
        self.logger.info(f"Job started for device: {device}")
        if dryrun:
            self.logger.info("Dry run mode - no changes will be saved")
        # Real logic goes here...
        self.logger.info(f"Completed job for device: {device}")
        return f"Results for {device}"


register_jobs(ExampleJob)
```
Advanced Capabilities
This section highlights powerful features that take Nautobot Jobs beyond simple automation—enabling advanced workflows, integration, and management flexibility:
- **Job Buttons:**
  - Add custom buttons to Nautobot object pages (like Devices and Locations) to launch a job directly from an object's detail view.
  - The selected object is passed as input, allowing targeted automation without filling out a general form.
  - Job Buttons are configured in the UI by selecting the object types, the job to run, and the display text and color. They can be grouped or require confirmation before running.
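Under the hood, a Job Button targets a job derived from `JobButtonReceiver`, whose `receive_job_button()` method is handed the clicked object. A sketch with illustrative class and message names:

```python
from nautobot.apps.jobs import JobButtonReceiver, register_jobs


class RebootDeviceButton(JobButtonReceiver):
    class Meta:
        name = "Reboot Device"

    def receive_job_button(self, obj):
        # obj is the object whose detail page hosted the button
        self.logger.info(f"Reboot requested for {obj}")


register_jobs(RebootDeviceButton)
```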
- **Job Hooks:**
  - Automatically trigger jobs when events occur, such as object creation, update, or deletion.
  - This enables event-driven automation—e.g., enforcing standards or cleaning up configurations the moment changes are made in Nautobot.
  - Configure Job Hooks in the UI by selecting which models and which actions (create/update/delete) should trigger which Job.
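A Job Hook targets a job derived from `JobHookReceiver`; its `receive_job_hook()` method receives the change record, the action, and the affected object. A sketch with illustrative names:

```python
from nautobot.apps.jobs import JobHookReceiver, register_jobs


class EnforceNamingStandard(JobHookReceiver):
    class Meta:
        name = "Enforce Naming Standard"

    def receive_job_hook(self, change, action, changed_object):
        # change: the ObjectChange record; action: "create", "update", or "delete"
        if action in ("create", "update"):
            self.logger.info(f"Checking naming standard for {changed_object}")


register_jobs(EnforceNamingStandard)
```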
- **Composability:**
  - Jobs can call or enqueue other jobs, allowing you to build complex, reusable workflows.
  - This supports modular automation where job logic can be broken down, tested, and reused across solutions.
- **Scheduling & Approval:**
  - Jobs can be scheduled to run immediately, at a future date, or on a recurring schedule using cron-like syntax.
  - Jobs can require approval before executing—useful for governance or change management. Users with the right permissions can review and approve or deny pending jobs via the UI or API.
  - The approval workflow separates job submission from execution, supporting regulatory and operational requirements.
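Requiring approval is a one-line `Meta` setting on the job itself; a sketch (the job name is illustrative):

```python
from nautobot.apps.jobs import Job, register_jobs


class DecommissionDevice(Job):
    class Meta:
        name = "Decommission Device"
        # Every run must be approved by a second user before executing.
        approval_required = True


register_jobs(DecommissionDevice)
```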
- **Grouping & Namespacing:**
  - Jobs can be grouped logically in the UI for better organization using the `name` constant in the job module.
  - Grouping supports easy discoverability, allowing users to quickly find and select jobs relevant to their tasks.
  - Grouping can be overridden in the job's metadata without changing the file or code structure.
Execution Environment
This section describes how Nautobot Jobs are executed behind the scenes, focusing on the infrastructure and processing model that enables scalable, reliable automation:
- **Job Workers (Celery):**
  - Nautobot uses Celery workers to run Jobs, offloading tasks from the main web application.
  - Workers are background processes, typically started as services on the Nautobot server (or in Kubernetes), and can be scaled horizontally for redundancy or load.
  - Workers listen for new tasks on one or more Celery queues, separating Job processing from core Nautobot functions.
- **Task Queues:**
  - Jobs are scheduled onto dedicated Celery queues. Each queue can be optimized for different job types, such as long-running or quick tasks.
  - This design prevents slow jobs from blocking critical or short operations, supporting smoother and more predictable automation.
  - Different workers can be assigned to different queues, enabling fine-grained resource control and parallelism.
- **Queue Assignment & Management:**
  - Each Job defines the eligible queue(s) for its execution through configuration or metadata. The default queue is named `default`, but custom queues can be defined for specialized needs.
  - If a queue is not actively monitored by a worker, jobs submitted to it will remain pending until a worker becomes available.
  - Job queue and assignment information can be viewed and managed in the Nautobot UI.
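Queue eligibility and runtime limits can be declared on the job's `Meta` class; a sketch (the job name and values are illustrative):

```python
from nautobot.apps.jobs import Job, register_jobs


class NightlySync(Job):
    class Meta:
        name = "Nightly Sync"
        # Queues this job may be submitted to; the first is the default.
        task_queues = ["job_queue", "default"]
        # Hard runtime cap in seconds, enforced by Celery.
        time_limit = 3600


register_jobs(NightlySync)
```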
- **Database Models & Job Identification:**
  - Each Job execution is tracked as a database record, allowing users to see execution status, start/end times, results, and outputs after completion.
  - Jobs can be referred to by UUID for API calls and audit purposes—no need to reference implementation details or file paths.
- **Best Practices:**
  - For environments running frequent or long jobs, deploy multiple Celery workers with custom queues to optimize performance.
  - Monitor worker status using Nautobot's built-in worker status page to ensure all desired queues are active and healthy.
  - Adjust worker concurrency and prefetch settings based on your workload and available server resources.
- **Example: Celery Worker Service Command**

```
ExecStart=/opt/nautobot/bin/nautobot-server celery worker --loglevel INFO --pidfile /var/tmp/nautobot-worker-jobqueue.pid --queues job_queue
```

  This example starts a Celery worker process listening only to the `job_queue` queue, isolating job execution from other background tasks.
Permissions and Security
This section details how Nautobot controls access to job execution and ensures robust security through permissions and best practices:
- **Role-Based Permissions:**
  - Users need the `extras.run_job` permission to execute a job. Additional permissions, like `extras.approve_job`, control who can approve jobs.
  - Permissions can be set at the job, job group, or per-user level, allowing granular access management.
  - Admins can disable or enable specific jobs; new jobs are disabled by default until reviewed.
- **Object-Based Constraints:**
  - Permissions can limit user actions to certain objects or groups, such as specific device types or locations.
  - This allows teams to delegate job execution while restricting sensitive changes to only authorized users.
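In an object permission, constraints are expressed as a JSON filter applied to the model's queryset. For example, a constraint scoping a permission to access switches at two branch locations might look like the following (field names follow Nautobot's 2.x data model; the values are illustrative):

```json
{
  "role__name": "Access Switch",
  "location__name__in": ["Branch-1", "Branch-2"]
}
```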
- **Approval and Auditing:**
  - Some jobs may require explicit approval before running. The approval queue is visible only to users with the correct permissions.
  - Every job execution—who ran it, who approved it, and all job outputs—is logged for auditing and compliance tracking.
- **Secrets and Sensitive Data:**
  - Jobs using sensitive variables (like passwords or API keys) are excluded from scheduling and approval workflows to prevent secret leakage.
  - Leverage Nautobot's secrets management tools to securely handle credentials and sensitive information.
- **Job Visibility and Security:**
  - Jobs can be hidden from the general UI, but this does not prevent access for users with direct permissions or API knowledge.
  - Hiding jobs helps reduce unnecessary surface area but is not a substitute for true permission enforcement.
- **Best Practices:**
  - Grant only the minimum permissions required (principle of least privilege) and review them frequently.
  - Disable or hide jobs that are not in active use to limit the risk of misuse.
  - Always validate job logic for security issues before enabling a job.
  - Use object constraints in permissions to tightly scope user access.
  - Store secrets in the built-in secrets manager, never as job variables.
Best Practices
This section summarizes key recommendations for writing reliable, maintainable, and secure Nautobot Jobs:
- **Validate All User Input:**
  - Apply explicit validation on each variable to prevent accidental or malformed data entry.
  - Use built-in validation methods (such as `validated_save()`) when working with Nautobot objects to enforce model rules and integrity.
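A brief sketch of the difference (the rename helper is illustrative):

```python
from nautobot.dcim.models import Device


def rename_device(device: Device, new_name: str) -> None:
    """Rename a device, letting model validation reject bad input."""
    device.name = new_name
    # validated_save() runs full_clean() before save(), so model
    # validators execute and a ValidationError is raised on bad data;
    # a plain save() would skip that check.
    device.validated_save()
```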
- **Avoid Direct or Bulk Operations:**
  - Steer clear of Django's `bulk_create()` and `update()` for critical job logic, as they bypass object validation and signals.
  - Always prefer methods that enforce checks and fire change-logging events.
- **Use Clear Logging and Status Updates:**
  - Leverage `self.logger` methods (or, in Nautobot 1.x, `self.log_info()`, `self.log_success()`, and `self.log_failure()`) to keep users informed and aid troubleshooting.
  - Log key actions, decisions, and data changes for every execution.
- **Support Dry Run (Commit) Modes:**
  - Respect the `commit` flag (Nautobot 1.x) or `dryrun` variable (Nautobot 2.x) in your Job's `run()` method to allow safe "dry run" executions, letting users preview changes before applying them.
  - Clearly indicate in the logs whether a job executed in dry run or commit mode.
- **Break Large Jobs into Smaller Steps:**
  - Design complex Jobs as multiple discrete steps or as a series of smaller, reusable Jobs.
  - This improves readability, testing, troubleshooting, and future enhancements.
- **Write Tests for Jobs:**
  - Develop unit tests for your Job logic using Nautobot's test helpers and Django's `TransactionTestCase`.
  - Test all data paths, including error handling, edge cases, and permission checks.
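A sketch of a job test, assuming Nautobot's `run_job_for_testing` helper and test base classes are importable from `nautobot.apps.testing` (verify the import path and status constant for your Nautobot version):

```python
from nautobot.apps.testing import TransactionTestCase, run_job_for_testing
from nautobot.extras.models import Job as JobModel


class ExampleJobTestCase(TransactionTestCase):
    def test_job_runs_successfully(self):
        job = JobModel.objects.get(job_class_name="ExampleJob")
        # run_job_for_testing executes the job synchronously and returns
        # the JobResult record; required job inputs are passed as kwargs.
        job_result = run_job_for_testing(job)
        self.assertEqual(job_result.status, "SUCCESS")
```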
- **Follow Security Best Practices:**
  - Store sensitive data only in Nautobot's secrets manager, never in job variables or logs.
  - Review permissions and limit execution rights according to the principle of least privilege.
  - Regularly audit enabled jobs, especially those with powerful actions or integrations.
- **Document Each Job:**
  - Write thorough docstrings and inline comments for each Job, explaining its purpose and any required inputs or caveats.
  - Include examples in documentation to help users run and troubleshoot the job effectively.
- **Leverage Nautobot Features:**
  - Use namespaces and grouping for discoverability and management of multiple Jobs.
  - Utilize job scheduling, approval, and visibility settings for operational control and governance.
Example Job: Interface Utilization Report
This section walks through a practical Nautobot Job that generates an interface utilization report for a selected site and set of device roles. You can use this as a starting point for your own custom reporting jobs in Nautobot:
- **Purpose:**
  - Generates a summary of network interface usage for all devices of chosen roles at a specific location.
  - Displays totals and percentages for used and available ports.
- **Inputs:**
  - Location: selectable via an `ObjectVar` tied to existing Locations in Nautobot.
  - Device Roles: select one or more relevant Roles using a `MultiObjectVar`.
- **Job Logic Overview:**
  - Finds all devices matching the given location and roles.
  - Aggregates interface data to count:
    - Total interfaces
    - Used interfaces (e.g., with statuses "Active", "Failed", "Maintenance")
    - Unused interfaces (e.g., with statuses "Decommissioning", "Planned")
  - Logs a readable report including percentages.
- **Full Example Job Code:**

```python
from nautobot.apps.jobs import Job, MultiObjectVar, ObjectVar, register_jobs
from nautobot.dcim.models import Device, Interface, Location
from nautobot.extras.models import Role


class InterfaceUtilizationReport(Job):
    location = ObjectVar(model=Location)
    roles = MultiObjectVar(model=Role)

    class Meta:
        name = "Interface Utilization Report"
        description = "Generates a report of network port usage at a selected location."

    def run(self, location, roles):
        # Get all devices at the selected location with the selected roles
        devices = Device.objects.filter(location=location, role__in=roles)
        self.logger.info(f"{devices.count()} devices found.")

        # Gather interfaces from these devices
        interfaces = Interface.objects.filter(device__in=devices)
        total = interfaces.count()
        used = interfaces.filter(status__name__in=["Active", "Failed", "Maintenance"]).count()
        unused = interfaces.filter(status__name__in=["Decommissioning", "Planned"]).count()

        # Log the results with percentages
        if total > 0:
            used_pct = int(round((used / total) * 100))
            unused_pct = int(round((unused / total) * 100))
        else:
            used_pct = unused_pct = 0

        self.logger.info(
            f"Interface utilization at {location}: "
            f"Total={total}, Used={used} ({used_pct}%), Unused={unused} ({unused_pct}%)"
        )


register_jobs(InterfaceUtilizationReport)
```
- **How to Use:**
  - Copy the code into your Nautobot job module (such as `jobs/interface_report.py`).
  - Select the job in the Nautobot UI, choose a location and device roles, then run the report.
  - Review the results and log entries for immediate insights into interface utilization.
- **Tips for Adapting:**
  - Customize interface statuses to match your organization's workflow (e.g., include other status names).
  - Add further filtering (such as by interface type or tag) as needed.
  - Enhance output formatting or export capabilities based on reporting requirements.
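For the export enhancement, report rows can be rendered as CSV with only the standard library. A sketch independent of Nautobot (the column names and helper are illustrative):

```python
import csv
import io


def utilization_csv(rows):
    """Render (device, total, used) tuples as CSV with a used-percent column."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["device", "total", "used", "used_pct"])
    for device, total, used in rows:
        # Guard against division by zero for devices with no interfaces.
        pct = round(used / total * 100) if total else 0
        writer.writerow([device, total, used, pct])
    return buf.getvalue()


report = utilization_csv([("sw01", 48, 24), ("sw02", 24, 0)])
```

The returned string could then be attached to the job result or written to a file for download.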
Conclusion
Throughout this post, we explored the powerful and extensible Nautobot Job Execution Framework, a core automation feature that empowers engineers and developers to build, schedule, and manage custom Python-based tasks inside the Nautobot platform.
🔑 Main Takeaways:
- **Job Structure** provides a clean, class-based approach to defining interactive and reusable network tasks.
- **Job Lifecycle** covers how jobs are created, discovered, executed, and logged—enabling repeatable and traceable automation.
- **Execution Details** revealed how input variables, logging, and result reporting work behind the scenes using asynchronous processing.
- **Advanced Capabilities** showed how to extend jobs with features like Job Buttons, Hooks, Scheduling, Approvals, and modular job composition.
- **Execution Environment** highlighted the role of Celery workers, queues, and job orchestration for handling workloads predictably and reliably.
- **Permissions and Security** stressed the importance of fine-grained access control, auditing, and safe handling of secrets and sensitive jobs.
- **Best Practices** offered tips on validating data, writing dry-run-safe jobs, logging actions clearly, and composing modular code.
- **Example Job** gave you a real-world use case—an interface utilization report—that you can copy, deploy, and customize in your own environment.
Whether you’re automating initial device provisioning, generating compliance reports, or building custom lifecycle integrations, the Nautobot Job framework gives you the flexibility and operational safety to do so at scale.
If you found this helpful, feel free to share it with your team, fork the example job, or even contribute to your own Job library!
Thanks for reading — and happy automating! 🤖💡