This is Part 3 of a three-part series on migrating scheduled tasks and ad-hoc jobs to GitHub Actions with Docker-based self-hosted runners. Read Part 1 and Part 2.
Migration Strategy: Incremental, Not Big-Bang
We didn’t flip a switch and move hundreds of jobs overnight. That’s a recipe for a very bad Monday morning. Instead, we followed a simple, repeatable process for each job:
- Commit the script to Git. Take the script off the server and into the repository.
- Write the workflow YAML. Define the trigger (schedule or manual), the container, and the steps.
- Run a few ad-hoc tests. Trigger the workflow manually with workflow_dispatch and verify the output.
- Submit a change control. Once validated, submit the request to retire the old cron job and enable the schedule on the new workflow.
- Disable the old job. Comment out or remove the crontab entry on the legacy server.
This incremental approach meant we always had a working job — either the old one or the new one — and we never had a gap in coverage. It also gave us a natural opportunity to clean up scripts as we migrated them. Some of those bash scripts hadn’t been touched in years.
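The test-before-schedule phase usually starts as a dispatch-only workflow with the schedule held back until the change control is approved. A minimal sketch of that intermediate state (all names hypothetical):

```yaml
# Validation-phase workflow (names hypothetical). The schedule trigger
# stays commented out until change control approves the cutover; until
# then the job can only be run by hand from the Actions tab.
name: Nightly Report (migration candidate)
on:
  workflow_dispatch:
  # schedule:
  #   - cron: "0 8 * * *" # enable after the old cron job is retired
jobs:
  run-report:
    runs-on: self-hosted
    container:
      image: ubuntu:22.04
    steps:
      - uses: actions/checkout@v4
      - name: Run migrated script
        run: ./scripts/nightly-report.sh
```

Uncommenting the schedule block is then a one-line pull request, which doubles as the audit trail for the cutover.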
Example 1: PowerShell — Inactive User Cleanup
This job runs every weekday morning, connects to Active Directory, and disables accounts that haven’t logged in for 90 days.
```yaml
name: Inactive User Cleanup
on:
  schedule:
    - cron: "0 12 * * 1-5" # 6 AM CST (UTC-6), weekdays
  workflow_dispatch:
jobs:
  cleanup-inactive-users:
    runs-on: self-hosted
    container:
      image: mcr.microsoft.com/powershell:latest
    steps:
      - uses: actions/checkout@v4
      - name: Run cleanup script
        env:
          AD_SERVICE_ACCOUNT: ${{ secrets.AD_SERVICE_ACCOUNT }}
          AD_SERVICE_PASSWORD: ${{ secrets.AD_SERVICE_PASSWORD }}
        run: |
          # -DryRun:$false is single-quoted so the shell doesn't expand
          # the (undefined) $false variable to an empty string before
          # pwsh ever sees it.
          pwsh -File ./scripts/disable-inactive-users.ps1 \
            -DaysInactive 90 \
            -DryRun:'$false'
```
A few things to note: we always include workflow_dispatch alongside the schedule so the team can trigger it manually if needed. The AD credentials come from GitHub Secrets — no passwords stored in the script or the repo.
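One shell-quoting detail worth knowing when calling pwsh from a bash `run:` step: tokens like `$false` look like PowerShell to a human, but the shell sees them first and expands any undefined `$variable` to an empty string. A standalone illustration:

```shell
#!/bin/sh
# Demonstrates the quoting gotcha: the shell expands the undefined
# variable $false to an empty string, so an unquoted -DryRun:$false
# loses its value before PowerShell ever sees it.
unquoted="-DryRun:$false"    # shell expands $false -> ""
quoted='-DryRun:$false'      # single quotes keep it literal for pwsh
echo "unquoted: $unquoted"
echo "quoted:   $quoted"
```

Single-quoting the argument (or escaping the dollar sign) is all it takes to get the literal text through to PowerShell intact.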
Example 2: Ansible — Linux Patching
This workflow runs Ansible against our Linux inventory to apply security patches. It’s scheduled for the first Saturday of every month during a maintenance window.
```yaml
name: Linux Security Patching
on:
  schedule:
    # Every Saturday, 4 AM CST (UTC-6). Cron can't express "first
    # Saturday" directly -- when both day-of-month and day-of-week are
    # restricted, they're OR'd together, so "0 10 1-7 * 6" would fire
    # every Saturday AND on days 1-7. We schedule every Saturday and
    # let the playbook step skip runs after the 7th.
    - cron: "0 10 * * 6"
  workflow_dispatch:
    inputs:
      target_group:
        description: "Inventory group to patch"
        required: true
        default: "all"
        type: string
jobs:
  patch-linux-servers:
    runs-on: self-hosted
    container:
      image: quay.io/ansible/ansible-runner:latest
    steps:
      - uses: actions/checkout@v4
      - name: Write vault password
        env:
          VAULT_PASS: ${{ secrets.ANSIBLE_VAULT_PASSWORD }}
        run: |
          echo "$VAULT_PASS" > .vault_pass
          chmod 600 .vault_pass
      - name: Run patching playbook
        run: |
          # Scheduled runs only proceed on the first Saturday of the month;
          # manual dispatches always run.
          if [ "${{ github.event_name }}" = "schedule" ] && [ "$(date -u +%d)" -gt 7 ]; then
            echo "Not the first Saturday of the month; skipping."
            exit 0
          fi
          ansible-playbook \
            -i inventory/production/hosts.yml \
            playbooks/security-patching.yml \
            --vault-password-file .vault_pass \
            --limit "${{ github.event.inputs.target_group || 'all' }}"
      - name: Cleanup
        if: always()
        run: rm -f .vault_pass
```
The workflow_dispatch input lets the team target a specific inventory group for emergency patches without waiting for the next scheduled window. The vault password file is created at runtime and removed in an if: always() cleanup step, so it never outlives the job.
Example 3: Packer — Server Image Builds
Packer builds are scheduled so we always have current golden images in our Nutanix and public cloud environments. Linux images are refreshed weekly and Windows images shortly after Patch Tuesday. The workflow below is the manually triggered variant, since schedule triggers can't accept inputs.
```yaml
name: Packer Image Build
on:
  workflow_dispatch:
    inputs:
      template:
        description: "Packer template to build"
        required: true
        type: choice
        options:
          - ubuntu-2204-base
          - rhel-9-base
          - windows-2022-base
jobs:
  build-image:
    runs-on: self-hosted
    container:
      image: hashicorp/packer:latest
    steps:
      - uses: actions/checkout@v4
      - name: Initialize Packer
        run: packer init templates/${{ github.event.inputs.template }}.pkr.hcl
      - name: Validate template
        run: packer validate templates/${{ github.event.inputs.template }}.pkr.hcl
      - name: Build image
        env:
          NUTANIX_USERNAME: ${{ secrets.NUTANIX_USERNAME }}
          NUTANIX_PASSWORD: ${{ secrets.NUTANIX_PASSWORD }}
          NUTANIX_ENDPOINT: ${{ secrets.NUTANIX_ENDPOINT }}
          NUTANIX_CLUSTER: ${{ secrets.NUTANIX_CLUSTER }}
        run: |
          packer build \
            -var "nutanix_username=$NUTANIX_USERNAME" \
            -var "nutanix_password=$NUTANIX_PASSWORD" \
            -var "nutanix_endpoint=$NUTANIX_ENDPOINT" \
            -var "nutanix_cluster=$NUTANIX_CLUSTER" \
            templates/${{ github.event.inputs.template }}.pkr.hcl
```
The dropdown input in workflow_dispatch means the team doesn’t need to remember template names — they just pick from a list in the GitHub UI and click “Run workflow.”
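Because schedule triggers can't supply inputs, the weekly and post-Patch-Tuesday refreshes need the template pinned in the workflow itself. A hypothetical scheduled companion (the cron window and workflow name are assumptions, and the build steps mirror the dispatch workflow above):

```yaml
# Hypothetical scheduled companion: schedule events carry no inputs,
# so the automated refresh hardcodes its template instead of using
# the dropdown. Cron window is an assumption, not the real one.
name: Weekly Ubuntu Base Refresh
on:
  schedule:
    - cron: "0 10 * * 6" # Saturdays, 4 AM CST (UTC-6)
jobs:
  build-image:
    runs-on: self-hosted
    container:
      image: hashicorp/packer:latest
    env:
      TEMPLATE: ubuntu-2204-base # pinned; no dropdown on scheduled runs
    steps:
      - uses: actions/checkout@v4
      - run: packer init templates/$TEMPLATE.pkr.hcl
      - run: packer validate templates/$TEMPLATE.pkr.hcl
      # build step identical to the dispatch workflow, minus the input
```

One of these per refreshed image keeps each schedule reviewable in its own file, while the dispatch workflow stays the single entry point for ad-hoc builds.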
What Changed
After migrating everything over, the difference was immediate:
- Hundreds of jobs visible in one place with full run history.
- Every script version controlled with git history — who changed what, when, and why.
- Every schedule defined in YAML, reviewed through pull requests, and living alongside the code it runs.
- Isolated environments through Docker containers — no more “this only works on that one server.”
- Secrets managed properly instead of sitting in plaintext config files.
The team went from not knowing if jobs ran to being able to pull up the exact output of any job on any date. That’s the kind of operational maturity that makes auditors happy and on-call engineers sleep better.
Getting Started
If you’re sitting on a pile of cron jobs and manual scripts, you don’t need to migrate everything at once. Pick one job — the simplest one you have. Commit the script, write the workflow, test it, and retire the old one. Once you’ve done it once, the pattern is clear and the rest follow naturally.
Happy automating!
This wraps up the three-part series on GitHub Actions and Job Scheduling. Check out Part 1: The Problem and Part 2: The Solution if you missed them.