Pipelines and Pizza 🍕

GitHub Actions and Job Scheduling, Part 2: The Solution

5 min read

This is Part 2 of a three-part series on migrating scheduled tasks and ad-hoc jobs to GitHub Actions with Docker-based self-hosted runners. Read Part 1 here.

From Scattered to Centralized

In Part 1, we talked about the mess: cron jobs on random servers, no visibility, no version control, and a lot of hoping things worked. Now let’s talk about the architecture that replaced all of it.

The solution came down to three pieces: self-hosted runners in Docker, vendor-published container images, and GitHub Actions workflow files committed to the codebase.

Self-Hosted Runners in a Docker Pool

GitHub Actions runners come in two flavors: GitHub-hosted (managed VMs in the cloud) and self-hosted (machines you manage yourself). We needed self-hosted for two reasons:

  1. Network access. These jobs interact with on-premises resources — Active Directory, internal servers, private network segments. GitHub-hosted runners can’t reach any of that.
  2. Toolchain isolation. We run PowerShell, Ansible, and Packer jobs. Rather than installing everything on one server and dealing with version conflicts, Docker containers let each job run in its own isolated environment.

The setup is a pool of three Docker servers running the GitHub Actions runner agent. When a workflow kicks off, GitHub assigns the job to whichever runner in the pool is available. If one server goes down, the other two keep processing jobs. No more single point of failure.

Registering a runner is straightforward. GitHub provides a short-lived registration token (valid for one hour) that you use during setup:

# Download and configure a self-hosted runner
./config.sh --url https://github.com/your-org/your-repo \
  --token YOUR_REGISTRATION_TOKEN \
  --labels docker,linux,self-hosted

With three servers in the pool and the same labels applied, GitHub distributes work across them automatically.
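As a sketch of how a workflow targets that pool (the job name and step here are illustrative, not from our actual repos), you list the labels a runner must carry, and GitHub routes the job to any idle runner that matches all of them:

```yaml
jobs:
  nightly-report:
    # Matches any runner in the pool that has ALL of these labels.
    runs-on: [self-hosted, linux, docker]
    steps:
      - name: Show which runner picked up the job
        run: echo "Running on $RUNNER_NAME"
```

Because every server in the pool registers with the same label set, no workflow ever needs to name a specific machine.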

Vendor-Published Containers

One of the best decisions we made was to use the official container images vendors already publish rather than building our own. Why maintain custom Dockerfiles when HashiCorp, Microsoft, and Red Hat already ship what you need?

Here’s what we use:

| Tool       | Container Source | Example Image                  |
|------------|------------------|--------------------------------|
| Packer     | HashiCorp        | hashicorp/packer               |
| Terraform  | HashiCorp        | hashicorp/terraform            |
| PowerShell | Microsoft        | mcr.microsoft.com/powershell   |
| Ansible    | Red Hat / Quay   | quay.io/ansible/ansible-runner |

In a workflow file, you specify the container directly:

jobs:
  run-packer-build:
    runs-on: self-hosted
    container:
      image: hashicorp/packer:latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: packer build template.pkr.hcl

Each job gets a fresh container. No leftover state from the last run, no dependency conflicts between tools. The Ansible job doesn’t care what version of PowerShell is installed because it’s running in a completely separate container.
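To make the isolation concrete, here is a sketch of what the Ansible counterpart might look like (the job name and playbook paths are made up for illustration) — same repository, same runner pool, completely different image:

```yaml
jobs:
  run-ansible-playbook:
    runs-on: self-hosted
    container:
      # Red Hat's official ansible-runner image; Python, Ansible,
      # and their dependencies live here and nowhere else.
      image: quay.io/ansible/ansible-runner:latest
    steps:
      - uses: actions/checkout@v4
      - name: Run playbook
        run: ansible-playbook -i inventory/lab site.yml
```

The Packer job and this job can run back-to-back on the same host without ever seeing each other's toolchains.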

Schedules in YAML

This is where GitHub Actions really shines for job scheduling. Instead of SSH-ing into a server and editing a crontab, you define schedules right in the workflow YAML file — version controlled, reviewable, and visible to the whole team.

GitHub Actions uses standard cron syntax in the schedule trigger:

on:
  schedule:
    # Run every weekday at 6 AM UTC
    - cron: "0 6 * * 1-5"

A few things worth knowing about GitHub’s cron implementation:

  • Times are in UTC. Plan accordingly for your local timezone.
  • Schedules aren’t exact. GitHub may delay execution by a few minutes during high-load periods. Don’t use this for anything that needs second-level precision.
  • Minimum interval is 5 minutes. You can’t schedule more frequently than that.
  • Schedules only run on the default branch. If your workflow file is on a feature branch, the schedule won’t fire.

For jobs that don’t need a fixed schedule — one-off Packer builds, emergency patching runs, ad-hoc reports — we use workflow_dispatch instead:

on:
  workflow_dispatch:
    inputs:
      target_environment:
        description: "Environment to patch"
        required: true
        type: choice
        options:
          - dev
          - staging
          - production

This gives the team a button in the GitHub UI to trigger jobs manually, with input parameters. No more SSH-ing into servers.
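Inside the job, the chosen value is available through the `inputs` context. A sketch (the step and script name are illustrative, not our actual patching job):

```yaml
jobs:
  patch:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v4
      - name: Patch the selected environment
        run: ./patch.sh --env "${{ inputs.target_environment }}"
```

Because `target_environment` is a `choice` input, the UI renders a dropdown — nobody can typo an environment name into a production run.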

Secrets Management

Scheduled jobs often need credentials — service accounts, API keys, SSH keys for Ansible. GitHub Actions has built-in secrets management at the repository and organization level.

steps:
  - name: Run Ansible playbook
    env:
      ANSIBLE_VAULT_PASSWORD: ${{ secrets.ANSIBLE_VAULT_PASSWORD }}
      SERVICE_ACCOUNT_KEY: ${{ secrets.SVC_ACCOUNT_KEY }}
    run: |
      echo "$ANSIBLE_VAULT_PASSWORD" > .vault_pass
      ansible-playbook -i inventory/prod site.yml --vault-password-file .vault_pass
      rm -f .vault_pass

Secrets are encrypted at rest, masked in logs, and scoped to the repository or organization. No more plaintext passwords in scripts on shared servers.

The Big Win: Visibility

With hundreds of jobs now running through GitHub Actions, we have a single pane of glass. Every run is logged. Every schedule is defined in code. Every change goes through a pull request. When an auditor asks “did this job run last Tuesday?” you click into the workflow, scroll to the date, and show them the green checkmark.

That’s a long way from “we think it ran… probably.”

In Part 3, we’ll walk through real workflow examples for PowerShell, Ansible, and Packer, and cover how we migrated jobs incrementally without breaking anything.

Happy automating!


Next up: Part 3 — The Migration: Code Examples and Lessons Learned