Custom Resource Definitions

Reaper provides three CRDs for managing workloads, overlay filesystems, and node-wide configuration tasks.

ReaperPod

A simplified, Reaper-native way to run workloads without standard container boilerplate.

  • Group: reaper.io
  • Version: v1alpha1
  • Kind: ReaperPod
  • Short name: rpod (kubectl get rpod)

Spec

| Field | Type | Required | Description |
|---|---|---|---|
| command | string[] | Yes | Command to execute on the host |
| args | string[] | No | Arguments to the command |
| env | EnvVar[] | No | Environment variables (simplified format) |
| volumes | Volume[] | No | Volume mounts (simplified format) |
| nodeSelector | map[string]string | No | Node selection constraints |
| dnsMode | string | No | DNS resolution mode (host or kubernetes) |
| overlayName | string | No | Named overlay group (requires matching ReaperOverlay) |
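The exact shape of the simplified EnvVar format is not spelled out here; assuming it mirrors the Kubernetes name/value pair convention, a ReaperPod with environment variables might look like the following sketch (the resource name and variable are hypothetical):

```yaml
apiVersion: reaper.io/v1alpha1
kind: ReaperPod
metadata:
  name: with-env            # hypothetical name
spec:
  command: ["/bin/sh", "-c", "echo $GREETING"]
  env:
    - name: GREETING        # assumed name/value shape of the simplified EnvVar
      value: "hello"
```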

Status

| Field | Type | Description |
|---|---|---|
| phase | string | Current phase: Pending, Running, Succeeded, Failed |
| podName | string | Name of the backing Pod |
| nodeName | string | Node where the workload runs |
| exitCode | int | Process exit code (when completed) |
| startTime | string | When the workload started |
| completionTime | string | When the workload completed |
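Because the phase and exit code are surfaced in status, completion can be awaited from the command line with kubectl's jsonpath support (a sketch; assumes a cluster with the controller installed and a ReaperPod named hello-world):

```shell
# Block until the workload finishes, then read its exit code
kubectl wait rpod/hello-world --for=jsonpath='{.status.phase}'=Succeeded --timeout=120s
kubectl get rpod hello-world -o jsonpath='{.status.exitCode}'
```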

Simplified Volumes

ReaperPod volumes use a flat format instead of the nested Kubernetes volume spec:

volumes:
  - name: config
    mountPath: /etc/config
    configMap: "my-configmap"     # ConfigMap name (string)
    readOnly: true
  - name: secret
    mountPath: /etc/secret
    secret: "my-secret"           # Secret name (string)
  - name: host
    mountPath: /data
    hostPath: "/opt/data"         # Host path (string)
  - name: scratch
    mountPath: /tmp/work
    emptyDir: true                # EmptyDir (bool)

Examples

Simple Task

apiVersion: reaper.io/v1alpha1
kind: ReaperPod
metadata:
  name: hello-world
spec:
  command: ["/bin/sh", "-c", "echo Hello from $(hostname) at $(date)"]

With Volumes

apiVersion: reaper.io/v1alpha1
kind: ReaperPod
metadata:
  name: with-config
spec:
  command: ["/bin/sh", "-c", "cat /config/greeting"]
  volumes:
    - name: config
      mountPath: /config
      configMap: "app-config"
      readOnly: true

With Node Selector

apiVersion: reaper.io/v1alpha1
kind: ReaperPod
metadata:
  name: compute-task
spec:
  command: ["/bin/sh", "-c", "echo Running on $(hostname)"]
  nodeSelector:
    workload-type: compute

Controller

The reaper-controller watches ReaperPod resources and creates backing Pods with runtimeClassName: reaper-v2. It translates the simplified ReaperPod spec into a full Pod spec.

  • Pod name matches ReaperPod name (1:1 mapping)
  • Owner references enable automatic garbage collection
  • Status is mirrored from the backing Pod
  • If overlayName is set, the Pod stays Pending until a matching ReaperOverlay is Ready
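Since the backing Pod shares the ReaperPod's name and carries an owner reference, both properties can be checked directly (a sketch; assumes the hello-world ReaperPod exists on a live cluster):

```shell
# The backing Pod has the same name as the ReaperPod (1:1 mapping)
kubectl get pod hello-world -o wide
# The owner reference points back to the ReaperPod, enabling garbage collection
kubectl get pod hello-world -o jsonpath='{.metadata.ownerReferences[0].kind}'
```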

ReaperOverlay

A PVC-like resource that manages the lifecycle of named overlay filesystems independently of ReaperPod workloads. It enables Kubernetes-native overlay creation, reset, and deletion without requiring direct node access.

  • Group: reaper.io
  • Version: v1alpha1
  • Kind: ReaperOverlay
  • Short name: rovl (kubectl get rovl)

Spec

| Field | Type | Default | Description |
|---|---|---|---|
| resetPolicy | string | Manual | When to reset: Manual, OnFailure, OnDelete |
| resetGeneration | int | 0 | Increment to trigger a reset on all nodes |

Status

| Field | Type | Description |
|---|---|---|
| phase | string | Current phase: Pending, Ready, Resetting, Failed |
| observedResetGeneration | int | Last resetGeneration fully applied |
| nodes[] | array | Per-node overlay state |
| nodes[].nodeName | string | Node name |
| nodes[].ready | bool | Whether the overlay is available |
| nodes[].lastResetTime | string | ISO 8601 timestamp of last reset |
| message | string | Human-readable status message |
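The per-node entries in status.nodes can be listed with a jsonpath range loop (a sketch; assumes an overlay named slurm on a live cluster):

```shell
# Print each node's name and whether its overlay is ready
kubectl get rovl slurm -o jsonpath='{range .status.nodes[*]}{.nodeName}{"\t"}{.ready}{"\n"}{end}'
```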

PVC-like Behavior

ReaperOverlay works like a PersistentVolumeClaim:

  • Blocking: ReaperPods with overlayName stay Pending until the matching ReaperOverlay exists and is Ready
  • Cleanup on delete: A finalizer ensures on-disk overlay data is cleaned up on all nodes when the ReaperOverlay is deleted
  • Reset: Increment spec.resetGeneration to trigger overlay teardown and recreation on all nodes

Examples

Create an Overlay

apiVersion: reaper.io/v1alpha1
kind: ReaperOverlay
metadata:
  name: slurm
spec:
  resetPolicy: Manual

Use with a ReaperPod

apiVersion: reaper.io/v1alpha1
kind: ReaperPod
metadata:
  name: install-slurm
spec:
  overlayName: slurm
  command: ["bash", "-c", "apt-get update && apt-get install -y slurm-wlm"]

Reset a Corrupt Overlay

kubectl patch rovl slurm --type merge -p '{"spec":{"resetGeneration":1}}'
kubectl get rovl slurm -w   # watch until phase returns to Ready

Delete an Overlay

kubectl delete rovl slurm   # finalizer cleans up on-disk data on all nodes

ReaperDaemonJob

A “DaemonSet for Jobs” that runs a command to completion on every matching node, with support for dependency ordering, retry policies, and shared overlays. Designed for node configuration tasks like Ansible playbooks that compose via shared overlays.

  • Group: reaper.io
  • Version: v1alpha1
  • Kind: ReaperDaemonJob
  • Short name: rdjob (kubectl get rdjob)

Spec

| Field | Type | Default | Description |
|---|---|---|---|
| command | string[] | (required) | Command to execute on each node |
| args | string[] | | Arguments to the command |
| env | EnvVar[] | | Environment variables (same format as ReaperPod) |
| workingDir | string | | Working directory for the command |
| overlayName | string | | Named overlay group for shared filesystem |
| nodeSelector | map[string]string | | Target specific nodes by labels (all nodes if empty) |
| dnsMode | string | | DNS resolution mode (host or kubernetes) |
| runAsUser | int | | UID for the process |
| runAsGroup | int | | GID for the process |
| volumes | Volume[] | | Volume mounts (same format as ReaperPod) |
| tolerations | Toleration[] | | Tolerations for the underlying Pods |
| triggerOn | string | NodeReady | Trigger events: NodeReady or Manual |
| after | string[] | | Dependency ordering: names of other ReaperDaemonJobs that must complete first |
| retryLimit | int | 0 | Maximum retries per node on failure |
| concurrencyPolicy | string | Skip | What to do on re-trigger while running: Skip or Replace |

Status

| Field | Type | Description |
|---|---|---|
| phase | string | Overall phase: Pending, Running, Completed, PartiallyFailed |
| readyNodes | int | Number of nodes that completed successfully |
| totalNodes | int | Total number of targeted nodes |
| observedGeneration | int | Last spec generation reconciled |
| nodeStatuses[] | array | Per-node execution status |
| nodeStatuses[].nodeName | string | Node name |
| nodeStatuses[].phase | string | Per-node phase: Pending, Running, Succeeded, Failed |
| nodeStatuses[].reaperPodName | string | Name of the ReaperPod created for this node |
| nodeStatuses[].exitCode | int | Exit code on this node |
| nodeStatuses[].retryCount | int | Number of retries so far |
| message | string | Human-readable status message |
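Per-node progress can be summarized with a jsonpath range loop over status.nodeStatuses (a sketch; assumes the install-packages job from the examples on a live cluster):

```shell
# Print each node's name, per-node phase, and retry count
kubectl get rdjob install-packages -o jsonpath='{range .status.nodeStatuses[*]}{.nodeName}{"\t"}{.phase}{"\t"}{.retryCount}{"\n"}{end}'
```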

Controller Layering

ReaperDaemonJob → ReaperPod → Pod. The DaemonJob controller creates one ReaperPod per matching node, pinned via nodeName. The existing ReaperPod controller then creates the backing Pods. No changes to the runtime or shim.

Dependency Ordering

The after field lists other ReaperDaemonJobs that must reach Completed phase before this job starts on any node. This enables composable workflows where one job’s output is another’s input (via shared overlays).

Examples

Simple Node Info

apiVersion: reaper.io/v1alpha1
kind: ReaperDaemonJob
metadata:
  name: node-info
spec:
  command: ["/bin/sh", "-c"]
  args:
    - |
      echo "Node: $(hostname)"
      echo "Kernel: $(uname -r)"

Composable Node Config with Dependencies

apiVersion: reaper.io/v1alpha1
kind: ReaperDaemonJob
metadata:
  name: mount-filesystems
spec:
  command: ["/bin/sh", "-c"]
  args: ["mkdir -p /mnt/shared && mount -t nfs server:/export /mnt/shared"]
  overlayName: node-config
  nodeSelector:
    role: compute
---
apiVersion: reaper.io/v1alpha1
kind: ReaperDaemonJob
metadata:
  name: install-packages
spec:
  command: ["/bin/sh", "-c"]
  args: ["apt-get update && apt-get install -y htop"]
  overlayName: node-config
  after:
    - mount-filesystems
  nodeSelector:
    role: compute
  retryLimit: 2