# Examples
Runnable examples demonstrating Reaper’s capabilities. Each example includes a setup.sh script that creates a Kind cluster with Reaper pre-installed.
## Prerequisites
All examples require:
**Note:** Examples 01–08 use the legacy Ansible-based installer. Newer examples (09+) use the Helm-based `setup-playground.sh` pattern.
Run all scripts from the repository root.
## Examples
### `01-scheduling/` — Node Scheduling Patterns
Demonstrates running workloads on all nodes vs. a labeled subset using DaemonSets with nodeSelector.
- 3-node cluster (1 control-plane + 2 workers)
- All-node DaemonSet (load/memory monitor on every node)
- Subset DaemonSet (login-node monitor only on `node-role=login` nodes)
```sh
./examples/01-scheduling/setup.sh
kubectl apply -f examples/01-scheduling/all-nodes-daemonset.yaml
kubectl apply -f examples/01-scheduling/subset-nodes-daemonset.yaml
```
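The subset pattern reduces to a `nodeSelector` on the DaemonSet pod template. A minimal sketch, assuming the example's `node-role=login` label convention (the image and command are illustrative, not the example's actual manifest):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: login-node-monitor
spec:
  selector:
    matchLabels:
      app: login-node-monitor
  template:
    metadata:
      labels:
        app: login-node-monitor
    spec:
      # Pods are scheduled only on nodes carrying this label.
      nodeSelector:
        node-role: login
      containers:
        - name: monitor
          image: busybox   # illustrative image
          command: ["sh", "-c", "while true; do uptime; sleep 30; done"]
```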
### `02-client-server/` — TCP Client-Server Communication
Demonstrates cross-node networking with a socat TCP server on one node and clients connecting from other nodes over host networking.
- 4-node cluster (1 control-plane + 3 workers)
- Server on `role=server` node, clients on `role=client` nodes
- Clients discover the server IP via a ConfigMap
```sh
./examples/02-client-server/setup.sh
kubectl apply -f examples/02-client-server/server-daemonset.yaml
kubectl apply -f examples/02-client-server/client-daemonset.yaml
kubectl logs -l app=demo-client --all-containers --prefix -f
```
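ConfigMap-based discovery can be sketched as below. This is not the example's actual manifest; the ConfigMap name (`server-endpoint`), key (`host`), and port are assumptions:

```yaml
# Client pod-spec fragment: inject the server IP from a ConfigMap as an env var.
spec:
  hostNetwork: true   # the example uses host networking
  containers:
    - name: client
      image: alpine/socat   # illustrative image
      env:
        - name: SERVER_HOST
          valueFrom:
            configMapKeyRef:
              name: server-endpoint   # hypothetical ConfigMap name
              key: host               # hypothetical key holding the server IP
      # Port 9000 is illustrative.
      command: ["sh", "-c", "socat - TCP:$(SERVER_HOST):9000"]
```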
### `03-client-server-runas/` — Client-Server with Non-Root User
Same as client-server, but all workloads run as a shared non-root user (demo-svc, UID 1500 / GID 1500), demonstrating Reaper’s securityContext.runAsUser / runAsGroup support. The setup script creates the user on every node with identical IDs, mimicking an LDAP environment.
- 4-node cluster (1 control-plane + 3 workers)
- Shared `demo-svc` user created on all nodes (UID 1500, GID 1500)
- All log output includes `uid=` to prove privilege drop
```sh
./examples/03-client-server-runas/setup.sh
kubectl apply -f examples/03-client-server-runas/server-daemonset.yaml
kubectl apply -f examples/03-client-server-runas/client-daemonset.yaml
kubectl logs -l app=demo-client-runas --all-containers --prefix -f
```
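The privilege drop is plain Kubernetes `securityContext`, which Reaper honors per the example. A sketch using the UID/GID from the setup script (image and command are illustrative):

```yaml
# Pod-level securityContext: every container runs as demo-svc (1500/1500).
spec:
  securityContext:
    runAsUser: 1500
    runAsGroup: 1500
  containers:
    - name: client
      image: busybox   # illustrative image
      # `id` in the log output shows uid=1500, proving the drop.
      command: ["sh", "-c", "id; sleep 3600"]
```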
### `04-volumes/` — Kubernetes Volume Mounts
Demonstrates Reaper’s volume mount support across four volume types: ConfigMap, Secret, hostPath, and emptyDir. Showcases package installation (nginx) inside the overlay namespace without modifying the host.
- 2-node cluster (1 control-plane + 1 worker)
- ConfigMap-configured nginx, read-only Secrets, hostPath file serving, emptyDir scratch workspace
- Software installed inside pod commands via overlay (host unmodified)
```sh
./examples/04-volumes/setup.sh
kubectl apply -f examples/04-volumes/configmap-nginx.yaml
kubectl logs configmap-nginx -f
```
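All four volume types use standard Kubernetes volume syntax. A condensed pod-spec sketch (the ConfigMap, Secret, and path names are hypothetical):

```yaml
spec:
  containers:
    - name: app
      image: busybox   # illustrative image
      volumeMounts:
        - { name: config,  mountPath: /etc/app }                    # ConfigMap
        - { name: creds,   mountPath: /etc/creds, readOnly: true }  # Secret
        - { name: host,    mountPath: /host-data }                  # hostPath
        - { name: scratch, mountPath: /scratch }                    # emptyDir
  volumes:
    - name: config
      configMap: { name: app-config }        # hypothetical name
    - name: creds
      secret: { secretName: app-secret }     # hypothetical name
    - name: host
      hostPath: { path: /var/lib/demo, type: DirectoryOrCreate }
    - name: scratch
      emptyDir: {}
```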
### `05-kubemix/` — Kubernetes Workload Mix
Demonstrates running Jobs, DaemonSets, and Deployments simultaneously on a 10-node cluster. Each workload type targets a different set of labeled nodes, showcasing Reaper across diverse Kubernetes workload modes. All workloads read configuration from dedicated ConfigMap volumes.
- 10-node cluster (1 control-plane + 9 workers)
- Workers partitioned: 3 batch (Jobs), 3 daemon (DaemonSets), 3 service (Deployments)
- Each workload reads config from its own ConfigMap volume
```sh
./examples/05-kubemix/setup.sh
kubectl apply -f examples/05-kubemix/
kubectl get pods -o wide
```
### `06-ansible-jobs/` — Ansible Jobs
Demonstrates overlay persistence by running sequential Jobs: the first installs Ansible via apt, the second runs an Ansible playbook (from a ConfigMap) to install and verify nginx. Packages installed by Job 1 persist in the shared overlay for Job 2.
- 10-node cluster (1 control-plane + 9 workers)
- Job 1: installs Ansible on all workers (persists in overlay)
- Job 2: runs Ansible playbook from ConfigMap to install nginx
```sh
./examples/06-ansible-jobs/setup.sh
kubectl apply -f examples/06-ansible-jobs/install-ansible-job.yaml
kubectl wait --for=condition=Complete job/install-ansible --timeout=300s
kubectl apply -f examples/06-ansible-jobs/nginx-playbook-job.yaml
```
### `07-ansible-complex/` — Ansible Complex (Reboot-Resilient)
Fully reboot-resilient Ansible deployment using only DaemonSets. A bootstrap DaemonSet installs Ansible, then role-specific DaemonSets run playbooks (nginx on login nodes, htop on compute nodes). Init containers create implicit dependencies so a single kubectl apply -f deploys everything in the right order. All packages survive node reboots.
- 10-node cluster (1 control-plane + 9 workers: 2 login, 7 compute)
- 3 DaemonSets: Ansible bootstrap (all), nginx (login), htop (compute)
- Init container dependencies — no manual ordering needed
```sh
./examples/07-ansible-complex/setup.sh
kubectl apply -f examples/07-ansible-complex/
kubectl rollout status daemonset/nginx-login --timeout=300s
```
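One plausible shape for the init-container gating (a sketch of the pattern, not the example's exact manifest): because the overlay is shared, a role-specific DaemonSet can simply wait for the bootstrap step's output to become visible before running its playbook.

```yaml
# Fragment of a role-specific DaemonSet pod spec.
spec:
  initContainers:
    - name: wait-for-ansible
      image: busybox   # illustrative image
      # ansible-playbook appears in the shared overlay once the
      # bootstrap DaemonSet has installed it on this node.
      command: ["sh", "-c", "until command -v ansible-playbook; do sleep 5; done"]
  containers:
    - name: apply-playbook
      image: busybox   # illustrative image
      command: ["ansible-playbook", "/config/nginx.yml"]   # hypothetical playbook path
```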
### `08-mix-container-runtime-engines/` — Mixed Runtime Engines
Demonstrates mixed runtime engines in the same cluster: a standard containerized OpenLDAP server (default containerd/runc) alongside Reaper workloads that configure SSSD on every node. Reaper pods consume the LDAP service via a fixed ClusterIP, enabling getent passwd to resolve LDAP users on the host.
- 4-node cluster (1 control-plane + 3 workers: 1 login, 2 compute)
- OpenLDAP Deployment (default runtime) with 5 posixAccount users
- Reaper DaemonSets: Ansible bootstrap + SSSD configuration (all workers)
- Init containers handle dependency ordering (Ansible + LDAP readiness)
```sh
./examples/08-mix-container-runtime-engines/setup.sh
kubectl apply -f examples/08-mix-container-runtime-engines/
kubectl rollout status daemonset/base-config --timeout=300s
```
### `09-reaperpod/` — ReaperPod CRD
Demonstrates the ReaperPod Custom Resource Definition — a simplified, Reaper-native way to run workloads without container boilerplate. A reaper-controller watches ReaperPod resources and creates real Pods with runtimeClassName: reaper-v2 pre-configured.
- No `image:` field needed (busybox placeholder handled automatically)
- Reaper-specific fields: `dnsMode`, `overlayName`, simplified volumes
- Status tracks `phase`, `podName`, `nodeName`, `exitCode`
```sh
# Prerequisites: install CRD and controller
kubectl create namespace reaper-system
kubectl apply -f deploy/kubernetes/crds/reaperpods.reaper.io.yaml
kubectl apply -f deploy/kubernetes/reaper-controller.yaml

# Run a simple task
kubectl apply -f examples/09-reaperpod/simple-task.yaml
kubectl get reaperpods
kubectl describe reaperpod hello-world

# With volumes (create ConfigMap first)
kubectl create configmap app-config --from-literal=greeting="Hello from ConfigMap"
kubectl apply -f examples/09-reaperpod/with-volumes.yaml

# With node selector (label a node first)
kubectl label node <name> workload-type=compute
kubectl apply -f examples/09-reaperpod/with-node-selector.yaml
```
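A ReaperPod spec might look like the sketch below. Only `dnsMode` and `overlayName` are confirmed field names; the API version, the `command` field, and the values shown are assumptions, so treat `examples/09-reaperpod/` as authoritative:

```yaml
apiVersion: reaper.io/v1   # group inferred from the CRD filename; version assumed
kind: ReaperPod
metadata:
  name: hello-world
spec:
  # No image: needed — the controller injects a busybox placeholder and
  # sets runtimeClassName: reaper-v2 on the generated Pod.
  command: ["sh", "-c", "echo hello from $(hostname)"]   # hypothetical field
  dnsMode: host        # hypothetical value
  overlayName: demo    # hypothetical value
```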
### `10-slurm-hpc/` — Slurm HPC (Mixed Runtimes)
Demonstrates a Slurm HPC cluster using mixed Kubernetes runtimes: slurmctld (scheduler) runs as a standard container, while slurmd (worker daemons) run on compute nodes via Reaper with direct host access for CPU pinning and device management.
- 4-node cluster (1 control-plane + 1 slurmctld + 2 compute)
- slurmctld Deployment (default runtime) with munge authentication
- slurmd DaemonSet (Reaper) on compute nodes with shared overlay
```sh
./examples/10-slurm-hpc/setup.sh
kubectl apply -f examples/10-slurm-hpc/
kubectl rollout status daemonset/slurmd --timeout=300s
```
### `11-node-monitoring/` — Node Monitoring (Prometheus + Reaper)
Demonstrates host-level node monitoring: Prometheus node_exporter runs as a Reaper DaemonSet for accurate host metrics, while a containerized Prometheus server (default runtime) scrapes them.
- 3-node cluster (1 control-plane + 2 workers)
- node_exporter DaemonSet (Reaper) — downloads and runs on host
- Prometheus Deployment (default runtime) with Kubernetes service discovery
```sh
./examples/11-node-monitoring/setup.sh
kubectl apply -f examples/11-node-monitoring/
kubectl port-forward svc/prometheus 9090:9090
```
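Scraping host-level node_exporter via Kubernetes service discovery is standard Prometheus configuration. A minimal `scrape_configs` sketch, assuming node_exporter listens on its default port 9100 (not necessarily the example's exact config):

```yaml
# prometheus.yml fragment: discover cluster nodes and scrape node_exporter
# on each node's internal IP.
scrape_configs:
  - job_name: node-exporter
    kubernetes_sd_configs:
      - role: node
    relabel_configs:
      # Rewrite the scrape target to <node InternalIP>:9100.
      - source_labels: [__meta_kubernetes_node_address_InternalIP]
        regex: (.*)
        target_label: __address__
        replacement: $1:9100
```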
### `12-daemon-job/` — ReaperDaemonJob CRD (Node Configuration)
Demonstrates the ReaperDaemonJob Custom Resource Definition — a “DaemonSet for Jobs” that runs commands to completion on every matching node. Designed for node configuration tasks like Ansible playbooks that compose via shared overlays.
- Dependency ordering via `after` field (second job waits for first)
- Shared overlays via `overlayName` (composable node config)
- Per-node status tracking with retry support
```sh
# Prerequisites: Reaper + controller running (via Helm or setup-playground.sh)
kubectl apply -f examples/12-daemon-job/simple-daemon-job.yaml
kubectl get reaperdaemonjobs
kubectl describe reaperdaemonjob node-info

# Composable example with dependencies
kubectl apply -f examples/12-daemon-job/composable-node-config.yaml
kubectl get rdjob -w   # watch until both jobs complete
```
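The `after` and `overlayName` fields compose roughly like this sketch (the API version and the `command` field are assumptions; see `examples/12-daemon-job/` for the real schema):

```yaml
apiVersion: reaper.io/v1   # group inferred from the ReaperPod CRD; version assumed
kind: ReaperDaemonJob
metadata:
  name: install-tools
spec:
  overlayName: node-config   # compose with other jobs via a shared overlay
  command: ["sh", "-c", "apt-get update && apt-get install -y htop"]  # hypothetical field
---
apiVersion: reaper.io/v1
kind: ReaperDaemonJob
metadata:
  name: verify-tools
spec:
  after: install-tools       # runs on each node only once install-tools completes
  overlayName: node-config   # sees the packages install-tools left in the overlay
  command: ["sh", "-c", "command -v htop"]
```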
## Cleanup
Examples with setup.sh scripts can be cleaned up independently:
```sh
./examples/01-scheduling/setup.sh --cleanup
./examples/02-client-server/setup.sh --cleanup
./examples/03-client-server-runas/setup.sh --cleanup
./examples/04-volumes/setup.sh --cleanup
./examples/05-kubemix/setup.sh --cleanup
./examples/06-ansible-jobs/setup.sh --cleanup
./examples/07-ansible-complex/setup.sh --cleanup
./examples/08-mix-container-runtime-engines/setup.sh --cleanup
./examples/10-slurm-hpc/setup.sh --cleanup
./examples/11-node-monitoring/setup.sh --cleanup
```
For CRD-based examples (09, 12), delete the resources directly:
```sh
kubectl delete reaperpod --all
kubectl delete reaperdaemonjob --all
```