bootc has been around for a while in the Fedora/CentOS ecosystem, but recently the Rocky Linux SIG/Containers group has done the work to port it to Rocky Linux 10. With Rocky and CIQ launching their bootable containers with commercial support, I wanted to build my own customized image on top of it — so I put together bootc-rocky.

If you’ve ever managed a fleet of machines and dealt with the pain of package drift, this is for you.

The Problem with Traditional OS Management

If you’ve run infrastructure at scale, you know the pattern. You install an OS, run configuration management, push updates over time, and eventually every machine becomes a unique snowflake. Packages drift out of sync, config files accumulate manual edits, and the machine you installed two years ago looks nothing like the one you installed last week.

I touched on some of these problems back in my Stateless Hypervisors at Scale post where we were booting live images to solve similar consistency issues with OpenStack compute nodes. bootc takes that concept further with a much more mature toolchain.

Treat Your OS Like a Container

The idea behind bootc is simple: your entire OS — kernel, bootloader, packages, config — is a standard OCI container image. You define it in a Containerfile, build it, push it to a registry, and machines pull the identical image. No more dnf update across a fleet and hoping nothing breaks.

Here’s what the bootc-rocky Containerfile looks like at a high level:

FROM localhost/rocky-bootc-base:10

# install packages
RUN dnf install -y \
        tmux rsync unzip \
        firewalld aide audit \
    && dnf clean all

# security hardening
COPY config/sshd/99-hardening.conf /etc/ssh/sshd_config.d/
COPY config/sysctl.d/99-hardening.conf /etc/sysctl.d/
RUN update-crypto-policies --no-reload --set FUTURE

# enable services
RUN systemctl enable firewalld && \
    systemctl enable auditd && \
    systemctl enable sshd

It looks exactly like building a Docker container because it is. The difference is that when a machine boots this image, it becomes the root filesystem with a real kernel and init system.

How Updates Work

This is where it gets interesting. Instead of SSHing into machines and running package updates, you:

  1. Edit the Containerfile
  2. Build a new image
  3. Push to a registry
  4. Machines pull the new image and reboot into it

$ bootc upgrade    # pulls new image, stages it
$ bootc status     # shows current vs staged
$ reboot           # boots into the new image
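
On the build side, steps 1–3 are just standard container tooling. A minimal sketch with podman (the registry name and tag are placeholders, not anything from the repo):

```shell
# Build the new image from the updated Containerfile
podman build -t registry.example.com/bootc-rocky:10 .

# Push it to the registry the fleet pulls from
podman push registry.example.com/bootc-rocky:10
```

From there, each machine's `bootc upgrade` picks up the new digest on its next check.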

bootc uses an A/B deployment scheme backed by ostree. The running system is deployment A. When you upgrade, the new image is staged as deployment B. On reboot, the bootloader switches to B. If something goes wrong:

$ bootc rollback
$ reboot

You’re back on the previous image. The root filesystem is read-only by default — /etc (config) and /var (data) are both persistent and writable, while everything else is immutable and replaced atomically on update.

There’s also an option to enable a live overlay on /usr so you can install packages at runtime for debugging or applying critical fixes while the machine is up. Anything installed this way gets cleared on the next reboot. If you need the change to persist, you update the Containerfile and build a new image — keeping the source of truth in code.
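
In practice that looks something like this (assuming the image ships dnf; anything installed into the overlay disappears on reboot):

```shell
# Make /usr a transient writable overlay
$ sudo bootc usr-overlay

# Install a debugging tool into the overlay
$ sudo dnf install -y strace

# After the next reboot, /usr is back to exactly what the image defines
```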

Why This Matters for Fleets

Think about an OpenStack deployment with hundreds of compute nodes. Traditionally you’d have configuration management running on every node, pushing updates, handling package conflicts, dealing with machines that missed a run or had a transient failure. Over time they diverge.

With bootc, every machine in the fleet is running the exact same image. When you need to update the kernel, patch a vulnerability, or add a package, you change the Containerfile, build, push, and roll it out. Every node gets the identical image. You can:

  • Stage updates during maintenance windows with bootc upgrade --download-only and then apply them when ready
  • Blue/green deploy by switching machines between image tags with bootc switch
  • Roll back the entire fleet if something goes wrong
  • Audit exactly what’s running because the image is tied to a git commit
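
As a sketch of the blue/green case, you might point a canary group at a candidate tag before rolling it out fleet-wide (the image reference here is a placeholder):

```shell
# Point this machine at the candidate image tag
$ sudo bootc switch registry.example.com/bootc-rocky:10-rc1

# Inspect the booted vs. staged deployments before committing
$ sudo bootc status

# Boot into the staged deployment
$ sudo reboot
```

If the canaries look healthy, the rest of the fleet switches to the same tag; if not, `bootc rollback` puts them back on the previous image.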

The image can be deployed via ISO, PXE boot, bootc install to-disk, or even converting an existing running system in-place with bootc install to-existing-root.
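
For the bare-disk case, the usual pattern is to run the image itself as a privileged container and let it install itself. A rough sketch (the image reference and device path are examples — this wipes the target disk):

```shell
# Run the bootc image as a privileged container and install it to /dev/vda
$ sudo podman run --rm --privileged --pid=host \
    -v /dev:/dev -v /var/lib/containers:/var/lib/containers \
    --security-opt label=type:unconfined_t \
    registry.example.com/bootc-rocky:10 \
    bootc install to-disk --wipe /dev/vda
```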

Getting Started

The bootc-rocky repo has everything wired up:

$ make base        # build base Rocky bootc image
$ make custom      # build custom hardened image
$ make push        # push to registry
$ make disk-iso    # build an installer ISO
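
Under the hood, the image targets presumably wrap ordinary podman builds — something along these lines (the target names come from the repo; the exact commands and tags are my assumption):

```shell
# make base: build the upstream Rocky bootc base image
podman build -t localhost/rocky-bootc-base:10 base/

# make custom: layer the hardening Containerfile on top of the base
podman build -t localhost/bootc-rocky:10 .
```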

The base image is built with the upstream rocky-bootc build system maintained by the Rocky Linux SIG/Containers group. The custom layer adds security hardening — SSH key-only auth, kernel sysctl hardening, firewall rules, FUTURE crypto policy, and stripped SUID bits.

If you’re running any kind of fleet — whether it’s OpenStack, Kubernetes nodes, or just a rack of servers — this approach eliminates an entire class of operational problems. Your infrastructure becomes as reproducible as your application containers.