
Why we chose Ceph as the storage foundation — and what we had to build on top of it

Ceph is the right answer for software-defined storage at scale. But operating it requires tooling that doesn't exist in the open-source ecosystem. This is the gap we're filling.

Why we chose Ceph

When we decided to build a software-defined storage platform, we evaluated every serious contender: GlusterFS, MinIO (object-only), DRBD (block-only), and Ceph. We chose Ceph for three reasons.

1. Unified storage protocols

Ceph exposes block (RBD), object (RGW, S3-compatible), and file (CephFS) from a single cluster. Most competitors specialize. The ability to serve all three protocols from one pool means customers don’t need separate systems for VM disks, object storage, and shared filesystems.

2. CRUSH and failure domain awareness

CRUSH (Controlled Replication Under Scalable Hashing) is Ceph’s placement algorithm. It lets you define arbitrary failure domains — host, rack, row, datacenter — and write placement rules that ensure no two replicas of the same object share a failure domain. No other open-source storage system has anything comparable in production maturity.
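The core idea can be shown with a deliberately simplified sketch. This is not Ceph’s actual CRUSH implementation — real CRUSH does weighted pseudo-random descent through a configurable bucket hierarchy — and the cluster map below is invented for illustration. The point is the invariant: one replica per rack, chosen deterministically per object.

```python
import hashlib

# Toy cluster map (illustrative only): racks are the failure domain.
CLUSTER = {
    "rack1": ["host1", "host2"],
    "rack2": ["host3", "host4"],
    "rack3": ["host5", "host6"],
}

def place(object_name: str, replicas: int = 3) -> list[str]:
    """Pick `replicas` hosts so that no two share a rack.

    A per-object hash orders racks and hosts, so placement is stable
    for a given object but spread across different objects.
    """
    def score(item: str) -> int:
        h = hashlib.sha256(f"{object_name}/{item}".encode()).hexdigest()
        return int(h, 16)

    chosen = []
    for rack in sorted(CLUSTER, key=score)[:replicas]:
        # Exactly one host per rack: replicas never share the
        # rack failure domain.
        chosen.append(min(CLUSTER[rack], key=score))
    return chosen
```

Because placement is a pure function of the object name and the cluster map, any client can compute where data lives without asking a central metadata server — the property that lets CRUSH-style placement scale.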

3. Community and longevity

Ceph is backed by Red Hat, SUSE, and a large independent community. The codebase is 15+ years old and has been proven in deployments at enormous scale. We’re not betting on something that might disappear.

What we had to build on top

Ceph’s operational tooling tells you what to run, not when and in what order. cephadm handles daemon deployment, but the orchestration logic — drain before OSD removal, wait for recovery before continuing an upgrade, sequence monitor restarts to preserve quorum — lives in operator runbooks.
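To make the sequencing concrete, here is a hypothetical sketch of the “drain before OSD removal” runbook expressed as code. The `Cluster` interface is an illustrative stand-in, not a real Ceph client API; what matters is the gating logic an operator would otherwise enforce by hand.

```python
import time

class Cluster:
    """Minimal stand-in for a cluster handle (illustrative only)."""

    def __init__(self):
        # Pretend recovery takes two health checks to complete.
        self._recovery_checks_left = 2

    def mark_out(self, osd_id: int) -> None:
        print(f"marking osd.{osd_id} out; data begins migrating")

    def is_recovered(self) -> bool:
        # A real check would inspect placement-group states
        # (e.g. all PGs active+clean).
        self._recovery_checks_left -= 1
        return self._recovery_checks_left <= 0

    def remove(self, osd_id: int) -> None:
        print(f"removing osd.{osd_id}")

def drain_and_remove(cluster: Cluster, osd_id: int,
                     poll_seconds: float = 0.01) -> None:
    # Step 1: stop new writes landing on the OSD and start migration.
    cluster.mark_out(osd_id)
    # Step 2: gate on recovery. Removing the OSD before its data has
    # been re-replicated elsewhere would reduce redundancy.
    while not cluster.is_recovered():
        time.sleep(poll_seconds)
    # Step 3: only now is removal safe.
    cluster.remove(osd_id)
```

Each runbook step is trivial on its own; the operational risk is in the ordering and the wait conditions between steps, which is exactly what an orchestration layer exists to enforce.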

NodeFoundry is the orchestration layer that makes those runbooks unnecessary.

Want to see it for yourself?

We're happy to walk you through it.

No pitch deck. Just a real conversation about your infrastructure, your cluster size, and whether NodeFoundry is the right fit. If it's not, we'll tell you.