Free for up to 3 nodes — no credit card
NodeFoundry provisions bare metal servers over iPXE, deploys Ceph clusters, and handles day-two operations — upgrades, monitoring, multi-cluster management — so your team can focus on workloads, not infrastructure.
Cluster overview — 12 active nodes, HEALTH_OK, 847 TiB raw at 43% utilization. Health panel, per-node status list, 24-hour IOPS sparkline.
nf CLI
Integrate with Terraform, Ansible, or your existing runbooks. No OS installs. No config files. No SSH sessions.
Point your DHCP server at the master node. Power on your storage servers. Each one pulls the bootstrap image over iPXE, inventories its hardware, and registers — automatically.
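If you run dnsmasq, for example, a minimal sketch looks like this; the master address 10.10.1.5 and the boot filenames are illustrative, so check the docs for the exact image your master serves:

$ cat <<'EOF' | sudo tee /etc/dnsmasq.d/nodefoundry.conf
# Legacy BIOS clients chain-load iPXE from the NodeFoundry master
dhcp-boot=undionly.kpxe,nodefoundry,10.10.1.5
# UEFI clients (client-arch 7) get the EFI build instead
dhcp-match=set:efi64,option:client-arch,7
dhcp-boot=tag:efi64,ipxe.efi,nodefoundry,10.10.1.5
EOF
$ sudo systemctl restart dnsmasq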
Inspect auto-discovered specs: CPU, RAM, drives, NICs, and IPMI address. Select nodes, choose your Ceph version and failure domain, and deploy with one command.
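A deploy might look like the sketch below; the flag names are illustrative rather than canonical nf syntax:

$ nf cluster create --name prod-ceph \
    --ceph-version squid --failure-domain host \
    --nodes node-01,node-02,node-03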
Monitor live IOPS and capacity, manage pools and OSDs, run rolling upgrades, add nodes, or spin up additional clusters — from the dashboard or via the REST API.
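As a sketch, pulling live metrics over REST could look like this; the endpoint path, host, and token variable are illustrative:

$ curl -s -H "Authorization: Bearer $NF_TOKEN" \
    https://nf-master.example.com/api/v1/clusters/prod-ceph/metrics/iops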
Zero-touch provisioning
NodeFoundry's iPXE server boots each node, inventories every CPU core, gigabyte of RAM, and attached drive, then registers it in the control plane — with no operator involvement.
New nodes appear in the browser dashboard within 60 seconds of power-on. Every action available in the UI is also available via the nf CLI and REST API — use whichever fits your workflow.
$ nf network create --name stor-net \
--range 10.10.1.0/24 --gateway 10.10.1.1
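Once nodes boot and register, a hypothetical follow-up lists what was discovered; the subcommand name is illustrative:

$ nf node list    # each row shows the auto-discovered CPU, RAM, drives, NICs, and IPMI address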
Browser dashboard
Provision nodes, deploy clusters, manage pools and OSDs, watch live metrics — all from a browser-based dashboard that runs on your own infrastructure. NodeFoundry is licensed software you install. No cloud. No SaaS.
| Device | Capacity | Interface | Model | Role |
|---|---|---|---|---|
| /dev/sda | 500 GB | SATA | Samsung 870 EVO | MON |
| /dev/sdb | 1.0 TB | NVMe | WD Black SN850 | Available |
| Task | State | Started | Result |
|---|---|---|---|
| ceph.install-mon | succeeded | Mar 28, 01:52 PM | Monitor daemon started successfully |
| ceph.install-mgr | succeeded | Mar 28, 02:42 PM | Manager daemon started successfully |
| ceph.install-osd | succeeded | Mar 28, 03:32 PM | OSD.0 activated and running |
| ceph.install-osd | succeeded | Mar 28, 04:22 PM | OSD.1 activated and running |
| ceph.install-mon | succeeded | Mar 28, 05:12 PM | Monitor joined quorum |
| ceph.install-osd | running | Mar 28, 07:20 PM | — |
Full lifecycle management — from first boot to rolling upgrades across multiple clusters.
Manage any number of independent Ceph clusters from a single control plane. Switch between clusters in the dashboard without logging into separate systems.
Active-passive master node failover with automatic promotion. Replicated Ceph monitors on three or more nodes ensure cluster health survives any single node failure.
Zero-downtime major Ceph version upgrades. NodeFoundry drains and recommissions each OSD in sequence while Ceph keeps serving I/O. Automatic rollback on failure.
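Kicking one off from the CLI might look like this sketch, with illustrative flags:

$ nf cluster upgrade --cluster prod-ceph --target-version squid
# drains one OSD at a time, upgrades it, waits for recovery, then continues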
Continuous drive health checks across every HDD and NVMe in your clusters. Predictive failure warnings surface before a drive impacts cluster availability.
RBD block, CephFS distributed filesystem, Object Gateway (S3-compatible), and NVMe-oF targets — provisioned and managed from a single interface.
Per-OSD IOPS, throughput, and latency retained for 30 days. Centralized log search across all nodes. Webhook alerts to Slack, PagerDuty, or any HTTP endpoint.
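Wiring up a webhook channel might look like this sketch; the subcommand, flags, and event names are illustrative:

$ nf alert create --type webhook \
    --endpoint https://hooks.example.com/nodefoundry \
    --events health-warn,drive-failure-predicted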
Built by Ceph operators
Our founding team ran Ceph clusters for major cloud providers and infrastructure-intensive businesses long before NodeFoundry existed. We've triaged corrupt placement groups at 2 AM, recovered from OSD cascades, and upgraded from Luminous to Squid on live production clusters.
NodeFoundry is the tooling we always wished existed — built with the operational depth that only comes from years of running Ceph in anger.
Professional plan customers get guaranteed response times and direct access to our engineering team for incident support.
Enterprise customers get a dedicated customer success engineer who knows your cluster configuration and is on-call for escalations.
Moving from cephadm or a self-managed cluster? Our team will help you plan and execute the migration with zero downtime.
Before you deploy, our team will review your hardware topology, network design, and failure domain strategy to catch issues early.
Free to start
NodeFoundry is free for up to 3 nodes — no credit card, no time limit. Follow the quickstart guide to install the master node and deploy your first Ceph cluster from the dashboard or CLI.
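A first session might look like the sketch below; the installer URL is a placeholder, so follow the actual quickstart guide:

# Install the master node (placeholder URL)
$ curl -fsSL https://get.nodefoundry.example/install.sh | sudo bash
# Define the provisioning network, then power on your servers
$ nf network create --name stor-net --range 10.10.1.0/24 --gateway 10.10.1.1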
cephadm manages a Ceph cluster you've already provisioned. NodeFoundry is an appliance platform — it owns the full lifecycle from bare metal. NodeFoundry's differentiator is an API-first control plane: every action in the UI is available over REST, making it composable with Terraform, Ansible, or your existing runbooks.
Yes. Professional and Enterprise plans include multi-cluster management. All clusters appear in a single dashboard with a quick-switch selector. Each cluster has independent health status, nodes, pools, and alert configuration. The REST API namespaces all resources by cluster ID, making cross-cluster automation straightforward.
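As a sketch, the same pool listing against two clusters differs only by the cluster ID in the path; host, path, and auth header are illustrative:

$ curl -s -H "Authorization: Bearer $NF_TOKEN" \
    https://nf-master.example.com/api/v1/clusters/prod-ceph/pools
$ curl -s -H "Authorization: Bearer $NF_TOKEN" \
    https://nf-master.example.com/api/v1/clusters/dr-site/pools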
Yes. You can run the master node in active-passive configuration with automatic failover. Ceph clusters continue serving I/O regardless — NodeFoundry is in the control plane, not the data path. Replicated Ceph monitors on three or more nodes ensure cluster health survives any single node failure.
Your cluster keeps running. NodeFoundry sits in the control plane, not the data path — Ceph continues serving I/O regardless of whether the NodeFoundry master is reachable. You can manage the cluster directly with cephadm or the Ceph CLI.
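For example, the standard Ceph commands keep working against the cluster directly:

$ ceph -s              # overall status and health summary
$ ceph health detail   # specifics behind any warning
$ ceph osd tree        # OSD layout by failure domain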
Any x86_64 server with BIOS or UEFI PXE support. We test against Dell PowerEdge, HP ProLiant, Supermicro, and common whitebox configurations across 1 GbE, 10 GbE, 25 GbE, and 100 GbE. The hardware compatibility list is in the docs.
Yes. Enterprise deployments run the master node entirely within your network. No telemetry leaves your infrastructure. OS images, Ceph packages, and firmware are hosted on the master and served internally.
Free for up to 3 nodes. No credit card. Your first cluster deploys in minutes.