OPCP - Ceph RBD Block Storage - Performance, Resilience and Scalability with OpenStack




On-Prem Cloud Platform · 12.11.2025

Please note that this information may change prior to General Availability (GA). Our current estimate for GA is March 2026. We will make every effort to keep you informed of any updates, but we recommend checking this guide regularly for the latest information. Thank you for your understanding.

Objective

This guide presents the Block Storage architecture implemented in OPCP, based on OpenStack. It explains how persistent storage for virtual machines (VMs) is ensured by Ceph RADOS Block Device (RBD), an open-source, distributed and highly scalable solution designed to meet the needs of modern enterprises.

The objective is to show how this storage layer guarantees performance, resilience and scalability, allowing teams to focus on application development and deployment without being limited by the infrastructure.

1. Performance and Scalability: Designed to evolve

Unlike traditional storage solutions, the Ceph infrastructure distributes data and workloads across all physical nodes in the cluster.

This approach eliminates bottlenecks and allows for linear scalability suitable for demanding environments.

| Feature | Customer benefit |
| --- | --- |
| Distributed I/O | Elimination of bottlenecks: read and write operations are distributed simultaneously across many devices. This distribution ensures high throughput and low, constant latency, essential for critical applications and databases. |
| Horizontal scalability | Growth without interruption: storage capacity is increased simply by adding standard servers to the cluster. The system automatically integrates the new hardware and rebalances data in the background, without causing any downtime. |
| Dynamic provisioning | Optimized efficiency: volumes are allocated instantly but only consume physical space when data is actually written. This approach significantly improves the utilization and efficiency of the physical storage pool. |
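Dynamic (thin) provisioning can be illustrated with a minimal sketch. This is not Ceph code; the `ThinPool` class and its methods are hypothetical, and only model the accounting idea: a volume's virtual size is reserved logically, while physical pool space is consumed only as data is written.

```python
# Hypothetical sketch of thin provisioning accounting (not the Ceph
# implementation): virtual size is reserved logically, physical space
# is consumed only when data is actually written.

class ThinPool:
    def __init__(self, physical_capacity_gb):
        self.physical_capacity_gb = physical_capacity_gb
        self.physical_used_gb = 0
        self.volumes = {}  # name -> [virtual_size_gb, written_gb]

    def create_volume(self, name, virtual_size_gb):
        # Allocation is instant: no physical space is consumed yet.
        self.volumes[name] = [virtual_size_gb, 0]

    def write(self, name, gb):
        virtual, written = self.volumes[name]
        new_written = min(virtual, written + gb)
        delta = new_written - written
        if self.physical_used_gb + delta > self.physical_capacity_gb:
            raise RuntimeError("pool out of physical space")
        self.volumes[name][1] = new_written
        self.physical_used_gb += delta

    def provisioned_gb(self):
        return sum(v[0] for v in self.volumes.values())

pool = ThinPool(physical_capacity_gb=1000)
pool.create_volume("db", 500)
pool.create_volume("web", 500)
pool.create_volume("logs", 500)   # 1500 GB provisioned on a 1000 GB pool
pool.write("db", 120)
print(pool.provisioned_gb(), pool.physical_used_gb)  # 1500 120
```

Note how the pool can be over-provisioned (1500 GB of virtual volumes on 1000 GB of physical capacity) while physical usage tracks only the data actually written.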

2. Resilience and data protection

The fundamental architecture of Ceph is designed to offer maximum fault tolerance, ensuring that hardware failures do not cause service interruptions or data loss.

Automatic replication (High availability):

  • Each piece of data is automatically copied and stored on multiple distinct disks and physical servers (typically 3 copies).
  • In the event of a disk or server failure, virtual machines continue to operate normally, accessing the remaining copies without interruption.
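The capacity cost of replication can be sketched with simple arithmetic, assuming the typical 3-copy policy mentioned above (the function names here are illustrative, not part of any Ceph API):

```python
# With size=3 replication, each logical byte is stored on three
# distinct OSDs, so usable capacity is raw capacity / replicas.

def usable_capacity(raw_tb, replicas=3):
    return raw_tb / replicas

# Copies still readable after a given number of disk/server failures.
def surviving_copies(replicas, failed):
    return max(replicas - failed, 0)

print(usable_capacity(300))       # 100.0 TB usable from 300 TB raw
print(surviving_copies(3, 1))     # 2 copies remain: VMs keep running
```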

Self-healing:

  • When a device fails, Ceph immediately detects the anomaly and replicates the lost data onto the remaining healthy devices.
  • This process automatically restores the defined data protection level.
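The self-healing behaviour can be simulated in a few lines. This is a toy model, not Ceph's actual CRUSH-based recovery: objects lose a replica when an OSD fails, and the `heal` function (hypothetical) re-copies each under-replicated object to a healthy OSD until the replication factor is restored.

```python
# Toy simulation of self-healing: after an OSD failure, every object
# that lost a replica is re-replicated onto a healthy OSD until the
# replication factor (here 3) is restored.

def heal(placement, failed_osd, healthy_osds, replicas=3):
    for obj, osds in placement.items():
        osds.discard(failed_osd)
        candidates = [o for o in healthy_osds if o not in osds]
        while len(osds) < replicas and candidates:
            osds.add(candidates.pop())
    return placement

placement = {
    "obj-1": {"osd.0", "osd.1", "osd.2"},
    "obj-2": {"osd.1", "osd.3", "osd.4"},
}
heal(placement, failed_osd="osd.1", healthy_osds=["osd.5", "osd.6"])

# Protection level restored: every object is back to 3 replicas,
# none of them on the failed device.
assert all(len(s) == 3 for s in placement.values())
assert all("osd.1" not in s for s in placement.values())
```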

No single point of failure:

  • The system operates without a central controller or proprietary hardware component, ensuring real redundancy and maximum availability for cloud services.

3. Technical specifications and limitations

This section details the main technical features and compatibility requirements for users deploying applications on the OVHcloud OPCP platform.

A. Compatibility and access

| Element | Detail | Description |
| --- | --- | --- |
| Storage type | Block storage | Presented to the VM as a standard SCSI or virtio block device, equivalent to a physical disk for optimal performance. |
| POSIX compatibility | Yes (via guest OS) | Ceph RBD provides the raw block device; the guest OS file system (XFS, ext4, NTFS, etc.) ensures POSIX compliance. |
| Network file system | No | This is not shared NFS or SMB storage. Each volume is dedicated to a single VM at a time for maximum performance. |

B. Limitations and sizing

| Metric | Value/Default | Notes |
| --- | --- | --- |
| Maximum volume size | 16 TB | Limit to ensure optimal performance and ease of management under OpenStack. Custom pools can support larger sizes on request. |
| Minimum volume size | 1 GB | Smallest deployable volume unit via Cinder. |
| Volume availability | Instant | Volume allocation and attachment are performed immediately via the OpenStack API or dashboard. |
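A pre-flight check against these limits can be expressed as a short helper. The function below is illustrative (not part of the OpenStack API); it simply encodes the 1 GB minimum and 16 TB maximum from the table:

```python
# Illustrative size check against the platform limits above
# (1 GB minimum, 16 TB maximum; larger sizes need a custom pool).

MIN_GB = 1
MAX_GB = 16 * 1024  # 16 TB expressed in GB

def validate_volume_size(size_gb):
    if size_gb < MIN_GB:
        raise ValueError(f"minimum volume size is {MIN_GB} GB")
    if size_gb > MAX_GB:
        raise ValueError("volumes above 16 TB require a custom pool")
    return size_gb

print(validate_volume_size(500))  # 500: within limits
```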

C. Performance and quality of service (QoS)

The platform offers several performance levels via Cinder volume types (Storage Classes), with guaranteed I/O limits enforced by the Ceph RBD QoS system:

| Storage class | IOPS | Throughput | Use case |
| --- | --- | --- | --- |
| Distributed storage (Standard) | 1,000 / 1,000 | 100 MB/s | General VMs, web servers, test and development environments. |
| Distributed storage (High) | 3,000 / 3,000 | 200 MB/s | Databases, analytics engines, high-traffic or latency-sensitive applications. |
| Distributed storage (Insane) | 7,500 / 7,500 | 300 MB/s | Extreme workloads, critical transactional systems, deep learning infrastructure. |

Note: these thresholds are enforced by the Ceph QoS system. Volumes may sometimes exceed these limits if system resources allow, but the guarantees ensure predictable performance, even during high demand periods.
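Rate limits of this kind are commonly enforced with a token bucket. The sketch below is a simplified model in that spirit, not the actual RBD QoS implementation: the bucket refills at the guaranteed rate, each I/O consumes one token, and demand above the limit is throttled.

```python
# Simplified token-bucket sketch of an IOPS limit (illustrative only,
# not the RBD QoS implementation): tokens refill at the guaranteed
# rate; each I/O consumes one token; excess demand is throttled.

class IopsLimiter:
    def __init__(self, iops_limit, burst=0):
        self.rate = iops_limit
        self.capacity = iops_limit + burst
        self.tokens = float(self.capacity)

    def tick(self, seconds=1.0):
        # Refill at the guaranteed rate, never beyond capacity.
        self.tokens = min(self.capacity, self.tokens + self.rate * seconds)

    def try_io(self, n=1):
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

limiter = IopsLimiter(iops_limit=1000)  # "Standard" class
done = sum(limiter.try_io() for _ in range(1500))
print(done)  # 1000: the 500 requests above the limit are throttled
```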

4. Advanced data management features

The Ceph RBD backend offers advanced tools for volume lifecycle management, simplifying and accelerating operations for cloud users.

  • Instant snapshots: Creation of space-efficient point-in-time copies of any volume, allowing for quick rollback or the creation of new environments in seconds.
  • Fast cloning (Copy-on-Write): Rapid provisioning of new read/write volumes from snapshots or existing images. The CoW (Copy-on-Write) technology saves disk space and accelerates VM deployment.
  • Hot volume resizing: Increasing the size of a volume attached to a running VM, without shutting down or restarting the VM.
  • Support for live migration: The distributed storage allows for non-disruptive migration of active virtual machines between compute nodes.
  • Boot from volume: Direct launch of a VM from a Cinder volume, ensuring that the root disk immediately benefits from all the high availability and advanced features of Ceph RBD.
  • Disaster recovery ready (RBD Mirroring): The underlying Ceph architecture supports asynchronous volume mirroring to a remote Ceph cluster, offering a robust solution for Disaster Recovery Planning (DRP).
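The copy-on-write cloning from the list above can be sketched as follows. This is a hypothetical model (the `Volume` class is not a Ceph or Cinder API): a clone initially shares every block with its parent snapshot, and only blocks the clone overwrites get a private copy, which is why cloning is near-instant and space-efficient.

```python
# Hypothetical copy-on-write model: a clone reads shared blocks from
# its parent until it writes them, at which point it gets private copies.

class Volume:
    def __init__(self, blocks=None, parent=None):
        self.blocks = blocks if blocks is not None else {}
        self.parent = parent  # snapshot this clone was created from

    def read(self, idx):
        if idx in self.blocks:
            return self.blocks[idx]
        if self.parent is not None:
            return self.parent.read(idx)  # fall back to shared block
        return b"\0"

    def write(self, idx, data):
        self.blocks[idx] = data  # private copy created only on write

    def clone(self):
        return Volume(parent=self)  # instant: no blocks copied

snap = Volume({0: b"boot", 1: b"data"})  # treated as immutable snapshot
vm_disk = snap.clone()
assert vm_disk.read(0) == b"boot"   # shared with the parent
assert len(vm_disk.blocks) == 0     # clone consumes no space yet
vm_disk.write(1, b"new")
assert vm_disk.read(1) == b"new" and snap.read(1) == b"data"
```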

Go further

If you need training or technical assistance implementing our solutions, contact your sales representative or request a quote to have your project analyzed by our Professional Services experts.

Join our community of users.
