OPCP - How to configure a software RAID on a node

Knowledge Base | On-Prem Cloud Platform | 06.02.2026

Objective

This guide explains how to configure and manage a software RAID on an OpenStack Ironic baremetal node in your OPCP environment.

Software RAID allows you to create a redundancy configuration at the software level, without requiring a dedicated hardware RAID controller. This solution is particularly useful to improve the availability and storage performance of your instances.

This guide covers:

  • Configuring software RAID via the Ironic agent interface
  • Verifying the RAID configuration
  • Best practices for managing software RAID

Software RAID must be configured before deploying an instance on the node. Once an instance has been deployed, changing the RAID configuration requires deleting the instance and reconfiguring the node. Creating or modifying a RAID array erases the data present on the disks used.

Monitoring of the software RAID is the responsibility of the end customer: we do not have access to the deployed instance to manage monitoring. You should configure alerting (for example using the mdadm --monitor command) and integrate this monitoring into your existing monitoring tools.
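Inside the deployed instance, mdadm's monitor mode can send mail alerts on failure events. A minimal sketch, assuming a Debian/Ubuntu-style image (the configuration path, service name, and the ops@example.com address are illustrative assumptions; adjust for your distribution and mail setup):

```shell
# /etc/mdadm/mdadm.conf (Debian-style path; RHEL-family images use /etc/mdadm.conf):
#   MAILADDR ops@example.com    # hypothetical address; mdadm --monitor mails alerts here
#
# Run the monitor in the background. Most distributions start this
# automatically via an mdmonitor/mdadm service rather than by hand:
mdadm --monitor --scan --daemonise --mail ops@example.com
```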

Requirements

Before you begin, make sure you have the following:

  • An active OPCP service.
  • A configured OpenStack CLI with the required permissions (clouds.yaml or environment variables).
  • The admin role and/or nodes transferred into your project.
  • An available node (status available) or a node in maintenance mode.
  • A Linux (GNU/Linux) system image for the instance. The image must include the mdadm package or allow it to be installed. VMware appliances and Windows operating systems, for example, are not compatible with this procedure.
  • Basic knowledge of OpenStack Ironic and baremetal node management.

Why use software RAID?

Software RAID provides several benefits in an OPCP environment:

  • Data redundancy: Protection against data loss in the event of a disk failure.
  • Performance improvement: Distribution of read/write operations across multiple disks.
  • Lower cost: No need for a dedicated hardware RAID controller.

Instructions

1. Check the disks available on the node

Before configuring RAID, you must identify the disks available on your node.

List available nodes:

openstack baremetal node list

Check the node hardware properties:

openstack baremetal node show <node-id>
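The disk inventory appears under the node's properties. The snippet below narrows the output to those properties with jq (already used later in this guide); the canned response only illustrates the filter, and field names such as local_gb come from Ironic inspection, so treat them as assumptions for your environment:

```shell
# On a real node you would run:
#   openstack baremetal node show <node-id> -f json | jq '.properties'
# Canned sample response, to show what the filter extracts:
sample='{"properties":{"local_gb":931,"cpus":32,"memory_mb":65536}}'
echo "$sample" | jq -r '.properties.local_gb'
```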

2. Enable maintenance mode

Before any configuration change, put the node in maintenance mode:

openstack baremetal node maintenance set <node-id> --reason "Software RAID configuration"

3. Supported RAID level

Ironic itself supports several software RAID levels, but for this procedure only the following value is accepted in the JSON configuration:

RAID level   JSON value   Description   Minimum number of disks
RAID 1       "1"          Mirroring     2

Important constraint: The first logical disk, the one with is_root_volume: true, must use RAID 1. Other RAID levels (RAID 0, 5, 6, 10, etc.) are not allowed for the root volume.

In practice, RAID 1 is therefore the only level you can use for the instance deployment.

4. Configure software RAID

Ironic allows you to configure software RAID via the agent interface. This configuration is applied automatically when an instance is deployed.

4.1. Check and enable the agent interface if necessary

Before configuring RAID, check the RAID interface currently configured on the node:

openstack baremetal node show <node-id> -f json | jq '.raid_interface'

If the output is null or different from "agent", enable the agent RAID interface:

openstack baremetal node set <node-id> --raid-interface=agent

4.2. Create the RAID configuration file

Create a JSON file containing the desired RAID configuration. The following example creates a RAID 1 (mirroring) array with two disks:

cat > /tmp/raid1.json <<EOF
{
  "logical_disks": [{
    "controller": "software",
    "size_gb": "MAX",
    "raid_level": "1",
    "is_root_volume": true,
    "physical_disks": [
      {"size": "<1000"},
      {"size": "<1000"}
    ]
  }]
}
EOF

The "size": "<1000" parameter in physical_disks automatically selects disks smaller than 1000 GB. You can adjust this value according to the size of your disks or use other selection criteria.
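Besides "size", the physical_disks entries accept other device-hint style selectors (for example "name" or "rotational"). The exact set of supported hints depends on your Ironic version, so treat the following variant as an illustrative sketch rather than a guaranteed interface:

```json
{
  "logical_disks": [{
    "controller": "software",
    "size_gb": "MAX",
    "raid_level": "1",
    "is_root_volume": true,
    "physical_disks": [
      {"name": "/dev/sda"},
      {"name": "/dev/sdb"}
    ]
  }]
}
```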

4.3. Apply the RAID configuration

Once the configuration file has been created, apply it to the node:

openstack baremetal node set <node-id> --target-raid-config /tmp/raid1.json
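A quick structural check with jq (an optional sketch, assuming jq is installed) can confirm that the root volume is RAID 1 before Ironic rejects the configuration at deploy time:

```shell
# Sanity-check /tmp/raid1.json: the root logical disk must be RAID 1.
# jq -e exits non-zero (and prints "false") if the constraint is violated.
jq -e '.logical_disks[0].raid_level == "1" and .logical_disks[0].is_root_volume == true' /tmp/raid1.json
```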

4.4. Check the RAID configuration

To check the RAID configuration applied on a node:

openstack baremetal node show <node-id> -f json | jq '.target_raid_config'

5. Disable maintenance mode

Once the RAID configuration is complete, disable maintenance mode:

openstack baremetal node maintenance unset <node-id>

6. Deploy an instance on the configured node

Once RAID has been configured, you can deploy an instance on the node:

openstack server create \
  --image <image-name> \
  --flavor <flavor-id> \
  --key-name <keypair-name> \
  --nic net-id=<network-id> \
  --availability-zone "nova::<node-id>" \
  <instance-name>

To make sure that your instance is deployed on the node configured with RAID, use the availability zone nova::<node-id>.

7. Check the RAID configuration after deployment

Once the instance is deployed, you can check the RAID configuration from within the instance:

Check RAID devices:

cat /proc/mdstat

Example output:

Personalities : [raid1] [raid10] [linear] [multipath] [raid0] [raid6] [raid5] [raid4]
md0 : active raid1 sda[0] sdb[1]
      488254464 blocks super 1.2 [2/2] [UU]

unused devices: <none>

Check detailed information for a RAID device:

mdadm --detail /dev/md0
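In /proc/mdstat, a healthy two-disk RAID 1 member list reads [UU]; an underscore ([U_] or [_U]) marks a failed or missing member. The function below is a minimal sketch you could adapt for your own monitoring, not an official tool; the demo runs on a canned sample so it does not require a RAID node:

```shell
# Report whether any array in an mdstat-format file is degraded.
check_mdstat() {
  # $1: path to an mdstat-format file (normally /proc/mdstat)
  if grep -qE '\[[U_]*_[U_]*\]' "$1"; then
    echo "DEGRADED"
  else
    echo "OK"
  fi
}

# Demo on a canned healthy sample; on the instance, run: check_mdstat /proc/mdstat
printf 'md0 : active raid1 sda[0] sdb[1]\n      488254464 blocks super 1.2 [2/2] [UU]\n' > /tmp/mdstat.sample
check_mdstat /tmp/mdstat.sample
```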

Summary of main commands

  • List nodes: openstack baremetal node list
  • Enable maintenance mode: openstack baremetal node maintenance set <node-id>
  • Check RAID interface: openstack baremetal node show <node-id> -f json | jq '.raid_interface'
  • Enable agent RAID interface: openstack baremetal node set <node-id> --raid-interface=agent
  • Apply RAID configuration: openstack baremetal node set <node-id> --target-raid-config /tmp/raid1.json
  • Check RAID configuration: openstack baremetal node show <node-id> -f json | jq '.target_raid_config'
  • Disable maintenance mode: openstack baremetal node maintenance unset <node-id>
  • Deploy an instance: openstack server create --image <image-name> --flavor <flavor-id> --key-name <keypair-name> --nic net-id=<network-id> --availability-zone "nova::<node-id>" <instance-name>
  • Check RAID status (from the instance): cat /proc/mdstat

Best practices

  • Always configure RAID before deployment: The configuration must be done on a node with no active instance.
  • Use maintenance mode: Always put the node in maintenance before making any changes.
  • Regular backups: RAID is not a backup solution; make regular backups of your data.
  • Monitor the RAID: Set up RAID monitoring (for example with mdadm --monitor) and integrate alerts into your monitoring tools to quickly detect any degradation or disk failure.

Limitations and considerations

  • Performance: Software RAID can have an impact on CPU performance compared to hardware RAID.
  • Compatibility: Not all operating systems support all software RAID levels.
  • Rebuild: Rebuilding a RAID array after replacing a disk can take a long time and consume resources.

Troubleshooting

Error: Driver redfish does not support raid (disabled or not implemented). (HTTP 404)
Cause: The RAID interface is not configured as agent.
Solution: See section 4.1, "Check and enable the agent interface if necessary".

Error: Software RAID Configuration requires RAID-1 for the first logical disk
Cause: The first logical disk with is_root_volume: true must be RAID 1.
Solution: Use RAID 1 for the root volume; see section 3, "Supported RAID level".

Go further

If you need training or technical assistance to implement our solutions, please contact your sales representative to request a quote and have your project reviewed by our Professional Services experts.
