Configuring multi-attach persistent volumes with Enterprise File Storage
Objective
OVHcloud Managed Kubernetes currently offers Block Storage for persistent volumes by default, but that may not be suited for applications that require a shared file system between multiple nodes. This tutorial shows how to configure a shared Kubernetes Persistent Volume (AccessMode ReadWriteMany or RWX) using OVHcloud Enterprise File Storage as a storage backend.
Requirements
This tutorial assumes that you already have a working OVHcloud Managed Kubernetes cluster, and some basic knowledge of how to operate it. If you want to know more about these topics, please have a look at the deploying a Hello World application documentation.
It also assumes you have an OVHcloud Enterprise File Storage already available. If you don't, you can order one in the OVHcloud Control Panel.
You also need to have Helm installed on your workstation. Please refer to the How to install Helm on OVHcloud Managed Kubernetes Service tutorial.
Instructions
Step 1 - Creating a volume and granting your Managed Kubernetes Service access to it
Your Enterprise File Storage service can expose multiple volumes, and supports a variety of protocols. Each volume is accessible only from a specific range of IPs. We will create a new EFS volume and make it accessible from your Kubernetes worker nodes.
You can find more information about our Enterprise File Storage product in its dedicated documentation.
Access the UI for OVHcloud Enterprise File Storage by clicking the Storage and backups then Enterprise File Storage menu in the Bare Metal Cloud section of the OVHcloud Control Panel.
Click your Enterprise File Storage service, then open the Volumes tab and click the Create a volume button.

Provide the following parameters to create a volume:
| Name | Description | Required |
|---|---|---|
| Volume name | Name of the volume | False |
| Volume description | Description of the volume | False |
| Protocol | Protocol used to connect | True |
| Volume size | Size of the volume | True |
Adapt the volume size to your needs. For this guide, we use a volume size of 100 GiB.
Once your volume is created, click on its ID and select Access Control List.
Enter your Nodes' public IPs and/or your Public Cloud Gateway public IP into the volume's ACLs. This will ensure your Kubernetes worker nodes can reach the storage service.
Your cluster is installed with a public network, or with a private network that does not use an OVHcloud Internet Gateway or a custom gateway as its default route
Once the volume is created, we need to allow our Kubernetes nodes to access it.
Get your Kubernetes nodes' IPs:
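A standard kubectl command is enough here; the EXTERNAL-IP column of the wide output shows each node's public IP:

```bash
kubectl get nodes -o wide
```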
Your cluster is installed with a private network and a default route via that private network (OVHcloud Internet Gateway/OpenStack router or a custom one)
Because your nodes are configured to be routed by the private network gateway, you need to add the gateway IP address to the ACLs.
When using a Public Cloud Gateway with our Managed Kubernetes Service, the public IPs on nodes are only used for management purposes (see the MKS known limits).
You can get your OVHcloud Internet Gateway's public IP by navigating through the OVHcloud Control Panel:
Public Cloud > Select your tenant > Network / Gateway > Public IP
You can also get your OVHcloud Internet Gateway's public IP via the following API:
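The gateway endpoints live under the /cloud/project section of the API; the route below is given as an indication only, so double-check the exact path and parameters in the API console:

```
GET /cloud/project/{serviceName}/region/{regionName}/gateway
```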
If you are not familiar with the OVHcloud API, read our First Steps with the OVHcloud API guide.
If you want to use your Kubernetes cluster to find your Gateway's public IP, you can run this command:
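A minimal way to do this, assuming your nodes have outbound internet access, is to start a throwaway pod based on an image that ships curl (the pod name and image below are only examples):

```bash
kubectl run gateway-ip-check --rm -it --restart=Never --image=curlimages/curl -- sh
```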
This command will create a temporary pod and open a console.
You may have to wait a bit to let the pod be created. Once the shell appears, you can run this command:
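Querying any external "what is my IP" service from inside the pod returns the address your traffic egresses from; ifconfig.me is used here as an example:

```bash
curl ifconfig.me
```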
The public IP of the Gateway you're using should appear.
Click on the Manage IP Access (ACL) menu of our newly created volume:

Add either the nodes' IPs one by one or the Gateway's public IP depending on your configuration:

You should now have something similar to this:

Step 2 - Configuring Kubernetes to use our newly created EFS volume
Your Kubernetes cluster needs an additional piece of software to make use of the Enterprise File Storage volume. We will install it and then create a first volume, shared across multiple pods.
To do so, you can install the csi-driver-nfs:
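The commands below follow the upstream csi-driver-nfs Helm chart installation procedure; the chart version is only an example, so pin it to the latest release listed in the project's documentation:

```bash
# Add the official csi-driver-nfs chart repository
helm repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
helm repo update

# Install the driver in the kube-system namespace (adjust the version if needed)
helm install csi-driver-nfs csi-driver-nfs/csi-driver-nfs --namespace kube-system --version v4.9.0
```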
Let's verify our installation:
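Assuming the release name used above, the controller and node pods should show up in the kube-system namespace:

```bash
kubectl get pods --namespace kube-system --selector="app.kubernetes.io/instance=csi-driver-nfs"
```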
Step 3 - Create the NFS StorageClass Object
Let's create an efs-storageclass.yaml file:
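A minimal StorageClass for the NFS CSI driver could look like this; the nfs-csi name is only an example, and the exact share path depends on how your volume is exported:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  # Private IP of your Enterprise File Storage service
  server: [EFS_IP]
  # Export path of your volume, for example [EFS_PATH]/[PARTITION_NAME] depending on your setup
  share: [EFS_PATH]
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  # Force the NFS mount to use TCP
  - tcp
```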
Don't forget to replace [EFS_IP], [EFS_PATH] and [PARTITION_NAME] with the correct information.
The EFS_IP is the private IP of your Enterprise File Storage and the EFS_PATH is the path to access your volume.
The tcp parameter instructs the NFS mount to use the TCP protocol.
Then apply the YAML file to create the StorageClass:
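```bash
kubectl apply -f efs-storageclass.yaml
```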
Step 4 - Create and use an NFS persistent volume
Let’s create an efs-persistent-volume-claim.yaml file:
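A sketch of the claim, assuming the StorageClass name used above (the claim name efs-pvc and the 10Gi request are examples, adjust them to your needs):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-pvc
spec:
  # RWX allows the volume to be mounted by pods on several nodes at once
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-csi
  resources:
    requests:
      storage: 10Gi
```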
And apply this to create the persistent volume claim:
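```bash
kubectl apply -f efs-persistent-volume-claim.yaml
```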
You can find more information about the PVC by running this command:
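Assuming the claim name used in the example above:

```bash
kubectl describe pvc efs-pvc
```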
The events on this PersistentVolumeClaim show that it has been provisioned from the Enterprise File Storage volume.
If the PVC stays in a Pending state or its events show mount or provisioning errors, it may indicate an issue with the Enterprise File Storage ACL configuration. Check the list of authorized IPs that can access the wanted volume in the ACLs.
If a large number of PersistentVolumes are provisioned simultaneously, you may experience some slowness and volume creation can be delayed.
Let’s now create a DaemonSet of Nginx pods using the persistent volume claim as their webroot folder.
Using a DaemonSet creates a pod on each deployed node, which makes troubleshooting easier in case of a misconfiguration and helps isolate a node-specific issue.
Let's create a file named nginx-daemonset.yaml:
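A possible manifest, assuming the PVC name used above: it mounts the shared volume as the Nginx webroot and exposes the pods through a NodePort Service (all names and labels are examples):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:stable
          ports:
            - containerPort: 80
          volumeMounts:
            # The shared EFS-backed volume becomes the Nginx webroot
            - name: efs-volume
              mountPath: /usr/share/nginx/html
      volumes:
        - name: efs-volume
          persistentVolumeClaim:
            claimName: efs-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  # Local keeps the traffic on the node you target, so each node serves its own pod
  externalTrafficPolicy: Local
  ports:
    - port: 80
      targetPort: 80
```

A NodePort Service with externalTrafficPolicy set to Local is used here so that each node answers with its local pod, which makes it easy to verify that both pods see the same data.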
And apply this to create the Nginx DaemonSet:
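```bash
kubectl apply -f nginx-daemonset.yaml
```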
Both pods should be running:
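Assuming the label used in the example manifest:

```bash
kubectl get pods -l app=nginx -o wide
```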
Let’s enter the first Nginx pod to check that the Enterprise File Storage volume is properly mounted, and create a file on the NFS persistent volume:
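One way to do this, reusing the app=nginx label from the example manifest, is to open a shell in the first pod:

```bash
kubectl exec -it $(kubectl get pods -l app=nginx -o jsonpath='{.items[0].metadata.name}') -- bash
```

Once inside the pod, you can confirm that the webroot is an NFS mount backed by your EFS volume:

```bash
df -h /usr/share/nginx/html
```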
Now, we will check if the EFS volume is properly shared between the deployed pods.
Create a new index.html file:
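Still inside the pod's shell, write a simple page into the shared webroot (the content is just an example):

```bash
echo "It works! Served from the shared Enterprise File Storage volume." > /usr/share/nginx/html/index.html
```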
And exit the Nginx container:
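```bash
exit
```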
Let’s try to access our new web page:
Generate the URL to open in your browser:
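Assuming the NodePort Service from the example manifest, the URL can be built from the first node's public IP and the allocated node port (if your nodes do not expose an ExternalIP, use the appropriate address type instead):

```bash
export NODE_PORT=$(kubectl get svc nginx-service -o jsonpath='{.spec.ports[0].nodePort}')
export NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="ExternalIP")].address}')
echo "http://$NODE_IP:$NODE_PORT"
```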
You can open the URL which is displayed to access the Nginx Service.
Now let’s try to see if the data is shared with the second pod (if you have more than one node deployed).
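Same idea, but targeting the second node this time:

```bash
export NODE_PORT=$(kubectl get svc nginx-service -o jsonpath='{.spec.ports[0].nodePort}')
export NODE_IP=$(kubectl get nodes -o jsonpath='{.items[1].status.addresses[?(@.type=="ExternalIP")].address}')
echo "http://$NODE_IP:$NODE_PORT"
```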
You can open the URL which is displayed to access the Nginx Service on the other pod.
As you can see the data is correctly shared between the two Nginx pods running on two different Kubernetes nodes.
Congratulations, you have successfully set up a multi-attach persistent volume with OVHcloud Enterprise File Storage!
Go further
To learn more about using your Kubernetes cluster the practical way, we invite you to look at our OVHcloud Managed Kubernetes doc site.
If you need training or technical assistance to implement our solutions, contact your sales representative or click on this link to get a quote and ask our Professional Services experts for a custom analysis of your project.
Join our community of users.