Backing up and restoring your Persistent Volume with Volume Snapshots on OVHcloud Managed Kubernetes
In this tutorial, we are using Kubernetes Volume Snapshots to back up and restore persistent volumes on an OVHcloud Managed Kubernetes cluster.
Volume Snapshots are a Kubernetes feature that reached General Availability (GA) in Kubernetes 1.20.
They provide the ability to create a “snapshot” of a persistent volume. A snapshot represents a point-in-time copy of a volume. A snapshot can be used either to rehydrate a new volume (pre-populated with the snapshot data) or to restore an existing volume to a previous state (represented by the snapshot).
Before you begin
This tutorial presupposes that you already have a working OVHcloud Managed Kubernetes cluster, and some basic knowledge of how to operate it. If you want to know more about these topics, please look at the OVHcloud Managed Kubernetes Service Quickstart.
The tutorial also assumes that you are familiar with Kubernetes Persistent Volumes. You also need to know how PVs are handled on the OVHcloud Managed Kubernetes service. Please refer to the Persistent Volumes on OVHcloud Managed Kubernetes guide.
Instructions
Setup
In this guide we are going to use a simple example: a small Nginx web server with a PersistentVolume.
Create a file named nginx-example-with-pv.yml with the following content:
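A manifest along these lines should work. The namespace name `nginx-example`, the PVC name `nginx-logs`, the `app=nginx-example` label, the mount path and the 1Gi size are illustrative choices for this example, not requirements:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-example
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-logs
  namespace: nginx-example
spec:
  storageClassName: csi-cinder-high-speed
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-example
  namespace: nginx-example
spec:
  replicas: 1
  strategy:
    type: Recreate   # kill existing pods before creating new ones (RWO volume)
  selector:
    matchLabels:
      app: nginx-example
  template:
    metadata:
      labels:
        app: nginx-example
    spec:
      containers:
        - name: nginx
          image: nginx:stable
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-logs
              mountPath: /var/log/nginx   # access logs are persisted on the PV
      volumes:
        - name: nginx-logs
          persistentVolumeClaim:
            claimName: nginx-logs
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-example
  namespace: nginx-example
spec:
  type: LoadBalancer
  selector:
    app: nginx-example
  ports:
    - port: 80
      targetPort: 80
```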
And apply it to the cluster:
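For example:

```bash
kubectl apply -f nginx-example-with-pv.yml
```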
If you look closely at the Deployment part of this manifest, you will see that we have defined a .spec.strategy.type. It specifies the strategy used to replace old pods with new ones, and we have set it to Recreate, so all existing pods are killed before new ones are created.
We do so because the Storage Class we are using, csi-cinder-high-speed, only supports the ReadWriteOnce access mode, so only one pod can write to the Persistent Volume at any given time.
Wait until you get an external IP:
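You can watch the Service until the Load Balancer gets an external IP (assuming the Service is named nginx-example in the nginx-example namespace, as in this example):

```bash
kubectl get service nginx-example -n nginx-example -w
```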
When you have a Load Balancer external IP, save it:
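One way to capture it in a shell variable (the Service and namespace names are the ones assumed in this example):

```bash
export NGINX_URL=$(kubectl get service nginx-example -n nginx-example \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
```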
And do some calls to the URL to generate some access logs:
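For instance, with curl:

```bash
for i in $(seq 1 10); do
  curl -s "http://${NGINX_URL}/" > /dev/null
done
```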
Now we need to connect to the pod to read the log file and verify that our logs are written.
First, get the name of the Nginx running pod:
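Assuming the pods carry the app=nginx-example label used in this example:

```bash
export POD_NAME=$(kubectl get pods -n nginx-example -l app=nginx-example \
  -o jsonpath='{.items[0].metadata.name}')
```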
And then connect to it and view your access logs:
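Assuming the Persistent Volume is mounted at /var/log/nginx, as in this example:

```bash
kubectl exec -n nginx-example "${POD_NAME}" -- cat /var/log/nginx/access.log
```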
Volume snapshots work with all storage classes, including LUKS encrypted volumes (csi-cinder-high-speed-luks, csi-cinder-classic-luks, csi-cinder-high-speed-gen2-luks). The encryption is transparently maintained during snapshot and restore operations.
You should have a result like this:
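The exact IP addresses, timestamps and user agent will differ in your environment, but the lines should follow the standard Nginx combined log format:

```
10.2.1.4 - - [01/Jan/2024:10:15:23 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/8.5.0" "-"
10.2.1.4 - - [01/Jan/2024:10:15:24 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/8.5.0" "-"
```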
Creating a Snapshot
Create a VolumeSnapshot in a nginx-example-snapshot.yml file with the following content:
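A minimal VolumeSnapshot could look like this. The snapshot name is an illustrative choice, and the VolumeSnapshotClass name csi-cinder-snapclass is an assumption: check the classes available on your cluster with `kubectl get volumesnapshotclass`:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: nginx-logs-snapshot
  namespace: nginx-example
spec:
  volumeSnapshotClassName: csi-cinder-snapclass  # assumed class name, verify on your cluster
  source:
    persistentVolumeClaimName: nginx-logs        # PVC name used in this example
```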
And apply it:
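For example:

```bash
kubectl apply -f nginx-example-snapshot.yml
```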
You should have a result like this:
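If the snapshot is named nginx-logs-snapshot, kubectl confirms the creation:

```
volumesnapshot.snapshot.storage.k8s.io/nginx-logs-snapshot created
```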
Simulate a disaster
Let’s simulate a disaster scenario, deleting the log files from the PVC:
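One way to do so, assuming the pod name is stored in ${POD_NAME} and the volume is mounted at /var/log/nginx as in this example:

```bash
kubectl exec -n nginx-example "${POD_NAME}" -- \
  rm /var/log/nginx/access.log /var/log/nginx/error.log
```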
You should have a result like this:
Restoring the Volume
To restore from a given snapshot, you need to delete the original PVC and then recreate it from the snapshot.
Downscale the deployment to 0 replicas and delete the original PVC:
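For example, with the names used in this example:

```bash
kubectl scale deployment nginx-example -n nginx-example --replicas=0
kubectl delete pvc nginx-logs -n nginx-example
```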
Then create a nginx-example-restore.yml file with the following content:
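The restored PVC references the snapshot through spec.dataSource; a sketch, reusing the names assumed in this example:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-logs
  namespace: nginx-example
spec:
  storageClassName: csi-cinder-high-speed
  dataSource:
    name: nginx-logs-snapshot       # the VolumeSnapshot created earlier
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                  # must be at least the size of the snapshotted volume
```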
And apply it:
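For example:

```bash
kubectl apply -f nginx-example-restore.yml
```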
Verify that the PVC is restored:
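For example:

```bash
kubectl get pvc -n nginx-example
```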
The volume should have a status equal to Bound. Now you can restore the deployment to its replica value of 1, and wait until the pod is again Running:
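For example:

```bash
kubectl scale deployment nginx-example -n nginx-example --replicas=1
kubectl get pods -n nginx-example -w
```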
Now you can verify that the access.log file is back and its content is still there:
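Since the pod was recreated, fetch its new name first (assuming the app=nginx-example label from this example):

```bash
export POD_NAME=$(kubectl get pods -n nginx-example -l app=nginx-example \
  -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n nginx-example "${POD_NAME}" -- cat /var/log/nginx/access.log
```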
You should have a result like this:
Clean-up
When you are done, you can clean up by deleting everything. Delete the nginx-example namespace:
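```bash
kubectl delete namespace nginx-example
```

Since the Deployment, Service, PVC and VolumeSnapshot are all namespaced resources, deleting the namespace removes them all.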
Go further
- If you need training or technical assistance to implement our solutions, contact your sales representative or click on this link to get a quote and ask our Professional Services experts to assist you with your specific use case.
- Join our community of users on https://community.ovh.com/en/.