How to use persistent storage

By default, Pods use ephemeral storage. If you delete the Pod, or it gets evicted for any reason (node failure, image update, application crash, ...), all the data written to disk is lost.

To avoid that, you can request a Persistent Volume and mount it into your containers.

Persistent Volumes are provided by our Ceph cluster, a networked data storage system. The data you store on Ceph will be persisted across your Pods, even if you restart or recreate them.

Requesting a Persistent Volume with a Persistent Volume Claim

You create a volume by creating a Persistent Volume Claim in Kubernetes. It describes the type of storage you want and the capacity you need. For example:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-volume
spec:
  storageClassName: csi-rbd-3-sc # Or remove this line to use the default
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi

Create it and check its status:

$ kubectl apply -f my-volume.yml
persistentvolumeclaim/my-volume created
$ kubectl get persistentvolumeclaim
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
my-volume   Bound    pvc-cc8b4d67-e9fc-4651-9a9e-f85304c3cbcd   100Gi      RWO            csi-rbd-3-sc   2s

If the "status" is "Bound", then a volume was successfully allocated and bound to that claim. You can now mount it into a Pod.

If the "status" stays "Pending", then no volume was bound to it. This can happen if you used the wrong storageClassName or accessModes, or if you requested too much capacity.

Mounting a persistent volume in a Pod

To use the Persistent Volume Claim, you attach it to a Pod and mount it at the desired location in the Pod's containers. You can also do this when creating a Deployment, Job, etc., by putting it in the Pod template section of the manifest.

For example, this is a Job with one container that mounts a persistent volume at /data:

apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
spec:
  template:
    spec:
      restartPolicy: Never
      volumes:
        # This section describes the Pod's volumes
        # Those volumes can be mounted by containers, below
        - name: persistent-data
          persistentVolumeClaim:
            claimName: my-volume
      containers:
        - name: experiment
          image: quay.io/researcher/important-experiment:v1
          volumeMounts:
            # This section indicates where to mount the Pod's volumes
            - name: persistent-data
              mountPath: /data
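
As a quick check (a sketch only; the Pod name below is a placeholder, since Job Pods get generated names), you can verify the mount once the Job's Pod is running:

$ kubectl apply -f my-job.yml
$ kubectl get pods
$ kubectl exec <my-job-pod-name> -- ls /data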

Available storage class names

Storage class            Supports ReadWriteMany   Backing storage      Description                                      Default quota
csi-rbd-3-sc (default)   no                       Hard drives          Ceph block device, replicated (3). Safe          20 TB
csi-rbd-3-ssd-sc         no                       Solid-state drives   Ceph block device, replicated (3). Safe, fast    200 GB
csi-cephfs-sc            yes                      Hard drives          Ceph filesystem, replicated (3).                 Contact us

Solid-state drive storage is faster but less plentiful. If you need more storage, please contact us.
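
For example, a claim on the solid-state class looks like the first manifest with a different storageClassName (the name my-fast-volume and the 50Gi size below are just placeholders, chosen to stay under the 200 GB default quota):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-fast-volume # Placeholder name
spec:
  storageClassName: csi-rbd-3-ssd-sc # Solid-state class from the table above
  accessModes:
    - ReadWriteOnce # This class does not support ReadWriteMany
  resources:
    requests:
      storage: 50Gi # Example size, within the 200 GB default quota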