
Need for "ReadWriteMany" access on storage when deploying on Kubernetes?


    • Type: Bug
    • Resolution: Done
    • Priority: High
    • Affects Version: El Alto Release
    • Fix Version: El Alto Release


      Today, when deploying Multicloud with OOM, one of the PersistentVolumeClaims needs the "ReadWriteMany" (or "RWX") capability:


      $ kubectl get pvc -n onap | grep multicloud
      onap-multicloud-multicloud-k8s-etcd-data-onap-multicloud-multicloud-k8s-etcd-0   Bound     onap-multicloud-multicloud-k8s-etcd-data-0      1Gi        RWO            onap-multicloud-multicloud-k8s-etcd-data    3h27m
      onap-multicloud-multicloud-k8s-mongo-data                                        Bound     onap-multicloud-multicloud-k8s-mongo-data       1Gi        RWX            onap-multicloud-multicloud-k8s-mongo-data   3h27m


      According to Kubernetes Documentation (https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes), ReadWriteMany stands for "the volume can be mounted as read-write by many nodes".
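      As a concrete illustration, the access mode is requested in the PVC spec. This is a minimal hypothetical manifest, not one of the actual OOM templates; the name and size are made up:

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: example-rwx-claim        # hypothetical name, for illustration only
      spec:
        accessModes:
          - ReadWriteMany              # RWX: requires a storage driver that supports it
        resources:
          requests:
            storage: 1Gi

      If the bound StorageClass's provisioner does not support RWX, a claim like this stays Pending.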

      That means that a particular PVC needs to be read and written from many pods running on different nodes. It also means that your code must take this into account and avoid writing to the same place at the same time.

      An issue with RWX mode is that most "official" Kubernetes storage drivers do not support it (13 of the 19 drivers lack support, notably the OpenStack, Amazon and Google storage classes).

      This makes it very hard to use Multicloud in many environments.


      If the need is that the volume can be mounted for reading by many nodes and for writing by one node (which seems to be the case), RWO (ReadWriteOnce) should be sufficient, and it is supported by all drivers.
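      In OOM this would typically be a one-line change in the chart's persistence settings. A sketch of the override, assuming the multicloud chart follows the common OOM persistence layout (the exact key path may differ):

      # values override (key names are illustrative)
      persistence:
        accessMode: ReadWriteOnce   # was ReadWriteMany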


      In the effort to deploy ONAP for gating on public clouds (one of the priorities of El Alto), using storage classes (and, if possible, RWO access) is very important.


      Can you verify that RWO is sufficient? If so, I can submit a Gerrit review to move your access needs to RWO.

            Assignee: kirankamineni
            Reporter: sdesbure
            Votes: 0
            Watchers: 5