External API Framework / EXTAPI-305

No Need for "ReadWriteMany" access on storage when deploying on Kubernetes

      Hello,

Today, when deploying NBI with OOM, the PersistentVolumeClaims request the "ReadWriteMany" (or "RWX") access mode:

      $ kubectl get pvc -n onap | grep nbi
      onap-nbi-nbi-mariadb                                                             Bound     onap-nbi-nbi-mariadb                            2Gi        RWX                                                        3h30m
      onap-nbi-nbi-mongo-data                                                          Bound     onap-nbi-nbi-mongo-data                         1Gi        RWX            onap-nbi-nbi-mongo-data                     3h30m

According to the Kubernetes documentation (https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes), ReadWriteMany means "the volume can be mounted as read-write by many nodes".

That means the PVC is expected to be read and written from many pods, potentially scheduled on different nodes. It also means the application code has to take that into account and coordinate writes so that two pods never write to the same place at the same time.
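
For illustration, here is a minimal sketch of a claim requesting RWX, close to what the NBI charts currently produce (name and size are taken from the output above; the real OOM templates are more elaborate):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: onap-nbi-nbi-mongo-data
  namespace: onap
spec:
  accessModes:
    - ReadWriteMany   # "RWX": requires a storage driver that supports multi-node read-write
  resources:
    requests:
      storage: 1Gi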

One issue with the RWX mode is that most "official" Kubernetes storage drivers do not support it (13 of the 19 drivers do not, notably the OpenStack, Amazon, and Google storage classes).

As a result, it is very hard to use NBI in many environments.

If the real need is only that each volume be mounted read-write by a single node (which seems to be the case here, since each database has its own dedicated volume), RWO (ReadWriteOnce) should be sufficient, and it is supported by all drivers.
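
A minimal sketch of the proposed change: the same claim as above, simply requesting ReadWriteOnce instead, which every driver supports:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: onap-nbi-nbi-mongo-data
  namespace: onap
spec:
  accessModes:
    - ReadWriteOnce   # "RWO": read-write, mounted by a single node
  resources:
    requests:
      storage: 1Gi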

In the effort to deploy ONAP for gating on public clouds (one of the priorities of El Alto), using storage classes (and, if possible, RWO access) is very important.
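
For example, a claim relying on dynamic provisioning through a storage class could look like the sketch below. The class name "csi-cinder" is only an example and depends on the target cloud; it is not taken from the actual OOM charts:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: onap-nbi-nbi-mariadb
  namespace: onap
spec:
  storageClassName: csi-cinder   # example only: the class name depends on the cloud provider
  accessModes:
    - ReadWriteOnce              # works with the dynamic provisioners on OpenStack, AWS and GCP
  resources:
    requests:
      storage: 2Gi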

Assignee: sdesbure
Reporter: sdesbure
Votes: 0
Watchers: 2
