ONAP Operations Manager / OOM-1138

Work with Rancher Labs to automatically enable the --max-pods kubelet config for when we reach 300+ pods per 3-node cluster


    • Type: Bug
    • Resolution: Done
    • Priority: Low
    • Fix Version: Dublin Release
    • Affects Version: Casablanca Release
    • Labels: None

      We hit the Kubernetes default limit of 110 pods per node a couple of months ago. Although we recommend clustering, we should have an option to run a cluster node with more than 110 pods. We are currently at 210 pods; when we reach 300+ on a 3-node system we will need this.

      The setting is on the kubelet. I spent about 10 minutes researching how to add this as a config option to the rancher-agent docker, but could not yet find an automated way to pass the parameter in when bringing up the docker (see the sketch after the links below).
      https://lists.onap.org/pipermail/onap-discuss/2018-June/010246.html
      https://github.com/kubernetes/kubernetes/issues/23349
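
      For context, the end state we need is for the kubelet launched by the rancher-agent to carry the flag below; how to inject it automatically at docker bring-up is the open question. This is a sketch only, using the 500-pod value proposed for this ticket:

      kubelet --max-pods=500 <existing kubelet flags>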

      We met with Rancher Labs today and are adding this one to the label set.
      Our collaboration ticket:
      https://github.com/rancher/rancher/issues/13962

      kubectl describe node
      
      Capacity:
       cpu:     16
       memory:  125827328Ki
       pods:    110
      Allocatable:
       cpu:     16
       memory:  125724928Ki
       pods:    110
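
      A quicker check that prints just the per-node pod capacity (assuming kubectl is already pointed at the cluster):

      kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.pods}{"\n"}{end}'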
      
      

      Procedure:
      https://wiki.onap.org/display/DW/Cloud+Native+Deployment#CloudNativeDeployment-Changemax-podsfromdefault110podlimit
      Manual procedure: change the Kubernetes template (1pt2) before using it to create an environment (1a7).

      Add --max-pods=500 to the "Additional Kubelet Flags" box on the v1.10.13 version of the Kubernetes template, reached from the "Manage Environments" dropdown on the left of the Rancher console on port 8880.
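
      Once the environment has picked up the edited template (the hosts may need their kubelet containers recreated), re-check the capacity; the expectation, assuming the flag took effect, is that the pods line reports 500 instead of the default 110:

      kubectl describe node | grep -A 3 Capacity: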

            Assignee: michaelobrien
            Reporter: michaelobrien