Type: Story
Resolution: Done
Priority: Medium
https://github.com/kubernetes/kubernetes/issues/23349
We use a multi-node cluster by default now, but we should have a workaround that uses the "max-pods" kubelet parameter.
kubectl describe node

Capacity:
  cpu:     16
  memory:  125827328Ki
  pods:    110
Allocatable:
  cpu:     16
  memory:  125724928Ki
  pods:    110
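As a rough sketch of why the default matters: OOM-1138 below cites 300+ pods on a 3-node cluster, so at the 110-pod-per-node ceiling ONAP cannot fit on fewer than three nodes (300 is an approximate figure used only for illustration):

```shell
ONAP_PODS=300   # approximate ONAP pod count cited in OOM-1138 (assumption)
MAX_PODS=110    # kubelet default, as shown in the describe output above
# Ceiling division: minimum nodes = ceil(ONAP_PODS / MAX_PODS)
NODES=$(( (ONAP_PODS + MAX_PODS - 1) / MAX_PODS ))
echo "$NODES"   # → 3
```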
Procedure:
https://wiki.onap.org/display/DW/Cloud+Native+Deployment#CloudNativeDeployment-Changemax-podsfromdefault110podlimit
Manual procedure: change the Kubernetes template (1pt2) before using it to create an environment (1a7).
Add --max-pods=500 to the "Additional Kubelet Flags" box on the v1.10.13 version of the Kubernetes template, reached from the "Manage Environments" dropdown on the left of the Rancher console on port 8880.
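Rancher 1.6's Kubernetes template passes these flags straight to the kubelet. For reference, on clusters where the kubelet is driven by a config file rather than command-line flags (newer Kubernetes deployments, outside the Rancher procedure described here), the equivalent setting would be the maxPods field — a sketch, not taken from this environment:

```yaml
# Sketch: KubeletConfiguration equivalent of --max-pods=500
# (applies to kubelets started with --config, not to this Rancher 1.6 setup)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 500
```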
Scripted procedure: insert the following at line 140 of
https://git.onap.org/logging-analytics/tree/deploy/rancher/oom_rancher_setup.sh#n140
POD_LIMIT=600
# Note: the single-quoted JSON body is broken out around $POD_LIMIT so the shell expands it.
K8S_LIMIT_URL_RESPONSE=$(curl -X PUT \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{"id":"1pt2","type":"projectTemplate","baseType":"projectTemplate","name":"Kubernetes","state":"active","accountId":null,"created":"2018-07-30T21:59:58Z","createdTS":1532987998000,"data":{"fields":{"stacks":[{"name":"kubernetes","templateId":"library:infra*k8s"},{"name":"network-services","templateId":"library:infra*network-services"},{"name":"ipsec","templateId":"library:infra*ipsec"},{"name":"healthcheck","templateId":"library:infra*healthcheck"}]}},"description":"Default Kubernetes template","externalId":"catalog://library:project*kubernetes:0","isPublic":true,"kind":"projectTemplate","removeTime":null,"removed":null,"stacks":[{"type":"catalogTemplate","name":"healthcheck","templateId":"library:infra*healthcheck"},{"type":"catalogTemplate","name":"kubernetes","templateVersionId":"library:infra*k8s:47","answers":{"CONSTRAINT_TYPE":"none","CLOUD_PROVIDER":"rancher","AZURE_CLOUD":"AzurePublicCloud","AZURE_TENANT_ID":"","AZURE_CLIENT_ID":"","AZURE_CLIENT_SECRET":"","AZURE_SEC_GROUP":"","RBAC":false,"REGISTRY":"","BASE_IMAGE_NAMESPACE":"","POD_INFRA_CONTAINER_IMAGE":"rancher/pause-amd64:3.0","HTTP_PROXY":"","NO_PROXY":"rancher.internal,cluster.local,rancher-metadata,rancher-kubernetes-auth,kubernetes,169.254.169.254,169.254.169.250,10.42.0.0/16,10.43.0.0/16","ENABLE_ADDONS":true,"ENABLE_RANCHER_INGRESS_CONTROLLER":true,"RANCHER_LB_SEPARATOR":"rancherlb","DNS_REPLICAS":"1","ADDITIONAL_KUBELET_FLAGS":"--max-pods='$POD_LIMIT'","FAIL_ON_SWAP":"false","ADDONS_LOG_VERBOSITY_LEVEL":"2","AUDIT_LOGS":false,"ADMISSION_CONTROLLERS":"NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,ResourceQuota","SERVICE_CLUSTER_CIDR":"10.43.0.0/16","DNS_CLUSTER_IP":"10.43.0.10","KUBEAPI_CLUSTER_IP":"10.43.0.1","KUBERNETES_CIPHER_SUITES":"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305","DASHBOARD_CPU_LIMIT":"100m","DASHBOARD_MEMORY_LIMIT":"300Mi","INFLUXDB_HOST_PATH":"","EMBEDDED_BACKUPS":true,"BACKUP_PERIOD":"15m0s","BACKUP_RETENTION":"24h","ETCD_HEARTBEAT_INTERVAL":"500","ETCD_ELECTION_TIMEOUT":"5000"}},{"type":"catalogTemplate","name":"network-services","templateId":"library:infra*network-services"},{"type":"catalogTemplate","name":"ipsec","templateId":"library:infra*ipsec"}],"transitioning":"no","transitioningMessage":null,"transitioningProgress":null,"uuid":"d0929c3a-7809-4a53-ac31-23eb57ab87a7"}' \
  "http://$SERVER:8880/v2-beta/projecttemplates/1pt2")
echo "$K8S_LIMIT_URL_RESPONSE"
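A more maintainable sketch would patch only the pod-limit field instead of hand-maintaining the whole template body: fetch the template from the same projecttemplates URL, rewrite the --max-pods value, and PUT the result back. The template slice below is a hypothetical subset of the full JSON above, shown just to illustrate the substitution step:

```shell
POD_LIMIT=600
# Hypothetical slice of the project template; in practice this would be
# fetched with: curl "http://$SERVER:8880/v2-beta/projecttemplates/1pt2"
TEMPLATE='{"id":"1pt2","stacks":[{"name":"kubernetes","answers":{"ADDITIONAL_KUBELET_FLAGS":"--max-pods=110"}}]}'
# Rewrite whatever limit is currently present to the new one.
PATCHED=$(printf '%s' "$TEMPLATE" | sed "s/--max-pods=[0-9]*/--max-pods=$POD_LIMIT/")
echo "$PATCHED"
# The patched body would then be PUT back to the same projecttemplates URL.
```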
is blocked by:
- OOM-1138 Work with Rancher labs to automatically enable the --max-pods kubelet config for when we reach 300+ per 3 node cluster (Closed)

is duplicated by:
- OOM-998 DevOps for multi-node Kubernetes cluster because of the 110 pod limit per host (Closed)
- OOM-1138 Work with Rancher labs to automatically enable the --max-pods kubelet config for when we reach 300+ per 3 node cluster (Closed)

relates to:
- OOM-1458 dev.yaml references dockerdata not dockerdata-nfs - disable clamp/so, start replicaSet: 1 config (Closed)
- OPTFRA-336 OOM oof deployment failure on missing image - optf-osdf:1.2.0 (Closed)