Details
- Type: Bug
- Status: Closed
- Priority: Low
- Resolution: Done
- Fix Version/s: Casablanca Release
- Labels: None
Description
When deploying the Casablanca branch via OOM (the current commit in the branch as of today, 2019-01-20, is 6516dc4), the aai-data-router container in the onap-aai-data-router pod enters CrashLoopBackOff.
The log from the container is:
$ kubectl -n onap logs -f onap-aai-aai-data-router-b5dfbf44c-4fv9c aai-data-router
BOOT-INF/lib/* : no such file or directory
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::       (v1.5.18.RELEASE)
As a workaround, changing the image in the values.yaml file at /oom/kubernetes/aai/charts/aai-data-router from:
image: onap/data-router:1.3.2
to:
image: onap/data-router:1.3.1
resolves the issue on redeployment of the aai component.
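The workaround above amounts to a one-line edit of the chart values followed by a redeploy. A minimal sketch of the edit, applied here to a stand-in copy of values.yaml (the real file lives under oom/kubernetes/aai/charts/aai-data-router/, and the exact redeploy command depends on how the aai component was installed, e.g. a helm upgrade of the aai release):

```shell
# Stand-in copy of the chart values, so the edit can be shown end to end;
# on a real deployment, point sed at
# oom/kubernetes/aai/charts/aai-data-router/values.yaml instead.
tmp=$(mktemp -d)
printf 'image: onap/data-router:1.3.2\n' > "$tmp/values.yaml"

# Pin the data-router image back to the 1.3.1 tag (GNU sed assumed).
sed -i 's|onap/data-router:1.3.2|onap/data-router:1.3.1|' "$tmp/values.yaml"

cat "$tmp/values.yaml"   # -> image: onap/data-router:1.3.1
# After editing the real file, redeploy the aai component so the pod
# is recreated with the 1.3.1 image.
```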
Note that the log output of the pod with the 1.3.1 image is still the same as above, but the CrashLoopBackOff is resolved and the pod reports as Running.
A describe of the pod containing the 1.3.2 image is:
ubuntu@registry01:~/oom/kubernetes$ kubectl -n onap describe pod onap-aai-aai-data-router-b5dfbf44c-64pdr
Name:               onap-aai-aai-data-router-b5dfbf44c-64pdr
Namespace:          onap
Priority:           0
PriorityClassName:  <none>
Node:               ip-172-16-21-150.ec2.internal/172.16.21.150
Start Time:         Sun, 20 Jan 2019 22:30:17 +0000
Labels:             app=aai-data-router
                    pod-template-hash=618969007
                    release=onap-aai
Annotations:        <none>
Status:             Running
IP:                 172.16.27.6
Controlled By:      ReplicaSet/onap-aai-aai-data-router-b5dfbf44c
Init Containers:
  init-sysctl:
    Container ID:  docker://26f41debe300f4098c9e846d0c88a31f90f8eaa0003a8c3f070028ab715c453f
    Image:         docker.io/busybox
    Image ID:      docker-pullable://busybox@sha256:7964ad52e396a6e045c39b5a44438424ac52e12e4d5a25d94895f2058cb863a0
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      mkdir -p /logroot/data-router/logs
      chmod -R 777 /logroot/data-router/logs
      chown -R root:root /logroot
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 20 Jan 2019 22:30:29 +0000
      Finished:     Sun, 20 Jan 2019 22:30:29 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      NAMESPACE:  onap (v1:metadata.namespace)
    Mounts:
      /logroot/ from onap-aai-aai-data-router-logs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-dmnsz (ro)
Containers:
  aai-data-router:
    Container ID:   docker://64a75d164fe8680e2106c689f1a6621f2d8f640ff05e130420972e42d16e691d
    Image:          nexus3.onap.org:10001/onap/data-router:1.3.2
    Image ID:       docker-pullable://nexus3.onap.org:10001/onap/data-router@sha256:19da04c8ed67e0e82ea813f42141231d7ea8e4dd8598a8b96c4ba556feee3c7a
    Port:           9502/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sun, 20 Jan 2019 22:32:08 +0000
      Finished:     Sun, 20 Jan 2019 22:32:11 +0000
    Ready:          False
    Restart Count:  3
    Liveness:       tcp-socket :9502 delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:      tcp-socket :9502 delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      SERVICE_BEANS:         /opt/app/data-router/dynamic/conf
      CONFIG_HOME:           /opt/app/data-router/config/
      KEY_STORE_PASSWORD:    OBF:1y0q1uvc1uum1uvg1pil1pjl1uuq1uvk1uuu1y10
      DYNAMIC_ROUTES:        /opt/app/data-router/dynamic/routes
      KEY_MANAGER_PASSWORD:  OBF:1y0q1uvc1uum1uvg1pil1pjl1uuq1uvk1uuu1y10
      PATH:                  /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
      JAVA_HOME:             usr/lib/jvm/java-8-openjdk-amd64
    Mounts: <snip>
Volumes:
  <snip>
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      aai-filebeat
    Optional:  false
  aai-filebeat:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  onap-aai-aai-data-router-auth:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  onap-aai-aai-data-router
    Optional:    false
  onap-aai-aai-data-router-properties:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      onap-aai-aai-data-router-prop
    Optional:  false
  onap-aai-aai-data-router-dynamic-route:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      onap-aai-aai-data-router-dynamic
    Optional:  false
  onap-aai-aai-data-router-dynamic-policy:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      onap-aai-aai-data-router-dynamic
    Optional:  false
  onap-aai-aai-data-router-dynamic-oxm:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      onap-aai-aai-data-router-dynamic
    Optional:  false
  onap-aai-aai-data-router-logs:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  onap-aai-aai-data-router-logback-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      onap-aai-aai-data-router-log-configmap
    Optional:  false
  default-token-dmnsz:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-dmnsz
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age               From                                    Message
  ----     ------     ----              ----                                    -------
  Normal   Scheduled  2m                default-scheduler                       Successfully assigned onap/onap-aai-aai-data-router-b5dfbf44c-64pdr to ip-172-16-21-150.ec2.internal
  Normal   Pulled     1m                kubelet, ip-172-16-21-150.ec2.internal  Container image "docker.io/busybox" already present on machine
  Normal   Created    1m                kubelet, ip-172-16-21-150.ec2.internal  Created container
  Normal   Started    1m                kubelet, ip-172-16-21-150.ec2.internal  Started container
  Normal   Pulling    1m                kubelet, ip-172-16-21-150.ec2.internal  pulling image "nexus3.onap.org:10001/onap/data-router:1.3.2"
  Normal   Pulled     1m                kubelet, ip-172-16-21-150.ec2.internal  Successfully pulled image "nexus3.onap.org:10001/onap/data-router:1.3.2"
  Normal   Pulled     1m                kubelet, ip-172-16-21-150.ec2.internal  Container image "docker.elastic.co/beats/filebeat:5.5.0" already present on machine
  Normal   Created    1m                kubelet, ip-172-16-21-150.ec2.internal  Created container
  Normal   Started    1m                kubelet, ip-172-16-21-150.ec2.internal  Started container
  Normal   Created    15s (x4 over 1m)  kubelet, ip-172-16-21-150.ec2.internal  Created container
  Normal   Started    15s (x4 over 1m)  kubelet, ip-172-16-21-150.ec2.internal  Started container
  Normal   Pulled     15s (x3 over 1m)  kubelet, ip-172-16-21-150.ec2.internal  Container image "nexus3.onap.org:10001/onap/data-router:1.3.2" already present on machine
  Warning  BackOff    3s (x6 over 59s)  kubelet, ip-172-16-21-150.ec2.internal  Back-off restarting failed container
For the data router with the 1.3.1 image, the aai-data-router exit code 1 error is not seen:
<snip>
  aai-data-router:
    Container ID:   docker://282fc184fab7ff771be32771b8fb1007ccc8add608ee673aaf6a081add3d4632
    Image:          nexus3.onap.org:10001/onap/data-router:1.3.1
    Image ID:       docker-pullable://nexus3.onap.org:10001/onap/data-router@sha256:7cbd6fe2d41fc59f90d6513086c36b80b1895d74144c8ead44da69143738cb22
    Port:           9502/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sun, 20 Jan 2019 22:46:53 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       tcp-socket :9502 delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:      tcp-socket :9502 delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      SERVICE_BEANS:         /opt/app/data-router/dynamic/conf
      CONFIG_HOME:           /opt/app/data-router/config/
      KEY_STORE_PASSWORD:    OBF:1y0q1uvc1uum1uvg1pil1pjl1uuq1uvk1uuu1y10
      DYNAMIC_ROUTES:        /opt/app/data-router/dynamic/routes
      KEY_MANAGER_PASSWORD:  OBF:1y0q1uvc1uum1uvg1pil1pjl1uuq1uvk1uuu1y10
      PATH:                  /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
      JAVA_HOME:             usr/lib/jvm/java-8-openjdk-amd64
<snip>
The same issue, and the same workaround, also apply to the pomba data router pod.