- Bug
- Resolution: Done
- Medium
- Dublin Release
- Integration OOM-Staging-Daily
- Integration R4 M3 (2/14-3/14)
In the ONAP integration lab environment, the DMaaP message router sometimes fails to start correctly.
root@staging-rancher:~# kubectl -n onap get pod | grep dmaap-message
dev-dmaap-message-router-0             1/1   Running            0   14m
dev-dmaap-message-router-kafka-0       1/1   Running            1   25m
dev-dmaap-message-router-kafka-1       0/1   CrashLoopBackOff   9   24m
dev-dmaap-message-router-kafka-2       1/1   Running            0   23m
dev-dmaap-message-router-zookeeper-0   1/1   Running            0   2h
dev-dmaap-message-router-zookeeper-1   1/1   Running            0   2h
dev-dmaap-message-router-zookeeper-2   1/1   Running            0   2h
The message router pods must start in the order zookeeper -> kafka -> message-router.
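In OOM's Helm charts this ordering is typically enforced with a readiness-check init container on the dependent pod. A minimal sketch of that pattern (the image tag and container name below are illustrative assumptions, not copied from the actual chart):

```yaml
# Sketch: message-router pod waits for the kafka containers to be ready
# before its main container starts. Image tag and --container-name value
# are assumptions; check the deployed chart for the real ones.
initContainers:
  - name: message-router-readiness
    image: oomk8s/readiness-check:2.0.0
    command:
      - /root/ready.py
    args:
      - --container-name
      - message-router-kafka
    env:
      - name: NAMESPACE
        valueFrom:
          fieldRef:
            apiVersion: v1
            fieldPath: metadata.namespace
```

The init container polls the named containers in the namespace and only exits (letting the pod proceed) once they report ready, which is what makes the zookeeper -> kafka -> message-router sequence deterministic.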
In the Integration lab deployment, the kafka pods sometimes go through several restarts during initialization because the liveness and readiness timers are too short. This can let the message-router pod start before the kafka pods are ready, breaking the startup sequence. When this happens, the DMaaP message router healthcheck fails.
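One mitigation is to lengthen the kafka container's probe timers so that slow initialization is not treated as a failure and the pod is not killed into CrashLoopBackOff. A hedged sketch of the relevant fields (the port and the specific values are illustrative; the real settings live in the message-router-kafka chart):

```yaml
# Illustrative probe settings for the kafka container. Port and timer
# values are assumptions; tune against the chart's values.yaml.
livenessProbe:
  tcpSocket:
    port: 9092
  initialDelaySeconds: 90   # give kafka longer to finish initializing
  periodSeconds: 20
  timeoutSeconds: 5
readinessProbe:
  tcpSocket:
    port: 9092
  initialDelaySeconds: 60
  periodSeconds: 20
```

Raising `initialDelaySeconds` defers the first probe past kafka's normal startup time, while a longer `periodSeconds` reduces the chance that a transient slow response triggers a restart.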