Details

Type: Task
Status: Open
Priority: High
Resolution: Unresolved
Labels: None
Description
2019-01-09, from the AAI team:
https://wiki.onap.org/display/DW/2019-01-17+AAI+Developers+Meeting+Open+Agenda
"hector has discovered that the stress test jar (liveness probe?) in aai-cassandra is hammering the cpu/ram/hd on the vm that aai is on - this breaks the etcd cluster (not the latency/network issues we suspected that may cause pod rescheduling) "
Some VM architectures use 4 vCores per VM, not 8; in that case the current vCore limit of 3 for logstash is insufficient, so we are reducing it to 1 for now. We will take a hit on log-processing throughput, but at least the VMs where logstash and aai-cassandra are collocated won't saturate - see the values sketch below.
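A minimal sketch of the kind of Helm values override involved - the key path is an assumption based on the usual OOM chart layout, and the real keys in the logstash chart may differ:

    # Hypothetical values.yaml override for the logstash (log-ls) chart
    log-ls:
      resources:
        limits:
          cpu: 1        # reduced from 3 so a 4-vCore VM is not saturated
          memory: 2Gi   # memory values are placeholders
        requests:
          cpu: "500m"
          memory: 1Gi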
Actually, I remember an issue in LOG-376 where running with fewer than 3 cores caused logstash not to deploy properly - I may just drop the ReplicaSet from the current 5 to 1 or 2 (it is no longer a DaemonSet at 13); a scaling sketch follows.
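Either of the following would do it - the deployment name and namespace are assumptions:

    # One-off scale-down of the running Deployment
    kubectl scale deployment log-ls --replicas=2 -n onap

Or persist it in the values override (key path assumed, as above):

    log-ls:
      replicaCount: 2   # down from 5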
aai-cassandra will also periodically reach 7 of its 8 cores, but only in spikes.
Discussion with @Sanjay Agraharam and pau2882: we should verify how Cassandra is actually running on the VM and whether debug log levels are enabled - a couple of candidate checks are sketched below.
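Two hedged checks, assuming metrics-server is available and the Cassandra pod is aai-cassandra-0 in the onap namespace (pod name and namespace are assumptions):

    # Confirm which pods are actually spiking CPU/memory
    kubectl top pods -n onap | grep -E 'aai-cassandra|log-ls'

    # Check Cassandra's active logging levels (DEBUG on the root logger is costly)
    kubectl exec -n onap aai-cassandra-0 -- nodetool getlogginglevels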
Proposal: use node labelling to split aai-cs and ls onto separate VMs - no DaemonSet in this case. A labelling/nodeSelector sketch follows.
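A minimal sketch of the split, with hypothetical node names and label key:

    # Label the nodes (node names and label key are placeholders)
    kubectl label node worker-1 onap-workload=aai-cassandra
    kubectl label node worker-2 onap-workload=logstash

Each chart then pins its pods to the matching node via a nodeSelector in the pod spec:

    # Pod-spec fragment for the logstash chart (override path assumed)
    nodeSelector:
      onap-workload: logstash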
Issue Links

is blocked by:
- LOG-376 Logstash full saturation of 8 cores with AAI deployed on one of the quad 8 vCore vms for 30 logs/sec - up replicaSet 1 to 3 or use DaemonSet (Closed)
- LOG-860 Lower logstash replicaSet from 5 to 3 and set to 1 in dev.yaml (Closed)

relates to:
- OOM-2794 Upgrade logstash image (Closed)
- VVP-130 DEV mode dev.yaml override for all remaining ReplicaSet counts still above 1 (Closed)