Details
- Type: Bug
- Status: Closed
- Priority: Highest
- Resolution: Done
- None
Description
20190109: from the AAI team
https://wiki.onap.org/display/DW/2019-01-17+AAI+Developers+Meeting+Open+Agenda
"hector has discovered that the stress test jar (liveness probe?) in aai-cassandra is hammering the cpu/ram/hd on the vm that aai is on - this breaks the etcd cluster (not the latency/network issues we suspected that may cause pod rescheduling)"
20181017: update - reopen or re-raise an AAI/Logstash-specific JIRA for Dublin in LOG-707, as this is more of an AAI-to-Logstash issue.
Need to find out which container is saturating the CPU - it is the Logstash one; top output from my VM (see the PID-to-container mapping sketch below):
12588 ubuntu 20 0 6397972 699192 22012 S 578.1 1.1 567:55.27 /usr/bin/java -Xmx500m -Xss2048k -Djffi.boot.library.path=/usr/share/logstash/vendor/jruby/lib/jni -Xbo+
http://jenkins.onap.info/job/oom-cd-master/2897/console
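The PID in the top output above is a host-level PID, so mapping it back to the owning container takes one extra step. A minimal sketch follows, assuming a Docker-based Kubernetes node, the PID 12588 from the output above, and the default onap namespace; it is illustrative only, not part of the original investigation.
{code:bash}
# Read the cgroup of the hot host PID - on a Docker-based node the
# cgroup path ends in the full container ID (under /kubepods/...).
cat /proc/12588/cgroup

# Cross-reference that container ID with the container runtime to get
# the container/pod name (replace <container-id> with the value above).
docker ps --no-trunc | grep <container-id>

# Or, if metrics-server/heapster is available, list pod CPU usage and
# sort on the CPU(cores) column (reported in millicores) to spot the
# hot pod directly.
kubectl top pods -n onap | sort -k2 -rn | head
{code}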
Find out what the reason for the saturation is - is it excessive logging (for example the cluster heartbeats from all the DB clusters), or a misconfiguration of the resources section (a hedged override sketch follows below)?
It looks like logs are still being processed up to 4 min after they arrive in Logstash - getting an average of 200-400 logs per 30 sec on
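Until the root cause (log volume vs. limits) is settled, the Logstash footprint could be bounded from the OOM Helm overrides. The sketch below is hedged: the value paths (log / log-logstash / replicaCount / resources), the release name "dev" and the local/onap chart reference are assumptions modelled on the linked LOG-860, LOG-915 and OOM-1153 work, not verified against a specific OOM release. Note also OOM-1120, where the cpu/ram limits in values.yaml had no effect and the JVM settings had to be adjusted instead.
{code:bash}
# Hypothetical override capping the Logstash footprint while the root
# cause is investigated. Key names are assumptions - check the log
# chart's values.yaml in your OOM release for the real paths.
cat > logstash-limits.yaml <<'EOF'
log:
  log-logstash:
    replicaCount: 1        # LOG-860: drop the ReplicaSet to 1 in dev
    resources:
      limits:
        cpu: 1             # LOG-915: reduce the core limit from 3 to 1
        memory: 2Gi
      requests:
        cpu: 500m
        memory: 1Gi
EOF

# Re-deploy with the override (release name "dev" and the local/onap
# chart reference are assumptions from a typical OOM install).
helm upgrade dev local/onap -f logstash-limits.yaml
{code}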
Attachments
Issue Links
- blocks
  - LOG-915 Reduce Logstash core limit to 1 from 3 until LOG-LS and AAI-CS perf issue on the same VM is determined (Open)
  - LOG-841 Logstash container - use a label to distribute the ReplicaSet instead of DaemonSet (Closed)
  - OOM-1793 High CPU observed by Cassandra liveness/readiness probe (Closed)
- duplicates
  - LOG-435 logstash consuming high CPU on host affects other components - pre onap-wide resource allocations (Closed)
- is blocked by
  - LOG-181 Use DaemonSets for logging - Logstash (Closed)
  - LOG-258 S3P ELK stack performance and clustering (Closed)
  - LOG-860 Lower logstash replicaSet from 5 to 3 and set to 1 in dev.yaml (Closed)
  - LOG-380 Platform Maturity: Performance, Stability, Resiliency, Scalability (Closed)
  - OOM-1120 values.yaml resource limits for cpu/ram have no effect - workaround is use JVM threadpool setting (Closed)
- is duplicated by
  - LOG-862 Log project pods take a lot of CPU in full ONAP deployment (Closed)
- relates to
  - LOG-877 S3P: Logging streaming/format alignment for dublin - China Telecom, Deutsche Telekom, Vodafone (Closed)
  - LOG-256 log-kibana deployment is timing out periodically (Closed)
  - LOG-876 S3P: Logging for Core Service/VNF state and transition model - Deutsche Telekom and Vodafone (Closed)
  - AAF-273 Cassandra pod running over 8G heap - or 10% of ONAP ram (for 135 other pods on 256G 4 node cluster) (Closed)
  - AAI-2092 aai-resources does excessive amounts of logging (Closed)
  - LOG-157 S3P Events, Metrics, Monitoring (Closed)
  - OOM-758 Common Mariadb Galera Helm Chart to be reused by many applications (Closed)
  - OOM-927 Need a production grade configuration override file of ONAP deployment (Closed)
  - OOM-1153 Resource Limits for log (Closed)
  - OOM-2794 Upgrade logstash image (Closed)
  - VVP-130 DEV mode dev.yaml override for all remaining ReplicaSet counts still above 1 (Closed)
  - AAI-1860 champ service during startup using 9+ vCores until restarted (Closed)
- links to
- mentioned in