Configuration Persistence Service / CPS-1768

NCMP: Chunk sending slows down during discovery flow


    • Type: Story
    • Resolution: Done
    • Priority: Medium
    • London Release

      Current behavior

      During testing with a high number of cmHandles, it turned out that sending cmHandles into NCMP slows down over time.
      The first batches take around 1-2 seconds each, while the last batches take more than 2 minutes.
      20000 cmHandles were sent in batches of 100 (200 x 100).

      Testing against the subsystem stub, this means the time to send in 20k cmHandles increased from 12 minutes to 70 minutes.
      With a real subsystem it takes several hours. (A sketch of one such registration batch follows below.)
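
      For illustration, a minimal sketch of what one registration batch could look like, assuming the standard NCMP inventory endpoint POST /ncmpInventory/v1/ch; the host name, cmHandle ids and dmi-plugin value are placeholders, not taken from this ticket:

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;
        import java.util.stream.Collectors;
        import java.util.stream.IntStream;

        // Registers 20000 cmHandles in 200 batches of 100, timing each batch.
        public class BatchRegistration {
            static final HttpClient CLIENT = HttpClient.newHttpClient();
            static final String NCMP = "http://ncmp:8080"; // placeholder host

            public static void main(String[] args) throws Exception {
                for (int batch = 0; batch < 200; batch++) {
                    final int base = batch * 100;
                    final String cmHandles = IntStream.range(base, base + 100)
                        .mapToObj(i -> "{\"cmHandle\":\"ch-" + i + "\"}")
                        .collect(Collectors.joining(","));
                    final String body = "{\"dmiPlugin\":\"my-dmi-plugin\"," // placeholder plugin name
                        + "\"createdCmHandles\":[" + cmHandles + "]}";
                    final HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create(NCMP + "/ncmpInventory/v1/ch"))
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString(body))
                        .build();
                    final long start = System.nanoTime();
                    final HttpResponse<String> response =
                        CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
                    System.out.printf("batch %d -> HTTP %d in %d ms%n", batch,
                        response.statusCode(), (System.nanoTime() - start) / 1_000_000);
                }
            }
        }

      Per-batch timings printed by a loop like this are what show the slowdown: early batches in seconds, late batches in minutes.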

      The scenario where we see the issue is the following:
      We send in cmHandle chunks as described above. In the meantime our topology adapter starts topology discovery: for each READY cmHandle it starts sending requests to NCMP.
      This means ~10-15 requests/sec on the /ncmp/v1/ch/{cm-handle}/modules endpoint and ~25-30 requests/sec on the /ncmp/v1/ch/{cm-handle}/data/ds/{datastore-name} endpoint (see the sketch after this paragraph).
      During all of this we keep sending the cmHandle batches.
      This behavior, and the way our microservices perform these operations, has not really changed; they work as before.
      Sending in the cmHandle batches slows down around the time the topology adapter starts its operation.
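
      As a sketch of that read traffic (endpoints taken from above; the host, cmHandle ids and the passthrough datastore name used here are assumptions), something like the following generates comparable load:

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;
        import java.util.concurrent.atomic.AtomicInteger;

        // Fires module reads at ~12/sec and data reads at ~28/sec, roughly
        // the rates the topology adapter produces during discovery.
        public class ReadTraffic {
            static final HttpClient CLIENT = HttpClient.newHttpClient();
            static final String NCMP = "http://ncmp:8080"; // placeholder host
            static final AtomicInteger COUNTER = new AtomicInteger();

            public static void main(String[] args) {
                final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);
                // ~12 requests/sec on the modules endpoint (one every ~83 ms)
                scheduler.scheduleAtFixedRate(
                    () -> get("/ncmp/v1/ch/ch-" + nextHandle() + "/modules"),
                    0, 83, TimeUnit.MILLISECONDS);
                // ~28 requests/sec on the data endpoint (one every ~36 ms);
                // the datastore name and resourceIdentifier are assumed values
                scheduler.scheduleAtFixedRate(
                    () -> get("/ncmp/v1/ch/ch-" + nextHandle()
                        + "/data/ds/ncmp-datastore:passthrough-operational?resourceIdentifier=/"),
                    0, 36, TimeUnit.MILLISECONDS);
                // runs until the process is killed
            }

            static int nextHandle() {
                return COUNTER.getAndIncrement() % 20000;
            }

            static void get(final String path) {
                try {
                    CLIENT.send(HttpRequest.newBuilder(URI.create(NCMP + path)).GET().build(),
                        HttpResponse.BodyHandlers.discarding());
                } catch (final Exception e) {
                    // ignore failures; this is load generation only
                }
            }
        }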

      The default memory usage settings:

      • requested memory: 2Gi
      • memory limit: 3Gi
      • of these, the Java heap is 750 MB (see the check below)
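
      To confirm that the heap actually in effect inside the container matches the 750 MB above, a generic JVM check (not from this ticket) can be run:

        // Prints the JVM's effective maximum heap; with the settings above
        // this should report roughly 750 MB.
        public class HeapCheck {
            public static void main(String[] args) {
                System.out.printf("max heap: %d MB%n",
                    Runtime.getRuntime().maxMemory() / (1024 * 1024));
            }
        }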

      We do not have the heap memory issues that we had in CPS-1716.

      Previously used version
      3.2.5
      https://gerrit.onap.org/r/gitweb?p=cps.git;a=commit;h=3bc22ed0ea833bdb649f393ec20c08dbb1bb7610

       

      Version where we found the fault
      3.3.2
      https://gerrit.onap.org/r/gitweb?p=cps.git;a=commit;h=19f963b1653ebfa38ed6441107175fafb59940c8

       

      Expected behavior:
      Similar cmHandle registration performance as before

       

      Reproduction
      Configure the Java heap size to our settings (see above).
      Try to register 20000 cmHandles into NCMP in batches of 100.
      In the meantime, start sending requests toward the modules, data and searches endpoints.
      Registering them all will take more than an hour.
      Sending in the batches takes even more time.
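
      One way to reproduce is to run the two sketches above together: start the batch registration loop, then start the read-traffic generator against the same NCMP instance, and compare the per-batch timings with and without the read traffic running.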

       

      Test environment:
        The test was performed on a Kubernetes cluster without wiremocked elements.

       

      Collected logs (see attachments below):

        1. data_request_rate_325.png (92 kB)
        2. data_request_rate_332.png (61 kB)
        3. disco-log-325.txt (109 kB)
        4. ncmp-log-325.zip (768 kB)
        5. ncmp-log-332.txt (8.67 MB)
        6. ready_events_total_325.png (59 kB)
        7. ready_events_total_332.png (27 kB)

            Assignee: Unassigned
            Reporter: danielhollos