ONAP Operations Manager / OOM-2762

NBI pod startup fails when using "localCluster" DB option



      When using the "localCluster: true" option, the resulting chart is incomplete and
      the NBI pod fails to start.
      The error is caused by a wrong SPRING_DATASOURCE_URL:

          Liveness:       http-get https://:8443/nbi/api/v4/status delay=180s timeout=1s period=30s #success=1 #failure=3
          Readiness:      http-get https://:8443/nbi/api/v4/status delay=185s timeout=1s period=30s #success=1 #failure=3
          Environment:
            SPRING_DATASOURCE_URL:         jdbc:mariadb://:/nbi
            SPRING_DATASOURCE_USERNAME:    <set to the key 'login' in secret 'nbi-nbi-db-secret'>     Optional: false
            SPRING_DATASOURCE_PASSWORD:    <set to the key 'password' in secret 'nbi-nbi-db-secret'>  Optional: false
      

      The reason is a missing "service" entry in the mariadb-galera section of values.yaml:

      mariadb-galera:
        db:
          externalSecret: *dbUserSecretName
          name: &mysqlDbName nbi
        nameOverride: &nbi-galera nbi-galera
        replicaCount: 1
        persistence:
          enabled: true
          mountSubPath: nbi/maria/data
        serviceAccount:
          nameOverride: *nbi-galera
      

      The part to be added:

        service:
          name: nbi-galera
          portName: nbi-galera
          internalPort: 3306
      
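      For reference, here is a sketch of the complete mariadb-galera section with the
      missing "service" entry merged in. It is assembled from the two fragments above;
      the exact placement of the "service" key within the section is an assumption, since
      YAML mapping keys are order-independent:

```yaml
mariadb-galera:
  db:
    externalSecret: *dbUserSecretName
    name: &mysqlDbName nbi
  nameOverride: &nbi-galera nbi-galera
  # missing part, added to fix the rendered datasource URL
  service:
    name: nbi-galera
    portName: nbi-galera
    internalPort: 3306
  replicaCount: 1
  persistence:
    enabled: true
    mountSubPath: nbi/maria/data
  serviceAccount:
    nameOverride: *nbi-galera
```

      With the service name and port present, the rendered URL should come out as
      jdbc:mariadb://nbi-galera:3306/nbi instead of the broken jdbc:mariadb://:/nbi
      shown above.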

      <!-- Thank you for creating a Bug on OOM. -->
      <!-- remove each comment line (starting with <!--) once you have done what it asks -->
      <!-- Please fill the following template so we can efficiently move on: -->

      SUMMARY

      <!--- Explain the problem briefly below -->

      OS / ENVIRONMENT

      • Kubernetes version:
        <!-- output of `kubectl version` -->
      • Helm version:
        <!-- output of `helm version` -->
      • Kubernetes mode of installation:
        <!-- add also configuration file if relevant -->
        <!-- please run:
        docker run -e DEPLOY_SCENARIO=k8s-test \
        -v <the kube config>:/root/.kube/config \
        opnfv/functest-kubernetes-healthcheck:latest
        -->
        <!-- and upload the result directory as a zip file -->
      • CNI Used for Kubernetes:
      • Type of installation: <!-- number of control, number of nodes -->

      OOM VERSION

      <!--- which branch / tag did you use -->

      CONFIGURATION

      <!-- please paste or upload override file used -->

      STEPS TO REPRODUCE

      <!-- please show line used to create helm charts -->
      <!-- please show line used to deploy ONAP -->
      <!-- add any necessary relevant command done -->

      EXPECTED RESULTS

      <!--- Describe what you expected to happen when running the steps above -->

      ACTUAL RESULTS

      <!--- Describe what actually happened. -->
      <!-- please run:
      docker run -v <the kube config>:/root/.kube/config \
      -v <result directory>:/var/lib/xtesting/results \
      registry.gitlab.com/orange-opensource/lfn/onap/integration/xtesting/infra-healthcheck:latest
      -->
      <!-- and upload the result directory as a zip file -->
      <!-- cd where/your/oom/install is -->
      <!-- launch healthchecks: ./kubernetes/robot/ete-k8s.sh YOUR_DEPLOYMENT_NAME health -->
      <!-- and upload the result directory as a zip file -->
      <!-- it should be /dockerdata-nfs/onap/robot/logs/0000_ete_health/ (0000 must be the biggest number) -->


          andreasgeissler Andreas Geissler