ONAP Operations Manager
OOM-3198

A1Policy Manager ConfigMaps have 2 "Name" properties


Details


      The manifest for a1policy-manager contains two `name` entries, e.g.:

      kind: ConfigMap
      metadata:
        name: release-a1policymanagement
        namespace: default
        labels:
          app.kubernetes.io/name: a1policymanagement
          app: a1policymanagement
          helm.sh/chart: a1policymanagement-12.0.0
          app.kubernetes.io/instance: release
          app.kubernetes.io/managed-by: Helm
        name: release-a1policymanagement-policy-conf
      data: 

      The problem is the incorrect usage of the `common.resourceMetadata` template, e.g. in
      https://git.onap.org/oom/tree/kubernetes/a1policymanagement/templates/configmap.yaml#n22

      apiVersion: v1
      kind: ConfigMap
      metadata: {{- include "common.resourceMetadata" . | nindent 2 }}
        name: {{ include "common.fullname" . }}-policy-conf  # adds a second "name" key next to the one already emitted by common.resourceMetadata
      data:
      {{ tpl (.Files.Glob "resources/config/*").AsConfig . | indent 2 }}
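      A possible fix, sketched here only: render a single `name` by passing the suffix into the metadata helper instead of appending a second `name:` key after it. This assumes `common.resourceMetadata` accepts a `suffix` argument the way other OOM common helpers (e.g. `common.fullname`) do; if it does not, extending the helper to accept one would be part of the fix, so verify against the common chart before use.

      ```yaml
      apiVersion: v1
      kind: ConfigMap
      # assumption: the helper merges "suffix" into the rendered metadata name
      metadata: {{- include "common.resourceMetadata" (dict "suffix" "policy-conf" "dot" .) | nindent 2 }}
      data:
      {{ tpl (.Files.Glob "resources/config/*").AsConfig . | indent 2 }}
      ```

      Either way, the goal is that the metadata block contains exactly one `name` key, since duplicate keys in a YAML mapping are invalid and stricter parsers reject them.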
       

       

      <!-- Thank you for creating a Bug on OOM. -->
      <!-- Remove the comment lines once you have provided the requested information. -->
      <!-- Please fill the following template so we can efficiently move on: -->

      SUMMARY

      <!--- Explain the problem briefly below -->

      OS / ENVIRONMENT

      • Kubernetes version:
        <!-- output of `kubectl version` -->
      • Helm version:
        <!-- output of `helm version` -->
      • Kubernetes mode of installation:
        <!-- add also configuration file if relevant -->
        <!-- please run:
        docker run -e DEPLOY_SCENARIO=k8s-test \
        -v <the kube config>:/root/.kube/config \
        opnfv/functest-kubernetes-healthcheck:latest
        -->
        <!-- and upload the result directory as a zip file -->
      • CNI Used for Kubernetes:
      • type of installation: <!-- number of control, number of nodes -->

      OOM VERSION

      <!--- which branch / tag did you use -->

      CONFIGURATION

      <!-- please paste or upload override file used -->

      STEPS TO REPRODUCE

      <!-- please show line used to create helm charts -->
      <!-- please show line used to deploy ONAP -->
      <!-- add any necessary relevant command done -->

      EXPECTED RESULTS

      <!--- Describe what you expected to happen when running the steps above -->

      ACTUAL RESULTS

      <!--- Describe what actually happened. -->
      <!-- please run:
      docker run -v <the kube config>:/root/.kube/config \
      -v <result directory>:/var/lib/xtesting/results \
      registry.gitlab.com/orange-opensource/lfn/onap/integration/xtesting/infra-healthcheck:latest
      -->
      <!-- and upload the result directory as a zip file -->
      <!-- cd where/your/oom/install is -->
      <!-- launch healthchecks: ./kubernetes/robot/ete-k8s.sh YOUR_DEPLOYMENT_NAME health -->
      <!-- and upload the result directory as a zip file -->
      <!-- it should be /dockerdata-nfs/onap/robot/logs/0000_ete_health/ (0000 must be the biggest number) -->


        People

          andreasgeissler Andreas Geissler