Integration / INT-120

Deployment Status Sanity - Daily auto E2E verification status of the build


    Details

    • Type: Story
    • Status: Closed
    • Priority: High
    • Resolution: Done
    • Affects Version/s: None
    • Fix Version/s: Casablanca Release
    • Component/s: None
    • Labels: None
    • Sprint: Integration R1 RC0

      Description

       Set up a lab and test harness to auto-deploy OOM, for example, in order to verify the daily deploy.

      OOM master hourly deployment (19-minute turnaround); only the health check runs so far.

      https://wiki.onap.org/display/DW/Auto+Continuous+Deployment+via+Jenkins+and+Kibana

      http://jenkins.onap.info/job/oom-cd/

      http://kibana.onap.info:5601/app/kibana#/dashboard/AV-xTMU_UcDKD8zJ166G?_g=(refreshInterval:(display:'5%2520seconds',pause:!f,section:1,value:5000),time:(from:now-24h,mode:quick,to:now))&_a=(description:dash%2520pass,filters:!(),options:(darkTheme:!f),panels:!((col:1,id:AV-xBtr0UcDKD8zJ15sl,panelIndex:1,row:1,size_x:6,size_y:3,type:visualization),(col:7,id:AV-xCajhUcDKD8zJ15y-,panelIndex:2,row:1,size_x:6,size_y:3,type:visualization)),query:(match_all:()),timeRestore:!t,title:dash%2520pass,uiState:(P-1:(vis:(params:(sort:(columnIndex:0,direction:desc))))),viewMode:view) 

      (20171027 focus is HEAT right now)

       

      Artifacts should be pushed to a daily status page as part of the automated job.

       

      One of the goals would be to get a job running that deploys all of ONAP and runs more than the healthcheck; it would need to perform actual operations, up to and including a full vFW deploy, in order to pick up issues inside the containers. I understand that the Integration team is working on this.

       

      Bring up image/ami with rancher running (no OOM)

      Clone latest master

      ./deleteAll.sh everything

      Helm purge config pod

      Delete /dockerdata-nfs

       

      Run ./createConfig

      Run ./createAll (everything)

      Parse pod status for any remaining 0/1 pods not yet up (allow up to 1 hour)

      Run robot healthcheck

      Optionally run several REST commands for granular health (or expand the robot healthcheck)

      Publish to automated page/Jenkins

      Tag daily build as CI_20170917 good
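
      The pod-readiness parse step above can be sketched as a small shell filter over `kubectl get pods` output. The sample output below is illustrative, not from a real deployment:

```shell
# Count pods whose READY column is 0/N, i.e. not yet up.
# In the CD job the input would come from `kubectl get pods --all-namespaces`;
# a canned sample is inlined here for illustration.
sample='NAME               READY   STATUS    RESTARTS   AGE
aai-service-1234   1/1     Running   0          10m
mso-5678           0/1     Pending   0          10m
robot-9012         1/1     Running   0          9m'

not_ready=$(echo "$sample" | tail -n +2 | awk '$2 ~ /^0\// {n++} END {print n+0}')
echo "pods not ready: $not_ready"
```

      The CD loop would poll this count until it reaches 0 (or the one-hour timeout expires) before running the robot healthcheck.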

       AWS CLI

      #20171029 POC working on EC2 Spot using AMI preconfigured with Rancher 1.6 server/client
      aws ec2 request-spot-instances --spot-price "0.25" --instance-count 1 --type "one-time" --launch-specification file://aws_ec2_spot_cli.json
      aws ec2 associate-address --instance-id i-048637ed92da66bf6 --allocation-id eipalloc-375c1d02
      aws ec2 reboot-instances --instance-ids i-048637ed92da66bf6
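
      The launch specification file referenced by the spot request above (aws_ec2_spot_cli.json) is not reproduced in this ticket; a minimal sketch of its likely shape is below. All IDs (AMI, key name, security group) are placeholders, not the real values:

```shell
# Hypothetical contents of aws_ec2_spot_cli.json (placeholders, not real IDs).
cat > aws_ec2_spot_cli.json <<'EOF'
{
  "ImageId": "ami-xxxxxxxx",
  "InstanceType": "m4.2xlarge",
  "KeyName": "onap-key",
  "SecurityGroupIds": ["sg-xxxxxxxx"],
  "BlockDeviceMappings": [
    { "DeviceName": "/dev/sda1",
      "Ebs": { "VolumeSize": 100, "VolumeType": "gp2", "DeleteOnTermination": true } }
  ]
}
EOF
```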
      
      root@ip-172-31-68-153:~# kubectl cluster-info
      Kubernetes master is running at https://url.onap.info:8880/r/projects/1a7/kubernetes:6443
      Heapster is running at https://url.onap.info:8880/r/projects/1a7/kubernetes:6443/api/v1/namespaces/kube-system/services/heapster/proxy
      KubeDNS is running at https://url.onap.info:8880/r/projects/1a7/kubernetes:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy
      (4 more)

       

       Start with health REST calls

      ONAP 1.0 (no vFW yet) out of the box; these endpoints should return data:
      http://{{sdc_ip}}:8080/sdc2/rest/v1/user/users
      http://{{sdc_ip}}:8080/onboarding-api/v1.0/vendor-license-models
      http://{{sdc_ip}}:8080/onboarding-api/v1.0/vendor-software-products
      http://{{sdc_ip}}:8080/sdc2/rest/v1/catalog/resources/latestversion/notabstract/uidonly?internalComponentType=SERVICE
      
      http://{{mso_ip}}:8080/asdc/properties/encrypt/ecomp-dev/aa3871669d893c7fb8abbcda31b88b4f
      
      http://{{collector_ip}}:3904/events/unauthenticated.TCA_EVENT_OUTPUT/monitor/1?timeout=10000
      
      http://{{cdap_ip}}:10000/v3/namespaces/TCA
      
      https://{{aai_ip}}:8443/aai/v8/service-design-and-creation/models
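
      These granular probes can be scripted; a minimal sketch follows. The `{{...}}` placeholders must be substituted per environment, and an HTTP 200 is assumed as the healthy status:

```shell
# Probe an endpoint and report PASS/FAIL on its HTTP status code.
# -g disables curl URL globbing so {{...}} placeholders do not break parsing;
# -k skips TLS verification (self-signed certs in the lab).
check() {
  code=$(curl -gsk -o /dev/null --max-time 5 -w '%{http_code}' "$1")
  if [ "$code" = "200" ]; then
    echo "PASS $1"
  else
    echo "FAIL ($code) $1"
  fi
}

check "http://{{sdc_ip}}:8080/sdc2/rest/v1/user/users"
check "https://{{aai_ip}}:8443/aai/v8/service-design-and-creation/models"
```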

       Start with the robot healthcheck

      root@vm1-robot:~# docker ps
      CONTAINER ID        IMAGE                                                          COMMAND                  CREATED             STATUS              PORTS                NAMES
      36140a0c9b44        nexus3.onap.org:10001/openecomp/testsuite:1.0-STAGING-latest   "lighttpd -D -f /e..."   2 minutes ago       Up 2 minutes        0.0.0.0:88->88/tcp   openecompete_container
      root@vm1-robot:~# cd /opt
      root@vm1-robot:/opt# ls
      config  demo.sh  docker  ete.sh  eteshare  robot_vm_init.sh  testsuite
      root@vm1-robot:/opt# docker exec -it openecompete_container bash
      root@36140a0c9b44:/# ls /var/opt/OpenECOMP_ETE/
      Dockerfile  LICENSE.TXT  README.md  demo  demo.sh  dnstraffic.sh  docker  html  lighttpd.conf  red.xml  robot  robot_vm_init.sh  runTags.sh  setup.sh  testsuite  version.properties
      
      Then do the cloud-region PUT before robot init:
      
      curl -X PUT https://127.0.0.1:30233/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/RegionOne --data "@aai-cloud-region-put.json" -H "authorization: Basic TW9kZWxMb2FkZXI6TW9kZWxMb2FkZXI=" -H "X-TransactionId:jimmy-postman" -H "X-FromAppId:AAI" -H "Content-Type:application/json" -H "Accept:application/json" --cacert aaiapisimpledemoopenecomporg_20171003.crt -k
      
      {
        "cloud-owner": "CloudOwner",
        "cloud-region-id": "RegionOne",
        "cloud-region-version": "v2",
        "cloud-type": "SharedNode",
        "cloud-zone": "CloudZone",
        "owner-defined-type": "OwnerType",
        "tenants": {
          "tenant": [{
            "tenant-id": "{{tenant_id}}",
            "tenant-name": "ecomp-dev"
          }]
        }
      }
      
      curl -X GET https://127.0.0.1:30233/aai/v11/cloud-infrastructure/cloud-regions/ -H "authorization: Basic TW9kZWxMb2FkZXI6TW9kZWxMb2FkZXI=" -H "X-TransactionId:jimmy-postman" -H "X-FromAppId:AAI" -H "Content-Type:application/json" -H "Accept:application/json" --cacert aaiapisimpledemoopenecomporg_20171003.crt -k
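
      The GET above can double as a verification step: grep its response for the region just PUT. A hedged sketch, with a canned sample response inlined for illustration:

```shell
# Confirm the cloud-region PUT took effect by checking the GET response
# for the new cloud-region-id. In the lab, $response would come from the
# curl GET above; a canned sample is used here for illustration.
response='{"cloud-region":[{"cloud-owner":"CloudOwner","cloud-region-id":"RegionOne","cloud-region-version":"v2"}]}'
if echo "$response" | grep -q '"cloud-region-id" *: *"RegionOne"'; then
  echo "RegionOne registered"
else
  echo "RegionOne missing"
fi
```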
      
      

              People

              Assignee:
              michaelobrien Michael O'Brien
              Reporter:
              michaelobrien Michael O'Brien
               Votes:
               0
               Watchers:
               2

                Dates

                Created:
                Updated:
                Resolved: