-
Story
-
Resolution: Done
-
High
-
None
-
None
-
None
-
Integration R1 RC0
Set up a lab and test harness to auto-deploy OOM, for example to verify the daily deploy.
OOM master hourly deployment (19 min turnaround) - only health check so far
https://wiki.onap.org/display/DW/Auto+Continuous+Deployment+via+Jenkins+and+Kibana
http://jenkins.onap.info/job/oom-cd/
(20171027 focus is HEAT right now)
Artifacts should be pushed to a daily status page as part of the automated job.
One of the goals is to get a job running that deploys all of ONAP and runs more than the healthcheck; it would need to perform actual operations, up to and including a full vFW deploy, in order to pick up issues inside the containers. I understand that the Integration team is working on this.
Bring up image/ami with rancher running (no OOM)
Clone latest master
./deleteAll.sh everything
Helm purge config pod
Delete /dockerdata-nfs
Run ./createConfig
Run ./createAll (everything)
Parse for 0/1 remaining pods not up (1 hour)
Run robot healthcheck
Optionally run several REST commands for granular health checks (or expand the Robot healthcheck)
Publish to automated page/Jenkins
Tag daily build as CI_20170917 good
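The "parse for 0/1 remaining pods not up" step above can be a small shell filter over `kubectl get pods --all-namespaces` output. A minimal sketch, run here against a canned sample (hypothetical pod names) so the filter is shown standalone; the CD job would pipe in live kubectl output instead:

```shell
#!/bin/sh
# Canned `kubectl get pods --all-namespaces` output (illustrative sample);
# in the CD loop this would be live: PODS=$(kubectl get pods --all-namespaces)
PODS='NAMESPACE     NAME            READY   STATUS    RESTARTS   AGE
onap-aai      aai-service-1   1/1     Running   0          10m
onap-sdc      sdc-be-1        0/1     Pending   0          10m
onap-robot    robot-1         1/1     Running   0          10m'
# READY is column 3; a pod counts as up only when ready count equals desired count.
NOT_READY=$(echo "$PODS" | awk 'NR>1 { split($3, r, "/"); if (r[1] != r[2]) n++ } END { print n+0 }')
echo "pods not ready: $NOT_READY"
```

The CD job would loop on this count (with the one-hour timeout noted above) until it reaches zero, then proceed to the Robot healthcheck.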
AWS CLI
#20171029 POC working on EC2 Spot using an AMI preconfigured with Rancher 1.6 server/client

aws ec2 request-spot-instances --spot-price "0.25" --instance-count 1 --type "one-time" --launch-specification file://aws_ec2_spot_cli.json
aws ec2 associate-address --instance-id i-048637ed92da66bf6 --allocation-id eipalloc-375c1d02
aws ec2 reboot-instances --instance-ids i-048637ed92da66bf6

root@ip-172-31-68-153:~# kubectl cluster-info
Kubernetes master is running at https://url.onap.info:8880/r/projects/1a7/kubernetes:6443
Heapster is running at https://url.onap.info:8880/r/projects/1a7/kubernetes:6443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://url.onap.info:8880/r/projects/1a7/kubernetes:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy
(4 more)
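The spot flow above can be scripted end-to-end: request, poll until the request is fulfilled, then associate the Elastic IP. A sketch, with the describe-spot-instance-requests response canned (illustrative JSON fragment) so the InstanceId extraction is concrete; the real job would call the AWS CLI in a polling loop:

```shell
#!/bin/sh
# Canned fragment of an `aws ec2 describe-spot-instance-requests` response;
# the real job would poll this call until State becomes "active".
RESPONSE='{"SpotInstanceRequests":[{"State":"active","InstanceId":"i-048637ed92da66bf6"}]}'
# Crude sed extraction (jq would be cleaner if available on the CD host).
STATE=$(echo "$RESPONSE" | sed -n 's/.*"State":"\([^"]*\)".*/\1/p')
INSTANCE_ID=$(echo "$RESPONSE" | sed -n 's/.*"InstanceId":"\([^"]*\)".*/\1/p')
echo "state=$STATE instance=$INSTANCE_ID"
# Once active, the real flow continues (disabled here):
#   aws ec2 associate-address --instance-id "$INSTANCE_ID" --allocation-id eipalloc-375c1d02
#   aws ec2 reboot-instances --instance-ids "$INSTANCE_ID"
```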
Start with health REST calls
ONAP 1.0 (no vFW yet) out of the box - gets the return data:
http://{{sdc_ip}}:8080/sdc2/rest/v1/user/users
http://{{sdc_ip}}:8080/onboarding-api/v1.0/vendor-license-models
http://{{sdc_ip}}:8080/onboarding-api/v1.0/vendor-software-products
http://{{sdc_ip}}:8080/sdc2/rest/v1/catalog/resources/latestversion/notabstract/uidonly?internalComponentType=SERVICE
http://{{mso_ip}}:8080/asdc/properties/encrypt/ecomp-dev/aa3871669d893c7fb8abbcda31b88b4f
http://{{collector_ip}}:3904/events/unauthenticated.TCA_EVENT_OUTPUT/monitor/1?timeout=10000
http://{{cdap_ip}}:10000/v3/namespaces/TCA
https://{{aai_ip}}:8443/aai/v8/service-design-and-creation/models
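A minimal sketch of probing these endpoints: the {{...}} placeholders are substituted per environment (the IP values below are hypothetical), and the actual curl probe is commented out so the URL expansion is shown standalone:

```shell
#!/bin/sh
# Expand the {{sdc_ip}}/{{mso_ip}} templates from the endpoint list above into
# concrete URLs; the CD job would then curl each for its HTTP status code.
SDC_IP=10.0.0.1    # hypothetical example values; set per lab
MSO_IP=10.0.0.2
expand() {
  echo "$1" | sed -e "s/{{sdc_ip}}/$SDC_IP/" -e "s/{{mso_ip}}/$MSO_IP/"
}
for tmpl in \
  'http://{{sdc_ip}}:8080/sdc2/rest/v1/user/users' \
  'http://{{sdc_ip}}:8080/onboarding-api/v1.0/vendor-software-products' \
  'http://{{mso_ip}}:8080/asdc/properties/encrypt/ecomp-dev/aa3871669d893c7fb8abbcda31b88b4f'
do
  url=$(expand "$tmpl")
  echo "$url"
  # Real probe (disabled here): curl -s -o /dev/null -w '%{http_code}\n' "$url"
done
```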
start with healthcheck
root@vm1-robot:~# docker ps
CONTAINER ID   IMAGE                                                          COMMAND                  CREATED         STATUS         PORTS                NAMES
36140a0c9b44   nexus3.onap.org:10001/openecomp/testsuite:1.0-STAGING-latest   "lighttpd -D -f /e..."   2 minutes ago   Up 2 minutes   0.0.0.0:88->88/tcp   openecompete_container
root@vm1-robot:~# cd /opt
root@vm1-robot:/opt# ls
config  demo.sh  docker  ete.sh  eteshare  robot_vm_init.sh  testsuite
root@vm1-robot:/opt# docker exec -it openecompete_container bash
root@36140a0c9b44:/# ls /var/opt/OpenECOMP_ETE/
Dockerfile  LICENSE.TXT  README.md  demo  demo.sh  dnstraffic.sh  docker  html  lighttpd.conf  red.xml  robot  robot_vm_init.sh  runTags.sh  setup.sh  testsuite  version.properties

Then the cloud-region PUT before Robot init:

curl -X PUT https://127.0.0.1:30233/aai/v11/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/RegionOne \
  --data "@aai-cloud-region-put.json" \
  -H "authorization: Basic TW9kZWxMb2FkZXI6TW9kZWxMb2FkZXI=" \
  -H "X-TransactionId:jimmy-postman" \
  -H "X-FromAppId:AAI" \
  -H "Content-Type:application/json" \
  -H "Accept:application/json" \
  --cacert aaiapisimpledemoopenecomporg_20171003.crt -k

aai-cloud-region-put.json:
{
  "cloud-owner": "CloudOwner",
  "cloud-region-id": "RegionOne",
  "cloud-region-version": "v2",
  "cloud-type": "SharedNode",
  "cloud-zone": "CloudZone",
  "owner-defined-type": "OwnerType",
  "tenants": {
    "tenant": [{
      "tenant-id": "{{tenant_id}}",
      "tenant-name": "ecomp-dev"
    }]
  }
}

Verify with a GET:

curl -X GET https://127.0.0.1:30233/aai/v11/cloud-infrastructure/cloud-regions/ \
  -H "authorization: Basic TW9kZWxMb2FkZXI6TW9kZWxMb2FkZXI=" \
  -H "X-TransactionId:jimmy-postman" \
  -H "X-FromAppId:AAI" \
  -H "Content-Type:application/json" \
  -H "Accept:application/json" \
  --cacert aaiapisimpledemoopenecomporg_20171003.crt -k
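For the automated job, the aai-cloud-region-put.json payload referenced above can be generated inline before the PUT. A sketch, assuming a TENANT_ID variable (the value below is a hypothetical stand-in; the source keeps it templated as {{tenant_id}}):

```shell
#!/bin/sh
# Generate the aai-cloud-region-put.json payload used by the curl PUT above.
TENANT_ID=example-tenant-0001   # hypothetical; substitute the lab's real tenant ID
cat > aai-cloud-region-put.json <<EOF
{
  "cloud-owner": "CloudOwner",
  "cloud-region-id": "RegionOne",
  "cloud-region-version": "v2",
  "cloud-type": "SharedNode",
  "cloud-zone": "CloudZone",
  "owner-defined-type": "OwnerType",
  "tenants": {
    "tenant": [{
      "tenant-id": "$TENANT_ID",
      "tenant-name": "ecomp-dev"
    }]
  }
}
EOF
echo "wrote aai-cloud-region-put.json"
```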
- is blocked by
-
LOG-320 CD: OOM deployment templates for public cloud (Arm, CloudFormation, Heat)
- Closed
-
OOM-328 Preload docker images to allow 7 min startup
- Closed
-
AAI-614 AAI: Refactor CHEF out or enable $TAG passed into chef branch git pull in traversal/resources to enable CI/CD
- Closed
-
LOG-268 F2F 201712 conf only: CD Jenkins
- Closed
- is duplicated by
-
INT-119 Deployment Status: vFirewall closed loop testing
- Closed
-
CLAMP-671 ONAP on AWS
- Closed
- relates to
-
INT-106 Stabilizing the ONAP master branch
- Closed
-
INT-284 Proper Release Management for Deployments
- Closed
-
LOG-332 Deployment CI/CD Integration Jenkins job using OOM-K8S - to validate branch
- Closed
-
INT-118 Deployment Status: branch health testing scenarios
- Closed
-
INT-119 Deployment Status: vFirewall closed loop testing
- Closed
-
OPENLABS-4 Set up environment
- Closed
-
LOG-300 CD: OOM framework for continuous E2E deploy validation of tagged commit/merge trigger docker snapshots
- Closed
-
INT-99 Setup a integration environment for testing VNFs
- Closed
-
INT-332 Rework ROBOT healthcheck reporting logs for ELK friendly CD frameworks
- Closed
-
LOG-266 F2F: ONAP CI/CD using OOM Kubernetes
- Closed
-
INT-300 CI-CD For ONAP master branch
- Closed
-
LOG-79 Cloud Native Logging for OOM undercloud events (scale/restart)
- Closed
-
LOG-96 ELK stack for KPI or Feature Coverage stats - audit.log
- Closed
-
LOG-341 Run CD jenkins/kibana from our own Kubernetes deployment - not off docker compose
- Closed
-
OOM-375 F2F: ONAP/OOM for Developers
- Closed
-
OOM-422 onap-config pod creation 12 min instead of 2 when dockerdata-nfs content was not deleted previously
- Closed