  ONAP Operations Manager / OOM-591

AAI needs persistent volumes configured, need help with OS in lab


    • Type: Task
    • Resolution: Done
    • Priority: Highest
    • Fix Versions: Casablanca Release, Amsterdam Release
    • Sprints: OOM Sprint 8, OOM Sprint 9 - Beijing freeze

      Hi OOM team,

      Problem context

      In A&AI we need to set up a Cassandra cluster, and we want to provide remote storage for each Cassandra instance. For testing purposes we are doing this in the Wind River lab. The lab is based on OpenStack and provides remote storage via the Cinder service.

      Problem

      We are unable to use Cinder storage with OOM/K8s. Rancher 1.6.x does not support the Cinder driver. We have tried to set up our own K8s cluster (with kubeadm) using OpenStack as the cloud provider, but were unable to get it working: when we enable the OpenStack cloud provider in the kubelet, the networking between pods stops working. We tried k8s 1.8.4, 1.8.5, and 1.9.2, Kubernetes CNI 0.6.0 as well as 0.5.1, and Docker 17.12, with both Flannel and kube-router as networking plugins.

      Step-by-step: how we set up k8s

      (as root)

      kubeadm init --pod-network-cidr=192.168.0.0/16
      remember the kubeadm join --token ... line

      (as user)
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
      kubectl taint node oom-aai-openstack-1 node-role.kubernetes.io/master:NoSchedule-
      kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter-all-features.yaml

      sudo vim /etc/kubernetes/cloud.conf

      add the following (don't copy & paste the '====================' lines):

      ====================
      [Global]
      auth-url=http://10.12.25.2:5000/v3
      username=OOM
      password=OOM
      domain-id=default
      region=RegionOne
      tenant-name=A & AI
      tenant-id=24f8eea7f8a146db9e6fa57aee8c3a1c

      [LoadBalancer]
      subnet-id=7d2dd91b-08ac-463f-a292-91a1e1dd76d4
      ====================
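
      Once the cloud provider is enabled, dynamic Cinder provisioning would be requested through a StorageClass. A sketch of what that could look like is below; the in-tree provisioner name kubernetes.io/cinder is the standard one for these k8s versions, but the class name and the availability parameter value are assumptions for illustration:

      ====================
      # Hypothetical StorageClass for dynamic Cinder provisioning.
      # "cinder-standard" and "availability: nova" are illustrative values.
      kind: StorageClass
      apiVersion: storage.k8s.io/v1
      metadata:
        name: cinder-standard
      provisioner: kubernetes.io/cinder
      parameters:
        availability: nova
      ====================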

      sudo vim /etc/kubernetes/manifests/kube-controller-manager.yaml
      sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml

      Edit them like this (the first entry under command is kube-controller-manager or kube-apiserver, respectively):

      [...]
      spec:
        containers:
        - command:
          - kube-controller-manager   # or kube-apiserver
          - --cloud-provider=openstack
          - --cloud-config=/etc/kubernetes/cloud.conf

      [...]
          volumeMounts:
          - mountPath: /etc/kubernetes/cloud.conf
            name: cloud-config
            readOnly: true

      [...]
        volumes:
        - hostPath:
            path: /etc/kubernetes/cloud.conf
            type: FileOrCreate
          name: cloud-config
      [...]

      (on all machines)
      sudo vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

      add "--cloud-provider=openstack --cloud-config=/etc/kubernetes/cloud.conf" to KUBELET_KUBECONFIG_ARGS

      sudo systemctl daemon-reload
      sudo systemctl restart kubelet
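
      The 10-kubeadm.conf edit above can also be done non-interactively, which helps when applying it on all machines. A sketch, demonstrated on a temporary copy of the file (the Environment line below is the stock kubeadm default and may differ on your nodes):

      ```shell
      # Sketch: append the cloud-provider flags inside the quoted
      # KUBELET_KUBECONFIG_ARGS value, on a temp copy of the drop-in file.
      conf=$(mktemp)
      cat > "$conf" <<'EOF'
      [Service]
      Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
      EOF
      # Insert the flags just before the closing quote of the value.
      sed -i 's|\(KUBELET_KUBECONFIG_ARGS=[^"]*\)"|\1 --cloud-provider=openstack --cloud-config=/etc/kubernetes/cloud.conf"|' "$conf"
      grep KUBELET_KUBECONFIG_ARGS "$conf"
      ```

      On the real nodes you would run the sed against /etc/systemd/system/kubelet.service.d/10-kubeadm.conf instead of the temp file, then daemon-reload and restart the kubelet as above.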

      Result

      After editing 10-kubeadm.conf and restarting the kubelet, hostname resolution and pod networking stop working properly. We have been unable to find the cause.

      Final remarks

      For now we run the Cassandra cluster with local storage. We would like to have an OOM/K8s installation in the Wind River lab that can dynamically allocate storage from Cinder. I personally think that getting Cinder remote storage working with OOM/K8s in the lab would also benefit other ONAP teams.
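
      For reference, once dynamic provisioning works, each Cassandra instance could request a Cinder-backed volume with a claim like the sketch below. The claim name, the StorageClass name "cinder-standard", and the 10Gi size are all illustrative assumptions:

      ====================
      # Hypothetical claim for one Cassandra instance's Cinder-backed volume.
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: cassandra-data-0
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: cinder-standard   # assumed Cinder StorageClass name
        resources:
          requests:
            storage: 10Gi
      ====================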

       

            Assignee: michaelobrien
            Reporter: jimmydot