Ensure that the Helm charts are parameterized so that the service can be instantiated multiple times.
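As an illustration, a `values.yaml` for such a chart might expose fields like the following. These keys are hypothetical, meant only to show the kind of knobs (sizing, placement, images) that need to vary per deployment; they are not taken from an existing chart:

```yaml
# Hypothetical values.yaml fragment for a parameterized M3DB chart.
namespace: analytics            # allows multiple instances in one cluster
m3db:
  replicationFactor: 3
  numberOfShards: 256
  storage:
    size: 100Gi
    storageClassName: fast-ssd  # differs per cloud-region
etcd:
  replicas: 3
image:
  repository: quay.io/m3db/m3dbnode
  tag: latest
```

Each deployment (same cluster, different namespace, or a different cloud-region) would then supply its own values file to `helm install`.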
Prove, with a sample application, that data can be written to M3DB and read back from it.
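A minimal smoke test along these lines could go through the m3coordinator JSON endpoints. This is a sketch, assuming the coordinator is reachable at `http://localhost:7201`; the metric name, tags, and timestamp format are illustrative and should be checked against the M3 coordinator API documentation:

```python
import json
import time
import urllib.request

COORDINATOR = "http://localhost:7201"  # assumed m3coordinator address


def build_write_payload(metric, tags, value, timestamp=None):
    """Build the JSON body for the coordinator's /api/v1/json/write endpoint.

    Timestamp is sent as a Unix-seconds string (illustrative choice).
    """
    return {
        "tags": {"__name__": metric, **tags},
        "timestamp": str(int(timestamp if timestamp is not None else time.time())),
        "value": float(value),
    }


def write_sample(payload):
    """POST one datapoint to M3DB through the coordinator."""
    req = urllib.request.Request(
        COORDINATOR + "/api/v1/json/write",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


def read_back(metric):
    """Read the datapoint back with an instant PromQL query."""
    url = COORDINATOR + "/api/v1/query?query=" + metric
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)


# Example usage (requires a running coordinator):
#   write_sample(build_write_payload("poc_test_metric", {"city": "seattle"}, 42.0))
#   read_back("poc_test_metric")
```

The same write/read pair can later be wrapped in the test application's own Helm chart, as noted in the activity list below.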
This is the breakdown of activities I see:
- Set up a POC to understand how Spark and M3DB work together.
- Use the M3DB operator to instantiate the M3DB cluster. Wherever operators are available, use them, but ensure that the operators are installed via Helm charts and that any CRDs and configuration information are also represented as Helm charts.
- Explore the use of the etcd operator to bring up etcd for M3DB configuration management.
- Ensure that these components work correctly by running some tests against M3DB.
- Now identify the items to be parameterized so that the same Helm charts can be used for multiple deployments: some in the same cluster in different namespaces, and some across clusters in various cloud-regions.
- Update the Helm charts accordingly.
- Ensure that all deployments work correctly by deploying a test application to each of them (this test would require its own Helm chart).
- Test the above using the K8S plugin API of ONAP.
- Test the above using the SDC and SO components of ONAP.
- Integrate these Helm charts with the rest of the analytics framework.
- Integrate with the Prometheus Helm charts.
- Integrate with the Kafka broker Helm charts.
- Create a sample analytics application that leverages data in M3DB and generates alerts over the Kafka bus.
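The alerting core of the last item could be sketched as below. This assumes datapoints have already been fetched from M3DB (e.g. via the coordinator query API) and that alerts are published to a hypothetical `analytics.alerts` topic; the Kafka publishing side is shown only in comments, using the kafka-python client as an assumed dependency:

```python
ALERT_TOPIC = "analytics.alerts"  # hypothetical topic name


def detect_alerts(datapoints, threshold):
    """Return an alert record for every datapoint whose value exceeds threshold.

    datapoints: iterable of (timestamp, value) pairs, e.g. a series fetched
    from M3DB through the m3coordinator query API.
    """
    alerts = []
    for ts, value in datapoints:
        if value > threshold:
            alerts.append({
                "timestamp": ts,
                "value": value,
                "threshold": threshold,
                "severity": "WARNING",
            })
    return alerts


if __name__ == "__main__":
    # Publishing side (assumed dependency: kafka-python):
    #   import json
    #   from kafka import KafkaProducer
    #   producer = KafkaProducer(
    #       bootstrap_servers="kafka:9092",  # assumed broker address
    #       value_serializer=lambda a: json.dumps(a).encode(),
    #   )
    #   for alert in detect_alerts(points, threshold=90.0):
    #       producer.send(ALERT_TOPIC, alert)
    print(detect_alerts([(1, 50.0), (2, 95.0)], threshold=90.0))
```

Keeping detection as a pure function separates the analytics logic from the Kafka transport, so the same code can be exercised in the test deployments described above without a live broker.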