Story
Resolution: Unresolved
Priority: High
Ensure that the Helm charts created can be used to deploy the Kafka broker and its associated Zookeeper multiple times, sometimes in the same K8s cluster but in different namespaces.
Use operators to create topics and users dynamically.
Even if an operator is used to deploy Kafka and Zookeeper, Helm charts are still expected to be created, even when CRDs are used.
Parameterization should be flexible enough to deploy Kafka multiple times.
The following activities are identified as part of this:
- PoC Setup
- Using https://github.com/strimzi/strimzi-kafka-operator, bring up the operator (using the Helm charts in https://github.com/strimzi/strimzi-kafka-operator/tree/master/helm-charts)
- Using the example YAML files, bring up the Kafka broker and Zookeeper.
- Using the topic operator and user operator, create a few topics and users who can access those topics.
- Using the producer and consumer examples (https://strimzi.io/docs/master/#deploying-example-clients-str), ensure that everything is okay.
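The PoC resources above can be expressed as Strimzi custom resources. A minimal sketch, assuming a cluster named `my-cluster`; the `apiVersion` and field layout should be checked against the Strimzi release in use:

```yaml
# Minimal Kafka/Zookeeper cluster managed by the Strimzi operator.
# Names ("my-cluster", "my-topic", "my-user") are illustrative.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 1
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral
  zookeeper:
    replicas: 1
    storage:
      type: ephemeral
  entityOperator:          # enables the topic and user operators
    topicOperator: {}
    userOperator: {}
---
# Topic created declaratively by the topic operator.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 1
---
# User created declaratively by the user operator.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
```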
- Convert to Helm charts
- Instead of using the YAML files in the examples, use Helm charts: convert the example YAML files to Helm charts.
- Ensure that parameterization is handled well.
- Test the producer and consumer examples again.
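One way to parameterize the converted chart is to drive the cluster spec from values. A sketch, assuming hypothetical values keys such as `clusterName` and `kafka.replicas` (the actual key names are for the chart author to choose):

```yaml
# values.yaml (hypothetical keys)
clusterName: my-cluster
kafka:
  replicas: 3
  storageType: ephemeral
zookeeper:
  replicas: 3

# templates/kafka.yaml would then reference these values, e.g.:
#   metadata:
#     name: {{ .Values.clusterName }}
#   spec:
#     kafka:
#       replicas: {{ .Values.kafka.replicas }}
```

Because all sizing lives in values, each `helm install` of the chart can supply its own values file, which is what makes repeated deployments per namespace possible.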
- Deploy it using the K8S plugin of ONAP.
- Ensure that multiple instances can be created in various cloud regions.
- Ensure that multiple instances can be created in different namespaces of a cloud region.
- Ensure that multiple instances work using producer and consumer clients.
- Also, ensure that things work even if producers and consumers are in a namespace different from the Kafka broker's namespace.
- Also, ensure that things work even if the producer and consumer are in different namespaces from each other.
- Create various configuration profiles for various scenarios. Scenarios include sites that can only run without replicas, sites with various replica counts, sites with Kafka Connect, etc.
- Test the above using SDC and SO once the K8S plugin is integrated with SDC and SO.
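Cross-namespace access works because the Kafka bootstrap Service has a cluster-wide DNS name. A sketch of a consumer Deployment in a namespace `apps` talking to a cluster `my-cluster` deployed in namespace `kafka` (image tag and all names are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-consumer
  namespace: apps            # different namespace from the broker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-consumer
  template:
    metadata:
      labels:
        app: example-consumer
    spec:
      containers:
        - name: consumer
          image: quay.io/strimzi/kafka:latest   # illustrative tag
          command: ["bin/kafka-console-consumer.sh"]
          args:
            - --bootstrap-server
            # Fully qualified Service name: <cluster>-kafka-bootstrap.<namespace>.svc
            - my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9092
            - --topic
            - my-topic
            - --from-beginning
```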
- Avoid multiple Zookeeper daemons
- Since HDFS/HBase also use Zookeeper, ensure that only one Zookeeper ensemble is present.
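Note that the Strimzi operator manages its own Zookeeper ensemble per Kafka cluster, so sharing one ensemble with HDFS/HBase would only apply to a hand-rolled chart. A sketch of a hypothetical values toggle for such a chart:

```yaml
# Hypothetical values for a hand-rolled chart (not Strimzi):
zookeeper:
  enabled: false                                   # do not deploy a new ensemble
  externalConnect: zk.data.svc.cluster.local:2181  # reuse the HDFS/HBase ensemble
```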
- Integrate with Prometheus metrics collection and HDFS/HBase/OpenTSDB/Spark (all in a single set of Helm charts).
- Test with a sample application to ensure that all components work together.
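Prometheus scraping can be enabled on the Kafka custom resource via the JMX Prometheus exporter. A sketch, assuming a ConfigMap named `kafka-metrics` holds the exporter rules (field names per the Strimzi `v1beta2` API; worth verifying against the release in use):

```yaml
# Fragment of the Kafka CR enabling Prometheus metrics.
spec:
  kafka:
    metricsConfig:
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          name: kafka-metrics            # assumed ConfigMap with exporter rules
          key: kafka-metrics-config.yml
```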
1. Create Day-2 config for Kafka | Open | ca2853
2. Create Day-0 config profile for Kafka | Open | ca2853