#opnfv-testperf: Testperf Weekly Meeting 20180726

Meeting started by gabriel_yuyang at 15:03:23 UTC (full logs).

Meeting summary

  1. Roll Call (gabriel_yuyang, 15:03:31)
    1. Mark Beierl (mbeierl, 15:03:33)
    2. Gabriel Yu (gabriel_yuyang, 15:03:38)

  2. Capacity analysis of K8S (gabriel_yuyang, 15:03:51)
    1. https://github.com/kubernetes-incubator/cluster-capacity/blob/master/doc/cluster-capacity.md (gabriel_yuyang, 15:04:06)
    2. Rex (mj_rex, 15:05:31)
    3. I see this as useful for predicting capacity, but think there is more benefit in then exercising the real environment and showing how reality matches the prediction. (mbeierl, 15:19:13)
    4. Perhaps we can then dive into the delta and provide explanations or suggestions for tuning, or show "bottlenecks" :) (mbeierl, 15:20:04)
    5. the data path for storage testing is not that different between K8s and OpenStack; how the hypervisor behaves in the two scenarios is quite different (gabriel_yuyang, 15:24:40)
    6. the mechanisms are very different. Technically, there is no hypervisor in container storage testing (gabriel_yuyang, 15:26:21)
    7. difference between container data path and VM data path is that the VM runs a full OS, which thinks it is talking to a BIOS. This level of emulation is mitigated with paravirtual drivers, but there is still overhead associated with the VM. A container runs directly on the host OS, and has access to physical devices natively if desired. (mbeierl, 15:27:39)
    8. the different storage options (AUFS, GlusterFS, etc) plus the ability to have shared volumes in containers, also have their own impacts (mbeierl, 15:30:11)
    9. StorPerf could call Yardstick to get the K8s context and then do the storage testing (gabriel_yuyang, 15:30:18)
    10. StorPerf and Yardstick together would make a quicker path to K8s testing than doing them standalone. (mbeierl, 15:30:37)
    11. the idea would be to use Yardstick for the context, and simply pass the IP addresses and maybe SSH keys or logins for the containers to StorPerf, and then call StorPerf to execute (mbeierl, 15:31:20)
    12. another option would be to investigate creating a FIO image that can run as a service, and when it starts up have it call back to StorPerf to receive its workloads (mbeierl, 15:32:04)
    13. that way we do not have to have any SSH or auth in the containers (mbeierl, 15:32:21)
    14. Trevor asks about having StarlingX look at OPNFV testing, specifically performance testing. He mentioned different perf test cases, which measure specific scenarios (e.g., failure recovery time) (mbeierl, 15:33:32)
    15. If that is the kind of performance testing that is interesting, where in OPNFV should these test cases live? (mbeierl, 15:34:11)
    16. Yardstick already has some tests such as live migration (mbeierl, 15:36:28)
    17. Yardstick also has HA test cases which get close to this level of testing (mbeierl, 15:39:33)
    18. Yardstick does have steps in a test case, and those steps can be timestamped (mbeierl, 15:40:26)
    19. Create a test case with multiple steps: 1) create the heat stack with VMs, 2) start a ping between them, 3) kill one VM, 4) wait for OpenStack to notice and respawn the VM, 5) wait for the VM to be created, 6) wait for the ping between VMs to start back up. Then we can report on things like the time between steps 3 and 4 and say how long it took OpenStack to recover (mbeierl, 15:43:25)
    20. this can be applied to different recovery or failure situations (mbeierl, 15:44:05)
    21. https://scapy.net/ (mbeierl, 15:44:45)
    22. Alec: Scapy is a packet crafting tool, and TRex could use the packets it generates (gabriel_yuyang, 15:45:56)
    23. https://ostinato.org/ (mbeierl, 15:46:01)
    24. https://iperf.fr/ (mbeierl, 15:46:22)
    25. https://github.com/HewlettPackard/netperf (mbeierl, 15:46:52)
    26. https://wiki.openstack.org/wiki/StarlingX (mbeierl, 15:48:05)
    27. https://wiki.onap.org/pages/viewpage.action?pageId=3247218 ONAP VNF Validation Program (mbeierl, 15:55:12)
    28. For VNFs (vIMS as an example), OPNFV Functest has done some work around the vIMS case, but can we do more around the call flow and scenario, moving beyond a simple "pass control traffic" check to verify E2E control traffic? Should we expand to the data plane? (mbeierl, 15:56:30)
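
  The timestamped-step idea discussed above (kill a VM, wait for OpenStack to
  respawn it, wait for the ping to resume, then report the deltas) could be
  sketched roughly as below. This is a minimal illustration only: the
  StepTimer helper and the step names are hypothetical, not actual Yardstick
  API, and the sleeps stand in for the real OpenStack events.

```python
import time

class StepTimer:
    """Record a wall-clock timestamp for each named step of a test case."""
    def __init__(self):
        self.steps = []  # list of (name, timestamp) in execution order

    def mark(self, name):
        self.steps.append((name, time.time()))

    def delta(self, start, end):
        """Elapsed seconds between two named steps."""
        times = dict(self.steps)
        return times[end] - times[start]

# Simulated run of the recovery scenario from the discussion above.
t = StepTimer()
t.mark("vm_killed")        # step 3: kill one VM
time.sleep(0.05)           # stand-in for OpenStack detection time
t.mark("respawn_started")  # step 4: OpenStack notices and respawns the VM
time.sleep(0.05)           # stand-in for boot + network recovery
t.mark("ping_restored")    # step 6: ping between VMs resumes

print(f"detection: {t.delta('vm_killed', 'respawn_started'):.2f}s")
print(f"total recovery: {t.delta('vm_killed', 'ping_restored'):.2f}s")
```

  The same timer could wrap any of the other recovery or failure situations
  mentioned, with the reported delta being the recovery-time metric.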
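
  The FIO-as-a-service callback idea from items 12-13 (an FIO container that
  phones home on startup to fetch its workload, so no SSH or auth is needed
  inside the container) might look roughly like this self-contained sketch.
  The /workload endpoint and the JSON fields shown are assumptions for
  illustration, not the actual StorPerf REST API; the stub server here plays
  the StorPerf role in-process so the sketch runs on its own.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Stub for the StorPerf side: serve a workload definition to any agent
# that calls back on startup (endpoint and fields are hypothetical).
class WorkloadHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"rw": "randread", "bs": "4k", "iodepth": 8}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), WorkloadHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Agent side: on container startup, fetch the workload over HTTP instead
# of requiring SSH or credentials inside the container.
url = f"http://127.0.0.1:{server.server_port}/workload"
workload = json.load(urlopen(url))
print(workload["rw"], workload["bs"])  # would be handed to fio as job options
server.shutdown()
```

  A real agent would translate the received JSON into an fio job and report
  results back, but the callback direction is the point: the container only
  needs to know where StorPerf lives, not the other way around.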

Meeting ended at 16:01:29 UTC (full logs).

Action items

  1. (none)

People present (lines said)

  1. mbeierl (23)
  2. gabriel_yuyang (11)
  3. collabot (3)
  4. mj_rex (1)

Generated by MeetBot 0.1.4.