15:03:23 <gabriel_yuyang> #startmeeting Testperf Weekly Meeting 20180726
15:03:23 <collabot> Meeting started Thu Jul 26 15:03:23 2018 UTC. The chair is gabriel_yuyang. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:03:23 <collabot> Useful Commands: #action #agreed #help #info #idea #link #topic.
15:03:23 <collabot> The meeting name has been set to 'testperf_weekly_meeting_20180726'
15:03:31 <gabriel_yuyang> #topic Roll Call
15:03:33 <mbeierl> #info Mark Beierl
15:03:38 <gabriel_yuyang> #info Gabriel Yu
15:03:51 <gabriel_yuyang> #topic Capacity analysis of K8S
15:04:06 <gabriel_yuyang> #link https://github.com/kubernetes-incubator/cluster-capacity/blob/master/doc/cluster-capacity.md
15:05:31 <mj_rex> #info Rex
15:19:13 <mbeierl> #info I see this as useful for a prediction of capacity, but think there is more benefit in then exercising the real environment and showing how reality matches the prediction.
15:20:04 <mbeierl> #info Perhaps we can then dive into the delta, and provide explanations or suggestions for tuning, or show "bottlenecks" :)
15:24:40 <gabriel_yuyang> #info The data path of storage testing is not that different between K8s and OpenStack. How the hypervisor behaves in the two scenarios is quite different.
15:26:21 <gabriel_yuyang> #info The mechanisms are very different. Technically, there is no hypervisor in container storage testing.
15:27:39 <mbeierl> #info The difference between the container data path and the VM data path is that the VM runs a full OS, which thinks it is talking to a BIOS. This level of emulation is mitigated with paravirtual drivers, but there is still overhead associated with the VM. A container runs directly on the host OS, and has access to physical devices natively if desired.
15:30:11 <mbeierl> #info The different storage options (AUFS, GlusterFS, etc.), plus the ability to have shared volumes in containers, also have their own impacts.
15:30:18 <gabriel_yuyang> #info StorPerf could call Yardstick to get the context of K8s and then do the storage testing.
15:30:37 <mbeierl> #info StorPerf and Yardstick together would make a quicker path to K8s testing than doing them standalone.
15:31:20 <mbeierl> #info The idea would be to use Yardstick for the context, and simply pass the IP addresses and maybe SSH keys, or a login for the containers, to StorPerf, and then call StorPerf to execute.
15:32:04 <mbeierl> #info Another option would be to investigate creating a FIO image that can run as a service, and when it starts up have it call back to StorPerf to receive its workloads.
15:32:21 <mbeierl> #info That way we do not have to have any SSH or auth in the containers.
15:33:32 <mbeierl> #info Trevor asks about having StarlingX look at OPNFV testing, specifically performance testing. Mentioned different perf test cases, which measure specific scenarios (i.e. failure recovery time).
15:34:11 <mbeierl> #info If that is the kind of performance testing that is interesting, where in OPNFV should these test cases live?
15:36:28 <mbeierl> #info Yardstick already has some tests such as live migration.
15:39:33 <mbeierl> #info Yardstick also has HA test cases which get close to this level of testing.
15:40:26 <mbeierl> #info Yardstick does have steps in a test case, and those steps can be timestamped.
15:43:25 <mbeierl> #info Create a test case with multiple steps: 1) create the heat stack with VMs, 2) start a ping between them, 3) kill one VM, 4) wait for OpenStack to notice and respawn the VM, 5) wait for the VM to be created, 6) wait for the ping between the VMs to start back up. Then we can report on things like the time between steps 3 and 4 and say how long it took OpenStack to recover.
15:44:05 <mbeierl> #info This can be applied to different recovery or failure situations.
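[Editor's note: a minimal sketch of the timestamped-steps idea above. The step names and the no-op actions are hypothetical placeholders, not Yardstick code; it only illustrates recording a timestamp per step and reporting the delta between step 3 (kill VM) and step 4 (respawn noticed).]

```python
# Sketch only: timestamp each step of a scenario and report the recovery delta.
# Step names and the no-op actions are illustrative, not part of Yardstick.
import time


def run_scenario(steps):
    """Run each (name, callable) step and record its completion timestamp."""
    timestamps = {}
    for name, action in steps:
        action()
        timestamps[name] = time.time()
    return timestamps


def report_recovery(timestamps):
    """Report how long it took from killing the VM to OpenStack reacting (step 3 -> 4)."""
    recovery = timestamps["respawn_noticed"] - timestamps["vm_killed"]
    print(f"OpenStack recovery time: {recovery:.1f} s")


if __name__ == "__main__":
    noop = lambda: None  # placeholder for the real heat/ping/kill/wait actions
    stamps = run_scenario([
        ("stack_created", noop),
        ("ping_started", noop),
        ("vm_killed", noop),
        ("respawn_noticed", noop),
        ("vm_recreated", noop),
        ("ping_restored", noop),
    ])
    report_recovery(stamps)
```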
15:44:45 <mbeierl> #link https://scapy.net/
15:45:56 <gabriel_yuyang> #info Alec: Scapy is a packet crafting/formatting tool, and TRex could utilize these packets.
15:46:01 <mbeierl> #link https://ostinato.org/
15:46:22 <mbeierl> #link https://iperf.fr/
15:46:52 <mbeierl> #link https://github.com/HewlettPackard/netperf
15:48:05 <mbeierl> #link https://wiki.openstack.org/wiki/StarlingX
15:55:12 <mbeierl> #link https://wiki.onap.org/pages/viewpage.action?pageId=3247218 ONAP VNF Validation Program
15:56:30 <mbeierl> #info For VNFs (vIMS as an example), OPNFV Functest has done some work around the vIMS case, but can we do more around the call flow and scenario, moving beyond a simple "pass control traffic" check to verify E2E control traffic? Should we expand to the data plane?
16:00:44 <gabriel_yuyang> mbeierl: it seems the VVP program aims at validating the interoperability between VNFs and ONAP
16:01:29 <gabriel_yuyang> #endmeeting
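[Editor's note: a minimal sketch of the Scapy point above (15:45:56): craft a packet with Scapy and write it to a pcap that a traffic generator such as TRex can replay. Addresses, ports, and the output filename are illustrative only.]

```python
# Sketch only: build one UDP packet with Scapy and save it to a pcap file
# that a traffic generator (e.g. TRex replaying a pcap) could reuse.
from scapy.all import Ether, IP, UDP, Raw, wrpcap

pkt = (
    Ether()
    / IP(src="10.0.0.1", dst="10.0.0.2")  # illustrative addresses
    / UDP(sport=1234, dport=5001)         # illustrative ports
    / Raw(load=b"x" * 64)                 # 64-byte dummy payload
)

wrpcap("udp_stream.pcap", [pkt])          # pcap for the traffic generator
print(pkt.summary())
```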