14:59:24 #startmeeting OPNFV Testing Working Group
14:59:24 Meeting started Thu May 4 14:59:24 2017 UTC. The chair is trevor_intel. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:59:24 Useful Commands: #action #agreed #help #info #idea #link #topic.
14:59:24 The meeting name has been set to 'opnfv_testing_working_group'
14:59:37 #info Trevor Cooper
15:01:59 GTM in invite does not work
15:02:16 Try this https://global.gotomeeting.com/join/819733085
15:04:14 #info Yujun Zhang
15:04:17 #info Helen Yao
15:04:37 #info kubi
15:04:51 Sign in is not working for me
15:05:28 #info Jack
15:05:37 I signed in
15:05:48 trevor_intel: I started the GTM
15:06:54 https://wiki.opnfv.org/display/meetings/Go-To-Meeting+info
15:08:28 #info Current GTM in invite is not valid
15:10:22 #info This meeting time changed relative to Pacific, so there is some confusion as to which meeting account to use
15:12:55 #info We need a recommendation from the TSC for all meetings to follow (Pacific or UTC) ... no way to know who is running over on calls
15:13:38 #info We should also have a protocol to announce before starting a meeting in the #opnfv-meeting channel
15:17:43 #action Trevor to ask Ray for action by LF or TSC ... Mark to raise the issue too
15:18:51 #link https://wiki.opnfv.org/display/meetings/TestPerf
15:20:30 #link http://ircbot.wl.linuxfoundation.org/meetings/opnfv-testperf/2017/
15:21:21 #info Mark Beierl
15:22:07 #info Last minutes recorded were from 2017-04-06
15:22:44 #link http://ircbot.wl.linuxfoundation.org/meetings/opnfv-testperf/2017/opnfv-testperf.2017-04-06-14.59.html
15:23:20 #info Action for mbeierl and kubi001 to discuss Plugfest remote StorPerf activities closed with no action
15:23:55 trevor_intel: can you #chair me?
15:25:09 #topic Plugfest follow up
15:25:29 #info Not much to report, but attendees can send out emails to the wg
15:25:44 #topic Bitergia
15:25:49 #link https://wiki.opnfv.org/display/testing/Result+alignment+for+ELK+post-processing
15:26:10 #chair mbeirl
15:26:10 Warning: Nick not in channel: mbeirl
15:26:10 Current chairs: mbeirl trevor_intel
15:26:22 #chair mbeierl
15:26:22 Current chairs: mbeierl mbeirl trevor_intel
15:26:54 #info Meeting coming up on 2017-05-10 with Bitergia
15:27:30 #info Trying to create consistency and common methods of dashboarding test results. Bitergia is available to do that work
15:27:40 #info but we need to provide the guidance for them
15:31:29 #info Mark explains generic and custom reporting
15:37:44 #action PTLs to examine their own "customization" by looking at what details they provide when reporting test cases, and then propose sample graphs for Bitergia to look at
15:38:54 #info This is based on mbeierl's understanding, so if other PTLs have different requirements for Bitergia, please do go ahead and email the test-wg with your ideas
15:39:53 #action mbeierl to write email to test-wg (or put on wiki) his ideas for what custom reporting can look like within StorPerf's context
15:40:57 I'm on IRC only
15:41:16 #topic Test Landing Page
15:41:25 #link http://testresults.opnfv.org/reporting2/reporting/index.html#!/landingpage/table
15:43:37 #info We need to come up with a common definition of what it means to have a test reported: did the test run, or did it pass?
15:43:55 #info We still need to get together as a wg and form the direction for this
15:47:23 I am also on IRC only. What is the goto meeting number?
15:47:37 JackChan_: https://global.gotomeeting.com/join/819733085
15:48:20 mbeierl: thank you!
15:50:06 #info Discussed integration of test projects with scenario / CI pipeline (limited set of tests, say weekly?)
15:50:28 #info Discussion on what it means to have projects (i.e. StorPerf and VSPERF) be part of a weekly pipeline, where a subset of the projects' tests are run
15:51:18 #info For example, VSPERF runs iterations over versions of components, whereas when installed by an OPNFV OpenStack installer, only one version of a software component is available
15:56:45 #info Ideas for Summit test panel: aligned test reporting vs. what individual projects can do outside of the framework?
15:58:01 #info The new dashboard shows where the test projects all come together to present a unified view of testing.
15:58:46 #info But each project has more than what the unified view presents, so how can we let the public know about the additional workings of each project - what the project can do beyond the common framework?
16:01:32 #topic Test levels
16:01:57 #info The concept of test levels is to help put tests into buckets according to how often they should be run
16:02:24 #info This is an interlock with the infra-wg, where they are looking to have pods be re-purposed on demand.
16:02:38 #info For example, a change is made somewhere
16:02:58 #info The levels of test that get run are correlated with where that change was made
16:03:29 #info Sanity should be run for all changes, but maybe deeper-level tests are only run after a timer, such as daily
16:03:44 #info Stress or performance-level tests might only be run once a week.
16:04:13 #info So the infra-wg and test-wg are interlocked on getting this done in Euphrates.
16:04:51 #info Should add this to the agenda for next week to see how we can help move this forward
16:05:01 Any other comments from anyone?
16:05:12 Ending meeting in 10s....
16:05:19 5...
16:05:27 #endmeeting