#opnfv-testperf: OPNFV Testing Working Group

Meeting started by trevor_intel at 14:59:24 UTC (full logs).

Meeting summary

    1. Trevor Cooper (trevor_intel, 14:59:37)
    2. Yujun Zhang (yujunz, 15:04:14)
    3. Helen Yao (HelenYao, 15:04:17)
    4. kubi (kubi001, 15:04:37)
    5. Jack (JackChan, 15:05:28)
    6. https://wiki.opnfv.org/display/meetings/Go-To-Meeting+info (mbeierl, 15:06:54)
    7. Current GTM in invite is not valid (trevor_intel, 15:08:28)
    8. we need a recommendation from the TSC for all meetings to follow (Pacific or UTC) ... otherwise there is no way to know who is running over on calls (trevor_intel, 15:12:55)
    9. we should also have a protocol for announcing in the #opnfv-meeting channel before starting a meeting? (mbeierl, 15:13:38)
    10. ACTION: Trevor to ask Ray for action by the LF or TSC ... Mark to raise the issue too (trevor_intel, 15:17:43)
    11. https://wiki.opnfv.org/display/meetings/TestPerf (mbeierl, 15:18:51)
    12. http://ircbot.wl.linuxfoundation.org/meetings/opnfv-testperf/2017/ (mbeierl, 15:20:30)
    13. Mark Beierl (mbeierl, 15:21:21)
    14. last minutes recorded were from 2017-04-06 (mbeierl, 15:22:07)
    15. http://ircbot.wl.linuxfoundation.org/meetings/opnfv-testperf/2017/opnfv-testperf.2017-04-06-14.59.html (mbeierl, 15:22:44)
    16. Action for mbeierl and kubi001 to discuss Plugfest remote StorPerf activities closed with no action (mbeierl, 15:23:20)
    17. Not much to report, but attendees can send out emails to the wg (mbeierl, 15:25:29)

  1. Bitergia (trevor_intel, 15:25:44)
    1. https://wiki.opnfv.org/display/testing/Result+alignment+for+ELK+post-processing (trevor_intel, 15:25:49)
    2. Meeting coming up on 2017-05-10 with Bitergia (mbeierl, 15:26:54)
    3. trying to create consistent, common methods for dashboarding test results; Bitergia is available to do that work (mbeierl, 15:27:30)
    4. but we need to provide guidance for them (mbeierl, 15:27:40)
    5. Mark explains generic and custom reporting (trevor_intel, 15:31:29)
    6. ACTION: PTLs to examine their own "customization" by looking at what details they provide when reporting test cases, and then propose sample graphs for Bitergia to look at (mbeierl, 15:37:44)
    7. this is based on mbeierl's understanding, so if other PTLs have different requirements for Bitergia, please do go ahead and email the test-wg with your ideas (mbeierl, 15:38:54)
    8. ACTION: mbeierl to write email to test-wg (or put on wiki) his ideas for what custom reporting can look like within StorPerf's context (mbeierl, 15:39:53)

  2. Test Landing Page (trevor_intel, 15:41:16)
    1. http://testresults.opnfv.org/reporting2/reporting/index.html#!/landingpage/table (mbeierl, 15:41:25)
    2. we need to come up with a common definition of what it means to have a test reported: did the test run, or did it pass? (mbeierl, 15:43:37)
    3. we still need to get together as a -wg and form the direction for this (mbeierl, 15:43:55)
    4. Discussed integration of test projects with the scenario / CI pipeline (a limited set of tests, say weekly?) (trevor_intel, 15:50:06)
    5. discussion on what it means to have projects (i.e. storperf and vsperf) be part of a weekly pipeline, where a subset of the projects' tests is run (mbeierl, 15:50:28)
    6. for example, vsperf runs iterations across versions of components, whereas when installed by the OPNFV OpenStack installer, only one version of a software component is available (mbeierl, 15:51:18)
    7. ideas for Summit test panel: aligned test reporting vs. what individual projects can do outside of the framework? (mbeierl, 15:56:45)
    8. the new dashboard shows where the test projects all come together to present a unified view of testing. (mbeierl, 15:58:01)
    9. but each project offers more than the unified view presents, so how can we let the public know about the additional capabilities of each project - what it can do beyond the common framework? (mbeierl, 15:58:46)

  3. test levels (mbeierl, 16:01:32)
    1. the concept of test levels is to help put tests into buckets according to how often they should be run (mbeierl, 16:01:57)
    2. this is an interlock with the infra-wg where they are looking to have pods be re-purposed on demand. (mbeierl, 16:02:24)
    3. for example, a change is made somewhere (mbeierl, 16:02:38)
    4. the levels of test that get run are correlated with where that change was made (mbeierl, 16:02:58)
    5. sanity should be run for all changes, but maybe deeper level tests are only run after a timer, such as daily (mbeierl, 16:03:29)
    6. stress or performance level tests might only be run once a week. (mbeierl, 16:03:44)
    7. so the infra-wg and test-wg are interlocked on getting this done in Euphrates. (mbeierl, 16:04:13)
    8. Should add this to agenda for next week to see how we can help move this forward (mbeierl, 16:04:51)
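    The test-level bucketing described in items 1-6 above could be sketched roughly as follows. This is a minimal illustration only, assuming hypothetical level names and change areas; it is not an actual OPNFV or Euphrates configuration.

    ```python
    # Hypothetical sketch of the test-level concept discussed above:
    # tests are bucketed by how often they should run, and the levels
    # triggered for a change depend on where that change was made.
    # All names (levels, areas, triggers) are illustrative assumptions.

    TEST_LEVELS = {
        "sanity": "every-change",   # run for all changes
        "functional": "daily",      # deeper tests run on a timer
        "stress": "weekly",         # stress/performance run least often
        "performance": "weekly",
    }

    def levels_for_change(changed_area: str) -> list[str]:
        """Return which test levels to run immediately for a change."""
        # Sanity runs for every change; timer-driven levels normally
        # wait for their schedule, but a change to core infrastructure
        # (a hypothetical rule) could pull functional tests in early.
        levels = ["sanity"]
        if changed_area in ("installer", "scenario"):
            levels.append("functional")
        return levels

    print(levels_for_change("docs"))
    print(levels_for_change("installer"))
    ```

    The point of the interlock with the infra-wg is that the scheduler deciding these triggers also needs to know when a pod can be re-purposed for the heavier weekly runs.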


Meeting ended at 16:05:27 UTC (full logs).

Action items

  1. Trevor to ask Ray for action by the LF or TSC ... Mark to raise the issue too
  2. PTLs to examine their own "customization" by looking at what details they provide when reporting test cases, and then propose sample graphs for Bitergia to look at
  3. mbeierl to write email to test-wg (or put on wiki) his ideas for what custom reporting can look like within StorPerf's context


Action items, by person

  1. mbeierl
    1. mbeierl to write email to test-wg (or put on wiki) his ideas for what custom reporting can look like within StorPerf's context
  2. UNASSIGNED
    1. Trevor to ask Ray for action by the LF or TSC ... Mark to raise the issue too
    2. PTLs to examine their own "customization" by looking at what details they provide when reporting test cases, and then propose sample graphs for Bitergia to look at


People present (lines said)

  1. mbeierl (41)
  2. trevor_intel (16)
  3. collabot (6)
  4. JackChan_ (2)
  5. yujunz (2)
  6. HelenYao (1)
  7. kubi001 (1)
  8. JackChan (1)
  9. mbeirl (0)


Generated by MeetBot 0.1.4.