#opnfv-testperf: OPNFV Testing Working Group
Meeting started by trevor_intel at 14:59:24 UTC (full logs).
Meeting summary
- Trevor Cooper (trevor_intel, 14:59:37)
- Yujun Zhang (yujunz, 15:04:14)
- Helen Yao (HelenYao, 15:04:17)
- kubi (kubi001, 15:04:37)
- Jack (JackChan, 15:05:28)
- https://wiki.opnfv.org/display/meetings/Go-To-Meeting+info (mbeierl, 15:06:54)
- Current GTM link in the invite is not valid (trevor_intel, 15:08:28)
- we need a recommendation from the TSC on a time standard (Pacific or UTC) for all meetings to follow ... no way to know who is running over on calls (trevor_intel, 15:12:55)
- we should also have a protocol to announce in the #opnfv-meeting channel before starting a meeting? (mbeierl, 15:13:38)
- ACTION: Trevor to ask Ray for action by LF or TSC ... Mark to raise the issue too (trevor_intel, 15:17:43)
- https://wiki.opnfv.org/display/meetings/TestPerf (mbeierl, 15:18:51)
- http://ircbot.wl.linuxfoundation.org/meetings/opnfv-testperf/2017/ (mbeierl, 15:20:30)
- Mark Beierl (mbeierl, 15:21:21)
- last minutes recorded were from 2017-04-06 (mbeierl, 15:22:07)
- http://ircbot.wl.linuxfoundation.org/meetings/opnfv-testperf/2017/opnfv-testperf.2017-04-06-14.59.html (mbeierl, 15:22:44)
- Action for mbeierl and kubi001 to discuss Plugfest remote StorPerf activities was closed with no action (mbeierl, 15:23:20)
- Not much to report, but attendees can send emails out to the wg (mbeierl, 15:25:29)
- Bitergia (trevor_intel, 15:25:44)
- https://wiki.opnfv.org/display/testing/Result+alignment+for+ELK+post-processing (trevor_intel, 15:25:49)
- Meeting coming up on 2017-05-10 with Bitergia (mbeierl, 15:26:54)
- trying to create consistency and common methods of dashboarding test results; Bitergia is available to do that work (mbeierl, 15:27:30)
- but we need to provide the guidance for them (mbeierl, 15:27:40)
- Mark explained generic and custom reporting (trevor_intel, 15:31:29)
- ACTION: PTLs to examine their own "customization" by looking at what details they provide when reporting test cases, and then propose sample graphs for Bitergia to look at (mbeierl, 15:37:44)
- this is based on mbeierl's understanding, so if other PTLs have different requirements for Bitergia, please do go ahead and email the test-wg with your ideas (mbeierl, 15:38:54)
- ACTION: mbeierl to write email to test-wg (or put on wiki) his ideas for what custom reporting can look like within StorPerf's context (mbeierl, 15:39:53)
- Test Landing Page (trevor_intel, 15:41:16)
- http://testresults.opnfv.org/reporting2/reporting/index.html#!/landingpage/table (mbeierl, 15:41:25)
- we need to come up with a common definition of what it means to have a test reported: did the test run, or did it pass? (mbeierl, 15:43:37)
- we still need to get together as a wg and set the direction for this (mbeierl, 15:43:55)
- Discussed integration of test projects with the scenario / CI pipeline (a limited set of tests, say weekly?) (trevor_intel, 15:50:06)
- discussion on what it means to have projects (i.e. StorPerf and VSPERF) be part of a weekly pipeline, where a subset of the projects' tests is run (mbeierl, 15:50:28)
- for example, VSPERF runs iterations over versions of components, whereas when installed by an OPNFV OpenStack installer, only one version of a software component is available (mbeierl, 15:51:18)
- ideas for the Summit test panel: aligned test reporting vs. what individual projects can do outside of the framework? (mbeierl, 15:56:45)
- the new dashboard shows where the test projects all come together to present a unified view of testing (mbeierl, 15:58:01)
- but each project has more than what the unified view presents, so how can we let the public know about the additional workings of each project, i.e. what each project can do beyond the common framework? (mbeierl, 15:58:46)
- Test levels (mbeierl, 16:01:32)
- the concept of test levels is to help put tests into buckets according to how often they should be run (mbeierl, 16:01:57)
- this is an interlock with the infra-wg, where they are looking to have pods be re-purposed on demand (mbeierl, 16:02:24)
- for example, a change is made somewhere (mbeierl, 16:02:38)
- the levels of test that get run are correlated with where that change was made (mbeierl, 16:02:58)
- sanity should be run for all changes, but maybe deeper level tests are only run after a timer, such as daily (mbeierl, 16:03:29)
- stress or performance level tests might only be run once a week (mbeierl, 16:03:44)
- so the infra-wg and test-wg are interlocked on getting this done in Euphrates (mbeierl, 16:04:13)
- Should add this to the agenda for next week to see how we can help move this forward (mbeierl, 16:04:51)
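As a purely illustrative aside (not something agreed in the meeting), the level-to-frequency buckets described in the list above could be expressed as a simple mapping from test level to run trigger. The sketch below is in Python; the level names and triggers are assumptions for illustration, not an actual OPNFV configuration or tool.

    # Illustrative sketch only: hypothetical test-level buckets and triggers,
    # not an agreed OPNFV design or configuration.
    TEST_LEVELS = {
        "sanity": "every-change",        # run for all changes
        "functional": "daily",           # deeper tests run on a timer
        "stress-performance": "weekly",  # heaviest tests run least often
    }

    def levels_to_run(fired_timers):
        """Sanity always runs; other levels run only when their timer has fired."""
        return ["sanity"] + [level for level, trigger in TEST_LEVELS.items()
                             if level != "sanity" and trigger in fired_timers]

    # e.g. levels_to_run({"daily"}) returns ["sanity", "functional"]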
Meeting ended at 16:05:27 UTC (full logs).
Action items
- Trevor to ask Ray for action by LF or TSC ... Mark to raise the issue too
- PTLs to examine their own "customization" by looking at what details they provide when reporting test cases, and then propose sample graphs for Bitergia to look at
- mbeierl to write email to test-wg (or put on wiki) his ideas for what custom reporting can look like within StorPerf's context
Action items, by person
- mbeierl
- mbeierl to write email to test-wg (or put on wiki) his ideas for what custom reporting can look like within StorPerf's context
- UNASSIGNED
- Trevor to ask Ray for action by LF or TSC ... Mark to raise the issue too
- PTLs to examine their own "customization" by looking at what details they provide when reporting test cases, and then propose sample graphs for Bitergia to look at
People present (lines said)
- mbeierl (41)
- trevor_intel (16)
- collabot (6)
- JackChan_ (2)
- yujunz (2)
- HelenYao (1)
- kubi001 (1)
- JackChan (1)
- mbeirl (0)
Generated by MeetBot 0.1.4.