08:30:05 <anac1> #startmeeting Yardstick work meeting
08:30:05 <collabot`> Meeting started Mon Dec 14 08:30:05 2015 UTC.  The chair is anac1. Information about MeetBot at http://wiki.debian.org/MeetBot.
08:30:05 <collabot`> Useful Commands: #action #agreed #help #info #idea #link #topic.
08:30:05 <collabot`> The meeting name has been set to 'yardstick_work_meeting'
08:30:11 <anac1> #info Ana Cunha
08:30:42 <QiLiang> #info QiLiang
08:30:48 <rex_lee> #info Rex
08:31:10 <wuzhihui> #info WuZhihui
08:31:22 <anac1> #topic release-B planning
08:32:37 <anac1> #info the team discussed the test cases and prioritized them at the last meeting on Dec 10th
08:32:49 <anac1> #link https://etherpad.opnfv.org/p/yardstick_release_b
08:32:56 <patrick11> #info patrick
08:33:33 <anac1> #info for prio 1 and prio 2 test cases, a decision is needed regarding:
08:33:35 <jnon_> #info jnon
08:33:46 <anac1> #info 1) run in daily/weekly
08:34:04 <anac1> #info 2) results to Grafana / testing dashboard
08:34:42 <anac1> #info the suggestion is: dump results from LF POD2 to testing dashboard
08:34:52 <anac1> #info and all other labs to Grafana
08:35:01 <anac1> opinions, comments?
08:36:19 <anac1> i'll log decisions here, let's continue on Etherpad: https://etherpad.opnfv.org/p/yardstick_release_b
08:36:34 <PerH> #info PerH
08:36:39 <jnon_> one DB for grafana, or one per lab ?
08:36:54 <anac1> one DB for grafana
08:37:08 <QiLiang> all other test projects only use lf pod2 to generate test results and then display them on the testing dashboard?
08:37:42 <anac1> no, let me try to explain:
08:38:03 <anac1> we need another dispatcher for grafana (jnon can comment on this)
08:39:05 <anac1> so, the suggestion is to dump results from Yardstick runs on LF POD2 to the testing dashboard, using the current json dispatcher
08:39:44 <anac1> and all other yardstick runs on other labs to grafana, using the new dispatcher (line protocol, not json)
08:40:39 <QiLiang> got it, thanks.
08:40:43 <jnon_> ok
08:40:44 <anac1> so the Yardstick runs on LF POD2 use the json dispatcher, Yardstick runs on other labs use the line-protocol dispatcher
08:40:48 <patrick11> so the default dispatcher sends to grafana, and only the yardstick on LF POD2 is reconfigured to send to the testing dashboard.
08:40:54 <patrick11> is it right?
08:41:04 <anac1> yes, that's the suggestion - what do you think ?
08:41:23 <patrick11> I think it is ok.
08:41:45 <jnon_> im ok with that
08:42:10 <QiLiang> +1
08:42:16 <anac1> ok, good
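The two result paths discussed above differ mainly in payload format: the existing dispatcher posts JSON to the testing dashboard, while the new dispatcher for Grafana writes InfluxDB line protocol. A minimal sketch of the two encodings for one result record, using hypothetical measurement and field names rather than the actual Yardstick schema:

```python
import json
import time

# Hypothetical sample result record (not the real Yardstick result layout)
result = {"tc": "tc002", "pod": "lf-pod2", "rtt_ms": 0.42}

# JSON payload, roughly how the current dispatcher would post to the testing dashboard
json_payload = json.dumps({"case_name": result["tc"], "details": result})

# InfluxDB line protocol, as the new dispatcher would write for Grafana:
#   measurement,tag_set field_set timestamp(ns)
line = "{},pod={} rtt_ms={} {}".format(
    result["tc"], result["pod"], result["rtt_ms"], int(time.time() * 1e9))

print(json_payload)
print(line)
```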
08:43:27 <anac1> QiLiang and jnon_: for the new dispatcher, i count on you both, ok?
08:43:56 <QiLiang> ok
08:43:58 <jnon_> ok
08:44:03 <anac1> thanks !
08:44:34 <anac1> ok, now let's complete the table in etherpad with daily/weekly runs per test case
08:44:41 <anac1> #link https://etherpad.opnfv.org/p/yardstick_release_b
08:47:36 <anac1> #agreed tc001, tc002 daily+weekly
08:48:17 <QiLiang> anac1 did you receive my personal info to get access to the montreal lab? i also cc'd Daniel, but there is no reply.
08:48:46 <anac1> yes, Daniel was out of office, back today
08:49:11 <anac1> i'll check with him later today (he's in Canada)
08:49:19 <QiLiang> ok, thanks!
08:49:33 <anac1> thank You!
08:53:09 <vincenzo_riccobe> Hi Ana, just a question.
08:53:54 <anac1> yes Vincenzo?
08:53:57 <vincenzo_riccobe> are you proposing to run daily the test cases? which config parameters would you like to go for? any preference?
08:54:13 <anac1> config for?
08:54:29 <vincenzo_riccobe> (depending on conf parameters you select the run could last for long time)
08:55:38 <vincenzo_riccobe> I mean it could take more than 1 day according to the params we select
08:55:44 <anac1> we need to time the runs altogether to about 3 hours total
08:55:59 <anac1> we can add extended runs on weekly, up to 24 hours
08:56:12 <anac1> so it makes sense to run daily+weekly, right?
08:56:25 <vincenzo_riccobe> ok, so we are going to select a subset of params for daily run, is that ok for you?
08:56:53 <anac1> yes, but run always the same subset, so we can compare trends
08:57:06 <vincenzo_riccobe> sure
08:57:46 <vincenzo_riccobe> I need to go unfortunately, will sync later on the minutes
08:57:56 <anac1> ok, thanks
09:00:11 <anac1> #agreed daily/weekly/on demand runs on etherpad https://etherpad.opnfv.org/p/yardstick_release_b
09:00:59 <QiLiang> anac1: one question, do all the installers run the same yardstick test cases?
09:01:14 <anac1> yes, the generic yardstick
09:01:30 <anac1> for the feature test cases, it depends on where the feature is installed
09:02:06 <jnon_> well it runs different test suites
09:02:14 <anac1> so, we can compare the runs of the same tc in all labs, with different installers
09:02:22 <anac1> jnon_: what do you mean?
09:03:26 <anac1> there is one daily and one weekly test suite file per lab, but the tcs (the generic ones) are the same, right?
09:03:34 <jnon_> all labs run different test suites
09:03:58 <jnon_> which test cases run depends on what is in the test suites
09:04:21 <jnon_> they could be the same ofc
09:05:04 <anac1> yes, but the test suites for the different labs should contain the same generic yardstick tc's
09:05:22 <jnon_> yes probably
09:06:13 <anac1> now that we have agreed on which tcs should run daily/weekly, we can agree on the test suites
09:06:27 <QiLiang> the generic yardstick means all the yardstick generic test cases?
09:06:32 <anac1> yes
09:06:51 <QiLiang> some test cases like ipv6, ha do not need to be tested on all installers?
09:07:08 <kubi11> this should be a feature test case
09:07:14 <anac1> ipv6 is a tc for the feature, correct
09:07:26 <QiLiang> yes
09:07:50 <anac1> so i mean the ones named "yardstick generic" on the etherpad
09:08:02 <anac1> ipv6 will probably only run in 1 lab
09:08:12 <kubi11> yes santa clara
09:08:16 <anac1> yes
09:08:41 <anac1> another example, sdnvpn will probably not run on all installers
09:09:18 <anac1> but the generic ones, it should be possible to run them on different labs that are part of pharos
09:09:44 <anac1> we can start with what we have in lf pod2 and build from there?
09:10:00 <anac1> jnon_: what do you think?
09:10:52 <jnon_> yes but a lot of test cases are not yet added to the pod2 test suite
09:11:29 <jnon_> i think it only runs 001 and maybe 002 currently
09:11:41 <anac1> i know, we need to add as we finish the tcs
09:12:13 <jnon_> yes
09:12:16 <anac1> what i mean is, use the lf pod2 daily job as the basis to build the jenkins job for the santa clara lab
09:12:38 <jnon_> ok
09:12:42 <anac1> ok
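For the per-lab daily suites just discussed, a rough sketch of generating suite files that share the same generic test cases across labs; the file layout, key names, and lab names below are assumptions for illustration, not the actual Yardstick suite schema:

```python
import yaml  # pyyaml

# Generic daily test cases shared by every lab (assumed file names)
GENERIC_DAILY = ["opnfv_yardstick_tc001.yaml", "opnfv_yardstick_tc002.yaml"]

def make_daily_suite(lab_name):
    # One suite file per lab, all listing the same generic test cases
    return {
        "name": "{}_daily".format(lab_name),
        "test_cases": [{"file_name": tc} for tc in GENERIC_DAILY],
    }

for lab in ("lf-pod2", "huawei-sc"):  # hypothetical lab identifiers
    with open("{}_daily_suite.yaml".format(lab), "w") as f:
        yaml.safe_dump(make_daily_suite(lab), f, default_flow_style=False)
```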
09:13:02 <anac1> the santa clara lab is connected to jenkins, right?
09:13:27 <jnon_> i dont know
09:13:43 <kubi11> connected, but compass doesn't integrate with yardstick in jenkins
09:14:01 <anac1> ok, what do we need to do?
09:14:10 <anac1> can you help, kubi?
09:14:20 <QiLiang> santa clara is connected to jenkins but there are still some issues running yardstick automatically.
09:14:33 <kubi11> yes, of course
09:14:45 <anac1> ok, you're on top of that, right?
09:15:12 <kubi11> yes
09:15:21 <anac1> we'll have the same problems when we try apex and joid
09:15:44 <anac1> we ask each other for help, if needed
09:16:33 <anac1> will vnfgraph also run in sc lab?
09:17:03 <kubi11> yes
09:17:11 <anac1> ok
09:17:48 <anac1> #topic other
09:18:27 <anac1> #info meetings on Dec 24th and 31st will be cancelled
09:19:23 <kubi11> christmas?
09:19:36 <jnon_> and new year :)
09:19:41 <anac1> yes, christmas break
09:19:46 <anac1> and new year
09:19:48 <QiLiang> merry christmas and happy new year :)
09:19:55 <anac1> thanks
09:19:58 <patrick11> +1
09:19:58 <kubi11> merry christmas and happy new year
09:20:07 <MMcG> What about 28th?
09:20:27 <MMcG> Holiday in Ireland :-)
09:20:53 <anac1> ok, i plan to have the meeting for checking progress/issues
09:21:03 <anac1> will take notes
09:21:15 <MMcG> ok, thxs
09:21:27 <jnon_> well i'll be working on the 28th and will probably be on irc
09:21:58 <anac1> you in China are also working on the 28th, right?
09:22:04 <patrick11> yes
09:22:05 <QiLiang> yes
09:22:42 <anac1> so we keep the meeting
09:23:02 <anac1> patrick11: do we need to discuss with HA this week?
09:23:51 <anac1> QiLiang: how's the progress with kvm?
09:24:15 <patrick11> I think etherpad is just ok. tomorrow yimin will reply to fu qiao.
09:24:26 <QiLiang> yunhong is testing locally
09:24:26 <anac1> ok, agree
09:24:37 <anac1> ok, on intel pod3?
09:24:47 <patrick11> we will reply to their questions on the etherpad tomorrow
09:24:48 <QiLiang> seems like almost done
09:25:01 <QiLiang> not intel pod3 i think
09:25:02 <anac1> patrick11: ok
09:25:42 <anac1> QiLiang: ok, thanks
09:25:49 <anac1> anything else ?
09:26:07 <jnon_> the influxdb dispatcher, who will do that? Me or QiLiang?
09:26:18 <anac1> #action anac1 to update jira with weekly/daily info
09:27:08 <anac1> jnon_ and QiLiang: who's available at the moment ?
09:27:40 <QiLiang> i'm available
09:27:41 <jnon_> I have some other things I need to do
09:28:00 <QiLiang> jnon, help me review the code :)
09:28:15 <jnon_> yes ofc :)
09:28:31 <anac1> ok, so QiLiang does the influxdb dispatcher and jnon helps - great !
09:29:23 <anac1> #action QiLiang to design influxdb dispatcher
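A rough sketch of what the new InfluxDB dispatcher could look like, assuming results are pushed as line-protocol points to the InfluxDB v1 HTTP write endpoint; the class name, endpoint, measurement, and field names are placeholders, not the final Yardstick implementation:

```python
import time
import requests

class InfluxdbDispatcher(object):
    """Push one result record to InfluxDB in line protocol (sketch only)."""

    def __init__(self, target="http://influxdb.example.org:8086", db="yardstick"):
        # InfluxDB v1 write endpoint for the given database
        self.write_url = "{}/write?db={}".format(target, db)

    def record_result_data(self, measurement, tags, fields):
        # Encode one point: measurement,tag_set field_set timestamp(ns)
        tag_set = ",".join("{}={}".format(k, v) for k, v in tags.items())
        field_set = ",".join("{}={}".format(k, v) for k, v in fields.items())
        line = "{},{} {} {}".format(measurement, tag_set, field_set,
                                    int(time.time() * 1e9))
        resp = requests.post(self.write_url, data=line)
        resp.raise_for_status()

# Example usage (hypothetical values):
# InfluxdbDispatcher().record_result_data("tc002", {"pod": "lf-pod2"}, {"rtt_ms": 0.42})
```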
09:30:07 <anac1> ok, we'll need help when the installations start at intel pod2 (apex) and intel pod5 (joid) - this is a heads up
09:30:33 <anac1> i'll let you all know when the installers give us a go-ahead
09:31:09 <anac1> that's all for today - thanks everyone !
09:31:14 <MMcG> any thoughts on results string format for InfluxDB?
09:31:45 <anac1> krihun was working on this
09:31:59 <anac1> he's not in this meeting
09:32:19 <anac1> i'll check with him
09:32:43 <MMcG> ok, it would be good to have the format when available so we can implement quickly
09:32:49 <anac1> #action anac1 to check with krihun on results string format for influxDB and update jira
09:33:12 <anac1> MMcG: yes, i'll ask him to update the info in jira
09:33:23 <MMcG> Great, thxs.
09:33:27 <anac1> np
09:33:33 <anac1> anything else?
09:33:56 <anac1> thanks everyone for today
09:34:01 <anac1> #endmeeting