08:30:04 <anac1> #startmeeting Yardstick work meeting
08:30:04 <collabot> Meeting started Thu Feb  4 08:30:04 2016 UTC.  The chair is anac1. Information about MeetBot at http://wiki.debian.org/MeetBot.
08:30:04 <collabot> Useful Commands: #action #agreed #help #info #idea #link #topic.
08:30:04 <collabot> The meeting name has been set to 'yardstick_work_meeting'
08:30:10 <anac1> #info Ana Cunha
08:30:32 <QiLiang> #info QiLiang
08:30:32 <kubi1> #info kubi
08:30:33 <jnon> #info Jörgen Karlsson
08:31:03 <anac1> first of all, congratulations QiLiang, wish you all happiness
08:31:17 <anac1> your colleagues told us :-)
08:31:23 <QiLiang> thanks.
08:31:25 <QiLiang> :)
08:31:36 <anac1> #topic ODL scenario
08:32:07 <jnon> congratulations Qi Liang :)
08:32:41 <QiLiang> jnon: thanks! :)
08:32:49 <anac1> #info worked on Ericsson POD2 yardstick only, ODL Be
08:33:03 <anac1> #info on-going run with functest then yardstick
08:33:56 <anac1> #info worked for Compass virtual+baremetal, with 8143
08:33:56 <andrasb> #info andrasb
08:34:24 <jnon> yes, yardstick works with Fuel8 (ODL Beryllium), on both virtual and baremetal, without 8143
08:34:28 <MatthewLi> #info Jun Li
08:34:40 <kubi1> great
08:34:52 <anac1> #info help from cisco, redhat and onosfw is offered to us
08:34:56 <kalyan> congratulations QiLiang
08:35:20 <anac1> jnon: let's merge 8143 again ?
08:35:26 <QiLiang> kalyan: thanks.
08:35:42 <anac1> i think kubi needs it, until we know more
08:35:51 <kubi1> yes, compass doesn't support ODL Be now.
08:35:55 <jnon> anac1: we maybe need to modify the patch first to not have any effect on beryllium nodes
08:36:17 <anac1> jnon: ok, can you do that ?
08:37:27 <anac1> i will update the troubleshooting wiki with some info, please check in a couple of hours' time (or tomorrow)
08:37:42 <jnon> yes, maybe
08:37:51 <anac1> will keep all informed on progress of the current run
08:38:02 <jnon> if we know which nodes will have beryllium
08:38:10 <anac1> jnon: we know
08:38:35 <anac1> anything else on odl scenario ?
08:39:10 <anac1> #topic installers
08:39:12 <jnon> i can fix that today then, but we must fix the releng problem before we are able to push anything
08:39:22 <anac1> jnon: ok
08:39:42 * fdegir wonders what the releng problem is
08:39:42 <jnon> or submit really
08:40:55 <jnon> fdegir: https://build.opnfv.org/ci/job/yardstick-verify-master/372/console :)
08:41:18 <anac1> #info after we fix the most urgent odl scenario, we need to check joid and apex faults
08:41:45 <anac1> we need apex for sfc
08:42:02 <MatthewLi> from my side, I hope that SDN works soon, since the release time is tight
08:42:47 <anac1> MatthewLi: I hope so too
08:43:06 <anac1> #topic documentation
08:43:51 <anac1> #info releng is helping yardstick to get the documentation generated every time we push a patch
08:44:11 <MatthewLi> interested
08:44:12 <anac1> #info that is helpful for code updates (autodoc)
08:45:20 <anac1> #info release notes draft will be pushed today - please all comment - will keep it open until close to release
08:46:17 <anac1> #info the faults listed on troubleshooting etherpad will be added too
08:46:58 <anac1> #info license info needs to go on docs too
08:47:14 <anac1> #info documentation project will publish an example
08:47:24 <anac1> any questions on docs?
08:47:59 <kubi1> when should we finish test result docs?
08:48:57 <anac1> kubi1: the release date is under discussion
08:49:15 <anac1> current assumption is 2 extra days for docs after the tests are finished
08:49:30 <anac1> that includes the test results + final reviews
08:49:43 <MatthewLi> interested in the release date since the vacation is coming up for me
08:50:18 <anac1> MatthewLi: check the discussion here: https://etherpad.opnfv.org/p/steps_to_brahmaputra
08:50:32 <MatthewLi> anac1: ok I will check that
08:50:37 <MatthewLi> thank u
08:50:51 <anac1> i'm pushing to add time to compensate for the spring festival
08:51:19 <kubi1> thank you, anac1
08:51:26 <anac1> i would guess it will be end of Feb
08:52:24 <anac1> so next week you all in China will be on vacation, right ?
08:52:35 <QiLiang> yes
08:52:36 <MatthewLi> this weekend actually
08:52:37 <kubi1> yes
08:53:17 <anac1> ok, have a great time (year of the monkey?) !
08:53:31 <kubi1> yes, year of the monkey
08:53:34 <QiLiang> :)
08:53:35 <kubi1> :-D
08:53:42 <anac1> #topic result visualization + database
08:54:09 <anac1> i see the grafana is working with the vtc data
08:54:44 <anac1> when we finish the faults and get the tests running, we can focus on making the results visible
08:55:06 <anac1> have we started pushing data to influxdB ?
08:55:23 <anac1> we have the nosdn scenario working
08:56:24 <QiLiang> we need to configure yardstick to push the test results to influxdb
08:57:13 <QiLiang> will the lf pod also push results to influxdb, not mongodb?
08:57:25 <jnon> It should be configured for some nodes, but I have not checked the db so I don't know how much data we have there now
08:57:45 <jnon> lf pod pushes to mongo only
08:57:53 <QiLiang> ok
08:58:22 <jnon> I ran ericsson-pod2 yesterday and it pushes to influx
08:58:57 <anac1> QiLiang: I will check with morgan what's valid currently - we continue with lf pod to mongo only, as jnon wrote
08:59:20 <QiLiang> ok, thanks
08:59:28 <anac1> jnon: great, i'll check the results, i'm interested in the actual values
09:00:05 <MatthewLi> anac1: interested in why we changed to influxdb instead of mongodb
09:00:08 <jnon> the push on the yardstick side looked successful, but I don't know if the data was successfully stored on the other side
09:00:56 <anac1> MatthewLi: we use influx (it's a metrics database) as it works with grafana (dashboard for visualization)
09:01:12 <anac1> mongodb is not a metrics database
09:01:28 <anac1> our tests are basically metrics
09:01:36 <MatthewLi> I see
09:02:40 <anac1> jnon: ok - i will check
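For context on the InfluxDB push discussed above, here is a minimal sketch of what writing one yardstick-style metric point to InfluxDB over its HTTP line-protocol write API looks like; the endpoint, database name, measurement and tag values are assumptions for illustration, not the project's actual dispatcher configuration.

```python
# Minimal sketch: POST one metric point to InfluxDB's HTTP write API using
# line protocol. The URL, database name, measurement and tags below are
# illustrative assumptions, not the real yardstick dispatcher settings.
import requests

INFLUX_URL = "http://influxdb.example.org:8086/write"  # assumed endpoint

# line protocol: measurement,tag_key=tag_value field_key=field_value
point = "pktgen,pod=ericsson-pod2 throughput_kpps=123.4"

resp = requests.post(INFLUX_URL, params={"db": "yardstick"}, data=point)
resp.raise_for_status()  # InfluxDB answers 204 No Content on success
```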
09:02:45 <anac1> anything else ?
09:02:57 <kubi1> about the huawei daily task, there is a question i want to discuss.
09:03:11 <anac1> kubi1: go ahead
09:03:13 <kubi1> As you know, IPv6 testing will run on the nosdn and odl scenarios, but onos is not supported; right now we only define one huawei_us_bare test suite, so ipv6 gets an error when it runs on the onos scenario
09:04:06 <kubi1> to my limited understanding, we have two solutions now:
09:04:06 <kubi1> 1. add more yaml to define an odl scenario suite and an onos scenario suite
09:05:00 <kubi1> 2. get env information and make a workaround in the ipv6 code
09:05:25 <fdegir> excuse me for jumping in
09:05:51 <fdegir> you can perhaps create new scenarios named os-nosdn-ipv6-ha and os-odl_l2-ipv6-ha
09:06:08 <fdegir> and we can create 2 main jobs for compass to run these scenarios
09:07:18 <kubi1> do you mean making ipv6 one job and the others another job?
09:08:32 <fdegir> kubi1: what we really have is
09:08:58 <fdegir> 1 common job for deployment
09:09:05 <fdegir> per pod/branch
09:09:13 <fdegir> and this job takes in scenario name as parameter
09:09:33 <fdegir> the only thing we need to do on jenkins is to create parent jobs for these 2 scenarios
09:09:33 <kubi1> yes
09:09:48 <fdegir> so we can pass the scenario name and adjust the triggering/scheduling
09:09:55 <fdegir> deploy job doesn't change
09:10:06 <fdegir> since all the stuff is done by deploy.sh (or whatever is executed by the job)
09:10:18 <fdegir> this is same for yardstick as well; 1 job per pod/branch
09:10:24 <MatthewLi> from my side I choose 2; too many scenarios is maybe not the best choice since IPv6 is only one test case in the test suite
09:10:40 <fdegir> so yardstick job gets the scenario name and runs test cases accordingly
09:10:50 <kubi1> sfc may need same solution~
09:10:50 <fdegir> so have 2 scenarios, 1 parent job per scenario
09:11:01 <fdegir> all the scenarios have this solution
09:11:10 <anac1> yes, sfc needs same solution
09:11:35 <fdegir> this kind of increases the number of jobs but at the same time makes it easier to schedule or trace the logs of a certain scenario deployment/testing
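As a rough illustration of the approach fdegir describes (one parameterized job per pod/branch, with the scenario name passed down), a yardstick wrapper could map the DEPLOY_SCENARIO parameter to a test suite file; the scenario names, suite file names and helper below are hypothetical, not the actual releng implementation.

```python
# Hypothetical sketch: select a yardstick test suite from the DEPLOY_SCENARIO
# parameter handed down by the parent Jenkins job. Scenario and suite file
# names are assumptions for illustration only.
import os

SUITE_BY_SCENARIO = {
    "os-nosdn-ipv6-ha": "opnfv_nosdn-ipv6_daily.yaml",    # assumed file name
    "os-odl_l2-ipv6-ha": "opnfv_odl_l2-ipv6_daily.yaml",  # assumed file name
}

def pick_suite(default="opnfv_huawei_us_bare_daily.yaml"):
    """Return the suite file for the current scenario, or a default."""
    scenario = os.environ.get("DEPLOY_SCENARIO", "")
    return SUITE_BY_SCENARIO.get(scenario, default)

if __name__ == "__main__":
    print(pick_suite())
```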
09:12:54 <kubi1> yes, i think it is good for me
09:13:34 <anac1> fdegir: who creates the scenario ?
09:13:50 <fdegir> installer team + feature team together
09:14:02 <anac1> kubi1: do you agree ?
09:14:33 <QiLiang> does yardstick need to generate the yardstick test cases as a parent job and the feature test cases as sub-jobs?
09:15:15 <kubi1> yes, agree:)
09:15:29 <anac1> kubi1: ok, thanks
09:15:53 <anac1> so yardstick needs to understand what the scenario name means and trigger the right test cases ?
09:17:29 <anac1> jnon: comments ?
09:18:46 <anac1> kubi1: please add this to the troubleshooting etherpad - we can continue there
09:19:00 <kubi1> ok
09:19:04 <jnon> anac1: the new jobs will invoke yardstick using the right test suites i assume
09:19:20 <jnon> looks ok to me
09:19:27 <anac1> jnon: but we need to add this feature in the code, right?
09:19:58 <jnon> no, in releng only + two new test suite files
09:20:14 <kubi1> add some test suites in yardstick, yes?
09:20:32 <jnon> yes i think so
09:21:24 <anac1> ok, two new test suite files in yardstick + the new scenarios defined by installer&feature
09:22:05 <kubi1> for example, i will add odl-l2-ipv6.yaml and nosdn-ipv6.yaml for ipv6 test in yardstick,
09:23:13 <fdegir> would it be possible to sync names of yardstick test yaml files and test suites with the scenario name?
09:23:19 <fdegir> or is it a good thing to do?
09:23:22 <jnon> yes, if i understood fdegir correctly
09:24:40 <jnon> fdegir: yes but that would greatly increase the number of test suites
09:25:10 <MatthewLi> yep
09:25:13 <fdegir> jnon: that's right
09:25:21 <fdegir> jnon: but how long can we escape from this?
09:25:32 <kubi1> installer type * deploy_scenario
09:25:47 <fdegir> kubi1: should installer type be there really?
09:26:03 <fdegir> to me installer type shouldn't impact the test case
09:26:19 <jnon> fdegir: i don't know, yardstick-daily generates the suite name as YARDSTICK_SUITE_NAME=opnfv_${{NODE_NAME}}_{suite}.yaml
09:26:32 <fdegir> ok
09:26:40 <kubi1> yes, right now it is node_name
09:26:42 <fdegir> I have no strong opinion at the moment
09:26:55 <kubi1> every node needs one suite
09:27:00 <fdegir> this needs to be thought about after b-release perhaps
09:27:11 <MatthewLi> agreed
09:27:25 <jnon> we could add DEPLOY_SCENARIO to the name also, but that would multiply the number of suites, maybe OK?
09:27:59 <jnon> or not
09:28:26 <kubi1> yes, existing suites would need to change name if we add the DEPLOY_SCENARIO parameter
09:28:41 <MatthewLi> I prefer it simple when looked at from the outside :)
09:28:59 <fdegir> if you ask me
09:29:12 <fdegir> I prefer this hidden from the outside world
09:29:25 <fdegir> you process the deploy scenario internally within your framework or script
09:29:37 <fdegir> now you are putting too much logic into jenkins jobs
09:29:50 <fdegir> just take in the parameter and select suite yourself some way
09:30:18 <fdegir> for example jenkins doesn't really know what installers do or how they deploy
09:30:23 <fdegir> it doesn't care
09:30:25 <jnon> yes, the simplest is maybe option 2: the ipv6 test case just checks the DEPLOY_SCENARIO env
09:30:54 <kubi1> agree
09:31:06 <MatthewLi> jnon: that's what I said
09:31:06 <fdegir> anyway, I really shouldn't destroy your meeting
09:31:31 <anac1> ok, seems we have an agreement on option 2
09:31:32 <jnon> MatthewLi: yes Matthew :)
09:32:36 <jnon> We can do a more complicated solution after release :)
09:32:45 <anac1> kubi1: go ahead and propose a patch
09:32:46 <kubi1> we have a more complex solution to deal with it; i think we can do it in the C release
09:32:55 <anac1> agree
09:33:00 <kubi1> i prefer option 2
09:33:04 <kubi1> for now
09:33:36 <anac1> #agreed option 2 as described for ipv6, sfc shall use the same solution
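A minimal sketch of the agreed option 2, assuming the ipv6 test case can read the DEPLOY_SCENARIO environment variable exported by the CI job; the helper name and the scenario check below are illustrative, not the actual yardstick code.

```python
# Minimal sketch of option 2: skip the IPv6 test case when the deployed
# scenario is onos-based. Assumes DEPLOY_SCENARIO is exported by the CI job;
# the helper name is illustrative only.
import os

def ipv6_applicable():
    """IPv6 runs on nosdn and odl scenarios, but not on onos."""
    scenario = os.environ.get("DEPLOY_SCENARIO", "")
    return "onos" not in scenario

if __name__ == "__main__":
    if ipv6_applicable():
        print("running IPv6 test case")
    else:
        print("skipping IPv6 test case on scenario:",
              os.environ.get("DEPLOY_SCENARIO", "<unset>"))
```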
09:33:41 <anac1> anything else?
09:34:04 <kalyan> odl scenario... we tried resolving the ssh issue by making some changes in the heat template... it's working fine if we remove the security groups...
09:34:50 <MatthewLi> kalyan: maybe you can work together with the compass team, I think they may face the same problem
09:35:26 <kubi1> which version of odl do you have?
09:35:35 <anac1> kalyan: yes, patch 8143
09:35:38 <kalyan> lithium
09:35:50 <kubi1> SR2? or SR3?
09:36:05 <anac1> MatthewLi: check patch 8143, kubi knows all about it
09:36:30 <kalyan> SR3
09:37:03 <kubi1> ok, i tested it on SR2, it worked fine based on patch 8143
09:37:04 <MatthewLi> anac1: ok will do that
09:37:42 <anac1> and remember guys, this is a workaround, not a solution - the important thing is to understand the why
09:37:56 <anac1> so we learn !
09:38:11 <anac1> anything else ?
09:38:42 <kubi1> not from me
09:38:53 <kalyan> ok we will check that patch 8143
09:39:06 <anac1> kalyan: yes, please do
09:39:43 <anac1> thanks everyone for today, have a great time during spring festival !
09:39:49 <anac1> #endmeeting