08:00:01 <morgan_orange> #startmeeting Functest weekly meeting March 8th
08:00:01 <collabot`> Meeting started Tue Mar  8 08:00:01 2016 UTC.  The chair is morgan_orange. Information about MeetBot at http://wiki.debian.org/MeetBot.
08:00:01 <collabot`> Useful Commands: #action #agreed #help #info #idea #link #topic.
08:00:01 <collabot`> The meeting name has been set to 'functest_weekly_meeting_march_8th'
08:00:07 <morgan_orange> #info Morgan Richomme
08:00:39 <viktor_nokia> #info Viktor Tikkanen
08:01:14 <morgan_orange> #info agenda https://wiki.opnfv.org/functest_meeting
08:01:22 <morgan_orange> hi viktor_nokia
08:01:36 <viktor_nokia> Hi Morgan!
08:02:38 <morgan_orange> jose_lausuch:  is celebrating Brahmaputra in Stockholm
08:02:48 <viktor_nokia> :)
08:03:07 <viktor_nokia> BTW, juhak is on vacation this week...
08:03:12 <morgan_orange> so it should be quiet
08:04:03 <morgan_orange> let's discuss the different points of the agenda, Asian contributors may join and other people can catch up offline
08:04:10 <morgan_orange> #topic action points follow up
08:04:19 <morgan_orange> #link http://ircbot.wl.linuxfoundation.org/meetings/opnfv-testperf/2016/opnfv-testperf.2016-03-01-08.00.html
08:04:36 <morgan_orange> #info May-meimei did you have a look at the OPNFV Beijing meet-up group?
08:05:15 <morgan_orange> #info cleanup of the committer list in progress; adding new committers OK, removal takes a bit longer (needs the green light of the TSC) but it shall be OK soon
08:05:43 <morgan_orange> #info mail to express concern on git management done => we do everything in Master, bug fixes backported to Stable
08:05:59 <morgan_orange> #topic SR1
08:06:17 <morgan_orange> #info quiet post Brahmaputra time regarding the release
08:06:33 <morgan_orange> #action morgan_orange check new scenario candidates
08:07:01 <morgan_orange> #info the Release manager is no longer working at the Linux Foundation (mail received on Friday), self-organization to be planned for SR1
08:07:22 <morgan_orange> #info CI still working but no significant change since B release
08:07:42 <morgan_orange> #topic focus on Tempest
08:08:05 <morgan_orange> I think it would be nice to have a fresh look at the results
08:08:17 <morgan_orange> #link http://testresults.opnfv.org/dashboard/
08:09:18 <viktor_nokia> BTW, I've installed official Brahmaputra with Apex in our internal lab
08:09:19 <morgan_orange> #info apex/odl_l2 60-80%, apex/onos 75-92%, apex/odl_l3 56-68%
08:09:27 <morgan_orange> cool, and what are the results?
08:09:38 <morgan_orange> I was wondering about the influence of the POD on the results
08:09:46 <morgan_orange> as Apex is the only installer for which we have only 1 reference POD
08:10:30 <viktor_nokia> tempest results were:
08:10:34 <viktor_nokia> Ran: 209 tests in 3310.0000 sec.  - Passed: 200  - Skipped: 3  - Expected Fail: 0  - Unexpected Success: 0  - Failed: 6 Sum of execute time for each test: 2801.4488 sec.
08:11:00 <viktor_nokia> I had to use the --serial option because we have only a few floating IPs
08:11:22 <morgan_orange> OK
08:11:23 <jose_lausuch> hi
08:11:29 <jose_lausuch> #info jose_lausuch
08:11:32 <jose_lausuch> partially
08:11:34 <morgan_orange> so it shows the influence of the POD...
08:12:02 <jose_lausuch> sitting next to fdegir :)
08:12:10 <morgan_orange> the LF POD1 seems slower compared to Ericsson POD2, Orange POD2, ...
08:12:21 <viktor_nokia> I ran the tests from the docker image installed on the jumphost (if this makes sense)
08:12:23 <morgan_orange> hi jose_lausuch, kind regards to MC Fatih
08:12:59 <jose_lausuch> anac1 is also here :)
08:13:40 <jose_lausuch> viktor_nokia: shall we move officially to --serial ?
08:13:46 <morgan_orange> viktor_nokia: I think we can communicate towards the apex installer/releng teams on that; we have somehow the same issue as with the Intel/Orange POD2 PODs... we know that the same tests with the same installer lead to different results, which may be really misleading for a global evaluation of the system
08:13:51 <jose_lausuch> what is the time difference?
08:14:08 <morgan_orange> As far as I remember we moved to --serial
08:14:19 <morgan_orange> on all the labs, didn't we?
08:14:34 <viktor_nokia> There are few test cases that fail with timeout error (and timeout is 300s)
08:14:44 <viktor_nokia> This impacts the total time
08:14:57 <viktor_nokia> especially in case of serial execution
08:15:04 <jose_lausuch> morgan_orange: no, cmd_line = "rally verify start " + OPTION + " --system-wide"
08:15:13 <morgan_orange> jose_lausuch: ok
08:15:21 <morgan_orange> I believed we did so
08:15:32 <morgan_orange> that is something we can try and merge and see the influence on LF POD1
08:15:43 <jose_lausuch> yes
08:15:54 <morgan_orange> jose_lausuch: Say Olá to Ana as well
08:16:03 <jose_lausuch> ok :)
08:16:26 <morgan_orange> #action viktor_nokia set --serial options in repo to see influence on LF POD1
08:16:28 <jose_lausuch> jonas bjurel  and Chris also here
08:16:33 <morgan_orange> the dream team
08:16:39 <jose_lausuch> we can try for a while the --serial
08:16:44 <jose_lausuch> and see the results
08:17:07 <morgan_orange> yes, I set the action point to viktor_nokia; I will merge and cherry-pick on Stable
08:17:19 <jose_lausuch> ok
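The command-line change agreed above could look like the sketch below (a minimal illustration only: OPTION follows the snippet jose_lausuch quoted, and the `--set network` value plus the surrounding script details are assumptions, not Functest's actual code):

```python
# Minimal sketch of the change discussed above: switching the
# "rally verify start" invocation from system-wide (parallel) to
# serial execution. OPTION is a placeholder; its value and the
# surrounding script details are assumptions.
OPTION = "--set network"  # hypothetical tempest subset option

# current form quoted in the meeting (parallel, system-wide):
cmd_line = "rally verify start " + OPTION + " --system-wide"

# proposed form for PODs with few floating IPs:
cmd_line = "rally verify start " + OPTION + " --serial"
```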
08:17:51 <morgan_orange> It would be interesting to have a pre-deployment tool to audit the POD; I already raised that with the qtip people
08:18:20 <morgan_orange> but it is difficult to justify strict success criteria if we are able to show very different results from one POD to another..
08:19:22 <morgan_orange> #action viktor_nokia send a mail to apex/pharos to report the tempest results on apex/Nokia and show the difference with the results obtained on LF POD1
08:19:47 <morgan_orange> #info compass/nosdn 100%, compass/odl-l2 94-100%, compass/onos 94%, compass/ocl  31%
08:19:51 <viktor_nokia> ok
08:21:42 <morgan_orange> #info fuel/nosdn 86-98% (better results on LF POD2 than Ericsson POD2), fuel/odl_l2 88-95%, fuel/onos 91-95%, fuel /odl_l3 79-92%, fuel/bgpvpn 76%; fuel/nosdn_ovs 98-100%
08:22:06 <jose_lausuch> btw, did my patch work? to have rally and tempest preinstalled? I havent had time to check jenkins
08:22:06 <morgan_orange> it is strange to have a difference between nosdn and nosdn_ovs
08:22:21 <morgan_orange> jose_lausuch: I did not check either
08:22:46 <morgan_orange> seems that the last run was still OK..
08:24:06 <morgan_orange> but if I compare the last compass run with a previous one... the last try took 2h39' while some days ago it took 1h55'
08:24:14 <morgan_orange> not sure if it is due to the change
08:24:28 <morgan_orange> to be checked
08:24:46 <jose_lausuch> it's strange, we should save some time by having it pre-installed....
08:24:48 <jose_lausuch> yes
08:24:53 <jose_lausuch> can you action me?
08:25:11 <jose_lausuch> I will do a change also to have the vIMS and Promise dependencies pre-installed in the docker image
08:25:23 <morgan_orange> #info joid/nosdn ((Orange POD2) 88%, joid/odl_l2 98%, joid/onos: 87%
08:25:26 <jose_lausuch> the functest installation takes too long when installing tempest and promise
08:25:56 <morgan_orange> it is probably a topic for discussion in Espoo
08:26:29 <morgan_orange> shall feature projects be always integrated in Functest docker image or shall we have something more modular
08:27:41 <morgan_orange> viktor_nokia: could we have a summary of the errors per pod/scenario? I think that most of them are common and we could get some indication of the influence of the scenario on the tempest results
08:27:59 <morgan_orange> I do not know for the future if we can keep the same custom list whatever the scenario
08:28:33 <morgan_orange> I can help doing that
08:28:46 <morgan_orange> #action morgan_orange initiate a page on Tempest errors per scenario/installer
08:28:57 <viktor_nokia> I can make some summary, will take some time...
08:29:00 <morgan_orange> #action viktor_nokia reference apex tempest errors
08:29:09 <morgan_orange> #action jose_lausuch reference fuel tempest errors
08:29:19 <morgan_orange> #action morgan_orange reference joid errors
08:29:19 <jose_lausuch> shall we have a common format?
08:29:31 <morgan_orange> #action May-meimei reference compass tempest errors
08:30:04 <morgan_orange> jose_lausuch: viktor_nokia I suggest I initiate the page, and once done we validate the format
08:30:08 <jose_lausuch> ok
08:30:15 <morgan_orange> we could probably also automate such things a little bit..
08:30:35 <jose_lausuch> we can parse the logs from artifacts
08:30:35 <morgan_orange> I still have the action point to automate the weather forecasts... it is a good transition to the next topic
08:30:39 <jose_lausuch> and build something automatic
08:30:48 <jose_lausuch> better than maintaining a wiki
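Parsing the logs from the artifacts, as suggested above, could start from the tempest summary line viktor_nokia pasted earlier. A hedged sketch (the regex follows that one pasted line; real artifact logs may differ):

```python
import re

# Hedged sketch of parsing a tempest summary line from the CI logs, as
# discussed above. The format follows the line pasted earlier in the
# meeting; real artifact logs may vary.
SUMMARY_RE = re.compile(
    r"Ran: (?P<ran>\d+) tests in (?P<time>[\d.]+) sec.*?"
    r"Passed: (?P<passed>\d+).*?Skipped: (?P<skipped>\d+).*?"
    r"Failed: (?P<failed>\d+)"
)

def parse_tempest_summary(line):
    """Return the counters found in a tempest summary line, or None."""
    match = SUMMARY_RE.search(line)
    if not match:
        return None
    counters = {k: int(v) for k, v in match.groupdict().items() if k != "time"}
    counters["time"] = float(match.group("time"))
    return counters

line = ("Ran: 209 tests in 3310.0000 sec.  - Passed: 200  - Skipped: 3  "
        "- Expected Fail: 0  - Unexpected Success: 0  - Failed: 6")
summary = parse_tempest_summary(line)
```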
08:30:59 <morgan_orange> jose_lausuch: yes it will be better and we can reuse Valentin's framework
08:31:14 <morgan_orange> it is in python and able to call the API to dynamically create the page
08:31:23 <morgan_orange> so let's try to do it this way
08:31:34 <morgan_orange> we could decide to stop doing things in an unmaintainable wiki... :)
08:33:13 <morgan_orange> #link http://testresults.opnfv.org/reporting/vims/
08:33:36 <jose_lausuch> I agree
08:33:54 <morgan_orange> the code is here https://git.opnfv.org/cgit/releng/tree/utils/test/result_collection_api/tools/reporting
08:34:05 <morgan_orange> we could have something similar for tempest
08:34:11 <morgan_orange> and for the weather forecast
08:34:36 <jose_lausuch> yep
08:34:48 <morgan_orange> #info tempest and general status to be reported dynamically (decision to avoid using wiki for such things as it is long and difficult to maintain)
08:35:27 <morgan_orange> #topic success criteria
08:35:45 <morgan_orange> #info during last test weekly meeting we initiated a discussion on the success criteria
08:36:04 <morgan_orange> #info there is also a mail thread => shall criteria be more strict
08:36:27 <morgan_orange> as indicated, given that we can clearly see a big dependency on the POD, it is not easy to justify...
08:36:58 <morgan_orange> Moreover, criteria for performance tests are obvious; for functional tests it is decided a bit test by test
08:37:18 <morgan_orange> see for Brahmaputra (90%, vPing, vIMS, ...): we almost decided all the criteria
08:37:24 <morgan_orange> it is also a topic for Espoo
08:37:35 <morgan_orange> how to define such criteria
08:37:56 <morgan_orange> #info discussion on test criteria to be done during the Meet-up
08:38:49 <morgan_orange> #action morgan_orange share first version of Meet-up agenda before the end of the week
08:38:58 <morgan_orange> I would like also to add consideration on security
08:39:11 <morgan_orange> viktor_nokia: I saw that there is someone from Nokia dealing with opnfv-security
08:39:18 <morgan_orange> do you know him?
08:39:38 <viktor_nokia> hmmm... I need to check...
08:39:42 <morgan_orange> I think it would make sense to have a project dealing with test security
08:39:57 <morgan_orange> hardening of VMs, ...
08:40:52 <morgan_orange> David_Orange should start working on security groups (which is already a first step), but when discussing with operational teams, a security audit of the system is missing
08:41:16 <morgan_orange> viktor_nokia: the guy is Luke Hinds
08:41:23 <morgan_orange> #link https://wiki.opnfv.org/security
08:41:43 <viktor_nokia> yes, he is involved in the OPNFV Security Group but I don't know him personally
08:42:16 <morgan_orange> Ok, I can contact him to see if they plan to implement tests; as far as I understand, for the moment it is more about guides
08:42:53 <morgan_orange> #action morgan_orange contact OPNFV security group to see if test projects related to security are planned or not for C release (maybe a topic also for the Meet-up)
08:43:09 <morgan_orange> #topic API/Dashboard evolution
08:43:31 <morgan_orange> #info I got some questions from Cisco people on the API; they have good ideas for the evolution of the API
08:44:30 <morgan_orange> #info and they are working on ELK (Elasticsearch, Logstash, Kibana); they are currently exporting mongo info to use it in Kibana and trying to reproduce what we have in our basic dashboard (based on the MIT lib dygraph)
08:45:05 <morgan_orange> #info evolution of the API and the dashboards to be discussed in the Meetup
08:45:18 <morgan_orange> I asked whether they will contribute to Functest (as Peter left)
08:45:30 <morgan_orange> waiting for their answer; I invited them to join the Meetup
08:45:41 <morgan_orange> we should have also a new E/// contributor soon
08:46:40 <morgan_orange> Nikolas, who worked on the bgpvpn project and has strong skills in Rally/tempest/..., shall join; we could organize a more formal introduction next week. I would like to invite the new contributors and give them the floor (so we need GTM) to express their views with a fresh look
08:46:55 <morgan_orange> #action morgan_orange organize GTM next week for weekly meeting, keep a slot for new contributors
08:48:13 <morgan_orange> #info on the API, we should add new fields (version/scenario) and a suite of unit tests is needed to ensure the stability of the APIs (migration or at least data changes will be needed); options for the time search: for now we can use period=N to retrieve the last N days, but there is no API way to retrieve the last 5 runs of scenario X on installer Y
08:48:59 <morgan_orange> #info that is why, for the automation of the forecast, only the results of the last N days (through the period param) can be retrieved and then post-processing is needed => connected to the discussion on success criteria
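The client-side post-processing described in the #info above could be sketched as follows (a hedged illustration: the field names "installer", "scenario" and "creation_date" are assumptions about the result records, not the documented API schema):

```python
# Hedged sketch of the post-processing described above: since the API
# only supports period=N (last N days), selecting "the last 5 runs of
# scenario X on installer Y" must be done client-side. The record
# field names used here are assumptions.
def last_runs(results, installer, scenario, count=5):
    """Return the newest `count` results for one installer/scenario."""
    matching = [r for r in results
                if r.get("installer") == installer
                and r.get("scenario") == scenario]
    matching.sort(key=lambda r: r.get("creation_date", ""), reverse=True)
    return matching[:count]

# hypothetical sample data standing in for an API response
sample = [{"installer": "fuel", "scenario": "odl_l2",
           "creation_date": "2016-03-0%d" % d} for d in range(1, 8)]
recent = last_runs(sample, "fuel", "odl_l2")
```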
08:49:10 <morgan_orange> #topic task allocation
08:49:44 <morgan_orange> #info we will wait for the introduction of the new contributors but we shall be able to allocate the main tasks as soon as we know them
08:50:01 <morgan_orange> #info roadmap for C release is expected after the meet-up
08:50:17 <morgan_orange> but we need more feedback from our Asian contributors
08:50:31 <morgan_orange> #topic AoB
08:50:42 <morgan_orange> any information you want to share?
08:51:06 <viktor_nokia> not from my side...
08:51:25 <jose_lausuch> yes, https://www.youtube.com/watch?v=rOwN3m0045c
08:51:34 <morgan_orange> for the Berlin CFP, viktor_nokia, would it make sense to propose something dealing with troubleshooting Tempest across the different PODs? a challenge to ensure stability and interoperability
08:51:41 <jose_lausuch> I will create a new version soon speaking myself, and upload it to opnfv channel :)
08:51:54 <morgan_orange> cool
08:52:34 <morgan_orange> #info Jose Lausuch international Entertainment soon presents Functest versus Marvels
08:52:40 <morgan_orange> #link https://www.youtube.com/watch?v=rOwN3m0045c
08:52:58 <jose_lausuch> :)
08:53:31 <morgan_orange> viktor_nokia: for the Berlin CFP, we should submit before the end of March, so if you are interested I think we can draft something involving you, Jose and myself
08:54:01 <jose_lausuch> ok for me
08:54:07 <viktor_nokia> what is CFP? tried google already...
08:54:13 <morgan_orange> Call For Paper
08:54:22 <morgan_orange> we can submit presentation for the Summit
08:54:27 <jose_lausuch> viktor_nokia: to submit the presentation ideas for the submit
08:54:35 <morgan_orange> I already exchanged with jose on functest in general
08:54:37 <jose_lausuch> summit*
08:55:03 <viktor_nokia> In principle OK for me but I'm not 100% sure if I will be there
08:55:12 <morgan_orange> but something dedicated to tempest and how representative it is, explaining also the influence of the POD... I think it could be useful to the community
08:55:30 <viktor_nokia> yes, definitely
08:56:07 <morgan_orange> #action morgan_orange initiate Tempest presentation for Berlin
08:56:20 <morgan_orange> anything else you want to share?
08:56:30 <morgan_orange> jose_lausuch: I have a meeting with .... E// in 4 minutes
08:56:37 <jose_lausuch> nope
08:56:41 <jose_lausuch> morgan_orange: cool
08:56:42 <viktor_nokia> Nokia public lab is almost ready
08:56:53 <morgan_orange> let's info that
08:57:01 <morgan_orange> #info Nokia public lab soon ready
08:57:07 <morgan_orange> it will be based on apex, I assume
08:57:12 <viktor_nokia> yes
08:57:21 <morgan_orange> great to have a second source..
08:57:22 <viktor_nokia> we have Brahmaputra there currently
08:57:52 <morgan_orange> ok thanks all, see you next week, hope I will be able to deal with all my action points....lots of internal communication ahead to explain what we have done :)
08:57:58 <morgan_orange> #endmeeting