08:00:24 #startmeeting Functest weekly meeting November 24th
08:00:24 Meeting started Tue Nov 24 08:00:24 2015 UTC. The chair is morgan_orange. Information about MeetBot at http://wiki.debian.org/MeetBot.
08:00:24 Useful Commands: #action #agreed #help #info #idea #link #topic.
08:00:24 The meeting name has been set to 'functest_weekly_meeting_november_24th'
08:00:33 #info Morgan Richomme
08:00:35 #info Viktor Tikkanen
08:00:39 #info Juha Kosonen
08:01:06 #info Qinglong Lan
08:01:09 #info agenda https://wiki.opnfv.org/functest_meeting
08:01:16 let's start
08:01:26 #topic Action point follow-up
08:01:35 #link http://ircbot.wl.linuxfoundation.org/meetings/opnfv-testperf/2015/opnfv-testperf.2015-11-17-10.02.html
08:02:08 #info action 1) => not possible to grant access to LF POD2; LF POD2 is considered a production CI lab, so access is restricted
08:02:35 #info access to be found on dev labs; Orange shall be able to grant access to one of our labs
08:03:02 #info action 2) => wiki updated https://wiki.opnfv.org/functextnexttaks
08:03:27 #info I added a table showing the dependencies towards Doctor, Copper, ONOSFW, OVNO, PolicyTest and Promise
08:03:31 #info XiaoguangLi
08:04:21 #info seen from the Summit, some projects are already mature => question of integration; for other projects it is not so clear whether test suites will be available for integration in Functest for the B release
08:04:55 #info action 3) => work on Orange POD planned tomorrow (jumphost connected, env ready...)
08:05:13 #info action 4) => JIRA updated and Sprint 3 started
08:05:43 #info action 5) => viktor_nokia completed the Tempest backlog; we will have a dedicated topic later
08:06:11 #info action 6) => discussion on the synthesis of Functest for the dashboard... let's also discuss it in a separate topic
08:06:28 #topic B release Sprint 3
08:06:43 #info Jira Sprint 3 started: https://jira.opnfv.org/secure/RapidBoard.jspa?rapidView=59
08:07:15 #info transition Sprint, still some code to be produced (especially ODL test suite reformatting), but first tasks on integration
08:07:58 #info LF POD2 (Fuel) ready; jose should work on the integration with Apex, Mei Mei with Huawei/Compass, and I should discuss with the joid team
08:08:16 #info reminder of the B release Functest success criteria => running all our suites on the 4 envs
08:08:38 welcome jose_lausuch :)
08:09:17 #topic status on ONOS
08:09:23 Qinglong: go ahead
08:09:27 #info Huawei POD is ready now; need some time to debug with it
08:09:43 hi, sorry for being late :)
08:09:43 #info ML3 is in progress; we are writing test cases, about 3 weeks to finish the scripts
08:09:45 is it the Huawei US or Huawei China lab?
08:09:50 #info Jose Lausuch
08:09:52 China first
08:09:54 ok
08:10:02 then use the US one
08:10:07 do you know who the lab contact is?
08:10:16 I assigned Mei Mei to run the complete Functest
08:10:18 Mei Mei...
08:10:24 is she the right contact?
08:10:32 yes
08:10:39 this morning we had a call
08:10:39 ok
08:10:57 ML3 is in progress
08:11:14 #info Mei Mei assigned to run the Functest suite on the Huawei target lab (in China first, then US)
08:11:15 need some time to write cases & scripts
08:11:24 yes, I'm now preparing the test cases and scripts for L3
08:11:30 cool
08:12:06 I initiated the sprint yesterday for 3 weeks, so it may be a bit short for completion, but we shall not be far
08:12:13 that's all.
And we may ask for some help as we prepare CI
08:12:36 of course, do not hesitate to ask for help regarding CI integration
08:12:58 the goal is to include ONOS as ODL today, and to be able to run on any of the installers providing an ONOS configuration
08:13:13 CI will be tricky and fun...
08:13:36 as mentioned, CI work initiated in Sprint 3, but it will be more important in Sprint 4, with docs...
08:13:54 BTW, I can also help you declare the ONOS test case in the DB
08:13:56 Sprint 4 shouldn't have that much coding
08:14:11 if you already have the titles of the test cases
08:14:19 good
08:14:20 I think we could declare ONOS as just 1 suite
08:14:40 #info possibility to add ONOS as a Functest test case in the DB
08:14:42 #link https://wiki.opnfv.org/brahmaputra_testing_page
08:14:59 #info B testing page updated
08:15:00 that would be great
08:15:24 #info table still to be updated (with results and dashboard-ready status)
08:15:39 #info we planned to discuss dashboard and DB during next Thursday's meeting
08:16:01 #action morgan_orange xiaoguang declare ONOS suite in the DB and update the wiki accordingly
08:16:15 any question on ONOS?
08:16:58 #topic tempest
08:17:15 #info Viktor updated backlogs https://jira.opnfv.org/browse/FUNCTEST-70
08:18:00 #info last 9 errors analyzed by viktor_nokia
08:18:25 I will probably try to run those failed cases one by one in our setup
08:18:27 #info most of the errors seem linked to ODL
08:18:46 #info interesting to see the difference with ONOS...
08:19:04 we also do not have a reference without an SDN controller
08:19:14 as far as I know it is not considered a target configuration
08:19:25 could be interesting to compare...
08:19:46 I am a bit surprised by the error related to quota
08:19:49 that would require changing neutron.conf
08:19:54 maybe not so complex
08:20:07 no, but an additional config to manage
08:20:34 at least on dev labs we shall be able to run the suites with SDN controllers just to compare
08:20:40 yes
08:20:43 regarding quota
08:20:52 that happens sometimes if we have run tests before
08:20:58 and haven't cleaned them up
08:21:10 hmm
08:21:25 here, as far as I remember, it was the suite run automatically after a fresh install
08:21:25 but it shouldn't happen when running after a fresh installation
08:21:36 ok
08:21:39 so it should not be the case...
08:21:44 then, yes
08:21:55 quota errors could also be caused by resources which were not freed by previously failed test cases (in the same suite)
08:22:29 #info discussion on quota errors: strange, as they occur after a fresh install and quotas had already been extended for Arno
08:22:40 #info viktor_nokia "quota errors could be also caused by resources which were not freed by previously failed test cases (in the same suite)"
08:23:19 #action viktor_nokia (when a dev lab is available) test the Tempest suite manually, check quotas before the test
08:24:12 regarding "subnet net04_ext__subnet of external network net04_ext is visible for the user (could it be so that the user has admin rights for some reason?)"
08:24:47 could be linked to the way this default subnet is created
08:25:01 jose_lausuch: it is created by the Fuel installer, not Functest, right?
08:25:07 yes
08:25:09 only in fuel
08:25:14 foreman was different
08:25:17 I checked the generated tempest.conf
08:25:19 and apex/compass, I guess
08:25:20 [identity] username = admin password = octopus tenant_name = admin alt_username = admin
08:25:35 tempest uses admin, right?
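[Editor's note: the [identity] excerpt quoted above is what the discussion proposes to change so that the suite does not run as admin. A minimal sketch of that idea, assuming hypothetical tenant/user names ("functest", "functest_user") and a plain rewrite of the generated tempest.conf; this is not the actual Functest or Rally code.]

```python
# Sketch: point the [identity] section of a generated tempest.conf at a
# dedicated non-admin tenant. Names below are hypothetical placeholders.
import configparser
import io

# Excerpt mirroring the tempest.conf quoted in the meeting log.
TEMPEST_CONF = """\
[identity]
username = admin
password = octopus
tenant_name = admin
alt_username = admin
alt_tenant_name = admin
"""

def use_dedicated_tenant(conf_text, user, password, tenant):
    """Return conf_text with the [identity] credentials replaced."""
    parser = configparser.ConfigParser()
    parser.read_string(conf_text)
    ident = parser["identity"]
    ident["username"] = user
    ident["password"] = password
    ident["tenant_name"] = tenant
    # keep the alternate credentials non-admin as well
    ident["alt_username"] = user
    ident["alt_tenant_name"] = tenant
    out = io.StringIO()
    parser.write(out)
    return out.getvalue()

patched = use_dedicated_tenant(TEMPEST_CONF, "functest_user", "secret", "functest")
print(patched)
```

The tenant and user themselves would still have to be created in Keystone beforehand, as noted later in the discussion (a tenant/project first, then its users).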
08:25:49 yes
08:25:54 alt_tenant_name = admin as well
08:25:57 and tempest creates its own nets
08:27:04 I do not know if it is easy to configure the test to create a different tenant (not admin)
08:27:14 create a user + a tenant?
08:27:18 in tempest.conf, I guess?
08:27:24 for vIMS, Valentin created a dedicated tenant
08:27:57 OK, let's also see if we can progress on that when dev labs are available
08:28:06 well, you first need a tenant (project) and then as many users as you want (normally 1) for that tenant/project
08:28:21 #info error seems linked to configuration: the subnet created by Fuel is visible, and it should not be the case
08:28:42 #info discussion on the opportunity to change the config in order to avoid running everything as admin, in the admin tenant
08:29:02 #info subnet mentioned is only created by Fuel, a priori not the case with other installers
08:29:11 #info so wait and see the next runs on other labs to compare
08:29:21 #info and see if it is not a Fuel-specific error
08:29:46 it seems there is also an error linked to the fact that we are on Juno
08:30:03 subnetpool-create not supported
08:30:11 mmm, that was maybe run on Juno
08:30:23 but the target is Kilo or even Liberty
08:30:29 yes
08:30:35 so wait and see as well
08:30:52 also an issue linked to floating IPs
08:31:35 it is probably caused by the following row from the generated tempest.conf: default_network = 172.30.10.0/24
08:31:54 which is the same as the external network
08:32:02 #info tempest.conf to be improved
08:32:34 I thought rally genconf was in charge of that
08:33:34 it is the case, but there is probably some way by configuration to modify the target tempest.conf (as we used to face for the keystone v3 tests); maybe some work to be done upstream in rally
08:34:29 ok
08:34:31 #action viktor_nokia test (when dev lab ready) the influence of tempest.conf changes on Tempest, and see if an upstream patch may be created on rally to fix the issues that would be solved by the config change
08:34:53 anyway, thanks
viktor_nokia, for the backlogs: it is better to see them here rather than in a wiki :)
08:35:22 9 errors: 2 are clearly identified as ODL, 2 are related to quotas, and the others are maybe due to ODL or misconfiguration
08:35:33 I think it is not bad (it was worse in Arno... :))
08:35:42 yes
08:35:45 any question regarding Tempest?
08:35:55 nope
08:36:11 what is the target test suite for tempest?
08:36:18 smoke? full?
08:36:25 smoke shall be the minimum
08:36:26 or customized?
08:36:34 customized would make sense
08:36:51 we can also run full
08:36:58 but that would maybe give a lot of errors
08:37:03 yes, we can run it
08:37:06 as soon as we have smoke 100% working
08:37:13 we can try the full one, but...
08:37:18 not sure if it's worth it
08:37:28 customized would be a file with an individual test case list
08:37:33 yes
08:37:38 we imagined that for Arno
08:38:00 I made a list
08:38:06 for smoke
08:38:16 but it was slightly different for each installer
08:38:23 for me it would be a good option to extend smoke without going to full, which would trigger lots of errors
08:38:32 +1
08:38:52 that is another point: we must remain as installer agnostic as possible in our tests
08:39:06 we must not create a customized tempest suite per apex/fuel/compass/joid
08:39:20 we must guarantee the common suite for all the installers
08:39:27 fully agree
08:39:34 yes, that's right
08:39:47 viktor_nokia: can I assign you an action to study the creation of a customized OPNFV tempest suite?
08:40:01 OK
08:40:19 #action viktor_nokia study the opportunity to create a customized, installer & controller agnostic, OPNFV Tempest suite
08:40:30 it will be the same for Rally
08:40:52 for the moment we just merged the different scenarios of the different modules; it will probably make sense to create one OPNFV scenario
08:40:58 juhak: what do you think?
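[Editor's note: the "customized suite" discussed above amounts to one curated, installer-agnostic list of Tempest test names. A minimal sketch of the idea; the test names below are illustrative examples, not the real OPNFV list.]

```python
# Sketch: keep one curated test list and intersect it with whatever the
# deployed Tempest actually discovers, so the selection is identical for
# every installer by construction.

# Hypothetical curated list (would live in a versioned file in practice).
CUSTOM_SUITE = {
    "tempest.api.compute.servers.test_create_server",
    "tempest.api.network.test_networks",
    "tempest.scenario.test_network_basic_ops",
}

def select_tests(discovered, custom_suite=CUSTOM_SUITE):
    """Return the discovered tests belonging to the common OPNFV suite,
    in a stable order, regardless of which installer deployed the stack."""
    return sorted(t for t in discovered if t in custom_suite)

discovered = [
    "tempest.api.network.test_networks",
    "tempest.api.compute.servers.test_create_server",
    "tempest.api.identity.test_tokens",   # not in the common suite
]
print(select_tests(discovered))
```

The resulting list could then be fed to the Tempest runner as the "file with an individual test case list" mentioned in the discussion.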
08:41:21 I agree
08:42:01 OK, I can action you too then :)
08:42:24 ok
08:42:27 #action juhak study the creation of an OPNFV rally scenario
08:42:44 I think both actions are covered by the current JIRA, no need for additional admin stuff
08:43:03 I suggest we spend some time next week to focus on Rally
08:43:05 great
08:43:19 #topic Discussion on the summary of Functest for dashboard
08:43:29 it was an action from last week
08:43:39 in the dashboard we shall provide an overall status
08:43:58 what will be our criteria for B-Release as seen from the dashboard?
08:44:34 I would say
08:44:43 everything working! :)
08:44:47 total/passed numbers for each project at least?
08:44:58 yes
08:45:36 I would say vPing 100% OK, Tempest & Rally (errors shall be documented), but we expect at least 90% for the Tempest smoke suite
08:46:03 if we get Beryllium we might have 100% smoke
08:46:06 for the SDN controller suites, as they are defined by us, I would say 100% (but for ODL last time we got 3 errors that were documented as due to ODL)
08:46:38 and for me, for vIMS the criteria will be the ability to deploy the orchestrator and the VNF and run the tests
08:46:47 as in Arno, we might get different results for different installers
08:46:51 but we do not really care about the results of the tests, as they have not been tuned
08:48:19 basically, what shall we put in the landing page https://www.opnfv.org/opnfvtestgraphs/summary for functest
08:48:29 to provide an automatic green light from CI
08:48:53 ok
08:48:58 are you OK with
08:49:05 - number of test suites
08:49:21 and independently from the installers
08:49:26 - vPing 100% OK
08:49:36 so this page is the dashboard we are talking about?
08:49:51 yes
08:49:56 this page is the prototype
08:50:05 from jenkins, results are pushed into the DB
08:50:20 then the LF portal (this page) gets info from the DB to build the dashboard
08:50:40 I created a "virtual" test case called status that could be used to give the overview of the project
08:50:49 instead of Test cases (3): vPing (functest), Tempest (functest), Ping (yardstick)
08:50:51 in the landing page
08:51:27 Marion from LF shall also add filters on the existing page to be able to compare the results per POD if we consider the project view
08:51:31 https://www.opnfv.org/opnfvtestgraphs/per-test-projects
08:52:00 BTW, the foreman tempest number of failures (~40) corresponds with the failure numbers in our foreman setup
08:52:18 we used to have more errors on foreman as well
08:52:34 not so many differences
08:52:39 as far as I remember
08:52:47 but now tim migrated to RDO
08:52:52 foreman will no longer be supported
08:53:00 ???
08:53:02 so no need to spend time on troubleshooting
08:53:13 how about Apex?
08:53:24 Apex is based on RDO, which is based on OOO
08:53:37 that is a different approach (no more foreman)
08:53:48 so Foreman is completely abandoned?
08:53:57 yes
08:54:12 that is my understanding
08:54:22 I think so, yes
08:54:33 they are no longer working on it
08:54:41 all the efforts go to apex
08:54:47 and we will be able to run functest soon
08:54:54 for now, just virtualized
08:54:57 regarding the difference in Arno between the installers, we have this view
08:54:58 https://www.opnfv.org/opnfvtestgraphs/per-installers
08:55:50 and the last run with foreman done on the LF POD was not far from 40 errors
08:56:19 for 100 tests run (compared to 14 errors for 110 tests on Fuel)
08:56:33 so, back to my summary
08:57:12 https://git.opnfv.org/cgit/releng/tree/utils/test/result_collection_api/dashboard/functest2Dashboard.py
08:57:22 for the status case
08:57:29 I planned (but have not implemented yet)
08:57:35 nb of suites run
08:58:07 average tempest failure rate (shall be less than 10%)
08:58:37 vPing (must be fully OK)
08:58:54 and I suggest that all the controller suites must be OK (as we manage them, not an upstream suite)
08:59:08 vIMS: just consider the run in jenkins, not the results
08:59:09 ok
08:59:31 and rally?
08:59:34 #info discussion on the criteria to be automatically displayed in the dashboard page for B-release acceptance
08:59:40 rally like Tempest
08:59:43 ok
08:59:49 maybe create only a VIM category
09:00:10 #link https://git.opnfv.org/cgit/releng/tree/utils/test/result_collection_api/dashboard/functest2Dashboard.py
09:00:21 so we will have very basic lights
09:00:32 green light for the VIM => tempest + rally > 90%
09:00:41 green light for the controllers => 100%
09:00:49 ok, let's keep it simple
09:00:51 I agree
09:00:51 green light basic => vPing 100%
09:01:04 green light VNF deployment => deployment of vIMS
09:01:09 4 criteria
09:01:09 let me info that
09:01:18 corresponding to the 6 or 7 suites
09:01:20 #info green light for the VIM => tempest + rally > 90%
09:01:27 #info green light for the controllers => 100%
09:01:30 2 for VNFs + 2 for VIMs + N for Controllers
09:01:35 #info green light basic => vPing 100%
09:01:41 #info green light VNF deployment => deployment of vIMS
09:01:47 ok
09:01:59 #action morgan_orange implement status for dashboard
09:02:04 #topic AoB
09:02:07 we are already late
09:02:09 I have something here :)
09:02:10 yes
09:02:11 but some information
09:02:12 let me info it quickly
09:02:23 #info change in Functest team
09:02:45 #info -3 (HP, metaswitch) and +2 (Cisco, HP)
09:02:54 #info Fuel already integrated OpenContrail; I have a contact person from Juniper to see what tests to run. Work to be started this week.
09:02:56 #info will send a mail for formal nomination
09:03:01 oops, I should have waited
09:03:36 jose_lausuch: cool, so maybe update the dependencies towards OVNO in https://wiki.opnfv.org/functextnexttaks
09:04:21 #info ODL extension: Peter and amaged__ should work on this, refactoring to be planned. For B-Release, cherry-pick of ODL tests (GBP + integration tests); for C release, a study to automate the integration of upstream tests to be planned
09:04:24 ok
09:04:31 any other info you would like to share?
09:04:34 yes
09:05:14 ok, have a good week and enjoy Sprint #3
09:05:18 #endmeeting
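[Editor's note: the four green-light criteria agreed in the dashboard topic (basic/vPing, VIM, controllers, VNF) can be sketched as a small function. The input format (a dict of suite pass rates in [0, 1] plus a vIMS deployment flag) is an assumption for illustration, not the actual functest2Dashboard.py data model.]

```python
# Sketch of the four B-release green lights agreed in the meeting.

def functest_status(results):
    """Compute the four dashboard green lights from suite results.

    results: {"vPing": float, "tempest": float, "rally": float,
              "odl": float, "onos": float, "vims_deployed": bool}
    (pass rates in [0, 1]; field names are hypothetical)
    """
    return {
        # green light basic => vPing 100%
        "basic": results["vPing"] == 1.0,
        # green light VIM => tempest + rally > 90%
        "vim": results["tempest"] > 0.9 and results["rally"] > 0.9,
        # green light controllers => 100% (suites managed by Functest)
        "controllers": all(results[c] == 1.0 for c in ("odl", "onos")),
        # green light VNF => vIMS deployment ran in Jenkins (results not judged)
        "vnf": results["vims_deployed"],
    }

status = functest_status({
    "vPing": 1.0, "tempest": 0.95, "rally": 0.92,
    "odl": 1.0, "onos": 1.0, "vims_deployed": True,
})
print(status)
```

With this sample input all four lights are green; a Tempest smoke pass rate at or below 90% would turn the VIM light off while leaving the others unchanged.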