08:00:08 #startmeeting Functest weekly meeting August 30th
08:00:08 Meeting started Tue Aug 30 08:00:08 2016 UTC. The chair is morgan_orange. Information about MeetBot at http://wiki.debian.org/MeetBot.
08:00:08 Useful Commands: #action #agreed #help #info #idea #link #topic.
08:00:08 The meeting name has been set to 'functest_weekly_meeting_august_30th'
08:00:13 #topic call roll
08:00:19 #info Morgan Richomme
08:00:23 #info Jose Lausuch
08:00:29 #info raghavendrachari
08:00:32 #info Juha Kosonen
08:00:35 #info Viktor Tikkanen
08:00:38 #info Valentin Boucher
08:00:59 #agenda: https://wiki.opnfv.org/display/functest/Functest+Meeting
08:01:11 #topic Action point follow up
08:01:24 #info AP1: morgan_orange SerenaFeng sync with Juraj for dashboard activity
08:01:30 #action morgan_orange SerenaFeng sync with Juraj for dashboard activity
08:01:55 no work on this (focus on troubleshooting / scenarios)
08:01:59 #info AP2: morgan_orange lhinds see why security_scan is no longer triggered on apex scenarios
08:02:05 #info done, scenario constraint modified, security_scan now runs, see next sections for discussion on issues
08:02:13 #info AP3: ollivier May-meimei clarify port management on ODL
08:02:18 #info done
08:02:24 #info AP4: morgan_orange contact Apex to see why 50% of the scenarios are FAIL due to creds
08:02:32 #info done: scenarios with failed credentials correspond to failed apex deployments...they try to run functest even if the SUT is not properly deployed
08:02:40 #info AP5: ollivier look at https://build.opnfv.org/ci/view/functest/job/functest-apex-apex-daily-master-daily-master/296/console
08:02:47 #info error not reproduced https://build.opnfv.org/ci/view/functest/job/functest-apex-apex-daily-colorado-daily-colorado/35/console
08:02:53 #info AP6: morgan_orange clean JIRAs
08:03:01 #info done even if it is not completed, David's last mail showed lots of unassigned issues (JIRAs for which we did not set the release version for the fix) + 11 / 50 issues assigned to Colorado 1.0 that are unresolved
08:03:13 #action morgan_richomme re-clean based on David's list
08:03:21 #info for Colorado the release manager said that now only bug fixes must be cherry-picked, and each must correspond to a JIRA
08:03:26 #action morgan_richomme create JIRAs if needed to reflect bug fix activity
08:03:33 #info AP7 morgan_orange launch committer promotion procedure for SerenaFeng and ollivier
08:03:39 #info done, waiting for the last committer votes
08:03:55 who is missing?
08:04:15 May-meimei: do you know the other Huawei committers in Functest (only their vote is missing)
08:04:55 Qinglong Lan, Li Xiaoguang and Zhang Haoyu
08:05:03 #info Juha Haapavirta
08:05:20 morgan_orange: I know them
08:05:28 any question regarding the action points? we will discuss security scan, dashboard and scenarios in the next sections
08:05:31 will notify them
08:05:42 May-meimei: ok thanks
08:06:01 #topic Colorado status
08:06:28 #info scenario status: apex, fuel and joid colorado runs started
08:06:39 any news on the compass colorado job?
08:07:29 dedicated reporting pages created: http://testresults.opnfv.org/reporting/functest/release/colorado/index-status-apex.html, http://testresults.opnfv.org/reporting/functest/release/colorado/index-status-compass.html, http://testresults.opnfv.org/reporting/functest/release/colorado/index-status-fuel.html, http://testresults.opnfv.org/reporting/functest/release/colorado/index-status-joid.html
08:07:39 master reporting still available
08:07:58 #info colorado reporting available
08:08:06 #info CG_Nokia
08:08:09 lhinds is sorry for being late
08:08:29 mmm if it wasn't for the copper test we would have green in 2 scenarios for apex
08:08:49 same for parser/fuel
08:08:57 #info ollivier
08:09:05 yep, we will detail a little bit as we already have some answers
08:09:18 #info apex: copper => mail from Tim, a fix will be provided
08:09:33 #info apex without copper => 2 scenarios would already be validated
08:09:43 #info apex/security scan => I will create a JIRA
08:10:08 lhinds did you have time to look at the python error?
08:10:35 not seen that yet, it was a public holiday yesterday..do you have a link to the failed job?
08:11:17 lhinds any colorado job, the error is the same whatever the scenario
08:11:29 installation looks fine
08:11:47 morgan_orange: this page doesn't work http://testresults.opnfv.org/reporting/functest/release/colorado/index-status-compass.html
08:12:12 #action morgan_orange check reporting links http://testresults.opnfv.org/reporting/functest/release/colorado/index-status-compass.html
08:12:22 there is no compass job on colorado
08:12:32 morgan_orange: compass jobs will begin to run soon
08:12:41 so jose_lausuch it is normal...
08:13:08 lhinds https://build.opnfv.org/ci/view/functest/job/functest-apex-apex-daily-colorado-daily-colorado/42/console
08:13:26 let's work on it offline
08:13:40 #info compass colorado jobs will soon be available
08:14:01 do you mean 'sudo: hiera: command not found'?
08:14:43 ah ok
08:15:27 oh i see it
08:15:39 #info compass/moon: discussion yesterday by mail - still some errors: odl (first test / AD-SAL), Tempest errors (as joid/xenial scenario), Moon (internal hardcoded passwords)
08:15:53 ok, I will raise a JIRA and fix
08:16:25 #info I suggest for the odl test to do nothing, i.e. accept that the first test fails (normal because moon, which does an identity federation, did it only partially for odl) and document it
08:17:32 #info fuel/parser issue => mainly on noha scenarios (problem of resources on non-ha pods?) => shall we consider a constraint in the parser declaration to run it only on ha scenarios?
08:18:27 we can ask them
08:18:31 but it might be the reason
08:18:35 it also fails in virtual deployments
08:19:08 SerenaFeng: is working with the parser team, I do not know if we have a clear view on resources in non-ha...
08:19:54 mainly on virtual pods
08:20:01 #action SerenaFeng see with parser team if parser must be run in HA mode only
08:20:03 success on all physical pods
08:20:17 not related to HA mode
08:20:55 we think maybe the resources on the virtual env are not big enough
08:20:56 usually HA runs on physical pods, non-HA on virtual, but it is not a rule
08:21:03 I have replied to the email
08:21:23 all failures due to "No host available"
08:22:19 we can discuss it offline
08:22:24 ok
08:22:51 I will troubleshoot it with parser people asap
08:22:52 #info joid/domino: domino excluded from automatic reporting => manual workaround agreed by domino and indicated in the release note
08:23:21 #info joid/lxd scenario: discussed offline, the cirros lxd image does not display any log in the nova console
08:23:47 #info joid/lxd: suggestion to remove tests relying on nova logs for lxd scenarios and document it in the release note
08:23:50 any objection?
08:24:10 no
08:24:14 it's what we can do for now
08:24:14 it means no healthcheck for the lxd scenario, as the last test for DHCP is based on nova logs
08:24:33 and create a JIRA for improvement of healthcheck in D
08:24:42 I'm not 100% happy with that, but we can't do much about it now
08:24:54 #action morgan_orange adapt lxd scenario declaration + release note accordingly
08:24:57 it's a test problem, not the scenario functionality
08:25:32 #info all odl_l3 scenarios failed due to tempest not reaching 100%
08:25:50 #action viktor_t see if for odl_l3 we should exclude test cases
08:25:53 I have one comment/question...
08:26:05 I've checked this morning many recent functest.log files from different pods in artifacts, but in all the logs the tempest cases passed without a single error
08:26:16 e.g. in http://artifacts.opnfv.org/logs_functest_huawei-pod1.html
08:26:30 it seems that something was changed between 27.7 and 12.8 so that after 12.8 only OK logs are stored in artifacts??
08:27:22 was just looking for odl_l3-related logs...
08:27:45 viktor_t: hmm, it corresponds to the change in jenkins, we exit -1 when there are errors and I assume due to this exit we do not call the function to push results to artifacts
08:27:59 we push when everything is OK...
08:28:11 that's bad
08:28:13 we should change this
08:28:21 should it be the other way around?
08:28:32 we don't need OK logs :)
08:28:37 +1
08:28:43 +1
08:28:51 +1
08:28:53 anyway, for me it looks like the tempest issues in odl_l3 are not common to all the installers
08:29:02 1) failed test cases are not always the same on different pods
08:29:15 2) some test cases either fail or pass in consecutive runs (see e.g. os-odl_l3-nofeature-ha at http://testresults.opnfv.org/reporting/functest/release/master/index-tempest-compass.html)
08:29:31 #info issue with artifacts: we push only the results of successful scenarios
08:29:44 looks like an ODL timing issue etc.
08:30:09 #info it is probably due to the refactoring of the functest job; if we exit -1, we do not push the results, the -1 should come only at the end of the run, after results are pushed to artifacts
08:30:24 jose_lausuch: can I action you for that?
08:30:36 morgan_orange: yes
08:30:39 jira bug
08:31:02 #action jose_lausuch fix issue of artifacts not pushed in case of exit -1 (any case failed)
08:31:48 #info regarding odl_l3 viktor_t says that the errors differ depending on the installer..but logs are needed...and as logs are not pushed on errors we do not have them. First fix artifacts, then troubleshoot odl_l3 scenarios
08:32:28 #action viktor_t once artifact issue fixed, study odl_l3 tempest errors
08:32:46 #info bgpvpn troubleshooting in progress
08:32:56 that is all for the scenarios from me
08:33:11 #info bug fix / framework
08:33:30 #info ollivier pushed some patches to fix the messy logs in jenkins
08:33:50 ollivier: is it ready to be merged?
08:33:59 morgan_orange: I think so
08:34:33 #link https://gerrit.opnfv.org/gerrit/#/c/19793/
08:34:57 OK I will do it after the meeting
08:35:00 morgan_orange: we should check if performance decreases as we stop buffering. But it should be ok.
08:35:41 ollivier: ok thanks for the correction
08:35:48 the ideal case is that we don't use "print" functions, and we use the logger for everything
08:36:08 who can take care of removing logger as a function parameter in functest_utils?
08:36:10 jose_lausuch: yes but we have legacy code, we should try to remove the prints wherever they are...
08:36:10 +1
08:36:20 morgan_orange: true, we should fix that. I will do it
08:36:45 ok
08:36:54 jose_lausuch: in at least one case you can't remove it, but I added the flush call
08:37:14 any other topic for Colorado?
08:37:27 bug/issue/concern/fear/
08:37:27 when is the release date?
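The jenkins fix actioned above (push logs to artifacts before propagating the failure code) can be sketched as below. This is a minimal illustration only; run_tests and push_logs_to_artifacts are hypothetical placeholders, not the real functest job functions.

```shell
#!/bin/sh
# Sketch: capture the suite's exit code instead of exiting immediately,
# push logs to artifacts unconditionally, then propagate the saved code.
# run_tests and push_logs_to_artifacts are illustrative stand-ins.

run_tests() { return 1; }                       # stand-in: suppose a test case failed
push_logs_to_artifacts() { echo "pushed functest.log"; }

run_tests
ret=$?                                          # save the code rather than exiting right away
push_logs_to_artifacts                          # upload happens even when tests failed
echo "would exit with code $ret"                # the real job would run 'exit $ret' here
```

This way a failing run still produces logs in artifacts, which is exactly what the odl_l3 troubleshooting above is blocked on.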
08:37:33 22nd of September
08:37:37 ok
08:37:43 still a lot of red in the reporting pages :)
08:37:50 #info config guide and dev guide reviewed by opnfvdocs
08:38:29 the scores are not bad, the feature projects triggering FAIL are identified
08:38:47 some scenarios already green
08:38:55 BTW yardstick also adopted our reporting
08:39:02 http://testresults.opnfv.org/reporting/yardstick/release/master/index-status-joid.html
08:39:49 cool
08:40:10 #info dashboard will not be done by bitergia, so we will have to adapt our own (not a priority versus scenarios) but it will be needed
08:40:24 #topic vote for 2 intern projects
08:40:26 okay, I will do that
08:40:49 old topic but I would like to create 2 asap (request from academia)
08:40:59 I suggested several ones in a previous mail
08:41:28 what topics?
08:41:29 any proposal from your side (3-month internship)?
08:41:52 we have almost everything covered by all the contributors
08:42:00 #link https://wiki.opnfv.org/display/DEV/Intern-projects-page
08:42:05 I would suggest as a possible topic/task the integration of VNFs
08:42:06 it is for the future...and just interns
08:42:25 my suggestions were around new VNFs
08:42:30 for example JOID will have a lot of things for vnfs with charms
08:42:38 there is an email around
08:42:39 or the vPing mentioned by Tim in a JIRA for D
08:42:45 someone wants to integrate a vnf
08:42:54 yes the vEPC from OAI
08:43:02 exactly
08:43:08 that could be 1 example of a task for an intern
08:43:28 if no objection, I will suggest 2 topics around VNF integration
08:43:58 Ok if you have ideas you want to share offline, send a mail
08:44:08 no objections
08:44:12 #action morgan_orange create 2 topics related to new VNF integration
08:44:35 #topic preparation of D release
08:44:50 already started a little bit (see Serena's patch)
08:44:58 how shall we organize ourselves?
08:45:03 I would suggest to use the wiki page
08:45:15 https://wiki.opnfv.org/display/functest/Functextnexttaks
08:45:52 I see a big task related to templating / test case abstraction
08:46:05 before patching, it could make sense to share a high-level view in the wiki
08:46:20 ok
08:46:38 I would also like to contact SNAPS to start considering integrating their framework...as it is already object-oriented
08:46:56 ollivier: could you explain your idea of case abstraction to get rid of exec_tests.sh?
08:47:01 I mean, in the wiki
08:47:01 there are also several threads on new VNFs
08:47:06 it would be good to capture it
08:48:02 jose_lausuch: Yes (after finishing the pending tasks :))
08:48:29 yardstick also has some plans about an API to trigger the tests, I think we should invite them to try to have a consistent approach
08:48:37 maybe a meeting during the OpenStack summit in Barcelona
08:48:57 sure, this is not top prio
08:49:15 so do not hesitate to initiate the reflections on the D release in the wiki page
08:49:31 we may also have a meeting in Barcelona
08:49:44 and if needed organize a meetup in Brittany (as we did in Espoo)
08:50:03 the idea is to have the roadmap ~ clear for November?
08:50:55 yep
08:50:56 ok
08:51:41 JuhaHaapa: Juhak viktor_t raghu_ May-meimei SerenaFeng jose_lausuch ollivier boucherv_orange ...1 or 2 words on your view of Functest for the D release?
08:52:12 I will make some suggestions on the wiki page
08:52:16 lhinds maybe we can share the link for security
08:52:28 as you already initiated something
08:53:02 ok let's wikize, but do not be shy...
08:53:06 for d-release morgan_orange?
08:53:10 ok
08:53:11 I would also mention code improvements
08:53:18 lhinds yes
08:53:32 sure, makes sense
08:53:38 jose_lausuch: yes, unit tests especially for the util functions...but now that we have real developers with us...
08:53:59 unit tests could be another task for an intern :)
08:54:09 it was also a suggestion...
08:54:46 moreover Mark is managing the intern programs...we may invite him / unit test management in Storperf
08:54:51 #topic AoB
08:55:13 #info committer vote to be closed this week
08:56:12 regarding the plugfest at UNH, not sure we have a Functest representative now...not sure it is as mandatory as it was for plugfest #1 but it makes sense to be there...
08:57:04 when is it?
08:57:12 beginning of December
08:57:14 http://events.linuxfoundation.org/events/opnfv-plugfest
08:57:49 looks like a Plugfest at the Dead Poets Society university... :)
08:58:11 :)
08:58:41 any other topic you want to share...
08:58:53 yep, as info
08:59:02 https://hub.docker.com/r/opnfv/functest/
08:59:11 I added a tag for our docker image
08:59:19 in the Full Description field
08:59:22 if you click there
08:59:38 you'll see all the info of our container with the different layers
08:59:39 https://microbadger.com/#/images/opnfv/functest
09:00:03 that gives an idea about how the image is structured and how much size each layer takes in the image
09:00:40 good point...shall we also change the reference in the Colorado docker file to point only to the colorado report?
09:00:55 shall we add a -b colorado for all the internal repos?
09:01:03 what do you mean?
09:01:04 where?
09:01:12 ah when we clone the repos
09:01:12 yes I think so
09:01:13 mmm
09:01:13 https://git.opnfv.org/cgit/functest/tree/docker/Dockerfile
09:01:19 good point
09:01:29 but we can do that only in our colorado branch.. not in master
09:01:30 right?
09:01:35 yes
09:01:40 We should manage 2 Dockerfiles
09:01:44 we fixed the branch for rally and tempest
09:02:03 ollivier: good point
09:02:24 at the moment on colorado we use stable (in master latest..)
09:02:45 ok
09:02:48 morgan_orange: maybe 3. another for Debian testing to decrease the OS size :)
09:02:48 we can do that
09:03:25 #action jose_lausuch morgan_orange check Colorado docker consistency and fix the branch for OPNFV internal repos in the Colorado (stable) docker
09:04:00 ok any other topic?
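The branch pinning discussed for the Colorado Dockerfile could look like the sketch below. The exact branch name (assumed here to be stable/colorado), the helper function and the repo URL are illustrative assumptions, not the actual Dockerfile content.

```shell
#!/bin/sh
# Sketch: parameterize the branch so the stable image clones the
# colorado branch while the master image keeps cloning master.
# BRANCH, clone_repo and the URL below are illustrative only.

BRANCH=${BRANCH:-stable/colorado}

clone_repo() {
    # Build (here: echo, instead of executing) a branch-pinned clone
    # command for a given repo URL.
    echo "git clone -b $BRANCH --single-branch $1"
}

clone_repo https://gerrit.opnfv.org/gerrit/functest
```

Keeping the branch in a single variable means the two Dockerfiles (master and colorado) differ only in one line, which matches the "we should manage 2 Dockerfiles" point above.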
09:04:51 if not, it is OK for today
09:05:00 I merged the logger-related patches
09:05:03 have a good week
09:05:12 and enjoy D release...do not be shy...
09:05:17 #endmeeting