08:02:26 #startmeeting Functest weekly meeting November 29th
08:02:26 Meeting started Tue Nov 29 08:02:26 2016 UTC. The chair is jose_lausuch. Information about MeetBot at http://wiki.debian.org/MeetBot.
08:02:26 Useful Commands: #action #agreed #help #info #idea #link #topic.
08:02:26 The meeting name has been set to 'functest_weekly_meeting_november_29th'
08:02:29 hi
08:02:38 hi
08:02:42 #chair morgan_orange SerenaFeng HelenYao
08:02:42 Current chairs: HelenYao SerenaFeng jose_lausuch morgan_orange
08:02:50 #info agenda for today here: https://wiki.opnfv.org/display/functest/Functest+Meeting#FunctestMeeting-29/11(8UTC)
08:02:50 #info previous minutes: http://ircbot.wl.linuxfoundation.org/meetings/opnfv-functest/2016/opnfv-functest.2016-11-22-08.00.html
08:02:50 George Paraskevopoulos proposed functest: Fix tacker util script https://gerrit.opnfv.org/gerrit/25075
08:02:56 #topic role call
08:03:35 #info Morgan Richomme
08:03:40 #info rohitsakala
08:03:40 #info Helen Yao
08:03:42 #info SerenaFeng
08:03:45 George Paraskevopoulos proposed functest: Fix tacker util script https://gerrit.opnfv.org/gerrit/25075
08:04:52 #topic Action point follow-up
08:04:57 #info AP: HelenYao jose_lausuch SerenaFeng check if things could work with docker compose
08:05:04 I didn't have the time yet
08:05:10 me either
08:05:14 me, neither
08:05:39 maybe we can postpone it to E-river, let's see
08:05:52 seems reasonable...
08:06:19 #info not done, considering postponing it to E-river if no further solution is found soon.
08:06:25 #info AP: HelenYao jose_lausuch continue Alpine study
08:06:38 I did do some tests, but Helen, you really tried it, right?
08:07:06 yeah, no further investigation since last week
08:07:23 this we could try at least, to save 200 MB
08:07:25 I tried Alpine for the reporting page, looks promising but work is needed... not straightforward as the nev is totally different
08:07:31 ya
08:07:37 nev=env
08:07:40 it needs some changes in the way we install things
08:07:54 #action HelenYao jose_lausuch continue investigation in Alpine
08:08:02 ok?
08:08:07 good
08:08:09 #info AP: submit a patch with a new directory "openstack" functest/utils/openstack/ to put all the new utils. DONE
08:08:20 #info AP: all review vping refactor and merge https://gerrit.opnfv.org/gerrit/#/c/24541/ . DONE
08:08:27 #info AP: fix Domino test case in CI
08:08:39 #info done
08:08:45 what's the status?
08:08:50 ok
08:08:58 thanks
08:09:00 #topic Feature project integration requests (Milestone2)
08:09:02 even if we may probably disable Domino as we just run the test to skip the processing...
08:09:21 ok
08:09:36 linked to the thread on the 2 patches 24863 & 24745
08:10:04 regarding M2 and Movie, I did not answer David for Movie
08:10:21 let me post the info I have so far
08:10:23 do we agree that if a project does not need OPNFV it does not need Functest?
08:10:33 yes, +1 to that
08:10:43 it doesn't make much sense I'd say
08:10:52 I see it as vsperf..
08:11:14 same for me
08:11:16 #info Request from new feature projects: NetReady, Opera, Orchestra, Movie, Escalator, IPv6, Barometer
08:11:23 #info NetReady: a simple ping test case using the Gluon API
08:11:32 #info Opera: Clearwater vIMS deployment using Open-O instead of Cloudify
08:11:48 #info Orchestra: not much info about the test plans.
08:11:52 do you have any?
08:12:00 need to create the abstraction class for VNF onboarding
08:12:17 #action morgan_orange create VNF onboarding abstraction class
08:12:26 #info Escalator: test case for installer Daisy4NFV
08:12:43 #info IPv6: Run the entire Functest suite but making use of the IPv6 endpoints. Maybe nothing extra to do here. Scenarios that will implement IPv6: apex/os-nosdn-nofeature-ha/noha, apex/os-odl_l2-nofeature-ha/noha
08:13:27 #info Barometer: test collecting information from the compute nodes to validate the metrics/timestamps generated by Ceilometer. Supported scenarios: fuel/os-nosdn-kvm_ovs-ha/noha, fuel/os-nosdn-kvm_ovs_dpdk-ha/noha
08:13:44 #info Existing feature projects that will add new tests: SFC, SDNVPN
08:13:50 do you know any other?
08:13:54 maybe I missed something
08:14:21 OAI test case (but considered as internal Functest case) + security group tests (internship)
08:14:44 apparently some refactoring on Promise is planned
08:14:54 feel free to #info :)
08:15:13 #info OAI test case + security group tests (internship) - (but considered as internal Functest cases)
08:15:51 #info + healthcheck SNAPS
08:16:15 thanks
08:16:28 ok, let's get into the matter
08:16:29 #topic Framework re-factor status
08:16:43 #info Discussion ongoing about what the tests should return (0 or !0) when the test fails.
08:16:54 so, my view is
08:17:11 we need to show RED in Jenkins when ANY of the tests failed. That was agreed and welcomed by the community in general
08:18:24 that is what we have been doing since summer, and I think it works fine
08:18:34 the thing is we have 3 statuses: test executed and PASS, test executed and FAIL, test not executed
08:18:55 test not executed = execution error
08:18:56 OK to report red when the test is FAIL (good way to give feedback and to be strict)
08:19:58 what kind of execution errors do we usually have?
08:20:34 after we have a clear view about execution errors, we can decide whether to make Jenkins red or not
08:20:38 it usually corresponds to a bug in the framework (bad path, bad library, no connectivity)
08:20:39 due to typos or problems when creating the pre-resources
08:20:40 etc
08:21:09 in that case, it would be better for Jenkins to show read
08:21:11 today we report red in both cases
08:21:12 read = red
08:21:13 in that case we should raise an exception
08:21:21 yes
08:21:38 if there are typos that cause problems, we show red of course
08:21:52 but we don't report the test result to the DB
08:21:55 as far as I can see, Jenkins manages several states (red/blue but also grey (interrupted) and yellow (not completed))
08:22:20 fdegir: is there a way in Jenkins to show an "orange/yellow" ball for a job?
08:22:57 maybe he is not in yet..
08:22:59 usually when you have a daily job (with deploy/functest/yardstick) you may have yellow if one of the 3 is red
08:23:14 yes, but is it possible to manipulate that?
08:23:15 how will we make use of different colors? if we only want to distinguish working or not, I think red and blue are enough
08:23:29 see https://build.opnfv.org/ci/, you have yellow
08:23:32 like return 0 = blue, return -1 = red, what return value is yellow?
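For illustration, the three outcomes discussed above (test executed and PASS, test executed and FAIL, test not executed) could map to exit codes roughly as follows; this is a minimal sketch with assumed names, not actual Functest code:

    # Minimal sketch, assumed names only: the three outcomes discussed above
    # and the exit code a Jenkins job would see for each of them.
    import sys

    TEST_PASS = 0        # test executed and criteria met    -> blue ball
    TEST_FAIL = 1        # test executed, criteria not met   -> red ball
    EXECUTION_ERROR = 2  # test not executed (framework bug) -> red today,
                         # yellow/orange only if Jenkins can be customized

    def exit_with(outcome):
        """Report the outcome to Jenkins through the process exit code."""
        sys.exit(outcome)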
08:24:25 HelenYao: as we have 3 states we could imagine red = tests failed (scenario owner / feature project) and orange = execution error (usually our bad)
08:24:33 if we can have 2 states, red and orange
08:24:43 if not we should continue as we are today
08:25:03 red = execution | framework error
08:25:24 framework | test errors
08:25:46 I suggest we continue as today until we find a way to report yellow
08:25:57 orange... yellow is already in use
08:26:16 my view is that if we execute external tests, they should make sure to return !0 if the test fails
08:26:43 yes, but it is used for example when there is a parent job and one of the children is red
08:26:59 fuel daily job = fuel deploy + functest + yardstick
08:27:21 yes.. so it is already used, yellow means that at least one of the children jobs has failed
08:27:28 it provides information
08:27:37 so we could not reuse the same
08:27:58 I can action myself and see with releng if we can customize it
08:28:11 ok
08:28:28 if not, keep on reporting red as it is now well accepted...
08:28:36 some Jenkins plugin might be useful: https://wiki.jenkins-ci.org/display/JENKINS/Green+Balls
08:28:49 just one sample
08:29:10 do we agree that we report to the DB when execution=ok?
08:29:19 and don't when execution=error?
08:29:20 #agreed
08:29:23 #action morgan_orange see with releng if we can customize the Jenkins color to distinguish an error due to the execution framework from an error due to the tests
08:29:36 #agreed
08:29:43 #info Agreement on pushing results to the DB when the execution is ok (test can FAIL/PASS) but not pushing when there is an execution error
08:29:57 we can also agree on the fact to exclude the SKIP state
08:30:00 #agree
08:30:01 which is useless
08:30:03 yes
08:30:16 #info Agreement on removing SKIP state
08:30:21 #info Agreement on excluding SKIP state (affected scripts: domino, sdnvpn)
08:30:23 is it domino?
08:30:26 yes
08:30:28 ok
08:30:36 I copy pasted it to sdnvpn :p
08:30:41 I'll remove it as well
08:30:49 no need to start a feature test suite if this suite is just skipped..
08:30:55 I will remove it from feature_base.py as well
08:31:10 probably the best way would be to revert the 2 patches...
08:31:24 so will we still have three statuses?
08:31:43 #info Question: should the tests return 0 if the code execution is correct but the test failed?
08:31:52 that is the key question now
08:32:05 Cedric's opinion is that we shouldn't
08:32:06 yeah
08:32:18 I think we should, otherwise, how do we capture that the test failed?
08:32:26 exception raised?
08:32:26 if we shouldn't, how do we manage the report?
08:32:43 sorry, my opinion is we should return !0 if the test fails (even though execution is ok)
08:32:54 agree
08:33:00 maybe that is not very elegant according to software engineering
08:33:06 but we need to capture it some way
08:33:07 agree on return !0
08:33:27 raising an exception would not be cleaner?
08:33:44 raising an exception?
08:33:51 that is when the code has troubles, right?
08:34:38 not necessarily
08:35:21 I think for the testcase itself, it is acceptable to raise an exception. For the main running, it should catch the exception and decide whether to raise another exception or return !0
08:35:30 George Paraskevopoulos proposed functest: Fix tacker util script https://gerrit.opnfv.org/gerrit/25075
08:35:53 the main running consists of several testcase runs
08:36:15 we may need to define different levels of exception
08:37:03 if we catch the exception, we would be able to report to Jenkins a test or an execution error. It would be centralized
08:37:26 yes
08:37:48 but raising an exception and quitting the test is the same as returning !0
08:38:43 I tend to support exception manipulation and it can be more scalable if we design it well
08:39:08 I think it is ok if we use different exit_codes to indicate different execution results
08:39:09 from a pragmatic perspective maybe... but from a software dev (I am not in this category) no... as you will report a !0 status for a processing that was correct..
08:40:08 I just tried running this: raise Exception('execution error')
08:40:13 and the result value was 1
08:40:21 an exception will be more meaningful than an exit_code and it can have object-oriented ability
08:40:30 yep
08:40:55 'execution error' was just a string, it could be 'test failed'
08:41:00 you can raise execution error, test error,... centralize the status in the running part, try/catch and decide what to do
08:41:09 ok
08:41:11 +1
08:41:14 then the result is the same
08:41:16 we can customize our exceptions
08:41:24 in run_test we catch the exception if there is one
08:41:37 and report green/red to yenkins?
08:41:39 jenkins
08:41:59 in the abstract file add: if criteria is not PASS, raise Exception('Test...')
08:42:20 for Jenkins we have to see if it is possible but yes, that is the idea
08:43:15 but how do you know the criteria is not passed?
08:43:20 for external tests, for example
08:43:21 if we design exceptions well, they can show more information than an exit_code, such as an error msg
08:43:34 the info is already available (it is used to push results to the DB)
08:43:42 self.criteria
08:43:47 ok sure
08:43:48 but
08:43:50 some feature tests
08:43:57 use a command to run the external test
08:44:01 If we use an exception to indicate the execution error, using the error_message to indicate why it is not executed, I think it makes sense
08:44:07 what do we impose on the external scripts?
08:44:40 using it to indicate the test failure, I don't get it
08:45:07 jose_lausuch: can you give more information?
08:45:16 jose_lausuch: I do not catch the "impose the external scripts" part
08:45:17 I don't quite follow it
08:45:26 https://wiki.opnfv.org/download/attachments/8685677/sfc-example.JPG?version=3&modificationDate=1479914953000&api=v2
08:45:32 this picture
08:45:41 imagine external tests
08:45:45 from other repos
08:46:00 we don't have control over the logic, we can just impose what to return and so on
08:46:18 we have our wrapper script, like domino, promise, etc
08:46:22 in this case, sfc
08:46:31 they will have run_tests.py to run their different tests
08:46:33 when a test is executed, it returns whether the test succeeded or failed, that makes sense
08:46:42 what should run_tests.py return?
08:47:04 if something went wrong so that the test is not executed, raise an exception to indicate why it was not executed
08:47:11 in this way I can understand
08:47:39 it's getting late...
08:47:44 can we have a follow up on this?
08:47:52 sure
08:47:53 I think it is what Cedric mentioned as missing: we need to add something to run(), push_to_db(), report_result()
08:48:15 I think Cedric said he will provide a patch
08:48:20 yes
08:48:21 how about waiting to see what Cedric proposes?
08:48:39 #action Cedric, propose a patch for TestCaseBase
08:48:56 let's wait for that and have a follow-up discussion about it
08:49:05 I'd like to talk about this as well
08:49:06 #topic Unified way to provide constants and env variables
08:49:07 sounds good
08:49:12 Serena? all yours :)
08:49:33 Unified way to provide constants and env variables?
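A rough sketch of the exception-based flow discussed under the previous topic, assuming hypothetical exception classes and a simplified runner; the actual proposal is the TestCaseBase patch Cedric will submit:

    # Hypothetical sketch only -- class and function names are assumptions,
    # not the content of the upcoming TestCaseBase patch.
    import logging

    logger = logging.getLogger(__name__)

    class TestCaseFailure(Exception):
        """The test ran to completion but its criteria were not met."""

    class ExecutionError(Exception):
        """The test could not be executed (framework or setup problem)."""

    def run_test(test_case):
        """Central runner: catch both exception types, push results to the DB
        only when the test was actually executed, and return the exit code
        that Jenkins turns into blue or red."""
        try:
            test_case.run()
            if test_case.criteria != 'PASS':
                raise TestCaseFailure('criteria not met: %s' % test_case.criteria)
            test_case.push_to_db()   # executed and PASS -> push the result
            return 0
        except TestCaseFailure as exc:
            logger.error('Test failed: %s', exc)
            test_case.push_to_db()   # executed but FAIL -> still push the result
            return 1
        except ExecutionError as exc:
            logger.error('Execution error, nothing pushed to the DB: %s', exc)
            return 2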
08:50:00 yes, your proposal about separating constants and env variables
08:50:06 since we are mixing things
08:50:14 you wanted to propose an env.py module
08:50:15 ok, I will try to do it
08:50:27 can you explain your idea a bit?
08:50:28 jose_lausuch: it is Helen's patch
08:50:42 no?
08:50:52 yes, but there was a discussion about separating constants and env variables that are relevant
08:51:01 Serena proposed having an env.py
08:51:01 ok
08:51:12 for now we are mixing env with config_functest.py
08:51:19 note that in the releng module now there is a constants.py
08:51:41 https://git.opnfv.org/releng/tree/modules/opnfv/utils/constants.py
08:51:49 constants here is not the same as releng's
08:52:02 here we mean node_name, build_tag
08:52:18 and all the things we configured in config_functest.yaml
08:52:46 ok to have an env.py
08:52:49 would be cleaner
08:53:13 my idea is to separate things like node_name from things in config_functest.yaml
08:53:36 #info idea about separating things like node_name from things in config_functest.yaml
08:53:57 things in config_functest.yaml are called configuration
08:54:18 when accessing config_functest.yaml fails, we raise an exception directly
08:54:20 #info differentiate between configuration parameters and environment variables
08:54:33 instead of reading from ENVs again
08:55:09 yes, I agree, if it is a configuration parameter, we shouldn't check the environment
08:55:13 I tend to support the idea of not letting the downstream decide whether the value it wants is a config param or an env param
08:55:16 something to change in constants.py maybe
08:55:42 the downstream does not have the knowledge
08:56:07 and to the outside we will provide constants.py, but internally we still need to tell the difference between the two, using env.py and config.py
08:56:10 the knowledge is in config_functest.yaml
08:56:26 ok
08:56:33 and we only expose config_functest.yml in the docker file
08:56:57 all the other things can be obtained from config_functest.yml
08:57:04 or env
08:57:20 #action SerenaFeng propose a patch about this topic
08:57:37 3 minutes
08:57:38 #topic Update Swagger framework + associated documentation using swagger2markup
08:58:02 rohitsakala has done a very good job
08:58:22 he will give us the introduction
08:58:22 ping rohitsakala
08:58:23 rohitsakala great!
08:58:28 I summarized what I found in this doc
08:58:29 https://docs.google.com/document/d/1jWwVZ1ZpKgKcOS_zSz2KzX1nwg4BXxzBxcwkesl7krw/edit?usp=sharing
08:58:34 Thanks @jose_lausuch
08:58:40 Thanks @SerenaFeng
08:58:47 #link https://docs.google.com/document/d/1jWwVZ1ZpKgKcOS_zSz2KzX1nwg4BXxzBxcwkesl7krw/edit?usp=sharing
08:59:15 looks great - so we all agree on using swagger2markup.. :)
08:59:41 he gives us 4 options
08:59:57 1 and 3 only support swagger-ui 1.2
09:00:06 sorry, 1 and 4
09:00:16 2 and 3 support swagger 1.2 and 2.0
09:00:35 #link http://swagger2markup.github.io/spring-swagger2markup-demo/1.1.0/
09:00:52 as we are using 1.2, and there are tools that support it, I think we don't need to upgrade our swagger-ui now
09:01:19 do we agree?
09:01:52 #info no need to upgrade swagger-ui for now, as we are using v1.2 and there are tools that support it
09:02:04 awesome
09:02:13 so there's swagger-codegen and swagger2markup left
09:02:38 swagger-codegen is maintained by the same team as swagger-ui, which we are using
09:03:08 yeah.
09:03:16 so what is the proposal?
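As a side note on the earlier constants/env discussion, the split could look roughly like this; a hypothetical sketch with assumed names, the actual design being left to Serena's patch:

    # Hypothetical sketch of the env/config split discussed earlier; all names
    # here are assumptions, not the content of the future patch.
    import os
    import yaml

    class Environment(object):
        """Values that only exist as environment variables (env.py)."""
        NODE_NAME = os.environ.get('NODE_NAME', 'unknown')
        BUILD_TAG = os.environ.get('BUILD_TAG', 'none')

    class Configuration(object):
        """Values that come from config_functest.yaml only (config.py);
        a read failure raises immediately instead of silently falling
        back to environment variables."""

        def __init__(self, config_file):
            with open(config_file) as stream:
                self._config = yaml.safe_load(stream)

        def get(self, *keys):
            value = self._config
            for key in keys:
                value = value[key]  # a bad key raises KeyError directly
            return value

constants.py could then expose both objects, so downstream code never needs to know whether a value is a configuration parameter or an environment variable.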
09:03:23 I also gave a link to a bash script on how to run swagger-codegen
09:03:41 there's a script written by rohitsakala
09:04:01 Script :- https://usercontent.irccloud-cdn.com/file/WBMjul3s/script.sh
09:04:04 as I said he has done a very good job
09:04:19 Command :- bash script.sh
09:04:20 #link https://usercontent.irccloud-cdn.com/file/WBMjul3s/script.sh
09:04:26 rohitsakala: well done
09:04:34 We can improve the UI using CSS.
09:04:42 ok
09:04:49 He probably has a better view than us on the best candidate
09:05:04 and the opportunity to migrate to 2.0 or not
09:05:04 #info proposal to use swagger-codegen
09:05:09 agree
09:05:20 agree
09:05:27 rohitsakala, which one do you prefer?
09:05:29 @morgan_orange
09:05:29 rohitsakala: Error: "morgan_orange" is not a valid command.
09:05:42 I think swagger-codegen does the job
09:05:44 rohitsakala: you don't need the "@" :)
09:05:47 ok
09:05:57 and no need to update to swagger 2.0 as of now.
09:06:00 @agreed use of swagger-codegen
09:06:00 morgan_orange: Error: "agreed" is not a valid command.
09:06:05 jose_lausuch: ok
09:06:11 #agreed use of swagger-codegen
09:06:17 ok, agree
09:06:19 ok
09:06:34 I have a doubt?
09:06:41 we may probably share with the other testing projects... as yardstick and qtip are developing APIs, it is maybe interesting for them as well
09:07:04 yeah, at least qtip is trying to do the same thing
09:07:29 so maybe plan a 10 minutes presentation during the next APAC testing weekly meeting?
09:07:41 I asked them to wait until rohitsakala finishes the work
09:07:42 should I create a Jenkins job for the automatic update of the API documentation whenever someone changes the code of the API, like adding a new API call?
09:07:45 from the testing projects' point of view, UI consistency is important
09:08:05 rohitsakala: that would be a good idea
09:08:09 no need for manual updates
09:08:24 jose_lausuch: Sure. :)
09:08:25 and we can use it as a verification when submitting code
09:08:36 but it means that the API hosted on testresults.opnfv.org should also be automatically updated
09:08:45 morgan_orange, is the unittest of testapi still used?
09:08:48 morgan_orange: yeah
09:08:51 we need a Jenkins slave on that machine
09:09:17 SerenaFeng: hmm only manually I think
09:09:26 morgan_orange there's another question I want to raise
09:09:52 sometimes testresults.opnfv.org may be different from the repo
09:10:24 when we provide the restful API information in our wiki page, which one should we follow?
09:10:44 testresults.opnfv.org? or the current code
09:10:51 ideally repo => automatic update on testresults.opnfv.org
09:11:13 today people have access to testresults.opnfv.org, so that is what they see/can use
09:11:25 so we need the automatic deployment now
09:11:31 but +1 to automate the deployment
09:11:35 it is time...
09:11:41 +1 too
09:11:43 and automatic upgrade
09:11:48 +1
09:12:07 we can add the build script for API docs also in that job.
09:12:40 yes
09:12:59 #action morgan_orange grant access to rohitsakala to testresults.opnfv.org
09:13:43 we are late :)
09:13:52 ok, that's all I can think of now
09:13:52 but good that you don't have other meetings now
09:14:08 let me share one last thing before we close for today
09:14:11 #topic Plugfest/Hackfest
09:14:18 oh, thanks
09:14:18 #info Jose will attend...
09:14:18 #info Juha and Jose will attend the event
09:14:33 #info We will prepare a Functest demo.
09:14:44 anything we can help with?
09:14:53 any points to be discussed F2F?
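The actual procedure is rohitsakala's script.sh linked above; purely as an illustration of the kind of step a Jenkins job could automate for the API docs, a wrapper might look like this (the swagger-codegen-cli jar location, flags and spec URL are assumptions):

    # Hypothetical sketch of an automated API-doc build step; the real recipe
    # is rohitsakala's script.sh, and the swagger-codegen-cli invocation used
    # here is an assumption.
    import subprocess

    def generate_api_docs(spec_url, output_dir='api-docs'):
        """Render static HTML docs for the TestAPI from its swagger spec."""
        subprocess.check_call([
            'java', '-jar', 'swagger-codegen-cli.jar', 'generate',
            '-i', spec_url,   # URL or path of the swagger spec
            '-l', 'html',     # static HTML output
            '-o', output_dir,
        ])

    # e.g. generate_api_docs('<TestAPI swagger spec URL>')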
09:15:00 #info need support to set up a database to store local results and a possible dashboard
09:15:03 probably several questions on feature/VNF onboarding..
09:15:10 #link https://wiki.opnfv.org/display/EVNT/Co-located+Plugfest-Hackfest+Planning+Page
09:15:35 #info Test Community Common Goals and Priorities by Trevor and me
09:15:39 probably
09:15:44 jose_lausuch: for the last edition, we created plugfest.opnfv.org (a clone of testresults.opnfv.org)
09:16:02 I also saw that Trevor initiated a page for Dovetail
09:16:05 let's have an offline training for me if you agree :)
09:16:19 #info Functest versus Dovetail
09:16:32 versus? :)
09:16:33 #info Trevor initiated some etherpads: https://etherpad.opnfv.org/p/yardstickcvp, https://etherpad.opnfv.org/p/vsperfcvp
09:16:45 shall we do the same?
09:16:52 probably to be discussed during the plugfest
09:17:05 the Dovetail folks will be there
09:17:12 I plan to attend that session as well
09:17:13 jose_lausuch: no problem for the setup of the DB, maybe plugfest.opnfv.org is still available...
09:17:27 #topic AoB
09:17:54 #info work on landing page + data model to be initiated next week (with Jack and Rex)
09:18:06 need to go (fire alarm exercise)
09:18:16 I proposed https://gerrit.opnfv.org/gerrit/#/c/25059/ about newton support, your feedback is appreciated
09:18:18 and jose_lausuch asked me to help deploy a testapi, I said I would send an email, but I totally forgot about it, very sorry about that
09:18:33 jose_lausuch do you still need that
09:18:33 #info I proposed https://gerrit.opnfv.org/gerrit/#/c/25059/ about newton support, your feedback is appreciated
09:19:06 SerenaFeng: yes, no worries, I need to do it for next week, but maybe we can reuse plugfest.opnfv.org
09:19:18 thanks HelenYao, I'll take a look
09:19:48 what should we do with https://gerrit.opnfv.org/gerrit/#/c/24801/
09:20:20 SerenaFeng: we continue, it's a good idea
09:20:27 shall I wait until we come to some conclusion?
09:20:33 ok
09:20:37 ok
09:20:49 thanks everyone
09:20:51 #endmeeting