08:00:32 #startmeeting Functest weekly meeting May 24th
08:00:32 Meeting started Tue May 24 08:00:32 2016 UTC. The chair is morgan_orange. Information about MeetBot at http://wiki.debian.org/MeetBot.
08:00:32 Useful Commands: #action #agreed #help #info #idea #link #topic.
08:00:32 The meeting name has been set to 'functest_weekly_meeting_may_24th'
08:00:39 #topic call roll
08:00:41 #info lhinds
08:00:42 #info Morgan Richomme
08:00:48 #info Juha Kosonen
08:00:48 #info Viktor Tikkanen
08:00:55 #info RaghavendraChari
08:00:55 #info agenda https://wiki.opnfv.org/display/functest/Functest+Meeting
08:01:10 #info Juha Haapavirta
08:01:10 #info CG_Nokia (Colum Gaynor)
08:01:10 #topic action point follow up
08:01:18 #link http://ircbot.wl.linuxfoundation.org/meetings/opnfv-testperf/2016/opnfv-testperf.2016-05-17-08.00.html
08:01:30 #info Jose Lausuch
08:01:44 #info AP1 ok: question on pep8 support transmitted
08:02:07 #info AP2 viktor_nokia did we increase the timeout value?
08:02:37 not yet
08:02:43 #info AP3 any ref on the Jira ticket on Apex about the timeout issue
08:03:05 but we now have a Jira ticket
08:03:29 https://jira.opnfv.org/browse/APEX-149
08:03:59 #info AP3 JIRA created https://jira.opnfv.org/browse/APEX-149
08:04:27 #info AP4 lhinds David_Orange any sync regarding sec group tests => maybe talk later in the slot on sec tests
08:04:57 #info AP5: there were exchanges on the chan on the way to retrieve IPs dynamically for security tests
08:05:13 Not yet spoken with David; plan to get this scap stuff implemented and then speak with him towards the end of the week, I hope.
08:05:19 #info AP6 ok: I added security in the feature section, maybe not the right place
08:05:20 #info SerenaFeng
08:05:43 also working with Tim Rozet on testing in POD7 Apex
08:05:43 jose_lausuch: we have a dedicated section for Tempest, Rally and the VNFs
08:06:20 would it make sense to put Tempest (full) and Rally (full) in one common category (OpenStack extended?)
08:06:32 where would you see the security scan test in our tiers?
08:06:38 #info David Blaisonneau
08:07:05 is that question for me morgan_orange?
08:07:16 for everybody...
08:07:28 today I put it in the feature section (Tier 3)
08:07:41 I think it's worth it, there will be a lot of info on configuring the test setups
08:07:47 but happy either way
08:07:56 #link https://wiki.opnfv.org/display/SWREL/Test+Release+Criteria
08:08:10 morgan_orange: that makes sense, and I wanted to propose it as well
08:08:16 OpenStack something
08:08:46 we can think of the name later
08:09:03 #action refactoring of the refactoring: group Tempest (full) and Rally (full) in a category OpenStack XXXX (name to be found)
08:09:18 action me
08:09:27 it's just about changing the testcases.yaml
08:09:28 this category could be reused for long-duration functional tests towards the infrastructure
08:09:44 for Tempest and Rally separate success criteria are defined for full and smoke
08:09:46 #action jose_lausuch group Tempest (full) and Rally (full) in one category
08:09:53 in the run scripts the criteria is currently hardcoded to 90% when pushing the summary
08:10:01 maybe it would be good to define the value elsewhere, e.g. in testcases.yaml?
08:10:32 juhak: good point, we can put it in testcases.yaml
08:10:48 good points, I think we can easily update the yaml file to indicate the criteria (the "if status = ????")
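To make the point above concrete, here is a minimal sketch of how a per-case success threshold could be read from ci/testcases.yaml instead of staying hardcoded at 90% in the run scripts. The `criteria` field name and the layout below are assumptions for illustration, not the merged schema:

```python
# Minimal sketch, assuming a 'criteria' field is added per testcase in
# ci/testcases.yaml. The layout is illustrative, not the actual schema.
import yaml

EXAMPLE = """
tiers:
  - name: openstack_extended            # hypothetical category name
    testcases:
      - name: tempest_full
        criteria: 90                    # success threshold in percent
      - name: rally_full
        criteria: 90
"""

config = yaml.safe_load(EXAMPLE)

def get_criteria(case_name, default=90):
    """Return the success threshold for a case, falling back to the old 90%."""
    for tier in config["tiers"]:
        for case in tier["testcases"]:
            if case["name"] == case_name:
                return case.get("criteria", default)
    return default

print(get_criteria("tempest_full"))  # -> 90
```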
08:10:54 #action jose_lausuch: propose success criteria on the testcases handling
08:11:12 it's about modifying the testcases.yaml and the tier handler
08:11:16 tier builder
08:11:24 in particular the TestCase class :)
08:11:41 #info AP7: presentation for the design summit initiated
08:12:03 #info link https://gerrit.opnfv.org/gerrit/#/c/14517/
08:12:26 morgan_orange: I've seen your email to Sofia
08:12:29 #info open docs/com/pres/summit-Berlin.html
08:12:32 yep
08:12:36 I think we should wait to get an answer
08:12:43 that might also fit all projects
08:12:48 the framework could be in opnfvdocs (useful for all the projects)
08:12:52 and then you become a committer in opnfvdocs :)
08:13:03 I put it this time in the Functest repo because last time the patch had been pending for months
08:13:17 wait for Sofia's answer before merging; I could submit the patch in opnfvdocs
08:13:40 anyway people can have a look already. Some pictures are missing...
08:13:45 #info Nikolas Hermanns
08:14:05 for instance I do not have Nikolas's picture :)
08:14:09 morgan_orange: :) 20622 lines
08:14:19 also missing Mei-mei and Cedric
08:14:46 jose_lausuch: I could only push the html, the css and the images and make a ref to the reveal framework upstream
08:14:55 I will discuss it with Sofia
08:15:07 I am not competing for the number of lines committed :)
08:15:27 no no
08:15:44 #topic Security suite
08:15:45 but it looks like you spent some time with that commit, good effort!
08:16:02 lhinds: update on the scan test case
08:17:50 lhinds: still there?
08:17:57 Functionally everything is there now. You can set up profiles for each node type (compute, control, nagios etc), packages are installed, the scan is run, and reports are now downloaded to the artifacts directory.
08:18:16 I am currently working with Tim Rozet to test this on POD 7
08:18:21 #info src lhinds Functionally everything is there now. You can set up profiles for each node type (compute, control, nagios etc), packages are installed, the scan is run, and reports are now downloaded to the artifacts directory.
08:18:26 lhinds: feel free to #info the info :)
08:18:40 #info currently working with Tim Rozet to test this on POD 7
08:18:56 so you managed your access to the different nodes
08:19:01 #info main thing I need to verify is getting the IP addresses from the undercloud. This is easy (using Nova)... I just want to verify end to end
08:19:36 morgan_orange, yes.. I have that working now. I just want to get it going on an OPNFV Apex deployment, as I have been using my home lab for now
08:19:48 lhinds: ok, it means that the security scan will be (at least at the beginning) only runnable on Apex, right?
08:20:26 so let's test the integration on Intel POD7, then we can help on the automation
08:20:29 morgan_orange, yes. The plan is to do the other installers, but as there is no unified connection method, they each need their own connect modules
08:20:42 oh yes...
08:20:44 It's a pain, and Genesis was meant to be fixing this from what I have seen
08:20:58 oh yes ...
08:21:04 I need to learn juju, which has its own thing as well.
08:21:11 but let's work on the Apex scenario first
08:21:22 Hopefully, at the summit people will get interested and join in and fill gaps.
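For reference, a purely illustrative sketch of the per-node-type profiles lhinds describes above (compute, control, nagios): which packages get installed and which scan profile runs on each node. All names and fields here are assumptions, not the actual security_scan configuration; OpenSCAP-style tooling is assumed only because the log mentions "scap stuff":

```python
# Illustrative only: per-node-type scan profiles as described above.
# Package and profile names are assumptions, not the real configuration.
SCAN_PROFILES = {
    "compute": {"packages": ["openscap-scanner", "scap-security-guide"],
                "profile": "standard"},
    "control": {"packages": ["openscap-scanner", "scap-security-guide"],
                "profile": "standard"},
    "nagios":  {"packages": ["openscap-scanner"],
                "profile": "standard"},
}

def profile_for(node_type):
    """Pick the scan profile for a node, defaulting to the compute profile."""
    return SCAN_PROFILES.get(node_type, SCAN_PROFILES["compute"])
```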
08:21:36 it should be possible to test it on the other labs afterwards or use the Design summit to do it
08:21:49 anyone is welcome to get involved, even if just playing with the scans and suggesting different checks to make
08:21:51 so jose_lausuch we have to create the security scan in the testcases.yaml
08:21:58 yes
08:22:08 lhinds: can also do that when the tests are ready
08:22:12 I would say in the same commit
08:22:15 #action morgan_orange add security_scan as testcase in the DB, in the config
08:22:37 jose_lausuch, morgan_orange I use pyaml, so it's easy for me to fit into the parent functest yaml file
08:23:09 I have one ini file I use for the scan settings, but I could port that to yaml as well.
08:23:19 suggest you guys look when I push and we can decide from there
08:23:30 you can keep your ini file, the yaml is used to describe the testcase
08:23:42 https://git.opnfv.org/cgit/functest/tree/ci/testcases.yaml
08:23:44 as I thought, sounds good to me then
08:23:50 it is just declarative
08:23:57 we should agree on the name of the testcase
08:24:08 security_scan?
08:24:25 SECScan ?
08:24:32 morgan_orange, yours :)
08:24:36 security_scan
08:25:05 #info security_scan
08:25:10 ok
08:25:37 thanks lhinds, it seems that we are not very far from the automation; it should be possible before the summit on Apex, which would be great
08:25:54 #topic Flash test status
08:25:59 hey
08:25:59 morgan_orange, yep. That is key for me, as I will have it as a topic at my talk :)
08:26:10 enikher: any update
08:26:28 yes, the prototype looks good
08:26:41 the next thing to do would be to include it into Functest
08:27:17 #info Flash test proto looks good
08:27:23 #info next step Functest integration
08:27:24 we did not manage to fully sync with yardstick
08:27:44 ok transition time for yardstick....
08:27:55 for functest you can probably check with jose_lausuch
08:28:05 ok
08:28:08 where shall we declare flash test in our testcases
08:28:12 morgan_orange: we are sitting next to each other :)
08:28:27 I know...
08:29:02 that's it for the moment
08:29:05 lhinds: you were asking the other day for flash test
08:29:13 we are a bit short on time at the moment
08:29:25 I do ...
08:29:27 :-)
08:29:30 jose_lausuch, I think I am covered now, but would still be interested in seeing the prototype
08:29:37 jose_lausuch, might be useful for fuel
08:29:43 lhinds: ok
08:29:59 could be the way to generalize to all the installers...
08:30:25 shall we plan a GTM next week for some demos? (security_scan, flash test and the API (next topic))
08:30:36 morgan_orange: +1
08:30:53 morgan_orange: not sure if I will manage to show flash-test
08:31:02 I don't have a setup for this at the moment
08:31:29 ok we could at least show security_scan and the API and do flash test later.
08:31:43 morgan_orange, I will be able to show the test happening on my lab, in fact hopefully on POD 7, but if not certainly on my own env
08:31:48 when you say you have no setup, you do not have labs to test?
08:31:55 ok
08:32:25 morgan_orange, I do have POD 7 (it took a while to get my VPN), but Tim needs to change something for me.
08:32:50 so I am underway there, just delayed from a few things.. but it's moving forward now
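Since lhinds mentions porting his scan settings from ini to yaml, here is a hypothetical sketch of what that port could look like. The file names, section layout and the single top-level key are illustrative only:

```python
# Hypothetical sketch of porting the scan's .ini settings to YAML so they
# can live alongside the parent functest yaml. Names are illustrative only.
import configparser

import yaml

ini = configparser.ConfigParser()
ini.read("security_scan.ini")  # assumed settings file name

# Flatten the ini sections into a dict and dump them under a single key.
settings = {section: dict(ini[section]) for section in ini.sections()}
with open("security_scan.yaml", "w") as f:
    yaml.safe_dump({"security_scan": settings}, f, default_flow_style=False)
```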
08:32:51 ok and enikher do you have a lab where you can test flash tests
08:33:17 SerenaFeng: can test everything on her laptop :)
08:33:44 ok anyway, let's see the status on Friday, we could maybe show some demos or postpone depending on the status
08:33:53 #topic Test API Status
08:34:16 SerenaFeng: it is up to you
08:34:59 for now, only the dashboard APIs are left
08:35:14 and I have a question, I have sent you an email
08:35:27 It will be finished today
08:35:34 just to clarify for everybody, SerenaFeng is refactoring the test API that we are using to declare pods/projects/cases/results/dashboard
08:35:55 morgan_orange: what about putting duration out of details?
08:36:09 she developed swagger/tornado addons to have automatic documentation + unit tests (as we need some stability here)
08:36:17 I will put the version number in the url path next time
08:36:21 jose_lausuch: yep there are questions on the evolution of the models (as we are refactoring)
08:36:30 then I can work on the swagger work
08:36:49 if you see any change in the model, it is time to indicate it
08:37:04 I agree that start/stop in the fields would be nice
08:37:25 it means that the start info must be stored at the test level at the beginning of the tests
08:37:43 yes
08:37:53 but this can be done out of the test cases
08:37:53 it also means that the duration put in some testcases will be useless (but we may keep it for the dashboard - some tests are using it)
08:37:58 meaning, in the framework as such
08:38:20 but it makes sense to have a start and a stop; we added a status and a trust indicator also for colorado
08:38:36 #action morgan_orange add start/stop fields in the API
08:38:54 yes
08:39:06 I also think that another param to retrieve the N last results of a given test in a given configuration would make sense
08:39:08 the more generic we do it the better
08:39:12 to avoid duplicating code as well
08:39:20 yes
08:39:34 for the moment we have Period that allows us to retrieve the results from the last Period days
08:40:10 #action morgan_orange add a param to be able to retrieve last results (last occurrence, not tests over last days)
08:40:35 I remember also someone asking for the possibility to request results over a given time window
08:40:51 yes, but I didn't fully understand that
08:40:57 BTW jose_lausuch do you use the pod table for the infra GUI
08:41:15 what do you mean?
08:41:30 jose_lausuch: get/results/(from 5/6/2016 to 1/8/2016)
08:41:52 in the GUI you shared in the infra management, you have a list of PODs
08:42:07 ah yes
08:42:12 I assume you use another DB, not the testresults.opnfv.org/testapi/pods
08:42:15 no no
08:42:19 I use a local mysql db
08:42:24 and dummy entries
08:42:26 not the official pods
08:42:58 well, they are official, but it's just info I put manually in the DB
08:42:58 :)
08:43:08 ok I understand, would it make sense (in the future...) to have only one DB with POD declarations?
08:43:18 yes, and I would say the pharos one
08:43:24 for the test results, it is a way for us to control the test collection
08:43:28 I have some ideas to have a lot of info for the pods
08:43:34 if the pod is not declared we do not accept the results
08:43:47 I'd say that's something post-colorado
08:44:01 for the moment in the DB the model is poor http://testresults.opnfv.org/testapi/pods
08:44:16 creation_date, name, mode, details
08:45:12 yes
08:45:14 back to SerenaFeng and her great work, you said that the dashboard should be integrated soon, any estimation for the swagger work?
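A hedged sketch of the result queries discussed above: `period` exists today, while `last` and an explicit date window are the proposed additions, so those parameter names are assumptions until SerenaFeng's refactoring lands:

```python
# Hedged sketch of querying the test API. 'period' exists today; 'last'
# and the from/to window are the proposals discussed above, so their names
# are assumptions, not the final API.
import requests

BASE = "http://testresults.opnfv.org/testapi"

# existing: results of a case from the last 10 days
r = requests.get(BASE + "/results", params={"case": "tempest_smoke", "period": 10})

# proposed: the 5 most recent results, regardless of age
r = requests.get(BASE + "/results", params={"case": "tempest_smoke", "last": 5})

# proposed: results over an explicit time window
r = requests.get(BASE + "/results",
                 params={"case": "tempest_smoke",
                         "from": "2016-05-06", "to": "2016-08-01"})
print(r.json())
```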
08:45:25 and we also need to distinguish in pharos the type of pod
08:45:32 ci pod, dev pod, single node
08:45:50 it will be ready before the end of next week
08:45:52 #action morgan_orange add type in pod description in the DB
08:46:25 I imagine for after colorado, when the pharos DB is ready and 100% functional and trustable, we switch to getting the pods from that one
08:46:28 and all the installers?
08:46:40 and also the installer of the pod?
08:46:56 jose_lausuch: OK makes sense
08:47:19 SerenaFeng: in theory the PODs must not be specialized
08:47:27 so pod and installer should not be linked
08:47:38 and I think we should connect project with pod, just like testcase with project
08:47:38 we make this connection through the test results
08:47:49 when we push a result we indicate the pod, the installer, the scenario
08:48:29 not sure I follow the link pod / project
08:49:02 all the linking is done in testcase
08:49:04 jose_lausuch: BTW the pharos DB could be this DB, it is managed in releng...
08:49:16 which db?
08:49:20 so no need to connect pod with project
08:49:28 jose_lausuch: the mongo DB
08:49:31 mmmm
08:49:34 not sure
08:49:48 for pharos info I think it makes more sense to use something sql based
08:49:52 jose_lausuch: ok
08:49:59 so relational
08:50:06 ok
08:50:26 we can also discuss it this afternoon
08:50:30 we have a meeting about that
08:50:36 once the API is refactored, we will have some integration work / data already in the DB
08:50:49 Oh, by the way, I see lots of ***2Dashboard.py, do I need to make unittests for them all?
08:51:04 SerenaFeng: no it is too specific
08:51:12 ok,
08:51:55 historically, the ***2dashboard.py files were a way to post-process the results in order to provide a graphable version of the post-processed results
08:52:26 each *** is supposed to know what it wants to display in the dashboard
08:52:31 so it is really test specific
08:52:41 if there are unit tests, they must be done per ****
08:52:41 yeah, each one is a unique structure
08:52:45 not at the framework level
08:53:07 #topic Sprint #8
08:53:16 #info Sprint started last week for 3 weeks
08:53:36 it should be over the week before the Summit...
08:54:02 #info Colorado roadmap has been shared by the TSC with the board
08:54:18 #info release will be mid-September, with first "freeze" beginning of July
08:54:27 morgan_orange: I think we could create all the sprints at the beginning
08:54:32 I think we should have our internal features ready for this date
08:54:45 and if someone thinks that a task cannot be done this sprint, it has to be moved to the proper sprint
08:55:19 ok
08:55:40 I think that is what we did for Colorado
08:55:53 Here, as we had in theory only 2 Sprints, we did not really care
08:56:00 but it will be cleaner to do it this way
08:56:25 #info for the D river release, create all the sprints at the beginning and invite people to place their Jiras in the accurate Sprint
08:56:56 #link https://jira.opnfv.org/secure/RapidBoard.jspa?rapidView=59
08:57:04 so for the moment everything is in the Sprint
08:57:19 probably need to create a Jira for Flash test integration
08:57:38 any issues with the JIRAs?
08:57:51 risk/problem/concern/doubt/...
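Following the #action above on the pod description, a hedged sketch of what a pod record could look like once a `type` field is added: today's model only carries creation_date, name, mode and details. All field values below are illustrative:

```python
# Hedged sketch of a pod record after the #action above. Today's model has
# only creation_date, name, mode and details; 'type' (ci pod, dev pod,
# single node) is the proposed addition. Values are illustrative only.
pod = {
    "name": "zte-pod1",
    "creation_date": "2016-05-24",
    "mode": "metal",              # assumed value
    "details": "",
    "type": "ci",                 # proposed field: "ci", "dev" or "single"
    # deliberately no "installer" key: pods should not be specialized, the
    # pod/installer/scenario link is made when a result is pushed (see above)
}
```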
08:58:41 I created a Releng task, it is about adding zte-pod1 to the functest dashboard
08:58:43 so far, no
08:59:00 maybe about this one https://jira.opnfv.org/browse/FUNCTEST-157
08:59:04 not sure if I'll manage
08:59:05 SerenaFeng: OK it is just one param to add
08:59:07 https://jira.opnfv.org/browse/RELENG-112
08:59:12 it's some work :)
08:59:13 note that the dashboard was only for brahmaputra
08:59:32 I have already made a code review
08:59:35 I planned to adapt it to Colorado, but I need to change things in ****2dashboard.py as the data model changed a little bit
08:59:45 SerenaFeng: you can assign the Jira to me
08:59:54 https://gerrit.opnfv.org/gerrit/#/c/14537/
09:00:38 I already changed the code, maybe I can abandon the gerrit and assign it to you?
09:00:43 ok I already did some modifications after the pod renaming, directly on the web server, but I will have a look at that
09:00:58 do not abandon it, I will merge it
09:01:07 #topic AoB
09:01:08 ok, thank you
09:01:12 we are already late...
09:01:18 just some stuff to share
09:01:31 in the weekly test meeting, we had 2 discussions last time
09:01:42 Test collection from a scenario
09:01:58 frankbrockners: detailed all the tests he would like to integrate for the vpp scenario
09:02:36 I answered by mail that the list he mentioned for functest would be fine (as by default we try to run everything and only restrict tests in case of constraint)
09:02:57 anyway it could make sense to think of an API to allow people to generate their own customized list
09:03:16 probably not in C but in the future. This could be shared with all the test projects
09:03:22 thanks morgan_orange
09:03:41 you might not even need an API - a simple config file would be good enough
09:04:02 that way a scenario owner could decide which tests to run - even in CI/CD
09:04:30 it could well be that you know that 90% of your tests already work - and you're interested in the results of say one or two specific tests
09:04:47 if you had a config file - one could focus the testing on those two only
09:04:47 we already have this file, we just need a script that could eventually overwrite it in the container. Today we generate this file automatically based on the scenario name and the static description of the cases
09:05:01 as a consequence - we would greatly cut down test execution time
09:05:21 great - that is good news
09:05:29 I think for manual processing it is just a question of documentation
09:05:35 as you can already do it
09:06:04 for automation, it should be possible to specify a file rather than rely on the one dynamically built
09:06:19 how would I modify it without changing the jenkins job?
09:07:41 agreed - as part of the jjb you should have a pointer to where to retrieve the test config from
09:07:47 we have to think of the best way; for the moment I imagine we could modify the jenkins job once in order to allow a custom list rather than the default
09:08:06 could be on an OPNFV git - could be somewhere else... just get it from somewhere
09:08:16 ok let's continue offline
09:08:36 the issue is that few folks have access to jjb
09:08:45 last point was on the Summit presentation, I invite everybody to read it
09:08:53 any other points you want to share?
09:08:57 morgan_orange - ok - sorry for the distraction here
09:09:47 frankbrockners: I have to go.. :)
09:10:02 thanks everybody
09:10:06 enjoy your functest week
09:10:10 #endmeeting
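As a postscript to the custom-test-list discussion with frankbrockners above, a sketch of how a scenario owner could override the automatically generated list in the container. The environment variable and the default path are hypothetical, not an existing Functest mechanism:

```python
# Sketch of the custom-test-list idea: let a scenario owner point the
# container at their own list instead of the one generated from the scenario
# name. The environment variable and default path below are hypothetical.
import os
import shutil

DEFAULT = "/home/opnfv/repos/functest/ci/testcases.yaml"  # assumed location
custom = os.environ.get("FUNCTEST_TESTCASES_FILE")        # hypothetical override

if custom and os.path.isfile(custom):
    # Replace the generated list with the user-supplied one before the run.
    shutil.copyfile(custom, DEFAULT)
```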