15:03:41 #startmeeting OPNFV Test and Performance
15:03:41 Meeting started Thu Jan 14 15:03:41 2016 UTC. The chair is trevor_intel. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:03:41 Useful Commands: #action #agreed #help #info #idea #link #topic.
15:03:41 The meeting name has been set to 'opnfv_test_and_performance'
15:03:50 #info Maryam Tahhan
15:03:56 #info Trevor Cooper
15:03:57 #info Ana Cunha
15:04:25 #info Bin Hu
15:04:46 #info Iben Rodriguez
15:05:29 #info Jack Morgan
15:05:36 #info Jose Lausuch
15:05:38 #info Frank Brockners (IRC only)
15:05:49 #info Morgan Richomme (IRC only)
15:08:47 morgan_orange1: is vsperf not submitting results to the DB also?
15:09:08 I can see results http://213.77.62.197/results?projects=vsperf&case=tput_ovsdpdk
15:09:39 mtahhan: there are more things missing, not only that one
15:09:40 also installers
15:09:46 ah
15:10:31 jose_lausuch: thanks
15:10:41 my mic doesn't work…
15:11:12 mtahhan: submitted lots of results
15:11:44 but the example is based on only 2 examples, it does not include all the up-to-date results...
15:11:50 #topic test dashboard demo
15:12:49 oops, I anticipated (my statement was related to the dashboard)
15:17:08 is there a link to view the dashboard demo?
15:18:35 https://wiki.opnfv.org/collection_of_test_results
15:18:37 #link https://git.opnfv.org/cgit/releng/tree/utils/test/result_collection_api
15:27:11 frankbrockners: https://global.gotomeeting.com/join/305553637
15:27:47 trevor_intel: unfortunately already on a GTM (board call) :-(
15:28:16 link to the demo https://d67-p53-prod-opnfvfeature.linuxfound.info/opnfvtestgraphs/per-test-projects to access: user: opnfv pass: StagingOPNFV@
15:32:08 #topic test scenarios
15:32:18 #link https://wiki.opnfv.org/brahmaputra_testing_page
15:32:33 #link https://wiki.opnfv.org/brahmaputra_testing_page
15:35:57 #link https://wiki.opnfv.org/brahmaputra_testing_page#scenario_and_jenkins_job_naming_scheme
15:37:08 as part of the scenarios discussion - could you also discuss automatic documentation of test results across all test projects?
15:37:33 as part of the release documentation, we'd need those (and not only in the MongoDB)
15:37:56 #topic test artifacts and results
15:38:26 #info how to store test results?
15:38:50 the question is more: which artifact to provide for the release
15:38:56 1 doc per project
15:39:07 1 automatic doc by post-processing the results in the DB
15:39:10 which format
15:39:11 #info Ana drafted template for yardstick - test results
15:39:18 yep
15:39:29 at a minimum we'd need a tarball of test results per project
15:39:38 it would be nice to have the same format
15:39:39 #info Yardstick will produce a report with a summary of results
15:39:39 and Maryam in vsperf also, I think
15:39:50 #link https://git.opnfv.org/cgit/yardstick/tree/docs/templates/test_results_template.rst
15:39:53 per scenario per test project
15:39:57 I meant to say
15:40:05 #info the above is the template for the yardstick report
15:41:09 #info VSPERF has a similar template for results
15:41:26 template looks interesting
15:41:42 but would that work for e.g. what we have with tempest results, robot results etc.?
15:41:49 and can you automatically generate all that?
15:42:40 at a minimum we can tar all the results up in a "scenario-xyz-test-results.tar"
15:42:40 frankbrockners: what is "all that"?
15:43:08 "all that" = test results that would need to go into the template 15:43:21 might be quite late for everyone to follow this format 15:44:02 or someone would step up and pull all the results from mongodb and put into an rst doc like above 15:44:11 automatic code generation from jenkins to a pdf or html through a rst template woudl be fine 15:44:12 but not sure who that someone would be... 15:44:19 but it is a bit late 15:45:09 what kinf of granularity would you put in the doc 15:45:27 for now I would start with raw results 15:45:41 if you can automagically aggregate, then fine 15:45:46 then jenkins or DB dump is fine.. 15:45:57 but as a user of a scenario, i would want the details of what works and what does not 15:46:07 agreed 15:46:10 it is in jenkins 15:46:28 frankbrockners: needs some analysis of results, not raw result ... e.g. pass/fail, how is performance, etc. 15:46:33 if you want to see what happen on fuel-odl_ovs_ha => the best place today is jenkins 15:46:49 IMO it doesn't make sense to packe up the raw results as part of the release 15:47:02 +1 morgan_orange1 15:47:09 the idea of the DB was to push preformated raw results including a possible status 15:47:21 for yardstick, we will go through the results and provide an interpretation of the results - as our test cases are performance 15:47:22 but not sure jenkins will store the results for long 15:47:23 trevor_intel: If you can aggregate them - fine. but raw would be a starting point 15:47:37 yes - this is the issue 15:47:58 which is why we need to save them - and store somewhere permanently 15:48:00 and in the DB we do not have the notion of scenario...as it appears this week :) 15:48:24 could be added in the version header 15:49:10 so for brahmaputra => doc will detail the testcases and provide help to interpretate results 15:49:17 raw results in jenkins and/or in the db 15:49:31 but good place for improvement for automatic result doc generation 15:49:57 good driver to move to ELK 15:57:03 #info Still open what functest wil ldeliver in the release for test results 15:57:44 #info e.g. document what passes what fails with explanations 15:58:16 wehave a user and an install/config guide 15:58:24 we will point to jenkins 15:58:38 and maybe to dsahboard if it is up to date.. 15:59:20 as the results are not stable and not always reproducible...writing a doc... 15:59:42 morgan_orange1: makes sense 16:00:57 https://git.opnfv.org/cgit/yardstick/tree/docs/templates/test_results_template.rst 16:01:22 #info IEEE Std 829-2008 16:01:39 thank you 16:01:45 #endmeeting