15:03:41 <trevor_intel> #startmeeting OPNFV Test and Performance
15:03:41 <collabot`> Meeting started Thu Jan 14 15:03:41 2016 UTC.  The chair is trevor_intel. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:03:41 <collabot`> Useful Commands: #action #agreed #help #info #idea #link #topic.
15:03:41 <collabot`> The meeting name has been set to 'opnfv_test_and_performance'
15:03:50 <mtahhan> #info Maryam Tahhan
15:03:56 <trevor_intel> #info Trevor Cooper
15:03:57 <anac1> #info Ana Cunha
15:04:25 <bin_> #info Bin Hu
15:04:46 <iben> #info Iben Rodriguez
15:05:29 <jmorgan1> #info Jack Morgan
15:05:36 <jose_lausuch> #info Jose Lausuch
15:05:38 <frankbrockners> #info Frank Brockners (IRC only)
15:05:49 <morgan_orange1> #info Morgan Richomme (IRC only)
15:08:47 <mtahhan> morgan_orange1: is vsperf not submitting results to the DB also?
15:09:08 <mtahhan> I can see results http://213.77.62.197/results?projects=vsperf&case=tput_ovsdpdk
15:09:39 <jose_lausuch> mtahhan: there are more things missing, not only that one
15:09:40 <jose_lausuch> also installers
15:09:46 <mtahhan> ah
15:10:31 <mtahhan> jose_lausuch: thanks
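[Editor's sketch, not part of the log: the results URL shared above suggests a simple query API on the result collection DB. The endpoint shape is inferred from that single URL, and the JSON response is an assumption; `build_query` and `fetch_results` are hypothetical helpers.]

```python
# Sketch of querying the OPNFV result collection API. The endpoint shape
# is inferred from http://213.77.62.197/results?projects=vsperf&case=tput_ovsdpdk
# as shared in the meeting; a JSON response body is an assumption.
import json
import urllib.parse
import urllib.request

BASE = "http://213.77.62.197/results"  # demo results DB from the meeting

def build_query(project, case):
    """Build the results query URL for one project / test case."""
    return BASE + "?" + urllib.parse.urlencode({"projects": project, "case": case})

def fetch_results(project, case):
    """Fetch and decode results (assumes the API returns JSON)."""
    with urllib.request.urlopen(build_query(project, case)) as resp:
        return json.load(resp)

print(build_query("vsperf", "tput_ovsdpdk"))
```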
15:10:41 <jose_lausuch> my mic doesn't work…
15:11:12 <morgan_orange1> mtahhan: submitted lots of results
15:11:44 <morgan_orange1> but the example is based on only 2 examples, it does not include all the up to date results...
15:11:50 <trevor_intel> #topic test dashboard demo
15:12:49 <morgan_orange1> oops, I anticipated (my statement was related to the dashboard)
15:17:08 <frankbrockners> is there a link to view the dashboard demo?
15:18:35 <jose_lausuch> https://wiki.opnfv.org/collection_of_test_results
15:18:37 <anac1> #link https://git.opnfv.org/cgit/releng/tree/utils/test/result_collection_api
15:27:11 <trevor_intel> frankbrockners: https://global.gotomeeting.com/join/305553637
15:27:47 <frankbrockners> trevor_intel: unfortunately already on a GTM (board call) :-(
15:28:16 <mtahhan> link to the demo https://d67-p53-prod-opnfvfeature.linuxfound.info/opnfvtestgraphs/per-test-projects to access: user: opnfv pass: StagingOPNFV@
15:32:08 <trevor_intel> #topic test scenarios
15:32:18 <mtahhan> #link https://wiki.opnfv.org/brahmaputra_testing_page
15:32:33 <trevor_intel> #link https://wiki.opnfv.org/brahmaputra_testing_page
15:35:57 <trevor_intel> #link https://wiki.opnfv.org/brahmaputra_testing_page#scenario_and_jenkins_job_naming_scheme
15:37:08 <frankbrockners> as part of the scenarios discussion - could you also discuss automatic documentation of test results across all test projects?
15:37:33 <frankbrockners> as part of a release documentation, we'd need those (and not only in the MongoDB)
15:37:56 <trevor_intel> #topic test artifacts and results
15:38:26 <trevor_intel> #info how to store test results?
15:38:50 <morgan_orange1> question is more which artifact to provide for the release
15:38:56 <morgan_orange1> 1 doc per project
15:39:07 <morgan_orange1> 1 automatic doc by postprocessing the results in the DB
15:39:10 <morgan_orange1> which format
15:39:11 <trevor_intel> #info Ana drafted template for yardstick - test results
15:39:18 <morgan_orange1> yep
15:39:29 <frankbrockners> at a minimum we'd need a tar ball of test results per project
15:39:38 <frankbrockners> it would be nice to have the same format
15:39:39 <anac1> #info Yardstick will produce a report with summary of results
15:39:39 <morgan_orange1> and Maryam in vsperf also I think
15:39:50 <anac1> #link https://git.opnfv.org/cgit/yardstick/tree/docs/templates/test_results_template.rst
15:39:53 <frankbrockners> per scenario per test project
15:39:57 <frankbrockners> i meant to say
15:40:05 <anac1> #info the above is the template for yardstick report
15:41:09 <trevor_intel> #info VSPERF has a similar template for results
15:41:26 <frankbrockners> template looks interesting
15:41:42 <frankbrockners> but would that work for e.g. what we have with tempest results, robot results etc.
15:41:49 <frankbrockners> and can you automatically generate all that?
15:42:40 <frankbrockners> at a minimum we can tar all the results up in a "scenario-xyz-test-results.tar"
15:42:40 <trevor_intel> frankbrockners: what is "all that"?
15:43:08 <frankbrockners> "all that" = test results that would need to go into the template
15:43:21 <frankbrockners> might be quite late for everyone to follow this format
15:44:02 <frankbrockners> or someone would step up and pull all the results from mongodb and put into an rst doc like above
15:44:11 <morgan_orange1> automatic doc generation from jenkins to a pdf or html through a rst template would be fine
15:44:12 <frankbrockners> but not sure who that someone would be...
15:44:19 <morgan_orange1> but it is a bit late
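[Editor's sketch, not part of the log: the idea above is to pull results from the DB and post-process them into an rst document like the yardstick template. A minimal stdlib sketch; the field names and the `render_report` helper are hypothetical, and a tool such as docutils or Sphinx would do the rst-to-html/pdf step.]

```python
# Sketch: render DB results into an rst report section. Field names are
# hypothetical; real input would come from the result collection API.
from string import Template

RST_TEMPLATE = Template("""\
$title
$underline

:Scenario: $scenario
:Status: $status
""")

def render_report(project, scenario, status):
    """Fill the rst template for one project's results on one scenario."""
    title = "%s results" % project
    return RST_TEMPLATE.substitute(
        title=title,
        underline="=" * len(title),  # rst section underline must cover the title
        scenario=scenario,
        status=status,
    )

print(render_report("yardstick", "os-odl_l2-nofeature-ha", "PASS"))
```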
15:45:09 <morgan_orange1> what kind of granularity would you put in the doc
15:45:27 <frankbrockners> for now I would start with raw results
15:45:41 <frankbrockners> if you can automagically aggregate, then fine
15:45:46 <morgan_orange1> then jenkins or DB dump is fine..
15:45:57 <frankbrockners> but as a user of a scenario, i would want the details of what works and what does not
15:46:07 <frankbrockners> agreed
15:46:10 <morgan_orange1> it is in jenkins
15:46:28 <trevor_intel> frankbrockners: needs some analysis of results, not raw result ... e.g. pass/fail, how is performance, etc.
15:46:33 <morgan_orange1> if you want to see what happens on fuel-odl_ovs_ha => the best place today is jenkins
15:46:49 <trevor_intel> IMO it doesn't make sense to package up the raw results as part of the release
15:47:02 <frankbrockners> +1 morgan_orange1
15:47:09 <morgan_orange1> the idea of the DB was to push preformatted raw results including a possible status
15:47:21 <anac1> for yardstick, we will go through the results and provide an interpretation of the results - as our test cases are performance
15:47:22 <morgan_orange1> but not sure jenkins will store the results for long
15:47:23 <frankbrockners> trevor_intel: If you can aggregate them - fine. but raw would be a starting point
15:47:37 <frankbrockners> yes - this is the issue
15:47:58 <frankbrockners> which is why we need to save them - and store somewhere permanently
15:48:00 <morgan_orange1> and in the DB we do not have the notion of scenario...as it appears this week :)
15:48:24 <morgan_orange1> could be added in the version header
15:49:10 <morgan_orange1> so for brahmaputra => doc will detail the testcases and provide help to interpret results
15:49:17 <morgan_orange1> raw results in jenkins and/or in the db
15:49:31 <morgan_orange1> but good place for improvement for automatic result doc generation
15:49:57 <morgan_orange1> good driver to move to ELK
15:57:03 <trevor_intel> #info Still open what functest will deliver in the release for test results
15:57:44 <trevor_intel> #info e.g. document what passes what fails with explanations
15:58:16 <morgan_orange1> we have a user and an install/config guide
15:58:24 <morgan_orange1> we will point to jenkins
15:58:38 <morgan_orange1> and maybe to dashboard if it is up to date..
15:59:20 <morgan_orange1> as the results are not stable and not always reproducible...writing a doc...
15:59:42 <trevor_intel> morgan_orange1: makes sense
16:00:57 <anac1> https://git.opnfv.org/cgit/yardstick/tree/docs/templates/test_results_template.rst
16:01:22 <trevor_intel> #info IEEE Std 829-2008
16:01:39 <nahad> thank you
16:01:45 <trevor_intel> #endmeeting