08:02:26 <jose_lausuch> #startmeeting Functest weekly meeting November 29th
08:02:26 <collabot`> Meeting started Tue Nov 29 08:02:26 2016 UTC.  The chair is jose_lausuch. Information about MeetBot at http://wiki.debian.org/MeetBot.
08:02:26 <collabot`> Useful Commands: #action #agreed #help #info #idea #link #topic.
08:02:26 <collabot`> The meeting name has been set to 'functest_weekly_meeting_november_29th'
08:02:29 <jose_lausuch> hi
08:02:38 <HelenYao> hi
08:02:42 <jose_lausuch> #chair morgan_orange SerenaFeng HelenYao
08:02:42 <collabot`> Current chairs: HelenYao SerenaFeng jose_lausuch morgan_orange
08:02:50 <jose_lausuch> #info agenda for today here: https://wiki.opnfv.org/display/functest/Functest+Meeting#FunctestMeeting-29/11(8UTC)
08:02:50 <jose_lausuch> #info previous minutes: http://ircbot.wl.linuxfoundation.org/meetings/opnfv-functest/2016/opnfv-functest.2016-11-22-08.00.html
08:02:50 <OPNFV-Gerrit-Bot> George Paraskevopoulos proposed functest: Fix tacker util script  https://gerrit.opnfv.org/gerrit/25075
08:02:56 <jose_lausuch> #topic role call
08:03:35 <morgan_orange> #info Morgan Richomme
08:03:40 <rohitsakala> #info rohitsakala
08:03:40 <HelenYao> #info Helen Yao
08:03:42 <SerenaFeng> #info SerenaFeng
08:03:45 <OPNFV-Gerrit-Bot> George Paraskevopoulos proposed functest: Fix tacker util script  https://gerrit.opnfv.org/gerrit/25075
08:04:52 <jose_lausuch> #topic Action point follow-up
08:04:57 <jose_lausuch> #info AP: HelenYao jose_lausuch SerenaFeng check if things could work with docker compose
08:05:04 <jose_lausuch> I didn't have the time yet
08:05:10 <SerenaFeng> me either
08:05:14 <HelenYao> me, neither
08:05:39 <jose_lausuch> maybe we can postpone it for E-river, let's see
08:05:52 <morgan_orange> seems reasonable...
08:06:19 <jose_lausuch> #info not done, considering postponing it to E-river if no further solutions found soon.
08:06:25 <jose_lausuch> #info AP: HelenYao jose_lausuch continue Alpine study
08:06:38 <jose_lausuch> I did some tests, but Helen, you really tried it, right?
08:07:06 <HelenYao> yeah, no further investigation since last week
08:07:23 <jose_lausuch> this we could try at least, to save 200 mb
08:07:25 <morgan_orange> I tried Alpine for the reporting page, looks promising but work needed... not straightforward as the env is totally different
08:07:31 <jose_lausuch> ya
08:07:40 <jose_lausuch> it needs some changes in the way we install things
08:07:54 <jose_lausuch> #action HelenYao jose_lausuch continue investigation in Alpine
08:08:02 <jose_lausuch> ok?
08:08:07 <HelenYao> good
08:08:09 <jose_lausuch> #info AP: submit a patch with a new directory "openstack" functest/utils/openstack/ to put all the new utils. DONE
08:08:20 <jose_lausuch> #info AP: all review vping refactor and merge https://gerrit.opnfv.org/gerrit/#/c/24541/ . DONE
08:08:27 <jose_lausuch> #info AP: fix Domino test case in CI
08:08:39 <morgan_orange> #info done
08:08:45 <jose_lausuch> what's the status?
08:08:50 <jose_lausuch> ok
08:08:58 <jose_lausuch> thanks
08:09:00 <jose_lausuch> #topic Feature project integration requests (Milestone2)
08:09:02 <morgan_orange> even if we will probably disable domino, as we just run the test to skip the processing...
08:09:21 <jose_lausuch> ok
08:09:36 <morgan_orange> linked to the thread on the 2 patches 24863 & 24745
08:10:04 <morgan_orange> regarding M2 and Movie, I did not answer David about Movie
08:10:21 <jose_lausuch> let me post the info I have so far
08:10:23 <morgan_orange> do we agree that if a project does not need OPNFV it does not need Functest?
08:10:33 <jose_lausuch> yes, +1 to that
08:10:43 <jose_lausuch> it doesn't make much sense I'd say
08:10:52 <jose_lausuch> I see it as vsperf..
08:11:14 <morgan_orange> same for me
08:11:16 <jose_lausuch> #info Request from new feature projects: NetReady, Opera, Orchestra, Movie, Escalator, Ipv6, Barometer
08:11:23 <jose_lausuch> #info NetReady: a simple ping test case using Gluon API
08:11:32 <jose_lausuch> #info Opera: Clearwater vIMS deployment using Open-O instead of Cloudify
08:11:48 <jose_lausuch> #info Orchestra: not much info about the test plans.
08:11:52 <jose_lausuch> do you have any?
08:12:00 <morgan_orange> need to create the abstraction class for vnf onboarding
08:12:17 <jose_lausuch> #action morgan_orange create VNF onboarding abstraction class
08:12:26 <jose_lausuch> #info Escalator: test case for installer Daisy4NFV
08:12:43 <jose_lausuch> #info IPv6: Run entire functest suite but making use of the IPv6 endpoints. Maybe nothing extra to do here. Scenarios that will implement IPv6: apex/os-nosdn-nofeature-ha/noha,  apex/os-odl_l2-nofeature-ha/noha
08:13:27 <jose_lausuch> #info Barometer: test collecting information from the compute nodes to validate the metrics/timestamps generated by Ceilometer. Supported scenarios: fuel/os-nosdn-kvm_ovs-ha/noha, fuel/os-nosdn-kvm_ovs_dpdk-ha/noha
08:13:44 <jose_lausuch> #info Existing feature projects that will add new tests: SFC, SDNVPN
08:13:50 <jose_lausuch> do you know of any others?
08:13:54 <jose_lausuch> maybe I'm missing something
08:14:21 <morgan_orange> OAI test case (but considered as internal Functest case) + security group tests (internship)
08:14:44 <morgan_orange> apparently some refactoring on promise is planned
08:14:54 <jose_lausuch> feel free to info :)
08:15:13 <morgan_orange> #info OAI test case  + security group tests (internship) - (but considered as internal Functest case)
08:15:51 <morgan_orange> #info + healthcheck SNAPS
08:16:15 <jose_lausuch> thanks
08:16:28 <jose_lausuch> ok, let's get into matter
08:16:29 <jose_lausuch> #topic Framework re-factor status
08:16:43 <jose_lausuch> #info Discussion ongoing about what the tests should return (0 or !0) when the test fails.
08:16:54 <jose_lausuch> so, my view is
08:17:11 <jose_lausuch> We need to show RED in Jenkins when ANY of the tests failed. That was agreed and welcome by the community in general
08:18:24 <jose_lausuch> that is what we have been doing since summer, and I think it works fine
08:18:34 <morgan_orange> the thing is we have 3 statuses: test executed and PASS, test executed and FAIL, test not executed
08:18:55 <jose_lausuch> test not executed = execution error
08:18:56 <morgan_orange> OK to report red when test is FAIL (good way to give feedback and to be strict)
08:19:58 <HelenYao> what kind of execution errors do we usually have?
08:20:34 <HelenYao> after we have a clear view about execution errors, we can decide whether to make jenkins red or not
08:20:38 <morgan_orange> it corresponds usually to bug in the framework (bad path, bad library, no connectivity)
08:20:39 <jose_lausuch> due to typos or problem when creating the pre-resources
08:20:40 <jose_lausuch> etc
08:21:09 <HelenYao> in that case, it would be better for jenkins to show red
08:21:11 <morgan_orange> today we report Red in both cases
08:21:13 <jose_lausuch> in that case we should raise an exception
08:21:21 <jose_lausuch> yes
08:21:38 <jose_lausuch> if there are typos that cause problems, we show red of course
08:21:52 <jose_lausuch> but we don't report the test result to the DB
08:21:55 <morgan_orange> as far as I can see jenkins manages several states (red/blue, but also grey (interrupted) and yellow (not completed))
08:22:20 <jose_lausuch> fdegir: is there a way in Jenkins to show a "orange/yellow" ball for a job?
08:22:57 <jose_lausuch> maybe he is not in yet..
08:22:59 <morgan_orange> usually when you have a daily job (with deploy/functest/yardstick) you may have yellow if one of the 3 is red
08:23:14 <jose_lausuch> yes, but is it possible to manipulate that?
08:23:15 <HelenYao> how will we make use of different colors? if we only want to distinguish working or not, i think red and blue are enough
08:23:29 <morgan_orange> see https://build.opnfv.org/ci/, you have yellow
08:23:32 <jose_lausuch> like return 0 = blue,  return -1 = red,  what return value is yellow?
08:24:25 <morgan_orange> HelenYao: as we have 3 states we could imagine red = tests failed (scenario owner / feature project) and orange = execution error (usually our bad)
08:24:33 <morgan_orange> if we can have 2 states red and orange
08:24:43 <morgan_orange> if not we should continue as we are today
08:25:03 <morgan_orange> red = framework | test errors
08:25:46 <jose_lausuch> I suggest we continue as today until we find a way to report yellow
08:25:57 <morgan_orange> orange... yellow is already in use
08:26:16 <jose_lausuch> my view is that if we execute external tests, they should make sure to return !0 if the test fails
08:26:43 <jose_lausuch> yes, but it is used for example when there is a parent job and one of the children is red
08:26:59 <jose_lausuch> fuel daily job = fuel deploy + functest + yardstick
08:27:21 <morgan_orange> yes... so it is already used; yellow means that at least one of the child jobs has failed
08:27:28 <morgan_orange> it provides information
08:27:58 <morgan_orange> so we could not reuse the same color
08:27:58 <morgan_orange> I can action myself and see with releng if we can customize it
08:28:11 <jose_lausuch> ok
08:28:28 <morgan_orange> if not, keep on reporting red, as it is now well accepted...
08:28:36 <HelenYao> some jenkins plugin might be useful: https://wiki.jenkins-ci.org/display/JENKINS/Green+Balls
08:28:49 <HelenYao> just one sample
08:29:10 <jose_lausuch> do we agree that we report to the DB when execution=ok ?
08:29:19 <jose_lausuch> and don't when execution=error ?
08:29:20 <HelenYao> #agreed
08:29:23 <morgan_orange> #action morgan_orange see with releng if we can customize the jenkins color to distinguish an error due to execution framework from an error due to the tests
08:29:36 <morgan_orange> #agreed
08:29:43 <jose_lausuch> #info Agreement on pushing results to DB when the execution is ok (test can FAIL/PASS) but not pushing when there is an execution error
08:29:57 <morgan_orange> we can also agree to exclude the SKIP state
08:30:00 <SerenaFeng> #agree
08:30:01 <morgan_orange> which is useless
08:30:03 <jose_lausuch> yes
08:30:16 <morgan_orange> #info Agreement on removing SKIP state
08:30:21 <jose_lausuch> #info Agreement on excluding the SKIP state (affected scripts: domino, sdnvpn)
08:30:23 <jose_lausuch> is it domino?
08:30:26 <morgan_orange> yes
08:30:28 <jose_lausuch> ok
08:30:36 <jose_lausuch> I copy pasted it to sdnvpn :p
08:30:41 <jose_lausuch> I'll remove it as well
08:30:49 <morgan_orange> no need to start a feature test suite if this suite is just skipped..
08:30:55 <SerenaFeng> I will remove it from feature_base.py as well
08:31:10 <morgan_orange> probably the best way would be to revert the 2 patches...
08:31:24 <SerenaFeng> so we will still have three statuses?
08:31:43 <jose_lausuch> #info Question: should the tests return 0 if the code execution is correct but the test failed?
08:31:52 <jose_lausuch> that is the key question now
08:32:05 <jose_lausuch> Cedric's opinion is that we shouldn't
08:32:06 <SerenaFeng> yeah
08:32:18 <jose_lausuch> I think we should, otherwise, how do we capture that the test failed?
08:32:26 <morgan_orange> exception raised?
08:32:26 <SerenaFeng> if we shouldn't, how do we manage the report?
08:32:43 <jose_lausuch> sorry, my opinion is we should return !0 if the test fails (even though execution is ok)
08:32:54 <SerenaFeng> agree
08:33:00 <jose_lausuch> maybe that is not very elegant according to software engineering
08:33:06 <jose_lausuch> but we need to capture it some way
08:33:07 <HelenYao> agree on return !0
08:33:27 <morgan_orange> raising an exception would not be cleaner?
08:33:44 <jose_lausuch> raising an exception?
08:33:51 <jose_lausuch> that is when the code has troubles, right?
08:34:38 <morgan_orange> not necessarily
08:35:21 <HelenYao> I think for the testcase itself, it is acceptable to raise an exception. For the main runner, it should catch the exception and decide whether to raise another exception or return !0
08:35:30 <OPNFV-Gerrit-Bot> George Paraskevopoulos proposed functest: Fix tacker util script  https://gerrit.opnfv.org/gerrit/25075
08:35:53 <HelenYao> the main runner consists of several testcase runs
08:36:15 <HelenYao> we may need to define different levels of exceptions
08:37:03 <morgan_orange> if we catch the exception, we would be able to report to jenkins a test or an execution error. It would be centralized
08:37:26 <HelenYao> yes
08:37:48 <jose_lausuch> but raising an exception and quitting the test is the same as returning !0
08:38:43 <HelenYao> I tend to support exception handling and it can be more scalable if we design it well
08:39:08 <SerenaFeng> I think it is ok if we use different exit_codes to indicate different execution results
08:39:09 <morgan_orange> from a pragmatic perspective maybe... but from a software dev point of view (I am not in this category) no... as you would report a !0 status for a processing that was correct..
08:40:08 <jose_lausuch> I just tried running this: raise Exception('execution error')
08:40:13 <jose_lausuch> and the result value was 1
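For reference, the behaviour described just above is easy to reproduce: an uncaught exception makes the Python interpreter exit with status 1, whatever the message string is (the file name below is only illustrative).

    # exit_demo.py
    raise Exception('execution error')   # the string could just as well be 'test failed'

    # $ python exit_demo.py; echo $?
    # Traceback (most recent call last):
    #   File "exit_demo.py", line 2, in <module>
    #     raise Exception('execution error')
    # Exception: execution error
    # 1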
08:40:21 <HelenYao> an exception will be more meaningful than an exit_code and it can have object-oriented abilities
08:40:30 <morgan_orange> yep
08:40:55 <jose_lausuch> 'execution error' was just a string, it could be 'test failed'
08:41:00 <morgan_orange> you can raise execution error, test error,...centralize the status in the running part, try/catch and decide what to do
08:41:09 <jose_lausuch> ok
08:41:11 <HelenYao> +1
08:41:14 <jose_lausuch> then the result is the same
08:41:16 <morgan_orange> we can customize our exceptions
08:41:24 <jose_lausuch> in run_test we catch the exception if there is one
08:41:37 <jose_lausuch> and report green/red to jenkins?
08:41:59 <morgan_orange> in the abstract file, add: if criteria is not PASS, raise Exception('Test...')
08:42:20 <morgan_orange> for jenkins we have to see if it is possible, but yes, that is the idea
08:43:15 <jose_lausuch> but how do you know criteria is not passed?
08:43:20 <jose_lausuch> for external tests, for example
08:43:21 <HelenYao> if we design the exceptions well, they can show more information than an exit_code, such as an error msg
08:43:34 <morgan_orange> the info is already available (it is used to push results in the DB)
08:43:42 <morgan_orange> self.criteria
08:43:47 <jose_lausuch> ok sure
08:43:48 <jose_lausuch> but
08:43:50 <jose_lausuch> some feature tests
08:43:57 <jose_lausuch> use a command to run the external test
08:44:01 <SerenaFeng> If we use an exception to indicate the execution error, using the error_message to indicate why it was not executed, I think it makes sense
08:44:07 <jose_lausuch> what do we impose on the external scripts?
08:44:40 <SerenaFeng> using it to indicate the test failure, I don't get it
08:45:07 <HelenYao> jose_lausuch: can you give more information?
08:45:16 <morgan_orange> jose_lausuch: I do not catch the "impose the external scripts" part
08:45:17 <HelenYao> I don't quite follow it
08:45:26 <jose_lausuch> https://wiki.opnfv.org/download/attachments/8685677/sfc-example.JPG?version=3&modificationDate=1479914953000&api=v2
08:45:32 <jose_lausuch> this picture
08:45:41 <jose_lausuch> imagine external tests
08:45:45 <jose_lausuch> from other repos
08:46:00 <jose_lausuch> we don't have control over the logic, we can just impose what to return and so on
08:46:18 <jose_lausuch> we have our wrapper script, like domino, promise, etc
08:46:22 <jose_lausuch> in this case, sfc
08:46:31 <jose_lausuch> they will have run_tests.py to run their different tests
08:46:33 <SerenaFeng> when a test is executed, it returns whether the test succeeded or failed, that makes sense
08:46:42 <jose_lausuch> what should run_tests.py return?
08:47:04 <SerenaFeng> if something wrong happened that caused the test not to be executed, raise an exception to indicate why it was not executed
08:47:11 <SerenaFeng> in this way I can understand
08:47:39 <jose_lausuch> it's getting late...
08:47:44 <jose_lausuch> can we have a follow up on this?
08:47:52 <HelenYao> sure
08:47:53 <morgan_orange> I think it is what Cedric mentioned as missing; we need to add something to run(), push_to_db(), report_result()
08:48:15 <morgan_orange> I think Cedric said he will provide a patch
08:48:20 <jose_lausuch> yes
08:48:21 <HelenYao> how about waiting to see what Cedric proposes?
08:48:39 <jose_lausuch> #action Cedric, propose a patch for TestCaseBase
08:48:56 <jose_lausuch> let's wait for that and have a follow-up discussion about it
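Pending Cedric's patch, a minimal sketch of the behaviour agreed above, with hypothetical names (TestCaseFailure, ExecutionError, run_test, DummyCase); the real TestCaseBase change may look different. It only illustrates the decisions taken in this meeting: push to the DB whenever the test actually executed (PASS or FAIL), never push on an execution error, and return !0 to Jenkins in both failure cases.

    import logging
    import sys

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger('run_tests')


    class TestCaseFailure(Exception):
        """The test executed to the end but its criteria is not PASS."""


    class ExecutionError(Exception):
        """The framework could not execute the test (bad path, no connectivity, ...)."""


    def run_test(test_case):
        """Run one case and map its outcome to an exit code for Jenkins."""
        try:
            test_case.run()                # may raise ExecutionError
            test_case.push_to_db()         # executed => PASS or FAIL is pushed to the DB
            if test_case.criteria != 'PASS':
                raise TestCaseFailure(test_case.case_name)
            return 0                       # blue ball: executed and PASS
        except TestCaseFailure:
            logger.error("%s executed but FAILED", test_case.case_name)
            return 1                       # red: executed and FAIL
        except ExecutionError as err:
            logger.error("%s not executed (%s), nothing pushed to the DB",
                         test_case.case_name, err)
            return 2                       # red today; yellow/orange if releng allows it


    class DummyCase(object):
        """Stand-in for a TestCaseBase subclass, only to make the sketch runnable."""
        case_name = 'dummy'
        criteria = 'FAIL'

        def run(self):
            pass

        def push_to_db(self):
            logger.info("pushing %s result (%s) to the DB", self.case_name, self.criteria)


    if __name__ == '__main__':
        sys.exit(run_test(DummyCase()))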
08:49:05 <jose_lausuch> I'd like to talk about this as well
08:49:06 <jose_lausuch> #topic Unified way to provide constants and env variables
08:49:07 <HelenYao> sounds good
08:49:12 <jose_lausuch> Serena? all yours :)
08:49:33 <SerenaFeng> Unified way to provide constants and env variables?
08:50:00 <jose_lausuch> yes, your proposal about separating constants and env variables
08:50:06 <jose_lausuch> since we are mixing things
08:50:14 <jose_lausuch> you wanted to propose a env.py module
08:50:15 <SerenaFeng> ok, I will try to do it
08:50:27 <jose_lausuch> can you explain your idea a bit?
08:50:28 <morgan_orange> jose_lausuch: it is Helen's patch
08:50:42 <morgan_orange> no?
08:50:52 <jose_lausuch> yes, but there was a discussion about separating constants and env variables that are relevant
08:51:01 <jose_lausuch> Serena proposed having an env.py
08:51:01 <morgan_orange> ok
08:51:12 <SerenaFeng> for now we are mixing env with config_functest.py
08:51:19 <morgan_orange> note that in the releng module now there is a constants.py
08:51:41 <morgan_orange> https://git.opnfv.org/releng/tree/modules/opnfv/utils/constants.py
08:51:49 <SerenaFeng> constants here is not the same with releng's
08:52:02 <SerenaFeng> here we mean node_name, build_tag
08:52:18 <SerenaFeng> and all the things we configured in config_functest.yml
08:52:46 <morgan_orange> ok to have an env.py
08:52:49 <morgan_orange> would be cleaner
08:53:13 <SerenaFeng> my idea is to separate things like node_name from things in config_functest.yaml
08:53:36 <jose_lausuch> #info idea about separating things like node_name from things in config_functest.yaml
08:53:57 <SerenaFeng> things in config_functest.yaml are called configuration
08:54:18 <SerenaFeng> when access to config_functest.yaml fails, we raise an exception directly
08:54:20 <jose_lausuch> #info differentiate between configuration parameters and environment variables
08:54:33 <SerenaFeng> instead of reading from ENVs again
08:55:09 <jose_lausuch> yes, I agree, if it is a configuration parameter, we shouldn't check the environment
08:55:13 <HelenYao> I tend to support the idea of not letting the downstream decide whether the value it wants is a config param or an env param
08:55:16 <jose_lausuch> something to change in constants.py maybe
08:55:42 <HelenYao> the downstream does not have the knowledge
08:56:07 <SerenaFeng> and to the outside we will provide constants.py, but internally we still need to tell the difference between the two, using env.py and config.py
08:56:10 <jose_lausuch> the knowledge is in config_functest.yaml
08:56:26 <jose_lausuch> ok
08:56:33 <SerenaFeng> and we only expose config_functest.yml in docker file
08:56:57 <SerenaFeng> all the other things can be obtained from config_functest.yml
08:57:04 <SerenaFeng> or env
08:57:20 <jose_lausuch> #action SerenaFeng propose a patch about this topic
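A rough sketch of the separation SerenaFeng describes, with hypothetical module layout, variable names and file path; the actual patch may organise this differently. The point is that env.py only reads Jenkins/installer environment variables, while config.py only reads config_functest.yaml and raises instead of silently falling back to the environment.

    # env.py - environment variables only (names are illustrative)
    import os

    INSTALLER_TYPE = os.environ.get('INSTALLER_TYPE')
    NODE_NAME = os.environ.get('NODE_NAME', 'unknown_pod')
    BUILD_TAG = os.environ.get('BUILD_TAG')


    # config.py - config_functest.yaml only, no fallback to os.environ
    import yaml

    # path is only illustrative; in the container it would point to the installed yaml file
    CONFIG_FUNCTEST_YAML = 'config_functest.yaml'


    class ConfigError(Exception):
        """Raised when config_functest.yaml is missing or a key is absent."""


    def get(*keys):
        """Return a (possibly nested) value from config_functest.yaml or raise ConfigError."""
        try:
            with open(CONFIG_FUNCTEST_YAML) as yaml_file:
                value = yaml.safe_load(yaml_file)
            for key in keys:
                value = value[key]
            return value
        except (IOError, KeyError) as err:
            raise ConfigError("cannot read %s from %s: %s"
                              % ('.'.join(keys), CONFIG_FUNCTEST_YAML, err))

Callers would then be explicit about the source, e.g. config.get('general', 'openstack', 'image_name') versus env.NODE_NAME, instead of mixing the two.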
08:57:37 <jose_lausuch> 3 minutes
08:57:38 <jose_lausuch> #topic Update Swagger framework + associated documentation using swagger2markup
08:58:02 <SerenaFeng> rohitsakala has done a very good job
08:58:22 <SerenaFeng> he will give us the introduction
08:58:22 <SerenaFeng> ping rohitsakala
08:58:23 <jose_lausuch> rohitsakala great!
08:58:28 <rohitsakala> I summarized what I found in this doc
08:58:29 <rohitsakala> https://docs.google.com/document/d/1jWwVZ1ZpKgKcOS_zSz2KzX1nwg4BXxzBxcwkesl7krw/edit?usp=sharing
08:58:34 <rohitsakala> Thanks @jose_lausuch
08:58:40 <rohitsakala> Thanks @SerenaFeng
08:58:47 <jose_lausuch> #link https://docs.google.com/document/d/1jWwVZ1ZpKgKcOS_zSz2KzX1nwg4BXxzBxcwkesl7krw/edit?usp=sharing
08:59:15 <morgan_orange> looks great - so we all agree on using the swagger2markup.. :)
08:59:41 <SerenaFeng> he gives us 4 options
08:59:57 <SerenaFeng> 1 and 4 only support swagger-ui 1.2
09:00:16 <SerenaFeng> 2 and 3 support swagger 1.2 and 2.0
09:00:35 <jose_lausuch> #link http://swagger2markup.github.io/spring-swagger2markup-demo/1.1.0/
09:00:52 <SerenaFeng> as we are using 1.2, and there are tools that support it, I think we don't need to upgrade our swagger-ui now
09:01:19 <jose_lausuch> we agree?
09:01:52 <jose_lausuch> #info no need to upgrade swagger-ui for now, as we are using v1.2 and there're tools that support it
09:02:04 <jose_lausuch> awesome
09:02:13 <SerenaFeng> so there's swagger-codegen and swagger2markup left
09:02:38 <SerenaFeng> swagger-codegen is maintained by the same team as swagger-ui, which we are using
09:03:08 <rohitsakala> yeah.
09:03:16 <jose_lausuch> so what is the proposal?
09:03:23 <rohitsakala> I also gave a link to a bash script on how to run swagger-codegen
09:03:41 <SerenaFeng> there's a script written by rohitsakala
09:04:01 <rohitsakala> Script :- https://usercontent.irccloud-cdn.com/file/WBMjul3s/script.sh
09:04:04 <SerenaFeng> as I said he does a very good job
09:04:19 <rohitsakala> Command :- bash script.sh
09:04:20 <SerenaFeng> #link https://usercontent.irccloud-cdn.com/file/WBMjul3s/script.sh
09:04:26 <HelenYao> rohitsakala: well done
09:04:34 <rohitsakala> We can improve the UI using CSS.
09:04:42 <jose_lausuch> ok
09:04:49 <morgan_orange> He probably has a better view than us on the best candidate
09:05:04 <morgan_orange> and the opportunity to migrate to 2.0 or not
09:05:04 <jose_lausuch> #info proposal to use swagger-codegen
09:05:09 <SerenaFeng> agree
09:05:20 <rohitsakala> agree
09:05:27 <SerenaFeng> rohitsakala, which one do you prefer?
09:05:29 <rohitsakala> @morgan_orange
09:05:29 <collabot`> rohitsakala: Error: "morgan_orange" is not a valid command.
09:05:42 <rohitsakala> I think swagger -codegen does the job
09:05:44 <jose_lausuch> rohitsakala: you dont need the "@" :)
09:05:47 <morgan_orange> ok
09:05:57 <rohitsakala> and no need to update to swagger 2.0 as of now.
09:06:00 <morgan_orange> @agreed use of swagger-codegen
09:06:00 <collabot`> morgan_orange: Error: "agreed" is not a valid command.
09:06:05 <rohitsakala> jose_lausuch: ok
09:06:11 <morgan_orange> #agreed use of swagger-codegen
09:06:17 <SerenaFeng> ok, agree
09:06:19 <jose_lausuch> ok
09:06:34 <rohitsakala> I have a doubt.
09:06:41 <morgan_orange> we could probably share with the other testing projects... as yardstick and qtip are developing APIs, it may be interesting for them as well
09:07:04 <SerenaFeng> yeah, at least qtip is trying to do the same thing
09:07:29 <morgan_orange> so maybe plan a 10-minute presentation during the next APAC testing weekly meeting?
09:07:41 <SerenaFeng> I asked them to wait until rohitsakala finishes the work
09:07:42 <rohitsakala> should I create a jenkins job for the automatic update of the API documentation whenever someone changes the code of the API, like adding a new API call?
09:07:45 <HelenYao> from testing projects point of view, ui consistency is important
09:08:05 <jose_lausuch> rohitsakala: that would be a good idea
09:08:09 <jose_lausuch> no need for manual updates
09:08:24 <rohitsakala> jose_lausuch:  Sure. :)
09:08:25 <SerenaFeng> and we can use it as a verification when submitting code
09:08:36 <morgan_orange> but it means that the API hosted in testresults.opnfv.org should also be automatically updated
09:08:45 <SerenaFeng> morgan_orange, is unittest of testapi still used?
09:08:48 <rohitsakala> morgan_orange: yeah
09:08:51 <jose_lausuch> we need a jenkins slave on that machine
09:09:17 <morgan_orange> SerenaFeng: hmm only manually I think
09:09:26 <SerenaFeng> morgan_orange there's another question I want to raise
09:09:52 <SerenaFeng> sometimes testresults.opnfv.org may be different from the repo
09:10:24 <SerenaFeng> when we provide the restful api information in our wiki page, which one should we follow?
09:10:44 <SerenaFeng> the testresults.opnfv.org?  or the current code
09:10:51 <morgan_orange> ideally repo => automatic update on testresults.opnfv.org
09:11:13 <morgan_orange> today people have access to testresults.opnfv.org , so that is what they see/can use
09:11:25 <SerenaFeng> so we need the automatic deployment now
09:11:31 <morgan_orange> but +1 to automate the deployment
09:11:35 <morgan_orange> it is time...
09:11:41 <jose_lausuch> +1 too
09:11:43 <SerenaFeng> and automatic upgrade
09:11:48 <SerenaFeng> +1
09:12:07 <rohitsakala> we can add the build script for api docs also in that job.
09:12:40 <morgan_orange> yes
09:12:59 <morgan_orange> #action morgan_orange grant access to rohitsakala to testresults.opnfv.org
09:13:43 <jose_lausuch> we are late :)
09:13:52 <SerenaFeng> ok, that's all I can think of now
09:13:52 <jose_lausuch> but good that you don't have other meetings now
09:14:08 <jose_lausuch> let me share one last thing before we close for today
09:14:11 <morgan_orange> #topic Plugfest/Hackfest
09:14:18 <jose_lausuch> oh, thanks
09:14:18 <morgan_orange> #info Jose will attend...
09:14:18 <jose_lausuch> #info Juha and Jose will attend the event
09:14:33 <jose_lausuch> #info We will prepare a Functest demo.
09:14:44 <HelenYao> anything we can help with?
09:14:53 <morgan_orange> any points to be discussed F2F?
09:15:00 <jose_lausuch> #info need support to set up a Database to store local results and possible Dashboard
09:15:03 <morgan_orange> probably several questions on feature/vnf onboarding..
09:15:10 <jose_lausuch> #link https://wiki.opnfv.org/display/EVNT/Co-located+Plugfest-Hackfest+Planning+Page
09:15:35 <jose_lausuch> #info Test Community Common Goals and Priorities by Trevor and me
09:15:39 <jose_lausuch> probably
09:15:44 <morgan_orange> jose_lausuch: for the last edition, we created plugfest.opnfv.org (a clone of testresults.opnfv.org)
09:16:02 <morgan_orange> I also saw that Trevor initiated a page for dovetail
09:16:05 <jose_lausuch> let's have an offline training for me if you agree :)
09:16:19 <morgan_orange> #info Functest versus Dovetail
09:16:32 <jose_lausuch> versus? :)
09:16:33 <morgan_orange> #info Trevor initiated some etherpads: https://etherpad.opnfv.org/p/yardstickcvp, https://etherpad.opnfv.org/p/vsperfcvp
09:16:45 <morgan_orange> shall we do the same?
09:16:52 <morgan_orange> probably to be discussed during the plugfest
09:17:05 <jose_lausuch> the dovetail folks will be there
09:17:12 <jose_lausuch> I plan to attend that session as well
09:17:13 <morgan_orange> jose_lausuch: no problem for the setup of the DB, maybe plugfest.opnfv.org is still available...
09:17:27 <morgan_orange> #topic AoB
09:17:54 <morgan_orange> #info work on landing page + data model to be initiated next week (with jack and Rex)
09:18:06 <morgan_orange> need to go (fire alarm exercise)
09:18:16 <HelenYao> I proposed https://gerrit.opnfv.org/gerrit/#/c/25059/ about newton support, your feedback is appreciated
09:18:18 <SerenaFeng> and jose_lausuch asked me to help deploy a testapi, I said I would send an email, but I totally forgot about it, very sorry about that
09:18:33 <SerenaFeng> jose_lausuch do you still need that
09:18:33 <HelenYao> #info I proposed https://gerrit.opnfv.org/gerrit/#/c/25059/ about newton support, your feedback is appreciated
09:19:06 <jose_lausuch> SerenaFeng: yes, no worries, I need to do it for next week, but maybe we can reuse plugfest.opnfv.org
09:19:18 <jose_lausuch> thanks HelenYao, I'll take a look
09:19:48 <SerenaFeng> what should we do with https://gerrit.opnfv.org/gerrit/#/c/24801/
09:20:20 <jose_lausuch> SerenaFeng: we continue, it's a good idea
09:20:27 <SerenaFeng> shall I wait until we come to some conclusion?
09:20:33 <jose_lausuch> ok
09:20:37 <SerenaFeng> ok
09:20:49 <jose_lausuch> thanks everyone
09:20:51 <jose_lausuch> #endmeeting