08:02:09 <jose_lausuch> #startmeeting Functest weekly meeting January 24th 2017
08:02:09 <collabot> Meeting started Tue Jan 24 08:02:09 2017 UTC.  The chair is jose_lausuch. Information about MeetBot at http://wiki.debian.org/MeetBot.
08:02:09 <collabot> Useful Commands: #action #agreed #help #info #idea #link #topic.
08:02:09 <collabot> The meeting name has been set to 'functest_weekly_meeting_january_24th_2017'
08:02:22 <jose_lausuch> #topic role call
08:02:26 <jose_lausuch> #info Jose Lausuch
08:02:29 <morgan_orange> #info Morgan Richomme
08:02:36 <HelenYao> #info Helen Yao
08:02:40 <jose_lausuch> #chair morgan_orange
08:02:40 <collabot> Current chairs: jose_lausuch morgan_orange
08:02:50 <SerenaFeng> #info SerenaFeng
08:02:50 <LindaWang> #info Linda Wang
08:02:55 <jose_lausuch> #link https://wiki.opnfv.org/display/functest/Functest+Meeting#FunctestMeeting-24/01(8UTC)
08:03:51 <morgan_orange> #topic Action point follow-up 5' (of the previous 2 meetings)
08:04:01 <morgan_orange> #link http://ircbot.wl.linuxfoundation.org/meetings/opnfv-functest/2017/opnfv-functest.2017-01-10-08.00.html
08:04:19 <morgan_orange> #info AP1: HelenYao check compass issues (issue with keystone version during snapshot/clean phase)
08:04:27 <jose_lausuch> #topic action items
08:04:33 <jose_lausuch> #undo
08:04:33 <collabot> Removing item from minutes: <MeetBot.ircmeeting.items.Topic object at 0x202fb10>
08:04:40 <jose_lausuch> didn't see you put it
08:05:02 <morgan_orange> HelenYao: I think this AP is under control
08:05:17 <HelenYao> the baremetal is failing and the virtual is ok
08:05:33 <morgan_orange> #info baremetal is failing and the virtual is ok
08:05:41 <HelenYao> https://jira.opnfv.org/projects/FUNCTEST/issues/FUNCTEST-680
08:05:51 <HelenYao> this epic has all the open issues on CI
08:05:51 <morgan_orange> #link https://jira.opnfv.org/projects/FUNCTEST/issues/FUNCTEST-680
08:06:08 <morgan_orange> #info AP2 morgan_orange initiate a mail with Functest contributor to indicate current status on community lab access
08:06:08 <morgan_orange> steve ask for access to ad hoc community labs
08:06:33 <morgan_orange> #info done (got feedback on the mailing list)+ great virtual labs from jose
08:06:50 <morgan_orange> #info AP3 steve ask for access to ad hoc community labs
08:06:52 <morgan_orange> #info done
08:07:00 <morgan_orange> #info AP4 jose_lausuch remove colorado view in jenkins functest page
08:07:02 <morgan_orange> #info done
08:07:09 <morgan_orange> #info AP5 jose_lausuch update JIRA sprint
08:07:13 <jose_lausuch> you are too fast :)
08:07:13 <morgan_orange> I think it is done..
08:07:17 <jose_lausuch> #info done
08:07:26 <jose_lausuch> this week we start sprint 5
08:07:40 <morgan_orange> #info sprint 5 starts this week
08:07:44 <morgan_orange> #info AP6 ashishk sync with Mark Beier
08:08:14 <jose_lausuch> is he in the chat?
08:08:24 <morgan_orange> #info mbeir made a presentation on unit tests last week during the testing wg meeting, a new one is planned to exchange on the project best practices
08:08:41 <morgan_orange> #action ashishk sync with Mark Beier
08:08:52 <morgan_orange> #info AP7 plan slot with GTM on the 26/1 for unit tests => done
08:09:02 <SerenaFeng> about this
08:09:06 <morgan_orange> #info AP8 plan slot with GTM on the 2/2 for unit tests => done
08:09:15 <SerenaFeng> at that time I will be on the train
08:09:39 <morgan_orange> there is no rush we may postpone if it is more convenient
08:09:46 <morgan_orange> please update the meeting page
08:09:53 <SerenaFeng> so I will not attend it
08:09:55 <HelenYao> all members in china will be OOO on Feb.2
08:10:08 <jose_lausuch> official holiday?
08:10:12 <HelenYao> yes
08:10:13 <morgan_orange> OK so makes sense to move that to the 9th
08:10:14 <jose_lausuch> ok
08:10:17 <jose_lausuch> yep
08:10:19 <HelenYao> the last day of spring festival
08:10:31 <HelenYao> Feb.9 sounds good
08:10:40 <morgan_orange> #action jose_lausuch review agenda for the 2nd (public holiday in China)
08:10:49 <morgan_orange> #info AP9: morgan_orange plan a meeting tomorrow 5PM CET to discuss VNF on boarding - send invitation and bridge details
08:11:00 <morgan_orange> #info done, on VNF onboarding patch submitted last week
08:11:12 <morgan_orange> #info AP10 all feedback requested on https://gerrit.opnfv.org/gerrit/#/c/26695/
08:11:21 <jose_lausuch> #info done and merged
08:11:33 <morgan_orange> now the APs from last week :)
08:11:45 <morgan_orange> #info AP11: jose_lausuch contact Copper PTL to see what happens with Copper test case
08:12:02 <morgan_orange> I think LindaWang is trying to fix path issues dealing with copper
08:12:05 <jose_lausuch> #info Bryan was saying that he can't test it without community labs
08:12:27 <morgan_orange> possible to test on virtual pod (Apex), at least the path issue..
08:12:48 <morgan_orange> can we action LindaWang to test her patch on the virtual pod for that?
08:12:53 <jose_lausuch> yes
08:12:59 <LindaWang> ok
08:13:13 <morgan_orange> #action LindaWang check path issue with copper on apex virtual pod
08:13:19 <morgan_orange> #info AP12: jose_lausuch contact Doctor PTL for Doctor test case
08:13:27 <jose_lausuch> #info not done
08:13:47 <morgan_orange> #info AP13: juhak troubleshoot on Apex error
08:13:54 <morgan_orange> juhak: any update
08:14:00 <juhak> #info occasional failures, seems to be related to odl
08:14:11 <juhak> Tim's comment in APEX-380: "ODL failed due to a floating IP bug in ODL, which should be resolved by building a new RPM for ODL"
08:14:52 <jose_lausuch> juhak: do we have the same behaviour in our vpod?
08:15:20 <jose_lausuch> juhak: if we get a new RPM do you know how to re-deploy with that new package?
08:15:21 <juhak> is there vpod deployed with odl?
08:15:36 <jose_lausuch> juhak: no
08:15:42 <jose_lausuch> juhak: maybe we could add it
08:15:52 <morgan_orange> something we already discussed and which is beyond Functest scope: at the beginning of the tests it would make sense to get the version of components (e.g. odl) from the installers
08:16:08 <morgan_orange> #info AP14 talk to trozet about openstack version vs keystone version. Can we use v3 as all the others?
08:16:33 <jose_lausuch> #action jose_lausuch talk to trozet about openstack version vs keystone version. Can we use v3 as all the others?
08:16:45 <morgan_orange> #info AP15 review all https://gerrit.opnfv.org/gerrit/#/c/27015/
08:17:01 <jose_lausuch> #info merged
08:17:01 <morgan_orange> #info done and merged
08:17:05 <jose_lausuch> man :D
08:17:18 <jose_lausuch> #undo
08:17:18 <collabot> Removing item from minutes: <MeetBot.ircmeeting.items.Info object at 0x24608d0>
08:17:20 <morgan_orange> #info AP16 SerenaFeng get a status update about Compass deployment problems
08:17:38 <morgan_orange> it was HelenYao not SerenaFeng, because Serena is working for ZTE and Helen for Huawei
08:17:47 <jose_lausuch> yes
08:17:53 <jose_lausuch> but I think it is duplicated anyway
08:17:55 <SerenaFeng> yes ;)
08:17:58 <jose_lausuch> there was already an AP
08:18:01 <HelenYao> the compass has been working on the endpoint issue
08:18:13 <HelenYao> once that patch is fixed, the baremetal build would move on
08:18:14 <morgan_orange> #info AP16 canceled => see AP17
08:18:15 <morgan_orange> #info AP 17 HelenYao get a status update about Compass deployment problems
08:18:43 <morgan_orange> yes we saw new results on the reporting page, including compass deployment with SNAPS tests OK...
08:18:58 <morgan_orange> #info AP18 jose_lausuch talk to jmorgan1 about intel pods status
08:19:14 <morgan_orange> #infopods seem back to life, new runs from joid seen in reporting page
08:19:15 <jose_lausuch> #info intel pods are back, JOID is back
08:19:21 <morgan_orange> do not undo
08:19:32 <morgan_orange> it is better than my inputs.. :)
08:19:39 <jose_lausuch> you missed a space this time :p
08:19:44 <morgan_orange> #info AP19 morgan_orange contact narinder to get visibility...would be hard to troubleshoot if no run before the end of january
08:20:01 <morgan_orange> #info see AP18, joid is back, issues with SNAPS see next topics
08:20:18 <morgan_orange> #info AP19 jose_lausuch test connection_check on a fuel env
08:20:39 <morgan_orange> #unfo
08:20:41 <morgan_orange> #undo
08:20:41 <collabot> Removing item from minutes: <MeetBot.ircmeeting.items.Info object at 0x2460e10>
08:20:41 <jose_lausuch> #info didn't have time myself, but Steven tried and found the root cause
08:20:56 <morgan_orange> #info AP20 jose_lausuch test connection_check on a fuel env
08:21:18 <morgan_orange> 20 APs i think it is our record...
08:21:19 <jose_lausuch> #info problem with the RC file in Fuel. Maybe we could tweak the RC file coming from the installer (fetch_os_creds)
08:21:35 <morgan_orange> probably similar issues with joid
08:21:47 <jose_lausuch> ya
08:21:57 <jose_lausuch> the v3 in the endpoint url was missing
08:21:59 <jose_lausuch> strange
08:22:01 <jose_lausuch> anyway
08:22:04 <jose_lausuch> will continue with that
08:22:07 <jose_lausuch> #topic Functest server
08:22:18 <jose_lausuch> #info Baremetal server provided to Functest team.
08:22:33 <jose_lausuch> #info 2 virtual deployments: Fuel + Apex
08:22:43 <morgan_orange> #info thanks to infra team! very appreciated
08:22:55 <morgan_orange> #info we should find a way to get similar resources for compass/joid
08:22:56 <jose_lausuch> #info it is possible to create more deployments in Fuel so that each one of us has a different one to play with
08:23:21 <jose_lausuch> hehe yes
08:23:29 <jose_lausuch> I'm also very much an infra guy :)
08:23:42 <HelenYao> jose_lausuch: bravo
08:23:57 <jose_lausuch> #action create more than 1 fuel deployment. Resources are ok (enough disk and ram)
08:24:02 <jose_lausuch> who of you want a fuel deployment?
08:24:09 <jose_lausuch> I can try to create up to 3 or 4
08:24:14 <jose_lausuch> I'd like to have 1 :)
08:24:32 <morgan_orange> do you think it makes more sense to have a dedicated deployment per tester, or one per scenario?
08:24:42 <jose_lausuch> HelenYao: LindaWang: is it ok for you to share one?
08:24:46 <morgan_orange> I think it would make sense to have a nosdn-nofeature, an odl-nofeature
08:24:54 <morgan_orange> and 2 that could be changed depending on the need
08:24:56 <jose_lausuch> yes
08:24:59 <HelenYao> jose_lausuch: it's ok
08:25:02 <LindaWang> jose_lausuch:  I am fine with it.
08:25:05 <morgan_orange> bgpvpn, sfc, ovs, ...
08:25:18 <jose_lausuch> I wouldn't like to go so deep
08:25:23 <jose_lausuch> for example
08:25:28 <jose_lausuch> there is a POD already for SFC guys
08:25:32 <jose_lausuch> no need to do it ourselves
08:25:38 <jose_lausuch> same for bgpvpn
08:25:45 <jose_lausuch> I would add only ODL
08:25:55 <jose_lausuch> well, I will add the plugin
08:26:07 <jose_lausuch> and I'll assign a deployment for everyone
08:26:11 <jose_lausuch> and then you can do whatever you want
08:26:22 <jose_lausuch> you can re-deploy many times in the gui
08:26:31 <jose_lausuch> you can activate odl and so on
08:26:35 <jose_lausuch> would that be ok?
08:26:44 <SerenaFeng> I think we can deploy them with different scenarios
08:27:00 <HelenYao> once the fuel mode works, we can set up a compass by following the same pattern
08:27:04 <SerenaFeng> so everyone can access
08:27:17 <jose_lausuch> but what scenarios?
08:27:20 <SerenaFeng> in this way every one will have resources to test
08:27:34 <SerenaFeng> scenarios functest supported
08:27:36 <jose_lausuch> my idea is that everyone gets a pod to play with
08:27:40 <jose_lausuch> and no collisions
08:27:50 <SerenaFeng> sdn /nosdn/ odl_l2/odl_l3 ...etc
08:28:14 <jose_lausuch> I can prepare those 3 as well
08:28:14 <jose_lausuch> ok
08:28:30 <HelenYao> are you referring to deploying one scenario at a time on the pod and recreating it per everyone's interest?
08:28:31 <jose_lausuch> let's do it like that then
08:28:48 <SerenaFeng> if all the pod are the same scenarios, still there will be some tests cannot be tested
08:29:18 <jose_lausuch> #action create 3 scenarios: nosdn, odl_l2, odl_l3
08:29:51 <jose_lausuch> ok?
08:29:56 <SerenaFeng> ok
08:29:58 <HelenYao> awesome
08:30:01 <juhak> ok
08:30:25 <LindaWang> ok
08:30:27 <HelenYao> if there is anything that we can help, pls let us know
08:30:28 <jose_lausuch> for Apex I can't do that, at least I think we can't have more than 1 at the same time..
08:30:33 <jose_lausuch> HelenYao: ok, thanks
08:30:50 <jose_lausuch> what scenario do you want in apex?
08:30:53 <jose_lausuch> nosdn as now?
08:31:47 <HelenYao> we can wait to see if the nosdn is working. if it works, we can set up more
08:31:59 <jose_lausuch> ok
08:32:02 <jose_lausuch> good
08:32:12 <jose_lausuch> #topic Troubleshooting status (short update)
08:32:29 <jose_lausuch> #info morgan_orange and HelenYao already sent some information by email
08:33:00 <jose_lausuch> #info JOID is back and running Functest https://build.opnfv.org/ci/view/functest/job/functest-joid-baremetal-daily-master/
08:33:24 <jose_lausuch> #info some ugly warnings in the healthcheck
08:33:46 <jose_lausuch> #info   There was a JIRA about refactoring the healthcheck, but I think we should close it and replace it by SNAPS asap (when it works on all the installers)
08:34:04 <morgan_orange> yep +1, wait for SNAPS to be OK on all installers then remove bash
08:34:36 <HelenYao> +1
08:34:37 <jose_lausuch> there is a small issue
08:34:45 <jose_lausuch> healthcheck creates a flavor that vping uses
08:34:55 <jose_lausuch> if you run vping without healthcheck it fails due to the missing flavor
08:35:03 <jose_lausuch> maybe we should add the creation of that flavor
08:35:04 <morgan_orange> #link http://testresults.opnfv.org/reporting/functest/release/master/index-status-apex.html
08:35:05 <HelenYao> yeah, that is a problem
08:35:18 <SerenaFeng> agreed
08:35:20 <HelenYao> i think we need to create the flavor during the prepare_env
08:35:26 <morgan_orange> +1
08:35:29 <jose_lausuch> morgan_orange: no weather icons?
08:35:30 <SerenaFeng> vping shouldn't rely on healthcheck
08:35:41 <jose_lausuch> HelenYao: I would like to avoid that
08:35:51 <SerenaFeng> they are independent testcases
08:35:55 <jose_lausuch> each test case should create whatever it needs
08:36:01 <SerenaFeng> agree
08:36:02 <HelenYao> we seem to use m1.tiny, m1.large in our code
08:36:11 <SerenaFeng> and should clean the resources it created
08:36:18 <jose_lausuch> SerenaFeng: yes
08:36:22 <morgan_orange> yes each test shall be autonomous
08:36:27 <jose_lausuch> we also need to add flavor clean in openstack_clean
08:36:34 <SerenaFeng> for now it is okay, because all the testcases run in serial
08:36:36 <HelenYao> jose_lausuch: in that way, we need to add pre_env and clean in testcasebase
08:36:49 <SerenaFeng> it will be a problem after we run them in parallel
08:36:51 <jose_lausuch> pre_env?
08:36:58 <jose_lausuch> yes
08:37:06 <morgan_orange> in the vnfBase, there is a prepare method to create user/tenant/...
08:37:08 <jose_lausuch> using SNAPS we don't need to do our cleanup
08:37:22 <HelenYao> yeah, the testcasebase should provide abstract method of pre_env and clean_env
08:37:39 <SerenaFeng> create tenant/user/flavor should be managed by testcase itself
08:37:52 <SerenaFeng> each testcase need different resources
08:38:04 <HelenYao> SerenaFeng: agree
08:38:04 <SerenaFeng> it is difficult for testcasebase to manage them
08:38:13 <morgan_orange> for VNF onboarding we systematically need that
08:38:45 <morgan_orange> we create a tenant/user with the name of the vnf and we clean everything at the end
08:38:54 <HelenYao> an abstract method in testcasebase and the child class can implement is based on its real need
08:39:19 <SerenaFeng> I think we need to sync some work in featurebase to testcasebase
08:39:24 <jose_lausuch> in that sense, that is very similar to HEAT
08:39:49 <HelenYao> jose_lausuch: what is HEAT? the openstack service?
08:39:54 <jose_lausuch> oh
08:40:05 <SerenaFeng> like prepare/post/log_results....etc
08:40:09 <jose_lausuch> HelenYao: HEAT is cool, it's a short way to create all the resources you need
08:40:16 <jose_lausuch> HelenYao: it has its own cleanup
08:40:33 <HelenYao> jose_lausuch: could u provide a link?
08:40:38 <jose_lausuch> https://wiki.openstack.org/wiki/Heat
08:40:43 <morgan_orange> it will probably be a bit short to rethink that for Danube
08:40:52 <morgan_orange> I think that the priority is now to make the test cases run
08:40:53 <jose_lausuch> http://docs.openstack.org/developer/heat/template_guide/hot_guide.html
08:40:59 <morgan_orange> and see such evolution for E
08:41:02 <jose_lausuch> morgan_orange: +1
08:41:11 <SerenaFeng> +1
08:41:27 <HelenYao> +1
08:41:29 <jose_lausuch> ok
08:41:33 <jose_lausuch> let's move on
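[Editor's note: the prepare/clean idea discussed above — each test case creating the resources it needs and cleaning them afterwards via hooks on the test case base class — could be sketched as below. All names here (TestCaseBase, prepare_env, clean_env, execute, VPingLike) are illustrative assumptions, not the actual Functest API.]

```python
import abc


class TestCaseBase(abc.ABC):
    """Hypothetical base class with the prepare/clean hooks discussed above."""

    def __init__(self, name):
        self.name = name
        self.created_resources = []  # cleanup callbacks registered by prepare_env

    def prepare_env(self):
        """Create the resources this test case needs (flavor, user, tenant...)."""
        # default: nothing to prepare

    def clean_env(self):
        """Delete everything prepare_env created, in reverse creation order."""
        for delete in reversed(self.created_resources):
            delete()
        self.created_resources = []

    @abc.abstractmethod
    def run(self):
        """Execute the test and return 'PASS' or 'FAIL'."""

    def execute(self):
        self.prepare_env()
        try:
            return self.run()
        finally:
            self.clean_env()  # always clean, even if the test fails


class VPingLike(TestCaseBase):
    """Toy test case: registers a fake flavor and its cleanup callback."""

    def prepare_env(self):
        self.flavor = "m1.custom"  # stand-in for a real flavor creation call
        self.created_resources.append(lambda: setattr(self, "flavor", None))

    def run(self):
        return "PASS" if self.flavor else "FAIL"
```

This way vping would no longer depend on healthcheck having created the flavor first, and parallel runs would not collide on shared resources.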
08:41:36 <morgan_orange> for joid/SNAPS the error is strange  return os_credentials.OSCreds(username=config['OS_USERNAME'],
08:41:36 <morgan_orange> KeyError: 'OS_USERNAME'
08:41:44 <morgan_orange> this env variable is in theory created
08:42:02 <morgan_orange> but when SNAPS fails, it exits and we do not have the functest.log to check
08:42:05 <jose_lausuch> ya, it should
08:42:33 <jose_lausuch> shall we ask Steven to modify that behaviour?
08:42:41 <morgan_orange> we can discuss with him
08:42:49 <jose_lausuch> can you take the action?
08:42:55 <morgan_orange> but I think it makes sense to get a FAIL status without exiting the CI
08:43:01 <HelenYao> OS_USERNAME is required for every auth
08:43:02 <jose_lausuch> me too
08:43:10 <morgan_orange> #action morgan_orange contact steve to discuss exit conditions in SNAPS
08:43:21 <jose_lausuch> thanks
08:43:26 <HelenYao> how come OS_USERNAME is not provided
08:43:28 <morgan_orange> HelenYao: yes that is why it is surprising... Tempest suite works well..
08:43:39 <morgan_orange> without OS_USERNAME, nothing will work
08:43:40 <HelenYao> hmm, interesting
08:43:44 <morgan_orange> but ok let's move on
08:43:46 <jose_lausuch> maybe it is provided, but not part of config[]
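[Editor's note: the KeyError above could be turned into the "FAIL without exiting CI" behaviour discussed. A minimal sketch follows — the helper name load_os_config and the required-keys list are assumptions for illustration, not the SNAPS API.]

```python
import os

REQUIRED_KEYS = ("OS_USERNAME", "OS_PASSWORD", "OS_AUTH_URL")


def load_os_config(env=os.environ):
    """Collect the OpenStack credential fields, reporting what is missing.

    Instead of letting a bare KeyError abort the whole run, return a
    (config, missing) pair so the caller can log a FAIL status with a
    clear message and keep CI going.
    """
    missing = [key for key in REQUIRED_KEYS if key not in env]
    config = {key: env[key] for key in REQUIRED_KEYS if key in env}
    return config, missing


# example: a fake environment where OS_USERNAME was never exported
config, missing = load_os_config({"OS_PASSWORD": "secret",
                                  "OS_AUTH_URL": "http://keystone:5000/v3"})
```

With this, the joid/SNAPS case would produce "missing: OS_USERNAME" in functest.log rather than an unexplained exit.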
08:43:48 <jose_lausuch> #topic Dockerfile for ARM
08:44:11 <jose_lausuch> #info ARMBAND team needs a different Dockerfile to run Functest on ARM based PODs
08:44:27 <jose_lausuch> #info the differences are not big, but still it's a different Dockerfile
08:44:34 <jose_lausuch> the question is
08:44:45 <jose_lausuch> they will provide a build server to build the image
08:45:00 <jose_lausuch> 1) Should we update a new image with a new tag? latest_arm or something?
08:45:10 <SerenaFeng> what is the difference?
08:45:13 <jose_lausuch> 2) How do we do the automated build in jenkins?
08:45:27 <jose_lausuch> some x64 libraries we install need to be different
08:45:41 <jose_lausuch> for example
08:45:47 <HelenYao> we can create a new job for arm on CI
08:45:58 <jose_lausuch> yes, thats clear
08:45:59 <jose_lausuch> but
08:46:15 <jose_lausuch> is it safe to have 2 Dockerfiles in the /docker directory?
08:46:16 <jose_lausuch> like
08:46:18 <jose_lausuch> Dockerfile
08:46:20 <jose_lausuch> Dockerfile_arm
08:46:28 <HelenYao> we can rename the dockerfile
08:46:31 <SerenaFeng> I don't think two different docker images are a good idea
08:46:57 <jose_lausuch> ok, this is where your input is very appreciated
08:46:58 <jose_lausuch> ideas? :)
08:47:16 <HelenYao> i think having 2 dockerfiles is safe
08:47:26 <SerenaFeng> 2 dockerfiles is not a problem
08:47:38 <SerenaFeng> docker build supports specifying the dockerfile (-f)
08:48:03 <HelenYao> we can test how much is the package difference between arm and x86
08:48:05 <jose_lausuch> another difference is : instead of    "FROM ubuntu:14.04"  it's    "FROM aarch64/ubuntu:14.04"
08:48:31 <SerenaFeng> okay, that definitely needs two dockerfiles :)
08:48:40 <HelenYao> if the original base image is totally different, I will vote for two dockerfile
08:49:00 <morgan_orange> is there any other option?
08:49:04 <jose_lausuch> https://hastebin.com/etiyoxanec.diff
08:49:22 <jose_lausuch> ok
08:49:37 <ollivier> jose_lausuch: cannot get your diff here (proxy)
08:49:39 <jose_lausuch> #info We need 2 dockerfiles, our default one and another one for ARM
08:49:50 <morgan_orange> but the question will be the same for other projects
08:50:00 <morgan_orange> e.g. yardstick, qtip
08:50:07 <morgan_orange> the arm support will raise the same question
08:50:27 <jose_lausuch> ollivier: http://pastebin.com/raw/NgheSrCv
08:50:41 <jose_lausuch> arm might focus first on functest
08:50:43 <jose_lausuch> then, we will see
08:50:45 <ollivier> we could also consider switching to alpine
08:51:06 <jose_lausuch> you already had that good idea some time ago
08:51:06 <HelenYao> i am a bit worried about the package support on alpine
08:51:15 <jose_lausuch> but Helen did some investigation
08:51:15 <ollivier> jose_lausuch: pastebin is blocked too. I will switch to my public access :)
08:51:27 <jose_lausuch> ollivier: too many restrictions in your proxy :)
08:51:42 <morgan_orange> alpine is like considering heat for resource preparation... a good topic for E now :)
08:51:48 <jose_lausuch> yes
08:51:52 <HelenYao> +1
08:51:53 <jose_lausuch> too late for D
08:52:01 <ollivier> jose_lausuch: that's why I am always connected to my public net.
08:52:22 <jose_lausuch> so
08:52:27 <jose_lausuch> how do we trigger the docker build?
08:52:35 <jose_lausuch> do we trigger both builds?
08:52:44 <HelenYao> i think so
08:52:54 <jose_lausuch> the default on any opnfv build server, and the other one at the same time on the ARM build server
08:52:56 <HelenYao> just the same way as we do now for x86
08:53:12 <jose_lausuch> and pushing the 2 images
08:53:15 <jose_lausuch> what tag?
08:53:20 <jose_lausuch> latest_arm ?
08:53:45 <ollivier> jose_lausuch: I would use a dedicated name opnfv/functest_arm
08:53:50 <SerenaFeng> how about functest_arm?
08:53:54 <jose_lausuch> aha
08:53:54 <morgan_orange> +1
08:53:55 <jose_lausuch> ok
08:54:01 <jose_lausuch> so, a new docker repo
08:54:24 <jose_lausuch> but what happens when yardstick also needs to support arm
08:54:25 <HelenYao> I am thinking the same, is it okay to have a new repo?
08:54:28 <jose_lausuch> another repo for that?
08:54:29 <ollivier> yes. and you should keep the same tags
08:54:29 <jose_lausuch> mmmm
08:54:34 <jose_lausuch> not very scalable
08:54:52 <morgan_orange> but clearer
08:55:00 <SerenaFeng> yes
08:55:03 <jose_lausuch> opnfv/yardstick_arm   opnfv/storperf_arm  ?
08:55:06 <SerenaFeng> tag will confuse user
08:55:08 <HelenYao> how about put the arm in one repo, for functest and yardstick?
08:55:11 <ollivier> I don't understand the issue regarding the scalability
08:55:31 <HelenYao> if that is the case, it would be opnfv/arm:functest
08:55:40 <HelenYao> i am not sure about this
08:55:50 <jose_lausuch> we are duplicating the repos if arm wants to support all the test projects
08:56:17 <HelenYao> if putting as opnfv/arm, it will be ok for all test projects
08:56:30 <SerenaFeng> but more confused
08:56:32 <jose_lausuch> yes
08:56:43 <HelenYao> we have to balance
08:56:44 <jose_lausuch> how do we now say colorado arm functest? :D
08:56:52 <SerenaFeng> tag is used to manage version, not identify project
08:56:54 <jose_lausuch> opnfv/arm:functest_danube ?
08:56:57 <jose_lausuch> not clear
08:57:02 <jose_lausuch> ya
08:57:08 <jose_lausuch> I'd prefer different repo
08:57:12 <jose_lausuch> or same repo with different tag
08:57:15 <jose_lausuch> as you wish
08:57:27 <morgan_orange> opnfv/functest_arm:latest
08:57:40 <ollivier> jose_lausuch: I think https://jira.opnfv.org/browse/FUNCTEST-621 can be closed.
08:57:44 <jose_lausuch> #info use new repo for arm functest builds opnfv/functest_arm:latest
08:57:45 <morgan_orange> and yardstick will do opnfv/yardstick:danube
08:58:19 <jose_lausuch> ollivier: done
08:58:23 <morgan_orange> and we may reconsider this in E but I think it will be clearer to have a dedicated repo <project>_arm
08:58:39 <morgan_orange> 2 minutes left
08:58:41 <jose_lausuch> yes
08:58:43 <jose_lausuch> aob?
08:58:47 <jose_lausuch> Status on feature projects?
08:58:50 <jose_lausuch> you added this?
08:58:50 <jose_lausuch> ok
08:58:52 <jose_lausuch> #topic Status on feature projects
08:58:56 <morgan_orange> yep
08:59:05 <jose_lausuch> quickly
08:59:11 <HelenYao> ollivier, juhak, SerenaFeng: is it ok to be notified if the image build fails?
08:59:19 <morgan_orange> Do we have a good view on the feature project we may have to deal with for Danube
08:59:40 <jose_lausuch> morgan_orange: there are not many more than in Colorado..
08:59:54 <jose_lausuch> same features basically, with improvements/new tests
08:59:57 <morgan_orange> if so it could make sense to create a Jira and distribute them among us to have a counterpart to a feature project
09:00:13 <morgan_orange> + the mano related projects
09:00:23 <jose_lausuch> ya
09:00:40 <morgan_orange> some projects may even not be there for Danube.1.0
09:00:40 <jose_lausuch> https://jira.opnfv.org/browse/FUNCTEST-353
09:00:42 <morgan_orange> e.g. moon
09:00:51 <SerenaFeng> HelenYao, sure
09:00:59 <SerenaFeng> I will +2 to it
09:01:04 <HelenYao> SerenaFeng: thx
09:01:08 <morgan_orange> for promise for instance, still not clear
09:01:18 <jose_lausuch> ollivier, juhak, SerenaFeng: is it ok to be notified if the image build fails?
09:01:32 <juhak> fine for me
09:01:43 <HelenYao> juhak: great
09:01:58 <jose_lausuch> https://gerrit.opnfv.org/gerrit/#/c/27271/
09:02:06 <morgan_orange> #info most of the feature projects for Danube are known (only additional tests + mano related projects)
09:02:25 <jose_lausuch> promise not clear?
09:03:08 <morgan_orange> I assume for the moment we keep the same tests, but Gerald mentioned they were refactoring
09:03:17 <jose_lausuch> ya right
09:03:25 <morgan_orange> I was just wondering when we could really test the target
09:03:34 <jose_lausuch> MS5 is this friday
09:03:40 <morgan_orange> shall we spend time to refactor (according to our abstraction) or wait for the refactoring
09:03:41 <jose_lausuch> scenarios should be ready for testing :D
09:03:50 <jose_lausuch> refactor what?
09:04:08 <morgan_orange> for promise we do not use the abstraction class now
09:04:23 <morgan_orange> shall we do it with old version of promise or wait new version of promise then do it
09:04:26 <jose_lausuch> well, I think the goal is to get rid of exec_tests.sh
09:04:33 <SerenaFeng> there are some projects still use the old framework
09:04:38 <jose_lausuch> so +1 for refactor the test cases that are old
09:04:53 <morgan_orange> even if they will maybe not exist anymore...
09:05:13 <jose_lausuch> we can maybe wait for promise
09:05:20 <morgan_orange> but OK, let's see what is remaining and do our best to get rid of them
09:05:24 <morgan_orange> a topic for next week?
09:05:27 <jose_lausuch> maybe
09:05:35 <jose_lausuch> #action jose_lausuch add topic on feature tests refactor
09:05:41 <jose_lausuch> and I think we are done
09:05:43 <morgan_orange> thanks to our virtual pod it is easier to test :)
09:05:48 <jose_lausuch> :)
09:05:49 <morgan_orange> #topic AoB
09:06:02 <jose_lausuch> it was a challenge having 2 different installers at the same time
09:06:08 <jose_lausuch> but it's good feedback for the community
09:06:16 <jose_lausuch> next challenge: 3 installers
09:06:35 <SerenaFeng> since I will be absent from tomorrow
09:06:36 <jose_lausuch> anyone aob?
09:06:37 <morgan_orange> towards OPNFVaaS, which was one of the infra priority
09:06:53 <rohitsakala> Hi all, giving an update of my internship work.
09:07:00 <jose_lausuch> rohitsakala: go ahead
09:07:05 <rohitsakala> I had completed these tasks as of now.
09:07:11 <SerenaFeng> jose_lausuch and morgan_orange, will you please mentor rohitsakala during the period?
09:07:12 <rohitsakala> 1. Create jenkins job for unit tests and code coverage.
09:07:18 <morgan_orange> SerenaFeng: ok
09:07:23 <rohitsakala> 2. Create jenkins job for automatic backup of mongodb.
09:07:31 <jose_lausuch> SerenaFeng: till when are you off?
09:07:40 <rohitsakala> 3. Create jenkins job for automatic update docker image in repository, docker deploy in testresults, generate swagger api-docs and push into artifacts.
09:07:53 <SerenaFeng> I guess from tomorrow to 4th Feb.
09:07:58 <rohitsakala> Docker deploy builder patch need to be updated.
09:08:09 <jose_lausuch> #info Serena is OoO until 4th Feb.
09:08:24 <jose_lausuch> rohitsakala: ok, good job with that
09:08:47 <SerenaFeng> and the testapi-docs is ready now
09:08:52 <rohitsakala> SerenaFeng: , please add if I am missing anything
09:08:55 <morgan_orange> For our Chinese contributors, enjoy you break!
09:09:17 <LindaWang> morgan_orange:  Thank you
09:09:18 <jose_lausuch> yes, have a good time
09:09:23 <SerenaFeng> will you please share the link?
09:09:26 <HelenYao> thx. I will be working until this Friday and be back on Feb.3
09:09:28 <rohitsakala> Link :- http://artifacts.opnfv.org/releng/docs/testapi.html
09:09:45 <morgan_orange> rohitsakala: great job!
09:09:59 <SerenaFeng> for now it is not very good, due to some fields being absent
09:10:02 <jose_lausuch> #link http://artifacts.opnfv.org/releng/docs/testapi.html
09:10:12 <SerenaFeng> I will add the absent fields when I am back
09:10:13 <morgan_orange> SerenaFeng: yes but the full chain is in place...
09:10:14 <jose_lausuch> it looks great
09:10:20 <HelenYao> rohitsakala: well done~
09:10:30 <morgan_orange> changes are minor now
09:10:44 <SerenaFeng> the main issue is jenkins slave
09:10:56 <SerenaFeng> gsutils configuration
09:10:56 <morgan_orange> I can also share the freemind map initiated by Kumar on VNF catalog internship
09:11:04 <jose_lausuch> SerenaFeng: did Aric help you with that?
09:11:15 <jose_lausuch> morgan_orange: what about next week?
09:11:17 <rohitsakala> jose_lausuch: SerenaFeng Aric sent me a mail
09:11:19 <jose_lausuch> we are 10 minutes ahead :)
09:11:26 <morgan_orange> just put the link
09:11:30 <jose_lausuch> ok
09:11:57 <rohitsakala> SerenaFeng: gsutils is configured in testresults.
09:11:57 <SerenaFeng> ok, I think we can end the meeting, and discuss offline
09:12:06 <SerenaFeng> okey, great
09:12:10 <jose_lausuch> ok
09:12:11 <HelenYao> SerenaFeng: could u put the gsutils config somewhere? there are some pods having trouble with gsutils and ur info will be helpful
09:12:18 <jose_lausuch> HelenYao: nope
09:12:22 <SerenaFeng> nope
09:12:25 <jose_lausuch> HelenYao: this is sensitive information :)
09:12:34 <jose_lausuch> shouldnt be available for the whole community
09:12:44 <morgan_orange> #link https://framindmap.org/c/maps/295778
09:12:44 <HelenYao> how can i know it
09:12:45 <SerenaFeng> only Mr big know the configuration
09:12:45 <jose_lausuch> you need to create a helpdesk ticket for it
09:12:59 <HelenYao> ok
09:13:07 <morgan_orange> we did not speak on bitergia...
09:13:08 <jose_lausuch> only Aric/Trevor/Fatih can do that
09:13:19 <jose_lausuch> we will do it on Thursday
09:13:24 <morgan_orange> yes :)
09:13:36 <jose_lausuch> ok
09:13:37 <morgan_orange> I think it is in line with what we discussed in barcelona, so no issue
09:13:38 <SerenaFeng> morgan_orange is it a french website?
09:13:39 <HelenYao> jose_lausuch: see, thx for the heads-up
09:13:42 <jose_lausuch> thank you all
09:13:44 <jose_lausuch> #endmeeting