14:02:26 <fdegir> #startmeeting Cross Community CI
14:02:26 <collabot> Meeting started Wed Jan 11 14:02:26 2017 UTC.  The chair is fdegir. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:26 <collabot> Useful Commands: #action #agreed #help #info #idea #link #topic.
14:02:26 <collabot> The meeting name has been set to 'cross_community_ci'
14:02:37 <fdegir> #topic Rollcall
14:03:06 <fdegir> please type in your name
14:03:10 <fdegir> #info Fatih Degirmenci
14:03:14 <hwoarang> #info Markos Chandras
14:03:20 <s_berg> #info Stefan Berg
14:03:32 <hw_wutianwei> #info Tianwei Wu
14:04:01 <fdegir> I hope yolanda can join as well
14:04:12 <yolanda> #info Yolanda Robla
14:04:13 <fdegir> first of all, welcome everyone and happy new year to you all
14:04:14 <yolanda> sorry
14:04:22 <hwoarang> happy new year
14:04:33 <hw_wutianwei> happy new year
14:04:40 <fdegir> #topic General Updates
14:05:08 <Julien-zte> #info Julien
14:05:23 <fdegir> as I mentioned
14:05:24 <fdegir> #info We have renamed this activity and expanded the scope to work on all Cross Community CI Activities together
14:05:29 <Julien-zte> happy new year
14:06:08 <fdegir> #info This means that OpenStack, ODL and future Cross Community CI activities will be driven under this as we can share stuff between these activities
14:06:24 <hwoarang> sounds good
14:06:41 <fdegir> and finally, I want to welcome s_berg to the team!
14:06:42 <Julien-zte> I have found several jobs related to this that have been created in releng
14:06:57 * s_berg waves
14:07:08 <fdegir> moving on to the next topic
14:07:10 <s_berg> Thanks Fatih.
14:07:20 <fdegir> #topic OpenStack Cross Community CI
14:07:44 <fdegir> #info Bifrost verification jobs were disabled during Intel lab migration
14:07:54 <fdegir> #info They've been enabled back yesterday
14:08:18 <hw_wutianwei> I have deployed OpenStack using openstack-ansible, but it is unstable
14:08:19 <fdegir> hwoarang: do you want to give info about the bifrost log uploads?
14:08:23 <Julien-zte> good news
14:08:36 <hwoarang> so yeah i have posted a patchset to upload logs to artifacts.opnfv.org
14:08:48 <hwoarang> this was requested during a review in the upstream openstack gerrit
14:08:55 <hw_wutianwei> you need to try several times
14:08:59 <hwoarang> so please go and review it if you have a couple of minutes to spare
14:09:02 <hw_wutianwei> I also submitted a patch in gerrit. People can use this patch to test; it still needs improvement.
14:09:02 <fdegir> #info Patch to upload bifrost logs to artifacts.opnfv.org is up for review
14:09:17 <fdegir> #link https://gerrit.opnfv.org/gerrit/#/c/26831/
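For readers who have not touched the log publishing before, the step under review boils down to something like the sketch below. This is purely illustrative: the destination path and variable names are assumptions, not what the patch actually does, so check the patch itself for the real details.

```bash
#!/bin/bash
# Hypothetical sketch of publishing bifrost job logs to artifacts.opnfv.org.
# The gs:// path and variables are invented for illustration; see the patch
# under review (https://gerrit.opnfv.org/gerrit/#/c/26831/) for the real code.
set -o errexit

LOG_DIR="$WORKSPACE/logs"
GS_PATH="gs://artifacts.opnfv.org/cross-community-ci/bifrost/$JOB_NAME/$BUILD_NUMBER"

# artifacts.opnfv.org is backed by Google Cloud Storage, so gsutil can copy
# the collected logs there, after which they are browsable over HTTP.
gsutil -m cp -r "$LOG_DIR" "$GS_PATH/"
echo "Logs available at http://artifacts.opnfv.org/${GS_PATH#gs://artifacts.opnfv.org/}"
```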
14:09:51 <fdegir> #info Once we fix this part, I'll ping Bifrost PTL and try to get us voting rights for bifrost on OpenStack Gerrit
14:10:11 <fdegir> they already discussed this in the ironic meeting but there were some questions from the rest of the OpenStack people
14:10:13 <jmorgan1> #info Jack Morgan
14:10:18 <hwoarang> sounds good
14:10:32 <fdegir> anyone wants to give an update about openstack-ansible?
14:10:40 <fdegir> perhaps hw_wutianwei ?
14:11:01 <hwoarang> personally, i have started looking at it, but suse support is somewhat missing so i guess i will have to do the suse port first :)
14:11:10 <hw_wutianwei> Do we have a plan for when we can deploy OpenStack?
14:11:21 <yolanda> i started to test yesterday, but i had problems because my host was centos
14:11:27 <yolanda> so I had to adjust package installation and networking
14:11:33 <yolanda> still on the process of testing it
14:11:41 <fdegir> #info hwoarang started looking into openstack-ansible, but suse support is somewhat missing so he will have to do the suse port first
14:11:41 <hw_wutianwei> I have just tested ubuntu
14:11:43 <Julien-zte> hi fdegir, we are trying to deploy vms with bifrost for openstack-ansible
14:12:18 <Julien-zte> does the builder image only support 14.04? and openstack-ansible requires 16.04 for ubuntu?
14:12:21 <fdegir> hw_wutianwei: I'll ask you the same question as you made some progress I suppose
14:12:32 <Julien-zte> and the bifrost only deploy one nics for the vms.
14:12:53 <Julien-zte> shall we improve these settings with multiple nics?
14:12:55 <yolanda> Julien-zte, so bifrost works fine with xenial now
14:12:57 <s_berg> I've tried successfully (so far) with Ubuntu 16.04 and the Newton build, looked to be a supported combo.
14:13:18 <fdegir> s_berg: I suppose it is virtual?
14:13:28 <hw_wutianwei> bifrost supports 16.04
14:13:34 <Julien-zte> ok, good news
14:13:50 <Julien-zte> has anyone tried multiple nics with bifrost?
14:13:51 <s_berg> fdegir: Yes, virtual all-in-one-host.
14:14:14 <s_berg> Would love to try it out in a distributed setting (will look at doing that virtually as well to try it out)
14:14:18 <yolanda> Julien-zte, no support yet for multiple nics, but TheJulia (bifrost PTL) suggested we could create a raw network_info.json and we could pass it
14:14:25 <fdegir> #info s_berg tried openstack-ansible virtual all-in-one-host with Ubuntu 16.04 and the Newton build
14:14:44 <yolanda> as an alternative, hw_wutianwei was configuring all the networking using some ansible playbooks after deploying bifrost...
14:15:06 <Julien-zte> that's what we want to try.
14:15:24 <Julien-zte> hw_wutianwei, has any progress been achieved?
14:15:30 <hw_wutianwei> it seemed ok when I tried.
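To make TheJulia's suggestion above concrete: a "raw network_info.json" would presumably follow the standard OpenStack network metadata format that glean/configdrive consumes. A minimal hand-written two-NIC example might look like the sketch below; all interface names, MACs, and addresses are invented, and whether bifrost accepts the file exactly this way is untested.

```bash
# Hypothetical two-NIC network metadata in the configdrive network_data.json
# style understood by glean. Every value below is made up for illustration.
cat > network_info.json <<'EOF'
{
  "links": [
    {"id": "eth0", "type": "phy", "ethernet_mac_address": "52:54:00:aa:bb:01"},
    {"id": "eth1", "type": "phy", "ethernet_mac_address": "52:54:00:aa:bb:02"}
  ],
  "networks": [
    {"id": "admin", "link": "eth0", "type": "ipv4",
     "ip_address": "192.168.122.10", "netmask": "255.255.255.0"},
    {"id": "data", "link": "eth1", "type": "ipv4",
     "ip_address": "10.20.0.10", "netmask": "255.255.255.0"}
  ],
  "services": [
    {"type": "dns", "address": "8.8.8.8"}
  ]
}
EOF
```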
14:16:04 <fdegir> hw_wutianwei: I suppose the patch you sent to releng is for running openstack-ansible?
14:16:06 <fdegir> https://gerrit.opnfv.org/gerrit/#/c/26865/
14:16:18 <hw_wutianwei> yep
14:16:21 <fdegir> (haven't had time to look at the details)
14:16:30 <fdegir> great hw_wutianwei
14:16:39 <hw_wutianwei> and configuring all the networking of vms
14:16:55 <Julien-zte> that's great
14:16:57 <fdegir> I hope to give it a try and send comments if I find anything
14:17:08 <Julien-zte> I will spend more time on the bifrost code in releng
14:17:15 <fdegir> #info hw_wutianwei sent a patch to releng for openstack-ansible. Please try and comment back
14:17:18 <fdegir> #link https://gerrit.opnfv.org/gerrit/#/c/26865/
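For reference, the upstream openstack-ansible all-in-one bootstrap that s_berg's test and this patch build on goes roughly as follows. This is a sketch based on the OSA AIO quickstart; the branch and the exact sequence in the releng patch may well differ.

```bash
# Rough outline of an openstack-ansible AIO deployment (Newton-era layout).
git clone https://git.openstack.org/openstack/openstack-ansible /opt/openstack-ansible
cd /opt/openstack-ansible
git checkout stable/newton          # or whichever branch is being tested

scripts/bootstrap-ansible.sh        # install ansible and its dependencies
scripts/bootstrap-aio.sh            # prepare the host for an all-in-one build
scripts/run-playbooks.sh            # run the actual deployment playbooks
```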
14:17:20 <Julien-zte> it will save us time
14:17:53 <fdegir> I think we are making some progress with openstack-ansible
14:18:03 <fdegir> which is really good
14:18:16 <yolanda> so far the comment i had for that review, same as for the other one, is that it shall be isolated in a different folder
14:18:20 <yolanda> and don't break our current jobs
14:18:50 <yolanda> also support for centos/suse shall be needed, as we test on these hosts as well. Even if the vms are xenial, the deployment hosts can be other distros
14:19:03 <hwoarang> ideally it should be a separate job for now
14:19:08 <Julien-zte> will the network settings patch be upstreamed in openstack?
14:19:20 <fdegir> yes, our aim was, is, and will be supporting the 3 OSes: ubuntu, centos, suse
14:19:25 <fdegir> like we are doing with bifrost
14:19:59 <fdegir> hw_wutianwei: I will put this as a comment but just to mention here as well
14:19:59 <Julien-zte> if a new job is required, add it in releng :)
14:20:15 <fdegir> hw_wutianwei: it would be good to have a simple readme accompanying the code, like bifrost has
14:20:31 <hw_wutianwei> ok, i will add this
14:20:38 <fdegir> hw_wutianwei: so we can understand what the osa script is doing
14:20:55 <Julien-zte> good
14:21:03 <fdegir> any more info/comment about openstack-ansible?
14:21:16 <hwoarang> i do
14:21:26 <fdegir> please go ahead hwoarang
14:21:31 <hw_wutianwei> I will also add a few comments to this script
14:21:35 <hwoarang> so what's the plan for puppet-infracloud? Yolanda, are we still looking at both of them?
14:21:52 <fdegir> yolanda: ^
14:22:22 <yolanda> infracloud is quite stopped at the moment
14:22:48 <yolanda> i guess first we need to evaluate if OSA is a valid option and we could go with that instead
14:23:05 <hwoarang> ok but i guess the final goal is to use one of them and not provide code for both
14:23:35 <yolanda> yes, i'd say so
14:23:38 <hwoarang> ok
14:23:40 <hwoarang> second question
14:23:51 <hwoarang> so we want to use OSA on top of bifrost
14:24:20 <hwoarang> how are we going to handle this in the CI? say upstream bifrost makes a change and our CI kicks in. Are we going to do a full bifrost+OSA cycle and provide feedback for that?
14:24:34 <hwoarang> if one of the two components is broken, then our feedback is somewhat useless
14:24:54 <yolanda> it will be same situation as bifrost + puppet-infracloud
14:25:03 <fdegir> what I think is
14:25:22 <fdegir> there are 2 types of CI activities here
14:25:39 <fdegir> first one is having bifrost + osa working for opnfv to deploy openstack from master
14:25:53 <fdegir> this will be done using the proven/working versions of bifrost + osa
14:26:07 <fdegir> second one is providing feedback to openstack bifrost + osa
14:26:12 <fdegir> and this could be done in a way that
14:26:31 <fdegir> if we get bifrost change, we should test whole chain with bifrost change + proven/working version of osa
14:26:33 <fdegir> and vice versa
14:26:38 <Julien-zte> yes, the openstack version gets verified earlier than what opnfv uses now.
14:26:42 <fdegir> so we only change one of them at any given time
14:27:02 <fdegir> we need to keep track of what we verified in order to lock versions both for opnfv purposes
14:27:09 <fdegir> and for upstream patch verification
14:27:14 <hwoarang> oh so we plan to provide feedback for the combined solution
14:27:15 <hwoarang> ok
14:27:18 <Julien-zte> how will we "change one of them at any given time"?
14:27:21 <fdegir> so we can always have working baseline for ourselves
14:27:38 <fdegir> and latest + working baseline for upstream verification
14:27:43 <fdegir> I hope this does make sense
14:27:59 <fdegir> perhaps I shouldn't guess too much but
14:28:13 <fdegir> if we imagine a job that does bifrost and osa in serial order
14:28:20 <fdegir> and if we are verifying bifrost only
14:28:24 <fdegir> osa one shouldn't vote
14:28:32 <fdegir> to upstream
14:28:59 <fdegir> but we will run it for our own purposes to see if that bifrost patch breaks our bifrost + osa working baseline if we intend to move to it
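One way to picture the "only change one of them at any given time" rule in job form is the sketch below. This is purely hypothetical; the KNOWN_GOOD_* placeholders stand for whatever mechanism ends up recording the last combination that passed our own baseline runs.

```bash
# Hypothetical pinning logic for the combined bifrost + OSA verification chain.
# GERRIT_PROJECT and GERRIT_REFSPEC come from the triggering event; the
# KNOWN_GOOD_* variables are assumed to hold the last proven-working versions.
case "$GERRIT_PROJECT" in
  openstack/bifrost)
    BIFROST_REF="$GERRIT_REFSPEC"          # the patch under test
    OSA_REF="$KNOWN_GOOD_OSA_SHA"          # pinned, proven version
    ;;
  openstack/openstack-ansible)
    BIFROST_REF="$KNOWN_GOOD_BIFROST_SHA"  # pinned, proven version
    OSA_REF="$GERRIT_REFSPEC"              # the patch under test
    ;;
  *)
    # daily/baseline run for OPNFV's own purposes: both components pinned
    BIFROST_REF="$KNOWN_GOOD_BIFROST_SHA"
    OSA_REF="$KNOWN_GOOD_OSA_SHA"
    ;;
esac
```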
14:29:31 <hwoarang> yes but we could also provide feedback for the individual components too. so the bifrost jobs provides feedback, then we test osa on top and we provide feedback for that too.
14:29:49 <fdegir> hw_wutianwei: but our verification is based on patchset-submitted
14:29:53 <hwoarang> anyway i was wondering because i guess it would be helpful especially for hw_wutianwei to structure the code accordingly
14:29:56 <fdegir> hwoarang: ^
14:30:18 <fdegir> we will find it out soon
14:30:21 <hwoarang> ok
14:30:30 <fdegir> once hw_wutianwei's patch gets in, we should directly create jobs for it
14:30:33 <fdegir> and make it run
14:31:05 <fdegir> and I still have the ap to bring up baremetal daily jobs for bifrost
14:31:11 <fdegir> which I need to look into
14:31:12 <Julien-zte> hi fdegir, if we want to work as a 3rd party ci for openstack, I think we will need to prepare multiple slaves. for now only one is working in opnfv?
14:31:26 <Julien-zte> too many updates in openstack
14:31:31 <fdegir> Julien-zte: we run verification for both for openstack and opnfv
14:31:42 <fdegir> Julien-zte: for bifrost
14:31:44 <fdegir> https://build.opnfv.org/ci/view/3rd%20Party%20CI/
14:31:56 <fdegir> Julien-zte: when we come to the point where we have things up and running
14:32:00 <Julien-zte> I mean multiple slaves for the same job
14:32:03 <fdegir> and if we experience resource shortages
14:32:13 <fdegir> I know who to talk to
14:32:16 <fdegir> jmorgan1: ^
14:32:27 <Julien-zte> ^_^
14:32:34 <fdegir> Julien-zte: we can improve that by using VMs rather than baremetal for verification
14:32:42 <fdegir> as hwoarang does for suse
14:32:54 <fdegir> as we need to verify it on all OSes we support
14:33:01 <hwoarang> a somewhat big-ish VM should be able to run the bifrost job just fine
14:33:05 <Julien-zte> yah
14:33:18 <fdegir> hwoarang: can you type in the specs of the vm you created for suse?
14:33:22 <Julien-zte> you mentioned the intel pods are back now, fdegir
14:33:27 <fdegir> I might try doing that
14:33:39 <fdegir> for ubuntu and then centos perhaps
14:34:03 <fdegir> Julien-zte: right - we use intel-pod4 which has been back since last week
14:34:10 <hwoarang> 16 vcpus, 120G disk, 16G RAM
14:34:40 <fdegir> #info a vm with 16 vcpus, 120G disk, 16G RAM should be able to run bifrost
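For anyone who wants to spin up such a VM on an existing OpenStack cloud, a flavor along these lines should match the specs above (the flavor name is arbitrary):

```bash
# Hypothetical flavor matching hwoarang's VM specs: 16 vCPUs, 16G RAM, 120G disk.
openstack flavor create bifrost-verify --vcpus 16 --ram 16384 --disk 120
```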
14:34:50 <fdegir> should we move on?
14:35:04 <hw_wutianwei> huawei can also provide a virtual pod now
14:35:15 <fdegir> hw_wutianwei: moarrr hw
14:35:34 <fdegir> thanks hw_wutianwei, I'll contact you offline
14:35:50 <hw_wutianwei> ok
14:35:57 <hwoarang> lol
14:36:09 <fdegir> #topic ODL Cross Community CI status
14:36:21 <Julien-zte> hi fdegir, please send me your gpg key and I will send you an openvpn account to test the time delay
14:36:24 <fdegir> #info We also started running patchset verification for ODL netvirt
14:37:11 <fdegir> #info How this works is that when a new patch is sent for ODL netvirt, we bring up OpenStack from snapshots in about 10 minutes, install netvirt, and run functest against it
14:37:37 <fdegir> #info Currently trozet is working on creating apex snapshots which will be incorporated into the job we created for this purpose
14:37:52 <fdegir> #link https://build.opnfv.org/ci/view/3rd%20Party%20CI/job/odl-netvirt-verify-virtual-master/
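In outline, the verify flow fdegir describes amounts to something like the sketch below. Every script name here is a placeholder for one stage of the real job in releng, not an actual file.

```bash
# Hypothetical outline of the odl-netvirt verify job; all paths are placeholders.
./restore-openstack-from-snapshots.sh    # bring up OpenStack in ~10 minutes
./install-netvirt.sh "$GERRIT_REFSPEC"   # install the netvirt build under test
./run-functest.sh                        # run functest against the deployment
```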
14:38:13 <fdegir> we will have more progress with this in the coming days/weeks
14:38:15 <jmorgan1> fdegir: yes?
14:38:28 <fdegir> jmorgan1: we put an ap on you
14:38:48 <fdegir> that was it about odl patch verification
14:38:57 <fdegir> last topic is
14:39:02 <fdegir> #topic Infra needs/updates
14:39:17 <fdegir> #info The Cross Community CI resources came back online
14:39:23 <fdegir> #info Thanks jmorgan1 for this
14:39:31 <fdegir> #info The page is updated with latest assignments
14:39:40 <fdegir> #link https://wiki.opnfv.org/display/pharos/Intel+Pod4
14:39:58 <fdegir> I suppose everyone has something to work on
14:40:05 <jmorgan1> fdegir: we don't have any more resources right now in our lab
14:40:07 <fdegir> if not, please contact me so I can try to find machines for you
14:40:17 <fdegir> jmorgan1: I was joking :)
14:40:42 <fdegir> #action fdegir to contact hw_wutianwei for vpods from Huawei
14:40:49 <fdegir> #topic AOB
14:40:56 <fdegir> anyone wants to bring up anything?
14:41:43 <fdegir> I suppose not
14:41:53 <fdegir> thank you all and talk to you in 2 weeks
14:41:57 <fdegir> #endmeeting