14:02:01 <hwoarang> #startmeeting Cross Community CI
14:02:01 <collabot> Meeting started Wed Dec 13 14:02:01 2017 UTC.  The chair is hwoarang. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:01 <collabot> Useful Commands: #action #agreed #help #info #idea #link #topic.
14:02:01 <collabot> The meeting name has been set to 'cross_community_ci'
14:02:09 <David_Orange> hwoarang: hello
14:02:15 <jmorgan1> hello
14:02:16 <hwoarang> #topic Rollcall
14:02:23 <David_Orange> #info David_Orange
14:02:24 <jmorgan1> #info Jack Morgan
14:02:26 <hwoarang> #info Markos Chandras
14:02:29 <mardim> #info Dimitrios Markou
14:02:30 <joekidder_> #info Joe Kidder
14:02:42 <hwoarang> #link Agenda is available on https://etherpad.opnfv.org/p/xci-meetings
14:02:42 <hw_wutianwei> #info Tianwei Wu
14:02:43 <mbuil> #info Manuel Buil
14:02:56 <hwoarang> lets wait 1 minute for people to add additional items
14:03:52 <hwoarang> ok lets do it
14:03:53 <mbuil> is fdegir travelling again?
14:04:01 <hwoarang> he has some family business
14:04:10 <mbuil> ah ok
14:04:12 <hwoarang> #topic Scenarios/Feature Status: os-odl-sfc
14:04:16 <hwoarang> mbuil: mardim stage is yours
14:04:27 <mardim> ok I will start
14:04:31 <mbuil> ladies first ;)
14:04:36 <mardim> hahhah :P
14:04:49 <mardim> so the only thing I want to add
14:04:58 <mardim> is that the OVS-NSH patch
14:05:00 <mardim> is merged
14:05:10 <mardim> so it will be available in queens
14:05:12 <hwoarang> in OSA you mean?
14:05:13 <hwoarang> ok
14:05:23 <hwoarang> #info ovs-nsh patch is merged in OSA and will be available in Queens
14:05:26 <mardim> https://review.openstack.org/#/c/517259/
14:05:27 <hwoarang> great
14:05:29 <David_Orange> mardim: great
14:05:34 <hwoarang> #link https://review.openstack.org/#/c/517259/
14:05:43 <mardim> thanks that's all from me
14:05:49 <hwoarang> thank you mardim
14:05:54 <mardim> but mbuil has a lot of stuff
14:05:58 <mbuil> #info on the upstream side, we are waiting for two related patches to be merged. First patch I think is ready and just needs somebody to merge it
14:06:13 <mbuil> #link https://review.openstack.org/#/c/525264/
14:06:13 <hwoarang> link so i can look at it after meeting?
14:06:13 <mbuil> #info second patch is waiting for a tempest fix to get a +1 from Zuul
14:06:16 <hwoarang> great
14:06:21 <mbuil> #link https://review.openstack.org/#/c/510909/
14:06:21 <mbuil> #info this is the tempest fix
14:06:22 <mbuil> #link https://review.openstack.org/#/c/527686/
14:06:32 <mbuil> #info SFC scenario was successfully merged into releng-xc
14:06:32 <mbuil> #link https://gerrit.opnfv.org/gerrit/#/c/43469/
14:06:32 <mbuil> #info it works with opensuse and ubuntu with master and pike. To launch it, here are the instructions:
14:06:32 <mbuil> #link https://wiki.opnfv.org/display/sfc/Deploy+OPNFV+SFC+scenarios
14:07:07 <mbuil> this time I wrote it before the meeting :P
14:07:20 <mardim> mbuil, we realized that :P
14:07:31 <hwoarang> thank you mbuil
14:07:33 <mbuil> I guess the first step is to get the deployment automated in a daily job
14:07:43 <mbuil> The second step would be to also trigger the functest testcases but for that we need to use a bigger VM because we would need ~70G of memory
14:08:00 <durschatz> #info Dave Urschatz
14:08:13 <hwoarang> #info need some daily CI jobs for deployment
14:08:20 <ttallgren> #info tapio tallgren
14:08:27 <hwoarang> #info need a way to get a bigger VM so functest can run
14:08:33 <epalper> #info Periyasamy Palanisamy
14:08:45 <jmorgan1> hwoarang: how big is bigger?
14:09:01 <hwoarang> we use 48G, mbuil says they need 70
14:09:16 <hwoarang> so we need at least 140G on the host (because we run max 2 VMs) per slave
14:09:22 <hwoarang> so we should be ok to raise that
14:09:23 <mbuil> hwoarang: I ran successfully on a 65G host but it was swapping 1G
14:09:35 <jmorgan1> this is space not ram, correct?
14:09:42 <hwoarang> no it's RAM ;p
14:10:05 <jmorgan1> do we know why so much RAM?
14:10:30 <jmorgan1> just curious
14:10:50 <hwoarang> so we have this big VM right?
14:10:51 <David_Orange> hwoarang: which test case is this for? are you trying vIMS?
14:10:59 <hwoarang> in it we create 3 VMs
14:11:15 <hwoarang> all 3 VMs have the same memory because we can't set memory per VM
14:11:28 <hwoarang> it's only a temp problem until we can set memory per VM
14:11:31 <hwoarang> David_Orange: for SFC
14:11:38 <jmorgan1> hwoarang: ah, ok
14:11:40 <David_Orange> hwoarang: ah ok
14:11:48 <mbuil> jmorgan: xci VMs need around 12G, ODL requires around 4G, that means we need 16G VMs. Besides, our testcases use 4 non-cirros VMs which require 2G of RAM each
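(A rough back-of-envelope of the sizing discussed above, using only the numbers quoted in the meeting; treating the gap up to the ~70G request as opnfv_host VM plus hypervisor overhead is an assumption.)

    # Sizing sketch in Python, based on the figures mentioned above.
    xci_vm_ram = 12 + 4      # GB per XCI VM once ODL is added -> 16 GB each
    xci_vms = 3              # the three VMs created inside the big VM
    test_vm_ram = 2          # GB per non-cirros test VM
    test_vms = 4             # test VMs used by the SFC testcases

    per_deployment = xci_vms * xci_vm_ram + test_vms * test_vm_ram
    print(per_deployment)    # 56 GB before overhead, hence the ~70 GB request

    # Each CI slave runs at most 2 such deployments, hence the host figure above:
    print(2 * 70)            # 140 GB per slave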
14:11:50 <jmorgan1> one idea is to run the vm in a container instead of a plain vm
14:12:27 <hwoarang> yep
14:12:32 <hwoarang> not for right now
14:12:39 <jmorgan1> this is how google does it, I learned this week
14:13:15 <hwoarang> we will see how to improve it. it's not too far away
14:13:22 <hwoarang> ok so anything else about SFC?
14:13:24 <epalper> jmorgan1: vm in container ?
14:13:34 <mbuil> hwoarang: nothing else
14:13:36 <jmorgan1> epalper: yes, kubernetes based
14:13:47 <epalper> jmorgan1: can you please send the link ?
14:14:10 <epalper> i.e. the link about running a vm inside a container that you saw
14:14:13 <jmorgan1> epalper: i don't have one, you might need to investigate. If I find it, I will send it out
14:14:20 <epalper> ok, sure
14:14:21 <hwoarang> #topic Scenarios/Feature Status: os-odl-bgpvpn
14:14:24 <hwoarang> epalper: ^
14:14:43 <epalper> I have raised a blueprint spec in OSA
14:14:44 <epalper> https://review.openstack.org/#/c/523171/
14:14:55 <hwoarang> #info blueprint for OSA has been created
14:14:59 <hwoarang> #link https://review.openstack.org/#/c/523171/
14:15:06 <epalper> it's under review, and I have raised a few patches around osa, os_neutron and xci
14:15:25 <hwoarang> #info blueprint under review, WIP patches for osa, os_neutron and xci
14:15:28 <hwoarang> great :)
14:15:36 <epalper> I request you to review this patch: https://review.openstack.org/#/c/523907/
14:15:38 <hwoarang> will review the latest revision of that spec this afternoon
14:15:46 <epalper> Testing for the above patch is pending
14:16:03 <epalper> thanks hwoarang
14:16:10 <hwoarang> #link request for reviews on WIP https://review.openstack.org/#/c/523907/
14:16:14 <hwoarang> thank you epalper
14:16:32 <hwoarang> #topic Scenarios/Feature Status: os-nosdn-nofeature
14:16:52 <hwoarang> #info fdegir is working on bringing ovs and ceph pieces under this scenario
14:17:17 <hwoarang> #topic Scenarios/Feature Status: Kubernetes in XCI
14:17:25 <hwoarang> hw_wutianwei: ^
14:17:27 <hw_wutianwei> hwoarang: hi
14:18:28 <hw_wutianwei> I found there are two patches, https://gerrit.opnfv.org/gerrit/#/c/48711/ and https://gerrit.opnfv.org/gerrit/#/c/48739/; after they are merged, I will update my patch.
14:18:43 <hw_wutianwei> #link https://gerrit.opnfv.org/gerrit/#/c/46153/
14:18:56 <hw_wutianwei> and request you to review this
14:19:29 <hw_wutianwei> I am trying to deploy kubernetes using the master branch code, but I have run into some trouble that I am trying to solve.
14:19:54 <hwoarang> #info need to disassociate XCI and OSA to make room for more NFVIs
14:19:59 <hwoarang> #link https://gerrit.opnfv.org/gerrit/#/c/48711/
14:20:12 <hwoarang> #info WIP to deploy k8s from master
14:20:19 <hwoarang> hw_wutianwei: that would be helpful indeed thanks!
14:20:21 <hw_wutianwei> patch 46153 uses kubespray to deploy kubernetes v1.8.4
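(For readers unfamiliar with kubespray: a deployment like the one in patch 46153 generally comes down to pinning kube_version and running kubespray's cluster playbook. This is a generic sketch; the inventory path is an assumption, not taken from the patch.)

    # Generic kubespray invocation sketch (not the exact XCI commands).
    import subprocess

    inventory = "inventory/xci/hosts.ini"        # hypothetical inventory file
    extra_vars = {"kube_version": "v1.8.4"}      # pin the version mentioned above

    cmd = ["ansible-playbook", "-i", inventory, "cluster.yml", "-b"]
    for key, value in extra_vars.items():
        cmd += ["-e", "{}={}".format(key, value)]  # pass variable overrides

    subprocess.run(cmd, check=True)              # runs the main kubespray playbook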
14:20:35 <jmorgan1> hw_wutianwei: do you have this documented anywhere? I'd like to look at kubernetes in xci
14:21:04 <hw_wutianwei> do you mean deploy master branch?
14:21:37 <David_Orange> hw_wutianwei: we will need to deploy both (for ONAP use case)
14:22:17 <hw_wutianwei> David_Orange: I am trying
14:23:00 <hwoarang> so ok to move on?
14:23:20 <hw_wutianwei> David_Orange: I will ask you for help if needed
14:23:31 <hwoarang> ok moving on. more k8s talk after the meeting :)
14:23:32 <hwoarang> #topic Scenarios/Feature Status: Congress in XCI
14:23:36 <hwoarang> Taseer: you around?
14:23:37 <David_Orange> hw_wutianwei: you will succeed; sure, happy to help
14:24:16 <hwoarang> guess not
14:24:30 <hwoarang> ok so for the OSA bit we still need os_congress merged
14:24:38 <hw_wutianwei> jmorgan1: this is the plan for k8s in XCI: https://etherpad.opnfv.org/p/xci-k8s
14:24:39 <hwoarang> #info os_congress role WIP
14:24:41 <hwoarang> #link https://review.openstack.org/#/c/522491/
14:24:50 <jmorgan1> hw_wutianwei: thanks, just found it myself
14:25:19 <hw_wutianwei> jmorgan1: just found it :)
14:25:34 <hwoarang> #topic Scenarios/Feature Status: Upcoming work
14:25:46 <hwoarang> #info Promise/Blazar: Taseer will start working on blueprint for blazar for OSA.
14:25:53 <hwoarang> #info HA/Masakari: we met and will meet again with HA/Masakari team to decide the way forward
14:26:10 <hwoarang> #topic General Framework Updates: CentOS support
14:26:13 <hwoarang> ttallgren: ^
14:26:42 <ttallgren> I was planning to submit my patches this week, but then I got some problems after rebasing
14:27:16 <hwoarang> #info CentOS support is ready for review but need to resolve rebase problems
14:27:17 <ttallgren> I have time to work on it again later this week
14:27:26 <hwoarang> ok then thank you!
14:27:53 <hwoarang> moving on
14:28:01 <hwoarang> #topic General Framework Updates: Improving stability
14:28:06 <hwoarang> David_Orange: ^
14:28:20 <David_Orange> #info I worked on it during the plugfest, testing it on our PODs and on CENGN
14:28:54 <David_Orange> #info I am making the last changes so it runs with the current OS deployment (for example, root access on opnfv_host :()
14:29:37 <David_Orange> #info it runs with ubuntu, and is prepared for centos and suse, but some things still need to be checked for those 2 OSes
14:30:10 <David_Orange> opnfv_host is based on the ubuntu cloud image, i suppose you have the same for opensuse and centos
14:30:30 <hwoarang> yes
14:30:44 <hwoarang> but we can build and host images in artifacts.opnfv.org
14:30:51 <David_Orange> #info I split the code to post several patches, one for the VM creation based on PDF/DF
14:30:52 <hwoarang> we already host stuff there
14:31:28 <David_Orange> #info then one for bifrost
14:32:00 <David_Orange> i will follow the model of OSA split, if you are ok
14:32:45 <hwoarang> sure
14:32:55 <hwoarang> ok sounds promising. Looking forward to it
14:32:58 <hwoarang> anything else?
14:33:08 <David_Orange> hwoarang: as you want, we can add an option to download them or build them. For now I download the official ubuntu cloud image for opnfv_host, then build the image for bifrost
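(A minimal sketch of the download-or-build option mentioned here; the toggle name and the diskimage-builder elements are assumptions, not taken from the pending patches.)

    # Either fetch the stock Ubuntu cloud image for opnfv_host or build one locally.
    import subprocess

    BUILD_IMAGE = False  # hypothetical switch between downloading and building

    if not BUILD_IMAGE:
        url = ("https://cloud-images.ubuntu.com/xenial/current/"
               "xenial-server-cloudimg-amd64-disk1.img")
        subprocess.run(["wget", "-N", url], check=True)   # official cloud image
    else:
        # Build with diskimage-builder, the way bifrost builds its images
        subprocess.run(["disk-image-create", "-o", "opnfv-host", "ubuntu", "vm"],
                       check=True)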
14:33:21 <hwoarang> yeah we will see
14:33:38 <David_Orange> nothing more, i hope the patches will not be too big for review
14:34:05 <David_Orange> one last thing
14:34:39 <hwoarang> yes?
14:34:40 <David_Orange> i made this code fully separate from the current xci structure, to avoid disruption
14:35:12 <David_Orange> i will post the patches, then work on merging the folders if that approach is ok for you
14:35:26 <hwoarang> well lets see first
14:35:39 <hwoarang> i know many things are moving around these days :(
14:35:39 <David_Orange> hwoarang: sure
14:36:37 <David_Orange> that's all
14:36:45 <hwoarang> great thanks!
14:36:52 <hwoarang> moving on
14:36:53 <hwoarang> #topic Enabling additional CI loops
14:37:01 <hwoarang> #info this work will proceed soon.
14:37:15 <hwoarang> #topic AoB
14:37:30 <jmorgan1> what is additional CI loops?
14:38:08 <hwoarang> so i guess it means more CI jobs but fdegir put it there so i may not interpret it correctly :)
14:38:22 <hwoarang> like more scenarios tested, ha, no-ha etc
14:38:23 <jmorgan1> ok, i'll ask him later. thanks
14:38:28 <hwoarang> #info Intel POD16-node{4,5} machines will be offline between 9am-12pm PST for RAM adjustments
14:38:49 <jmorgan1> hwoarang: it looks like just node4 and node5 need 256GB RAM in pod16
14:39:11 <jmorgan1> hwoarang: just those two nodes will be down (if that is correct)
14:39:11 <hwoarang> aha
14:39:23 <hwoarang> yep
14:39:34 <jmorgan1> ok, so minimal impact i think
14:39:45 <hwoarang> yes i believe so
14:39:47 <jmorgan1> as those are not being used too much yet
14:40:03 <hwoarang> not sure if they are hooked up to jenkins yet
14:40:11 <hwoarang> i only know about node1 and 3
14:40:13 <jmorgan1> no, not yet
14:40:16 <hwoarang> ok
14:40:20 <jmorgan1> i saw same
14:40:32 <hwoarang> great
14:40:49 <jmorgan1> i concur
14:40:57 <hwoarang> #info Anyone needs resource to do their work on? talk to Fatih :)
14:41:18 <hwoarang> so my understanding is that pod16 is going to be used by CI, and pod21 for devs but master fatih can confirm later on
14:41:32 <hwoarang> sooooooooo
14:41:37 <hwoarang> anything else? :)
14:41:37 <jmorgan1> hwoarang: yes, i believe so
14:41:47 <hwoarang> btw jmorgan1 thank you for the shiny new hardware :)
14:42:16 <jmorgan1> hwoarang: no problem, enjoy
14:42:18 <hwoarang> you are our santa claus :)
14:42:20 <hwoarang> hehe
14:42:24 <David_Orange> in blue
14:42:28 <hwoarang> lol
14:42:45 <hwoarang> okkk
14:42:49 <hwoarang> i guess nothing else so...
14:42:59 <hwoarang> #endmeeting