14:00:43 <fdegir> #startmeeting Cross Community CI
14:00:43 <collabot> Meeting started Wed Nov 15 14:00:43 2017 UTC.  The chair is fdegir. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:43 <collabot> Useful Commands: #action #agreed #help #info #idea #link #topic.
14:00:43 <collabot> The meeting name has been set to 'cross_community_ci'
14:00:48 <fdegir> #topic Rollcall
14:00:56 <fdegir> let's have a quick status update
14:01:15 <electrocucaracha> #info Victor Morales
14:01:17 <hw_wutianwei> #info tianwei Wu
14:01:19 <David_Orange> #info David Blaisonneau
14:01:19 <mbuil> #info Manuel Buil
14:01:29 <trinaths> #info Trinath Somanchi - NXP
14:01:35 <joekidder> #info Joe Kidder
14:01:40 <fdegir> the agenda is pretty much the same as in the last few weeks
14:01:45 <fdegir> #link https://etherpad.opnfv.org/p/xci-meetings
14:02:01 <fdegir> starting with scenario status
14:02:12 <fdegir> #topic Scenario Status: os-odl-sfc
14:02:36 <fdegir> mbuil: any changes since the last time?
14:02:52 <mbuil> yes, should I info myself or better tell you and you do the filtering?
14:03:02 <fdegir> mardim: please info in
14:03:14 <fdegir> mbuil: ^
14:03:42 <mbuil> fdegir: ok
14:03:58 <ttallgren> #info TapioT
14:04:29 <epalper> #info Periyasamy Palanisamy
14:04:37 <mbuil> #info We are doing two things. 1st thing is upstreaming features to os_neutron
14:04:48 <mbuil> #link https://review.openstack.org/#/c/517259/
14:05:01 <mbuil> #link https://review.openstack.org/#/c/510909/
14:05:46 <mbuil> #info 2nd thing, we are trying to make it work with master. Unfortunately, we are finding several issues when trying xci with simple ODL to do L2 and L3
14:06:14 <fdegir> mbuil: any guess about where those issues are coming from?
14:06:21 <fdegir> mbuil: osa itself or xci stuff?
14:06:27 <mbuil> #info It deploys but things don't work. That is a consequence of not using tempest anymore :(
14:06:42 <mardim> #info Dimitrios Markou
14:07:38 <durschatz> #info Dave Urschatz
14:07:45 <mbuil> fdegir: one issue came from xci, three issues came from os_neutron and another issue I suspect is coming from glance (not able to create an image ==> https://hastebin.com/ewufafehaq.vbs)
14:08:13 <fdegir> mbuil: ok
14:08:16 <fdegir> I'll info this in
14:08:23 <fdegir> #info one issue came from xci, three issues came from os_neutron and another issue I suspect is coming from glance (not able to create an image ==> https://hastebin.com/ewufafehaq.vbs)
14:08:42 <mbuuil> before we start running xci with scenarios we need to be careful because, as we don't use tempest, there might be bugs when doing standard cloud operations
14:08:49 <fdegir> about tempest; we can enable it back
14:09:13 <hwoarang> #info Markos Chandras
14:09:22 <fdegir> it was excluded due to the expiration of the cirros DNS record
14:09:47 <mbuil> fdegir: I would like that :)
14:09:58 <fdegir> actioning myself to try tempest locally and enable it if it works
14:10:11 <fdegir> #action fdegir to try tempest and enable it
14:10:20 <fdegir> mbuil: anything else?
14:10:24 <mardim> Also I want to add here that I get an error in Zuul which I cannot replicate locally
14:10:54 <mardim> It is related to the Linux headers, which are essential for the ovs-nsh installation
14:11:03 <mbuil> fdegir: nothing else from my side
14:11:04 <fdegir> mardim: please paste link to that so we capture it
14:11:13 <mardim> if anyone has any idea why this is happening
14:11:17 <mardim> please tell me
14:11:24 <mardim> #link https://hastebin.com/omonutovat.sm
14:11:42 <fdegir> mardim: we can take that after the meeting
14:11:52 <mardim> fdegir, sure, thanks
14:12:03 <fdegir> moving on to the next scenario
14:12:15 <mardim> fdegir, Also I want to add something more
14:12:18 <fdegir> #topic Scenario Status: os-nosdn-ovs
14:12:39 <fdegir> mardim: please info in while we wait epalper electrocucaracha
14:12:50 <mardim> fdegir, I have also this patch for proper testing of ODL in OSA
14:13:11 <mardim> #link https://review.openstack.org/#/c/518964/
14:13:15 <mardim> that's all
14:13:17 <mardim> thanks :)
14:13:18 <fdegir> thx mardim
14:13:30 <fdegir> electrocucaracha: epalper: anything to say about os-nosdn-ovs?
14:13:55 <fdegir> I see the patch is (almost) ready to go in
14:13:57 <fdegir> #link https://gerrit.opnfv.org/gerrit/#/c/43447/
14:13:57 <electrocucaracha> fdegir: ?
14:14:08 <epalper> #info review for this scenario is at https://gerrit.opnfv.org/gerrit/#/c/43447/
14:14:25 <fdegir> electrocucaracha: sorry - I thought you were also looking into ovs
14:14:34 <epalper> # there is a dependent review https://gerrit.opnfv.org/gerrit/#/c/46859/
14:14:57 <epalper> #info I have tested this scenario locally and it works
14:15:08 <electrocucaracha> fdegir: yes, that's one thing I have on my plate, but I'm still dealing with AIO behind a proxy :S
14:15:10 <fdegir> epalper: I see a comment from hw_wutianwei in first patch
14:15:36 <hw_wutianwei> fdegir: I just suggest to do that
14:15:47 <durschatz> #info there is a dependent review https://gerrit.opnfv.org/gerrit/#/c/46859/ for above info
14:15:47 <fdegir> epalper: you can perhaps address/respond to that
14:16:00 <epalper> fdegir: sure
14:16:32 <fdegir> thx for infoing in durschatz
14:16:44 <fdegir> before we move to the next scenario, I want to take a short discussion here
14:16:48 <fdegir> about ovs and ceph
14:17:11 <fdegir> ceph change has been merged
14:17:19 <fdegir> thanks to hw_wutianwei and anyone else contributed to that
14:17:29 <fdegir> the thing is, opnfv uses ovs and ceph by default
14:17:40 <fdegir> and these are part of os-nosdn-nofeature scenario
14:18:13 <fdegir> I think we need to combine these two and push it as os-nosdn-nofeature scenario as releng-xci/xci/scenarios/os-nosdn-nofeature
14:18:22 <fdegir> any comments/thoughts?
14:19:23 <fdegir> any objections?
14:19:30 <electrocucaracha> makes sense
14:19:33 <hw_wutianwei> after ovs is finished, we can combine these into the os-nosdn-nofeature scenario
14:19:53 <fdegir> hw_wutianwei: yes, that's the way but just want to ensure we are all on the same page
14:20:32 <fdegir> #info Once ovs integration is done, it will be combined together with ceph under the scenario os-nosdn-nofeature
14:20:49 <fdegir> we will still need vanilla osa for upstream verification which needs to be handled separately
14:20:54 <tinatsou> #info Tina Tsou
14:21:00 <fdegir> moving on to
14:21:05 <fdegir> #topic Scenario Status: os-odl-nofeature
14:21:14 <fdegir> epalper: is it you again?
14:22:13 <epalper> #info I'm testing https://gerrit.opnfv.org/gerrit/#/c/39239/ again to look for any changes required at openstack-user-config.yml file
14:22:56 <mbuil> epalper: when deploying xci + ODL in master, we have a problem because the ODL service in haproxy is conflicting with the repo service. I created this patch to fix it: https://review.openstack.org/#/c/519661/. There is one thing though which I don't get, why do we have two ports of ODL in haproxy? The 8080 I guess is used by neutron when using ODL in HA mode but the 8181? As far as I know, there is no service using that port, right?
14:22:56 <fdegir> thx epalper
14:24:09 <fdegir> #info when deploying xci + ODL in master, there is a problem because the ODL service in haproxy is conflicting with the repo service. A patch was proposed to fix it: https://review.openstack.org/#/c/519661/
14:24:37 <fdegir> mbuil: you can perhaps take it after the meeting so we don't keep others waiting
14:24:52 <mbuil> fdegir: ok
14:24:54 <fdegir> #topic Scenario Status: kubernetes in XCI
14:25:03 <fdegir> hw_wutianwei: I think you took it over from s3wong
14:25:09 <fdegir> any updates?
14:25:10 <hw_wutianwei> fdegir: yep
14:25:34 <hw_wutianwei> I uploaded a patchset based on Stephen's, and it works now with AIO.
14:26:01 <hw_wutianwei> #link https://gerrit.opnfv.org/gerrit/#/c/46153/
14:26:11 <fdegir> #info k8s works with aio now, the patch is under review
14:26:39 <fdegir> glad to hear
14:26:39 <hw_wutianwei> fdegir: yep, I hope you can give more suggestions
14:26:57 <hw_wutianwei> and I will improve that
14:27:01 <fdegir> this brings a question now
14:27:12 <fdegir> our CI verification is for openstack and we have nothing for k8s
14:27:25 <fdegir> we need to add a mechanism to allow scenario based verification
14:27:34 <fdegir> rather than default/vanilla osa
14:27:53 <fdegir> hw_wutianwei: we can work on this together perhaps and adapt jobs accordingly
14:28:14 <hw_wutianwei> fdegir: ok
14:28:17 <fdegir> #info CI for XCI needs to be adapted, enabling scenario based patchset verification
14:28:37 <fdegir> we can talk about the details coming days
14:28:50 <fdegir> and everyone is welcome to review/provide suggestions obviously
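The scenario-based verification discussed above could start from something like the following hypothetical sketch: derive which scenario a Gerrit change should be verified against from the files it touches. This is not an agreed mechanism; the function name, the `xci/scenarios/<name>/` layout (it follows the directory proposal mentioned earlier in the meeting), and the fallback scenario are all assumptions.

```shell
#!/bin/sh
# Hypothetical sketch: map the files a patchset touches to the scenario
# that should be deployed for verification.
detect_scenario() {
    changed_files="$1"
    # Take the first path under xci/scenarios/ and keep the directory name.
    scenario=$(printf '%s\n' "$changed_files" \
        | grep -o 'xci/scenarios/[^/]*' | head -n1 | cut -d/ -f3)
    # Fall back to a default scenario when no scenario files changed
    # (the default chosen here is an assumption, not a project decision).
    printf '%s\n' "${scenario:-os-nosdn-nofeature}"
}
```

For example, `detect_scenario "xci/scenarios/os-odl-sfc/role/tasks/main.yml"` would print `os-odl-sfc`, while a docs-only change would fall back to the default.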
14:28:55 <hw_wutianwei> fdegir: and I also need to add other scenarios such as ha/noha for k8s later
14:29:05 <fdegir> hw_wutianwei: +1
14:29:37 <fdegir> #info ha and noha will be added for k8s later on
14:29:42 <fdegir> moving to congress
14:29:46 <hw_wutianwei> fdegir: I have one question
14:29:54 <fdegir> hw_wutianwei: go ahead
14:31:03 <hw_wutianwei> fdegir, David_Orange: I found there is a patch about installing k8s with Rancher
14:31:20 <hw_wutianwei> fdegir: and I use kubespray
14:31:37 <David_Orange> hw_wutianwei: yes, it's the code I was talking about last week
14:31:44 <hw_wutianwei> do we need to support both of them?
14:32:06 <David_Orange> hw_wutianwei: extracted from what we have here in Lannion.
14:32:42 <fdegir> maybe I can share what I think about this (and similar things)
14:32:53 <David_Orange> hw_wutianwei: no, as I said to you last week, I shared it only if needed; I can close the patch, the important thing is to have something working
14:33:58 <hw_wutianwei> David_Orange: ok
14:34:05 <fdegir> when we talk about XCI, we mainly talk about providing feedback with the toolset we picked and framework we are developing
14:34:29 <fdegir> another important aspect of XCI is to give people opportunity to try things out, experiment with them and come up with different ways of doing things
14:35:21 <fdegir> we of course have a framework to fit in
14:35:37 <fdegir> and as long as those new things that come to XCI fulfil what is required by the framework, I am fine with it personally
14:36:24 <fdegir> but one thing to highlight here is that whoever comes up with those needs to ensure it stays in the framework and is carried by them until it is accepted by the rest of XCI and OPNFV as a whole
14:36:44 <fdegir> so, please continue bringing in new things and share them with the rest
14:37:15 <hw_wutianwei> fdegir, David_Orange: thank you for making it clear
14:37:25 <fdegir> now congress
14:37:32 <fdegir> #topic Feature Status: Congress
14:37:36 <fdegir> Taseer: are you with us?
14:37:40 <Taseer> yes
14:37:52 <Taseer> role has already been merged
14:38:02 <fdegir> #info Congress role has been merged upstream
14:38:16 <Taseer> #link github.com/openstack/openstack-ansible-os_congress
14:38:24 <Taseer> but the patch in OSA has not
14:38:44 <fdegir> Taseer: link please
14:39:01 <Taseer> okay
14:39:17 <Taseer> #link https://review.openstack.org/#/c/503971/
14:39:52 <Taseer> evrardjp commented something about an experimental job.
14:40:08 <fdegir> Taseer: it looks good
14:40:11 <Taseer> but looks like he is not on work this week
14:40:37 <fdegir> I mean no objection to patch itself but rather having scenario and the job
14:40:52 <fdegir> Taseer: he was at the OpenStack summit so it might take a few days until he recovers
14:41:03 <Taseer> okay
14:41:06 <fdegir> thanks Taseer
14:41:15 <fdegir> hw_wutianwei: skipping ceph as it's done already
14:41:18 <Taseer> fdegir: you are welcome !
14:41:22 <hw_wutianwei> fdegir: ok
14:41:33 <fdegir> #topic Improving Stability
14:41:52 <fdegir> so hwoarang and David_Orange have been talking about fixing stability issues
14:42:04 <fdegir> hwoarang: David_Orange: can you please summarize it for the rest?
14:42:34 <hwoarang> i will let David_Orange summarize since he took over stuff this week as i am busy with other things
14:43:01 <David_Orange> fdegir: by stability you mean the bifrost from PDF?
14:43:10 <fdegir> David_Orange: yes and moving bifrost into vm
14:43:24 <David_Orange> fdegir: ok
14:44:04 <David_Orange> I am reading the current code to see how things work now, to be sure I do not miss something
14:44:31 <David_Orange> I am trying to see how all the environment variables work
14:45:08 <David_Orange> and will probably remove many of them
14:45:14 <mbuil> what means PDF in this context?
14:45:28 <David_Orange> but they will be set in the I/PDF
14:45:47 <David_Orange> mbuil: the description of the pod
14:45:50 <David_Orange> https://gerrit.opnfv.org/gerrit/#/c/46493/5/xci/file/ha/vpod-pdf.yaml
14:46:09 <mbuil> David_Orange: thanks
14:46:30 <David_Orange> to have one code path to rule all cases: aio/ha/mini/noha + vm/baremetal
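The pod descriptor idea behind "one code path for all cases" can be pictured with a minimal sketch. This is a hypothetical, simplified example only: the field names are loosely modeled on the vpod-pdf.yaml patch linked above and are assumptions, not the agreed PDF/IDF schema.

```yaml
# Hypothetical, simplified pod descriptor sketch (field names are assumptions,
# loosely modeled on the vpod-pdf.yaml patch linked above).
pod:
  name: vpod1
  type: virtual              # virtual | baremetal
  flavor: ha                 # aio | mini | noha | ha
  nodes:
    - name: opnfv
      role: deployment       # runs bifrost, drives enrollment/provisioning
    - name: controller00
      role: controller
    - name: compute00
      role: compute
```

With node details captured here instead of in environment variables, the same playbooks can consume a descriptor for a virtual or a baremetal pod.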
14:47:36 <fdegir> maybe I can add a more general summary to this
14:47:42 <David_Orange> I will merge my code with the current one to do as is done now for OSA: generate config / generate bifrost inventory / run vanilla bifrost playbooks
14:48:05 <fdegir> #info What we mean with "improving stability" in this context is that, we want to isolate ourselves from the host as much as possible, ensuring when someone (users/developers/CI) attempts to use XCI, things work smoothly due to less number of things that might conflict with each other
14:48:56 <fdegir> #info Another aspect of this work is to increase the reuse by splitting node specific bifrost stuff from the actual stack/scenario installation so whatever you may want to install will work with the bifrost pieces we have
14:49:06 <David_Orange> so the idea is to push bifrost in a VM
14:49:53 <ttallgren> Container?
14:49:55 <fdegir> #info Incorporating PDF/IDF is important for ensuring we can run on any (type of) POD which has its PDF available
14:50:06 <mbuil> ok and use an extra level of nested virtualization
14:50:08 <electrocucaracha> is that going to prevent the installation of bifrost stuff in the host?
14:50:14 <hwoarang> hopefully
14:50:28 <fdegir> mbuil: no extra nesting
14:50:37 <David_Orange> ttallgren: it is made by kolla
14:50:46 <fdegir> node enrollment/deployment will be driven from opnfv vm rather than host
14:50:57 <hwoarang> basically bifrost will only take care of enrollment and provisioning, not of creating the VMs itself
14:51:02 <fdegir> and the target nodes will be on the same level as opnfv vm - same as today
14:51:16 <David_Orange> ttallgren: but for now, we reuse the opnfv host
14:51:17 <hwoarang> mbuil: no  functional changes from where you stand
14:51:20 * electrocucaracha nice
14:51:35 <fdegir> ttallgren: we have plans to look at kolla and possibility to get rid of opnfv vm by moving those pieces into container
14:51:40 <fdegir> ttallgren: but one step at a time
14:51:41 <hwoarang> in the end you will get the same X VMs as today
14:51:47 <mbuil> I see, thanks
14:52:07 <fdegir> ttallgren: our concern today is isolation, splitting bifrost from overall process, and PDF/IDF
14:52:21 <electrocucaracha> fdegir: in that future, do we have plans to support kolla and osa?
14:52:34 <fdegir> electrocucaracha: that's a possibility
14:52:52 <fdegir> we need to move on
14:53:11 <fdegir> we can talk about this and other topics after the meeting
14:53:19 <fdegir> #topic Zuulv3 Prototype
14:53:57 <fdegir> #info We had a discussion with OpenStack Infra last week to start a prototype in OPNFV to try things that are not tried by OpenStack Infra
14:54:13 <fdegir> #info The prototype will be limited in scope initially and the details can be seen on the link below
14:54:21 <fdegir> #link https://etherpad.opnfv.org/p/opnfv-zuul-prototype
14:54:41 <fdegir> #info Everyone is welcome to share their thoughts on the scope of the prototype and help us get it working
14:55:09 <fdegir> #topic Hardware Availability
14:55:34 <fdegir> #info I have been told that we will not get second POD for XCI so we have an issue with hardware availability now
14:55:52 <fdegir> #info The only POD we have will be configured and mainly dedicated for CI
14:56:07 <fdegir> if anyone is in urgent need of hardware, please reach out to me and we'll see what we can do
14:56:52 <fdegir> and we'd appreciate it if anyone has a machine to provide, especially to XCI developers
14:57:06 <fdegir> again, ping me if that's the case
14:57:10 <fdegir> the last topic is
14:57:22 <fdegir> #topic Working with other communities
14:57:41 <mbuil> are Intel PoDs back in service?
14:57:52 <fdegir> mbuil: pod20 has been retired
14:58:04 <fdegir> mbuil: pod16 is the only pod we have - which is the CI POD I mentioned
14:58:22 <fdegir> #info We are in contact with DPDK and CNCF regarding XCI and more info will be shared once we have it
14:58:22 <mbuil> ok, retired without replacement I suspect :(
14:58:40 <fdegir> #info We are also talking to OpenStack OpenLab people regarding possible collaboration
14:58:50 <fdegir> #info More info to follow when we have it
14:58:59 <fdegir> #topic AoB
14:59:08 <fdegir> so, 1 minute for any last minute topic
14:59:10 <durschatz> fdegir: I have another set of students coming in January and want to make sure I can successfully deploy os-nosdn-nofeature, have access to horizon and create VMs on that scenario. When is the best time for me to chime in and test the new stability changes? My hardware supply is a bit thin now also.
14:59:35 <fdegir> durschatz: once David_Orange fixes what he is working on, you can give it a try
14:59:44 <fdegir> durschatz: it will have pre-queens
14:59:46 <durschatz> :-)
15:00:03 <David_Orange> fdegir: about stability topic
15:00:22 <fdegir> David_Orange: let me end the meeting first
15:00:31 <fdegir> thank you all for joining and talk to you next week
15:00:33 <David_Orange> fdegir: sure :)
15:00:33 <fdegir> #endmeeting