13:00:40 <hwoarang> #startmeeting Cross Community CI
13:00:40 <collabot> Meeting started Wed Jul 19 13:00:40 2017 UTC.  The chair is hwoarang. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:40 <collabot> Useful Commands: #action #agreed #help #info #idea #link #topic.
13:00:40 <collabot> The meeting name has been set to 'cross_community_ci'
13:00:53 <hwoarang> #topic Rollcall
13:00:57 <uli-k> #info Uli
13:01:07 <thaj> #info Thaj
13:01:11 <jvidal> #info Juan Vidal
13:01:22 <hw_wutianwei> #info Tianwei Wu
13:01:27 <bobmon01> #info Bob Monkman
13:01:28 <hwoarang> i will be chairing the meeting today since fdegir is away
13:01:38 <bobmon01> hey cristina
13:01:58 <David_Orange> #info David Blaisonneau
13:02:11 <CristinaPauna> hi bob
13:02:31 <hwoarang> #link https://etherpad.opnfv.org/p/xci-meetings
13:02:40 <yolanda> hi
13:02:57 <David_Orange> hi
13:03:08 <Tina> Hi David and Yolanda
13:03:16 <David_Orange> Hi Tina
13:04:17 <hwoarang> if there is anything that's not on the agenda already feel free to add it
13:04:30 <hwoarang> moving on to first topic
13:04:37 <hwoarang> #topic XCI
13:04:49 <AlexAvadanii> #info Alexandru Avadanii (Enea)
13:04:55 <m5p3nc3r> #info Matt Spencer
13:05:03 <CristinaPauna> #info Cristina Pauna (Enea)
13:05:22 <yolanda> #info Yolanda Robla
13:05:30 <hwoarang> we should prepare for the xci migration to releng-xci. Fatih proposed a layout on https://gerrit.opnfv.org/gerrit/#/c/36587/
13:05:31 <timirnich> #info Tim Irnich
13:05:54 <hwoarang> #info layout for releng-xci proposed by Fatih on https://gerrit.opnfv.org/gerrit/#/c/36587/
13:06:18 <mbuil> #info Manuel Buil
13:06:23 <Tina> #info Tina Tsou
13:06:32 <hwoarang> have a look at it if possible
13:06:44 <mardim> #info Dimitris Markou
13:07:22 <hwoarang> anything that you would like to add on that front?
13:07:48 <yolanda> long review... i'll need some time to digest
13:08:19 <hwoarang> yep but we could focus on the proposed structure for now. as far as i understand, the patchset itself is an FYI
13:08:23 <David_Orange> yolanda: the main thing for this subject is on Fatih comment
13:08:29 <hwoarang> yep
13:08:37 <dmcbride> #info David McBride
13:08:52 <David_Orange> hwoarang: +1 for the layout
13:09:07 <hwoarang> ok moving on
13:09:18 <David_Orange> hwoarang: i would add the fact that we should use pdf as source
13:09:43 <hwoarang> #info David_Orange suggests using the PDF (Pod Descriptor File) as source
13:10:19 <hwoarang> ok moving on
13:10:22 <hwoarang> XCI on ARM
13:10:27 <David_Orange> hwoarang: and with as few bash variables or sourced files as possible - ideally only generate a conf file at the startup of the bash script
13:11:05 <hwoarang> #info David_Orange suggests to simplify user-visible configuration
13:11:07 <bobmon01> hwoarang: Armband team is represented
13:11:30 <hwoarang> great. so anything you would like to share?
13:11:52 <bobmon01> #info our goal is to understand how we can participate
13:12:07 <bobmon01> #info we wish to have process, docs, servers for ARM testing
13:12:35 <bobmon01> #info we need to know what we need to do to support the project
13:13:03 <bobmon01> #info Alex and Cristina here can comment on how we support bare metal pods
13:13:23 <bobmon01> #info ARM will only support bare metal pods in XCI for the foreseeable future
13:14:07 <hwoarang> my personal opinion would be to simply get Ubuntu 16.04 on an ARM host, run XCI and see what fails, so we can get an idea of the current status
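(Illustrative sketch, not part of the log: the XCI entry point at the time was a bash script driven by environment variables; the repository path, XCI_FLAVOR value and package list below reflect the releng prototypes of that period and may have changed since.)

    # Minimal "run XCI on a fresh Ubuntu 16.04 ARM host and see what fails" attempt
    sudo apt-get install -y git python-pip libvirt-bin qemu-kvm   # assumed base dependencies
    git clone https://gerrit.opnfv.org/gerrit/releng
    cd releng/prototypes/xci
    export XCI_FLAVOR=mini      # smallest multi-node flavor: opnfv host + controller + compute
    ./xci-deploy.sh             # note where it breaks on ARM and report back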
13:14:51 <bobmon01> #info Cristina: I assume we can try this on Armband Pharos lab perhaps?
13:15:14 <AlexAvadanii> Ubuntu 16.04 on ARM works just fine, we can spawn a few VMs and do an Openstack deploy
13:15:30 <hwoarang> so XCI should work as is right now
13:15:32 <trinaths> #info Trinath Somanchi
13:16:03 <AlexAvadanii> but the limitation is nested virt support - your OpenStack deploy will spawn *emulated* VMs if your deployed POD is virtual
13:16:09 <bobmon01> #info we do not yet know what is "in" XCI that is different from what we do today with Fuel on that same OS
13:16:43 <trinaths> #info Trinath Somanchi (NXP)
13:17:31 <hwoarang> bobmon01: ok i can explain after the meeting. perhaps we should conclude now that 'further clarifications' are necessary in order for you to proceed
13:17:33 <AlexAvadanii> I assume that as part of XCI we will want to run at least some smoke tests, which will be a lot slower with nested virt - they still cover the aspects we are after, like nova functionality, but they don't cover the virtualization layer (libvirt, qemu) the same way a baremetal deploy would
13:18:38 <AlexAvadanii> if the scope of XCI is OpenStack components only (nova, neutron etc.) and not KVM itself - virtual PODs on ARM will work just fine
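(Generic check, added for context rather than quoted from the meeting: how to tell whether a deployed compute node gets hardware virtualization or falls back to emulation.)

    # On a compute node of the deployed POD: without /dev/kvm, nova's libvirt
    # driver can only use plain QEMU emulation (virt_type=qemu).
    if [ -e /dev/kvm ]; then echo "KVM acceleration available"; else echo "no KVM - guests will be fully emulated"; fi
    # On an x86 hypervisor, nested virt for the POD VMs depends on the kvm_intel module:
    cat /sys/module/kvm_intel/parameters/nested   # 'Y' or '1' means nested virt is enabled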
13:19:05 <hwoarang> lets discuss that at the end ok?
13:19:15 <bobmon01> #info hwoarang: OK, but my question will be if bare metal pods can be supported by us in XCI in future
13:19:40 <hwoarang> #info baremetal support is in the TODO list
13:19:51 <bobmon01> #info +1
13:19:56 <hwoarang> moving on
13:20:02 <hwoarang> SNAPS-OO Support
13:20:07 <hwoarang> anyone^?
13:21:04 <hwoarang> nope. ok moving to Stress Test Support
13:21:53 <hwoarang> nope. on to ODL Patchset verification
13:22:05 <hwoarang> mbuil: mardim is this ^ you?
13:22:45 <mbuil> hwoarang: I am not sure... what do you mean by that?
13:23:23 <hwoarang> not quite sure either; it was on the agenda
13:23:41 <trinaths> (doubt) is this the meeting for pharos lab?
13:23:49 <jose_lausuch> I have some feedback regarding the Stress Test Support
13:23:56 <jose_lausuch> but I don't think this is the right place
13:23:56 <mardim> maybe this is for Juan and his patches to os_neutron_role?
13:23:59 <hwoarang> trinaths: nope, this is for XCI
13:23:59 <dfarrell07> #info Daniel Farrell
13:24:02 <mardim> jvidal *
13:24:08 <trinaths> ok
13:24:09 <hwoarang> jose_lausuch: up to you
13:24:47 <jose_lausuch> I think we can't reach a consensus here since the people involved in that activity are not present
13:24:51 <jose_lausuch> but let me give you an overview
13:24:56 <hwoarang> ok
13:25:09 <jose_lausuch> the testing community has a proposal to run long term testing on OPNFV
13:25:42 <jose_lausuch> we thought about XCI to avoid opnfv installer dependencies
13:25:44 <jose_lausuch> there are 5 installers
13:26:12 <jose_lausuch> it would be great to test all of them if we had unlimited resources in CI
13:26:17 <jose_lausuch> we need of course a baremetal POD
13:26:30 <jose_lausuch> but we think XCI is a good vehicle to achieve our goal
13:26:41 <hwoarang> #info testing community considered XCI to avoid installer dependencies.
13:26:57 <jose_lausuch> but there are some concerns from the community that we should test on OPNFV installers instead of "upstream" openstack
13:27:02 <uli-k> can you explain "dependencies"?
13:27:04 <jose_lausuch> it was discussed during TSC meeting
13:27:11 <jose_lausuch> well, not dependencies
13:27:21 <jose_lausuch> what I mean is that we would need to install all of them
13:27:26 <jose_lausuch> right?
13:27:35 <jose_lausuch> that's unfeasible
13:27:43 <jose_lausuch> with only 1 pod
13:27:51 <jose_lausuch> and limited number of people of this activity
13:28:15 <uli-k> Theoretically the deploy result should be the same, independent of which installer is used.
13:28:40 <uli-k> So why not just test with one of them? And at a certain point verify with another one.
13:29:14 <jose_lausuch> round robin you mean?
13:29:20 <uli-k> A bit....
13:29:20 <jose_lausuch> and which one of them do you recommend?
13:29:23 <uli-k> slow
13:29:28 <jose_lausuch> that's unmanageable
13:29:38 <uli-k> The most stable one -  since you want to have a stable basis
13:29:45 <jose_lausuch> we won't have time to provide enough feedback if we spend 1 week installing something
13:29:53 <jose_lausuch> what is the most stable one?
13:29:54 <jose_lausuch> :)
13:30:09 <hwoarang> #info Uli mentioned that the end result is the same regardless of which installer was used
13:30:21 <hwoarang> #info jose raised the point of limited hardware, time and human resources
13:30:26 <hwoarang> shall this be taken offline?
13:30:29 <jose_lausuch> we know that the results differ from the installers
13:30:30 <uli-k> XCI will deploy from master. So you will get different software for each deployment
13:30:32 <jose_lausuch> and that's proven in CI
13:30:47 <hwoarang> since it's similar to what TSC discussed yesterday and afaik there is already an e-mail thread about it
13:30:52 <jose_lausuch> can we get stable/ocata from XCI?
13:30:56 <jose_lausuch> that is the goal of Euphrates
13:31:00 <hwoarang> it's in the TODO
13:31:13 <hwoarang> #info Jose raised the issue of not having stable XCI branches
13:31:19 <hwoarang> #info stable branches are in the TODO
13:31:26 <dmcbride> we know that the same scenario installed by different installers results in *different* deployed configurations
13:31:27 <jose_lausuch> I didn't raise anything :)
13:31:34 <mbuil> jose_lausuch: why Ocata and not Pike?
13:31:46 <jose_lausuch> mbuil: Euphrates is targeting Ocata
13:31:52 <jose_lausuch> dmcbride: am I right?
13:32:00 <dmcbride> jose_lausuch: correct
13:32:07 * uli-k Only I raised that concern ... You can ignore me
13:32:14 <hwoarang> we are getting offtopic here so lets move on please
13:32:41 <jose_lausuch> ok, sorry
13:32:53 <hwoarang> #topic General status for OSA
13:32:58 <hwoarang> hw_wutianwei: still around?^
13:33:05 <hw_wutianwei> hi
13:33:10 <hwoarang> CI for OSA
13:33:16 <hwoarang> is this something you can help us with?
13:33:28 <dmcbride> another advantage of using xci is that the test team can identify optimal configurations, which the installers can use as a data point to adjust their own deployed configurations
13:33:46 <hwoarang> we need to create a new jenkins job to test xci patches and also have a periodic job that tests the latest upstream code so we can do tags more quickly
13:33:58 <hw_wutianwei> I think it is necessary to make it more stable
13:34:12 <hw_wutianwei> and I will try to do that
13:34:24 <hw_wutianwei> Compass4NFV now uses OSA and it is stable
13:34:50 <hwoarang> #action hw_wutianwei to have a look at the jenkins jobs
13:35:29 <hwoarang> good
13:35:34 <hwoarang> hw_wutianwei: anything else?
13:35:41 <yolanda> i've been looking at making the cloud more consumable, with the certificates and openrc
13:35:42 <hw_wutianwei> in my opinion, a daily build is necessary
13:36:14 <hwoarang> #info yolanda is working on certificates and openrc improvements
13:36:44 <hwoarang> we have a separate features topic so we will discuss that there too
13:36:46 <hwoarang> OSA on baremetal
13:36:57 <hwoarang> David_Orange: ^ a quick overview please
13:37:05 <hwoarang> #link https://gerrit.opnfv.org/gerrit/#/c/36587/
13:37:06 <David_Orange> hwoarang: sure
13:37:36 <David_Orange> #info orange pod1 is installed with bifrost + osa using ^ patch
13:37:46 <David_Orange> #info 5 nodes baremetal
13:37:51 <hwoarang> that's great
13:38:34 <David_Orange> #info this is still beta and the install can be erratic, but all was green on the last install
13:38:54 <hwoarang> thank you David_Orange
13:39:23 <hwoarang> OSA on SUSE
13:39:24 <David_Orange> #info the 36587 patch is just FYI, it is not meant to replace the xci scripts, just a way for me to learn bifrost and osa
13:39:45 <hwoarang> yep we will review it once we move stuff to releng-xci i suppose
13:39:55 <hwoarang> thanks for your work so far
13:40:12 <hwoarang> OSA on SUSE
13:40:15 <hwoarang> that's me
13:40:32 <hwoarang> #info almost all roles have gained support for SUSE except nova and ceph
13:40:33 <David_Orange> hwoarang: sure, you are welcome, i will continue my work on it, we can switch topic
13:41:10 <hwoarang> #info AIO is on TODO
13:41:21 <hwoarang> next one is Certificates
13:41:26 <hwoarang> yolanda that's you right^
13:41:37 <yolanda> yes
13:41:47 <hwoarang> #link https://jira.opnfv.org/browse/RELENG-266
13:41:49 <yolanda> so i found that there were problems consuming the cloud
13:42:00 <yolanda> because it was on https, and the certificate was not trusted
13:42:14 <yolanda> so i started a patch to fix it, and also provide openrc files to easily source the creds
13:42:17 <hwoarang> #info yolanda found out that there were problems consuming the cloud
13:42:29 <hwoarang> #info https issues. certificates not trusted
13:42:43 <yolanda> #link https://gerrit.opnfv.org/gerrit/37619
13:42:54 <yolanda> that's WIP but i'm working on it this week
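(Background sketch of what "openrc files to easily source the creds" means - an illustrative openrc fragment using the standard python-openstackclient variables with placeholder values; it is not copied from the patch itself.)

    export OS_AUTH_URL=https://192.168.122.3:5000/v3
    export OS_PROJECT_NAME=admin
    export OS_USERNAME=admin
    export OS_PASSWORD=changeme                   # placeholder
    export OS_USER_DOMAIN_NAME=Default
    export OS_PROJECT_DOMAIN_NAME=Default
    export OS_IDENTITY_API_VERSION=3
    export OS_CACERT=/path/to/deployment-ca.crt   # trust the deployment's CA for the https endpoints
    # usage: source openrc && openstack endpoint list   (should pass without certificate errors)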
13:43:12 <hwoarang> thank you yolanda
13:43:35 <hwoarang> moving on
13:43:45 <hwoarang> #topic General status for Bifrost
13:44:06 <hwoarang> that sounds easy enough and can be done with existing hardware resources. anyone want to take care of it?
13:44:20 <yolanda> i have one patch pending for review in bifrost
13:44:22 <yolanda> #link https://review.openstack.org/483998
13:44:37 <yolanda> as i started to use xci in my own datacenter, i needed to customize nameservers
13:44:54 <yolanda> and i found out that bifrost only supported setting one nameserver, while OSA was enforcing two
13:45:00 <yolanda> so i did that fix in bifrost
13:45:18 <hwoarang> thank you, i will have a look at it myself as well
13:45:35 <hwoarang> ok i will take care of periodic jobs
13:45:51 <hwoarang> #action hwoarang to have a look at enabling periodic jobs for bifrost
13:46:04 <hwoarang> yolanda: anything else?
13:46:16 <hwoarang> bifrost has been very quiet lately which is good
13:46:17 <yolanda> nothing from my side
13:46:19 <hwoarang> ok
13:46:21 <hwoarang> #topic Feature Status
13:46:32 <hwoarang> odl/tacker/ovs-nsh
13:46:35 <hwoarang> mbuil: mardim go^
13:46:51 <jvidal> ODL is pending review: https://review.openstack.org/#/c/480128/
13:47:03 <hwoarang> #link https://review.openstack.org/#/c/480128/
13:47:11 <jvidal> it's just a first integration
13:47:23 <jvidal> but works for L2 scenarios at least
13:47:30 <hwoarang> #info ODL integration pending upstream review
13:47:48 <jvidal> there are a couple of smaller patches pending, one for upstream openstack-ansible (for dependency management)
13:47:49 <hwoarang> #info first integration - works on L2 scenarios
13:48:06 <jvidal> and a downstream one in releng, to optionally deploy with ODL
13:48:16 <jvidal> mbuil, your turn
13:48:37 <mbuil> in L3 we are hitting an ODL bug in Carbon. There is also a bug on the networking-odl side for L3 which should be fixed by the Pike release
13:49:13 <hwoarang> #info L3 has a bug in ODL Carbon and networking-odl. Should be ok in the Pike release
13:49:17 <mbuil> networking-odl bug ==> https://review.openstack.org/#/c/356839/
13:50:03 <hwoarang> #link https://review.openstack.org/#/c/356839/
13:50:08 <Tina> I feel ODL is quite mature now
13:50:29 <mbuil> for tacker, openstack-infra added tacker role to OSA today ==> https://review.openstack.org/#/c/482873/
13:50:39 <hwoarang> #link https://review.openstack.org/#/c/482873/
13:51:05 <hwoarang> #info openstack-infra created the tacker role repository
13:51:11 <hwoarang> great
13:51:30 <mbuil> now I need to commit the patch to OSA with the installer playbook
13:51:38 <mbuil> mardim, your turn
13:52:00 <mardim> Ok so in OVS-NSH we have this spec https://review.openstack.org/#/c/476121/
13:52:10 <hwoarang> #link https://review.openstack.org/#/c/476121/
13:52:34 <mardim> Also we are really close to making OVS-NSH work in AIO OSA
13:52:54 <mardim> and also I created the OVS-NSH packages for Ubuntu, CentOS and SUSE
13:53:07 <mardim> they are upstream in a private PPA
13:53:09 <hwoarang> #info ovs-nsh support on XCI/OSA is going well
13:53:24 <hwoarang> #info ovs-nsh packages created for ubuntu, centos, suse
13:53:39 <hwoarang> sounds awesome!
13:53:44 <hwoarang> anything else?
13:53:44 <mardim> Also mbuil and I needed to investigate why the VMs don't have external connectivity when we use OVS
13:54:06 <mardim> so I created a playbook to set up the necessary networking
13:54:24 <mardim> so the VMs can ping the internet and get floating IPs assigned
13:54:27 <mardim> that's all
13:54:31 <hwoarang> #info ovs networking issues - under investigation
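(For context, roughly what such a playbook automates - standard neutron external/tenant network plumbing, shown here as equivalent CLI commands with illustrative names and CIDRs rather than the actual playbook tasks.)

    # External (provider) network + subnet
    openstack network create --external --provider-network-type flat \
        --provider-physical-network physnet1 ext-net
    openstack subnet create --network ext-net --subnet-range 192.168.122.0/24 \
        --allocation-pool start=192.168.122.100,end=192.168.122.200 \
        --gateway 192.168.122.1 --no-dhcp ext-subnet
    # Tenant network, router and floating IP so VMs can reach the internet
    openstack network create private-net
    openstack subnet create --network private-net --subnet-range 10.0.0.0/24 private-subnet
    openstack router create router1
    openstack router set router1 --external-gateway ext-net
    openstack router add subnet router1 private-subnet
    openstack floating ip create ext-net   # can then be associated with a server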
13:54:33 <hwoarang> thank you
13:54:44 <hwoarang> Pending Work
13:54:47 <hwoarang> ceph
13:55:00 <hwoarang> i guess nobody is working on that yet
13:55:23 <hwoarang> congress
13:55:25 <hwoarang> ditto
13:55:35 <hwoarang> ovs
13:55:47 <hwoarang> mbuil:  is this something you are doing^?
13:55:59 <hwoarang> or maybe i don't remember correctly
13:56:14 <mbuil> ovs networking issues, yes, that is the L3 problems I mentioned before
13:56:32 <hwoarang> ok
13:56:39 <hwoarang> fds
13:56:40 <hw_wutianwei> mbuil: could you tell me the difference between ovs nsh and ovs?
13:57:02 <hwoarang> hw_wutianwei: after the meeting please as we only have 5 minutes left if that's OK
13:57:10 <hw_wutianwei> ok
13:57:26 <hwoarang> last topic
13:57:30 <hwoarang> #topic Scenario Status
13:57:39 <hwoarang> anyone?^ :)
13:58:15 <hwoarang> anyone want to talk about 'os-nosdn-nofeature-ha' or 'os-odl-sfc-ha' or 'os-odl-nofeature-ha' scenarios?
13:58:53 <hwoarang> i guess not
13:59:22 <hwoarang> ok that's it for today! thank you for attending
13:59:25 <hwoarang> #endmeeting