13:00:24 <fdegir> #startmeeting Cross Community CI
13:00:24 <collabot> Meeting started Wed Aug 16 13:00:24 2017 UTC.  The chair is fdegir. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:24 <collabot> Useful Commands: #action #agreed #help #info #idea #link #topic.
13:00:24 <collabot> The meeting name has been set to 'cross_community_ci'
13:00:34 <fdegir> #topic Rollcall
13:00:50 <fdegir> anyone around for XCI meeting?
13:01:06 <yolanda> hi
13:01:07 <hwoarang> o/
13:01:12 <hw_wutianwei> hi
13:01:16 <fdegir> hello
13:01:31 <fdegir> so the agenda is in its usual place
13:01:35 <mbuil> #info Manuel Buil
13:01:37 <fdegir> #link https://etherpad.opnfv.org/p/xci-meetings
13:01:48 <epalper> hi
13:02:00 <acm> #info Al Morton
13:02:08 <fdegir> so we start with the first topic, which is
13:02:16 <fdegir> #topic Migration to releng-xci repository
13:02:44 <fdegir> I suppose you all noticed the mail from Trevor regarding this
13:03:07 <fdegir> #info XCI finally has a proper repository to continue the work
13:03:12 <hwoarang> yup
13:03:16 <fdegir> #link https://gerrit.opnfv.org/gerrit/gitweb?p=releng-xci.git;a=summary
13:03:37 <fdegir> please ensure you send any existing/new patches to this instead of releng
13:03:57 <fdegir> #info Bifrost jobs have been adjusted and should properly work
13:04:17 <hwoarang> they do
13:04:28 <mardim> #info Dimitrios Markou
13:04:34 <fdegir> #info The main thing for us now is to properly structure our work to ensure sustainable way forward
13:04:56 <fdegir> #info As a first step, I posted short info about how things can be structured
13:05:02 <fdegir> #link https://etherpad.opnfv.org/p/releng-xci-repo
13:05:24 <epalper> #info Periyasamy Palanisamy
13:05:36 <fdegir> please directly update it, post comments/questions and so on so we can move forward
13:05:43 <qiliang> #info qiliang
13:06:02 <fdegir> one comment about the structure above is that we can start working on it without touching what we currently have
13:06:14 <fdegir> so we do not mess things up for odl, ovs, sfc and other work
13:06:22 <fdegir> and once new stuff is ready, we can do the switch
13:06:28 <fdegir> any comments?
13:06:31 <hwoarang> yes
13:06:38 <hwoarang> can we remove all the xci stuff from releng?
13:06:43 <hwoarang> to avoid confusion?
13:06:51 <yolanda> ++
13:07:01 <fdegir> hwoarang: yolanda: is this what you mean?
13:07:03 <fdegir> #link https://gerrit.opnfv.org/gerrit/#/c/39413/
13:07:05 <hwoarang> all the prototypes/ directory
13:07:18 <hwoarang> aha
13:07:19 <hwoarang> yes
13:07:22 <fdegir> :)
13:07:28 <hwoarang> missed this patchset
13:07:35 <fdegir> it is good to surprise you once in a while
13:07:39 <hwoarang> lol
13:08:04 <fdegir> but as you see in releng-xci repo, I propose to have prototypes directory
13:08:11 <fdegir> for that type of strange work
13:08:30 <hwoarang> fair enough
13:08:32 <fdegir> I mean whatever you have directly related to xci should go into its usual place
13:08:49 <fdegir> but if you want to share something new, a groundbreaking invention, etc., prototypes might be useful
13:09:10 <hwoarang> ok
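The prototypes/ directory idea discussed above can be pictured as a repo layout roughly like the following; this is only an illustrative sketch, and the actual structure proposal lives on the etherpad linked earlier:

```
releng-xci/
├── bifrost/       # provisioning glue around upstream bifrost
├── xci/           # the main XCI scripts and playbooks
├── prototypes/    # experimental work not yet wired into CI
└── docs/          # documentation moved over from the wiki
```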
13:09:11 <fdegir> that's all for repo migration
13:09:21 <fdegir> moving to the topic that matters
13:09:31 <fdegir> #topic SFC
13:09:47 <fdegir> mbuil: mardim: stage is yours
13:09:58 <fdegir> and jvidal and epalper as well
13:10:07 <mardim> mbuil: go first
13:11:29 <fdegir> mardim: you might start while we wait for mbuil
13:11:35 <mardim> ok good
13:11:42 <mardim> So my part is OVS-NSH
13:11:55 <mardim> we successfully deployed OVS-NSH
13:12:01 <mardim> for noha and HA
13:12:04 <mardim> scenarios
13:12:30 <mardim> but the problem is that we are not sure if we should upstream the changes
13:12:30 <fdegir> #info ovs-nsh is successfully deployed for noha and ha
13:12:42 <fdegir> mardim: what type of changes you needed to do?
13:12:44 <hw_wutianwei> mardim: one question, do you use master?
13:13:08 <hwoarang> fwiw ovs-2.8 will have nsh support
13:13:19 <hwoarang> without all the out-of-tree patches
13:13:29 <mardim> fdegir: enough changes in openstack ansible and in neutron-role also
13:13:35 <hwoarang> that will possibly simplify your playbooks
13:13:55 <mardim> Yes that is why I am not sure if we should upstream the changes
13:14:20 <fdegir> mardim: hwoarang: do you have an estimate regarding when ovs-2.8 will be available?
13:14:21 <mardim> because right now we use a private ubuntu repo where we constructed the packages
13:14:28 <hwoarang> fdegir: later this month
13:14:36 <mardim> with the NSH patches
13:14:49 <hwoarang> they branched from master so the release will happen really soon
13:14:55 <mardim> and if the guys from OVS are gonna have NSH support in the 2.8 release
13:15:01 <hwoarang> schedule says some time in august
13:15:22 <mardim> IMHO there is no need to upstream my changes in OVS-NSH
13:15:26 <mardim> what do you think ?
13:15:43 <fdegir> mardim: my suggestion would be that you put your work into releng-xci/prototypes or something so you can continue your work
13:15:49 <hwoarang> yeah
13:16:01 <hwoarang> mardim: the release of ovs-2.8 doesn't mean that existing distros will adopt it
13:16:15 <hwoarang> so you will still need some magic to get it to install and work on existing distros
13:16:35 <hwoarang> possibly build your own ovs packages
13:16:35 <mbuil> Right now NSH support is added to OVS 2.6 with a "private" patch. NSH will be included natively in ovs 2.8 but as far as I know, only available when using OpenFlow1.5. I don't think we will get Openflow1.5 supported in ODL before the Oxygen release happening in February. But here I am speculating
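Building custom OVS packages with the out-of-tree NSH patches, as discussed above, might look roughly like the sketch below. The patch directory and the exact base tag are hypothetical; the team's actual private Ubuntu repo contents are not shown in the log:

```shell
# Fetch OVS at the 2.6 base and apply the out-of-tree NSH patch set
# (the ../nsh-patches/ directory is a hypothetical placeholder)
git clone https://github.com/openvswitch/ovs.git && cd ovs
git checkout v2.6.1
for p in ../nsh-patches/*.patch; do git am "$p"; done

# Build Debian packages for publishing to a private Ubuntu repo,
# using the debian/rules packaging shipped in the OVS tree
sudo apt-get install -y build-essential fakeroot debhelper autoconf automake libtool
DEB_BUILD_OPTIONS='parallel=4 nocheck' fakeroot debian/rules binary
```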
13:17:25 <mardim> ok so your suggestions are to upload my OVS-NSH patches to the appropriate ansible projects right ?
13:17:26 <fdegir> I leave what way you would want to proceed to you :)
13:17:42 <fdegir> mardim: do you have a blueprint for this for osa?
13:18:02 <mardim> yes I do but it isn't merged yet
13:18:07 <mardim> let me find the link
13:18:17 <mbuil> I prefer to do as you guys suggest and have it in releng, because the NSH patch is a bit ugly and very specific to SFC
13:19:08 <mardim> https://review.openstack.org/#/c/476121/
13:19:11 <mardim> here ^
13:19:26 <fdegir> #info ovs nsh blueprint for osa
13:19:27 <fdegir> #link https://review.openstack.org/#/c/476121/
13:19:41 <fdegir> I leave the decision to you
13:19:50 <fdegir> but the order of preference for us is
13:20:10 <fdegir> upstream (osa/bifrost, etc.) if appropriate - you need to judge
13:20:12 <fdegir> releng-xci
13:20:16 <fdegir> and anything else
13:20:39 <fdegir> so whatever you might have in github should go to one of the above and finally to upstream
13:20:48 <fdegir> thanks mardim
13:20:52 <fdegir> mbuil: your turn
13:20:55 <mardim> thanks :)
13:21:40 <fdegir> it seems we lost mbuil again
13:21:57 <fdegir> epalper: would you like to give update about your odl patch?
13:22:15 <mbuil> SFC scenario is almost ready. We are just missing two things, 1 - There is a SSL certification problem between Tacker & Heat that Taseer is looking at together with me
13:22:34 <fdegir> #info SFC scenario is almost ready. We are just missing two things, 1 - There is a SSL certification problem between Tacker & Heat that Taseer is looking at together with me
13:22:43 <mbuil> 2 - The ha scenario does not deploy with clustered ODL. Maridm looking into that
13:22:48 <mbuil> *Mardim
13:22:56 <epalper> let me give after mbuil's update
13:22:57 <fdegir> #info The ha scenario does not deploy with clustered ODL. mardim is looking into that
13:23:05 <fdegir> mbuil: is this still ocata?
13:23:24 <mbuil> Yes, Openstack Ocata with ODL Nitrogen
13:23:57 <mbuil> Sorry, there is a third thing ==> we are doing several steps, unfortunately, manually
13:24:20 <mbuil> We are working on github private branches because in the last 3 weeks OSA gate jobs were VERY unstable
13:24:56 <fdegir> mbuil: would it help if we had patch verification jobs in opnfv jenkins for osa stuff?
13:24:59 <mbuil> We have several patches in the pipeline, especially for the neutron role, but until these two get merged, we cannot do much ==> https://review.openstack.org/#/c/480131/ https://review.openstack.org/#/c/480128/
13:25:54 <fdegir> mbuil: nevermind - i thought your stuff was only on github
13:26:12 <mbuil> fdegir: our "working" stuff is on github
13:26:51 <fdegir> how should we handle this?
13:27:03 <mbuil> for example, ODL L3 support, ODL bug fixes, SFC support, networking-sfc support patches...  those are only on github and will be merged into upstream when the previous links get merged
13:27:11 <fdegir> we can have them in releng-xci and run verification for them
13:27:20 <fdegir> this also helps us to run other stuff as well
13:27:21 <mbuil> when I say github I mean in our private branches in github :(
13:27:46 <epalper> yes mbuil, those patches are very much needed for ODL-XCI integration testing too.
13:27:54 <fdegir> cause the scenario in front of us is like this (in release context)
13:27:57 <fdegir> releng-xci stuff
13:28:00 <fdegir> upstream stuff
13:28:03 <fdegir> and github
13:28:26 <fdegir> 3 different places for sfc - if I didn't miss anything
13:28:47 <fdegir> we can't do much for upstream but we can at least bring what you have done to one place
13:29:18 <fdegir> what do you say mbuil mardim?
13:29:33 <fdegir> bringing things from github into releng-xci/prototypes
13:29:38 <mbuil> we depend on several repos: releng, OSA, neutron-role, tacker-role (waiting for this patch ==> https://review.openstack.org/#/c/485259/ ), odl-integration role in OpenDaylight
13:30:19 <fdegir> why I am asking this is that we want to have jenkins jobs for sfc
13:30:45 <fdegir> but the state of things makes things overly complicated
13:31:00 <mbuil> fdegir: I am fine with that. How exactly should we do it, do we get a clone for each of those repos in releng-xci?
13:31:14 <fdegir> this will always happen so we need to find a way which we can continue using in the future
13:31:27 <mardim> so if I understand correctly
13:31:35 <fdegir> and we can't do similar things (using github private branches etc) for each and every feature
13:31:50 <mardim> fdegir proposes that we can use
13:32:01 <mardim> our private forks from github
13:32:18 <mardim> and put them in releng-xci for the release of Euphrates ?
13:32:20 <mardim> or ?
13:32:34 <fdegir> mardim: yes - but it is just a proposal :)
13:32:51 <mardim> ok thanks
13:32:56 <fdegir> you need to think about this - similar to ovs-nsh stuff
13:33:30 <fdegir> mbuil: depending on what you come up with, we can discuss jenkins jobs
13:34:00 <fdegir> moving to epalper
13:34:34 <fdegir> #info the patch for odl integration to xci is this
13:34:38 <epalper> raised https://gerrit.opnfv.org/gerrit/#/c/39239/ for XCI-ODL integration
13:34:46 <fdegir> #link https://gerrit.opnfv.org/gerrit/#/c/39239/
13:35:13 <epalper> and tested mini and no-ha scenarios with private github for neutron-role and osa
13:36:17 <fdegir> thanks epalper
13:36:22 <epalper> Do I need to wait for fix for ha scenario (i.e. fixing ODL to be installed in clustered mode) ?
13:36:41 <fdegir> mardim: ^
13:36:44 <mbuil> epalper: did you try L2 and worked?
13:37:09 <mardim> epalper: No do not wait proceed
13:37:15 <epalper> yes, I have checked flows/groups on the computes and they looked proper
13:37:27 <mardim> epalper: the clustering thing will be ready for Euphrates2.0 I think
13:37:28 <epalper> haven't tried any traffic test
13:38:27 <fdegir> everyone is welcome to review the patch
13:38:28 <epalper> ok, let me at least test that ODL is installed in standalone mode on all the neutron server nodes
13:38:53 <fdegir> anything else to add for sfc?
13:39:01 <mardim> I am good
13:39:15 <fdegir> ok
13:39:28 <fdegir> #topic Distro Support
13:39:39 <fdegir> hwoarang: I heard SUSE joined to the party
13:39:49 <fdegir> hwoarang: can you give us an update about AIO please?
13:40:18 <mbuil> if SUSE joins the party, then it will be a great party ;)
13:40:31 <hwoarang> yeah i have a few patches pending mostly related to an old bug https://review.openstack.org/#/q/status:open+branch:master+topic:bug/1637509
13:40:52 <hwoarang> but it's fairly close. not sure what else is pending after that.
13:40:58 <fdegir> #info upstream SUSE AIO almost works
13:40:58 <hwoarang> i hope to have it working before final pike release
13:41:12 <fdegir> #info hwoarang has a few patches pending mostly related to an old bug https://review.openstack.org/#/q/status:open+branch:master+topic:bug
13:41:33 <hwoarang> #undo
13:41:41 <fdegir> #chair hwoarang
13:41:41 <collabot> Current chairs: fdegir hwoarang
13:41:42 <hwoarang> #info hwoarang has a few patches pending mostly related to an old bug https://review.openstack.org/#/q/status:open+branch:master+topic:bug/1637509
13:42:40 <fdegir> so we can essentially have openstack deployed from master for all 3 distros by euphrates for aio
13:42:50 <hwoarang> then of course we need to fix xci to remove all the xenial specific stuff
13:43:00 <fdegir> right
13:43:13 <hwoarang> yeah it should be possible to do it on time
13:43:22 <hwoarang> centos can actually progress in parallel
13:43:32 <hwoarang> provided one can start looking at it asap
13:43:33 <fdegir> tapio started looking into centos
13:43:41 <fdegir> I hope yolanda can support him
13:44:02 <yolanda> glad to help
13:44:11 <fdegir> thanks :)
13:44:24 <fdegir> this brings us to the bifrost issue on centos hwoarang mentioned
13:44:33 <fdegir> is that a real problem, or?
13:44:44 <hwoarang> yolanda has been looking at it. I worked around it by disabling selinux
13:44:47 <hwoarang> yolanda can tell you more
13:44:55 <yolanda> it seems to be a problem with policies
13:45:04 <yolanda> right now it runs with selinux enabled, and a custom policy i added
13:45:17 <yolanda> but it will need more investigation on a proper fix in upstream
13:45:43 <fdegir> but we have the workaround for our bifrost jobs at least
13:45:49 <hwoarang> #link https://bugs.launchpad.net/diskimage-builder/+bug/1710973
13:45:59 <fdegir> do you think we might cause confusion for upstream bifrost?
13:46:00 <yolanda> yep, i added the policy i created
13:46:12 <fdegir> I mean, their patches +1d by our jobs
13:46:17 <fdegir> and might be -1d by upstream
13:46:37 * fdegir doesn't remember if upstream bifrost has jobs on centos
13:46:41 <hwoarang> it doesn't
13:46:47 <yolanda> it seems to come from diskimage-builder, not from bifrost directly
13:47:00 <hwoarang> it's experimental and there is another selinux issue iirc that prevents vms from starting
13:47:09 <fdegir> #info The issue with bifrost on centos seems to come from diskimage-builder, not from bifrost directly
13:47:12 <yolanda> however, this problem seems to depend on version, because i tested on a clean centos7 and worked
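For reference, a custom SELinux policy module like the one yolanda mentions is typically generated from the audit log with audit2allow; this is a generic sketch of that workflow (the module name is illustrative, not the actual policy she added):

```shell
# Reproduce the bifrost failure with SELinux in permissive mode
# so the denials get logged instead of blocking the run
sudo setenforce 0

# Generate a local policy module from the logged AVC denials and load it
sudo grep denied /var/log/audit/audit.log | audit2allow -M bifrost_local
sudo semodule -i bifrost_local.pp

# Re-enable enforcing mode and retry the deployment
sudo setenforce 1
```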
13:47:31 <fdegir> ok
13:47:55 <fdegir> so it is good that it is investigation/work in progress
13:48:12 <fdegir> the question about using centos vs centos minimal
13:48:18 <hwoarang> #link https://gerrit.opnfv.org/gerrit/#/c/39383/
13:48:36 <fdegir> hwoarang: thanks :)
13:48:46 <fdegir> #info The patch above switches to centos minimal
13:49:01 <hwoarang> iirc openstack ci uses the -minimal elements so i believe we should do the same. we already use -minimal for ubuntu and opensuse
13:49:01 <fdegir> moving on as we are nearing the end of the meeting
13:49:11 <hwoarang> ok
13:49:16 <fdegir> #info openstack ci uses the -minimal elements so i believe we should do the same. we already use -minimal for ubuntu and opensuse
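Switching to the -minimal element means building the CentOS image with diskimage-builder's centos-minimal element instead of the full cloud image, roughly as follows (release and output name are illustrative, mirroring what is already done for ubuntu-minimal and opensuse-minimal):

```shell
# diskimage-builder provides the disk-image-create tool and the
# centos-minimal element
pip install diskimage-builder

# Build a CentOS 7 image from the -minimal element plus the vm element
export DIB_RELEASE=7
disk-image-create -o centos7-minimal centos-minimal vm
```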
13:49:21 <fdegir> #topic Baremetal deployments
13:49:38 <fdegir> David from Orange got it working on baremetal
13:50:03 <fdegir> I hope to catch him but if anyone is interested in helping, you're welcome
13:50:25 <fdegir> #info Baremetal support needs to be added to XCI
13:50:36 <fdegir> #topic Extending CI
13:50:55 <fdegir> we need to bring up jobs for xci
13:51:14 <fdegir> running full bifrost/osa
13:51:22 <hwoarang> true
13:51:28 <hw_wutianwei> fdegir: yep
13:51:30 <fdegir> which is needed for sfc as well
13:51:47 <fdegir> this is the first prio if you ask me
13:51:56 <fdegir> and the next set of jobs is for chasing upstream masters
13:51:59 <mardim> I agree
13:52:00 <fdegir> periodic jobs
13:52:14 <fdegir> and the last one is to have osa patch verification
13:52:58 <fdegir> so we need someone to set up jobs for os-nosdn-nofeature scenario using vms
13:53:20 <fdegir> this is the job: https://build.opnfv.org/ci/view/OPNFV%20XCI/job/xci-os-nosdn-nofeature-ha-virtual-xenial-daily-master/
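A job running full bifrost/osa for that scenario would essentially script the normal XCI flow; the flavor variable and the deploy script path below are assumptions based on the releng-xci layout at the time, not a definitive job definition:

```shell
# Sketch of what an os-nosdn-nofeature virtual job might execute
git clone https://gerrit.opnfv.org/gerrit/releng-xci
cd releng-xci/xci

# ha flavor: bifrost provisions the VMs, then OSA deploys
# OpenStack from master on top of them
export XCI_FLAVOR=ha
./xci-deploy.sh
```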
13:53:56 <fdegir> #topic AoB
13:54:17 <fdegir> #info we need to move our documentation to readthedocs
13:54:26 <fdegir> #info which will be part of OPNFV Infra documentation
13:54:41 <fdegir> #info So I will start moving the documentation from wiki to releng-xci/docs
13:54:55 <mbuil> fdegir: is that a requirement for all projects?
13:55:14 <fdegir> mbuil: nope, this is xci specific parts
13:55:21 <fdegir> mbuil: sfc documentation should be in sfc repo
13:55:30 <mbuil> fdegir: ok, thanks
13:55:35 <fdegir> mbuil: and we need to document the "service" as part of opnfv infra documentation
13:56:10 <fdegir> anyone has any comment regarding this?
13:56:30 <fdegir> any other topic to bring up?
13:56:56 <mbuil> I have a question, related to the previous topic. How many jenkins jobs will we have per scenario in xci? just one looking into master or another one based on the latest "stable" release of the different components?
13:57:35 <fdegir> mbuil: that's part of the conversation we need to have with David and then TSC later on
13:57:48 <mbuil> mbuil: ok
13:58:00 <fdegir> mbuil: since XCI is for master and supporting stable versions, I prefer to have nothing but master
13:58:19 <fdegir> but this is my opinion and still under discussion
13:58:58 <fdegir> then I thank you all for joining and more importantly for the work
13:59:04 <fdegir> talk to you in 2 weeks
13:59:07 <fdegir> #endmeeting