13:00:04 #startmeeting Cross Community CI
13:00:04 Meeting started Wed Sep 27 13:00:04 2017 UTC. The chair is fdegir. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:04 Useful Commands: #action #agreed #help #info #idea #link #topic.
13:00:04 The meeting name has been set to 'cross_community_ci'
13:00:10 hi everyone
13:00:23 please type in your name if you are joining the XCI meeting
13:00:24 hey
13:00:26 #topic Rollcall
13:00:29 hello
13:00:33 #info Jack Morgan
13:00:37 hi
13:00:39 #info Stephen Wong
13:00:51 the agenda is in its usual place: https://etherpad.opnfv.org/p/xci-meetings
13:00:53 #info Tianwei Wu
13:01:46 #info qiliang
13:01:49 let's start and others can join us on the way
13:01:53 first topic is
13:02:02 #topic Multi-distro Support
13:02:25 #info Ubuntu has been working fine already
13:02:42 #info Markos Chandras
13:02:46 #info Support for openSUSE has been introduced recently
13:02:55 hwoarang: anything to add for openSUSE?
13:03:03 it should JustWork(tm)
13:03:15 #info David Blaisonneau
13:03:30 hwoarang: no worries, we will see it ourselves soon :)
13:03:50 Tapio doesn't seem to be online so infoing it for him
13:03:59 #info Tapio is looking into CentOS
13:04:26 might be worth getting whatever he has on gerrit so we can get some CI feedback
13:04:51 #info mbuil
13:04:57 #info It would be good to get what Tapio did for CentOS to Gerrit so we get CI feedback
13:04:57 i don't see any patches from him in gerrit
13:05:05 is this meeting weekly or biweekly? :P
13:05:07 right
13:05:13 mbuil: it's been weekly since last week
13:05:21 #info Apart from general distro support, we also introduced xci-verify jobs for all 3 distros (like 5 minutes ago..)
13:05:29 #link https://build.opnfv.org/ci/view/OPNFV%20XCI/
13:05:31 fdegir: I see, thanks.
mardim, you were right :P
13:05:45 which also happens inside a VM built from scratch on every run
13:05:48 #info See section xci verify master
13:06:01 so that should eliminate problems due to stale artifacts from previous builds
13:06:21 ttallgren: we were just talking about centos, any update?
13:06:39 #info On top of enabling jobs, we now run verification in clean VMs to ensure leftovers of previous runs don't cause issues
13:06:48 I reinstalled my CentOS box and got new problems
13:06:58 #info Whenever a new patch comes in, a new VM gets created for each of the 3 distros
13:07:28 Besides /etc/ssl/certs, glean is acting up in the CentOS guest VM
13:07:31 ttallgren: we now enabled jobs for centos so you will get feedback from CI
13:07:42 ok, I will check those
13:07:44 ttallgren: can you post any of your patches on gerrit so jenkins can test them (jenkins can run on centos too now) and perhaps help you share the work?
13:07:55 what do you try in those VMs? to run aio flavor?
13:08:01 mbuil: mini
13:08:12 #info verify jobs use mini flavor
13:08:25 but there is no healthcheck or tempest, right? You only check that the deployment succeeds, right?
13:08:28 aio looked tricky, so I am just using noha
13:08:33 #info We run 2 verify jobs on a single slave so the VM stuff helped us with resource shortage as well
13:08:40 mbuil: that's coming up
13:08:55 fdegir: ok thanks :)
13:09:25 #info One final note about verify jobs is that only xci-verify-ubuntu votes. Voting for other distro jobs will be enabled once we are sure we fixed the framework/jobs themselves
13:09:34 and if it fails in one distro, can we merge it?
13:09:44 mbuil: depends on the nature of the failure
13:09:45 fdegir: ok, you replied to my question :)
13:10:00 fdegir: only ubuntu votes
13:10:14 mbuil: you can merge it "technically" but it would be good to fix it if you know the fix already
13:10:21 if not, then yes, merge is possible
13:10:39 this was verify jobs
13:11:01 #info We need similar jobs for osa-periodic and the new way of running stuff will make things much easier for osa-periodic jobs as well
13:11:19 #info These jobs will be created soon once the basics with verify jobs/VM stuff are settled
13:11:32 moving to testing...
13:11:38 #topic Testing
13:11:50 #info The change that prepares the deployment for functest has been merged
13:12:04 #link https://gerrit.opnfv.org/gerrit/#/c/42069/
13:12:28 #info Next step is to integrate healthcheck into xci-verify/osa-periodic - it will start its life as non-voting for verify jobs
13:12:49 mbuil: do you want to talk about sfc testing now or as part of the scenario status update?
13:13:40 moving on to 2 really important topics
13:13:41 fdegir: better with the scenario update. I don't have a lot of news
13:13:45 #topic Documentation
13:14:07 #info We are getting closer to the release date and we must have documentation ready no matter what we get working
13:14:36 #info Reviews will be coming your way - please read them as they will be used by our users and they must be clear
13:15:02 #info I would appreciate it if anyone wants to help out
13:15:23 #topic Release Readiness
13:15:36 this is tricky
13:15:44 anyone want to speak up?
13:15:57 about multi-distro support or scenario support?
13:16:11 for multidistro i think we may not have centos ready in time
13:16:28 at least not all of it
13:16:31 ttallgren: what do you think?
13:16:45 ttallgren: you will be asked this as part of your euphrates interviews
13:16:52 ttallgren: it is better you get it working :)
13:16:54 I would agree
13:17:12 not being able to promise that it will work
13:17:40 I already have nightmares about journalists asking me about CentOS support :-)
13:17:41 I think the best thing to do at this point is to get your patches to gerrit ttallgren
13:17:53 and we see what it breaks
13:18:29 I need to give yet another heads up regarding centos then
13:18:42 I am running my testing with a simple script that sets a few variables and uses screen to display all kinds of information
13:19:26 I have two patches in Gerrit that I also use, and I delete the exit -1. That's all
13:19:42 I have this issue now: https://bugs.launchpad.net/bifrost/+bug/1719864
13:19:49 ttallgren: you can mimic what the CI will do for centos by simply running ./xci/scripts/vm/start-new-vm.sh centos
13:20:03 this will create a brand new VM for centos7 and run xci-deploy.sh in it
13:20:32 maybe that helps instead of having your own wrappers. might make it easier to compare your results with the CI
13:21:03 for release we are only talking about VM deployments, not baremetal?
13:21:04 Yes, I saw that. However, the glean stuff now fails everywhere, both host and guest
13:21:10 jmorgan1: that's right
13:21:16 ok
13:21:29 jmorgan1: guess why - all because of PDF!!! :P
13:21:50 thx ttallgren hwoarang
13:21:51 fdegir: better to get it right the first time than redo it later
13:21:59 jmorgan1: +1
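The `start-new-vm.sh centos` flow discussed above can be sketched roughly as follows. This is a minimal sketch based only on what the meeting describes (validate the distro, create a clean VM, run xci-deploy.sh inside it); the function name and the echoed message are assumptions for illustration, not the real script's contents.

```shell
# Minimal sketch of what ./xci/scripts/vm/start-new-vm.sh does, per the
# discussion above. The function name and messages are illustrative
# assumptions; they are not taken from the real script.
start_new_vm() {
  distro="$1"
  case "$distro" in
    ubuntu|opensuse|centos) ;;
    *) echo "unsupported distro: $distro" >&2; return 1 ;;
  esac
  # The real script builds a clean VM from scratch and runs xci-deploy.sh
  # inside it, so no stale artifacts survive between runs.
  echo "creating clean ${distro} VM and running xci-deploy.sh in it"
}

start_new_vm centos
```

Running the CI path locally like this makes failures directly comparable with what Jenkins reports, which is the point fdegir makes about dropping personal wrapper scripts.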
13:23:46 I suppose not
13:24:00 #info No update for os-odl-nofeature
13:24:02 #topic Scenario/Feature Status: os-odl-sfc
13:24:06 mbuil: now your turn
13:24:08 fdegir: I already started working on the patch that moves the SFC scenario code to the SFC repo: https://gerrit.opnfv.org/gerrit/#/c/43165/ but I haven't had much time to invest in it due to bugs in SFC testing
13:24:28 #info mbuil started moving the sfc scenario to the sfc repo
13:24:38 #link https://gerrit.opnfv.org/gerrit/#/c/43165/
13:24:58 fdegir: And yesterday evening I found an issue with the functest <--> xci integration. The quotes in the ip of the env.j2 template bring problems
13:25:14 mbuil: ok - will you send a patch for it?
13:25:17 fdegir: I don't know why I did not experience it before though
13:25:22 fdegir: yeah, I can do it
13:25:27 mbuil: thx
13:25:55 #info mbuil identified an issue with the functest-prepare role with env.j2 (having quotes in the ip) - he will send a patch for it
13:26:03 mbuil: and testing?
13:26:32 Guillermo Herrero proposed pharos: Updated fuel adapter (vlan1000, interfaces format) https://gerrit.opnfv.org/gerrit/43307
13:26:34 fdegir: manual testing of SFC in an xci-deployed environment is working
13:26:52 #info Manual testing of SFC in xci works - it needs to be automated asap
13:27:09 fdegir: when I say manual I mean there are a few steps in the deployment which are not yet automated but the test was done through the functest alpine container
13:27:42 #info Testing is done using alpine based containers for functest
13:27:52 #info There are a few things that need to be automated as well
13:27:56 fdegir: those few steps are needed because of bugs in some of our integrated projects (tacker, OVS, ODL...). They are workarounds to avoid those problems
13:28:10 mbuil: will you add them to your sfc role?
13:28:26 fdegir: yes
13:28:39 ok, so they will be automated
13:28:56 thx Manuel!
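The env.j2 quoting issue mbuil describes above can be illustrated with a hypothetical reproduction. The variable name and values below are assumptions for illustration only; the meeting does not show the actual template contents. The gist: tools that read an env file verbatim (rather than sourcing it through a shell) keep the quotes as part of the value.

```shell
# Hypothetical reproduction of the env.j2 quoting issue: the rendered env
# file carries quotes around the IP, so a consumer reading it verbatim
# sees "192.168.122.2" (with quotes) instead of the bare address.
# INSTALLER_IP is an illustrative name, not the real template variable.
printf 'INSTALLER_IP="192.168.122.2"\n' > env.rendered

# One possible fix: strip the quotes so downstream tooling gets the bare IP.
sed 's/"//g' env.rendered > env.fixed
cat env.fixed
```

This would explain why the problem only surfaces with some consumers: a shell that sources the file strips the quotes itself, while a literal env-file reader does not.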
13:29:15 fdegir: os-odl-nofeature: merge is pending for https://gerrit.opnfv.org/gerrit/#/c/39239/ and https://review.openstack.org/#/c/496580/
13:29:17 I want to bring everyone's attention to what Manuel is doing with the move of the sfc role to the sfc repo
13:29:33 this is crucial for XCI in order to push scenarios to where they belong: project repos
13:29:56 please take an extra look and review anything related to this so we can put a good foundation in place
13:30:03 Please take a look and give your review comments
13:30:20 epalper: thanks
13:30:31 epalper: this is also related to what Manuel is doing for sfc
13:30:53 epalper: we need to find a good way to position scenarios so we can consume them no matter where they are stored
13:31:14 moving to the next topic
13:31:28 one more thing
13:31:33 #topic Scenario/Feature Status: k8s-nosdn-nofeature
13:31:37 s3wong: any update
13:32:02 epalper: sorry epalper - missed that
13:32:05 shall i also look into the os-nosdn-ovs scenario?
13:32:10 epalper: can we take it in AoB please?
13:32:18 I spent most of the week trying to make the kubespray + kubeadm support work in my setup as reference
13:32:41 The kubeadm support was merged about two weeks ago: https://github.com/kubernetes-incubator/kubespray/commit/6744726089245c724b6927d419064a84551931e2
13:32:44 #info s3wong tried to make the kubespray + kubeadm support work in his setup as reference
13:32:54 #info The kubeadm support was merged about two weeks ago: https://github.com/kubernetes-incubator/kubespray/commit/6744726089245c724b6927d419064a84551931e2
13:33:42 s3wong: anything you can send to gerrit so we can take a look?
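For the kubespray + kubeadm work s3wong mentions above, a local trial run might look something like the sketch below. The inventory path and the `kubeadm_enabled` variable are assumptions based on kubespray's layout around that time and are not confirmed in the meeting; check kubespray's own docs for the real names.

```shell
# Hypothetical sketch of driving kubespray with the newly merged kubeadm
# support. Inventory path and the kubeadm_enabled variable are assumptions
# for illustration. The command is echoed rather than executed, since a
# real run needs a provisioned inventory of hosts.
run_kubespray() {
  inventory="$1"
  echo ansible-playbook -i "$inventory" cluster.yml -b -e kubeadm_enabled=true
}

run_kubespray inventory/sample/hosts.ini
```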
13:33:58 As mentioned last week the goal is to get k8s-nosdn-nofeature working for aio
13:34:06 s3wong: yes
13:34:31 others, such as David_Orange, are interested in k8s so it would be good to share whatever you have
13:34:47 so people can take a look at them, try them out or at least give feedback
13:34:47 fdegir: s3wong: +1 :)
13:35:22 fdegir: I will try to post something soon --- please note that I will be on PTO next week, so hopefully I can get a WIP patch out soon
13:35:22 cause there is no update on etherpad or gerrit so it is difficult to see the progress
13:35:27 and possibly share the load
13:35:38 that would be good s3wong
13:35:40 thx a lot
13:35:58 moving to Congress
13:36:07 #topic Scenario/Feature Status: Congress
13:36:13 Taseer: are you with us?
13:37:02 I suppose not
13:37:07 #info No update for congress
13:37:18 #topic Scenario/Feature Status: Ceph
13:37:28 hw_wutianwei: anything you want to share?
13:37:42 I have submitted the patch, https://gerrit.opnfv.org/gerrit/#/c/42503/
13:38:14 it failed in verify, but manual deploying was fine
13:38:30 #info hw_wutianwei's patch works manually but failed on CI
13:38:37 #link https://gerrit.opnfv.org/gerrit/#/c/42503/
13:38:38 maybe I missed something
13:38:40 fdegir: do we have an etherpad link for congress? if not, can we ask to create one for visibility?
13:38:55 hw_wutianwei: I think the jobs will hopefully be more stable moving forward
13:39:19 hw_wutianwei: due to the use of clean VMs so we can take a look at any failures from now on and try to find out if anything else needs to be done
13:39:47 fdegir: agree
13:39:53 jmorgan1: https://jira.opnfv.org/browse/RELENG-247
13:40:13 Taseer: can you please put the links to the blueprint and any other patches you might have on the Jira issue above?
13:40:35 and I will try to fix this
13:40:39 #action Taseer to put the links to the blueprint and any other patches on https://jira.opnfv.org/browse/RELENG-247 for better visibility
13:40:56 thx hw_wutianwei
13:41:17 #topic Baremetal Status
13:41:28 fdegir: you are welcome
13:41:29 fdegir: I am here.
13:41:32 David_Orange: your turn
13:41:37 yes
13:42:05 about baremetal i updated all yaml files, playbooks and config after a yamllint check
13:42:25 and now testing it on ONAP Orange openlab
13:43:06 this platform is freshly plugged so i have to debug a few things (like pxe interfaces)
13:43:28 #info All yaml files, playbooks and config are updated after a yamllint check
13:43:37 #info It is now being tested in ONAP Orange OpenLab
13:43:44 the hardware is DELL, and pod1 is HP, so for the bifrost part it would validate the playbooks
13:44:18 and i am still open to reviews :)
13:44:29 David_Orange: yep
13:44:41 David_Orange: I think the release stuff is taking everyone's attention
13:44:43 but no pressure, main release first :)
13:44:53 David_Orange: I hope things will improve soon once we push this out
13:44:58 np
13:45:12 thx a lot for the effort David_Orange and especially with the PDF work
13:45:31 thanks David_Orange
13:45:39 #topic AoB
13:45:50 epalper: coming back to the os-nosdn-ovs scenario
13:46:01 epalper: can you summarize it please?
13:46:16 you're welcome, now AlexAvadanii is doing a big part of the job
13:46:20 There is a request to install OVS without odl
13:46:48 like os with linux bridges
13:46:51 epalper: I suppose this means switching to ovs as default instead of linux bridges
13:47:03 yes fdegir
13:47:45 #info epalper is working on introducing ovs into osa
13:48:24 epalper: a question
13:48:24 but still need to figure out how tunnels between OVS can be configured without ODL
13:48:32 to mardim as well perhaps
13:48:44 epalper: mardim: ovs nsh stuff will be part of upstream ovs
13:49:06 epalper: mardim: so epalper doing this will also help that as well
13:49:10 right now "no"
13:49:18 ok
13:49:30 fdegir, I think yes it will be part in 2.
13:49:34 2.8*
13:49:54 so it will enable taking that in at least - some way/form
13:50:14 Various different forms of NSH patches had been under OVS review for the past 2+ years...
13:50:53 I think that was one of the difficulties for OPNFV for quite a while
13:51:01 nsh is in 2.8.0 already
13:51:25 but the kernel part is missing unless you use the out-of-tree ovs module which i am not sure which distros provide right now
13:51:32 which version of ovs will you be integrating epalper?
13:52:06 shouldn't it be ovs 2.6? this is the version being used across deployments
13:52:20 hwoarang, I didn't know that so ok
13:53:02 I can't say much about ovs so my question was for capturing the info
13:53:19 #info ovs 2.6 will be integrated in the first phase
13:53:39 thx epalper
13:53:50 jmorgan1: I see you added a note to the etherpad
13:53:59 fdegir: yup
13:54:05 jmorgan1: do you want to info that in yourself so we get the written confirmation?
13:54:29 so I can use it against you...
13:54:41 #info a second POD will be ready this week in the Intel Pharos lab
13:55:06 thanks a lot Jack!
13:55:21 #info Intel Pharos lab has a firewall issue blocking jenkins so it will be unusable until fixed
13:55:21 I think this will help a lot, especially with the multi-distro enablement
13:55:52 #info The plan for pod20 is that it will continue to serve as a development pod
13:56:29 at some point I will upgrade POD19-24 to the latest hardware platform (next month or two)
13:56:41 #info The machines in the new pod will become jenkins slaves, running all kinds of virtual deployments (bifrost-verify, bifrost-periodic, osa-verify, osa-periodic, xci-verify)
13:56:42 will POD20 be impacted then?
13:57:13 #info Intel PODs 19-24 will be refreshed in the coming months - we need to prepare when the time approaches
13:57:31 thx again jmorgan1
13:57:38 did we miss anyone?
13:57:58 or does anyone want to bring any last minute topic?
13:58:10 yes
13:58:19 go ahead jmorgan1
13:58:40 we have some folks on my intel team who are interested in bringing kolla support to xCI
13:59:00 #info Intel team is interested in bringing kolla support to XCI
13:59:01 they are currently evaluating how to do it and understanding xCI more
13:59:19 #info They are currently evaluating how to do it and understanding XCI more
13:59:45 jmorgan1: can we have them in one of the upcoming meetings?
14:00:20 fdegir: sure, we might ask for a GTM first so you can answer questions i can not
14:00:37 jmorgan1: ok - then we schedule a separate meeting
14:00:44 fdegir: yup, thanks
14:00:58 #action fdegir to coordinate with jmorgan1 to have a GTM meeting b/w XCI and Intel Kolla team
14:01:04 thanks everyone
14:01:08 #endmeeting