14:00:06 #startmeeting Cross Community CI
14:00:06 Meeting started Wed Nov 22 14:00:06 2017 UTC. The chair is fdegir. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:06 Useful Commands: #action #agreed #help #info #idea #link #topic.
14:00:06 The meeting name has been set to 'cross_community_ci'
14:00:10 #topic Rollcall
14:00:13 hi everyone
14:00:17 #info David Blaisonneau
14:00:24 please #info in your name if you are joining the meeting
14:00:27 #info Markos Chandras
14:00:27 Hi
14:00:35 #info Tapio Tallgren
14:00:40 #info tianwei
14:00:50 #link Agenda is available on https://etherpad.opnfv.org/p/xci-meetings and is pretty much the same
14:00:56 #info Manuel Buil
14:01:06 #info Dimitrios Markou
14:01:13 #info Joe Kidder
14:01:18 let's start
14:01:31 #topic Scenarios/Feature Status: os-odl-sfc
14:01:31 #info Dave Urschatz
14:01:37 mbuil: mardim: go ahead
14:01:50 mbuil, you can start
14:02:18 mardim: I was working this week on xci - ODL (L2, L3), so I have no updates in this section
14:02:19 #info Victor Morales
14:02:29 thx mbuil
14:02:33 mardim: but you have!
14:02:39 mbuil, Yes I have
14:02:43 So
14:02:51 I have this ovs-nsh patch
14:02:58 https://review.openstack.org/#/c/517259/
14:03:03 #info mardim has an ovs-nsh patch
14:03:10 #link https://review.openstack.org/#/c/517259/
14:03:12 which is in a good state
14:03:18 needs some minor updates
14:03:37 hwoarang, Please give me some review feedback :)
14:03:41 #info The patch is in a good state and a few more minor updates are necessary
14:03:51 also
14:03:59 I released some fixes for the neutron role
14:04:04 which are merged
14:04:11 let me paste
14:04:16 #info Some fixes have been done to the neutron role
14:04:21 https://review.openstack.org/#/c/518964/
14:04:27 https://review.openstack.org/#/c/520367/
14:04:37 #link https://review.openstack.org/#/c/518964/
14:04:39 mardim: remove the 'do not merge' there and then we can discuss :)
14:04:46 https://review.openstack.org/#/c/496231/
14:04:50 #link https://review.openstack.org/#/c/520367/
14:04:57 #link https://review.openstack.org/#/c/496231/
14:04:59 hwoarang, haha ok i will remove the barrier :P
14:05:16 mardim: should I create another patch for opensuse support or should we add it to your patch?
14:05:26 it seems you have been pretty busy mardim
14:05:34 mbuil, Maybe another one could be better
14:05:37 mardim: I am talking about OVS-NSH support for opensuse
14:05:45 fdegir, You bet fdegir ;)
14:05:45 mardim: ok
14:06:03 That's all from me thanks
14:06:06 thx mardim
14:06:29 #topic Scenarios/Feature Status: os-nosdn-ovs
14:06:36 epalper: do you want to give a quick update?
14:07:12 moving on
14:07:14 #topic Scenarios/Feature Status: os-odl-nofeature
14:07:22 who wants to talk about this?
14:07:24 is that you mbuil ?
14:07:54 fdegir: yes I have something
14:08:01 please go ahead
14:08:44 fdegir: I was testing the deployment of xci + ODL in master and found several bugs. The deployment was completing correctly but then it was not possible to create VMs, etc.
14:09:16 #info mbuil has been testing the deployment of xci + ODL in master and found several bugs. Deployment was ok but then it was not possible to create VMs, etc.
14:09:46 fdegir: for example this patch ==> https://gerrit.opnfv.org/gerrit/#/c/47417/
14:09:54 was fixing one of them
14:09:56 #link https://gerrit.opnfv.org/gerrit/#/c/47417/
14:10:22 #info The patch linked above fixes one of the issues
14:10:33 fdegir: apart from that, we realized with epalper that when deploying with ODL+OVS, the default network_mapping which is done in OSA is not correct
14:10:57 #info It has also been found that when deploying with ODL+OVS, the default network_mapping which is done in OSA is not correct
14:11:12 fdegir: so we concluded that we need to add the provider_networks in the user_variables
14:11:26 #info The conclusion was to add the provider_networks in the user_variables
14:11:32 and we should add the variable host_bind_override: eth12 in the vlan provider network
14:11:51 #info the variable host_bind_override: eth12 in the vlan provider network should be added as well
14:11:58 otherwise it does a vlan:br-vlan mapping in the compute which breaks the connection of the compute to the internet
14:12:12 I guess we will document this
14:12:14 #info otherwise it does a vlan:br-vlan mapping in the compute which breaks the connection of the compute to the internet
14:12:24 mbuil: we should
14:12:42 and one small bug in openSUSE ==> https://gerrit.opnfv.org/gerrit/#/c/47643/
14:12:53 mbuil: we need to have documentation for generic scenarios, which I'll ping you about later on
14:12:58 let me action myself for the doc stuff
14:13:13 #action fdegir to find out how to incorporate scenario information into the existing XCI documentation
14:13:15 so this morning, I managed to deploy ODL with xci and L2 and L3 were working correctly without any manual modification
14:13:44 #info A patch to fix a minor issue on openSUSE is under review
14:13:48 I could ping between VMs, attach floating IPs and ssh to those floating IPs from the opnfv VM
14:13:51 #link https://gerrit.opnfv.org/gerrit/#/c/47643/
14:14:08 #info As of this morning, ODL is deployable on XCI and L2 and L3 were working correctly without any manual modification
14:14:36 that's it
14:14:41 #info VMs can be pinged from each other, floating IPs can be attached and the VMs can be ssh'ed to via the floating IPs from the opnfv VM
14:14:46 mbuil: this is all master
14:14:53 fdegir: of course!
14:14:58 mbuil: gr8t
14:15:09 thanks for the work and updates!
14:15:13 mbuil: welcome to the master world
14:15:24 btw
14:15:50 hwoarang: thanks! But the release manager warned me that OPNFV TSC does not like it, they prefer the Pike world :P
14:15:50 mbuil: are all the updates you provided valid for ubuntu only, or will it work on openSUSE as well once the patch you pasted above gets merged?
14:16:38 opensuse is voting so it must work there!
14:16:46 fdegir: I am working on openSUSE. Peri is working on Ubuntu. My morning test was done on openSUSE and I need to get an update from Peri regarding Ubuntu
14:17:05 mbuil: ok so we can say it either works on both of them or is pretty close
14:17:10 which is good
14:17:32 #info os-odl-nofeature either works on ubuntu and openSUSE or is pretty close to working
14:17:38 Peri is epalper (just in case) :P
14:17:47 yep
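For reference, the provider network override mbuil describes might look roughly like the sketch below. This is a minimal sketch, not the actual XCI change: the container interface name (eth11) and the VLAN range are illustrative, and the exact target file may differ (mbuil says user_variables; in plain OSA, provider_networks usually sit under global_overrides in openstack_user_config.yml). Only host_bind_override: eth12 and the vlan:br-vlan default come from the discussion.

```bash
# Minimal sketch: append an explicit VLAN provider network with
# host_bind_override so OSA does not fall back to the default vlan:br-vlan
# mapping on the computes. eth11 and the VLAN range are illustrative;
# host_bind_override: eth12 is the value from the discussion.
cat >> /etc/openstack_deploy/user_variables.yml << 'EOF'
provider_networks:
  - network:
      container_bridge: "br-vlan"
      container_type: "veth"
      container_interface: "eth11"
      type: "vlan"
      range: "102:199"
      net_name: "vlan"
      host_bind_override: "eth12"
      group_binds:
        - neutron_openvswitch_agent
EOF
```

The host_bind_override key tells OSA to bind the VLAN provider network to a dedicated host interface instead of mapping vlan:br-vlan on the compute, which is what was breaking the compute's internet connectivity.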
14:17:52 so the next topic is
14:18:11 #topic Scenario/Feature Status: K8S in XCI
14:18:26 hw_wutianwei: do you have an update for us?
14:18:32 fdegir: hi
14:18:36 yep
14:19:17 I have finished the scenarios of aio, mini and noha, but ha is still under development
14:19:45 #info k8-nosdn work is done for aio, mini, and noha flavors - ha is still under development
14:19:57 and I created a script to deploy the k8s, and this script is called by xci-deploy.sh.
14:20:10 #link https://gerrit.opnfv.org/gerrit/#/c/46153/
14:20:25 I think the xci-deploy.sh is too long
14:20:28 hw_wutianwei: ^ is the patch, isn't it?
14:20:42 fdegir: yes
14:20:58 hw_wutianwei: we need to work on the main script once the work David_Orange is doing is concluded
14:21:20 hw_wutianwei: anything else to add for k8s?
14:21:51 fdegir: so you mean we need to add the k8s deployment to xci-deploy.sh
14:22:04 don't split it
14:22:19 hw_wutianwei: I think the script should be adjusted in order to do the right things depending on the scenario
14:22:25 hw_wutianwei: quick question - should the xci sandbox work, using ubuntu 16.04 on a VM with the aio scenario?
14:22:48 hw_wutianwei: and to do that, we need the bifrost part and the VIM part to be split from each other
14:22:57 meaning if I follow the guide, should I expect it to succeed?
14:23:01 yeah we need to finish with the isolation first
14:23:07 joekidder: I think it can work
14:23:07 and see where that leaves us
14:23:37 joekidder: the deployment works fine on ci for the mini flavor
14:23:40 joekidder: https://build.opnfv.org/ci/job/xci-verify-ubuntu-deploy-virtual-master/273/console
14:23:47 joekidder: when did you clone the repo?
14:23:52 there was a fix yesterday
14:24:00 hw_wutianwei: thanks, then I'll keep digging :). Last week I picked it up.
14:24:04 joekidder: https://gerrit.opnfv.org/gerrit/47575
14:24:12 ok, I'll try it again fresh. Thanks!
14:24:16 np
14:24:27 David_Orange: anything to add for Rancher?
14:24:47 fdegir: no, i focused on stability
14:24:55 David_Orange: +1
14:25:01 so an update from me
14:25:08 David_Orange: i am back in action so let's talk after the meeting
14:25:22 fdegir: but rancher should be ready for PDF/IDF; we will propose it later, with sylvain_orange
14:25:24 #info We had a meeting with the CNCF Cross Cloud CI Team last week and learnt that they have a script to deploy k8s from master
14:25:47 #info As far as we know, the tools we are evaluating (kubespray/rancher) don't support deployment from master
14:26:16 #info We might need to look at what the CNCF Cross Cloud CI Team has but it can wait until we fix the stability and get k8s working in XCI with what we have
14:26:33 more info will follow in the AOB topic
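To make the xci-deploy.sh discussion above concrete: the adjustment fdegir describes, once the bifrost part and the VIM part are split, could be a simple dispatch on the scenario name. This is a hypothetical sketch only; the variable and script names (DEPLOY_SCENARIO, deploy-k8s.sh, deploy-osa.sh) are illustrative and not taken from the actual patch.

```bash
#!/bin/bash
# Hypothetical sketch of scenario-based dispatch in xci-deploy.sh once the
# bifrost (provisioning) part and the VIM part are split; names are made up.

# ... bifrost provisioning runs first, common to every scenario ...

case "${DEPLOY_SCENARIO:-os-nosdn-nofeature}" in
    k8-*)
        ./deploy-k8s.sh    # k8s scenarios hand over to the k8s installer
        ;;
    os-*)
        ./deploy-osa.sh    # OpenStack scenarios keep the OSA path
        ;;
    *)
        echo "unknown scenario: ${DEPLOY_SCENARIO}" >&2
        exit 1
        ;;
esac
```

The point is that the k8- and os- scenario families share the provisioning step and diverge only at the installer, which is exactly what the isolation work enables.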
14:26:43 moving to
14:26:52 #topic Scenario/Feature Status: Congress in XCI
14:26:59 Taseer: JP should be back by now
14:27:06 Taseer: is there any progress with congress?
14:27:06 fdegir: No
14:27:15 #info No change for congress yet
14:27:19 thx Taseer
14:27:26 No problem.
14:27:38 Taseer: jp is back so you can ping him. afaik only the job is missing, right?
14:27:42 JP was busy with OpenStack Day France
14:27:53 #info Upcoming Features: Promise/Blazar and HA/Masakari
14:28:14 #info We have been having some discussions with the Promise and HA Teams to provide support for them within XCI
14:28:21 hwoarang: okay, I will ping him again
14:28:28 #info Taseer kindly offered help for Promise and we are waiting for the project to decide
14:28:58 #link https://etherpad.opnfv.org/p/xci-blazar
14:29:08 #info We will have a meeting with the HA Team during the Plugfest to discuss the details
14:29:16 #link https://etherpad.opnfv.org/p/xci-ha
14:29:47 now the high-prio item
14:29:51 #topic Improving Stability
14:29:57 David_Orange: any updates?
14:30:06 fdegir: yes :)
14:30:08 #info I am working on stability improvement by putting bifrost in a VM. This will be done with 2 new roles
14:30:19 #info The first is to create local VMs based on PDF/IDF
14:30:57 #info Those definition files have changed to use the ipmi driver for bifrost and vbmc. I will update them with the code
14:31:04 #info This role is close to finished and has already been tested on virtual pod PDF/IDF and baremetal pod PDF/IDF. No more need for the aio, mini, noha, ha config files.
14:31:49 #info The second role will be for bifrost. I am now testing the bifrost install from Kolla to avoid deploying a VM with a cloud image and a config drive in the previous step. The goal is to deploy all VMs, including the opnfv_host, from bifrost.
14:32:03 David_Orange: about the driver
14:32:12 fdegir: yes
14:32:25 David_Orange: I thought we needed to use agent_ssh for VMs?
14:33:09 David_Orange: will this create a virtual pod with distinct compute and controller nodes?
14:33:10 fdegir: I reuse the bifrost method used when creating local VMs for testing: ipmi + vbmc
14:33:42 David_Orange: but then the power type for the nodes will not be the "real" one, will it?
14:34:22 David_Orange: we can perhaps talk about it after the meeting
14:34:22 fdegir: why? vbmc is a wrapper around the libvirt power status
14:34:52 durschatz: yes, it will create as many nodes as required by the scenario, as configured in the PDF/IDF
14:35:04 :-)
14:35:09 David_Orange: one last question
14:35:17 David_Orange: what is the expected completion for this?
14:35:43 fdegir: a date? I hope all will be functional and shared by the next meeting, and we can talk about it during the plugfest
14:35:51 David_Orange: good
14:35:53 including the bifrost part
14:36:12 fdegir: 120% of my time is on it :)
14:36:22 I would like to use the CENGN vPOD to help out.
14:36:27 #action David_Orange to finish the work for stability by the plugfest
14:36:31 David_Orange: :)
14:36:32 at the Plugfest
14:36:44 The bifrost inventory generation was working for the baremetal prototype, so it should not be too hard to set up for the VM PDF/IDF.
14:37:17 durschatz: sure, it was my plan ;) but for ONAP on one server, it will not work :(
14:37:51 I'm ok with no ONAP since I have another need to deploy just OpenStack for students in January
14:37:56 #info The biggest part will be the cleanup of unused vars/scripts at the end, but we can find a slot for that cleaning during the plugfest
14:38:24 durschatz: we will fix it and hopefully your experience will be better next time
14:38:43 :-) and I can test it out at the Plugfest…
14:38:54 of course
14:39:04 David_Orange: I suppose that's all
14:39:13 fdegir: that is all.
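For context on the ipmi + vbmc exchange above: VirtualBMC exposes a per-VM IPMI endpoint that simply proxies libvirt power operations, which is why the local VMs can keep the regular ipmi power type instead of agent_ssh. A sketch, assuming a libvirt domain named controller00 and default-style vbmc credentials (all illustrative):

```bash
# Sketch of the ipmi + vbmc approach: give a libvirt VM a virtual BMC so
# bifrost can manage it with the same ipmi driver used for real baremetal.
# Domain name, port, and credentials below are illustrative.
vbmc add controller00 --port 6230 --username admin --password password
vbmc start controller00

# bifrost (or a manual check) can now drive the VM's power state over IPMI:
ipmitool -I lanplus -H 127.0.0.1 -p 6230 -U admin -P password power status
ipmitool -I lanplus -H 127.0.0.1 -p 6230 -U admin -P password power on
```

Because the PDF/IDF only records an IPMI address and credentials, the same node definitions can then describe either a real BMC or a vbmc endpoint, as David_Orange argues.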
14:39:17 David_Orange: and I assume there are no updates for baremetal so I am skipping that topic
14:40:00 fdegir: no update on baremetal itself, but sylvain_orange is now deploying kolla on our pods using pdf/idf
14:40:20 David_Orange: good to have kolla competence
14:40:29 thx David_Orange
14:40:30 fdegir: so this may be another thing to test during the plugfest if all is green
14:40:37 thx sylvain_orange :)
14:40:44 ;)
14:41:07 I saved the welcomes for the end of the meeting
14:41:10 moving to the next topic
14:41:12 #topic Zuul Prototype
14:41:28 #info We got the machine for the prototype up and we are working on setting it up
14:41:42 #info The scope we have for the prototype seems to be pretty good
14:42:01 #info More info will be shared once we have some progress
14:42:17 #topic Working with Other Communities
14:42:33 #info As I mentioned earlier, we met with the CNCF Cross Cloud CI Team to find out how we can collaborate
14:42:49 #info The information will be documented on the link below and updates will be shared
14:43:04 #info Some of you might be pulled into that work when we get some traction
14:43:16 #link https://etherpad.opnfv.org/p/cross-cloud-and-xci
14:43:36 #info We also had a meeting with the OpenStack OpenLab Team last week and it looks pretty good
14:43:53 #info I expect increased collaboration between OPNFV and OpenStack in Cross Community efforts
14:44:07 #info More info is available on the link below
14:44:25 fdegir: will anyone from CNCF be at the plugfest?
14:44:38 #link https://etherpad.openstack.org/p/xciandopenlab
14:45:01 durschatz: unfortunately CNCF's KubeCon is in the same week as the plugfest...
14:45:15 ah yes :-(
14:45:38 #info Similar to the CNCF collaboration, more people might need to be pulled into the OpenLab work
14:45:55 and one last update
14:46:16 #info We also met with Linaro/DPDK yesterday to see how we can contribute to efforts to improve CI for DPDK
14:46:24 more info will follow
14:46:46 as you see, we have many different opportunities to help each other and things will be much more fun :)
14:46:57 next topic is
14:47:01 #topic Roadmap/Backlog
14:47:18 #info We need to take a breath and work on our roadmap because people are asking
14:47:33 #info And we also need to see what we are all working on and what else is in the pipeline
14:48:00 #info A high-level roadmap will be created together with a backlog and will be shared during one of the meetings to collect feedback from the team
14:48:21 #info And then this information will be shared with the OPNFV Community at large and the other communities we work with
14:48:45 and the last topic is
14:48:58 #info New Teammates
14:49:02 #undo
14:49:02 Removing item from minutes:
14:49:06 #topic New Teammates
14:49:23 #info joekidder and sylvain_orange joined us so I welcome them
14:49:29 happy to have you with us!
14:49:33 \o/
14:49:40 \o/
14:49:44 Welcome!
14:49:52 thanks :)
14:49:55 Welcome \o/
14:49:56 welcome!
14:50:17 before we end the meeting, does anyone want to bring up anything we missed?
14:50:21 #topic AOB
14:50:24 yes
14:50:26 thank you
14:50:43 PDF/IDF review. if it's ready can we extend it to all flavors and merge it? what's missing?
14:50:51 fdegir, I think you can take back the server that you gave us (to me and manuel)
14:51:08 hwoarang: if David_Orange deems it good then it can be merged
14:51:12 fdegir, we found our own servers so we do not need it anymore
14:51:13 David_Orange is our PDF/IDF expert
14:51:25 fdegir: pdf/idf merge ?
no
14:51:36 i mean merge the patchset that's pending
14:51:41 not merging the two files ;p
14:51:52 fdegir, hwoarang: i will push an update from my actual code
14:51:57 ok
14:52:01 fair enough
14:52:07 mardim: thx for letting me know
14:52:07 quick question about the AIO flavor? is it working?
14:52:10 Are you planning to use the Nokia POD for XCI?
14:52:24 fdegir, no prob
14:52:31 electrocucaracha: mini works but I'm not sure about aio
14:52:37 I'm getting some public key permission errors
14:52:37 hwoarang: do you know?
14:52:38 hwoarang: i will do it within the hour, ok for you?
14:52:48 fdegir: it should work, why would it not?
14:53:16 hwoarang: I think morgan_orange reported a problem similar to electrocucaracha's last week
14:53:27 David_Orange: yeah, no urgency, just wanted to know what's pending there. i don't want to see it as part of the overall bifrost work so i wanted to make sure it gets merged separately
14:53:31 what problem?
14:53:38 electrocucaracha: can you put the log on pastebin or something?
14:53:46 we bumped the shas since last week
14:53:51 so all logs are invalid ;p
14:54:04 electrocucaracha: please try again with the latest code :)
14:54:10 fdegir: sure, let me recreate it again
14:54:29 electrocucaracha: please pull the latest first so you can convince hwoarang if something doesn't work
14:54:41 ttallgren: are you talking about the Nokia POD reserved for stress testing?
14:54:49 Yes
14:54:54 * hwoarang thinks that we could use more HW resources so we can test more flavors
14:55:08 David_Orange: was it you who was supposed to deploy XCI on it?
14:55:26 * fdegir is trying his best to find some hardware...
14:55:33 fdegir: yes
14:55:38 ttallgren: ^
14:56:04 #info The Intel lab will be in maintenance due to power work and will be down until next week
14:56:05 i will test the stability part on it, to ensure the code is good
14:56:22 before we end the meeting
14:56:27 ttallgren: how is centos doing today?
14:56:33 fdegir: but if it is needed by someone else, i can wait
14:56:33 WiP...
14:56:49 I do not think anyone is using the Nokia POD
14:56:52 David_Orange: I think Stress Testing is one of the high-prio items
14:57:38 fdegir: let's talk about it after the meeting
14:57:44 David_Orange: +1
14:57:58 #info XCI on CentOS is WIP
14:58:10 if nothing else, I thank you all and talk to you next week!
14:58:26 #endmeeting