16:03:56 #startmeeting OPNFV BGS daily check in
16:03:56 Meeting started Tue May 12 16:03:56 2015 UTC. The chair is frankbrockners. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:03:56 Useful Commands: #action #agreed #help #info #idea #link #topic.
16:03:56 The meeting name has been set to 'opnfv_bgs_daily_check_in'
16:04:00 <[1]JonasB> #info Jonas Bjurel
16:04:04 roll call....
16:04:06 #info Peter Bandzi
16:04:06 #info Stefan Berg
16:04:10 #info Frank Brockners
16:04:17 #info Morgan Richomme
16:04:24 #info Jose Lausuch
16:04:30 #info Fatih Degirmenci
16:05:04 #info Dan Radez
16:05:21 we're starting to have a quorum... let's get started
16:05:24 #info Tim Rozet
16:05:37 #info daily status updates...
16:06:02 let's start with functest - real brief - Morgan did a detailed update on the TSC
16:06:09 #topic functest status
16:06:20 #info readiness page updated
16:06:32 #wait for green light on POD1 to start testing on POD1
16:06:45 #info wait for green light on POD1 to start testing on POD1
16:07:26 #info Tempest re-run after POD2 automatic reinstallation, better results: 97 (+13) tests OK / 26 failures (-7)
16:07:39 #info results posted on the Pad
16:07:50 #info AT&T offered help to have a look at the issues
16:07:52 * ChrisPriceAB I have to run, but will check back here later to see what actions are needed. Let's get Arno rolled out!
16:08:06 #info test vPing failed on POD2 (Timeout), under investigation
16:08:22 maybe jose_lausuch you want to add something for vPing?
16:08:47 #info arnaud is working on a cirros image for vPING
16:08:53 I will upload the code soon
16:09:09 that's it
16:09:15 so that is for us
16:09:33 #info getting support to fix the failed tempest tests
16:09:34 quick q, per the email discussion: do you plan to create a table on root causes for the 26 failures?
16:10:02 I will, I would like to wait for the POD1 test to see if we got the same errors
16:10:11 Others (e.g. Bryan) per the above offered to help with investigations
16:10:19 I created a tempest page for R1 on the wiki
16:10:25 I will put the investigation table there
16:10:26 makes sense to wait for POD1
16:10:43 thanks morgan_orange
16:11:01 #info the automated functest deployment worked on POD2
16:11:35 ok ... sounds like this leads us to POD updates / autodeployment readiness...
16:11:49 #topic updates on LF POD deployments
16:12:00 <[1]JonasB> #info At last we're up on LF-Lab POD1 with a fully working autodeployment
16:12:14 great!
16:12:28 <[1]JonasB> #info Next, Fatih will connect the autodeploy to jenkins and do a run
16:12:35 cool
16:12:46 <[1]JonasB> #info after that we should start functest on POD1
16:12:46 #info An example script at auto_deploy_fuel.sh will deploy a three controller/two compute/Ceph setup
16:13:06 #info /home/opnfv that is
16:13:11 #info I have a question though.
16:13:49 #info I've stored the Fuel configuration in Git (dea.yaml) but I am hesitant to also store the hardware config which contains passwords for ipmi (dha.yaml).
16:14:20 #info Personally I feel this information is tightly coupled to the lab itself; does anyone have any gut feeling or wisdom on this matter?
16:14:21 #info we stored the passwords in the genesis repo already; nobody can access it without VPN
16:14:49 #info if someone already has VPN access and wants to destroy the setup, they already can :)
16:15:23 #info So I hear no arguments against doing this? Then I'll add this info for the Fuel config as well...
16:15:40 #info OK, a commit is coming up. :)
16:15:54 <[1]JonasB> Let's go for this, and make it more secure/better practice in R2
16:15:56 IMHO we're ok to go with the assumption that LF's FW is the gate that protects us for now
16:16:19 cool - sounds like pragmatic agreement
16:16:34 <[1]JonasB> #info That's the good part, now to the less good: we are struggling with interference between ODL and the OS HA proxy in HA deployments. Investigation ongoing
16:16:50 <[1]JonasB> #info no ETA today.
16:17:14 <[1]JonasB> That's all from me
16:17:15 JonasB: a port conflict or IP address conflict?
16:17:33 [1]JonasB ^
16:17:40 <[1]JonasB> trozet: Seems to be a bridge conflict
16:18:28 <[1]JonasB> trozet: Once we've nailed down the issue a little further, we might want your opinion/help
16:18:50 sure, would be glad to help
16:19:08 [1]JonasB: Any pointers that you can share that hint at the issue?
16:20:13 <[1]JonasB> frankbrockners: Will share when I have a little more info. I'm afraid it looks a little tied to how Fuel works with bridges, but we'll see
16:20:26 ok - thanks
16:20:37 let's move to POD2
16:20:40 Wait
16:20:45 Just one request for help. :)
16:20:56 * frankbrockners please....
16:21:26 I needed to make a really weird bridge setup in pod1 to strip these vlan 0 tags for the VM bridge. Would be very grateful if a Centos guru could assist at persisting this:
16:21:36 bridge name   bridge id           STP enabled   interfaces
16:21:36 br0           8000.0025b5a0008f   no            enp8s0.0
16:21:36 pxebr         8000.0025b5b000ff   no            enp7s0.0
16:21:48 We can take it offline, but offers appreciated. :)
16:22:08 I'll send a separate plea out in #opnfv-bgs as well.
16:22:11 So, done, thanks.
16:22:20 <[1]JonasB> wait
16:22:37 <[1]JonasB> Is this the same way as it was done in POD2?
16:23:13 * stefan_berg is doing a vconfig add enps0 0, to clarify
16:23:31 stefan_berg: can't you just make the interface config files, then linux networking will bring it up on reboot?
16:23:42 [1]JonasB: in fact on POD2 only a vbox restart was needed to accept vlan0
16:24:25 <[1]JonasB> pbandzi: Weird, so VBOX treats untagged and VLAN0 as the same?
16:25:11 trozet: Ah, just throwing in a /etc/sysconfig/network-scripts/ifcfg-enp7s0.0 and that would do the trick, removing the ifcfg-enp7s0 file?
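For context on the workaround stefan_berg describes, the bridge layout in the `brctl show` output above could be recreated at runtime roughly as follows. This is a sketch reconstructed from the log, not the exact commands used; `vconfig` is what stefan_berg mentions, with the equivalent `ip(8)` invocation shown as an alternative.

```shell
# Create a VLAN sub-interface with tag 0 on the physical NIC so the
# priority-tagged (VID 0) frames are decoded before reaching the bridge.
vconfig add enp7s0 0
# modern equivalent:
# ip link add link enp7s0 name enp7s0.0 type vlan id 0

# Attach the tag-0 sub-interface to the VM distribution bridge.
brctl addbr pxebr
brctl addif pxebr enp7s0.0

# Bring everything up.
ip link set enp7s0.0 up
ip link set pxebr up
```

Run again with `enp8s0`/`br0` for the second bridge in the output above.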
16:25:11 [1]JonasB: most OSes should; VID 0 is reserved and typically used to specify priority bits
16:25:42 stefan_berg: you need to modify 2 ifcfg files, ping me in opnfv-bgs after the meeting and I can help
16:26:08 It's the bridging that messes things up; throwing these tagged packets into the bridge and down the pipe to the VM makes it confused (handling it as a vlan and not decoding).
16:26:10 remember VLAN0 isn't really a VLAN - it is just used to convey priority tagging to the interface in case no VLAN is provided
16:26:12 For kvm.
16:26:32 [1]JonasB: it really depends on the NIC driver to strip off the VLAN header and accept the packet. The problem is Centos7 recently added support for the Cisco ENIC driver
16:26:45 [1]JonasB: not sure how vbox processes it internally -- it magically started working when the VM is restarted
16:27:15 It would be great if it could be turned off though, it seems only to complicate things in this context...
16:27:25 ok - can trozet: Appreciated, thanks!
16:27:40 #info one solution to this VLAN problem:
16:28:00 ok - can stefan_berg and trozet synch on how to make the above bridge config permanent?
16:28:25 Yes!
16:28:29 #info I looked at the ENIC driver code. If the NIC supports RRQ it will strip the tag. If someone has the time you could modify the ENIC driver to strip the tag no matter what. This would hand the packet to any hypervisor untagged.
16:30:28 #info However, creating a VLAN interface with tag 0 and connecting that to the VM distribution bridge seems to be a functioning workaround, although a bit too much black magic for me to like it...
16:31:10 anyway I'll sync up with stefan_berg after the meeting, should we keep going Frank?
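The persistent version trozet alludes to ("you need to modify 2 ifcfg files") would look roughly like the following on CentOS 7. This is a sketch assuming the enp7s0/pxebr names from the `brctl show` output earlier; the actual files the two settled on after the meeting are not in the log.

```shell
# /etc/sysconfig/network-scripts/ifcfg-enp7s0.0
# VLAN sub-interface with tag 0: decodes the priority-tagged frames
# and feeds them, untagged, into the bridge.
DEVICE=enp7s0.0
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
BRIDGE=pxebr

# /etc/sysconfig/network-scripts/ifcfg-pxebr
# The VM distribution bridge itself.
DEVICE=pxebr
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
STP=no
```

The base `ifcfg-enp7s0` file (if kept) would carry no IP config of its own, so the bridge only ever sees frames that have passed through the tag-0 sub-interface.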
16:31:20 We have an enhancement request pending to make vlan 0 prio tagging support configurable
16:31:24 trozet - yes
16:31:29 let's move to POD2
16:31:35 #info Updates on POD2
16:31:48 #info as previously mentioned, LF POD2 was able to re-deploy correctly after the LF config changes
16:32:12 #info deploy.sh virtualization patch is done and working. Submitted a gerrit review: https://gerrit.opnfv.org/gerrit/#/c/519/
16:32:39 #info working on adding br-ex to the setup. I think this will resolve some of the functest failures
16:32:50 #info had a question about br-ex to both the functest team and the Fuel team
16:33:47 #info 1. Do installers need to configure the provider/external network for functest in neutron? It requires knowing the public subnet, gateway, and DNS, so I would assume so?
16:34:16 #info or do the functest team's tests automatically determine that info and create the provider network?
16:35:21 #info 2. Also, I was planning on only adding br-ex and configuring it for one controller. How does Fuel currently do this?
16:36:22 <[1]JonasB> trozet: on 1 - don't know if required, but we're doing so currently
16:37:04 [1]JonasB: OK, well the functest cases passed on your setup in the Ericsson lab, so that must work OK. I'll create the provider network unless I hear otherwise
16:38:02 [1]JonasB: for #2 do you guys set up one br-ex on 1 controller, or do you set up 3 on 3 controllers? I could see ODL having some problems with more than one br-ex, I need to check with them.
16:38:48 <[1]JonasB> stefan_berg: Can you answer trozet on 2?
16:39:04 trozet: https://docs.mirantis.com/openstack/fuel/fuel-6.0/user-guide.html#network-settings is an overview of the parameters fed into Fuel for the network config if that helps.
16:39:43 #info https://docs.mirantis.com/openstack/fuel/fuel-6.0/user-guide.html#network-settings is an overview of the parameters fed into Fuel for the network config (stefan_berg)
16:39:55 Sorry, thanks Frank.
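The provider/external network setup trozet is asking about would, with the neutron CLI of that era, look roughly like this. A sketch only: the subnet, gateway, allocation pool, and `physnet1` mapping are placeholders, since the real LF POD2 values are not in the log.

```shell
# Create the external (provider) network and attach it to br-ex via the
# physnet mapping configured in the ML2/OVS agent (placeholder: physnet1).
neutron net-create ext-net --router:external True \
    --provider:network_type flat --provider:physical_network physnet1

# Subnet with the public gateway and a floating-IP allocation pool;
# DHCP disabled since addresses are handed out as floating IPs.
neutron subnet-create ext-net 10.0.0.0/24 --name ext-subnet \
    --disable-dhcp --gateway 10.0.0.1 \
    --allocation-pool start=10.0.0.100,end=10.0.0.200
```

This is the piece an installer would have to pre-provision if, as [1]JonasB suggests, functest does not create it itself.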
16:40:06 <[1]JonasB> trozet: I believe it is set up for all three, but only one network node is active at a time.
16:40:35 #info OK. I'll take a look at the guide
16:41:37 #info Once the external network is running and we get a re-run of tempest smoke on pod2, I'll help morgan_orange debug the failures and we can create a root cause section on the wiki
16:42:04 radez, do you want to comment on the ISO?
16:42:18 * frankbrockners you read my mind
16:43:20 * frankbrockners radez might have stepped out...
16:43:43 ok well that's it from me
16:43:52 oh, one request
16:44:10 morgan_orange: Do you know about the experiences Arnaud had with the ISO so far?
16:44:22 go ahead Tim
16:44:28 #info those who submitted comments for changes to the foreman installation guide: the guide was merged, please go ahead and submit gerrit reviews with your changes
16:45:27 good point
16:45:29 <[1]JonasB> #info In fact, same status for the fuel installation guide: merged, but review it anyway!
16:46:08 given that radez seems to have stepped out - we'll probably hear on the ISO tomorrow...
16:46:17 looks like we're done for today.
16:46:24 <[1]JonasB> I have an AOB
16:46:25 I'll try to connect with radez before then to see if I can get an update from him
16:47:34 <[1]JonasB> #topic AoB
16:47:49 #topic AOB
16:48:07 #chair [1]JonasB
16:48:07 Current chairs: [1]JonasB frankbrockners
16:48:14 * frankbrockners now it'll work ..
16:48:16 <[1]JonasB> #info Sweden has a public holiday on Thu - Fri this week :-(
16:48:29 * stefan_berg Hmm... :)
16:48:55 <[1]JonasB> Just wanted to let you know :-)
16:49:06 <[1]JonasB> All from me
16:49:10 * frankbrockners as we concluded before - we live in the wrong country...
16:49:29 #info Thu is a public holiday in many countries in Europe
16:49:43 are folks ok to still do the synch meetings?
16:49:57 I thought about keeping them on the calendar
16:50:01 thoughts?
16:50:03 <[1]JonasB> Actually Fri is not, but the unions have succeeded!
16:50:09 I'll be here, listening at least - would appreciate the meetings going on
16:50:34 <[1]JonasB> frankbrockners: I can do shorter synch meetings, but not both TSC and BGS
16:50:39 best way to stay in sync with progress
16:51:46 [1]JonasB: You choose - but we'd love to have you in BGS :-) -- I can play proxy in TSC as usual if folks are fine with that
16:52:11 <[1]JonasB> frankbrockners: I'd rather only join BGS :-)
16:52:38 Let's keep things on the calendar for now - and we'll see who joins - with some discipline we can try to stay below 30min
16:53:01 #info BGS team will continue with daily synch meetings at 9am PDT
16:53:18 done for today?
16:53:23 <[1]JonasB> Yep
16:54:40 ok -- thanks everyone
16:54:45 #endmeeting