14:03:45 <[1]JonasB> #startmeeting BGS Fuel status
14:03:45 <collabot> Meeting started Tue Jun  2 14:03:45 2015 UTC.  The chair is [1]JonasB. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:03:45 <collabot> Useful Commands: #action #agreed #help #info #idea #link #topic.
14:03:45 <collabot> The meeting name has been set to 'bgs_fuel_status'
14:04:00 <[1]JonasB> #info Jonas Bjurel
14:04:11 <mskalski> #info Michal Skalski
14:04:16 <jose_lausuch> #info Jose Lausuch
14:04:17 <lmcdasm> but I think it's because I don't have a "tty" on there
14:04:19 <szilard> #info Szilard Cserey
14:04:27 <lmcdasm> #info Daniel Smith
14:04:40 <pbandzi> #info Peter Bandzi
14:05:21 <[1]JonasB> I guess our focus for today will be on Daniel's and Jose's heroic work
14:06:06 <[1]JonasB> So Daniel and Jose - who wants to go first?
14:06:19 <jose_lausuch> well, he can talk about his progress on POD1
14:06:24 <jose_lausuch> testing is currently happening :)
14:07:20 <[1]JonasB> So is that testing with a deploy using Daniel's latest patch-set?
14:07:24 <lmcdasm> #info LFPOD1 in VXLAN ODL Mode similar to FOREMAN
14:07:46 <lmcdasm> #info an update to yesterday's patch (well, a re-write) will be done after some test results are in so I can see how good/bad things are
14:08:11 <[1]JonasB> Great, are we seeing the same issues as in POD2?
14:08:22 <[1]JonasB> Or too early to tell?
14:08:22 <lmcdasm> #info got some tips from Herr Berg about how to make this automatic from a FUEL point of view - this is aligned with Jose/Fatih in that they can call something in FUEL and it will cascade to the nodes
14:08:25 <jose_lausuch> too early!
14:08:27 <jose_lausuch> just started
14:08:58 <lmcdasm> #info my thinking is that making it "like FOREMAN" lets us capitalize on the issues they have fixed there - however, keep in mind the OVS switch is about 3 revisions different, but one can hope
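As a rough illustration of the "call something in FUEL and it cascades to the nodes" idea above, here is a minimal sketch of what such an activation script on the Fuel master might look like. Everything in it is an assumption, not the actual patch: the script name, the use of the "fuel node" CLI listing (and its column layout), passwordless SSH from the master to the nodes, and the ODL controller IP passed as an argument.

    #!/bin/bash
    # odl_activate.sh - hypothetical sketch, runs on the Fuel master
    ODL_IP=${1:?usage: odl_activate.sh <odl-controller-ip>}

    # Pull the node admin IPs out of the "fuel node" table output
    # (the column position is an assumption about this release's format)
    for ip in $(fuel node | awk -F'|' 'NR>2 {gsub(/ /,"",$5); print $5}'); do
        echo "pointing OVS on ${ip} at ODL"
        # register ODL as the OVSDB manager; 6640 is the standard OVSDB port
        ssh "root@${ip}" "ovs-vsctl set-manager tcp:${ODL_IP}:6640"
    done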
14:09:17 <lmcdasm> #info Jose can get me some feedback today .. I will implement more fixes and merge tonight
14:09:22 <jose_lausuch> #info POD2: detected a possible failure keeping vPing from succeeding: the metadata service is disabled, so the VM needs to be spawned with userdata
14:09:25 <lmcdasm> #info target an automated run tomorrow
14:09:36 <jose_lausuch> vPing just failed
14:09:43 <jose_lausuch> it seems that it doesn't get an IP by DHCP...
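A sketch of the userdata workaround Jose describes, assuming the 2015-era nova CLI: passing the boot script via --user-data and forcing a config drive, so the guest does not depend on the (disabled) metadata service at 169.254.169.254. The image, flavor, network ID and VM name are placeholders, not values from the actual test.

    # hypothetical userdata script injected at boot
    cat > userdata.txt <<'EOF'
    #!/bin/sh
    # runs at first boot inside the guest
    echo "vping guest up" > /tmp/vping-ready
    EOF

    nova boot --flavor m1.small --image cirros \
              --nic net-id=<private-net-uuid> \
              --user-data userdata.txt \
              --config-drive true \
              vping-test-vm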
14:09:59 <lmcdasm> ok.. let's finish Jonas' meeting
14:10:01 <lmcdasm> and then troubleshoot more
14:10:04 <jose_lausuch> yes
14:10:13 <jose_lausuch> otherwise, too many things in parallel and we lose focus
14:10:18 <jose_lausuch> let's finish this quickly then
14:10:44 <[1]JonasB> Well, maybe we should just end here so you guys can do what is needed?
14:11:01 <lmcdasm> uh.. well. either or
14:11:09 <jose_lausuch> well, maybe I can say 2 more things
14:11:15 <[1]JonasB> Ok
14:11:16 <lmcdasm> what else are we doing? Autodeployment / release status (with / without ODL?) - go ahead Jose
14:11:22 <jose_lausuch> well no, it is for POD2, so not for this meeting
14:11:34 <jose_lausuch> maybe for POD1 automation:
14:11:57 <jose_lausuch> #info a new Jenkins job to be created which will trigger the ODL activation after the deployment job and before the functest job
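A minimal sketch of the shell build step such a Jenkins job might run, sitting between the deploy and functest jobs (the ordering itself would come from Jenkins build triggers, not from this script). The script path and the FUEL_MASTER and ODL_IP variables are placeholders, reusing the hypothetical odl_activate.sh sketch above.

    #!/bin/bash
    # hypothetical build step for the new ODL-activation job
    set -e
    ssh "root@${FUEL_MASTER}" "/opt/opnfv/odl_activate.sh ${ODL_IP}"
    # downstream: the functest job fires only if this step succeeds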
14:12:22 <[1]JonasB> Just a question for Daniel, will it be possible to turn ODL on/off from the Fuel host?
14:12:40 <[1]JonasB> Jose just answered :-)
14:13:09 <lmcdasm> it will be possible to enable ODL after deploy yes
14:13:11 <jose_lausuch> thats the idea
14:13:19 <lmcdasm> will it be possible to switch back? .. uhh. not now no
14:13:20 <lmcdasm> :)
14:13:43 <[1]JonasB> Not needed, either/or is enough!
14:13:45 <lmcdasm> I mean - I have started doing backups to try and switch back - but since it's not in scope for the System Under Test - I didn't think about building that part of the script out
14:13:47 <lmcdasm> ok
14:13:57 * lmcdasm almost cracked a sweat
14:14:01 <jose_lausuch> hahah
14:15:07 <[1]JonasB> So if nothing more, let's end here. I know you're working really hard!!
14:15:18 <szilard> wait
14:15:24 <[1]JonasB> Yep
14:15:25 <lmcdasm> ya wait for Szi!
14:15:27 <szilard> I am deploying on Montreal HP server
14:15:35 <szilard> Blades: 1,4,5,6,8,15
14:15:51 <szilard> but the autodeployment fails with
14:16:15 <szilard> some strange ceph issues
14:16:16 <szilard> ceph-deploy --overwrite-conf config pull node-3 returned 1 instead of one of [0]
14:16:21 <szilard> have you experienced that
14:16:30 <szilard> before
14:16:38 <szilard> or maybe I should ask Stefan
14:16:38 <lmcdasm> ya
14:16:43 <lmcdasm> I have
14:16:45 <lmcdasm> it's NTP
14:16:56 <lmcdasm> did you set the NTP source to the NTP VM I have running in there?
14:17:15 <lmcdasm> 'cause you can't use "external NTP" (it's blocked)
14:17:31 <lmcdasm> so I built one that is accessible from the 10.118.34.0/24 and 10.118.36.0/24 nets
14:17:32 <szilard> you mean this
14:17:33 <szilard> NTP1: 0.ca.pool.ntp.org
14:17:33 <szilard> NTP2: 1.ca.pool.ntp.org
14:17:33 <szilard> NTP3: 2.ca.pool.ntp.org
14:17:34 <lmcdasm> no
14:17:37 <lmcdasm> you can't use those
14:17:51 <lmcdasm> those are external, and in a full HA deploy you have to have a real NTP source
14:17:56 <szilard> aha
14:18:05 <lmcdasm> in a nested environment, since the clock all comes off the hypervisor (and it's SA), it's no biggie
14:18:08 <lmcdasm> in HA you gotta have a real one
14:18:19 <lmcdasm> I'll send you the IP to set and you can redeploy / re-run pre-deploy.sh with that IP
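For reference, a rough sketch of verifying the clock skew Daniel is describing before re-running pre-deploy.sh and the deploy. The internal NTP VM's IP and the node names are placeholders; ntpdate -q only queries the offset and does not step the clock.

    NTP_VM=<internal-ntp-vm-ip>   # the NTP VM Daniel mentions, reachable from the lab nets

    for node in node-1 node-3 node-4; do
        # query each node's offset against the internal NTP VM, read-only
        ssh "root@${node}" "ntpdate -q ${NTP_VM}"
    done
    # Ceph monitors tolerate only ~0.05s of mutual skew by default
    # (mon_clock_drift_allowed), which is why drifting clocks surface
    # as ceph-deploy/puppet failures during an HA deploy.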
14:18:29 <szilard> thanks a lot Daniel!
14:18:57 <[1]JonasB> Thx Dan
14:19:03 <szilard> well that's what I'm doing right now, so thanx again
14:19:06 <szilard> Daniel
14:19:38 <szilard> I don't have any other stuff, we can close the meeting if you want
14:19:52 <[1]JonasB> Anyone else ?
14:20:00 <jose_lausuch> nop
14:20:13 <[1]JonasB> #endmeeting