13:58:40 <[1]JonasB> #startmeeting OPNFV Fuel Status No 10
13:58:40 <collabot> Meeting started Wed Apr 29 13:58:40 2015 UTC.  The chair is [1]JonasB. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:58:40 <collabot> Useful Commands: #action #agreed #help #info #idea #link #topic.
13:58:40 <collabot> The meeting name has been set to 'opnfv_fuel_status_no_10'
13:59:14 <mskalski> #info Michal Skalski
13:59:21 <stefan_berg> #info Stefan Berg
13:59:24 <[1]JonasB> #info Jonas Bjurel
14:00:09 <SzilardCserey_ER> #info Szilard Cserey
14:00:33 <[1]JonasB> #topic Auto deployment
14:00:49 <stefan_berg> #info IPMI adapter almost complete.
14:00:52 <[1]JonasB> #info Stefan, can you give an update on the prototype?
14:01:03 <stefan_berg> #info Looking to test it on Monday in Montreal.
14:01:19 <stefan_berg> #info It will require an installed Fuel master.
14:01:19 <[1]JonasB> Great!
14:01:33 <SzilardCserey_ER> #info I'm converting the prototype to Python
14:01:49 <stefan_berg> #info Initial plan is to run only one compute and one controller, so it requires two blades (three with the Fuel master)
14:01:53 <SzilardCserey_ER> #info + testing the prototype itself
14:02:43 <lmcdasm> #info Daniel Smith
14:02:54 <[1]JonasB> #info Stefan, what hinders us from installing the Fuel master just like you did in the previous prototype?
14:03:28 <stefan_berg> #info There's no support in IPMI to handle virtual ISO.
14:04:05 <[1]JonasB> #info right, but the Fuel master runs inside a VM on the jumphost
14:04:13 <stefan_berg> #info The most generic way to do this would be to do the Fuel installation using the custom function in the API, by means of a temporary PXE server.
14:04:37 <stefan_berg> #info Ah, oh, then it's like ten minutes extra work. :)
14:04:39 <lmcdasm> question please?
14:04:58 <[1]JonasB> Mr Smith, go on!
14:05:25 <lmcdasm> the HP servers have a method to connect a virtual DVD (via the OA - not the direct iLO on the blade - it's a Virtual Connect thing).. so for setting up Fuel on bare metal you have that method (not the nicest, but it works)
14:05:47 <lmcdasm> and for the virtual setup, isn't that a case of having an "OS" there already and then calling the same adapter and attaching a "disk image" at boot time to that VM?
14:06:14 <[1]JonasB> lmcdasm: Correct
14:06:17 <lmcdasm> for ESXi I deliver an "empty" OVF that basically has a blank drive and an attached ISO, so that it boots and starts with Fuel
14:06:18 <stefan_berg> lmcdasm: Absolutely! The plan here was to look at a more generic IPMI solution that would work on a larger set of HW flavors.
14:06:23 <lmcdasm> ahh..
14:06:29 <lmcdasm> ok.. sorry then
14:06:48 <lmcdasm> on that track though - Dell (NUaman) is looking for one for their HW now as well (iDRAC) - don't know if you saw that
14:06:53 <stefan_berg> No, good comment!
14:07:32 <lmcdasm> anyway - thx for clarity
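The generic IPMI adapter being discussed boils down to forcing each blade to PXE-boot off the Fuel master and then power-cycling it. A minimal sketch of that idea, assuming ipmitool is installed on the jumphost; the BMC addresses and credentials are placeholders, not the Eri lab's real ones:

```python
import subprocess

def ipmi(bmc_host, user, password, *args):
    """Run one ipmitool command against a blade's BMC over IPMI-over-LAN."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", bmc_host,
           "-U", user, "-P", password] + list(args)
    subprocess.run(cmd, check=True)

def pxe_boot(bmc_host, user="admin", password="changeme"):
    # Force the next boot from the network, then power-cycle the blade
    # so it picks up the Fuel master's DHCP/PXE offer.
    ipmi(bmc_host, user, password, "chassis", "bootdev", "pxe")
    ipmi(bmc_host, user, password, "chassis", "power", "cycle")

if __name__ == "__main__":
    # Hypothetical BMC addresses for the controller and compute blades.
    for bmc in ["10.0.0.5", "10.0.0.15"]:
        pxe_boot(bmc)
```

This is what makes the approach hardware-agnostic: anything that speaks standard IPMI (HP iLO, Dell iDRAC) accepts the same two commands.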
14:07:43 <[1]JonasB> Stefan: So then we can try to install Fuel with libvirt and the rest with IPMI on Monday
14:08:29 <stefan_berg> [1]JonasB: Yes, looking forward to it. I just need to dig up a network diagram to find the jumphost.
14:09:03 <[1]JonasB> #info On Monday we will try to install Fuel with libvirt and deploy the rest with IPMI in Eri Lab
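The libvirt half of that plan amounts to creating a VM on the jumphost with the Fuel ISO attached as boot media. A minimal sketch, assuming virt-install is available; the ISO path, network name, and sizing are illustrative placeholders:

```python
import subprocess

# Hypothetical paths/names; adjust to the jumphost's actual layout.
FUEL_ISO = "/var/lib/images/fuel.iso"
ADMIN_NET = "fuel_admin"  # libvirt network bridged to the blades' PXE NICs

subprocess.run([
    "virt-install",
    "--name", "fuel-master",
    "--ram", "8192",
    "--vcpus", "4",
    "--disk", "size=100",
    "--cdrom", FUEL_ISO,           # boot the installer from the ISO
    "--network", "network=" + ADMIN_NET,
    "--os-variant", "rhel6",       # the Fuel master is CentOS 6 based
    "--noautoconsole",
], check=True)
```

Once the VM finishes installing, the IPMI adapter can PXE-boot the target blades against it.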
14:09:16 <stefan_berg> I only need two blades then, and those I already have. The existing Fuel server, lmcdasm - is it on the jumphost?
14:09:59 <lmcdasm> ok
14:10:04 <[1]JonasB> Stefan: which blades do you have?
14:10:04 <lmcdasm> uhh.. lemme see
14:10:20 <lmcdasm> for Stefan's stuff - blades 5, 8, 13, 14, 15 (and however Szi has split them up)
14:10:27 <lmcdasm> I didn't set up a Fuel on those - 'cause you have them
14:10:36 <lmcdasm> the Fuel that is running is on blade 7 and used for bare metal
14:10:37 <stefan_berg> I suggest I go with 5 + 15 - OK, Szilard?
14:10:46 <lmcdasm> I was gonna turn that off in about 10 minutes so you don't see it anymore
14:10:48 <SzilardCserey_ER> 8, 13 for me
14:10:54 <lmcdasm> and you can bring up a FUEL wherever you like
14:11:02 <SzilardCserey_ER> I basically have all my stuff on 13
14:11:03 <lmcdasm> ('cause the Fuel for bare metal is on an ESXi VM)
14:11:11 <stefan_berg> Cool, 5 + 15 is easy to remember as well. And the jumphost is on another blade?
14:11:13 <[1]JonasB> Stefan, you need three, don't you: one jumphost + 1 controller + 1 compute?
14:11:22 <lmcdasm> Stefan - I don't know about your jumphost.
14:11:28 <lmcdasm> you can "use" the bare metal ESXi one
14:11:41 <lmcdasm> you can build a new one on blade 14 if you like (libvirt) or whatever
14:11:43 <lmcdasm> up to you
14:12:11 <stefan_berg> Ah, OK, I thought it was already existing. That's cool, I'll set it up on blade 14 then.
14:12:38 <lmcdasm> ya.. the question is one of "context".. there are lots (about 10 Fuel servers) running inside that sandbox now
14:12:45 <stefan_berg> Actually quite good, then I can drive the deploy from there and it's already a "CI blade".
14:12:46 <lmcdasm> however, for your blades, since they are bare metal
14:12:52 <lmcdasm> they are tied to the FLB NICs (for booting)
14:13:01 <stefan_berg> en1 and en2?
14:13:04 <lmcdasm> so you can see the Fuel VM that is used for the HA bare metal now - 'cause of networking
14:13:14 <lmcdasm> ya.. if that's what Ubuntu calls them
14:13:21 <[1]JonasB> #info Stefan will use blades 5, 14, 15 for the Monday test
14:13:23 <lmcdasm> the ones you can't boot from are p1px and p2px
14:13:35 <lmcdasm> let's update this
14:13:57 <lmcdasm> on the etherpad
14:14:01 <lmcdasm> I will do it
14:14:12 <[1]JonasB> lmcdasm: great
14:14:23 <stefan_berg> I think we need a small network plan here. Oh, that's great lmcdasm!
14:14:23 <[1]JonasB> Jose: are you online?
14:14:44 <SzilardCserey_ER> agree, network plan would be wonderful :)
14:14:58 <stefan_berg> lmcdasm: (And you are referring to the non-mezzanine NICs, right? They are then en1 and en2, yes.)
14:15:34 <lmcdasm> stefan_berg - ok - they change depending on the OS :) - RHEL calls 'em something else.. but no biggy.. so long as you know what NICs you need :)
14:16:45 <lmcdasm> Szi - check this out (not perfect) - http://elxjtqrs22-wv.ki.sw.ericsson.se:9001/p/montreal
14:17:00 <SzilardCserey_ER> thanks
14:17:41 <jose_lausuch> hey sorry
14:17:54 <jose_lausuch> not many updates today
14:17:59 <jose_lausuch> struggling with installation
14:18:05 <lmcdasm> #info updated the doc.
14:18:10 <[1]JonasB> #topic testing
14:18:28 <[1]JonasB> Jose: What problem do you have?
14:18:34 <lmcdasm> #info - note - Jose is trying a Multi-HA on a single blade installation
14:18:41 <jose_lausuch> #info created http://artifacts.opnfv.org/functest/docs/functest.html
14:19:02 <jose_lausuch> #info still discussing how to provide the functest container
14:19:08 <lmcdasm> #info - so there will be some challenges there - we're adding a new NTP there to accommodate, and Jose can explain the rest of the installation issues
14:19:29 <lmcdasm> #info - got the container info and I have Rally installed in a container.. should be able to deliver it to the group sometime today for them to take a look at
14:19:42 <jose_lausuch> #info no problems really, just had a bad installation and had to redeploy with some additional settings
14:20:03 <jose_lausuch> #info I will try the container in BL9
14:20:06 <[1]JonasB> Jose:Ok, great
14:20:27 <[1]JonasB> lmcdasm: How will you deliver the container, a download?
14:20:47 <lmcdasm> #info - first I will just put it up on our public net server for Jose and team to test
14:20:54 <lmcdasm> when they are happy ('cause I'm sure it won't work 100% the first time)
14:20:59 <lmcdasm> then we will add a stub in the build system
14:21:11 <lmcdasm> and it can be "built" as they need (Dockerfile will be added into Git somewhere)
14:21:13 * stefan_berg likes the idea of building it
14:21:23 <[1]JonasB> Ok
14:21:26 <lmcdasm> and then we can sort out when it gets called (as part of Fuel or something bigger)
14:21:55 <[1]JonasB> Anything more
14:21:58 <jose_lausuch> shall we add the Dockerfile to the functest repo?
14:22:00 <[1]JonasB> ?
14:22:02 <lmcdasm> since in my mind, I'm working towards a "push button" that delivers not just Fuel (and deployment) but the testing VMs as well, so we have an entire CI chain built on the fly (maybe I'm dreaming)
14:22:11 <lmcdasm> sure Jose.. wherever you think is best
14:22:16 <jose_lausuch> ok
14:22:29 <lmcdasm> once you guys are happy with it we can discuss where the best fit is
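The build-system stub lmcdasm describes could be as small as a script that builds the image from the repo's Dockerfile and runs it. A minimal sketch, assuming the docker CLI is on the build host; the image tag and directory are hypothetical until the Dockerfile's location in the functest repo is settled:

```python
import subprocess

# Hypothetical locations; the real path depends on where the
# Dockerfile ends up in the functest repo.
FUNCTEST_DIR = "functest/docker"
IMAGE_TAG = "opnfv/functest:latest"

def build_functest_image():
    """Build the functest container image from its Dockerfile."""
    subprocess.run(["docker", "build", "-t", IMAGE_TAG, FUNCTEST_DIR],
                   check=True)

def run_functest_image():
    # --rm removes the container once the test run finishes.
    subprocess.run(["docker", "run", "--rm", IMAGE_TAG], check=True)

if __name__ == "__main__":
    build_functest_image()
    run_functest_image()
```

Keeping the Dockerfile in Git means any deployment can rebuild the container on demand instead of pulling a pre-baked binary.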
14:22:38 <jose_lausuch> #info vPing script added to the repo for review
14:22:59 <jose_lausuch> ok
14:23:04 <[1]JonasB> jose: Add all of us for review!
14:23:16 <jose_lausuch> who "all" ? :)
14:23:25 <lmcdasm> Szi, Stefan, me, Jonas :)
14:23:28 <lmcdasm> all
14:23:37 <[1]JonasB> exactly ;-)
14:23:41 <jose_lausuch> ok
14:23:58 <[1]JonasB> Anything more?
14:23:59 <lmcdasm> and of course, Tim and Dan if you want them to look at it for the Foreman side of things
14:24:07 <lmcdasm> yes - Jonas one thing :)
14:24:17 <[1]JonasB> lmcdasm: go
14:25:15 <lmcdasm> I would like to tear down blades 11 and 12 (nested) and add them to the IMS core setup (since I'm gonna be tight on space) if that's ok - and secondly, I need you and Chris to follow up with Ray and team about the OpenStack Summit and how this IMS core is gonna be presented, etc. - since we "aren't" really earmarked for anything - when you can :)
14:25:23 <jose_lausuch> sorry, vPing got merged already, my mistake..
14:25:59 <[1]JonasB> lmcdasm: I'll talk to Chris
14:26:08 <lmcdasm> great.
14:26:21 <[1]JonasB> lmcdasm: Who is using 11 and 12?
14:26:22 <lmcdasm> for the blades - we can leave them till Friday, but I will take them on Monday?
14:26:34 <lmcdasm> currently no-one officially
14:26:46 <[1]JonasB> lmcdasm: go ahead
14:26:48 <lmcdasm> ack
14:26:52 <lmcdasm> updating the page now as well
14:26:55 <[1]JonasB> #topic holidays
14:27:23 <[1]JonasB> #info Friday is a public holiday in Sweden; Thu is normally a half day in Sweden
14:27:43 <[1]JonasB> #info Jonas will be away on Thu, Fri
14:27:53 <[1]JonasB> Rest of you?
14:27:55 <SzilardCserey_ER> #info in Hungary we have only Friday as a public holiday
14:27:57 <mskalski> in Poland Friday is also a holiday
14:28:02 <stefan_berg> #info Stefan away Thu, Fri as well.
14:28:03 <lmcdasm> #info - Daniel off on friday - in the office on Thursday
14:28:16 <jose_lausuch> #info Jose away Fri
14:28:20 <mskalski> #info Michal off on Friday
14:28:29 <[1]JonasB> Great!
14:28:32 <SzilardCserey_ER> #info Szilard off Fri
14:28:34 <lmcdasm> #info - using Jonas' credit card to purchase libations for the Friday -
14:28:50 <[1]JonasB> ;-)
14:28:59 * lmcdasm expects to see pictures of Stefan Berg's burning drunken Swedish man
14:29:10 <mskalski> :)
14:29:14 <stefan_berg> haha!
14:29:19 <[1]JonasB> Anything more?
14:29:32 <morgan_orange> just one thing regarding testing
14:29:41 <morgan_orange> you confirmed the Fuel-based installation will integrate Heat?
14:29:43 <[1]JonasB> Hi Morgan
14:29:50 <morgan_orange> it is required by Metaswitch for vIMS
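A quick smoke test for that Heat requirement would be to drive a trivial stack through python-heatclient once the Fuel deploy is up. A minimal sketch, assuming python-heatclient is installed; the endpoint, token, and template are illustrative placeholders:

```python
from heatclient.client import Client

# Hypothetical endpoint and token; in practice both come from Keystone.
heat = Client('1', endpoint='http://172.16.0.2:8004/v1/<tenant_id>',
              token='<auth_token>')

# Trivial HOT template: one Neutron network, just enough to prove the
# Heat engine accepts and converges a stack end to end.
TEMPLATE = """
heat_template_version: 2013-05-23
resources:
  test_net:
    type: OS::Neutron::Net
    properties:
      name: heat-smoke-test
"""

heat.stacks.create(stack_name='heat-smoke-test', template=TEMPLATE)
for stack in heat.stacks.list():
    print(stack.stack_name, stack.stack_status)
```

If this reaches CREATE_COMPLETE, the Metaswitch vIMS templates at least have a working orchestrator to land on.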
14:30:00 <morgan_orange> Hi All, and sorry for the wild merge of vPing..
14:30:21 <morgan_orange> just to be precise, it is a very basic test that probably already exists in Tempest
14:30:30 <[1]JonasB> morgan: yes we can do that, anything more they need in terms of networking or other stuff?
14:30:42 <morgan_orange> just using the Python client to boot a VM, get an IP, and run ping in cloud-init
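That vPing flow can be sketched with python-novaclient roughly as below; the merged script in the functest repo is the reference, and the credentials, image, flavor, and target IP here are placeholders:

```python
import time
from novaclient import client as nova_client

# Hypothetical credentials; in practice read from the environment.
nova = nova_client.Client("2", "admin", "secret", "admin",
                          "http://192.168.0.2:5000/v2.0")

TARGET_IP = "10.0.0.10"  # IP of the VM we expect to reach
userdata = ("#!/bin/sh\n"
            "ping -c 3 %s && echo 'vPing OK' || echo 'vPing KO'\n"
            % TARGET_IP)

server = nova.servers.create(
    name="vping-test",
    image=nova.images.find(name="cirros"),
    flavor=nova.flavors.find(name="m1.small"),
    userdata=userdata,  # executed by cloud-init on first boot
)

# Poll the console log until the cloud-init script reports a verdict.
for _ in range(60):
    console = nova.servers.get(server.id).get_console_output()
    if "vPing OK" in console or "vPing KO" in console:
        print("result:", "OK" if "vPing OK" in console else "KO")
        break
    time.sleep(10)
```

The test needs nothing beyond the Nova API and a console log, which is why it makes a good first gate before the heavier Tempest and Rally suites.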
14:31:07 <morgan_orange> Heat was the main concern of Andrew and Martin
14:31:17 <morgan_orange> nothing special on network side
14:31:31 <lmcdasm> hey Morgan
14:31:35 <[1]JonasB> What about storage?
14:31:38 <morgan_orange> they boot an IMS VM (just the basic core) + a tester VM (SIPp with a scenario) and play a basic SIP scenario
14:31:54 <lmcdasm> for Andrew and Martin - they were concerned 'cause I didn't open up Horizon and Heat for them (network-wise)
14:32:02 <lmcdasm> but that can be done, the tools are all there
14:32:06 <morgan_orange> as far as I understood they use external images
14:32:11 <lmcdasm> that's fine
14:32:19 <lmcdasm> they can either SCP them to the node
14:32:26 <lmcdasm> or they can pull (one-way) from the network
14:32:47 <lmcdasm> I'm still looking for a network diagram of what they are building so I can ensure that they have what they need (I have a list of "things")
14:33:03 <morgan_orange> they are shy about sharing docs ...
14:33:06 <lmcdasm> one question - which environment are they going to do this on - an E/// one we set up?
14:33:15 <morgan_orange> I asked them to comment on/modify what I initiated in the doc
14:33:32 <lmcdasm> hehe.. I know - but I mean, it's a hard thing to take when they say "the environment isn't usable" and we try to make it right for them in the dark :)
14:33:45 <morgan_orange> We suggested they use one E/// POD, then LF
14:33:51 <morgan_orange> sorry I have to speak
14:33:55 <lmcdasm> hehe.. no sweat
14:33:58 <morgan_orange> meeting with NTT started
14:34:05 <[1]JonasB> going once
14:34:09 <lmcdasm> Jonas
14:34:16 <[1]JonasB> Yes
14:34:26 <lmcdasm> are the tests that Morgan is talking about now gated on the LF POD installation?
14:34:45 <[1]JonasB> Don't know
14:34:46 <lmcdasm> I just don't want us to get Andrew and the guys ready when we don't have a system for them yet - are they aware that LF POD1 isn't ready yet?
14:34:47 <lmcdasm> ok
14:35:03 <lmcdasm> I'll send a mail to Morgan to follow up and sync in Joseph G. (our man!).
14:35:08 <morgan_orange> no, they are waiting for the green light
14:35:12 <lmcdasm> so they can be in the loop together as that POD comes up
14:35:16 <lmcdasm> great.. thanks Morgan!
14:35:24 <[1]JonasB> going once
14:35:33 <[1]JonasB> going twice
14:35:50 <[1]JonasB> Thanks all and have good holidays!
14:35:57 <[1]JonasB> #endmeeting