14:58:04 <trevor_intel> #startmeeting OPNFV Pharos
14:58:04 <collabot> Meeting started Wed Sep 9 14:58:04 2015 UTC. The chair is trevor_intel. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:58:04 <collabot> Useful Commands: #action #agreed #help #info #idea #link #topic.
14:58:04 <collabot> The meeting name has been set to 'opnfv_pharos'
14:58:18 <trevor_intel> #info trevor_intel
14:58:47 <trevor_intel> Proposals for agenda topics ...
14:59:11 <trevor_intel> 1. Labs for CI resources
14:59:20 <trevor_intel> 2. MAAS PoC update
14:59:28 <trevor_intel> 3. Rls B milestones
14:59:37 <trevor_intel> 4. Jira backlog
14:59:53 <fdegir> #info Fatih Degirmenci
15:00:19 <uli-k> #info Uli Kleber
15:00:20 <trevor_intel> Please comment on agenda ... what do you think we should discuss?
15:00:53 <uli-k> sounds good.
15:00:56 <fdegir> good to start with
15:01:55 <trevor_intel> Anybody who can represent MAAS and give us an update?
15:02:14 <trevor_intel> Let's defer it then
15:02:44 <trevor_intel> Start with CI resources ... email from Morgan and Fatih?
15:02:59 <fdegir> morgan_orange: ping
15:03:09 <morgan_orange> fdegir: ping
15:03:31 <fdegir> morgan_orange: can you join the meeting?
15:03:34 <fdegir> pharos
15:03:48 <fdegir> we're starting to talk about the mail you sent
15:03:48 <morgan_orange> I can but cyril_auboin_ora is here for Orange
15:04:03 <fdegir> ok, welcome cyril_auboin_ora
15:04:05 <morgan_orange> he will be the Orange contributor to pharos
15:04:08 <cyril_auboin_ora> thanks ;)
15:04:21 <morgan_orange> but I can join this week :)
15:04:21 <fdegir> anyway, I agree with what morgan_orange stated
15:04:22 <trevor_intel> welcome cyril_auboin_ora:
15:04:30 <morgan_orange> #info Morgan Richomme
15:04:32 <cyril_auboin_ora> thank you
15:04:47 <fdegir> to summarize what morgan_orange said
15:04:49 <morgan_orange> #info Pharos wiki page refactored a little bit
15:04:57 <fdegir> we need to know how many pods each lab provides
15:05:04 <fdegir> and what they are purposed for
15:05:15 <morgan_orange> #link http://wiki.opnfv.org/pharos
15:05:23 <fdegir> are they only available for metal deployment
15:05:33 <morgan_orange> #info proposal to add columns to have visibility on the number of PoDs and their usage
15:05:40 <morgan_orange> #info to optimize resources..
15:05:48 <fdegir> or they can provide resources for other activities such as build, virt deploy, etc
15:06:06 <morgan_orange> #info it is just a proposal and it is up to the Pharos project to decide to keep them or not
15:06:23 <fdegir> and this is actually in line with what octopus expects/needs
15:06:43 <fdegir> we need to know how a certain slave can be used
15:06:46 <trevor_intel> As the Intel lab owner I don't know what the requirements are for "other" activities
15:07:02 <trevor_intel> So I don't know what to offer
15:07:14 <fdegir> an example could be the repurposed LF POD1
15:07:24 <fdegir> we will get 6 servers from POD1
15:07:34 <fdegir> 2 of them will be ubuntu and used for builds
15:07:59 <fdegir> like installer iso builds, verify jobs that do builds such as vswitchperf
15:08:22 <fdegir> the rest of the servers, 4 of them, are centos and will be used for virt deployments
15:08:40 <fdegir> so these are the types of activities we need resources for
15:08:47 <trevor_intel> And who has access to them?
15:08:50 <fdegir> single/standalone nodes
15:08:56 <fdegir> only ci
15:08:58 <lmcdasm> #info daniel smith
15:09:02 <fdegir> for the ci stuff
15:09:10 <morgan_orange> so practically, Intel POD2 (if no longer used by the redhat team, to be confirmed) could be used for CI activities
15:09:12 <fdegir> and ci people to install needed tools, packages etc
15:09:24 <trevor_intel> Yes that is fine
15:09:58 <fdegir> trevor_intel: does this mean we can break the pod down and connect each server separately to jenkins?
15:10:08 <fdegir> to opnfv jenkins I mean
15:10:12 <trevor_intel> But I feel Octopus should document all the CI requirements for a community lab including support needs
15:10:25 <trevor_intel> Yes anything is possible :)
15:10:35 <trevor_intel> If we know what is needed
15:10:59 <fdegir> trevor_intel: https://wiki.opnfv.org/pharos_rls_b_spec#ci_requirements_for_pharos_labs
15:11:07 <trevor_intel> We have 6 PODs and only 4 are being used ... and we are building another 3
15:11:08 <fdegir> these are the initial requirements
15:11:39 <fdegir> we can add more/concrete reqs if we're allowed to grab one of the pods
15:12:12 <fdegir> like os, storage, connectivity, etc needs
15:13:23 <trevor_intel> Ok that makes sense ... I propose that you take a POD and document requirements as it evolves ... other labs can use that to set up a "CI POD" too
15:13:31 <lmcdasm> i think that you won't be able to define the POD requirements
15:13:32 <fdegir> trevor_intel: who should we contact in order to take the pod?
15:13:39 <fdegir> is it you or someone else?
15:13:44 <lmcdasm> you will only be able to define the requirements for the hook-in node (the Jumphost/GW)
15:13:55 <fdegir> lmcdasm: these are standalone servers
15:13:59 <lmcdasm> ahh.. ok
15:14:02 <lmcdasm> sorry :)
15:14:03 <fdegir> to be used for builds, etc
15:14:07 <lmcdasm> gotcha :)
15:14:08 <fdegir> like ericsson-build
15:14:11 <lmcdasm> apologies
15:14:11 <fdegir> :)
15:14:11 <trevor_intel> Only 1 network needed?
15:14:14 <lmcdasm> (i came late)
15:14:14 <fdegir> np
15:14:25 <fdegir> trevor_intel: yes
15:14:33 <fdegir> since only virt deploys will run on them
15:14:58 <fdegir> or builds
15:15:08 <fdegir> or fw verification activities
15:15:21 <fdegir> for yardstick or other frameworks to run fw functional testing
15:15:27 <trevor_intel> Please send me an email stating the names of everybody who needs access and any basic requirements ... networks, storage, OS's etc.
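The POD1 split fdegir describes (two Ubuntu servers attached to OPNFV Jenkins as standalone build slaves, four CentOS servers for virt deployments) is essentially a matter of labeling each server by purpose. A minimal sketch of that bookkeeping in Python; the hostnames, field names, and layout are illustrative assumptions, not actual Jenkins configuration:

```python
# Sketch of the LF POD1 repurposing described above: each server becomes a
# standalone Jenkins slave, tagged by OS and by the kind of job it may run.
# Hostnames and labels are hypothetical, not real OPNFV infrastructure data.

SERVERS = {
    "lf-pod1-1": {"os": "ubuntu", "purpose": "build"},        # installer ISO builds
    "lf-pod1-2": {"os": "ubuntu", "purpose": "build"},        # verify jobs (e.g. vswitchperf)
    "lf-pod1-3": {"os": "centos", "purpose": "virt-deploy"},
    "lf-pod1-4": {"os": "centos", "purpose": "virt-deploy"},
    "lf-pod1-5": {"os": "centos", "purpose": "virt-deploy"},
    "lf-pod1-6": {"os": "centos", "purpose": "virt-deploy"},
}

def slaves_for(purpose):
    """Return the slave hostnames allowed to run jobs of the given purpose."""
    return sorted(h for h, meta in SERVERS.items() if meta["purpose"] == purpose)

print(slaves_for("build"))
print(slaves_for("virt-deploy"))
```

In a real setup the same mapping would live in Jenkins node labels rather than a script; the sketch only shows the intended partitioning.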
15:15:37 <fdegir> trevor_intel: will do, thanks
15:15:44 <lmcdasm> basically each of your nodes will have a "public IP"
15:15:52 <lmcdasm> and then inside the nodes all the internals will be loops on the nodes
15:15:58 <lmcdasm> so each will have a container network inside
15:16:04 <fdegir> right
15:16:08 <lmcdasm> for the OS storage reqs - Fatih - are you doing HA
15:16:14 <lmcdasm> or are you doing different models or?
15:16:29 <fdegir> you're asking questions that are outside my comfort zone
15:16:33 <lmcdasm> hehe
15:16:35 <lmcdasm> ok - no sweat
15:16:41 <lmcdasm> my point was really that in setting requirements
15:16:42 <fdegir> I'll bug you lmcdasm to identify that type of detail
15:16:47 <fdegir> and reqs
15:16:49 <lmcdasm> we can come up with models
15:16:57 <lmcdasm> so that you have a lot more people that can participate
15:16:57 <fdegir> and then contact trevor_intel
15:17:00 <lmcdasm> sounds good
15:17:07 <lmcdasm> we can detail it later if you are still high level'in it
15:17:09 <lmcdasm> :)
15:17:27 <fdegir> yep
15:17:39 <trevor_intel> Enough progress today on CI resource topic?
15:17:47 <lmcdasm> a question
15:17:54 <lmcdasm> on there (it's related to other stuff so relevant)
15:18:05 <trevor_intel> I only answer questions :)
15:18:06 <lmcdasm> for the CI pipeline are you just doing Jenkins slaving
15:18:22 <fdegir> lmcdasm: that is the 2nd type of CI resource
15:18:25 <lmcdasm> or are you going to set up a tunnel between the central Jenkins node as well (if not already in place)
15:18:27 <lmcdasm> ok
15:18:42 <fdegir> and we need help there to identify the pod setup
15:18:49 <lmcdasm> ok
15:18:51 <lmcdasm> cool
15:18:53 <lmcdasm> we can sort that
15:18:55 <fdegir> so we can have a jumpserver
15:19:01 <fdegir> like the lf pods
15:19:02 <lmcdasm> ya.. i was thinking (my slides)
15:19:07 <lmcdasm> about how we want to set up the central node
15:19:13 <lmcdasm> connections through F/W / etc.
15:19:16 <lmcdasm> and how that will work
15:19:17 <fdegir> stage is yours :)
15:19:19 <lmcdasm> hehe
15:19:26 <lmcdasm> hehe
15:19:33 <lmcdasm> well - trevor is running the meeting
15:19:40 <lmcdasm> so i don't wanna hijack it :P
15:19:50 <trevor_intel> Think we move on, ok?
15:19:53 <lmcdasm> ya
15:19:54 <trevor_intel> ha ha
15:19:55 <fdegir> ok
15:20:10 <trevor_intel> Now do we have anybody from MAAS PoC?
15:20:12 <lmcdasm> (plus i don't have a solution just yet - some ideas but that's it)
15:20:16 <iben__> #info Iben
15:20:22 <iben__> On my phone
15:20:37 <iben__> Driving
15:20:48 <trevor_intel> Iben!
15:20:50 <iben__> No audio. Right?
15:21:04 <trevor_intel> Pay attention (to driving)
15:21:09 <trevor_intel> No audio
15:21:15 <iben__> Dropping off kids at school.
15:21:20 <lmcdasm> ya.. don't drive and text
15:21:23 <lmcdasm> join back after
15:21:25 <lmcdasm> :)
15:21:36 <iben__> Will be stopped soon. Okay. Maybe 5 minutes
15:21:37 <fdegir> I think Narinder created an etherpad for maas poc
15:21:39 <trevor_intel> Rls B milestones?
15:21:44 <fdegir> can't find it
15:21:52 <trevor_intel> https://etherpad.opnfv.org/p/PharosMAASPoc
15:22:16 <fdegir> guess we have to postpone until iben__ comes back
15:22:22 <trevor_intel> Let's defer MAAS for now ... would rather we focus on Rls B
15:22:34 <trevor_intel> Ok?
15:22:36 <iben__> Can someone call me in? 4087824726
15:22:56 <trevor_intel> iben__: There is no audio
15:22:58 <fdegir> iben__: irc only
15:23:16 <trevor_intel> #topic Rls B planning
15:24:33 <trevor_intel> Dan Smith's work on milestones?
15:25:11 <trevor_intel> Any comments?
15:25:59 <trevor_intel> Dan can you walk us through your proposal?
15:26:35 <uli-k> Do you have a link?
15:27:29 <lmcdasm> hey there.
15:27:40 <lmcdasm> sorry - was in a chat in another window
15:27:46 <lmcdasm> for the PPT i sent around - i'm going to do some more work on it
15:28:13 <lmcdasm> but basically for our B-Release it summarizes what we discussed last week (uli - i forgot to add you on the list - apologies - maybe someone can send it while i type)
15:28:50 <lmcdasm> anyway - i think for a realistic goal - the idea is really, in a nutshell, to have a single node (in LF or somewhere) that has some established "tunnels" (i didn't pick any of the tech - that is the first ms - to define the connection reference)
15:28:54 <trevor_intel> I will send to Uli
15:29:02 <lmcdasm> to Jumphosts/gateways in each of the community labs
15:29:28 <lmcdasm> we will then have a very basic connection that can tell us something simple (like an IPMI list of hosts) or something that shows what "resources each lab has"
15:29:39 <lmcdasm> this setup will allow us to grow later and send "commands" to the linked nodes
15:29:48 <lmcdasm> that can then install installers (foreman, maas, fuel, etc)
15:29:52 <lmcdasm> or do other things -
15:30:05 <lmcdasm> but i think for B-release the idea of establishing the requirements for a jumphost
15:30:20 <lmcdasm> some functions that it has to provide (hypervisors, PXE, some tools to ping and vpn, etc)
15:30:39 <lmcdasm> and defining how we want labs to connect in a secure manner is a good three-document MS we can do crowdsourcing the community and lab owners
15:31:04 <lmcdasm> and then for a deliverable - if we can have this central node set up and defined (in LF - can be a VM even) and some links to a lab (say Trevor's and ours) in a secure way as a template
15:31:21 <fdegir> +1 to these things
15:31:23 <fdegir> just a node
15:31:24 <lmcdasm> that would be something pretty good (and i'll make the cheesy page that shows the connections and a healthcheck - ping to the "connected labs")
15:31:39 <fdegir> Jenkins doesn't connect to labs directly
15:31:40 <lmcdasm> it's not fancy, but if we do the plumbing right it will allow extension for others to use the links later
15:31:45 <fdegir> instead, jumphosts connect to jenkins
15:31:54 <lmcdasm> right
15:32:00 <fdegir> to simplify a bit
15:32:01 <lmcdasm> so the gateway/jumphost could be a jenkins slave
15:32:05 <fdegir> yes
15:32:05 <lmcdasm> could be running an installer
15:32:08 <lmcdasm> could be doing both
15:32:17 <lmcdasm> the idea is really to make the workload agnostic
15:32:29 <iben__> Jenkins has a time sync page that's nice for the slaves.
15:32:36 <fdegir> that's the setup we have for LF POD
15:32:39 <lmcdasm> but make the link and hookup flexi enough to allow jenkins or installers or whatever to eventually "send orders" to labs to do "things"
15:32:43 <iben__> And we are doing an ipv6 dashboard too.
15:32:49 <lmcdasm> sweet.. we can use that as a model Iben
15:32:55 <lmcdasm> cool.. send it along and we can see
15:33:12 <lmcdasm> i'm completely fine with stealing UI and css from jenkins pages ;P
15:33:22 <iben__> IPv6 we just want to piggyback on existing dashboard.
15:33:38 <trevor_intel> Do we all agree with this as our theme for rls b ... "Connected community labs with visible capability and deployment/usage monitoring"?
15:33:47 <iben__> For jenkins the functionality you want may already be there.
15:34:04 <iben__> Just scrape the data
15:34:11 <fdegir> trevor_intel: +1
15:34:17 <iben__> It's already monitoring lab connectivity.
15:34:26 <fdegir> iben__: I can do magic when the labs are connected to jenkins
15:34:35 <fdegir> we don't need to mess around with jenkins dashboard
15:34:44 <fdegir> but if we can't get anything set up
15:34:47 <fdegir> we fall back to it
15:35:15 <trevor_intel> Jenkins dashboard doesn't tell much other than it's connected?
15:35:18 <iben__> So this new dashboard is to be independent of that?
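The "cheesy page with a healthcheck, ping to the connected labs" that lmcdasm proposes could be sketched as below. The lab names, jumphost addresses, and the choice of a TCP probe on the SSH port are assumptions for illustration only, not an agreed Pharos design:

```python
import socket

# Sketch of the connected-labs healthcheck idea: probe each lab's jumphost
# and report CONNECTED or DOWN. Hosts below are hypothetical placeholders.

LABS = {
    "lf-pod2":    "jump.example.org",
    "intel-pod2": "jump2.example.org",
}

def tcp_probe(host, port=22, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def lab_status(labs, probe=tcp_probe):
    """Map each lab name to CONNECTED or DOWN using the given probe function."""
    return {name: ("CONNECTED" if probe(host) else "DOWN")
            for name, host in labs.items()}
```

Passing the probe in as a parameter keeps the check testable without real network access, and leaves room to swap in IPMI queries or an application-level check later, in line with not locking the design to one mechanism.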
15:35:22 <lmcdasm> Iben - it's monitoring the connectivity
15:35:26 <iben__> Time sync also
15:35:37 <lmcdasm> but it doesn't allow for the command line / HW level processing and orders we will want to send to the gateway/jumphost
15:36:02 <lmcdasm> i would like to steer away from locking ourselves into using a Jenkins slave approach for all lab orders we want to send .. i'm not saying later that we cannot
15:36:14 <iben__> Gotcha
15:36:17 <iben__> Good idea
15:36:24 <lmcdasm> but i would like to leave it open and small for now - i think that an application-free link between labs (and Jenkins over top) is a good goal for now
15:36:50 <lmcdasm> but again - just me
15:37:19 <lmcdasm> if we all say "well that's not that fancy cause jenkins has all the labs connected and we want to pursue Jenkins as the lab central node" then i'm fine with it too :)
15:37:51 <fdegir> lmcdasm: I agree with the milestone list
15:38:01 <fdegir> but I think a developer usage specific milestone is missing
15:38:05 <fdegir> on slide 6
15:38:11 <fdegir> all of them are named CI
15:38:54 <fdegir> do we want to follow which labs are used/providing developer resources?
15:38:55 <lmcdasm> ya -
15:38:58 <lmcdasm> i didn't try DI usage
15:39:02 <lmcdasm> cause i wasn't sure how far to go
15:39:04 <lmcdasm> :)
15:39:06 <fdegir> :)
15:39:13 <fdegir> and the other comment is
15:39:14 <lmcdasm> i think that it's going to be hard to define a standard for the DI role
15:39:20 <lmcdasm> cause people are going to use their labs for what they want
15:39:28 <fdegir> please feel free to change the stories created in JIRA
15:39:39 <fdegir> and put the stories you defined on slides
15:39:43 <lmcdasm> i do think it's important that people understand the CI role is to really give up HW and let it be under the central node's command
15:39:55 <fdegir> yep
15:39:58 <lmcdasm> hehe - i can't "change" anything in JIRA (open ticket :P).
15:40:03 <fdegir> :(
15:40:17 <lmcdasm> so i think that one of the tricks with this setup - is that we need to really be careful to explain about commitment of resources.
15:40:36 <lmcdasm> since it will be a pain in the ass to set up something and then when the person says i wanna try something else - tear down the setup
15:40:42 <lmcdasm> we can't avoid it - but we can communicate about it :)
15:41:45 <fdegir> yep
15:42:21 <trevor_intel> We need a template for each lab to fill in that includes resource commitments and then make it visible through the dashboard
15:42:45 <iben> #info Iben Rodriguez - Spirent - now on laptop
15:43:03 <trevor_intel> Ok so next steps for milestones?
15:43:11 <iben> by resources do we mean people or hardware?
15:43:20 <iben> or both?
15:43:34 <trevor_intel> both IMO
15:43:42 <iben> also - typical sla - response time expectations would be nice
15:43:57 <iben> some labs are really run on a “best effort” basis with minimal expectations
15:44:05 <uli-k> I think we need more than a template.
15:44:13 <iben> others are fully staffed around the clock
15:44:23 <uli-k> Lab resources might change over time - hopefully only in one direction.
15:44:25 <trevor_intel> uli-k: yes template is the bare minimum
15:45:05 <trevor_intel> The dashboard is our tool to provide some accountability
15:45:18 <lmcdasm> agreed
15:45:34 <lmcdasm> we also need to, i think, provide some options to people - as stated, we have low-end labs and high-end labs
15:45:39 <uli-k> OK so the data from the template will be put on the dashboard?
15:45:48 <lmcdasm> i think that if we are to provide a category of CI lab types (doesn't have to be a lot)
15:46:00 <lmcdasm> but i think lumping everyone's labs together to find a common ground might be tough
15:46:31 <iben> also there are many “private” labs being used for OPNFV testing and development with different projects
15:46:55 <iben> we should give a way for them to be listed but acknowledge that they are not publicly accessible (yet)
15:47:06 <lmcdasm> it's a good point
15:47:09 <iben> this way we encourage companies to open their labs
15:47:13 <uli-k> Yes.
15:47:19 <lmcdasm> what is the scope here - all labs involved in OPNFV, CI labs, ?
15:47:45 <lmcdasm> so we list them on the dashboard and have "PRIVATE" (rather than "CONNECTED/ONLINE/DOWN") beside their name on the dashboard
15:47:50 <lmcdasm> that works for me
15:47:52 <iben> for me showing what others are doing helps gain approval for funds from mgmt and execs - the pharos page is great for this now
15:47:53 <trevor_intel> uli-k: In the extreme the dashboard shows that you are meeting your SLA or there is a gap ... we should accommodate any scale of lab or level of access but just want to know reality from marketing story
15:48:16 <iben> and the jenkins slave page shows the current reality of what's really being tested now
15:48:16 <lmcdasm> so self-defined SLA?
15:48:18 <uli-k> +1
15:48:26 <fdegir> how do people book those labs?
15:48:30 <lmcdasm> in the template we have the person outline their own metrics (servers available, etc)
15:48:34 <fdegir> don't we have something called a booking system?
15:48:36 <lmcdasm> (fatih - not decided yet)
15:48:41 <fdegir> ok
15:48:50 <lmcdasm> i would think we get 'em connected and then work on a booking system
15:48:59 <trevor_intel> There is no booking system yet
15:48:59 <lmcdasm> there is lots of OTS stuff (open source) we can look at
15:49:06 <fdegir> then it means those private labs are not private
15:49:27 <trevor_intel> What is a private OPNFV lab?
15:49:28 <lmcdasm> depending on what we want to see / do and how we want to work the "public bookings" (say from a project that wants a specific run of something) into regular runs / build pipeline
15:49:28 <fdegir> they'll be shown somewhere, up/down/booked
15:49:48 <fdegir> and it will be possible to book them when the booking system is in place
15:49:48 <trevor_intel> Let's define that ... it's not clear to me
15:49:51 <lmcdasm> well - for the ones that are private, we list 'em and they are just static on a page until it changes i would think
15:50:01 <fdegir> (booking system could be an etherpad to start with)
15:50:03 <uli-k> Depending on our definition all labs are now private....
15:50:08 <lmcdasm> agreed - i think a whole session could be devoted to the booking system
15:50:15 <lmcdasm> true Uli!
15:50:34 <lmcdasm> since everyone has the ability to trump / take down / up / no sla / etc at any time
15:50:41 <uli-k> I remember people saying all lab usage should go through jenkins.
15:50:49 <lmcdasm> so they are private - plus there is no control from a central point - maybe this is how we define private?
15:50:52 <trevor_intel> uli-k: no they are not ... labs being used for community projects are not private
15:50:55 <fdegir> i took the private lab thingy from iben's message
15:51:56 <fdegir> uli-k: putting jenkins into developer purposed labs is not really necessary
15:52:12 <iben> i sort of think of jenkins as a booking system
15:52:28 <fdegir> but if a lab is shared between ci and developers then that's valid
15:53:03 <iben> if you need to do a manual task you can “borrow” or book or request some hardware capacity - but the end goal is to get a test running in jenkins in your (my) lab
15:53:23 <iben> from there the test can be promoted to the LF core infra
15:55:37 <trevor_intel> Ok let's get back to milestones for Rls B ... what are next steps? Dan?
15:55:39 <iben> BTW - JOID project meeting starts in 5 minutes - there we will discuss mostly MaaS POC for Pharos
15:55:59 <trevor_intel> Ahh ok
15:56:02 <iben> basically the setup in intel and spirent lab with a central controller running in LF on a VM
15:56:06 <trevor_intel> I need to drop off in 4 min
15:56:14 <iben> who can get us an ubuntu VM on LF hardware?
15:56:25 <lmcdasm> i can make the request
15:56:31 <iben> sweet -
15:56:35 <lmcdasm> however, i would like a chance to refine the requirements of our Central node
15:56:45 <iben> do you have time to join next call in 3 minutes?
15:56:52 <lmcdasm> since do we know exactly what we want for example - it's easy from OS up
15:56:55 <lmcdasm> but we need to think about networking
15:57:04 <iben> we just need a simple machine with 1 network
15:57:07 <lmcdasm> actually - i'm home sick with a fever - so i'm gonna log off after this
15:57:13 <lmcdasm> well that's not true Iben
15:57:15 <iben> okay - no worries -
15:57:25 <lmcdasm> we have to think about connecting lots of different machines via a secure method
15:57:44 <iben> i may put the POC central controller in Spirent to allow it to be open to internet and have ipv6 to start off with
15:57:45 <lmcdasm> so a single NIC might not be the way (what happens when we have two remote nets using the same subnet for example - then our node is screwed)
15:57:53 <iben> when other labs have ipv6 they can use that then.
15:58:02 <lmcdasm> anyway - gimme an action to work with iben for next week
15:58:08 <lmcdasm> to define the "CENTRAL NODE" in the LF
15:58:11 <iben> this is just an admin ui server vm
15:58:14 <lmcdasm> for what we need trevor
15:58:26 <iben> it sends jobs to the remote lab machines over the internet
15:58:28 <lmcdasm> well iben - i think we are talking maybe about two different things
15:58:36 <iben> just like jenkins master machine - it's a vm - right?
15:58:36 <lmcdasm> and i don't know if you saw my milestone slides
15:58:39 <lmcdasm> cause it's not that simple
15:58:40 <lmcdasm> no
15:58:41 <lmcdasm> it's not
15:58:44 <lmcdasm> that's what we have been saying
15:58:48 <lmcdasm> we are going to set up IP level links first
15:58:50 <iben> oh wow - ok - that's nuts
15:58:51 <trevor_intel> action MAAS PoC ... Dan Smith "define the "CENTRAL NODE" in the LF"
15:58:55 <lmcdasm> then you can run jenkins slaves or something else over it
15:59:01 <lmcdasm> so that it's not a single purpose link to the labs
15:59:09 <iben> are there good docs on how the jenkins machine is set up?
15:59:09 <lmcdasm> anyway - it's fresh and new and we need more discussion
15:59:10 <lmcdasm> :)
15:59:14 <trevor_intel> #action MAAS PoC ... Dan Smith "define the "CENTRAL NODE" in the LF"
15:59:25 <iben> we might just copy that setup eventually if we like that as a best practice
16:00:00 <lmcdasm> wait wait
16:00:19 <lmcdasm> nvm.. we are out of time
16:00:20 <lmcdasm> no sweat
16:00:22 <lmcdasm> we will sort it out
16:00:53 * fdegir thinks we shouldn't mix jenkins with all the maas discussions
16:01:01 <trevor_intel> I am going to end the meeting now ... thanks everybody. Dan ... hope you feel better soon!
16:01:33 <trevor_intel> #endmeeting