14:05:06 #startmeeting OpenStack 3rd Party CI
14:05:06 Meeting started Wed Nov 2 14:05:06 2016 UTC. The chair is fdegir. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:05:06 Useful Commands: #action #agreed #help #info #idea #link #topic.
14:05:06 The meeting name has been set to 'openstack_3rd_party_ci'
14:05:13 apologies for being late
14:05:23 #topic Roll Call
14:05:35 anyone around for the meeting?
14:05:39 hi
14:05:43 hi
14:05:49 hwoarang: ping
14:05:53 you asked, we meet :)
14:05:58 hello
14:06:19 let's start then
14:06:29 #topic puppet-infracloud, openstack-ansible; way forward
14:06:48 yolanda: can you summarize what we discussed so we record it for everyone?
14:06:55 sure
14:07:01 yolanda: alternatives, reasons, preferred way etc
14:07:23 so last week we were at the OpenStack summit, and we were chatting with infra people about our third party CI efforts and the usage of puppet-infracloud
14:07:44 so far the response was not very good, and we got suggestions to start using something else, such as openstack-ansible
14:08:05 the main reason for that is that puppet-infracloud does not want to be seen as a reference install, just as a tool to quickly deploy a single cloud
14:08:10 eeps, simple cloud
14:08:40 so we are going to have trouble if we want to add more features (such as HA, OVS...) to the puppet-infracloud project
14:09:03 in a chat with Jeremy, PTL of infra, he suggested we look at other alternatives, as i said openstack-ansible
14:10:01 so the alternatives we have now are: 1. take a look at openstack-ansible and see if that fits our needs, 2. fork the current development of puppet-infracloud to match our needs, 3. look at other puppet based installers such as TripleO
14:10:31 so is puppet a soft (hard?) requirement because of the nfv installers?
14:10:59 hwoarang, so if our intention is to provide third party CI for installers, and most installers are based on puppet, it shall be a requirement
14:11:17 that's the main problem of openstack-ansible. it could deploy a working cloud, everything is done, but then we lose that feedback
14:11:28 so, is our intention to provide such a thing?
14:11:43 just to record it; our intention is to provide feedback to both communities, installers are part of it
14:11:59 we want smooth flow between communities
14:12:19 openstack-ansible is currently used in production, but mostly in Rackspace. also, no OPNFV installers use it, right?
14:12:25 nope
14:12:49 so that's the concern with it, it will be good tooling to deploy a cloud, but it will not provide value for us to detect any breakage in puppet modules that can affect installers
14:13:37 so is the plan to implement a hybrid puppet solution taking modules from both the installers and puppet-infracloud?
14:13:59 point 2, forking puppet-infracloud, covers that need. but the pain point is that we are forking an infra project, so we will differ from it quite soon, and we are going to have to implement all the wrappers for puppet-openstack modules ourselves
14:14:15 really puppet-infracloud is a simple wrapper for puppet-openstack modules
14:14:37 ok
14:14:50 so this gives us what we need; consume puppet-openstack modules
14:15:01 with some extra overhead
14:15:05 hey
14:15:15 and lack of visibility and support from upstream
14:15:31 we shall need to move that visibility
14:15:33 hey jmorgan1
14:15:39 from openstack-infra to the bifrost and puppet-openstack projects
14:15:50 bifrost ptl is very receptive
14:16:03 yes and I see bifrost as a done deal
14:16:04 and we already have that third party ci in place
14:16:20 going back and asking the same question yolanda
14:16:40 will they really refuse if we go and continue contributing to puppet-infracloud?
14:16:52 while the patches are hanging there
14:16:58 we can move on with our "fork"
14:17:06 and hope that they will accept them one day
14:17:44 can't we simply take over that project if it's abandoned? :)
14:17:45 fdegir, they may refuse some bits if they are too focused on opnfv, such as the OVS change, or if we need to add some extra features
14:18:10 can't those things be done in a modular way without impacting their main use case?
14:18:10 and for other changes such as HA, we may be blocked because the changes affect their puppet-infracloud in production, so they are not going to land
14:19:03 so i'd say that it is not a good idea to continue contributing directly to it, because of the combination of it being semi-abandoned and the blockers that we are going to hit
14:19:41 also, there is no real will for infra to add our third party ci system...
14:21:08 ok, feelings about openstack-ansible?
14:21:28 it's used in production, it has some features we need already, and so on
14:21:39 but again, it will have its own issues
14:21:48 so two concerns here: we are going to lose that feedback from puppet modules, and it is going to be difficult to adapt nfv features if most of those features are written in puppet
14:22:01 for example, if we want to test a neutron plugin at some point
14:22:50 and we will probably need to keep them to ourselves if they're not so perceptive as well
14:23:56 do you want me to ask the question? :)
14:24:00 they are receptive as far as i know, and accept contributions
14:24:14 that's the right word
14:24:20 but we can talk to them directly as well
14:25:00 should we start the conversation by you sending a mail or?
14:25:19 just ambushing and taking over their meeting some time?
14:25:38 so i'd say it's better to check what their meeting time is, and propose that on the next one
14:26:27 meeting for which project?
14:26:30 https://wiki.openstack.org/wiki/Meetings/openstack-ansible
14:26:33 ok
14:26:33 that one ^
14:26:36 Thursdays 16:00 UTC
14:26:41 good timing
14:27:06 i can join tomorrow
14:27:17 who could make it to the meeting tomorrow?
14:27:24 hwoarang, did you attend the openstack-ansible meetings at the summit?
14:27:24 I can't
14:27:53 yolanda: i attended the presentation, but not the discussion about contributions etc
14:27:54 i can do it
14:28:00 i can attend the meeting tomorrow
14:28:05 ok
14:28:13 going to add this topic to the agenda
14:28:23 #action yolanda hwoarang to attend the openstack-ansible meeting 2016-11-03
14:28:24 yolanda: the general feeling from the presentation is that they want people to use openstack-ansible for their own stuff and provide feedback and code
14:28:41 #link https://wiki.openstack.org/wiki/Meetings/openstack-ansible
14:28:49 that is a good point to start
14:29:47 so we suspend all the puppet-infracloud work and start playing with openstack-ansible?
14:29:54 everyone agrees?
14:30:21 worth a shot
14:30:23 yes
14:30:33 also not everything is lost, as the main effort done with bifrost is still needed
14:30:54 it will be a matter of replacing the puppet runs for puppet-infracloud with ansible runs for openstack-ansible
14:31:00 #agreed The team agrees to suspend puppet-infracloud work and start evaluating openstack-ansible
14:31:21 yes, this was good practice actually
14:31:31 and we made enough noise, which was one of the aims
14:31:43 if i had to say how much effort i put into bifrost vs puppet-infracloud, i'd say 80-20
14:32:14 the other thing I noticed, openstack-ansible only supports ubuntu, doesn't it?
14:32:26 not really... their main effort is ubuntu
14:32:38 but most modules shall support centos, or they will accept contributions for it
14:32:46 ok
14:33:45 anything else about this or another topic?
14:33:55 lab space
14:34:04 we continue in the same way in releng: prototypes/openstack-ansible
14:34:14 jmorgan1: lab space?
14:34:17 jmorgan1: moar hw?
14:34:31 ;p
14:34:43 we are moving our lab to another location and not sure about spare pods available
14:35:01 it might be good to move efforts to the LF pod
14:35:10 jmorgan1: you'll break bifrost 3rd party ci
14:35:23 i'm getting pressure to review intel labs
14:35:34 this is just an fyi but might be a problem soon
14:35:36 and some other stuff like multisite
14:35:40 and builds
14:35:51 pod4 is pretty crucial, not just for us
14:36:46 I think I can fix bifrost stuff and machines for hwoarang and yolanda
14:36:49 but not the rest
14:37:24 is virtualization a solution given the hw shortage?
14:37:35 perhaps moves the slaves as VMs into a single host?
14:37:37 *move
14:38:05 anyway, i'll let you know as planning goes this month
14:38:08 hwoarang: bifrost doesn't play well when it's run in VMs
14:38:21 :(
14:38:22 hwoarang: I tried to do that on the pod4 jumphost, which is a pretty good machine
14:38:42 hwoarang: but it failed with timeouts and some other failures above my pay grade
14:39:03 ok let me try that again on my pod machine
14:39:05 if there is a single vm on the host, it worked
14:39:06 since it's idle at the moment
14:39:19 when it created 2 or 3 vms, each running bifrost independently
14:39:24 2 of them failed
14:39:27 hmm
14:39:33 only a single one succeeded
14:39:37 i see
14:39:41 jmorgan1: thanks for the heads up
14:39:56 fdegir: np
14:40:25 I suppose we're done for the day
14:40:49 thanks for joining and have fun with the new toy!
14:40:59 #endmeeting