14:00:38 #startmeeting OpenStack 3rd Party CI
14:00:38 Meeting started Wed Sep 7 14:00:38 2016 UTC. The chair is fdegir. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:38 Useful Commands: #action #agreed #help #info #idea #link #topic.
14:00:38 The meeting name has been set to 'openstack_3rd_party_ci'
14:00:41 aricg: who else will have access to LF pod5?
14:00:50 #topic Roll Call
14:00:58 jmorgan1: it is one of the subjects
14:01:02 #info Jack Morgan
14:01:07 fdegir: ok, good
14:01:08 #info Fatih Degirmenci
14:01:37 Julien-zte: yolanda: hwoarang: ping
14:01:47 around?
14:01:58 hi
14:02:09 hi
14:02:31 meeting :)
14:02:33 quick one
14:03:01 moving to the agenda - short one
14:03:04 #topic Agenda
14:03:16 #info Markos
14:03:17 #info Status Update - bifrost and puppet-infracloud
14:03:24 #info HW/Infra Needs
14:03:26 #info AOB
14:03:42 #topic Status Update - bifrost and puppet-infracloud
14:03:53 yolanda: can you type in a short status update please?
14:04:06 I know you've been working on centos support
14:04:22 hi sure
14:04:48 so trusty is done, i was working on CentOS. It was failing due to the modules not being able to work with CentOS
14:04:49 pong fdegir
14:05:04 but i sent some patches for puppet-infracloud that already landed. I'll need to test again
14:05:07 no gtm meeting?
14:05:15 fdegir, is that up again?
14:05:20 the lab i mean
14:05:21 yolanda: trusty is done for both bifrost and puppet-infracloud if I'm not mistaken?
14:05:27 Julien-zte: no, irc only
14:05:30 yes, trusty is done
14:05:32 yolanda: yes, all our nodes are up
14:05:34 several meetings, I got confused, sorry for this
14:05:48 #info Trusty is done for both bifrost and puppet-infracloud
14:06:13 #info Work with CentOS is ongoing and some patches to puppet-infracloud have already landed. Further testing will be done
14:06:42 yolanda: I suppose that's all
14:07:04 ok so i hope to have some time tomorrow/friday to test centos again
14:07:23 i also did some changes to the playbook, to support env vars, to be able to spin trusty/centos more easily
14:07:51 #info Changes to the playbook have been made to support env vars in order to spin trusty/centos more easily
14:08:09 we can create a job for centos once it works fine
14:08:14 #link https://gerrit.opnfv.org/gerrit/20357
14:08:16 and run stuff automatically
14:08:26 but i'd like to test it
14:08:35 with the lab down i could not test properly
14:08:57 yep, that's jmorgan1's fault
14:09:27 lab down is no excuse ;)
14:09:49 we have finished Trusty testing as well. we are setting up the CI for bifrost in our internal env. the difference is that we are using the OpenStack CI system, not OPNFV's, and the CI slave is a VM that is booted and then deleted, so it is not possible to set up a nested CI env.
14:10:41 Julien-zte: if you want, you can do stuff on OPNFV Jenkins
14:10:49 Julien-zte: we can find a machine for you
14:10:51 is it useful to finish booting the VM in an openstack environment?
14:11:33 heh, combine that with a week of holiday last week, and i could not work so much
14:11:38 Hi fdegir, it is not a shortage of resources, just the infrastructure we use.
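[Editor's note: yolanda mentions above (14:07:23) that the playbook now takes env vars so the same setup can spin either trusty or centos VMs. Below is a minimal sketch of how such a switch might be driven from a Jenkins job or a shell. The variable name, path and script name are assumptions made only for illustration; the actual interface lives in the linked change (https://gerrit.opnfv.org/gerrit/20357) and in the bifrost prototype in releng.]

    # Illustrative only; variable and script names are placeholders,
    # not the project's real interface.
    export VM_DISTRO=${VM_DISTRO:-trusty}   # e.g. "trusty" or "centos7"
    cd /opt/releng/prototypes/bifrost
    # -E keeps the exported variable visible to the playbook run under sudo
    sudo -E ./scripts/test-bifrost-deployment.sh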
14:11:58 Julien-zte: we work on standalone machines and run bifrost directly on them
14:12:10 Julien-zte: and create and provision VMs using bifrost
14:12:20 Julien-zte: I haven't tried running bifrost nested
14:12:27 yes, currently we boot VMs using libvirt; shall we support the openstack cli?
14:12:38 yolanda: no worries
14:13:02 Julien-zte: sorry but I don't understand
14:13:17 Julien-zte: there is no openstack involved in the bifrost part
14:13:31 using Bifrost to boot up 3 VMs in openstack and deploy them
14:13:59 sorry, we are using Zuul + Nodepool for jenkins slaves
14:14:09 Julien-zte: I don't know if bifrost supports that
14:14:29 it is not useful; we can focus on the slave node
14:14:31 Julien-zte: as far as I know bifrost mainly focuses on baremetal provisioning
14:14:45 yes, agree
14:14:46 yolanda can correct me if I'm mistaken
14:14:58 but our focus is to provision baremetal nodes using bifrost in the end
14:14:59 just using openstack as a VM resource provider
14:15:03 and install openstack on them
14:15:07 fdegir, you are right. Bifrost is just installed with ansible
14:15:24 understood
14:15:28 Julien-zte: I think it is much simpler not to mix openstack into this picture
14:15:39 OK
14:15:45 Julien-zte: just an ubuntu/centos slave connected to jenkins
14:15:49 bifrost is just a way to provision the servers, you can use it for openstack or for other purposes
14:15:50 Julien-zte: and run bifrost there
14:16:50 hwoarang: can you perhaps say something about the SuSE support you are working on?
14:17:03 and anything else you might be looking at
14:17:38 fdegir: i hope it's nearly there. i had to package ipxe-bootimgs for suse so i believe this is what's missing to complete the port. I will know more soon now that I got access to the new host
14:18:14 that's all for bifrost ofc. but i haven't done much else due to other tasks popping up
14:18:19 good news hwoarang
14:18:43 #info ipxe-bootimgs has to be packaged for suse, which is what's missing to complete the port. We will know more once more testing is done.
14:19:14 thx hwoarang
14:19:35 jmorgan1: have you had a chance to look at bifrost and/or puppet-infracloud?
14:20:04 fdegir: no, i'll be focusing on the opnfv release tasks this week, then I should be able to take a look
14:20:24 ok
14:20:28 from my side
14:20:34 fdegir: my resources in the intel lab are on loan to others
14:21:26 #info The instructions written by yolanda have been tested and small fixes have been committed
14:21:48 the instructions have been submitted?
14:21:58 do we have a link?
14:23:36 sorry - lost my connection
14:23:54 yes, we have
14:24:08 we missed you status update
14:24:11 your
14:24:24 yep, that's when I lost my connection
14:24:43 #info I'll send some updates to puppet for the jumphost
14:24:54 #info The instructions are:
14:24:58 I thought it would only happen in G.F.W.-blocked areas
14:25:02 -:)
14:25:13 #link https://gerrit.opnfv.org/gerrit/gitweb?p=releng.git;a=blob;f=prototypes/bifrost/README.md
14:25:25 #link https://gerrit.opnfv.org/gerrit/gitweb?p=releng.git;a=blob;f=prototypes/puppet-infracloud/README.md
14:25:45 I have this shitty provider acting like the GFW
14:26:15 I can say that the stuff yolanda sent works perfectly fine
14:26:39 yay :)
14:26:40 yah!
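[Editor's note: for anyone trying to reproduce what is discussed above, a rough getting-started outline, assuming the two READMEs linked at 14:25:13 and 14:25:25 remain the authoritative instructions: bifrost (ansible-based, no OpenStack involved) provisions the nodes first, then puppet-infracloud installs OpenStack on them. The clone URL below is the standard gerrit one for the releng.git repo referenced above; everything else is simply "follow the README", not a verbatim command from the project.]

    # Clone the OPNFV releng repo, which carries both prototypes.
    git clone https://gerrit.opnfv.org/gerrit/releng /opt/releng
    # Step 1: provision VMs/baremetal with bifrost
    less /opt/releng/prototypes/bifrost/README.md
    # Step 2: deploy OpenStack on the provisioned nodes with puppet-infracloud
    less /opt/releng/prototypes/puppet-infracloud/README.md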
14:26:42 I suggest you look at the puppet logs, especially to see if you get any error messages
14:27:21 moving to the next topic if no one objects
14:27:39 ok
14:27:40 #topic HW/Infra Needs
14:28:14 #info We, except Julien-zte, have been given machines from Intel POD4, which was under maintenance
14:28:35 #info the lab is back online now so you should be able to continue using them
14:28:49 #info hwoarang got intel-pod4-node2 with suse on it
14:29:19 #info Intel POD4 is used for playing with and developing stuff using VMs
14:29:31 #info yolanda requested access to LF POD5 for baremetal work
14:29:46 no answer for that
14:29:49 yolanda: aricg asks if you are ok to get access to LF POD5 only?
14:30:12 aricg> normally we vote on access, but in this case, I'd like to just get you to agree to access only pod5
14:30:12 fdegir, yes, those are the ones we will need, right?
14:30:18 yolanda: yes
14:30:31 i'm fine
14:30:32 I'll info this in
14:30:46 #info yolanda agrees to access only LF POD5
14:30:51 aricg: ^
14:31:00 yolanda: I will send you the vpn creds shortly
14:31:15 thanks
14:31:24 #action fdegir to send an email to infra-steering to request access for others to LF POD5
14:31:58 #info We will not have HA at this phase so others can go and use LF POD5 when they reach the baremetal stage
14:32:25 #info So the request will be sent for all of us and then we need to list who is using which nodes
14:32:39 jmorgan1: does ^ answer your question?
14:32:46 why no HA?
14:32:48 Until LF POD5 is in production, perhaps we should assign a single person to grant or deny access to said pod
14:32:59 on the list, not for now
14:33:06 aricg: I thought you were that person?
14:33:12 agreed ;)
14:33:28 jmorgan1: HA is complicated
14:33:34 fdegir: yes, it answers the question
14:33:36 jmorgan1: puppet-infracloud doesn't support that
14:33:48 yolanda: I hope I'm right with what I said just now
14:33:51 fdegir: ok, so no support currently
14:34:46 yes, we don't have HA because puppet-infracloud does not support that at the moment
14:34:57 anyone having any issues/shortages with HW/Infra, please ping jmorgan1 :)
14:35:14 copy
14:35:21 moving on
14:35:24 #topic AOB
14:35:31 does anyone want to add anything?
14:35:37 what's more, when i tried to take the first steps, i hit a blocker: simply adding the pacemaker module to the module list in infra. This does not seem to work on precise, and infra still gates on precise, so i could not start that job until precise stops being used
14:35:42 or until we find some other workaround
14:35:54 #link https://review.openstack.org/335511
14:36:25 #info Due to puppet-infracloud not supporting HA, we will not attempt HA at this phase and focus on bringing up a 2-node setup
14:36:58 so schedule-wise, will we be ready for the openstack summit?
14:37:10 #info The near-term target is to have both VM and BareMetal provisioning/OpenStack Installation ready by the summit
14:37:29 jmorgan1: I'll let yolanda answer that :)
14:37:36 a challenge by the Summit
14:37:54 not with HA, but we will be fine with a simple baremetal deployment
14:38:00 i think we are in a good position
14:38:07 and the latest, not stable
14:38:07 Hi, team, I'm qiliang from the compass4nfv and yardstick teams. i'm interested in OpenStack 3rd Party CI. is https://git.opnfv.org/cgit/releng/tree/prototypes the code you've done and https://wiki.opnfv.org/display/INF/OpenStack+3rd+Party+CI the place where i can get started?
14:38:35 mm, latest... that still needs to be tested, i don't know the blockers we could find
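[Editor's note: a small, generic sketch tying the earlier suggestion (14:26:42) to check the puppet logs for errors to the 2-node, non-HA plan #info'd above: after a puppet run on the two LF POD5 nodes, grep the logs on each. The node names are only placeholders and the log locations are the usual distro defaults; nothing here is project-specific.]

    # Placeholder node names; adjust to the actual LF POD5 inventory.
    for node in controller00 compute00; do
        echo "== puppet errors on ${node} =="
        # puppet output usually lands in syslog (Ubuntu) or /var/log/messages (CentOS)
        ssh "${node}" "sudo grep -i 'puppet.*error' /var/log/syslog /var/log/messages 2>/dev/null" || true
    done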
14:38:46 qiliang: yep, those are the things you can take a look at
14:39:11 yolanda: once stable is working on baremetal, we can perhaps attempt latest
14:39:20 sounds good
14:39:26 it looks like i'll be in Barcelona in October
14:39:43 good!
14:39:50 while we're at it
14:39:59 who else?
14:40:00 fdegir: ok, thx, i'll take a deep look and join.
14:40:09 i'll be there, it's nearly home for me!
14:40:09 (I don't remember if we talked about this last week)
14:40:10 qiliang: thanks for your interest
14:40:15 yes
14:40:17 jmorgan1: +1
14:40:24 Barcelona is just an hour and a half drive from where i live
14:40:42 yolanda: do you have a link or something for the openstack infra design session planning?
14:40:44 17 hours for me
14:40:52 jmorgan1: :)
14:40:54 fdegir: i registered and need to book flight/hotel
14:41:04 fdegir, for the mid-cycle?
14:41:09 yolanda: and do we have a chance to book an hour or two to present and demo what we are trying to achieve?
14:41:25 i tried to raise the topic a couple of times, but no luck
14:41:33 i will be in barcelona too
14:41:44 yolanda: ok, I'll ping ChrisPriceAB so he can use his super-powers
14:41:54 so will Chris, who owes me beers now
14:41:58 he may be more powerful than me for sure :)
14:42:50 #action fdegir to talk to ChrisPriceAB in order to book time with OpenStack Infra during the OpenStack Summit
14:42:53 agree
14:43:03 #link https://etherpad.openstack.org/p/qa-infra-newton-midcycle
14:43:19 seems the etherpad didn't get much love lately
14:43:27 hey guys, let me see what I can do. No promises however, my powers are limited, I can only promise beer...
14:43:55 ChrisPriceAB, so if you look at that etherpad link, fdegir added the OpenStack/OPNFV collaboration topic
14:44:17 yolanda: I was talking about the OpenStack Design Summit
14:44:24 oh sorry
14:44:35 so i still don't know the details about it
14:44:39 yolanda: as my travel to the newton midcycle is at risk
14:45:07 Design Summit is on Friday, correct?
14:45:44 The Design Summit runs from Tuesday to Friday, I think
14:46:51 sorry, my connection comes and goes
14:47:05 ending the meeting now so I don't keep the channel from having new meetings
14:47:11 thanks everyone for joining!
14:47:17 #endmeeting