14:00:38 <fdegir> #startmeeting OpenStack 3rd Party CI
14:00:38 <collabot> Meeting started Wed Sep  7 14:00:38 2016 UTC.  The chair is fdegir. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:38 <collabot> Useful Commands: #action #agreed #help #info #idea #link #topic.
14:00:38 <collabot> The meeting name has been set to 'openstack_3rd_party_ci'
14:00:41 <jmorgan1> aricg: who else will have access to LF pod5?
14:00:50 <fdegir> #topic Roll Call
14:00:58 <fdegir> jmorgan1: it is one of the subjects
14:01:02 <jmorgan1> #info Jack Morgan
14:01:07 <jmorgan1> fdegir: ok, good
14:01:08 <fdegir> #info Fatih Degirmenci
14:01:37 <fdegir> Julien-zte: yolanda: hwoarang: ping
14:01:47 <fdegir> around?
14:01:58 <yolanda> hi
14:02:09 <fdegir> hi
14:02:31 <fdegir> meeting :)
14:02:33 <fdegir> quick one
14:03:01 <fdegir> moving to the agenda - short one
14:03:04 <fdegir> #topic Agenda
14:03:16 <hwoarang> #info Markos
14:03:17 <fdegir> #info Status Update - bifrost and puppet-infracloud
14:03:24 <fdegir> #info HW/Infra Needs
14:03:26 <fdegir> #info AOB
14:03:42 <fdegir> #topic Status Update - bifrost and puppet-infracloud
14:03:53 <fdegir> yolanda: can you type in short status update please?
14:04:06 <fdegir> I know you've been working on centos support
14:04:22 <yolanda> hi sure
14:04:48 <yolanda> so Trusty is done; I was working on CentOS. It was failing because the modules were not able to work with CentOS
14:04:49 <Julien-zte> pong fdegir
14:05:04 <yolanda> but i sent some patches for puppet-infracloud that already landed. I'll need to test again
14:05:07 <Julien-zte> no gtm meeting?
14:05:15 <yolanda> fdegir, is that up again?
14:05:20 <yolanda> the lab i mean
14:05:21 <fdegir> yolanda: trusty is done for both bifrost and puppet-infracloud if I'm not mistaken?
14:05:27 <jmorgan1> Julien-zte: no, irc only
14:05:30 <yolanda> yes, trusty is done
14:05:32 <fdegir> yolanda: yes, all our nodes are up
14:05:34 <Julien-zte> several meeting, I got confused, sorry for this
14:05:48 <fdegir> #info Trusty is done for both bifrost and puppet-infracloud
14:06:13 <fdegir> #info Work with Centos is going on and some patches to puppet-infracloud already landed. Further testing will be done
14:06:42 <fdegir> yolanda: I suppose that's all
14:07:04 <yolanda> ok so i hope to have some time tomorrow/friday to test centos again
14:07:23 <yolanda> I also did some changes to the playbook to support env vars, to be able to spin up trusty/centos more easily
14:07:51 <fdegir> #info Changes to the playbook have been made to support env vars in order to spin up trusty/centos more easily
14:08:09 <fdegir> we can create a job for centos once it works fine
14:08:14 <yolanda> #link https://gerrit.opnfv.org/gerrit/20357
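A rough, hypothetical sketch of what driving the distro choice through environment variables can look like on the jumphost; the variable name, clone path, and playbook name below are illustrative assumptions, not taken from the linked change:

    # Hypothetical illustration only; the actual variable and playbook names are in the gerrit change linked above.
    export DISTRO=trusty        # or "centos7" -- assumed to be read by the playbook via an env lookup
    cd /opt/bifrost
    ansible-playbook -i inventory/localhost playbooks/test-bifrost.yaml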
14:08:16 <fdegir> and run stuff automatically
14:08:26 <yolanda> but i'd like to test it
14:08:35 <yolanda> with lab down i could not test properly
14:08:57 <fdegir> yep, that's jmorgan1's fault
14:09:27 <jmorgan1> lab down is no excuse ;)
14:09:49 <Julien-zte> we have finished Trusty testing as well. We are setting up CI for bifrost in our internal env. One difference is that we are using an OpenStack CI system, not OPNFV's, and the CI slave is a VM that is booted and deleted, so it is not possible to set up a nested CI env.
14:10:41 <fdegir> Julien-zte: if you want, you can do stuff on OPNFV Jenkins
14:10:49 <fdegir> Julien-zte: we can find machine for you
14:10:51 <Julien-zte> is it useful to finish booting the VMs in an OpenStack environment?
14:11:33 <yolanda> heh, combine that with a week of holiday last week, and i could not work so much
14:11:38 <Julien-zte> Hi fdegir, it is not a shortage of resources, just a matter of the infrastructure we use.
14:11:58 <fdegir> Julien-zte: we work on standalone machines and run bifrost directly on them
14:12:10 <fdegir> Julien-zte: and create and provision VMs using bifrost
14:12:20 <fdegir> Julien-zte: I haven't tried running bifrost nested
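A rough sketch of the workflow fdegir describes (bifrost run directly on a standalone host, which then creates and provisions VMs), assuming the upstream bifrost test script is used; the clone path and script name are assumptions, and the READMEs linked later in the meeting carry the actual steps:

    # Sketch only: run bifrost directly on a standalone jumphost and let it create and deploy test VMs.
    git clone https://git.openstack.org/openstack/bifrost /opt/bifrost
    cd /opt/bifrost
    ./scripts/test-bifrost.sh   # assumed entry point: installs bifrost with ansible, then creates, enrolls and deploys VMs via ironic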
14:12:27 <Julien-zte> yes, currently we boot VMs using libvirt; shall we support the OpenStack CLI?
14:12:38 <jmorgan1> yolanda: no worries
14:13:02 <fdegir> Julien-zte: sorry but I don't understand
14:13:17 <fdegir> Julien-zte: there is no openstack involved in bifrost part
14:13:31 <Julien-zte> using Bifrost to boot up 3 VMs in OpenStack and deploy them
14:13:59 <Julien-zte> sorry, we are using Zuul + Nodepool for Jenkins slaves
14:14:09 <fdegir> Julien-zte: I don't know if bifrost supports that
14:14:29 <Julien-zte> if it is not useful, we can focus on the slave nodes
14:14:31 <fdegir> Julien-zte: as I know bifrost mainly focuses on baremetal provisioning
14:14:45 <Julien-zte> yes, agree
14:14:46 <fdegir> yolanda can correct me if I'm mistaken
14:14:58 <fdegir> but our focus is to provision baremetal nodes using bifrost in the end
14:14:59 <Julien-zte> just using openstack as a VM resource provider
14:15:03 <fdegir> and install openstack on them
14:15:07 <yolanda> fdegir, you are right. Bifrost is just installed with Ansible
14:15:24 <Julien-zte> understood
14:15:28 <fdegir> Julien-zte: I think it is much simpler not to mix openstack into this picture
14:15:39 <Julien-zte> OK
14:15:45 <fdegir> Julien-zte: just an ubuntu/centos slave connected to jenkins
14:15:49 <yolanda> bifrost is just a way to provision the servers, you can use for openstack or for other purposes
14:15:50 <fdegir> Julien-zte: and run bifrost there
14:16:50 <fdegir> hwoarang: can you perhaps say something about SuSe support you are working on?
14:17:03 <fdegir> and anything else you might be looking
14:17:38 <hwoarang> fdegir: i hope it's nearly there. i had to package ipxe-bootimgs for suse so i believe this is what's missing to complete the port. I will know more soon now that I got access to the new host
14:18:14 <hwoarang> that's all for bifrost ofc. but I haven't done much else due to other tasks popping up
14:18:19 <Julien-zte> good news hwoarang
14:18:43 <fdegir> #info ipxe-bootimgs had to be packaged for SUSE; that is believed to be the missing piece to complete the port. We will know more once further testing is done.
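For context, the kind of check involved on the SUSE host might look like the following; the package name comes from the discussion above, while the boot-image file names are assumptions:

    # Install the newly packaged ipxe boot images and confirm the expected artifacts exist
    # (the listed file names are assumptions, not verified for this package).
    sudo zypper --non-interactive install ipxe-bootimgs
    rpm -ql ipxe-bootimgs | grep -E 'undionly\.kpxe|ipxe\.efi'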
14:19:14 <fdegir> thx hwoarang
14:19:35 <fdegir> jmorgan1: have you had chance to look at bifrost and/or puppet-infracloud?
14:20:04 <jmorgan1> fdegir: no, i'll be focusing on the opnfv release tasks this week, then I should be able to take a look
14:20:24 <fdegir> ok
14:20:28 <fdegir> from my side
14:20:34 <jmorgan1> fdegir: my resources in the intel lab are on loan to others
14:21:26 <fdegir> #info The instructions written by yolanda have been tested and small fixes have been committed
14:21:48 <Julien-zte> the instructions have been submitted?
14:21:58 <jmorgan1> do we have link?
14:23:36 <fdegir> sorry - lost my connection
14:23:54 <fdegir> yes, we have
14:24:08 <jmorgan1> we missed your status update
14:24:24 <fdegir> yep, that's when I lost my connection
14:24:43 <fdegir> #info I'll send some updates to puppet for jumphost
14:24:54 <fdegir> #info The instructions are
14:24:58 <Julien-zte> I thought that would only happen in GFW-blocked areas
14:25:02 <Julien-zte> -:)
14:25:13 <fdegir> #link https://gerrit.opnfv.org/gerrit/gitweb?p=releng.git;a=blob;f=prototypes/bifrost/README.md
14:25:25 <fdegir> #link https://gerrit.opnfv.org/gerrit/gitweb?p=releng.git;a=blob;f=prototypes/puppet-infracloud/README.md
14:25:45 <fdegir> I have a shitty provider that acts like the GFW
14:26:15 <fdegir> I can say that the stuff yolanda sent works perfectly fine
14:26:39 <yolanda> yay :)
14:26:40 <Julien-zte> yah!
14:26:42 <fdegir> I suggest you look at the puppet logs in particular to see if you get any error messages
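A couple of commands along those lines; the log location and service name are common defaults, assumed rather than confirmed for this lab:

    # Scan recent puppet output for errors or warnings on the jumphost.
    sudo grep -iE 'puppet.*(error|warn)' /var/log/syslog | tail -n 50
    # On systemd-based hosts the same information may be in the journal:
    sudo journalctl -u puppet --since today | grep -iE 'error|warn'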
14:27:21 <fdegir> moving to the next topic if no one objects
14:27:39 <Julien-zte> ok
14:27:40 <fdegir> #topic HW/Infra Needs
14:28:14 <fdegir> #info We, except Julien-zte, have been using machines from Intel POD4, which was under maintenance
14:28:35 <fdegir> #info the lab is back online now so you should be able to continue using them
14:28:49 <fdegir> #info hwoarang got intel-pod4-node2 with suse on it
14:29:19 <fdegir> #info Intel POD4 is used for playing with and developing stuff using VMs
14:29:31 <fdegir> #info yolanda requested access to LF POD5 for baremetal work
14:29:46 <yolanda> no answer for that
14:29:49 <fdegir> yolanda: aricg asks if you are OK with getting access to LF POD5 only?
14:30:12 <fdegir> aricg> normally we vote on access, but in this case, I'd like to just get you to agree to access only pod5
14:30:12 <yolanda> fdegir, yes, those are the ones we will need, right?
14:30:18 <fdegir> yolanda: yes
14:30:31 <yolanda> i'm fine
14:30:32 <fdegir> I'll info this in
14:30:46 <fdegir> #info yolanda agrees to access only LF POD5
14:30:51 <fdegir> aricg: ^
14:31:00 <aricg> yolanda: I will send you the vpn creds shortly
14:31:15 <yolanda> thanks
14:31:24 <fdegir> #action fdegir to send an email to infra-steering to request access for others to LF POD5
14:31:58 <fdegir> #info We will not have HA at this phase, so others can go and use LF POD5 when they get to the baremetal work
14:32:25 <fdegir> #info So the request will be sent for all of us and then we need to list who is using which nodes
14:32:39 <fdegir> jmorgan1: does ^ answer your question?
14:32:46 <jmorgan1> why no HA?
14:32:48 <aricg> Until LF POD5 is in production, perhaps we should assign a single person to grant or deny access to said pod
14:32:59 <Julien-zte> on the list, not for now
14:33:06 <fdegir> aricg: I thought you were that person?
14:33:12 <jmorgan1> agreed ;)
14:33:28 <fdegir> jmorgan1: HA is complicated
14:33:34 <jmorgan1> fdegir: yes, that answers the question
14:33:36 <fdegir> jmorgan1: puppet-infracloud doesn't support that
14:33:48 <fdegir> yolanda: I hope I'm right with what I said just now
14:33:51 <jmorgan1> fdegir: ok, so no support currently
14:34:46 <yolanda> yes, we don't have HA because puppet-infracloud is not supporting that at the moment
14:34:57 <fdegir> anyone having any issues/shortages with HW/Infra, please ping jmorgan1 :)
14:35:14 <Julien-zte> copy
14:35:21 <fdegir> moving on
14:35:24 <fdegir> #topic AOB
14:35:31 <fdegir> anyone wants to add anything?
14:35:37 <yolanda> even more: when I tried to take the first steps, I hit a blocker just adding the pacemaker module to the module list in infra. It seems not to work on precise, and infra still gates on precise, so I could not start that job until precise stops being used
14:35:42 <yolanda> or find some other workaround
14:35:54 <yolanda> #link https://review.openstack.org/335511
14:36:25 <fdegir> #info Due to puppet-infracloud not supporting HA, we will not attempt HA at this phase and focus on bringing up 2 node setup
14:36:58 <jmorgan1> so schedule wise, will we be ready for openstack summit?
14:37:10 <fdegir> #info The near term target is to have both VM and baremetal provisioning/OpenStack installation ready by the Summit
14:37:29 <fdegir> jmorgan1: I let yolanda answer that :)
14:37:36 <Julien-zte> a challenge to be ready by the Summit
14:37:54 <yolanda> not with HA, but we will be fine with a simple baremetal deployment
14:38:00 <yolanda> i think we are in a good position
14:38:07 <fdegir> and deploying latest, not stable
14:38:07 <qiliang> Hi team, I'm qiliang from compass4nfv and the yardstick team. I'm interested in OpenStack 3rd Party CI. Is https://git.opnfv.org/cgit/releng/tree/prototypes the code you've done, and https://wiki.opnfv.org/display/INF/OpenStack+3rd+Party+CI the place where I can get started?
14:38:35 <yolanda> mm, latest... that still needs to be tested, i don't know the blockers we could find
14:38:46 <fdegir> qiliang: yep, those are the things you can take a look
14:39:11 <fdegir> yolanda: once stable is working on baremetal, perhaps we can attempt latest
14:39:20 <yolanda> sounds good
14:39:26 <jmorgan1> it looks like i'll be in Barcelona in October
14:39:43 <fdegir> good!
14:39:50 <fdegir> while we're at it
14:39:59 <fdegir> who else?
14:40:00 <qiliang> fdegir: ok, thx, i'll take a deep look and join.
14:40:09 <yolanda> i'll be there, it's nearly at home!
14:40:09 <fdegir> (I don't remember if we talked about this last week)
14:40:10 <jmorgan1> qiliang: thanks for your interest
14:40:15 <Julien-zte> yes
14:40:17 <fdegir> jmorgan1: +1
14:40:24 <yolanda> Barcelona is just an hour and a half drive from where i live
14:40:42 <fdegir> yolanda: do you have a link or something for openstack infra design session planning?
14:40:44 <Julien-zte> 17 hours for me
14:40:52 <qiliang> jmorgan1: :)
14:40:54 <jmorgan1> fdegir: i registered and need to book flight/hotel
14:41:04 <yolanda> fdegir, for mid-cycle ?
14:41:09 <fdegir> yolanda: and do we have chance to book an hour or two to present and demo what we are trying to achieve?
14:41:25 <yolanda> I tried to raise the topic a couple of times, but no luck
14:41:33 <hwoarang> i will be in barcelona too
14:41:44 <fdegir> yolanda: ok, I'll ping ChrisPriceAB so he can use his super-powers
14:41:54 <jmorgan1> so will Chris who owes me beers now
14:41:58 <yolanda> he may be more powerful than me for sure :)
14:42:50 <fdegir> #action fdegir to talk to ChrisPriceAB in order to book time with OpenStack Infra during OpenStack Summit
14:42:53 <Julien-zte> agree
14:43:03 <yolanda> #link https://etherpad.openstack.org/p/qa-infra-newton-midcycle
14:43:19 <yolanda> seems the etherpad didn't get much love lately
14:43:27 <ChrisPriceAB> hey guys, let me see what I can do.  No promises however my powers are limited, I can only promise beer...
14:43:55 <yolanda> ChrisPriceAB, if you look at that etherpad link, fdegir added the OpenStack/OPNFV collaboration topic
14:44:17 <fdegir> yolanda: I was talking about the OpenStack Design Summit
14:44:24 <yolanda> oh sorry
14:44:35 <yolanda> so i still don't know details about it
14:44:39 <fdegir> yolanda: as my travel to newton midcycle is under risk
14:45:07 <jmorgan1> Design Summit is on Friday, correct?
14:45:44 <Julien-zte> Design Summit from Tuesday to Friday, I think
14:46:51 <fdegir> sorry, my connection comes and goes
14:47:05 <fdegir> ending the meeting now so I don't keep the channel from having new meetings
14:47:11 <fdegir> thanks everyone for joining!
14:47:17 <fdegir> #endmeeting