14:31:57 #startmeeting Fuel@OPNFV
14:31:57 Meeting started Tue Aug 30 14:31:57 2016 UTC. The chair is Greg_E_. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:31:57 Useful Commands: #action #agreed #help #info #idea #link #topic.
14:31:57 The meeting name has been set to 'fuel_opnfv'
14:32:08 in fact, Fatih is off until Thursday and the slave is reporting "Offline" in Jenkins
14:32:32 so unless someone knows how to change that state - I mean, I can start the slave, but I don't know about messing with labels during a week of release cutting
14:32:47 might be "wiser" to run manual jobs till then? - again - up to the group what you guys need / want to do
14:33:12 (another idea would be to get the topology for DPDK nailed down, documented and pushed up to the INFRA/PHAROS team - on Julien's page)
14:34:12 DanSmithEricsson: so this pod is already defined in Jenkins but not added to daily deployments? is it possible to deploy some scenario on it from Jenkins by modifying job properties?
14:34:37 #info Michal Skalski
14:35:49 #info Peter Barabas
14:36:14 #info Greg Elkinbard
14:39:12 #info Billy O'Mahony
14:39:28 #info David Chou
14:43:39 One question: which version of Ubuntu (14.04 or 16.04 or others) will be used on the compute node for release D of fuel@opnfv?
14:44:15 D is Newton, so 16.04 will be supported
14:44:54 Thanks! Still no CentOS?
14:45:16 nope
14:45:28 Thanks!
14:48:40 fuel: Hi David, do you still use pod10-n1? I'm trying to reproduce a problem with a new version of paramiko which we observed there, but with no luck; do you think I could update the paramiko version on this node and see if the problem appears?
14:50:06 Hi folks, we seem to be failing yardstick this week in a bunch of scenarios
14:50:23 is anybody chasing down this issue
14:53:19 Michal: I am still using pod10-n1, but Yunhong may not use pod10-n5; I will check with him and let you know by email.
14:53:24 #info Daniel Smith
14:54:00 mskalski: I see that it is registered in Jenkins - however it's marked as offline, so that means to me the slave (on the jumphost) is stopped
14:54:08 Greg_E_: Cathy probably resolved the problems with yardstick runs on ONOS scenarios
14:54:27 it is failing on all scenarios
14:54:36 mskalski - I "think" it's really just an issue of starting it - then adding the label in Jenkins (if you wanted to run jobs from a pool or singly)
14:55:36 no, not at all
14:56:22 (my two cents about ONOS) - Cathy found an overlap in IPs .. updated the POD2 override DHA, and it seems ok to me when I looked at her run last night
14:56:26 let me find it
14:57:20 https://build.opnfv.org/ci/view/fuel/job/fuel-os-onos-nofeature-ha-baremetal-daily-master/63/
14:57:25 this seems to have YS fine
14:57:28 but some functest have failed
14:57:38 unless I am reading it wrong and that is from 3 hours ago, I think?
14:58:54 so part of doctor/inspector in FUNCTEST failed for that job
14:58:55 https://build.opnfv.org/ci/view/fuel/job/fuel-os-onos-nofeature-ha-baremetal-daily-master/63/
14:59:00 https://build.opnfv.org/ci/job/functest-fuel-baremetal-daily-master/436/console
15:00:33 Michal - if you want - after this meeting, we can work on getting POD1 up so you can order jobs through Jenkins
15:00:41 DanSmithEricsson: Cathy's change was also cherry-picked to stable/colorado, right? The last run of stable/colorado for ONOS was 2 days ago; a new one should pick up the changes
15:00:45 nosdn-nofeature, odl_l2-bgpvpn, odl_l2_sfc, odl_l3 are failing yardstick
15:00:48 correct
15:01:01 I don't know when in the rotation the C-branch is run.. there is a lot of queuing
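
(A minimal sketch of how the offline slave discussed at 14:32:32 and 14:54:00 could be brought back online from the jumphost, assuming the node is registered as a JNLP agent in Jenkins; the node name and secret below are placeholders, not values from the log. Reconnecting this way does not touch the node's labels.)

    # Fetch the agent jar from the Jenkins master and reconnect the node.
    wget https://build.opnfv.org/ci/jnlpJars/slave.jar
    java -jar slave.jar \
        -jnlpUrl https://build.opnfv.org/ci/computer/<slave-name>/slave-agent.jnlp \
        -secret <node-secret>
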
15:01:07 (Greg - can you show the link)
15:01:23 so I can see the page you are looking at
15:01:55 DanSmithEricsson: https://build.opnfv.org/ci/view/fuel/job/yardstick-fuel-baremetal-daily-colorado/
15:02:16 thx.
15:02:17 lookin
15:02:30 this is on stable/colorado bare metal
15:03:50 hmm.
15:04:06 I see lots of "nova image-list" failures reported.. that is strange - and no URLs found
15:04:28 I wonder.. is there a configuration in yardstick that matches the floating ranges we define in the DEA/DHA, and does it need to be updated to match the change that was made?
15:04:39 might be a good idea (on the new branch)
15:04:47 to run the scenarios manually and do a new "reaping" of the PODs
15:04:56 to refresh and ensure the configurations are sound for each
15:05:45 since POD2 was butchered up pretty good for different purposes in the weeks prior (just a thought) - if you wanted to be really sure, another thing would be to revert the change and see - but I don't see how shrinking a floating range by 2 IPs (from .200 down to .198) could have an impact
15:05:59 unless YS is addressing those two IPs somewhere statically (which would be weird, I would think).
15:06:19 What do you see, Michal?
15:06:40 I think we should look at this file in the YS configuration
15:06:41 etc/yardstick/nodes/fuel_virtual/pod.yaml
15:07:07 and check that it makes sense (looking - I don't know the YS repo well) - any YS people around?
15:11:27 ??
15:11:54 DanSmithEricsson: I think that yardstick fetches the range of floating IPs to use from the deployed env, but this file which you pointed to is interesting, since we can't assume that something on a given IP is a controller, for example
15:13:12 it changes between different deployments; the question is whether this file has an actual impact on a yardstick run on bare metal
15:16:36 Hi guys, iirc you tell y/s where the fuel master is and it sshs there and parses the output from 'fuel nodes'
15:17:27 I know it definitely had a problem if you had more than one deployment on the FM. Also I don't know what it does for HA - which controller does it choose to pull openrc from? does it matter?
15:17:43 I'm not familiar with etc/yardstick/nodes/fuel_virtual/pod.yaml
15:19:46 billyo: all controllers have the same openrc copy, so it doesn't matter, but it sounds reasonable that yardstick fetches information from the live env and ignores this config file
15:23:16 Also, when I was running yardstick manually I had to modify this fetched openrc by hand to change all references of internalURL to publicURL. I'm not sure how y/s gets around this when run by Jenkins.
15:23:37 dunno if any of that helps
15:27:06 billyo: for bare metal servers the jumphost has access to the mgmt network; for virtual deployments I recently moved the mgmt network to a separate nic and configured the bridge this nic is attached to, so it also has connectivity from the yardstick container
15:30:17 billyo: but for libvirt deployments iptables usually blocks connections from anywhere other than the host, and because of that connections from the container may require an additional iptables rule (which is done by the CI job)
15:31:33 sounds like you are waaaay ahead of me here ;)
15:44:55 agreed.. I would think that YS would pull from the live configuration (the env that is set out, but again - not sure)
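
(A minimal sketch of the manual openrc workaround billyo describes at 15:23:16, assuming a standard Fuel deployment; the Fuel master address, controller name, and file path are illustrative, not taken from the log.)

    # Pull openrc from a controller via the Fuel master, then point the
    # OpenStack clients at the public endpoints instead of the internal ones.
    ssh root@10.20.0.2 "ssh node-1 cat /root/openrc" > openrc
    sed -i 's/internalURL/publicURL/g' openrc
    source openrc
    nova image-list   # the call reported as failing in the yardstick runs
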
15:45:06 however, I think we need to approach it in two different tracks
15:45:44 for the bare metal we should sort that out, and see if there is a change in the MGMT network - since in the virtual envs that I set up, the only bridge that has a true physical nic is External
15:45:51 all the other 4 are not bound to physical nics
15:46:21 unless you are talking about a hybrid setup (some vComputes and some BM Computes - but I don't think that is a CI/CD supported setup yet, unless I missed a lot)
15:47:12 for me, I think it's important to trace down the "change" that caused this - cause I really don't think it was the patch picked over.
15:47:18 but we need a YS person - maybe Morgan?
15:47:59 DanSmithEricsson: but which change, Daniel?
15:49:16 sorry - I thought we were working on the assumption that the push Cathy made to the dea override might be a cause of this
15:49:55 maybe I missed something
15:50:05 sorry (trying to do too many chat windows at once).
15:51:40 ah ok, but this change only has an impact on ericsson pod2
15:52:14 correct
15:52:22 ahh - ok - I missed something
15:52:28 are we saying that this is affecting all pods?
15:52:37 (all the YS jobs that Greg pointed out?)
15:54:11 ok.. now I am following you a bit more (I think)
15:54:15 I see in https://build.opnfv.org/ci/job/yardstick-fuel-virtual-daily-master/
15:54:23 that all the ones that are running are on Virtual
15:54:27 PODs
15:55:04 although - nosdn-kvm-noha just passed on ericsson-virtual, and there is a nosdn-nofeature-noha running now
15:56:25 so I think I am with you now, Michal (and Billy)
15:56:26 https://build.opnfv.org/ci/job/yardstick-fuel-virtual-daily-master/198/console
15:56:44 shows that when the Cirros is coming up on 172.16.0.189 (which is the PUBLIC network)
15:56:47 it's not able to SSH in
15:56:56 so the first thing I would check is that the security group is being created
15:57:02 each time we do a branch
15:57:13 this param seems to be lost / has to be updated (something to check)
15:58:03 since looking at the log - it seems to me like the heat template to bring up the cirros image worked ok
16:31:24 <__szilard_cserey> Hi Alex, I have merged your patch
16:31:42 <__szilard_cserey> https://gerrit.opnfv.org/gerrit/#/c/19487/
16:38:43 #endmeeting
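
(A minimal sketch of the security-group check suggested at 15:56:56, assuming the cirros instance runs under the "default" security group; the group name and CIDR are assumptions, not taken from the log.)

    # List the rules on the default group, then open SSH and ICMP if missing,
    # so yardstick can reach the cirros VM on its floating IP.
    source openrc
    nova secgroup-list-rules default
    nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
    nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
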