16:00:02 #startmeeting OPNFV BGS daily release readiness synch
16:00:02 Meeting started Wed May 20 16:00:02 2015 UTC. The chair is frankbrockners. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:02 Useful Commands: #action #agreed #help #info #idea #link #topic.
16:00:02 The meeting name has been set to 'opnfv_bgs_daily_release_readiness_synch'
16:00:08 #info Frank Brockners
16:00:15 #info Tim Rozet
16:00:48 #info Peter Bandzi
16:02:26 #info Jose Lausuch
16:04:44 <[1]JonasB> #info Jonas Bjurel
16:07:57 #info Chris Price
16:09:22 who
16:09:26 sorry
16:10:18 different window
16:10:35 Hehe
16:12:04 shall we start? :)
16:13:05 I think I can provide a brief update for the ODL tests
16:13:33 ok
16:14:14 #info ODL tests PASSED on POD2, but we need to add cleanup after them, because they probably cause trouble for the other functests
16:14:26 #info going to try them on POD1
16:14:29 end
16:14:38 pbandzi: what kind of cleanup do you need?
16:14:40 neutron networks?
16:14:48 instances?
16:15:27 yes, I create networks and subnets and ports, and I have already created other tests which delete these, but they are not yet in the repo, so I am going to push them
16:15:36 ok
16:15:57 anyway, ODL tests are run after rally
16:16:05 <[1]JonasB> pbandzi: They will fail on POD1 with the current deployment, we need to prepare a non-HA deployment for the ODL tests
16:16:12 yeah, but on the last functest run they were run while rally was running
16:16:18 which might have choked up the rally run
16:16:45 ah ok
16:16:47 that's strange
16:16:50 it should be sequential
16:17:00 hmm ... isn't it that ODL is in non-clustered mode anyway?
16:17:13 so a single instance of ODL in all cases - or not?
16:17:17 [1]JonasB: ok. Is Stefan working on POD1 now? I can synchronize with him
16:17:19 pbandzi triggered the ODL run manually because we wanted to test something
16:17:42 ODL is not clustered, single instance
16:18:03 oh, you were asking about POD1, sorry
16:18:06 ...
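[Editor's note] The cleanup pbandzi describes (deleting the networks, subnets and ports the ODL Robot suite creates, so later functest runs such as rally do not trip over leftovers) could look roughly like the sketch below. This is a minimal illustration, not the actual tests he pushes to the repo: the resource IDs are hypothetical, and NEUTRON defaults to `echo` for a dry run — point it at the real `neutron` client on the jump host to delete for real.

```shell
#!/bin/bash
# Sketch of the post-ODL-test teardown: delete resources in reverse order
# of creation (ports, then subnets, then networks), since a network cannot
# be removed while ports still hang off its subnets.
# NEUTRON=echo gives a dry run; set NEUTRON=neutron to actually delete.
NEUTRON="${NEUTRON:-echo}"

# Hypothetical IDs; the real suite would record the IDs it created.
PORTS=(odl-port-1 odl-port-2)
SUBNETS=(odl-subnet-1)
NETS=(odl-net-1)

cleanup_odl_resources() {
    local id
    for id in "${PORTS[@]}";   do "$NEUTRON" port-delete   "$id"; done
    for id in "${SUBNETS[@]}"; do "$NEUTRON" subnet-delete "$id"; done
    for id in "${NETS[@]}";    do "$NEUTRON" net-delete    "$id"; done
}

cleanup_odl_resources
```

Running the teardown after the Robot suite (whether it passed or failed) would also address the 16:16:12 observation that a concurrent or leftover ODL test state can choke a rally run.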
so that should mean we have the same setup for ODL across both PODs
16:18:19 i.e. testing for ODL with Robot should be the same
16:19:43 <[1]JonasB> As said, for POD1 we need to set up a non-HA deployment for ODL testing
16:20:22 * ChrisPriceAB Q? we are not running ODL in HA for Arno in either deploy?
16:20:29 [1]JonasB: I am a bit confused: ODL is always non-HA
16:20:39 yeah, ODL doesn't support HA in Helium
16:20:58 * ChrisPriceAB ok cool, I'll stop telling lies.
16:21:05 :)
16:21:48 <[1]JonasB> Yes, but as I have explained - we (Fuel) don't support the combination of OS HA and ODL.
16:22:26 <[1]JonasB> That is unfortunate, but still the case; we're working on it for SR1
16:22:55 ah ok - it is HA in OpenStack and not in ODL
16:23:15 that means the main setup we'd test for Fuel would be a non-HA OS with ODL - correct?
16:23:33 <[1]JonasB> Yep, the combination of OS HA and standard Neutron ML2 - OK
16:23:45 <[1]JonasB> But not OS HA + ODL
16:24:59 ok - let's focus testing on OS + ODL without HA then for Fuel.
16:25:22 IMHO the native neutron networking is mostly a nice-to-have
16:26:35 <[1]JonasB> I need to jump off, just one update
16:26:41 [1]JonasB: What is the default deploy that happens nightly right now?
16:26:58 is that with or without ODL?
16:27:08 (I think it misses ODL right now)
16:27:48 <[1]JonasB> #info Four consecutive successful jenkins automated deploys since yesterday. We're running every 4 hours now
16:28:41 [1]JonasB: Per the question above: Is this an OS + ODL deploy - or OS standalone?
16:29:22 * frankbrockners Jonas seems to have dropped... - Let's table the POD1 questions for now
16:29:36 let's move to updates on POD2
16:29:43 #info updates on POD2
16:29:52 trozet: Quick update from your end?
16:30:22 #info still working on the external net patch. No real updates.
Trying to fix one last issue before I commit
16:30:38 #info latest deployments on POD1 succeeded https://build.opnfv.org/ci/job/genesis-fuel-deploy/lastBuild/console
16:30:46 #info Fatih has disabled the every-6-hour runs until I am done debugging this
16:30:50 #info then we will turn it back on
16:30:57 trozet: Any hope to have the patch by tomorrow?
16:31:32 jose_lausuch: Do you know what is being deployed? Is this OS+ODL - or is this plain OS (with HA)?
16:31:40 I hope so. Changing the Linux interface config has big conflicts with puppet. Trying to fix it
16:31:52 I think it's HA, but that would need confirmation
16:32:00 it is HA, because we got 3 controllers, yes
16:32:03 I checked that
16:32:24 jose_lausuch: Could we switch to using ODL?
16:32:36 yes
16:32:42 but I don't know the impact on the tests
16:32:46 hopefully none
16:32:56 that way we can at least run the ODL tests on POD1
16:33:06 yes, proceed that way
16:33:09 it should be fine
16:33:11 immediate impact would be: we can test ODL
16:33:17 that way we can also see how our tests behave
16:33:22 yep
16:33:24 :)
16:33:26 let's info this in...
16:34:05 #info Auto-deploy on POD1 to switch to OpenStack + ODL deployment (i.e. enable ODL): that way the ODL Robot tests can run on POD1 as well
16:34:20 pbandzi: Do we have Robot ready on POD1 as well?
16:35:22 I can prepare it today before I go home
16:35:47 ok - that way we might be able to run it on the next deploy -
16:36:04 I'll drop an email to Fatih
16:36:10 POD1 is still set to run every 6 hours, right?
16:37:52 * frankbrockners seems that no one has an immediate answer
16:38:17 ok :)
16:38:26 jose_lausuch: Any additional updates on testing?
16:38:45 yes
16:38:50 #info functest jenkins job failed because of 1) bad default credentials, 2) a failed command in the job.
16:38:50 did we make any progress on fixing some of the failed tests etc.?
16:39:07 not really, troubleshooting the jenkins problems
16:39:08 #info 1) the OS_AUTH_URL is now set to http://172.30.9.70:5000/v2.0.
It should be the same from now on. It was a different one before triggering the jenkins automatic deployments
16:39:16 #info 2) I improved the jenkins job by removing the config_functest.py script and downloading it again every time, to always have the latest from master; the command rm $HOME/functest/config_functest.py 2&>/dev/null fails when run from jenkins. Need to check with Fatih
16:39:16 https://build.opnfv.org/ci/view/functest/job/functest-opnfv-jump-1/lastBuild/console
16:39:55 for 1) I contacted Stefan and we agreed (for now) to have that URL, which is fixed
16:40:04 but it is not an elegant solution (not portable)
16:40:21 for 2) I need to talk to Fatih or Aric
16:40:26 not to be done here right now though
16:40:55 I also had problems deploying a VM on POD2 today, but haven't had the time to troubleshoot why yet
16:41:23 that's it
16:42:54 thanks jose_lausuch
16:43:09 #info btw, Aric installed the jenkins plugin showing the "Next executions", i.e. the scheduled jobs
16:43:24 * fdegir has a question
16:43:29 so you can see what will be executed next
16:43:48 shoot
16:44:09 I've seen the functest job hanging a couple of times
16:44:23 this blocks triggering of the next job
16:44:27 on which pod?
16:44:32 we can introduce a timeout
16:44:38 yes, I'm doing it now :)
16:44:40 I think it was on POD2
16:44:43 it was vPing actually
16:44:53 waiting forever for a VM to come up
16:44:58 and one more yesterday, which I can't remember which pod it was on
16:45:03 yep
16:45:03 I will push a patch later to cope with that
16:45:12 with a timeout
16:45:20 ok
16:45:29 then I won't add a timeout in jenkins
16:45:36 there is a timeout for the ping, but there was not one for the VM build process
16:45:40 no need
16:45:43 easier in the script
16:45:50 ok
16:46:00 I need your help later :)
16:46:06 yep, we'll talk later
16:46:08 for the job .yaml
16:46:09 ok
16:46:28 fdegir: Different topic.
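[Editor's note] A possible culprit for the failing `rm` step at 16:39:16 - a guess, not a confirmed diagnosis: in `rm $HOME/functest/config_functest.py 2&>/dev/null` bash parses `2&>` as an argument `2` followed by `&>/dev/null`, so rm is handed a bogus extra filename, exits non-zero, and a jenkins "Execute shell" step (which runs under `sh -xe` by default) aborts the whole job. `2>/dev/null` was likely intended; since a missing file is not an error here, `-f` is simpler still:

```shell
#!/bin/bash
# -f suppresses the "no such file" error and exits 0 even when the file is
# already gone, so the jenkins step no longer dies on a fresh workspace.
rm -f "$HOME/functest/config_functest.py"
```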
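[Editor's note] The vPing fix jose_lausuch promises (16:45:03) - bounding the wait for the instance to go ACTIVE so a stuck build can no longer hang the whole functest job - could take roughly the shape below. This is only a sketch: `wait_for_vm_active` and `STATUS_CMD` are placeholder names (in practice `STATUS_CMD` would wrap something like a `nova show` status query); the real patch lives in the functest scripts.

```shell
#!/bin/bash
# Poll the instance status until it goes ACTIVE, fail fast on ERROR, and
# give up after a deadline instead of waiting forever.
# STATUS_CMD is injectable so the loop can be exercised without OpenStack.
STATUS_CMD="${STATUS_CMD:-true}"

wait_for_vm_active() {
    local timeout="$1" interval="$2" elapsed=0 status
    while [ "$elapsed" -lt "$timeout" ]; do
        status="$($STATUS_CMD)"
        if [ "$status" = "ACTIVE" ]; then
            return 0          # VM is up, proceed with the ping test
        elif [ "$status" = "ERROR" ]; then
            return 2          # broken build: fail fast, don't keep polling
        fi
        sleep "$interval"
        elapsed=$((elapsed + interval))
    done
    return 1                  # timed out: report failure instead of hanging
}
```

Doing this in the test script rather than as a jenkins-side job timeout matches the 16:45:43 conclusion ("easier in the script"): the job can then report *which* step timed out instead of just being killed.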
Now that you're here, could you switch autodeploy on POD1 to deploy an OS + ODL setup (rather than OS standalone) - so we can test ODL?
16:47:01 I can, if someone tells me what I need to do
16:47:11 who should I talk to?
16:47:25 stefan_berg: ping?
16:47:27 Probably check with Jonas or Daniel Smith
16:47:31 ok
16:47:33 or Stefan, right
16:47:35 will contact them
16:47:39 many thanks
16:47:43 no
16:47:45 np
16:47:50 will also send a quick email
16:48:00 anything else to cover today?
16:48:46 not from my side
16:49:02 ok... - looks like we're done
16:49:11 thanks everyone...
16:49:14 #endmeeting