16:02:48 #startmeeting neutron_northbound
16:02:48 Meeting started Mon Dec 12 16:02:48 2016 UTC. The chair is yamahata. Information about MeetBot at http://ci.openstack.org/meetbot.html.
16:02:48 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:48 The meeting name has been set to 'neutron_northbound'
16:02:58 #chair mkolesni barak
16:02:58 Current chairs: barak mkolesni yamahata
16:03:07 #topic agenda bashing and roll call
16:03:10 #info mkolesni
16:03:13 #info yamahata
16:03:29 #link https://wiki.opendaylight.org/view/NeutronNorthbound:Meetings meeting
16:03:54 Do we have any topics other than the usual?
16:04:15 can we talk about the v2 driver?
16:04:23 Yes, of course
16:04:28 the experiment in the CI
16:04:53 My schedule will be: I'll chair Dec 19, skip Dec 26 and Jan 2.
16:04:58 isaku - not from me - also have to leave in 30 minutes
16:05:08 can chair Jan 9
16:05:26 john_a_joyce: no problem. Do you have any topic that has high priority?
16:05:28 yamahata: sounds good
16:05:52 anything related to the v2 driver is more important to me
16:05:54 we can discuss them before you leave
16:06:06 so would like to see what Mike wanted to bring up
16:06:37 ok
16:06:43 I'd like to raise that the OpenStack CI rally job is heavily broken.
16:07:15 #action yamahata announce the schedule and update the wiki page
16:07:36 perhaps it's broken due to the v1 driver?
16:07:49 isaku is it broken for v1 and v2?
16:07:54 do you want to discuss the CI/v2 driver now?
16:08:03 mkolesni: so far I'm not sure. we need to dig into it.
16:08:20 please wait a bit.
16:08:21 #topic Announcements
16:08:28 ok
16:08:37 networking-odl 3.3.0 is released
16:09:05 the latest Newton release, which includes only adjustments to versions in requirements and the reference to the branch.
16:09:21 any other announcement?
16:09:32 none from me
16:10:07 #topic action items from last meeting
16:10:22 #link https://review.openstack.org/#/c/405052/ WIP: fullstack - use functional scripts to install ODL
16:10:47 It's still WIP.
16:11:26 ODL bug 7256, which was assigned.
16:11:53 #topic Neutron Stadium Effort
16:12:31 In the short term, we're safe in the Ocata cycle. So we should maintain it on a daily basis.
16:12:43 #topic Migration to new features
16:12:49 mkolesni: now you're on stage
16:13:55 yamahata: 10x
16:14:12 #link https://review.openstack.org/#/c/382597/ DO NOT REVIEW/MERGE: test v2 driver
16:14:30 i made several rechecks there over the weekend
16:14:43 seems its stability is around a 50% failure rate
16:15:07 Does that include the doc breakage/rally failure?
16:15:25 no
16:15:39 doc had some intermittent failures but it was fixed
16:15:48 i think it was outside of our control anyway
16:15:55 rally is 100% stable
16:16:33 Oh, with the v2 driver, rally passes.
16:16:44 yes
16:17:00 seems that most failures are random stuff in TestNetworkAdvancedServerOps
16:17:20 sometimes other tests fail but with the v2 driver it's mostly this
16:18:19 i also tried v2 + parallel tempest
16:18:29 that's interesting.
16:18:31 and also did rechecks on v1 + parallel tempest
16:18:40 seems that parallel isn't stable on either
16:18:55 though on v1 obviously rally fails a lot more in parallel
16:19:02 and on v2 tempest fails more
16:19:45 Hmm, that's no surprise, but annoying.
16:20:08 Do you have any sense where the issue is?
16:20:17 I mean networking-odl or ODL.
16:20:32 We're also going to migrate to new netvirt.
16:20:52 If it might be in legacy netvirt, we should switch to new netvirt first.
16:20:53 hmm not sure, you can try to analyze the failures
16:21:11 Okay. It's difficult to tell.
16:22:35 yamahata: i think for now we can perhaps switch to v2
16:22:42 and disable those tests in tempest
16:22:54 and have an experimental job running them
16:23:22 mkolesni: agree to add a new experimental (or non-voting) job
16:23:57 which tests? Maybe some of your input was lost.
16:23:59 yea i mean non-voting
16:24:08 TestNetworkAdvancedServerOps
16:24:17 it's random tests from that case
16:24:24 All of the test cases in TestNetworkAdvancedServerOps?
16:24:45 it's just a random one each time
16:25:17 That's likely.
16:26:23 TestVolumeBootPatternV2.test_volume_boot_pattern SUCCESS TestNetworkAdvancedServerOps.test_server_connectivity_resize SUCCESS SUCCESS TestNetworkAdvancedServerOps.test_server_connectivity_suspend_resume SUCCESS TestNetworkAdvancedServerOps.test_server_connectivity_reboot TestNetworkAdvancedServerOps.test_server_connectivity_stop_start TestNetworkAdvancedServerOps.test_server_connectivity_resize SUCCES
16:26:24 TestNetworkAdvancedServerOps.test_server_connectivity_suspend_resume TestNetworkBasicOps.test_port_security_macspoofing_port TestNetworkAdvancedServerOps.test_server_connectivity_rebuild TestNetworkAdvancedServerOps.test_server_connectivity_suspend_resume
16:27:31 so what do you say
16:27:32 Can you propose a patch to add the job?
16:27:49 sure ill try to work on this
16:28:38 In the past I tried to add more combinations
16:28:40 https://review.openstack.org/#/c/347045/
16:29:51 anything else to add?
16:30:29 so ill send a patch to change the default to be v2?
16:30:39 do we still want to gate on the v1 driver?
16:31:01 I would prefer we gate on v2
16:31:27 How about v2 by default
16:31:43 v1 + tempest carbon?
16:32:06 So that we can monitor the difference between v1 and v2.
16:32:11 ok sounds good
16:32:21 sounds good to me
16:32:52 #action mkolesni drive the switch to the v2 driver by default on OpenStack CI
16:33:06 i also added a testr results collector for the jobs :)
16:33:14 mkolesni: cool.
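The plan above splits the test set in two: the voting job skips the flaky TestNetworkAdvancedServerOps scenario tests, while a non-voting job keeps running them. A minimal sketch of that filtering, assuming a simple regex-based skip pattern (the list of test names is taken from the results pasted in the meeting; the pattern and variable names are illustrative, not the actual job config):

```python
import re

# Hypothetical skip pattern for the voting job; a non-voting job would
# still run everything this pattern matches.
SKIP_PATTERN = re.compile(r'TestNetworkAdvancedServerOps')

results = [
    "TestVolumeBootPatternV2.test_volume_boot_pattern",
    "TestNetworkAdvancedServerOps.test_server_connectivity_resize",
    "TestNetworkAdvancedServerOps.test_server_connectivity_suspend_resume",
    "TestNetworkBasicOps.test_port_security_macspoofing_port",
]

# Partition the test list: stable tests gate, flaky ones go non-voting.
voting = [t for t in results if not SKIP_PATTERN.search(t)]
non_voting = [t for t in results if SKIP_PATTERN.search(t)]
print(len(voting), len(non_voting))  # → 2 2
```

In the real jobs this partitioning would be done with tempest's own test-selection regex options rather than in Python, but the effect is the same: only the random failures move out of the gate.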
16:33:15 so now they're easier to access
16:35:12 On my side, I'm planning to drive the new netvirt switch. But the priority is: vagrant for functional and fullstack, fix grenade, and then multinode.
16:35:36 After that, unless someone else is driving it, I'll give new netvirt a try.
16:36:09 is vagrant really a high priority?
16:36:28 i think the switch is higher
16:36:43 vagrant iiuc is for devs to run the tests?
16:36:46 No. 408939 is almost done. So I'd simply like to finish it.
16:37:22 I'm effectively looking into grenade and multinode.
16:38:19 when is old netvirt going to be removed?
16:38:24 The highest priority is to fix rally. I'll look into it today.
16:38:40 Probably Nitrogen or Oxygen.
16:39:08 in the next cycle or the one after.
16:39:47 yamahata: on v2 rally isn't breaking
16:40:03 so i'm not sure it's a high priority since we know v1 is race prone
16:40:52 mkolesni: okay.
16:41:30 cool
16:42:14 anything else to discuss?
16:42:23 https://git.openstack.org/cgit/openstack/networking-odl/commit/?id=aae5fe0b2a3109ef7b3a5b2f78fb16e2b9e9d595
16:42:32 #topic patches/bugs
16:43:07 I think it is a very important fix and it should get into Newton
16:43:29 barak: do you mind sending a patch to backport it then?
16:43:47 I suppose clicking the backport button would work
16:44:02 hopefully :)
16:44:13 oops, cherry-pick button
16:44:40 worked. :) https://review.openstack.org/#/c/409858/
16:44:47 cool, thanks
16:45:03 can you please add the neutron release reviewers?
16:45:18 Otherwise we will hardly get reviews on the stable branch.
16:46:17 #action barak add neutron stable maint team to 409858
16:46:26 http://docs.openstack.org/project-team-guide/stable-branches.html#stable-maintenance-core-team
16:46:43 any other bugs/patches that need attention?
16:47:06 OK, thanks.
Yes, one more issue
16:47:52 not sure if this was discussed... the number of neutron server processes and the implications for db operations
16:48:51 I have seen that on a large server there are per-core processes, and after some time, per the logs, each tries to fetch from the database
16:49:55 I see "Thread walking database _sync_pending_rows..." from 20+ processes sometimes
16:50:12 Do you mean worker processes enabled?
16:50:25 you mean on the same neutron node?
16:50:30 yes
16:50:47 yes to both?
16:51:05 yes for the same node
16:51:30 ok so what do you think is problematic there?
16:51:33 I know the number of workers is configurable; still, I am not sure if this is correct/planned behavior
16:51:36 the logging?
16:52:02 i dont think it's a problem unless you see it causing some bottleneck or something
16:52:42 Are you seeing high load on the rdbms?
16:54:05 It was much more problematic before the fix https://git.openstack.org/cgit/openstack/networking-odl/commit/?id=aae5fe0b2a3109ef7b3a5b2f78fb16e2b9e9d595
16:54:07 So far I've thought that such behaviour is undesirable, but it's not critical.
16:54:55 But I agree it is less critical now
16:55:01 i think it can be handled if it's causing load, otherwise IMHO it's not a problem
16:55:11 we can reduce the logging obviously
16:55:42 barak: Are you seeing problems? Other than the log being full of the message, maybe.
16:56:37 Cannot tell of a problem after the fix, but it just made me think that it may cause some load on the database
16:57:20 i think it shouldn't; the table should still be very small and any normal rdbms should handle such insignificant load
16:59:53 sorry i have to go
16:59:58 is there something else?
17:00:00 barak: we're aware of such behaviour and if you see problems, please report.
17:00:03 https://review.openstack.org/#/c/407784/
17:00:17 If you have time, please have a look.
17:00:20 anything else?
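The behaviour barak describes, every API worker running its own `_sync_pending_rows` loop, is harmless as long as each pending journal row can only be claimed by one worker at a time. A minimal sketch of that pattern, assuming atomic row claiming as the fix linked above implies (a `threading.Lock` stands in for the database-level row lock, and all class and method names here are illustrative, not the driver's actual code):

```python
import threading

class Journal:
    """Toy model: many sync threads, each row processed exactly once."""

    def __init__(self, rows):
        self._lock = threading.Lock()
        self._pending = list(rows)
        self.processed = []

    def _claim_row(self):
        # In a real journal this would be an atomic state transition on the
        # journal table (e.g. pending -> processing); the lock stands in
        # for that database-side guarantee.
        with self._lock:
            return self._pending.pop(0) if self._pending else None

    def sync_pending_rows(self):
        # Each worker loops until no pending rows remain.
        while True:
            row = self._claim_row()
            if row is None:
                break
            self.processed.append(row)

rows = ["create net1", "update port2", "delete net1"]
journal = Journal(rows)
# 4 workers stand in for the 20+ per-core server processes from the logs.
workers = [threading.Thread(target=journal.sync_pending_rows) for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(len(journal.processed))  # → 3, not 12: extra workers find nothing to claim
```

This is why extra workers mostly add log noise rather than database load: once a row is claimed, the other 19+ pollers see an empty pending set and go back to sleep.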
17:00:29 #topic open mike
17:00:40 okay, thanks everyone
17:00:44 ok
17:00:44 thanks
17:00:48 thanks Isaku
17:00:51 #topic cookies
17:00:52 bye guys
17:00:56 #endmeeting