15:03:27 <yamahata> #startmeeting neutron_northbound
15:03:27 <odl_meetbot> Meeting started Mon May 15 15:03:27 2017 UTC.  The chair is yamahata. Information about MeetBot at http://ci.openstack.org/meetbot.html.
15:03:27 <odl_meetbot> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:03:27 <odl_meetbot> The meeting name has been set to 'neutron_northbound'
15:03:33 <yamahata> #topic agenda bashing and roll call
15:03:40 <yamahata> #info yamahata
15:03:48 <rajivk_> #info rajivk
15:03:50 <yamahata> #link https://wiki.opendaylight.org/view/NeutronNorthbound:Meetings agenda page
15:04:10 <yamahata> there is no update of the agenda page
15:04:26 <yamahata> Is there any additional topic?
15:04:33 <rajivk_> no
15:04:46 <yamahata> I see, move on
15:04:51 <yamahata> #topic Announcements
15:05:03 <yamahata> Last week there was the OpenStack Summit, and also OpenDaylight Day.
15:05:25 <yamahata> The videos are already available; the recordings were uploaded quite quickly.
15:05:45 <rajivk_> any new updates from the summit?
15:05:54 <yamahata> At OpenDaylight Day, the topics focused on the Nirvana stack, which AT&T is trying to promote.
15:06:06 <mkolesni> hi
15:06:09 <yamahata> There is no new info for opendaylight.
15:06:10 <mkolesni> sorry for being late
15:06:29 <yamahata> Right now we're discussing announcement.
15:06:54 <yamahata> mkolesni: Do you have any topics in addition to the usual ones?
15:06:57 <mkolesni> yes i see
15:07:03 <mkolesni> no just talk about patches
15:07:46 <yamahata> ODL carbon release is being delayed.
15:07:58 <mkolesni> is there an eta?
15:08:36 <yamahata> #link https://wiki.opendaylight.org/view/Simultaneous_Release:Carbon_Release_Plan carbon release plan
15:09:27 <yamahata> RC0 was cut and now it's in the test phase.
15:10:03 <mkolesni> so 1 month postponed?
15:10:33 <yamahata> Yes. For the detailed schedule, please refer to the discussion on the ODL release mailing list.
15:11:38 <yamahata> Hopefully it will be released before the ODL Developer Design Forum, but we will see.
15:11:55 <mkolesni> i thought its end of month?
15:12:11 <yamahata> #link https://lists.opendaylight.org/mailman/listinfo/release
15:12:15 <manjeets> hi
15:12:16 <yamahata> mkolesni: right.
15:12:22 <mkolesni> manjeets, hi
15:12:53 <yamahata> any other announcement?
15:13:01 <reedip_> hello
15:13:50 <yamahata> there seems to be no other announcement, move on.
15:13:51 <yamahata> #topic action items from last meeting
15:13:57 <yamahata> I suppose there are no items.
15:14:13 <yamahata> #topic carbon/nitrogen planning
15:14:50 <yamahata> We'll discuss nitrogen planning at the ODL DDF. In particular, we need to communicate about incompatible changes.
15:15:16 <yamahata> I talked with Sam about having a time slot at the DDF to discuss it.
15:15:26 <mkolesni> is there something incompatible planned?
15:15:41 <yamahata> yang model update to drop tenant-id.
15:16:06 <yamahata> also the status member will become operational.
15:16:23 <yamahata> Those are the incompatible ones. Other updates will be compatible.
15:16:52 <mkolesni> ah, in terms of the api, the status change didn't change the api
15:17:11 <mkolesni> afaik its only additions?
15:17:57 <yamahata> Basically right.
15:18:12 <yamahata> In some cases, API incompatible change is inevitable.
15:18:15 <mkolesni> so only the tenant id, which we should be ready for afaik
15:19:05 <yamahata> In the case of status, we will communicate with the dependent projects and see their response.
15:19:17 <yamahata> It may be delayed until after Nitrogen.
15:19:32 <mkolesni> ok
15:20:28 <yamahata> anything else? otherwise let's move on to patches/bugs.
15:21:16 <mkolesni> lets move on
15:21:16 <yamahata> #topic patches/bugs
15:21:25 <manjeets> https://review.openstack.org/#/c/456965/2/networking_odl/tests/functional/base.py
15:21:44 <manjeets> mkolesni, i added this but didn't get a reply from you
15:22:10 <manjeets> i observed that for the functional tests, the delete test was always passing no matter whether the resource got created or not
15:22:49 <mkolesni> manjeets, shouldnt you be getting error code then?
15:23:14 <manjeets> no it sent None
15:23:36 <mkolesni> manjeets, per the HTTP spec the response should be 410 or 404 in case the resource doesn't exist
15:23:49 <mkolesni> so if it's not there, that's what i'd expect
15:25:22 <manjeets> ok, I haven't touched it for a few weeks; i'll recheck, but i remember the create was not happening and this test was passing
15:25:30 <mkolesni> if it doesnt return that then we need to decide if thats a bug or not
15:26:26 <mkolesni> perhaps you can add a case to check that the correct error code is returned when deleting a non-existent resource?
15:26:58 <manjeets> mkolesni, that's a good idea
15:27:04 <manjeets> i'll add a case for that
15:27:27 <mkolesni> ok great then we can be sure the correct error is thrown
15:27:39 <mkolesni> thanks
15:28:01 <manjeets> mkolesni, for qos the driver was not getting registered properly, and i believe the resource got created on the neutron side
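For context, a minimal sketch of the test case suggested above, assuming a requests-based functional test. The endpoint URL, the all-zeros UUID, and the test name are illustrative assumptions, not the actual networking-odl suite:

```python
# Sketch: deleting a non-existent resource should return an HTTP error
# (404 or 410), not silently succeed, per the HTTP spec discussion above.
import requests

# Hypothetical ODL neutron northbound endpoint, for illustration only.
ODL_NEUTRON_URL = "http://127.0.0.1:8080/controller/nb/v2/neutron"


def test_delete_nonexistent_network_returns_error():
    missing = "00000000-0000-0000-0000-000000000000"  # assumed not to exist
    resp = requests.delete("%s/networks/%s" % (ODL_NEUTRON_URL, missing))
    # If the create never happened, this assertion would catch it: the
    # delete must not look like a success against a missing resource.
    assert resp.status_code in (404, 410)
```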
15:28:02 <mkolesni> yamahata, can we talk about https://review.openstack.org/453581 ?
15:28:20 <yamahata> mkolesni: sure, of course.
15:28:43 <yamahata> So what happens if two dependent resources are updated?
15:28:53 <yamahata> e.g. network and port. sg and sgrule.
15:29:02 <mkolesni> i basically don't have improvements there, but i noticed that although it works, there are now many more deadlocks in the db
15:29:29 <mkolesni> so ive been trying to track it down for the last week but to no avail
15:29:34 <yamahata> more deadlocks with your patch? or without it?
15:30:26 <mkolesni> with the patch some deadlocks occur when inserting the dependencies in the db
15:30:45 <mkolesni> i wasnt able to figure out why though
15:30:49 <yamahata> Oh. Is the Galera db backend used?
15:30:56 <yamahata> Sure we need to track it down.
15:31:12 <mkolesni> basically it seems to happen when the parent resource is being updated while the child dependencies get inserted
15:31:46 <yamahata> The dependency calculation would widen the window.
15:31:57 <mkolesni> it's not awful since retries basically fix everything, but it's less than ideal
15:33:31 <yamahata> I see. Let's investigate it further.
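For context, the retry behavior mkolesni mentions can be sketched with oslo.db's deadlock-retry decorator. This is an assumption about the approach, not the actual networking-odl journal code; the function and argument names are hypothetical:

```python
# Sketch: absorb DB deadlocks on journal-dependency inserts via retries.
from oslo_db import api as oslo_db_api


@oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
def record_entry_with_dependencies(session, entry, dependencies):
    # If Galera/MySQL aborts this transaction with a deadlock error,
    # the decorator rolls back and re-runs the whole function.
    with session.begin(subtransactions=True):
        session.add(entry)
        for dep in dependencies:
            session.add(dep)
```

Retries mask the symptom, as noted above, but tracking down why the parent-update and child-insert transactions conflict is still the open question.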
15:33:32 <mkolesni> regarding the race you were talking about did you see yamamoto's comment?
15:33:35 <mkolesni> https://review.openstack.org/#/c/453581/9/networking_odl/journal/journal.py
15:33:42 <mkolesni> please take a look later
15:33:48 <yamahata> Sure, will do
15:33:55 <yamahata> #action yamahata look at yamamoto's comment
15:34:08 <mkolesni> can we talk about https://review.openstack.org/444648 ?
15:34:16 <yamahata> singleton patch?
15:34:23 <yamahata> sure. Please go ahead.
15:34:44 <mkolesni> yes
15:35:28 <mkolesni> what is your position on this?
15:35:41 <yamahata> For now, we should have only a single timer in the neutron server.
15:35:56 <mkolesni> you mean globally per host?
15:36:32 <yamahata> the rpc worker processes shouldn't run the timer; the main process should run a single timer.
15:36:40 <yamahata> Maybe it can be a neutron worker.
15:36:52 <mkolesni> ok but what if that process dies?
15:37:06 <mkolesni> 677016
15:37:25 <mkolesni> it could be problematic
15:37:43 <mkolesni> can i make a suggestion?
15:37:54 <yamahata> Is the number wrong? 677016?
15:38:08 <yamahata> the process death means neutron server death.
15:38:13 <mkolesni> huh, no, it's just an OTP token accidentally pasted :$
15:38:38 <yamahata> Oh I can open the patch now.
15:38:58 <mkolesni> since the processes fork, i think it could be possible that only one dies for whatever reason
15:39:33 <mkolesni> so we could be in trouble
15:39:58 <mkolesni> unless it's not possible, but i'm not familiar with all possible OS behaviors, so we need to tread carefully
15:40:11 <mkolesni> anyway id like to make a suggestion..
15:40:37 <mkolesni> i think this patch does no harm, while for scale it does mitigate a problem that we, at least, hit in our testing
15:40:53 <mkolesni> so i think we can merge it as such and of course continue planning an enhanced solution
15:41:18 <yamahata> Is the issue you're seeing a timer issue, or some other issue?
15:41:47 <yamahata> Anyway, having multiple timers within the neutron server would be a scalability issue.
15:42:01 <mkolesni> the issue we had was that when we had 56 cores on the machine, the cloud came to a halt because of so many threads
15:42:05 <yamahata> So we can have a single timer within the neutron server and see the outcome.
15:42:28 <mkolesni> yes i agree but i dont think this should stop this patch from going in but rather build on top of it
15:42:48 <yamahata> We can have threadpool patch for more flexibility.
15:42:57 <mkolesni> this will at least limit the timers to one per neutron process (after the fork)
15:43:06 <mkolesni> then after that we can further limit it
15:43:07 <yamahata> For example, an rpc worker can have only one journal thread,
15:43:17 <yamahata> but we can have more for the main process.
15:43:41 <mkolesni> ok sure but this patch doesnt limit that
15:44:13 <mkolesni> all it does is make sure there's one of this object per process; then we can have a thread pool or whatever else we like
15:44:19 <yamahata> You're against the threadpool patch, having given it a -2.
15:44:35 <mkolesni> we can discuss that as well right now
15:45:07 <yamahata> We can have the singleton patch and then threadpool support for more flexibility.
15:45:53 <mkolesni> sure, that sounds good as long as the thread pool does not increase the number of timers per process
15:46:39 <yamahata> hmm, do you want at least one timer per process, i.e. all rpc workers and the main process?
15:47:22 <mkolesni> for now there will be one as i see the thread pool patch didnt change that
15:48:02 <yamahata> Or are you okay with single timer within neutron server?
15:48:15 <yamahata> i.e. single timer among main process and rpc workers.
15:48:41 <mkolesni> i think the thread pool patch just increases the capacity of available threads per event happening, right?
15:48:58 <yamahata> Right.
15:49:07 <mkolesni> i.e. there will be one timer per process, so for a 4-core machine there will be 9 timers iiuc (presumably 4 api workers + 4 rpc workers + the main process)
15:49:19 <mkolesni> then later we can plan how many timers we want
15:49:33 <yamahata> We don't have to create that many timers. We can have only a single timer among the processes within the neutron server.
15:49:40 <mkolesni> obviously too many is not good, but a limit of 1 per machine could be problematic as well
15:50:02 <yamahata> Why is 1 timer per machine problematic?
15:50:05 <mkolesni> but i think both these patches can continue and the timer count can be addressed later on
15:50:13 <yamahata> the timer is only for rescuing unprocessed journal entries.
15:50:43 <mkolesni> timer is generally for sync
15:50:53 <mkolesni> so its either the backlog from connectivity loss
15:50:56 <mkolesni> or full sync
15:51:08 <mkolesni> so just 1 might be too little
15:51:34 <yamahata> I see. but we don't have to have one per process.
15:51:37 <mkolesni> also, if that process dies but the others don't, it could be a problem i guess, but that's just a theory; i'm not sure if it's possible or not
15:51:47 <yamahata> We can control the number of timers.
15:52:04 <mkolesni> sure but what i think is that should be a different patch
15:52:06 <yamahata> Process death means neutron death. It's another issue.
15:52:27 <mkolesni> i.e. no reason to stall these patches for that fix
15:53:00 <yamahata> You'd like to have single timer per process?
15:54:14 <mkolesni> id like to think of it further and come up with a proposal
15:54:29 <mkolesni> but in the mean time i dont think these patches need to wait
15:54:37 <mkolesni> i removed the -2 from the thread pool patch
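For context, a minimal sketch of the "one object per process" behavior under discussion, assuming a simplified journal thread. The class and method names are hypothetical, not the actual patch; the point is that each forked neutron worker creates its own instance rather than reusing the parent's timer thread:

```python
# Sketch: limit the journal timer to one instance per forked process.
import os
import threading


class JournalThread(object):
    _instance = None
    _pid = None
    _lock = threading.Lock()

    @classmethod
    def get_instance(cls):
        with cls._lock:
            # After a fork, the child inherits _instance but must not reuse
            # the parent's thread, so compare the stored pid with os.getpid().
            if cls._instance is None or cls._pid != os.getpid():
                cls._instance = cls()
                cls._pid = os.getpid()
            return cls._instance

    def start_timer(self, interval, callback):
        # One recurring timer per process; limiting timers further across
        # processes is the follow-up work discussed above.
        timer = threading.Timer(interval, callback)
        timer.daemon = True
        timer.start()
```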
15:54:45 <yamahata> Ok. we have 5min left.
15:54:57 <yamahata> any other patches to discuss?
15:55:03 <yamahata> rajivk: ?
15:55:04 <mkolesni> none from me
15:55:30 <rajivk_> yeah, i requested a review of the lbaas patch
15:55:30 <yamahata> From me: the dhcp port issue will be discussed on the mailing list.
15:55:44 <yamahata> #action yamahata reply to dhcp port discussion on mailing list
15:56:04 <yamahata> #link https://review.openstack.org/#/c/449432/ lbaas review
15:56:19 <yamahata> any other patches?
15:56:24 <rajivk_> yeah, https://review.openstack.org/#/c/459970/
15:56:36 <reedip_> rajivk_ did you check the jenkins failure?
15:56:39 <yamahata> is python27 test case broken?
15:57:09 <rajivk_> I checked it, but i could not find the reason
15:57:21 <rajivk_> i asked yamahata to have a look at them.
15:57:22 <reedip_> hmm .. ok
15:57:30 <yamahata> I see.
15:57:41 <yamahata> #action yamahata and others look at https://review.openstack.org/#/c/459970/
15:57:44 <reedip_> If its ok, I would look at it tomorrow .. hope i can help
15:57:53 <yamahata> Also I noticed the patch https://review.openstack.org/#/c/464111/
15:58:01 <yamahata> Fix floatingip status not same when create and unAssociate
15:58:21 <yamahata> This is a good fix, so we should follow up on it. A neutron fix might be necessary.
15:58:45 <yamahata> any other patches to discuss?
15:59:11 <yamahata> okay.
15:59:12 <yamahata> #topic open mike
15:59:17 <yamahata> anything else to discuss?
16:00:17 <rajivk_> I would like to have more work.
16:00:18 <yamahata> seems nothing.
16:00:28 <rajivk_> If someone needs any help, please let me know.
16:00:39 <yamahata> rajivk: please go ahead. and feel free to take over pending patches.
16:00:57 <yamahata> Sometimes I have uploaded patches but don't have time to follow up.
16:01:03 <yamahata> In that case, please take them over.
16:01:04 <rajivk_> yamahata, thanks
16:01:14 <rajivk_> I want to know more about your rpc specs
16:01:34 <rajivk_> What is the plan for that? Maybe i can contribute to it with you.
16:01:52 <yamahata> rajivk: cool. The plan is to implement rpc from ODL. the main use case is dhcp port.
16:02:13 <rajivk_> yeah, are you working on it?
16:02:19 <yamahata> The goal is to allow rpc. It doesn't add new rpc.
16:02:28 <yamahata> Not yet. Now we're discussing with the dhcp folks.
16:03:06 <rajivk_> ok, i will go through your rpc specs and raise my concern if i have any.
16:03:13 <yamahata> great
16:03:32 <yamahata> anything else?
16:03:35 <rajivk_> no
16:03:44 <yamahata> thank you everyone.
16:04:08 <yamahata> #topic cookies
16:04:13 <yamahata> #endmeeting