08:11:54 <joehuang> #startmeeting multisite
08:11:54 <collabot> Meeting started Thu Jan 28 08:11:54 2016 UTC.  The chair is joehuang. Information about MeetBot at http://wiki.debian.org/MeetBot.
08:11:54 <collabot> Useful Commands: #action #agreed #help #info #idea #link #topic.
08:11:54 <collabot> The meeting name has been set to 'multisite'
08:13:08 <joehuang_> #topic rollcall
08:13:13 <joehuang_> #info joehuang
08:13:23 <SAshish> #info Ashish
08:13:43 <sorantis> #info dimitri
08:13:43 <joehuang_> connection issue
08:14:19 <joehuang_> hello, Here in China we'll have Chinese New Year soon
08:14:42 <joehuang_> and I'll leave for vacation from Feb.4 to Feb.14
08:15:02 <SAshish> Happy new year in advance!!!
08:15:14 <sorantis> cool!
08:15:15 <joehuang_> So I cannot call the meeting next week or the week after
08:15:24 <joehuang_> Thanks
08:15:32 <sorantis> do you guys also celebrate on January 1st?
08:15:45 <joehuang_> yes. eat food
08:15:59 <sorantis> 2016 is a Monkey year
08:16:00 <joehuang_> and family time
08:16:19 <joehuang_> yes, it's a Monkey year. Even you know that~
08:16:28 <sorantis> any recommendations from the chinese horoscope? :)
08:17:00 <joehuang_> the system has some issues today, it often quits automatically
08:17:19 <sorantis> You’d be surprised but Chinese horoscope is big in Georgia :)
08:18:16 <joehuang_> I don't have much knowledge about the horoscope
08:18:29 <joehuang_> only know which year is related to which animal
08:18:41 <joehuang_> hi, let's get back to kingbird
08:18:55 <joehuang_> #topic quota implementation
08:19:03 <SAshish> Yes
08:19:33 <SAshish> I see two ways of implementing this, please comment
08:19:39 <SAshish> There can be two approaches  1) API, JD & JW  JD creates multiple JW through RPC call. So there will be 'n' RPC calls for 'n' regions. Also there has to be some way of sharing region specific data back to JD. When JW runs asynchronously then it can be tough to return/share the data to JD.     2) API and Engine(can call JD)  Quota Manager will be a separate module in KB and API will call it directly for DB updates. For quota reb
08:19:42 <joehuang_> hi Ashish, could you share the new idea on the implementation after the last meeting?
08:20:08 <SAshish> the text above did not go well..
08:20:14 <SAshish> I am resending it
08:20:20 <joehuang_> ok
08:20:28 <SAshish> 1) API, JD & JW
08:20:33 <SAshish> JD creates multiple JWs through RPC calls. So there will be 'n' RPC calls for 'n' regions. Also, there has to be some way of sharing region-specific data back to JD.
08:20:38 <SAshish> When the JWs run asynchronously, it can be tough to return/share the data to JD.
08:21:05 <joehuang_> agree. that's why I changed it a little in the last meeting
08:21:11 <SAshish> data here is resource usage
08:21:18 <SAshish> so there is one more approach
08:21:18 <SAshish> with me
08:21:27 <joehuang_> please
08:21:30 <SAshish> 2) API and Engine (which can call JD)
08:21:43 <SAshish> Quota Manager will be a separate module in KB, and the API will call it directly for DB updates.
08:21:45 <SAshish> For quota rebalancing, an API request is made to the engine (there is also a periodic task that needs no API request); the engine then calls the Quota Manager for the rebalancing task.
08:21:49 <SAshish> QM uses the multiprocessing module of Python to create a pool of workers, each performing a region-specific task.
08:21:52 <SAshish> So QM will be the parent process and the workers will be its children.
08:21:56 <SAshish> All the workers start their tasks asynchronously, and whichever worker has finished its task has to share that data with the parent process (QM). For this, QM also creates an unnamed pipe so that all the workers (child processes) have access to it. A worker, as soon as it finishes its task, writes its information to the pipe.
08:22:00 <SAshish> QM waits till all the workers finish their tasks by calling .join() on them (as in threading).
08:22:03 <SAshish> Also, all the workers acquire a lock on the pipe before writing/reading to ensure synchronization.
08:22:08 <SAshish> Then QM has each region's information; now it performs the calculation, and the final limits are given back to the processes for the limit update.
08:22:14 <SAshish> We have to cache region-specific clients as well so that they can be reused for reading and writing.
08:22:17 <SAshish> This way there will be only one RPC call from the API to the Engine, and the engine creates a pool of workers with Python's multiprocessing module; information is shared using pipes.
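A minimal sketch of the flow Ashish describes, with hypothetical names (collect_usage(), the region list) standing in for real Kingbird code: the parent QM forks one worker per region, workers write their usage into a shared unnamed pipe under a lock, and the parent joins them before rebalancing.

```python
import multiprocessing


def collect_usage(region, conn, lock):
    # Placeholder for a real per-region quota/usage API call.
    usage = {"region": region, "cores_used": 8}
    with lock:  # serialize writes so records don't interleave on the pipe
        conn.send(usage)


if __name__ == "__main__":
    regions = ["RegionOne", "RegionTwo"]  # hypothetical region list
    parent_conn, child_conn = multiprocessing.Pipe()
    lock = multiprocessing.Lock()
    workers = [multiprocessing.Process(target=collect_usage,
                                       args=(r, child_conn, lock))
               for r in regions]
    for w in workers:
        w.start()
    for w in workers:
        w.join()  # QM waits for all workers, as described above
    usages = [parent_conn.recv() for _ in regions]
    # ... compute rebalanced limits from usages and push them back ...
```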
08:23:53 <joehuang_> good idea
08:24:27 <sorantis> what happens when a task fails?
08:24:39 <joehuang_> using multi-processes or multi-eventlet? do you have any comparison?
08:25:02 <sorantis> say we have only 2 successful task executions out of 4. what will QM conclude?
08:25:06 <joehuang_> multi-eventlet is more lightweight
08:25:41 <joehuang_> this task should fail; it succeeds only if all regions succeed
08:25:56 <SAshish> then we can perform a retry for those failures
08:26:21 <sorantis> I agree with joe
08:26:44 <sorantis> if even one task fails, QM should try again. otherwise quota limits will be out of sync
08:26:47 <SAshish> yes. after one retry the
08:26:48 <joehuang_> configure the retry times; after the configured number of retries, let it fail. it'll simplify the design
08:27:07 <sorantis> you mean each sub process should have a number of retries
08:27:31 <SAshish> yes. something like that
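A rough sketch of the retry policy being agreed on here, with an assumed config value and a hypothetical per-region task callable:

```python
MAX_RETRIES = 3  # assumed configuration option, not an actual Kingbird setting


def run_with_retries(task, region):
    """Run one region's task, retrying a configured number of times."""
    last_exc = None
    for _ in range(MAX_RETRIES):
        try:
            return task(region)
        except Exception as exc:  # a real impl would catch narrower errors
            last_exc = exc
    # After the configured retries, let the whole rebalance fail.
    raise RuntimeError("region %s failed after %d retries: %s"
                       % (region, MAX_RETRIES, last_exc))
```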
08:28:50 <joehuang_> would Zhiyuan do a test to see if multi-eventlet also works for multi-region API calls, and can do it concurrently without a lot of overhead?
08:29:23 <joehuang_> programming for multi-process is not easy
08:30:11 <zhiyuan> ok, I can do a test that spawns multiple green threads, where each green thread GETs some URLs
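For reference, a rough sketch of such a test, assuming placeholder URLs; eventlet's GreenPool runs the GETs concurrently on green threads:

```python
import eventlet
eventlet.monkey_patch()  # make the network stack cooperative

from six.moves.urllib import request

urls = ["http://example.com", "http://example.org"]  # placeholder URLs

pool = eventlet.GreenPool()


def fetch(url):
    # Each green thread GETs one URL and reports the response size.
    return url, len(request.urlopen(url).read())


# imap yields results as the concurrent GETs complete
for url, size in pool.imap(fetch, urls):
    print(url, size)
```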
08:30:37 <SAshish> but we still need to share info from the threads to their caller
08:30:46 <joehuang_> yes
08:30:54 <SAshish> they run in an async manner
08:31:17 <SAshish> whenever one finishes, it has to write its output to some shared space
08:31:22 <joehuang_> Ashish, what do you think about this multi-processing approach compared to multi-eventlet?
08:31:50 <joehuang_> and Dimitri?
08:31:56 <SAshish> #link https://github.com/svinota/pyroute2/issues/99
08:32:05 <sorantis> I wouldn’t go multi-processing
08:32:14 <sorantis> too much hassle
08:32:32 <SAshish> okay.
08:32:53 <SAshish> we need to have multiple workers working in parallel
08:33:11 <sorantis> that you can do with threading
08:34:22 <SAshish> we have more control over the processes with multiprocessing. it is similar to threads
08:35:39 <SAshish> #link http://stackoverflow.com/questions/3044580/multiprocessing-vs-threading-python
08:35:53 <sorantis> what’s wrong with thread group manager?
08:36:31 <sorantis> well your link just gave you the answer :)
08:36:49 <sorantis> I would avoid multi-processing for exactly the same reason
08:37:28 <joehuang_> so dimitri also prefers eventlet over multi-processing/threading?
08:37:29 <sorantis> difficult to sync, difficult to share info, expensive to spawn
08:37:46 <SAshish> hmm.. Separate memory space
08:39:03 <sorantis> joe, yes, I’m more inclined to use something more lightweight
08:39:03 <joehuang_> here we need shared memory for the data coming from different regions
08:39:42 <sorantis> and for that I would have a look at how they implemented it in Senlin
08:39:49 <sorantis> with the concept of Actions
08:40:05 <joehuang_> for eventlet, some testing is needed. So I suggest Ashish work on the overall code framework, and zhiyuan do a test to see if eventlet works
08:40:39 <sorantis> also have a look at Senlin’s actions
08:40:46 <joehuang_> yes
08:40:49 <sorantis> their actions operate on clusters
08:40:54 <SAshish> I had introduced multiprocessing as it's easy to share a pipe among parent/child processes
08:41:08 <SAshish> okay.. will look into senlin
08:41:25 <sorantis> no need to go multiprocess
08:41:32 <joehuang_> Hi, has Ashish already implemented multi-processing?
08:42:00 <joehuang_> I have not seen a patch yet
08:42:08 <SAshish> no, I have not done it yet
08:42:18 <joehuang_> ok
08:42:19 <SAshish> we are yet to finalize the design
08:42:43 <SAshish> I have done a small hello-world program at my end just to get hands-on
08:42:53 <sorantis> #action Ashish to check Senlin’s implementation of Actions
08:43:02 <sorantis> #action zhiyuan to do a test to see if eventlet works
08:44:24 <joehuang_> And zhiyuan finished a two-region installation with devstack in tricircle, but one bug is left; after the bug is fixed, zhiyuan can submit a patch in kingbird too for a two-region setup
08:44:44 <sorantis> sounds good
08:45:35 <joehuang_> currently a multi-region setup in devstack does not work well, even in the openstack community
08:46:22 <sorantis> is there a specific reason?
08:47:03 <joehuang_> they changed the devstack scripts, but no one has installed multi-region via devstack yet
08:47:12 <joehuang_> so the bug was not found
08:48:01 <sorantis> I didn’t know that somebody is working on mr in devstack scripts
08:48:39 <SAshish> it will be very useful.
08:48:42 <joehuang_> I am afraid that senlin only deals with VM clusters in one openstack
08:49:06 <sorantis> yes
08:49:41 <sorantis> I think their concept can be extrapolated onto multiple regions
08:50:08 <sorantis> basically they use ThreadGroupManager
08:50:32 <sorantis> there’s a dispatcher that manages actions
08:50:35 <joehuang_> worth digging into
08:51:53 <SAshish> then we have to share the information
08:52:23 <joehuang_> yes, please share on the m-l. I'll check mail and can review from my phone after Feb 4
08:53:38 <joehuang_> and one more thing, the API (QM) RPCs the JW, just bypass the JD for now
08:53:43 <SAshish> share information between the called thread and the calling process
08:54:28 <joehuang_> If we go with multi-eventlet, it's easier: one shared queue
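A sketch of that shared-queue idea, with hypothetical worker and region names: green threads put their results on one eventlet queue that the caller drains.

```python
import eventlet
from eventlet import queue

results = queue.LightQueue()  # the single shared queue


def worker(region):
    # Placeholder for a region-specific usage query.
    results.put((region, {"ram_used": 512}))


pool = eventlet.GreenPool()
for region in ["RegionOne", "RegionTwo"]:  # hypothetical regions
    pool.spawn_n(worker, region)
pool.waitall()  # wait for every green thread to finish

while not results.empty():
    print(results.get())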
08:54:49 <sorantis> Ashish check ThreadGroupManager and eventlets
08:55:02 <SAshish> OR with threads we can use named pipes
08:55:31 <SAshish> #action Ashish check ThreadGroupManager and eventlets
08:56:50 <sorantis> https://github.com/openstack/senlin/blob/master/senlin/engine/scheduler.py
08:57:30 <joehuang_> so we agree that the QM sends one RPC out
08:57:32 <zhiyuan_> as far as I know, the creator of a gthread can use a green pile to retrieve the return value of each gthread
08:57:48 <zhiyuan_> that is one way to collect the result of each worker
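A sketch of that GreenPile pattern, with a hypothetical get_usage() task: the spawner iterates the pile to collect each green thread's return value.

```python
import eventlet


def get_usage(region):
    # Placeholder result; a real task would query the region's API.
    return {"region": region, "instances_used": 3}


pile = eventlet.GreenPile()
for region in ["RegionOne", "RegionTwo"]:
    pile.spawn(get_usage, region)

# Iterating the pile yields each gthread's return value, in spawn order.
results = list(pile)
print(results)
```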
08:58:06 <joehuang_> not QM, the KB API
08:58:50 <sorantis> zhiyuan_: the above threadgroupmanager uses threadgroup from oslo
08:58:58 <sorantis> which in turn uses eventlets
08:59:08 <joehuang_> good news
08:59:30 <zhiyuan_> oh, so another convenient oslo lib
08:59:38 <sorantis> I would really really try that track :)
08:59:39 <SAshish> yes.. threadgroupmanager is based on eventlet
08:59:44 <sorantis> yes
08:59:47 <sorantis> oslo_service
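A minimal sketch of the oslo_service ThreadGroup that Senlin's scheduler builds on (it wraps eventlet green threads); start_action() and the region list are hypothetical placeholders.

```python
from oslo_service import threadgroup


def start_action(region):
    print("running quota task for", region)  # placeholder per-region task


tg = threadgroup.ThreadGroup()
for region in ["RegionOne", "RegionTwo"]:  # hypothetical regions
    tg.add_thread(start_action, region)
tg.wait()  # block until every green thread finishes
```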
09:00:26 <sorantis> guys
09:00:28 <sorantis> we need to close
09:00:56 <joehuang_> ok, we need more frequent email discussion
09:00:58 <sorantis> #link https://github.com/openstack/senlin/blob/master/senlin/engine/scheduler.py
09:01:10 <sorantis> let’s continue this on ml
09:01:16 <joehuang_> let's conclude the meeting today
09:01:26 <SAshish> sure
09:01:28 <joehuang_> yes, m-l discussion is needed
09:01:41 <joehuang_> and share information
09:01:41 <sorantis> ok
09:01:43 <sorantis> thanks guys
09:01:48 <joehuang_> see you
09:01:51 <sorantis> bye
09:01:54 <joehuang_> #endmeeting
09:01:55 <zhiyuan_> bye
09:01:56 <SAshish> thanks guys. Good bye
09:02:22 <joehuang> #endmeeting