08:07:13 <joehuang> #startmeeting multisite
08:07:13 <collabot`> Meeting started Thu Nov 19 08:07:13 2015 UTC.  The chair is joehuang. Information about MeetBot at http://wiki.debian.org/MeetBot.
08:07:13 <collabot`> Useful Commands: #action #agreed #help #info #idea #link #topic.
08:07:13 <collabot`> The meeting name has been set to 'multisite'
08:07:22 <sorantis> #info dimitri
08:07:25 <joehuang> #info rollcall
08:07:30 <joehuang> #info joehuang
08:07:37 <ttallgren> #info tapio
08:08:14 <joehuang> even if we use KB api + KB engine, the api needs to call the engine through RPC
08:08:49 <SAshish> yes.. but here in controller
08:08:52 <SAshish> @index.when(method='delete', template='json')
                   def delete(self):
                       context = restcomm.extract_context_from_environ()
                       return self.jd_api.say_hello_world_cast(context, '## delete cast ##')
08:09:10 <SAshish> it is not a rpc call right...
08:09:18 <joehuang> #topic discussion Kingbird BPs and progress
08:09:31 <joehuang> it's cast, async call
08:10:49 <joehuang> cast means the RPC call will return immediately, without waiting for the result
08:11:24 <SAshish> yes.. but it doesn't look like that
08:11:24 <joehuang> Hi Ashish, delete is an example for cast
08:11:42 <SAshish> yeah.. that cast call is from jdapi => jdservice
08:11:46 <joehuang> for post, put, they are examples of RPC call
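The call/cast distinction discussed above can be sketched in plain Python. This is a hand-rolled stand-in, not real oslo.messaging or Kingbird code; `FakeRpcClient`, `Manager`, and the method names are illustrative only:

```python
import threading

class FakeRpcClient:
    """Illustrative stand-in for an oslo.messaging RPCClient:
    call() blocks for a result, cast() returns immediately."""

    def __init__(self, server):
        self.server = server

    def call(self, ctxt, method, **kwargs):
        # Synchronous RPC: run the server method and wait for its result.
        return getattr(self.server, method)(ctxt, **kwargs)

    def cast(self, ctxt, method, **kwargs):
        # Asynchronous RPC: hand the work off and return immediately.
        worker = threading.Thread(target=getattr(self.server, method),
                                  args=(ctxt,), kwargs=kwargs)
        worker.start()
        return None  # a cast never returns a result to the caller

class Manager:
    def say_hello_world_call(self, ctxt, payload):
        return 'hello: %s' % payload

    def say_hello_world_cast(self, ctxt, payload):
        pass  # side effects only; the caller never sees a result

client = FakeRpcClient(Manager())
result = client.call({}, 'say_hello_world_call', payload='x')   # blocks for the answer
nothing = client.cast({}, 'say_hello_world_cast', payload='x')  # returns at once
```

The same split shows up in the hello-world controller: the `delete` handler uses cast because nothing needs to come back, while `post`/`put` use call.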
08:11:56 <SAshish> not from kb api => jdapi
08:12:32 <joehuang> jdapi is the client of the RPC, running in the KB api. The KB api calls the jdapi
08:12:38 <SAshish> this part of code is from controllers/helloworld.py
08:13:12 <SAshish> Correct me If I am wrong
08:13:14 <joehuang> controller/helloworld.py running in the KB api
08:13:28 <SAshish> yes.
08:13:45 <joehuang> then it calls the RPC client which is provided by the JD
08:14:17 <joehuang> the rpc client will send an rpc message to JD, and the JD manager will handle the call accordingly
08:14:17 <SAshish> that RPC client == JD API??
08:14:23 <joehuang> yes
08:14:59 <SAshish> #info Ashish
08:15:29 <SAshish> so there is no direct rpc call to JD from KB API
08:16:18 <SAshish> API calls a client (jdapi) through a normal call, then this client (jdapi) will do an rpc call to jdservice.
08:16:40 <joehuang> correct
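The layering confirmed here — the controller makes a plain Python call into the jdapi client, and only that client performs the RPC — can be sketched as follows. All class and method names are illustrative, not the actual Kingbird source; `EchoTransport` stands in for the real oslo.messaging transport:

```python
class EchoTransport:
    """Fake transport: records what would have been sent over RPC."""
    def call(self, ctxt, method, **kwargs):
        return (method, kwargs)

class JdRpcApi:
    """Runs inside the KB API process; wraps the RPC transport."""
    def __init__(self, rpc_client):
        self.client = rpc_client  # e.g. an oslo.messaging RPCClient

    def say_hello_world_call(self, ctxt, payload):
        # The RPC happens here, not in the controller.
        return self.client.call(ctxt, 'say_hello_world_call', payload=payload)

class HelloWorldController:
    """WSGI-side controller; knows nothing about messaging."""
    def __init__(self, jd_api):
        self.jd_api = jd_api

    def get(self, context):
        # Ordinary method call into the client layer.
        return self.jd_api.say_hello_world_call(context, 'hi')

controller = HelloWorldController(JdRpcApi(EchoTransport()))
```

So from the controller's point of view there is no direct RPC to JD: the boundary is an ordinary function call, and the messaging detail is hidden inside the client class.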
08:16:44 <SAshish> so rpc calls are inside sub-components, not across components
08:17:10 <joehuang> oslo.messaging encapsulates this
08:18:45 <joehuang> You don't need to post message directly,
08:18:50 <joehuang> def say_hello_world_call(self, ctxt, payload):
                        return self.client.call(ctxt, 'say_hello_world_call', payload=payload)
08:19:16 <SAshish> Kingbird API is Web Server Gateway Interface (WSGI) applications to receive and process API calls, including keystonemiddleware to do the authentication, parameter check and validation, convert API calls to job rpc message, and then send the job to Kingbird Job Daemon through the queue
08:19:25 <SAshish> this is from the document.
08:20:05 <joehuang> in jdrpcapi.py, client.call is RPC client calling
08:20:07 <SAshish> convert API calls to job rpc message.. Does this mean that KB API convert this?
08:20:14 <SAshish> yes.. seen that..
08:20:18 <joehuang> yes
08:20:58 <sorantis> joehuang, and this client.call triggers the jdmanager?
08:21:42 <joehuang> yes, the rpc method will be triggered in the server, jdmanager
08:21:54 <sorantis> yes
08:22:53 <sorantis> so basically jdrpcapi is a relay mechanism to a jdmanager, that contains the actual logic for the rpc calls
08:23:00 <joehuang> after the client.call of say_hello_world_call, the jdmanager receives the message, and will call say_hello_world_call(self, ctx, payload) in JDmanager
08:23:03 <sorantis> like a proxy
08:23:04 <SAshish> no right
08:23:09 <SAshish> It is jwmanager
08:23:14 <SAshish> not jdmanager
08:23:31 <SAshish> jdmanager is again a call to jwapi
08:23:40 <SAshish> that is then rpc call to jwservice/jwmanager
08:23:54 <SAshish> actual logic == jwmanager??
08:24:02 <joehuang> "jdmanager is again a call to jwapi" this is an example, to call jwapi immediately
08:24:05 <sorantis> the jdmanager then dispatches tasks to jworkers
08:24:21 <sorantis> so it’s a double chain
08:24:35 <sorantis> jdrpcapi is a proxy to jdmanager
08:24:46 <joehuang> for quota, we can split the job in jdmanager, then using rpc to call jwmanager to process small job
08:24:48 <sorantis> and jwrpcapi is a proxy to jwmanager
08:25:33 <joehuang> "jwrpcapi is a proxy to jwmanager" ==> jwrpcapi is a "client" to jwmanager
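The double chain just described can be sketched with plain method calls standing in for the two RPC hops (all names are illustrative, not Kingbird code; in the real system the two `rpcapi` layers would go over oslo.messaging):

```python
class JwManager:
    """Job worker: the leaf logic lives here."""
    def do_work(self, ctxt, item):
        return item * 2

class JwRpcApi:
    """Client to JwManager (would be an RPC hop in reality)."""
    def __init__(self, manager):
        self.manager = manager
    def do_work(self, ctxt, item):
        return self.manager.do_work(ctxt, item)

class JdManager:
    """Job daemon: splits the job and relays each piece to a worker."""
    def __init__(self, jw_api):
        self.jw_api = jw_api
    def handle_job(self, ctxt, items):
        return [self.jw_api.do_work(ctxt, i) for i in items]

class JdRpcApi:
    """Client to JdManager (would be an RPC hop in reality)."""
    def __init__(self, manager):
        self.manager = manager
    def handle_job(self, ctxt, items):
        return self.manager.handle_job(ctxt, items)

api = JdRpcApi(JdManager(JwRpcApi(JwManager())))
```

So jdrpcapi is a client to jdmanager, jdmanager fans the job out through jwrpcapi, and jwmanager does the actual work.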
08:25:48 <joehuang> it's just like map-reduce
08:26:00 <joehuang> split a task into a lot of small tasks
08:26:02 <SAshish> though it is not clear.
08:26:22 <sorantis> joe, who coordinates those tasks then?
08:26:32 <joehuang> jdmanager
08:26:35 <SAshish> the jdmanager then dispatches tasks to jworkers
08:26:41 <joehuang> yes
08:27:09 <SAshish> this dispatching is not direct though.
08:27:10 <sorantis> yes, becasue these are rpc calls, not casts
08:27:38 <joehuang> for async job, could be cast
08:27:53 <sorantis> yes
08:28:17 <sorantis> I was considering the case when I want to dispatch tasks to workers, and then aggregate the results
08:28:20 <joehuang> so jd we called job daemon
08:28:37 <joehuang> yes, you need to aggregate result
08:28:47 <SAshish> yes.. and if there is any need to exchange info among workers
08:28:50 <joehuang> API is the presentation layer
08:28:52 <sorantis> so this would look like
08:29:24 <sorantis> for 1 to n: result+=self.jw_api.do_something_call()
08:29:34 <joehuang> especially aggregating results from a lot of regions and different services
08:30:34 <joehuang> no, the workers should do very simple jobs and don't need to know each other
08:31:07 <joehuang> indeed we are implementing a distributed query/synchronization system
08:31:41 <joehuang> it's a little like search engine
08:31:53 <sorantis> hadoop :)
08:31:58 <joehuang> yes
08:31:59 <SAshish> :)
08:32:08 <sorantis> ok, good
08:32:49 <joehuang> so when jd receives a job, it creates a result table for the workers to fill; after the job is finished, jd returns the table result to the API
08:34:08 <joehuang> so the aggregation will be much easier, while distributing the computing to multiple job workers
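The split-then-aggregate idea above can be sketched as below. `fetch_usage`, the region names, and the sample numbers are hypothetical stand-ins for job-worker RPCs against real regions:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_usage(region):
    # Stand-in for a job-worker RPC that queries one region's quota usage.
    sample = {'r1': {'cores': 4}, 'r2': {'cores': 6}}
    return region, sample[region]

def aggregate_usage(regions):
    # "jdmanager" role: dispatch one small job per region,
    # collect the answers into a single result table, then aggregate.
    table = {}
    with ThreadPoolExecutor() as pool:
        for region, usage in pool.map(fetch_usage, regions):
            table[region] = usage
    total_cores = sum(u['cores'] for u in table.values())
    return table, total_cores

table, total_cores = aggregate_usage(['r1', 'r2'])
```

Each worker fills its own row of the table, so the daemon only sums rows at the end rather than doing the per-region queries itself.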
08:34:11 <sorantis> joehuang, I have a question about test coverage
08:34:18 <joehuang> please
08:34:29 <joehuang> I am also thinking about testing
08:34:37 <sorantis> we will of course add unit test for the quota related part, as we add the code
08:34:47 <joehuang> that's great
08:34:49 <sorantis> what about this service architecture?
08:34:55 <sorantis> do we need to cover that too?
08:35:21 <joehuang> what do you mean by service architecture? you mean JD, JW
08:35:42 <SAshish> there can be multiple JDs, each dedicated to a specific service.
08:35:45 <sorantis> yes
08:35:54 <SAshish> service-wise job daemons?
08:36:09 <joehuang> we try to add test for all code we add
08:36:25 <sorantis> ok
08:36:29 <sorantis> that's good
08:36:52 <joehuang> you have seen that testing is lacking in the current commit
08:36:59 <joehuang> but I am working on it
08:39:02 <joehuang> hi Dimitri, could you open the commenting right (editing is not necessary) for your quota spec
08:39:42 <sorantis> I’ve done that yesterday. no?
08:39:55 <sorantis> I opened it for editing for everyone who has the link
08:40:26 <joehuang> Ok, I'll try. No need to open the editing right, in case of malicious operation
08:40:40 <sorantis> It’s open source ;)
08:41:02 <joehuang> That's fine :)
08:41:21 <sorantis> although you’re right
08:41:38 <sorantis> I’ve just changed the right to commenting only
08:41:54 <joehuang> sometimes it may just be a wrong operation with no ill intention
08:42:08 <sorantis> #info https://docs.google.com/document/d/1aYmhfxdlKVhv3j1NGrrfSXnyonfKv12jv6KURdwMMKI/edit?usp=sharing
08:42:14 <joehuang> Thanks
08:43:28 <joehuang> "BP Driver for openstack communication" is linked to the "openstack client" page; do we need a spec? I think even no spec would also work for me
08:44:40 <SAshish> will make one
08:44:45 <sorantis> yeah, I guess so
08:44:50 <joehuang> thanks.
08:45:02 <sorantis> #action ashish to make a spec
08:45:47 <SAshish> yes.. will make a short spec for this.
08:45:47 <joehuang> and I also have one suggestion: to break down the quota part into several BPs and build dependencies among them, so that we know the order of implementation
08:46:10 <sorantis> #action for the openstack client
08:47:32 <joehuang> and easy for patch review
08:48:08 <sorantis> I’ll commit the unit test execution script
08:48:19 <sorantis> this should simplify testing and verification
08:48:24 <joehuang> good
08:48:25 <sorantis> will do it today
08:48:39 <sorantis> ashish, please review the remaining commits by joe
08:48:53 <sorantis> and let’s get on to the db part
08:48:56 <SAshish> yes. will do it today
08:48:59 <joehuang> ok.
08:49:19 <sorantis> #action Dimitri to commit unit test execution script
08:49:46 <joehuang> the last patch is the integration with devstack, which will ease the incremental development
08:51:24 <joehuang> so Dimitri you or Ashish will register a BP for DAL part
08:51:26 <SAshish> Just a suggestion, will have a readme file for each submodule describing how to run/use it
08:51:43 <SAshish> whenever someone commits
08:52:07 <joehuang> let's look at the fashion of OpenStack
08:52:54 <joehuang> it's a good idea
08:53:27 <SAshish> this will save our time getting onto others' code.
08:53:42 <joehuang> doc part?
08:53:55 <joehuang> https://github.com/openstack/kingbird/tree/master/doc/source
08:55:08 <SAshish> yes.. and also just a simple readme inside the directory for a component.
08:55:27 <joehuang> OK. let's try it
08:55:30 <SAshish> Example: tools directory is there. why it is used..
08:55:49 <joehuang> #info have a readme file for each submodule describing how to run/use it
08:58:42 <sorantis> ok
08:58:48 <sorantis> shall we close the meeting?
08:58:57 <joehuang> Yes, great to have this meeting. Let's continue communication through mail-list
08:59:18 <SAshish> yes. Same here:)
08:59:18 <joehuang> Thank you all. Hope you have a good day
08:59:26 <joehuang> no scare anymore
08:59:26 <sorantis> bye!
08:59:30 <joehuang> bye
08:59:31 <SAshish> Thanks all.
08:59:33 <SAshish> Good bye
08:59:35 <joehuang> #endmeeting