08:03:45 <joehuang> #startmeeting multisite
08:03:45 <collabot> Meeting started Thu Feb 18 08:03:45 2016 UTC.  The chair is joehuang. Information about MeetBot at http://wiki.debian.org/MeetBot.
08:03:45 <collabot> Useful Commands: #action #agreed #help #info #idea #link #topic.
08:03:45 <collabot> The meeting name has been set to 'multisite'
08:03:47 <SAshish> 35 today.. will touch 47/ 48 in May
08:04:12 <joehuang> really hot, like summer everyday
08:04:21 <joehuang> #topic rollcall
08:04:26 <joehuang> #info joehuang
08:04:32 <SAshish> #info Ashish
08:04:33 <zhiyuan_> #info zhiyuan
08:04:49 <SAshish> Shall we wait for Dimitri?
08:05:23 <joehuang> let's wait for a while and discuss in parallel
08:05:33 <SAshish> sure.
08:06:36 <joehuang> I have found one configuration option to control enabling/disabling the quota-sync timer in the Engine
08:07:30 <SAshish> yes.. you are talking about TGM?
08:08:07 <joehuang> If only two or three engine nodes have the timer enabled, then the overlap for quota sync is acceptable.
08:09:54 <joehuang> yes the timer in QM
08:10:22 <SAshish> yes.. but there can be more nodes..
08:10:26 <joehuang> but it was not merged successfully, I don't know why
08:10:45 <SAshish> it is merged right
08:10:54 <SAshish> https://review.openstack.org/#/c/277489/
08:10:58 <joehuang> with more nodes, the overlap seems to be too much
08:11:05 <SAshish> this one you are talking about?
08:11:50 <joehuang> ok, it was another patch that was not merged successfully
08:12:28 <SAshish> there was some jenkins issue with this commit
08:12:42 <SAshish> time out.. we did "recheck" then it was merged
08:13:31 <SAshish> "If Keystone is temporarily unreachable, it'll impact Kingbird's initialization and lead to different behavior later; it's not a good idea to make service initialization (and later actions) depend on another service, especially since calls to another service can't be guaranteed."
08:13:46 <SAshish> Joe.. Need your clarification here.
08:14:29 <SAshish> Let us keep this aside for some time.. we were discussing the configuration to control enabling/disabling the quota-sync timer
08:14:47 <SAshish> will get back to keystone initialization..
08:15:32 <SAshish> so, is it ok to have only 2 or 3 engine nodes?
08:15:59 <joehuang> configuration for the quota sync timer is ok, but we need guidance that not too many nodes are enabled; 2 or 3 nodes are ok
08:17:10 <joehuang> In the M-L, I just described that if all nodes have the timer enabled, then the sync overlap increases as the number of nodes increases
08:17:25 <joehuang> hi, Dimitri
08:17:28 <sorantis> hi
08:17:32 <SAshish> Hi Dimitri
08:17:42 <sorantis> sorry i’m late.
08:17:48 <sorantis> got a fever
08:18:05 <joehuang> Some discussion recap: configuration for the quota sync timer is ok, but we need guidance that not too many nodes are enabled; 2 or 3 nodes are ok
08:18:23 <joehuang> sorry to hear that you got a fever
08:18:32 <SAshish> get well soon Dimitri..
08:19:01 <sorantis> thanks
08:19:23 <sorantis> this is the first case from your list, Joe, right?
08:19:24 <joehuang> if only two or three engine nodes have the timer enabled, then the overlap for quota sync is acceptable.
08:19:31 <joehuang> yes
08:19:31 <sorantis> yes
08:19:34 <sorantis> agree
08:20:15 <joehuang> ok
08:20:22 <joehuang> hi Malla, long time no see
08:20:54 <joehuang> #agree configuration for the quota sync timer is ok, but guidance is needed that not too many nodes are enabled; 2 or 3 nodes are ok
08:21:29 <joehuang> hi Dimitri, you added one more topic, please go ahead
08:21:36 <sorantis> yes
08:21:54 <sorantis> it’s about the kb API representation
08:22:07 <sorantis> we were discussing a few approaches yesterday
08:22:22 <sorantis> currently this is how the API looks
08:22:27 <SAshish> and currently we have the API structure as
08:22:35 <Malla> Hi Joe, I was busy with other activities.
08:22:47 <SAshish> curl -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN" -X POST --data "{'tenant_1': {'ram':15, 'cores':12}, 'tenant_2':{'cpu':15}}"  http://127.0.0.1:8118/v1.0/quota
08:22:51 <joehuang> Welcome back, Malla
08:22:57 <SAshish> there is a commit on gerrit
08:23:02 <SAshish> Quota Sync API
08:23:25 <sorantis> as you can see, here the tenant information is part of the payload
08:23:34 <joehuang> Shall we keep the same structure as that in Nova/Cinder/Neutron?
08:23:44 <sorantis> that was my thinking too
08:23:50 <sorantis> /v1.0/{project_id}/quota
08:23:55 <joehuang> Yes, in Nova and Cinder it's part of the URL
08:24:14 <sorantis> so can we agree on this structure?
08:24:35 <sorantis> and then payload will be something like this {
08:24:35 <sorantis> "quota_set": {
08:24:37 <sorantis> "instances": 50,
08:24:38 <sorantis> "cores": 50,
08:24:40 <sorantis> "ram": 51200,
08:24:41 <sorantis> "floating_ips": 10,
08:24:43 <sorantis> "metadata_items": 128,
08:24:44 <sorantis> "injected_files": 5,
08:24:46 <sorantis> "injected_file_content_bytes": 10240,
08:24:47 <sorantis> "injected_file_path_bytes": 255,
08:24:49 <sorantis> "security_groups": 10,
08:24:50 <sorantis> "security_group_rules": 20,
08:24:52 <sorantis> "key_pairs": 100
08:24:53 <sorantis> }
08:24:54 <sorantis> }
08:25:11 <joehuang> Nova's structure is /v2.1/{project_id}/quota_sets/{target_tenant_id}
08:25:26 <SAshish> curl -H "X-Auth-Token: <Token ID>" http://192.168.100.70:8774/v2/<Tenant ID>/servers/<Server ID>
08:25:43 <joehuang> Yes, the payload you pasted is what in Nova, Cinder
08:26:34 <sorantis> target_tenant_id what’s that?
08:27:29 <joehuang> http://developer.openstack.org/api-ref-compute-v2.1.html
08:27:38 <joehuang> search quota-sets
08:28:14 <sorantis> /v2.1/{admin_tenant_id}/os-quota-sets/{tenant_id}
08:28:27 <joehuang> the first project_id is the admin_id; actually it looks like what you pasted
08:28:28 <sorantis> ok, that’s different
08:28:56 <joehuang> the last tenant_id is the tenant the admin wants to manage
08:29:14 <zhiyuan_> and that is a PUT request
08:30:38 <joehuang> But for Neutron there is no {admin_tenant_id} part, while Cinder/Nova have it; it's a little different
08:30:51 <sorantis> yes
08:30:55 <SAshish> we will have only tenant_id
08:31:16 <sorantis> so I was thinking that maybe we should start small, and add admin tenant information if needed
08:32:03 <joehuang> In Nova/Cinder/Neutron, only Admin is allowed to update quota limit
08:32:13 <SAshish> /v1.0/{tenant_id}/quota/
08:32:33 <sorantis> it will by default be the user of kingbird
08:32:47 <joehuang> But because of the newly introduced feature called hierarchical multi-tenancy quota control
08:33:26 <joehuang> it will be different later; the admin-tenant-id could be the parent of the target tenant
08:34:18 <sorantis> i understand, but it’s not a general form for all services
08:34:19 <joehuang> So my suggestion is to keep the structure aligned with Nova/Cinder, which makes later development easier
08:34:32 <sorantis> as you know neutron and cinder have a different structure for quotas
08:34:45 <SAshish> they have quota classes..
08:34:55 <sorantis> my bad
08:35:00 <sorantis> cinder has been updated
08:35:07 <joehuang> Neutron just uses the tenant-id in the token to judge whether it's admin or not
08:35:19 <joehuang> Nova/Cinder have
08:35:24 <joehuang> quota classes.
08:35:52 <joehuang> Nova/Cinder were kept aligned
08:35:54 <SAshish> so you are suggesting we will have admin_tenant_id in the API call
08:36:01 <joehuang> yes
08:36:10 <SAshish> see, we have a token passed with it
08:36:16 <sorantis> /v2/{tenant_id}/os-quota-sets/{tenant_id}
08:36:17 <SAshish> this token is admin token
08:36:29 <SAshish> /v2/{admin_tenant_id}/os-quota-sets/{tenant_id}
08:36:31 <SAshish> like this
08:36:54 <SAshish> so is there any specific reason for this admin tenant id?
08:37:15 <joehuang> for hierarchical multi-tenancy
08:37:49 <joehuang> Reseller mode, A -> B -> C -> D. B is the parent of C, A is the Admin
08:38:02 <joehuang> A assign quota to B
08:38:11 <joehuang> B can resell quota (partly) to C
08:39:07 <sorantis> yes, but tenants are still tenants
08:39:21 <SAshish> and we need to consider super admin
08:39:30 <joehuang> yes
08:39:45 <joehuang> Tenants are still tenants
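The reseller chain described above (A assigns quota to B, B resells part of it to C) can be modelled as a toy ledger where a parent can only delegate quota it still holds. The class, names, and numbers are illustrative, not Kingbird code.

```python
class QuotaNode:
    """Toy model of hierarchical (reseller) quota delegation."""

    def __init__(self, name, cores):
        self.name = name
        self.cores = cores        # quota this tenant currently holds
        self.children = {}

    def resell(self, child_name, cores):
        # A parent may only delegate quota it still holds (partial resell).
        if cores > self.cores:
            raise ValueError("cannot resell more quota than the parent holds")
        self.cores -= cores
        child = QuotaNode(child_name, cores)
        self.children[child_name] = child
        return child

a = QuotaNode("A", cores=100)  # A is the admin
b = a.resell("B", 60)          # A assigns quota to B
c = b.resell("C", 20)          # B resells part of its quota to C
```

This is why the admin-tenant-id matters for quota management: the caller updating a tenant's limit may be its parent rather than the global admin.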
08:39:53 <sorantis> this information is not needed for syncing, but it is required to make kingbird a global quota storage
08:40:10 <joehuang> I ported code from Cinder to Tricircle, so I understand it's becoming more and more complex
08:40:16 <joehuang> for quota limit management, we need to consider it
08:40:18 <sorantis> just to keep the same structure with other services
08:40:27 <joehuang> but for sync, it's tenant by tenant
08:40:39 <sorantis> yes, exactly my point
08:40:51 <joehuang> great
08:41:50 <joehuang> so can we agree to have a structure similar to Nova/Cinder?
08:43:27 <sorantis> +1 otherwise kb won’t provide the same quota management capabilities
08:43:48 <sorantis> strange that neutron doesn’t care
08:43:48 <joehuang> Ashish and Zhiyuan?
08:43:59 <SAshish> yes.. agree
08:44:04 <joehuang> Neutron is always the last one
08:44:22 <zhiyuan_> agree
08:44:25 <joehuang> too many to tackle in Neutron
08:44:58 <sorantis> so coming back to the sync part
08:45:04 <joehuang> ok
08:45:30 <joehuang> #agree similar URL structure to Nova/Cinder for quota management
08:45:55 <sorantis> in kb we have sync as a periodic activity
08:46:09 <sorantis> it is not triggered externally
08:46:11 <joehuang> and also on demand sync
08:46:16 <sorantis> yes
08:46:20 <sorantis> coming to that
08:46:37 <sorantis> on demand sync is probably better to have on a per tenant basis
08:46:47 <joehuang> agree
08:46:57 <sorantis> /v1/{project_id}/quotas/sync
08:47:11 <sorantis> or
08:47:20 <sorantis> /v1/{project_id}/sync/quota
08:47:25 <SAshish> http://127.0.0.1:8118/v1.0/{admin_tenant_id}/quota/sync/{tenant_id}
08:47:41 <SAshish> like this?
08:48:03 <SAshish> as other API urls will have admin_tenant_id
08:48:08 <SAshish> we will have for this as well
08:48:30 <sorantis> well
08:48:35 <sorantis> don’t think so
08:48:49 <sorantis> the other API had to be compatible with other services
08:48:54 <sorantis> this one is related to sync
08:49:13 <sorantis> and it doesn’t care about parent/child tenant structure
08:49:32 <sorantis> there’s a bunch of tenants, kb just iterates over them and syncs
08:50:24 <joehuang> this one: /v1/{project_id}/quotas/sync preferred
08:50:45 <SAshish> fine. agree
08:50:59 <joehuang> version/tenant_id/resource/action
08:51:19 <sorantis> +1
08:51:29 <SAshish> +1
08:51:32 <zhiyuan_> +1
08:52:01 <sorantis> great
08:52:04 <joehuang> #agree sync  /v1/{project_id}/quotas/sync
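The agreed version/tenant_id/resource/action pattern can be sketched as a tiny URL builder; the helper name and defaults are illustrative.

```python
def action_url(tenant_id, resource="quotas", action="sync", version="v1"):
    """Build a URL following the agreed version/tenant_id/resource/action form."""
    return "/%s/%s/%s/%s" % (version, tenant_id, resource, action)

# On-demand quota sync for one tenant:
sync_path = action_url("tenant_1")   # -> /v1/tenant_1/quotas/sync
```

Note that, unlike the quota management URLs, there is no admin_tenant_id segment here: sync does not care about the parent/child tenant structure.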
08:52:11 <sorantis> on to the next topic
08:52:32 <SAshish> I have to discuss review comments from Joe
08:53:11 <sorantis> sure
08:53:17 <SAshish> "If Keystone is temporarily unreachable, it'll impact Kingbird's initialization and lead to different behavior later; it's not a good idea to make service initialization (and later actions) depend on another service, especially since calls to another service can't be guaranteed."
08:53:54 <joehuang> patch link?
08:54:01 <SAshish> https://review.openstack.org/#/c/277802/
08:56:00 <joehuang> When you want to get the service list from the Keystone client, it may fail, for example if the link is temporarily broken
08:56:13 <joehuang> then you will not get the service list
08:56:42 <SAshish> yes.. this will be called when sync job is triggered
08:56:59 <SAshish> and it needs to have that information to proceed
08:58:04 <joehuang> so whether quota is supported by Neutron or by Nova-Network could be a configuration item, to avoid the query to Keystone
08:58:36 <joehuang> And Nova-Network is deprecated; by default it should be Neutron
08:58:47 <SAshish> yes.. if it is a configuration. it can be avoided.
08:58:48 <SAshish> but
08:58:49 <joehuang> i.e, the SEG quota in Neutron
08:59:31 <SAshish> this information is only known at runtime..
09:00:01 <zhiyuan_> Initialize service_list on demand? If the initialization of service_list happens when executing the job and it fails, the job fails and can be redone.
09:00:04 <joehuang> do we need to support Nova-Network?
09:00:10 <SAshish> how can we configure that upfront, I mean which region has neutron enabled
09:00:18 <SAshish> this is also with cinder
09:00:30 <SAshish> if cinder is not present then we will not consider its resources.
09:00:43 <sorantis> I agree with ashish
09:01:00 <sorantis> we can know about availability of a certain service at runtime from endpoint list
09:01:10 <sorantis> otherwise we make an assumption
09:01:22 <sorantis> and this may not be the actual case in a particular region
09:01:27 <SAshish> that too..  region specific it has to be
09:01:59 <SAshish> and a region can be configured with neutron later at some point in time
09:02:15 <SAshish> if it has started with nova network..
09:02:42 <joehuang> If an exception happens, we need to catch and process it
09:03:04 <joehuang> for example, KeyStone is not reachable
09:03:08 <joehuang> Then the job will fail and be redone next time
09:04:34 <SAshish> yes.. exception handling should be improved for the majority of modules.
09:05:01 <joehuang> OK
09:05:02 <sorantis> that i support
09:05:03 <SAshish> both of your comments on that commit refer to the same thing
09:05:16 <SAshish> that we will do anyhow.
09:05:43 <joehuang> So add exception handling in the patch
09:06:27 <SAshish> now I have moved that code to keystone driver
09:06:43 <SAshish> it is handled there
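The handling discussed here — let the sync job fail cleanly when Keystone is unreachable and retry it on the next run — might look roughly like this. The driver interface and exception name are hypothetical stand-ins, not the actual keystoneclient API.

```python
import logging

LOG = logging.getLogger(__name__)

class KeystoneUnreachable(Exception):
    """Hypothetical stand-in for keystoneclient connection errors."""

def run_sync_job(keystone_driver):
    """Run one quota-sync pass; on Keystone failure, defer to the next run."""
    try:
        services = keystone_driver.get_enabled_services()
    except KeystoneUnreachable:
        LOG.warning("Keystone unreachable, deferring sync to the next run")
        return False           # job failed; the periodic timer will retry it
    # ... iterate over services/regions and sync quotas here ...
    return True
```

This avoids making service initialization depend on Keystone being reachable: the service list is fetched on demand inside the job, and a failed fetch only skips that run.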
09:06:54 <joehuang> Two topics left, one is VNF DR, another is setting up the multisite environment in Release C
09:07:00 <SAshish> same with the call to neutron extensions
09:07:08 <joehuang> ok
09:07:24 <SAshish> can you please review the code so far..
09:07:31 <joehuang> will do
09:07:48 <SAshish> thanks
09:07:52 <sorantis> we’re out of time
09:08:01 <sorantis> shall we discuss those in ML
09:08:05 <joehuang> Hi Dimitri, Ashish and Zhiyuan, what do you think about these two?
09:08:05 <sorantis> or next time?
09:08:25 <joehuang> ok, M-L please, and then we can conclude quickly in the meeting
09:08:47 <joehuang> please reply in M-L
09:08:58 <joehuang> thanks for your time for the meeting
09:09:08 <SAshish> #info rework Quota Sync APIs
09:09:12 <zhiyuan_> ok, will check M-L later, busy coding Tricircle these days
09:09:29 <sorantis> thanks
09:09:30 <joehuang> #endmeeting