08:03:45 #startmeeting multisite
08:03:45 Meeting started Thu Feb 18 08:03:45 2016 UTC. The chair is joehuang. Information about MeetBot at http://wiki.debian.org/MeetBot.
08:03:45 Useful Commands: #action #agreed #help #info #idea #link #topic.
08:03:45 The meeting name has been set to 'multisite'
08:03:47 35 today.. will touch 47/48 in May
08:04:12 really hot, like summer every day
08:04:21 #topic rollcall
08:04:26 #info joehuang
08:04:32 #info Ashish
08:04:33 #info zhiyuan
08:04:49 Shall we wait for Dimitri?
08:05:23 let's wait for a while and discuss in parallel
08:05:33 sure.
08:06:36 I have found one configuration option to control enabling/disabling the quota-sync timer in the Engine
08:07:30 yes.. you are talking about TGM?
08:08:07 If only two or three engine nodes are enabled with the timer, then the overlapping for quota sync is acceptable.
08:09:54 yes, the timer in QM
08:10:22 yes.. but there can be more nodes..
08:10:26 but not merged successfully, don't know why
08:10:45 it is merged, right
08:10:54 https://review.openstack.org/#/c/277489/
08:10:58 if there are more nodes, the overlapping seems to be too much
08:11:05 is this the one you are talking about?
08:11:50 ok, another patch was not merged successfully
08:12:28 there was some jenkins issue with this commit
08:12:42 time out.. we did "recheck", then it was merged
08:13:31 If Keystone is temporarily unreachable, it'll impact Kingbird's initialization and lead to different behavior of later actions; it's not a good idea to make the service initialization (and later actions) dependent on another service, especially since the call to another service can't be guaranteed.
08:13:46 Joe.. Need your clarification here.
08:14:29 Let us keep this aside for some time.. we were discussing the configuration to control enabling/disabling the quota-sync timer
08:14:47 will get back to keystone initialization..
08:15:32 so, is it ok to have only 2 or 3 engine nodes?
08:15:59 configuration of the quota sync timer is ok, but the guide needs to say that not too many nodes should be enabled; 2 or 3 nodes are ok
08:17:10 In the M-L, I just described that if all nodes are enabled with the timer, then the sync overlapping increases as the number of nodes increases
08:17:25 hi, Dimitri
08:17:28 hi
08:17:32 Hi Dimitri
08:17:42 sorry i'm late.
08:17:48 got a fever
08:18:05 Some discussion recap: configuration of the quota sync timer is ok, but the guide needs to say that not too many nodes should be enabled; 2 or 3 nodes are ok
08:18:23 sorry to hear that you got a fever
08:18:32 get well soon Dimitri..
08:19:01 thanks
08:19:23 this is the first case from your list, Joe, right?
08:19:24 if only two or three engine nodes are enabled with the timer, then the overlapping for quota sync is acceptable.
08:19:31 yes
08:19:31 yes
08:19:34 agree
08:20:15 ok
08:20:22 hi Malla, long time no see
08:20:54 #agree configuration of the quota sync timer is ok, but the guide should note that not too many nodes be enabled; 2 or 3 nodes are ok
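(A minimal sketch of how the enable/disable flag for the quota-sync timer discussed above could be registered with oslo.config; the option names and the 'scheduler' group are assumptions for illustration, not the names used in the patch under review.)

    # Sketch only: a hypothetical per-node flag for the periodic quota-sync timer.
    from oslo_config import cfg

    quota_sync_opts = [
        cfg.BoolOpt('enable_quota_sync_timer',
                    default=True,
                    help='Run the periodic quota-sync timer on this engine node. '
                         'Enable it on only 2 or 3 engine nodes so the sync '
                         'overlap stays acceptable.'),
        cfg.IntOpt('quota_sync_interval',
                   default=1800,
                   help='Seconds between periodic quota-sync runs.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(quota_sync_opts, group='scheduler')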
08:21:29 hi Dimitri, you added one more topic, please go ahead
08:21:36 yes
08:21:54 it's about the kb API representation
08:22:07 we were discussing a few approaches yesterday
08:22:22 currently this is how the api looks
08:22:27 and currently we have the API structure
08:22:29 as
08:22:35 Hi Joe, I was busy with other activities.
08:22:47 curl -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN" -X POST --data "{'tenant_1': {'ram':15, 'cores':12}, 'tenant_2':{'cpu':15}}" http://127.0.0.1:8118/v1.0/quota
08:22:51 Welcome back Malla
08:22:57 there is a commit on gerrit
08:23:02 Quota Sync API
08:23:25 as you can see, here the tenant information is part of the payload
08:23:34 Shall we keep the same structure as that in Nova/Cinder/Neutron?
08:23:44 that was my thinking too
08:23:50 /v1.0/{project_id}/quota
08:23:55 Yes, in Nova and Cinder it's part of the url
08:24:14 so can we agree on this structure?
08:24:35 and then the payload will be something like this {
08:24:35 "quota_set": {
08:24:37 "instances": 50,
08:24:38 "cores": 50,
08:24:40 "ram": 51200,
08:24:41 "floating_ips": 10,
08:24:43 "metadata_items": 128,
08:24:44 "injected_files": 5,
08:24:46 "injected_file_content_bytes": 10240,
08:24:47 "injected_file_path_bytes": 255,
08:24:49 "security_groups": 10,
08:24:50 "security_group_rules": 20,
08:24:52 "key_pairs": 100
08:24:53 }
08:24:54 }
08:25:11 Nova's structure is /v2.1/{project_id}/quota_sets/{target_tenant_id}
08:25:26 curl -H "X-Auth-Token: " http://192.168.100.70:8774/v2//servers/
08:25:43 Yes, the payload you pasted is what is in Nova and Cinder
08:26:34 target_tenant_id, what's that?
08:27:29 http://developer.openstack.org/api-ref-compute-v2.1.html
08:27:38 search quota-sets
08:28:14 /v2.1/{admin_tenant_id}/os-quota-sets/{tenant_id}
08:28:27 the first project_id is the admin_id, actually like what you pasted
08:28:28 ok, that's different
08:28:56 the last tenant_id is the one the admin wants to manage
08:29:14 and that is a PUT request
08:30:38 But for Neutron there is no {admin_tenant_id} part; Cinder/Nova have it, so it's a little different
08:30:51 yes
08:30:55 we will have only tenant_id
08:31:16 so I was thinking that maybe we should start small, and add admin tenant information if needed
08:32:03 In Nova/Cinder/Neutron, only the admin is allowed to update quota limits
08:32:13 /v1.0/{tenant_id}/quota/
08:32:33 it will by default be the user of kingbird
08:32:47 But because of the newly introduced feature called hierarchical-tenant quota control
08:33:26 it will be different later; the admin-tenant-id could be the parent of the target tenant
08:34:18 i understand, but it's not a general form for all services
08:34:19 So my suggestion is to keep the structure aligned with Nova/Cinder, which makes later development easier
08:34:32 as you know neutron and cinder have a different structure for quotas
08:34:45 they have quota classes..
08:34:55 my bad
08:35:00 cinder has been updated
08:35:07 Neutron just uses the tenant-id in the token to judge whether it's admin or not
08:35:19 Nova/Cinder have
08:35:24 quota classes.
08:35:52 Nova/Cinder were kept aligned
08:35:54 so you are suggesting we will have admin_tenant_id with the API call
08:36:01 yes
08:36:10 see, we have a token passed with it
08:36:16 /v2/{tenant_id}/os-quota-sets/{tenant_id}
08:36:17 this token is an admin token
08:36:29 /v2/{admin_tenant_id}/os-quota-sets/{tenant_id}
08:36:31 list this
08:36:35 like*
08:36:54 so is there any specific reason for this admin tenant id
08:37:15 for hierarchical multi-tenancy
08:37:49 Reseller mode, A -> B -> C -> D. B is the parent of C, A is the Admin
08:38:02 A assigns quota to B
08:38:11 B can resell quota to C
08:38:32 B can resell quota to C (partly)
08:39:07 yes, but tenants are still tenants
08:39:21 and we need to consider super admin
08:39:30 yes
08:39:45 Tenants are still tenants
08:39:53 this information is not needed for syncing, but it is required to make kingbird a global quota storage
08:40:10 I ported code from Cinder to Tricircle, so I understand it's becoming more and more complex
08:40:16 for quota limit management, we need to consider it
08:40:18 just to keep the same structure with other services
08:40:27 but for sync, it's tenant by tenant
08:40:39 yes, exactly my point
08:40:51 great
08:41:50 so can we agree to have a similar structure to Nova/Cinder?
08:43:27 +1 otherwise kb won't provide the same quota management capabilities
08:43:48 strange that neutron doesn't care
08:43:48 Ashish and Zhiyuan?
08:43:59 yes.. agree
08:44:04 Neutron is always the last one
08:44:22 agree
08:44:25 too many things to tackle in Neutron
08:44:58 so coming back to the sync part
08:45:04 ok
08:45:30 #agree similar url structure to Nova/Cinder for quota management
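(A hedged illustration of what the agreed Nova/Cinder-style quota-management call could look like; the port 8118, the path /v1.0/{admin_tenant_id}/quota/{tenant_id} and the payload keys are extrapolated from the discussion above, not a finalized Kingbird API.)

    # Sketch only: update quota limits for one tenant through an admin-scoped URL,
    # mirroring Nova's /v2.1/{admin_tenant_id}/os-quota-sets/{tenant_id} layout.
    import json
    import requests

    KB_ENDPOINT = 'http://127.0.0.1:8118/v1.0'   # assumed Kingbird API endpoint
    ADMIN_TENANT = 'admin_tenant_id'             # placeholder IDs
    TARGET_TENANT = 'tenant_id'
    TOKEN = 'ADMIN_TOKEN'

    payload = {'quota_set': {'instances': 50, 'cores': 50, 'ram': 51200}}
    resp = requests.put(
        '%s/%s/quota/%s' % (KB_ENDPOINT, ADMIN_TENANT, TARGET_TENANT),
        headers={'Content-Type': 'application/json', 'X-Auth-Token': TOKEN},
        data=json.dumps(payload))
    print(resp.status_code, resp.text)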
08:45:55 in kb we have sync as a periodic activity
08:46:09 it is not triggered externally
08:46:11 and also on-demand sync
08:46:16 yes
08:46:20 coming to that
08:46:37 on-demand sync is probably better to have on a per-tenant basis
08:46:47 agree
08:46:57 /v1/{project_id}/quotas/sync
08:47:11 or
08:47:20 /v1/{project_id}/sync/quota
08:47:25 http://127.0.0.1:8118/v1.0/{admin_tenant_id}/quota/sync/{tenant_id}
08:47:41 like this?
08:48:03 as other API urls will have admin_tenant_id
08:48:08 we will have it for this as well
08:48:30 well
08:48:35 don't think so
08:48:49 the other API had to be compatible with other services
08:48:54 this one is related to sync
08:49:13 and it doesn't care about the parent/child tenant structure
08:49:32 there's a bunch of tenants, kb just iterates over them and syncs
08:50:24 this one: /v1/{project_id}/quotas/sync preferred
08:50:45 fine. agree
08:50:59 version/tenant_id/resource/action
08:51:19 +1
08:51:29 +1
08:51:32 +1
08:52:01 great
08:52:04 #agree sync /v1/{project_id}/quotas/sync
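(For the agreed on-demand sync URL, a matching sketch; the HTTP verb and the 8118 endpoint are assumptions, only the /v1/{project_id}/quotas/sync path comes from the agreement above.)

    # Sketch only: trigger an on-demand quota sync for one tenant.
    import requests

    resp = requests.put(
        'http://127.0.0.1:8118/v1/%s/quotas/sync' % 'project_id',  # placeholder id
        headers={'X-Auth-Token': 'ADMIN_TOKEN'})
    print(resp.status_code)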
08:52:11 on to the next topic
08:52:32 I have to discuss review comments from Joe
08:53:11 sure
08:53:17 If Keystone is temporarily unreachable, it'll impact Kingbird's initialization and lead to different behavior of later actions; it's not a good idea to make the service initialization (and later actions) dependent on another service, especially since the call to another service can't be guaranteed.
08:53:54 patch link?
08:54:01 https://review.openstack.org/#/c/277802/
08:56:00 When you want to get the service-list from the KeyStone client, it may fail, for example if the link is temporarily broken
08:56:13 then you will not get the service-list
08:56:42 yes.. this will be called when a sync job is triggered
08:56:59 and it needs to have that information to proceed
08:58:04 so whether quota is supported by Neutron or by Nova-Network could be a configuration item, to avoid the inquiry to keystone
08:58:36 And Nova-Network is deprecated; by default it should be Neutron
08:58:47 yes.. if it is a configuration, it can be avoided.
08:58:48 but
08:58:49 i.e., the SEG quota in Neutron
08:59:31 this information is known at runtime..
09:00:01 Initialize service_list on demand? If the initialization of service_list happens when executing the job and it fails, the job can be redone.
09:00:04 do we need to support Nova-Network?
09:00:10 how can we configure that upfront, I mean which region has neutron enabled
09:00:18 this is also the case with cinder
09:00:30 if cinder is not present then we will not consider its resources.
09:00:43 I agree with ashish
09:01:00 we can know about the availability of a certain service at runtime from the endpoint list
09:01:10 otherwise we make an assumption
09:01:22 and this may not be the actual case in a particular region
09:01:27 that too.. it has to be region specific
09:01:59 and a region can be configured later with neutron at some point of time
09:02:15 if it has started with nova network..
09:02:42 If an exception happens, we need to catch and process the exception
09:03:04 for example, KeyStone is not reachable
09:03:08 Then the job will fail and be done next time
09:04:34 yes.. exception handling should be improved for the majority of modules.
09:05:01 OK
09:05:02 that i support
09:05:03 both of your comments on that commit refer to the same thing
09:05:16 that we will do anyhow.
09:05:43 So add exception handling in the patch
09:06:27 now I have moved that code to the keystone driver
09:06:43 it is handled there
09:06:54 Two topics left: one is VNF DR, the other is setting up a multisite environment in Release C
09:07:00 same with the call to neutron extensions
09:07:08 ok
09:07:24 can you please review the code so far..
09:07:31 will do
09:07:48 thanks
09:07:52 we're out of time
09:08:01 shall we discuss those in the ML
09:08:05 Hi Dimitri, Ashish and Zhiyuan, what do you think about these two
09:08:05 or next time?
09:08:25 ok, M-L please, and then conclude quickly in the meeting
09:08:47 please reply in the M-L
09:08:58 thanks for your time for the meeting
09:09:08 #info rework Quota Sync APIs
09:09:12 ok, will check the M-L later, busy coding Tricircle these days
09:09:29 thanks
09:09:30 #endmeeting
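(To illustrate the exception-handling direction agreed above, a minimal sketch of fetching the keystone service list with the failure caught so the periodic job can simply retry on its next run; function and variable names are hypothetical and this is not the code under review in https://review.openstack.org/#/c/277802/.)

    # Sketch only: discover at runtime which services a cloud exposes, and treat
    # an unreachable keystone as "retry later" instead of failing initialization.
    from keystoneauth1 import exceptions as ks_exc
    from keystoneauth1 import session as ks_session
    from keystoneauth1.identity import v3
    from keystoneclient.v3 import client as ks_client


    def get_enabled_services(auth_url, username, password, project_name):
        """Return keystone's service list, or None if keystone is unreachable."""
        auth = v3.Password(auth_url=auth_url, username=username,
                           password=password, project_name=project_name,
                           user_domain_id='default', project_domain_id='default')
        try:
            keystone = ks_client.Client(session=ks_session.Session(auth=auth))
            return keystone.services.list()
        except ks_exc.ClientException:
            # Keystone is temporarily unreachable: let the sync job skip this
            # run and retry next time rather than break service initialization.
            return None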