08:10:09 #startmeeting multisite
08:10:09 Meeting started Thu Dec 10 08:10:09 2015 UTC. The chair is joehuang. Information about MeetBot at http://wiki.debian.org/MeetBot.
08:10:09 Useful Commands: #action #agreed #help #info #idea #link #topic.
08:10:09 The meeting name has been set to 'multisite'
08:10:59 #info Ashish
08:11:23 hi, Ashish, I tried again and could not reproduce the tox -epy27 error in my environment for the patch https://review.openstack.org/#/c/250707/
08:11:35 #info joehuang
08:11:59 Zhiyuan, how about your tox result for the patch?
08:12:00 #info zhiyuan
08:12:07 oh.. but I encounter it every time..
08:12:22 py27 succeeds but pep8 fails ...
08:13:01 "gate-kingbird-python27 SUCCESS in 1m 53s" means the gate test succeeded in the CI infrastructure for tox -epy27
08:13:12 shock
08:13:18 pep8 shows errors starting with D, like D100, D103
08:13:41 I did not find any pep8 errors when running tox -epep8
08:14:21 ok, I will use one clean machine to do the gate test again
08:14:37 it's the first time for me to see the Dxxx errors
08:14:44 feels strange
08:15:19 Ashish, some questions on the quota control
08:15:24 yes..
08:16:23 in the doc https://docs.google.com/document/d/1aYmhfxdlKVhv3j1NGrrfSXnyonfKv12jv6KURdwMMKI/edit
08:17:00 #info Quota Manager is a Job Daemon (or part of Job Daemon) which creates job workers to perform quota synchronization
08:17:37 yes, that is my proposal.
08:18:05 will all projects' quotas be synced at the same time, or batch by batch?
08:18:45 or with a random delay, to avoid burst traffic to all regions?
08:18:47 the periodic task syncs quota for all projects at the same time..
08:19:16 if we have thousands of projects, it may generate a lot of traffic
08:20:02 and if there are a lot of regions, then the burst is not small
08:20:31 yes..
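For context on the Dxxx codes discussed above: D100 ("missing docstring in public module") and D103 ("missing docstring in public function") are pydocstyle checks reported through the flake8-docstrings plugin, not core pep8 checks, which is one plausible reason pep8 results differ between a local environment and the gate (the plugin installed in one but not the other). A hypothetical tox.ini excerpt, not the project's actual config, that would make the two environments agree:

```ini
[flake8]
# D1xx codes are emitted by the flake8-docstrings plugin (pydocstyle):
#   D100: missing docstring in public module
#   D103: missing docstring in public function
# Ignoring them keeps runs consistent even when the plugin is only
# installed in one of the environments.
ignore = D100,D103
```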
08:20:40 in nova-compute, some times, a random delay is used to avoid burst concurrency 08:21:07 for example, to update the resource usage information to the db and scheduler 08:21:10 then we pick few projects, sync quota then a randon delay pick other projects, sync quota 08:21:32 something like this? 08:21:59 yes 08:22:36 okay.. sounds reasonable. 08:23:57 yes, this is a challenge part to implement 08:24:04 hi Dimitri 08:24:07 welcome 08:24:10 hello 08:24:18 sorry I’m late 08:24:18 Hi Dimitri 08:24:45 #info dimitri 08:25:12 can you read the earlier messages ? 08:25:12 we are just talking about the quota sync, how to avoid burst concurrency 08:25:19 OR I paste it again? 08:25:38 pls 08:25:55 i cannot see the previous msgs unfortunately 08:26:02 all projects' quota will be synced at the same time or batch by batch 08:26:13 or with random delay to avoid burst traffic to all regions? 08:26:18 if we have thousands of projects, it may be a great traffic 08:26:37 then we pick few projects, sync quota, then a randon delay, pick other projects, sync quota 08:27:00 in nova-compute, some times, a random delay is used to avoid burst concurrency 08:27:01 for example, to update the resource usage information to the db and scheduler 08:27:32 sounds reasonable 08:27:45 if the number of regions and projects are a big number, the traffic needs to be took into account 08:28:23 Joe, we shall figure that out offline, Why FT failing at my place 08:28:43 yes 08:29:42 Can you use one clean machine to try the FT? use both tox -epy27 and run_test.sh 08:29:44 #info all projects' quota will be synced batch by batch 08:30:25 sure. 08:30:48 I have an update. 
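The batch-by-batch sync with a random inter-batch delay agreed above could be sketched as follows. This is only an illustration of the idea, not Kingbird code; the names (sync_project_quota, BATCH_SIZE, MAX_JITTER_SECONDS) and values are assumptions.

```python
import random
import time

BATCH_SIZE = 10          # projects synced per batch (illustrative value)
MAX_JITTER_SECONDS = 30  # random delay between batches to spread load


def sync_project_quota(project_id, regions):
    """Placeholder for the per-project sync against each region."""
    for region in regions:
        pass  # the real implementation would call each region's quota API


def sync_all_quotas(projects, regions, sleep=time.sleep):
    """Sync quotas batch by batch, with a random delay between batches,
    so thousands of projects do not hit every region at once."""
    for start in range(0, len(projects), BATCH_SIZE):
        batch = projects[start:start + BATCH_SIZE]
        for project in batch:
            sync_project_quota(project, regions)
        # jitter only between batches, not after the last one
        if start + BATCH_SIZE < len(projects):
            sleep(random.uniform(0, MAX_JITTER_SECONDS))
```

The `sleep` parameter is injected so a periodic-task framework (or a test) can substitute its own delay mechanism.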
08:31:05 any other topic to discuss?
08:31:19 last meeting we were discussing the usage of openstackclient for all OS communications
08:32:05 I have dug into it and found there are a lot of challenges with it, in terms of limitations and its use as a python binding
08:32:19 I have contacted its PTL
08:32:43 It's still in an early development phase
08:32:45 "Seems like OpenStackClient (OSC) is the wrong tool for the job. I recommend looking at the OpenStackSDK. OSC was strictly designed to be a CLI."
08:32:57 he replied this
08:32:58 not as mature as the client of each service
08:33:08 it's purely for CLI usage
08:33:17 we have openstacksdk for such python bindings
08:33:25 yes, that's right
08:33:29 understand
08:33:37 but the sdk has even las functionality
08:34:02 las -> less?
08:34:06 less
08:34:11 sorry
08:34:26 yes.. unfortunately
08:34:46 so I suggest we use the clients for the time being
08:34:59 and that the endpoint cache shall remain unchanged
08:35:06 currently the functionality we need is mainly the quota management of each service
08:35:07 but the sdk.py will change
08:35:25 yes.. now sdk will be an interface for the native clients
08:36:08 I have checked openstacksdk as well, but as Dimitri said, many things are missing there too
08:36:41 we have total control if we use native clients + we can have a working model soon
08:37:22 in tricircle we currently use the client of each service, it provides most of the features
08:37:32 so I guess we need to revisit our decision
08:37:44 agree
08:38:23 yes, so I will modify the Blueprint as well
08:38:37 shall I do it?
08:38:39 ok, it's still a good try
08:38:44 please
08:38:56 #action update the BP for the driver
08:39:24 and I asked them a question: why are there OSclient for CLI and OSSdk for library? both are generic drivers, right?
08:39:40 both could be clubbed and used as CLI and library
08:40:00 this is his answer
08:40:00 That's sort of the plan.
The SDK will always be a generic python library, whereas OSC will always be a CLI. We actually just started to use the SDK in some parts of OSC (for network support), this will be in our next release. They won't ever be part of the same project, they will have two different goals, but in the long term we want OSC to just use the SDK.
08:40:05 #action change openstackclient usage
08:42:15 any other topic?
08:42:37 shall we have a tag for kingbird on ask.openstack.org
08:42:37 no
08:42:38 ?
08:43:15 https://ask.openstack.org/en/tags/
08:43:29 don't think it's important at the moment
08:43:54 seems a little early
08:43:57 okay, just felt that yesterday when I had to ask some questions
08:44:12 agree
08:44:32 ok let's work offline, thanks, and see you next meeting
08:44:33 ok, Joe I guess we can close here
08:44:38 yes
08:44:46 #endmeeting
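The native-client approach agreed above (per-service clients instead of OpenStackClient or the SDK) might look like the sketch below for the quota-management case. The client and session setup is shown only in comments, since it needs a live cloud; the aggregation helper is pure Python. All names here (remaining_global_quota, the region layout) are illustrative assumptions, not actual Kingbird code.

```python
# Hypothetical setup using keystoneauth1 + python-novaclient, the "native
# client" path discussed above:
#
#   from keystoneauth1.identity import v3
#   from keystoneauth1 import session
#   from novaclient import client as nova_client
#
#   auth = v3.Password(auth_url=..., username=..., password=...,
#                      project_name=..., user_domain_id='default',
#                      project_domain_id='default')
#   sess = session.Session(auth=auth)
#   nova = nova_client.Client('2', session=sess, region_name='RegionOne')
#   quota = nova.quotas.get(project_id)   # per-region quota for one project


def remaining_global_quota(global_limit, per_region_usage):
    """Given a global limit for one resource (e.g. cores) and a mapping of
    region name -> usage in that region, return how much of the global
    quota is still unallocated across all regions (never negative)."""
    used = sum(per_region_usage.values())
    return max(global_limit - used, 0)
```

A quota manager would fetch per-region usage through each service's native client and then apportion the remaining global quota back to the regions.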