08:01:38 <joehuang> #startmeeting multisite
08:01:38 <collabot> Meeting started Thu Jan 12 08:01:38 2017 UTC.  The chair is joehuang. Information about MeetBot at http://wiki.debian.org/MeetBot.
08:01:38 <collabot> Useful Commands: #action #agreed #help #info #idea #link #topic.
08:01:38 <collabot> The meeting name has been set to 'multisite'
08:01:39 <kemo> Hello. Thanks for inviting me to your meeting
08:01:48 <joehuang> hello, kemo
08:01:53 <goutham_pratapa> hi kemo
08:02:30 <joehuang> Kemo, could you introduce yourself briefly?
08:02:45 <sorantis> hi all
08:02:55 <kemo> Hello, I'm Kemo from Slovenia, Europe.
08:02:55 <joehuang> hi, welcome back
08:03:08 <goutham_pratapa> hi dimitri
08:03:20 <joehuang> which company are you working for?
08:03:31 <kemo> Currently I'm preparing a solution for geo-redundancy and disaster recovery on openstack.
08:04:06 <joehuang> great
08:04:15 <kemo> I'm from the small company Psi-net and am now working for the telecom equipment vendor Iskratel
08:05:17 <joehuang> welcome, how about we follow the meeting agenda
08:07:29 <goutham_pratapa> I have been working on the kingbird client and the last commit for that is done "https://review.openstack.org/#/c/418412/"
08:08:12 <goutham_pratapa> and currently I am working on two bugs which are in the final stage, and once they are done I will update the patch for keypair syncing.
08:08:58 <goutham_pratapa> we also have to convert the tempest test cases, which are currently in the form of curl requests, to python clients.
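[Note: a minimal sketch of what such a conversion could look like, assuming a keystoneauth1 session; the Kingbird endpoint URL and quota path shown here are placeholders for illustration, not the confirmed test code of the project.]

    # Sketch only: replacing a raw curl call in a tempest test with a
    # keystoneauth1 session. The endpoint URL and the
    # /v1.0/<project-id>/os-quota-sets path are placeholder assumptions.
    from keystoneauth1 import loading, session

    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://controller:5000/v3',   # assumed keystone URL
        username='admin', password='secret',
        project_name='admin',
        user_domain_id='default', project_domain_id='default')
    sess = session.Session(auth=auth)

    # Before: shelling out to `curl -H "X-Auth-Token: $TOKEN" ...`
    # After: the session injects the token and the test asserts on the response.
    resp = sess.get('http://controller:8118/v1.0/<project-id>/os-quota-sets/defaults')
    assert resp.status_code == 200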
08:09:21 <joehuang_> hello, my link was broken
08:09:24 <joehuang_> sorry
08:10:28 <joehuang_> #topic D release plan
08:10:34 <joehuang_> #link https://wiki.opnfv.org/display/multisite/Multisite+Release+D+Planning
08:10:45 <joehuang_> David has slipped the plan a little
08:10:59 <joehuang_> for MS5 and MS6
08:11:32 <joehuang_> MS5 is Jan 27, MS6 is Feb 17
08:11:52 <joehuang_> stable branch will be created on Feb 17
08:11:53 <sorantis> that’s a bit tight
08:11:58 <joehuang_> for the OpenStack release plan
08:12:06 <sorantis> the deployment nodes have just been made available
08:12:25 <joehuang_> Ocata stable branch will be created around Jan 23~27
08:13:41 <joehuang_> and released on Feb 20
08:13:48 <joehuang_> #link https://releases.openstack.org/ocata/schedule.html
08:14:02 <joehuang_> the release date is approaching
08:14:42 <joehuang_> the nodes just became available for deployment, it's quite a challenge
08:16:50 <joehuang_> features should be frozen before Feb 17, i.e. before the Ocata release and the D release stable branch
08:17:20 <joehuang_> what's your proposal for the feature freeze date?
08:18:28 <sorantis> we stick to the dates. I don’t have any other proposal. I will resume working on the build scripts
08:19:20 <joehuang> great
08:19:21 <goutham_pratapa> somewhere at the end of Jan or the first week of Feb, with keypair syncing and conversion of tempest to python clients.
08:20:34 <joehuang> #info stick to the release plan of OPNFV D release and Ocata release. Feature freeze Feb 16, branch created on Feb 17
08:21:47 <joehuang> we can discuss kingbird feature development after the second topic
08:21:52 <joehuang> #topic VNF Geo site disaster recovery
08:22:17 <joehuang> hello, kemo, you would like to discuss this topic, please
08:23:11 <kemo> Yes, I'm currently trying to make your third scenario with Ceph as a backend for Cinder storage
08:23:45 <joehuang> in one openstack or two openstacks?
08:24:12 <joehuang> are there independent openstacks in the different sites?
08:24:21 <kemo> With two openstacks.
08:24:32 <joehuang> and two cephs
08:24:53 <kemo> Yes, each openstack is connected to its own ceph cluster
08:25:08 <kemo> And rbd mirroring is used for data replication.
08:26:02 <kemo> Certainly openstack lacks some functionality.
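[Note: a rough sketch of the per-pool RBD mirroring setup kemo describes, driving the standard rbd CLI from Python; the pool name "volumes", cluster name "site2" and client "client.site2" are placeholder assumptions that would need to match the actual deployment.]

    # Sketch only: enable RBD mirroring on the Cinder volumes pool of each
    # Ceph cluster and register the remote cluster as a peer.
    import subprocess

    def run(cmd):
        print('+', ' '.join(cmd))
        subprocess.run(cmd, check=True)

    # On the primary site (repeat on the secondary with the names swapped):
    run(['rbd', 'mirror', 'pool', 'enable', 'volumes', 'pool'])
    run(['rbd', 'mirror', 'pool', 'peer', 'add', 'volumes', 'client.site2@site2'])

    # Images need the exclusive-lock and journaling features enabled; the
    # rbd-mirror daemon then replays the journal from the peer cluster,
    # keeping the secondary copies of the Cinder volumes up to date.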
08:26:15 <joehuang> have you tried to make the openstack database work in master/slave mode
08:27:02 <joehuang> i.e. in site1 the database is the master, and it replicates data to the database in site2 asynchronously
08:27:23 <joehuang> the database in site2 works as the slave
08:27:53 <joehuang> so the volume data in site2 will be kept the same as that in site1
08:28:16 <joehuang> but the site2 openstack should be working in standby mode
08:28:17 <kemo> No, I only copy the same data from the cinder database from the primary to the secondary site by hand
08:28:30 <joehuang> ok
08:29:03 <kemo> I'd like the secondary site to run in a normal active state
08:30:17 <joehuang> if you want the secondary site to run in active state, you may have to use a Galera-cluster-like multi-master database cluster
08:30:55 <joehuang> multi-write is allowed
08:31:41 <joehuang> unlike the keystone database, where the write frequency is not so high
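[Note: a minimal sketch of the master/slave replication idea joehuang describes, using plain MySQL asynchronous replication driven through PyMySQL; host names, credentials and the binlog position are placeholders, and a real deployment would more likely use Galera or another tested replication setup.]

    # Sketch only: point the site2 database at site1 as an asynchronous
    # replication master, so cinder's records (volumes, volume types, ...)
    # also arrive at the standby site. All connection details are placeholders.
    import pymysql

    slave = pymysql.connect(host='db.site2.example.com', user='root',
                            password='secret')
    with slave.cursor() as cur:
        cur.execute("STOP SLAVE")   # no-op warning on a fresh server
        cur.execute(
            "CHANGE MASTER TO "
            "MASTER_HOST='db.site1.example.com', "
            "MASTER_USER='repl', MASTER_PASSWORD='replpass', "
            "MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4")
        cur.execute("START SLAVE")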
08:31:53 <kemo> Just replicating data is not enough, because some IDs should be changed in this data (UserId, ProjectId, CinderTypeID)
08:32:32 <joehuang> assuming that you have the same user management data
08:32:59 <joehuang> CinderTypeID do you mean volume type id?
08:33:28 <kemo> The second problem is switchover. I made it via Ceph and then attached the volumes on the secondary side.
08:33:47 <kemo> Yes I mean volume type id.
08:34:10 <joehuang> if you have the database replicated, then the volume type id should be kept the same
08:35:18 <joehuang> attach volume? Is the VM running in the site2?
08:36:21 <joehuang> some documents in the multisite repo need to be updated
08:37:09 <kemo> At first the VM is running on the primary site, and after switchover the same VM is started on the secondary site, attaching the replicated volume.
08:38:07 <joehuang> so the secondary site is not in service state, if the VM is only started after the switchover
08:38:36 <joehuang> service state -> active state
08:39:10 <kemo> The secondary site is in service state for the other, non-geo VMs.
08:39:24 <joehuang> ok, I got it
08:40:01 <joehuang> I suggest you divide them into different pools
08:40:54 <joehuang> geo VMs in one pool, other VMs which are providing services in another pool
08:41:09 <kemo> In your document GR software is mentioned as a module which will take care of those things. What is this GR module, have you already designed it?
08:41:31 <joehuang> this is not open source software
08:42:05 <joehuang> I am not sure whether there is open source software that will take care of DR in OpenStack
08:43:29 <kemo> If it's not open source, can you just tell me something about it?
08:43:30 <joehuang> #info multisite requirements documentation update is needed for D release
08:45:47 <joehuang> sorry, I think you can search the publicly available DR-related information
08:47:14 <joehuang> so next topic
08:47:16 <kemo> OK, I got it. Thanks for your recommendations.
08:47:29 <joehuang> #topic kingbird feature development
08:48:10 <joehuang> hello, Goutham, could you please share some info on feature development
08:48:17 <goutham_pratapa> yes
08:48:23 <goutham_pratapa> Kb_cli is done.
08:48:42 <goutham_pratapa> last commit for the kb_cli is in review..
08:49:38 <goutham_pratapa> conversion of tempest_cases to python_clients has to be done.
08:50:00 <joehuang> in OpenStack, the client is released more frequently
08:50:03 <goutham_pratapa> currently I have migrated the curl request to python code for only one command.
08:50:51 <goutham_pratapa> yes.
08:52:34 <joehuang> ok, and it'll also serve as the API SDK for other software, we can examine it in tempest first
08:52:43 <goutham_pratapa> I will update patch for the two bugs which are in  review in a couple of hours
08:53:30 <joehuang> thank you very much
08:53:45 <goutham_pratapa> and keypair syncing feature and its respective test case will also be updated soon.
08:54:17 <joehuang> #info migrate curl to use client SDK
08:55:45 <goutham_pratapa> and I am sorry for the repeated patches. It will not happen again. As Dimitri suggested, I will create a branch for my commits and try to keep the number of patches as small as possible.
08:56:35 <joehuang> when you modify code, you should usually create your own branch
08:57:22 <joehuang> ok, other topics?
08:57:26 <goutham_pratapa> ok.
08:57:47 <goutham_pratapa> regarding the async approach to be followed
08:57:56 <goutham_pratapa> we can discuss that..
08:58:07 <joehuang> yes, Dimitri is back
08:58:33 <joehuang> and as I also said, for short-duration tasks the sync way is acceptable
08:59:09 <goutham_pratapa> yes.
08:59:49 <joehuang> the returned info will be more friendly if it shows whether each region was successful or not
08:59:56 <goutham_pratapa> For the async process I propose an output like this: https://hastebin.com/vujevizike.rb
09:01:23 <sorantis> perhaps I’m confusing something but
09:01:25 <sorantis> $ kingbird sync start --force --type keypair --regions RegionTwo,RegionThree --resources keypair_1,keypair_2,Keypair_3
09:01:32 <sorantis> where is the source region?
09:02:00 <goutham_pratapa> I am sorry, I missed it; I just wanted to show an example
09:02:05 <sorantis> ok
09:02:23 <sorantis> so “start” is the async trigger?
09:02:37 <goutham_pratapa> yes..
09:02:54 <joehuang> "start" is not needed
09:02:58 <sorantis> +1
09:03:37 <goutham_pratapa> and with the generated uuid the user can view the sync status, just like heat stack-create and heat resource-list <uuid>
09:04:08 <sorantis> like in all other cases, operation triggering is async
09:04:12 <joehuang> understood, it's only for long-duration tasks, to provide a better user experience
09:04:20 <sorantis> you poll for status
09:04:32 <sorantis> I think we shouldn’t make it complicated
09:04:51 <sorantis> I suggest async by default
09:05:13 <sorantis> then poll for status
09:05:20 <sorantis> if status is required
09:06:08 <joehuang> if all tasks are processed in an async way, it will make quota/keypair sync more complicated
09:06:26 <joehuang> the current sync processing is good for these short-duration tasks
09:07:25 <sorantis> the actual syncing can be synchronous
09:07:33 <sorantis> but the client should not wait
09:09:01 <joehuang> I am afraid about code stability in the Ocata/D release; there is too little time to change the quota/keypair sync processing
09:09:50 <sorantis> I need to run now
09:09:57 <sorantis> the meeting is overdue
09:09:58 <joehuang> yes, time is up
09:10:28 <joehuang> so the design should be discussed, but not implemented in this release
09:10:50 <joehuang> thank you for attending the meeting, let's continue the discussion offline
09:10:55 <joehuang> #endmeeting