08:00:55 <joehuang> #startmeeting multisite
08:00:55 <collabot`> Meeting started Thu Mar 23 08:00:55 2017 UTC.  The chair is joehuang. Information about MeetBot at http://wiki.debian.org/MeetBot.
08:00:55 <collabot`> Useful Commands: #action #agreed #help #info #idea #link #topic.
08:00:55 <collabot`> The meeting name has been set to 'multisite'
08:01:34 <joehuang> hi goutham
08:01:58 <sorantis> hi
08:02:00 <May-meimei> hi
08:02:15 <joehuang> nice to meet you
08:02:22 <May-meimei> me too
08:02:47 <fuqiao> hi,joe. I am able to make it~~
08:02:58 <joehuang> welcome, fuqiao :)
08:03:04 <joehuang> thank you very much
08:03:07 <fuqiao> The other meeting is over early
08:03:28 <joehuang> #topic CMCC multi-site requirements
08:03:57 <joehuang> hi, fuqiao, thank you to introduce the multisite requirements from CMCC
08:04:01 <fuqiao> Ok, thanks. Joe
08:04:07 <joehuang> now it's your turn :)
08:04:51 <fuqiao> Ok. We are currently working on the architecture of our nfv deployment
08:05:06 <fuqiao> We plan a 3 layer arch
08:05:28 <joehuang> #info 3 layer NFV deployment architecture
08:05:32 <fuqiao> For each of the DC, we call it as TIC
08:06:05 <fuqiao> the top layer is what we call the core tic
08:06:26 <fuqiao> And following are tow layer of regional and access tic
08:06:55 <fuqiao> The regional tic is located in cities, and access is in counties
08:07:23 <fuqiao> We are planning a multi site scenario for regional tic and access tic
08:08:39 <fuqiao> For each regional tic located in a city, there will be approximately 6 access tic located in the counties
08:09:02 <fuqiao> These 7tics is planned to construct a multi site scenario
08:09:53 <fuqiao> We are still not decided yet what exactly is the deployment, but there could be two solutions
08:12:21 <fuqiao> Since the vnfs in the regional tic provides services for larger number, we need to provide HA for them in multisite
08:12:44 <fuqiao> I mean disaster tolerant
08:12:50 <joehuang> understand
08:13:32 <joehuang> more info about two solutions?
08:13:38 <fuqiao> One solution is to have an access tic to work together with the regional one and be the disaster tolerant site
08:15:58 <fuqiao> The other is to make these 6 access tic as regions for the openstack in the regional tic, in which seperate nova should be deployed, however other openstack services are shared
08:16:42 <fuqiao> For solution2 we actually are thinking of reduce the human resource to maintain the access tic
08:17:39 <fuqiao> So we want to make it as simple as possible, but we still need one of the access tic to be the disaster tolerant site for the regional tic
08:18:56 <fuqiao> For now we a thinking,at least a keystone is needed in one of the access tic, so as to make sure the availability of the authentication service in these 7tics
08:19:42 <fuqiao> We haven't dig more into this architecture yet, and may also like to hear your suggestions on this
08:20:41 <fuqiao> I don't know if I make this clear, I think a few more pic may help me explain our need more precisely.
08:21:09 <fuqiao> But I am afraid we can't post pic in irc
08:21:12 <joehuang> do you plane to use same identity or 7tics
08:21:24 <joehuang> /s/plane/plan
08:21:42 <joehuang> do you plan to use same identity for 7tics
08:22:00 <fuqiao> What exactly do you mean by identity
08:22:01 <joehuang> and how many OpenStack instances would you like to deploy
08:22:27 <fuqiao> We plan to have the same group of people to maintain the 7tics
08:22:33 <fuqiao> If that is what you ask
08:22:40 <joehuang> or say same keystone
08:22:48 <fuqiao> Yes
08:23:02 <joehuang> understand
08:23:57 <fuqiao> For the instance, in regional tic, 100 more servers, and 50 for each access tic
08:23:58 <joehuang> one OpenStack instance for 7tic or each tic has one OpenStack
08:25:36 <fuqiao> The first solution is each tic with one openstack
08:26:17 <fuqiao> The second one, we think we don't need whole openstack for the access tic, only nova deployed in each tic
08:26:41 <fuqiao> And the seven tics share the common services in the regional tic
08:27:46 <joehuang> for the second one, each tic will have RPC interface with regional OpenStack except Nova
08:27:51 <fuqiao> We currently are more likely to choose second one
08:28:04 <fuqiao> Yes
08:28:27 <fuqiao> We hope the second one can help reduce the human resource in the access tic
08:29:02 <fuqiao> Since it is actually impossible for us to have an engineer on openstack working in the small counties...
08:29:08 <joehuang> you can have same group person to maintain 7 tic OpenStack
08:29:27 <joehuang> no need, you can remotely manage the openstack
08:29:45 <joehuang> for the second option
08:29:54 <fuqiao> Yes, remotely
08:29:59 <joehuang> may be the major concern is openstack upgrading
08:30:07 <fuqiao> Yes
08:30:16 <joehuang> because it's RPC interface
08:30:33 <fuqiao> The access tic is about 3000. It will be huge work for us
08:31:00 <fuqiao> I don't quite understand about roc
08:31:05 <fuqiao> RPC
08:31:08 <joehuang> multiple different version of RPC has to co-work during the upgrading period
08:31:19 <fuqiao> what is the problem of this interface
08:31:27 <fuqiao> Ok
08:31:37 <joehuang> RPC interface will be changed almost in each version
08:31:52 <joehuang> it's not so stable like OpenStack restful API
08:32:26 <fuqiao> Why there could be multi versions, what if we have same version in each access tic, will this help?
08:32:33 <joehuang> so if all tic share same regional neutron/cinder/
08:33:02 <joehuang> then dealing with each site tic's upgrading needs to be done very carefully
08:33:09 <fuqiao> We always have the same version of openstack in the access tic, is this possible to avoid
08:33:34 <joehuang> you have to upgrade 7 tic at the same time, for example, same day
08:33:42 <fuqiao> Oh got it
08:33:45 <joehuang> is it reasonable?
08:33:49 <fuqiao> It could be a problem
08:34:02 <joehuang> not all service work very well in rolling upgrade
08:34:07 <fuqiao> Ok
08:34:15 <joehuang> have to check the rolling uprade maturity
08:34:27 <joehuang> and sometimes it may be broken
08:34:34 <fuqiao> Got it
08:34:49 <joehuang> if you have each seperate openstack instance in each tic
08:35:06 <joehuang> then you can upgrade one tic without impact other TICs
08:35:19 <fuqiao> How about disaster tolerant of regional tic in these two solution. Any suggestions?
08:35:35 <joehuang> yes, I think it's needed
08:36:17 <fuqiao> Are there any gaps?
08:36:57 <joehuang> need to know more detail of disaster tolerance
08:37:25 <joehuang> for example, you want VNFV disaster tolerance or openstack services tolerance
08:37:40 <fuqiao> Both
08:38:11 <joehuang> no matter solution 1 or solution 2
08:38:26 <fuqiao> Yes
08:38:31 <joehuang> you at least one regional OpenStack instance and one tic OpenStack instance
08:39:39 <fuqiao> Sorry, didn't get it
08:41:08 <joehuang> you want both, VNF and OpenStack tolernace
08:41:52 <fuqiao> Yes
08:41:53 <joehuang> if you only need VNF tolerance, no need to deploy multiple OpenStack
08:41:58 <fuqiao> Ok
08:42:17 <fuqiao> Got it
08:42:21 <joehuang> but you also want the OpenStack serive disater recovery
08:42:45 <joehuang> then you have to deploy two OpenStack instances at least
08:43:56 <joehuang> so VNF in access TIC, no need to do disaster torelance, isn't it?
08:44:02 <fuqiao> Yes, is it possible to use one of the access tic as the tolerance, when normally the access tic openstack support services in its own tic, and will take over the services in the regional tic once disaster happens
08:45:11 <joehuang> good idea, need to do design in detail
08:45:48 <fuqiao> Yes, would like to rear your suggestion if such design is possible
08:47:20 <joehuang> for regional Tic to take over the services from the regional OpenStack, how to deal with the compute nodes in the disater site? or just all API entrance will be forwarded to the backup site
08:48:15 <joehuang> and the capacity is also one factor to take into consideration, what kind of VNF should be backed up in the backup site
08:48:15 <fuqiao> You mean solution 2?
08:48:39 <joehuang> both
08:49:05 <fuqiao> For now mostly data plane vnfs running in regionals
08:49:18 <joehuang> for solution 2 other services will use shared service in regional tic
08:49:58 <joehuang> we have 10 minutes left
08:50:22 <joehuang> fuqiao, we may continue discussion in next weekly meeting
08:50:35 <fuqiao> Sure
08:51:11 <fuqiao> I can work out some pic and forward to the mailing list,
08:51:15 <joehuang> for D release, need TSC to vote the release date, it'll not be Mar.27 as planned
08:51:23 <joehuang> to Fuqiao, that's great!
08:51:29 <fuqiao> Hope it will help better understanding the needs
08:51:39 <joehuang> +1
08:51:45 <fuqiao> Ok
08:51:59 <joehuang> thank you fuqiao
08:52:07 <joehuang> #topic functest issue
08:52:18 <joehuang> hello, dimitri?
08:53:10 <joehuang> goutham and meimei?
08:53:15 <pratapagoutham> hi
08:53:18 <joehuang> hi
08:53:21 <pratapagoutham> dimitri will be back
08:53:30 <pratapagoutham> 2 minutes
08:53:32 <sorantis> yes
08:53:52 <joehuang> hello, how about functest issue after the firewall issue addressed
08:54:19 <sorantis> after speaking with Goutham turns out that the domain name is spelled incorrectly in tempest.conf
08:54:30 <sorantis> instead of Default it should say defailt
08:54:34 <sorantis> default*
08:54:54 <joehuang> ok, just the configuration issue
08:55:00 <sorantis> we’ve tested locally and the tests pass
08:55:08 <sorantis> with Default they fail with the same error
08:55:08 <joehuang> that's great news!
08:55:24 <joehuang> #info functest pass locally
08:55:50 <joehuang> looking forward to the patch to fix the last mile
08:56:48 <joehuang> and just said that the D release date will be voted in next TSC meeting
08:57:07 <joehuang> and it'll not be Mar.27 which was planned before
08:57:21 <pratapagoutham> yea i have seen the mail
08:57:25 <pratapagoutham> from david
08:57:29 <joehuang> what's exact date is not clear yet
08:57:35 <joehuang> thank you
08:57:38 <pratapagoutham> its mar 28
08:57:42 <pratapagoutham> i beleive
08:57:55 <joehuang> just one day postponed?
08:57:59 <pratapagoutham> yes
08:58:07 <pratapagoutham> as far as i remember
08:58:31 <joehuang> then the functest is expected to restore normal before that date :)
08:59:02 <sorantis> this will be my last contribution in the project
08:59:22 <joehuang> ok, thank you very much for the contribution!
08:59:59 <joehuang> other topics?
09:00:12 <pratapagoutham> i have nothing to add :)
09:00:48 <joehuang> thank you for attending the meeting
09:00:52 <joehuang> bye
09:00:56 <joehuang> #endmeeting