08:00:55 #startmeeting multisite
08:00:55 Meeting started Thu Mar 23 08:00:55 2017 UTC. The chair is joehuang. Information about MeetBot at http://wiki.debian.org/MeetBot.
08:00:55 Useful Commands: #action #agreed #help #info #idea #link #topic.
08:00:55 The meeting name has been set to 'multisite'
08:01:34 hi goutham
08:01:58 hi
08:02:00 hi
08:02:15 nice to meet you
08:02:22 me too
08:02:47 hi, joe. I am able to make it~~
08:02:58 welcome, fuqiao :)
08:03:04 thank you very much
08:03:07 The other meeting ended early
08:03:28 #topic CMCC multi-site requirements
08:03:57 hi fuqiao, thank you for introducing the multisite requirements from CMCC
08:04:01 Ok, thanks, Joe
08:04:07 now it's your turn :)
08:04:51 Ok. We are currently working on the architecture of our NFV deployment
08:05:06 We plan a 3-layer architecture
08:05:28 #info 3-layer NFV deployment architecture
08:05:32 We call each of the DCs a TIC
08:05:48 TELECOM INTEGRATED CLOUD
08:06:05 the top layer is what we call the core TIC
08:06:26 And below it are two layers of regional and access TICs
08:06:55 The regional TICs are located in cities, and the access TICs in counties
08:07:23 We are planning a multi-site scenario for the regional and access TICs
08:08:39 For each regional TIC located in a city, there will be approximately 6 access TICs located in the counties
08:09:02 These 7 TICs are planned to form a multi-site scenario
08:09:53 We have not decided yet exactly what the deployment will be, but there could be two solutions
08:12:21 Since the VNFs in the regional TIC provide services for a larger number of users, we need to provide HA for them in multisite
08:12:44 I mean disaster tolerance
08:12:50 understand
08:13:32 more info about the two solutions?
08:13:38 One solution is to have an access TIC work together with the regional one and be the disaster-tolerant site
08:15:58 The other is to make these 6 access TICs regions of the OpenStack in the regional TIC, in which a separate Nova is deployed, while the other OpenStack services are shared
08:16:42 For solution 2 we are actually thinking of reducing the human resources needed to maintain the access TICs
08:17:39 So we want to make it as simple as possible, but we still need one of the access TICs to be the disaster-tolerant site for the regional TIC
08:18:56 For now we are thinking at least a Keystone is needed in one of the access TICs, so as to ensure the availability of the authentication service across these 7 TICs
08:19:42 We haven't dug deeper into this architecture yet, and would also like to hear your suggestions on this
08:20:41 I don't know if I made this clear; I think a few more pictures may help me explain our need more precisely.
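As background to solution 2, the sketch below shows roughly what nova.conf on an access TIC could look like when Keystone, Glance, Neutron and Cinder stay in the regional TIC and only Nova runs locally. This is only an illustration of the idea discussed above; the host names, region name and credentials are placeholders, not part of CMCC's actual plan.

    # nova.conf on an access TIC -- illustrative sketch only
    [DEFAULT]
    # local message queue for the access TIC's own Nova services
    transport_url = rabbit://nova:RABBIT_PASS@access-tic1-mq:5672/

    [keystone_authtoken]
    # authentication is served by the shared Keystone in the regional TIC
    auth_url = http://regional-tic-keystone:5000/v3
    username = nova
    password = NOVA_PASS
    project_name = service
    user_domain_name = Default
    project_domain_name = Default

    [glance]
    # images come from the shared Glance in the regional TIC
    api_servers = http://regional-tic-glance:9292

    [neutron]
    # networking is the shared Neutron in the regional TIC
    url = http://regional-tic-neutron:9696
    region_name = RegionalTIC

    [cinder]
    # block storage volumes are looked up in the regional TIC's region
    os_region_name = RegionalTIC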
08:21:09 But I am afraid we can't post pictures in IRC
08:21:12 do you plane to use same identity or 7tics
08:21:24 /s/plane/plan
08:21:42 do you plan to use the same identity for the 7 TICs
08:22:00 What exactly do you mean by identity
08:22:01 and how many OpenStack instances would you like to deploy
08:22:27 We plan to have the same group of people maintain the 7 TICs
08:22:33 If that is what you ask
08:22:40 or say the same Keystone
08:22:48 Yes
08:23:02 understand
08:23:57 For the instances: in the regional TIC, 100 or more servers, and 50 for each access TIC
08:23:58 one OpenStack instance for the 7 TICs, or one OpenStack per TIC?
08:25:36 The first solution is each TIC with its own OpenStack
08:26:17 In the second one, we think we don't need a whole OpenStack in the access TICs, only Nova deployed in each TIC
08:26:41 And the seven TICs share the common services in the regional TIC
08:27:46 for the second one, each TIC will have an RPC interface with the regional OpenStack for everything except Nova
08:27:51 We currently are more likely to choose the second one
08:28:04 Yes
08:28:27 We hope the second one can help reduce the human resources needed in the access TICs
08:29:02 Since it is actually impossible for us to have an OpenStack engineer working in the small counties...
08:29:08 you can have the same group of people maintain the 7 TIC OpenStacks
08:29:27 no need, you can remotely manage the OpenStack
08:29:45 for the second option
08:29:54 Yes, remotely
08:29:59 maybe the major concern is OpenStack upgrading
08:30:07 Yes
08:30:16 because it's an RPC interface
08:30:33 The access TICs number about 3000. It will be huge work for us
08:31:00 I don't quite understand about roc
08:31:05 RPC
08:31:08 multiple different versions of RPC have to co-work during the upgrade period
08:31:19 what is the problem with this interface
08:31:27 Ok
08:31:37 the RPC interface changes in almost every version
08:31:52 it's not as stable as the OpenStack RESTful API
08:32:26 Why could there be multiple versions? What if we have the same version in each access TIC, will this help?
08:32:33 so if all TICs share the same regional neutron/cinder/
08:33:02 then each TIC's upgrade needs to be done very carefully
08:33:09 If we always have the same version of OpenStack in the access TICs, is it possible to avoid this?
08:33:34 you would have to upgrade the 7 TICs at the same time, for example, on the same day
08:33:42 Oh got it
08:33:45 is that reasonable?
08:33:49 It could be a problem
08:34:02 not all services work very well in rolling upgrade
08:34:07 Ok
08:34:15 have to check the rolling upgrade maturity
08:34:27 and sometimes it may be broken
08:34:34 Got it
08:34:49 if you have a separate OpenStack instance in each TIC
08:35:06 then you can upgrade one TIC without impacting the other TICs
08:35:19 How about disaster tolerance of the regional TIC in these two solutions? Any suggestions?
08:35:35 yes, I think it's needed
08:36:17 Are there any gaps?
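On the RPC-versioning concern raised above: Nova does let an operator pin the compute RPC interface during a rolling upgrade via the [upgrade_levels] section of nova.conf; whether that is sufficient for the shared-service layout of solution 2 would still need to be checked per release and per service. A minimal illustration:

    # nova.conf -- minimal illustration of RPC version pinning during
    # a rolling upgrade; not a complete upgrade procedure
    [upgrade_levels]
    # "auto" makes nova-conductor compute the lowest RPC version that all
    # running nova-compute services report, so old and new services can
    # interoperate until every node has been upgraded
    compute = auto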
08:36:57 need to know more detail of the disaster tolerance
08:37:25 for example, do you want VNF disaster tolerance or OpenStack service tolerance
08:37:40 Both
08:38:11 no matter solution 1 or solution 2
08:38:26 Yes
08:38:31 you need at least one regional OpenStack instance and one TIC OpenStack instance
08:39:39 Sorry, didn't get it
08:41:08 you want both VNF and OpenStack tolerance
08:41:52 Yes
08:41:53 if you only need VNF tolerance, there is no need to deploy multiple OpenStacks
08:41:58 Ok
08:42:17 Got it
08:42:21 but you also want OpenStack service disaster recovery
08:42:45 then you have to deploy at least two OpenStack instances
08:43:56 so for VNFs in the access TICs, there is no need to do disaster tolerance, is there?
08:44:02 Yes. Is it possible to use one of the access TICs as the tolerance site, where normally the access TIC OpenStack supports services in its own TIC, and it will take over the services of the regional TIC once a disaster happens?
08:45:11 good idea, need to do the design in detail
08:45:48 Yes, would like to hear your suggestion on whether such a design is possible
08:47:20 for the access TIC to take over the services from the regional OpenStack, how to deal with the compute nodes in the disaster site? or will all API entrances just be forwarded to the backup site
08:48:15 and the capacity is also one factor to take into consideration, i.e. what kind of VNF should be backed up in the backup site
08:48:15 You mean solution 2?
08:48:39 both
08:49:05 For now, mostly data-plane VNFs run in the regional TICs
08:49:18 for solution 2, other services will use the shared services in the regional TIC
08:49:58 we have 10 minutes left
08:50:22 fuqiao, we may continue the discussion in the next weekly meeting
08:50:35 Sure
08:51:11 I can work out some pictures and forward them to the mailing list
08:51:15 for the D release, the TSC needs to vote on the release date; it'll not be Mar. 27 as planned
08:51:23 to Fuqiao: that's great!
08:51:29 Hope it will help with a better understanding of the needs
08:51:39 +1
08:51:45 Ok
08:51:59 thank you fuqiao
08:52:07 #topic functest issue
08:52:18 hello, dimitri?
08:53:10 goutham and meimei?
08:53:15 hi
08:53:18 hi
08:53:21 dimitri will be back
08:53:30 2 minutes
08:53:32 yes
08:53:52 hello, how about the functest issue after the firewall issue was addressed
08:54:19 after speaking with Goutham it turns out that the domain name is spelled incorrectly in tempest.conf
08:54:30 instead of Default it should say defailt
08:54:34 default*
08:54:54 ok, just a configuration issue
08:55:00 we've tested locally and the tests pass
08:55:08 with Default they fail with the same error
08:55:08 that's great news!
08:55:24 #info functest passes locally
08:55:50 looking forward to the patch to fix the last mile
08:56:48 and as just mentioned, the D release date will be voted on in the next TSC meeting
08:57:07 and it'll not be Mar. 27, which was planned before
08:57:21 yeah, I have seen the mail
08:57:25 from david
08:57:29 the exact date is not clear yet
08:57:35 thank you
08:57:38 it's Mar. 28
08:57:42 I believe
08:57:55 just postponed by one day?
08:57:59 yes
08:58:07 as far as I remember
08:58:31 then functest is expected to return to normal before that date :)
08:59:02 this will be my last contribution to the project
08:59:22 ok, thank you very much for the contribution!
08:59:59 other topics?
09:00:12 I have nothing to add :)
09:00:48 thank you for attending the meeting
09:00:52 bye
09:00:56 #endmeeting
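A note on the tempest.conf fix discussed in the functest topic: Keystone's built-in default domain normally has the id "default" but the name "Default", which is a common source of exactly this kind of case mismatch. The log does not say which option was changed, so the snippet below is only a guess at the relevant section; the host name and credentials are placeholders.

    # tempest.conf -- illustrative guess at the domain-related settings;
    # the exact option that had to be changed is not stated in the log
    [auth]
    admin_username = admin
    admin_project_name = admin
    admin_password = ADMIN_PASS
    # value changed from "Default" to "default" per the discussion above
    admin_domain_name = default

    [identity]
    uri_v3 = http://controller-keystone:5000/v3
    auth_version = v3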