13:04:44 #startmeeting Dovetail-2016-0701
13:04:44 Meeting started Fri Jul 1 13:04:44 2016 UTC. The chair is bin_. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:04:44 Useful Commands: #action #agreed #help #info #idea #link #topic.
13:04:44 The meeting name has been set to 'dovetail_2016_0701'
13:05:21 #info Chris Price
13:05:42 #info Bin Hu
13:05:52 #info Hongbo Tian
13:06:00 bin can you #chair chrispriceab
13:08:36 #chair chrispriceab
13:08:36 Current chairs: bin_ chrispriceab
13:09:10 #topic Dovetail framework
13:09:33 #info Hongbo presented a PowerPoint slide reflecting his thoughts for the Dovetail project
13:12:03 #link https://wiki.opnfv.org/display/dovetail dovetail project description
13:14:41 #info Hongbo outlines the need to address hardware in the Dovetail specification.
13:15:24 #info There was a consensus on the call that this would leverage the Pharos specification and include a testing procedure associated with it.
13:19:15 #info There was a discussion on evaluating available test frameworks to use for establishing the test suites.
13:24:42 #link http://www.etsi.org/deliver/etsi_gs/NFV/001_099/001/01.01.01_60/gs_NFV001v010101p.pdf potential use cases to establish test cases around
13:28:36 #link http://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_NFV-TST001v010101p.pdf predeployment test case descriptions
13:40:19 #info The team identified the need to establish an initial test specification as a first step before further elaborating on the tools, etc.
13:40:22 #endmeeting
08:22:48 ping alexyang
08:23:53 ping aricg
07:01:11 hello
07:01:28 hi there
07:01:40 good to see you here
07:01:50 i don't know if csatari is already connected
07:02:16 he is listed in the right pane
07:02:18 maybe we need to wait for a few minutes
07:02:26 ok
07:02:38 yeah, but he doesn't reply :-)
07:02:46 at least not in the netready channel
07:09:09 csatari: are you there?
07:15:08 hello
07:15:23 hi
07:15:35 maybe we can start
07:15:39 because i have a question
07:15:44 ok
07:15:58 hopefully csatari will still show up
07:16:09 I think so
07:16:28 i read your L2 inter-DC requirements doc
07:16:51 in which you propose to use fake ports
07:17:12 i am interested from a Gluon perspective
07:17:39 this is more lightweight, but the Neutron community does not welcome it
07:17:40 Gluon allows much more flexibility in terms of modeling network elements such as ports
07:17:57 yes, i can imagine
07:18:11 understand
07:18:39 if we have a fake port representing the remote port and its VTEP info
07:18:44 in Gluon, one could simply create a new port service which represents a VTEP
07:18:53 then the L2 networking will be much easier
07:19:21 yes
07:19:53 what does the Neutron community propose instead? how would they solve it?
07:20:45 Hi
07:20:47 then the SDN controller backend needs to put the fake port and the local ports in one L2 network, which means each local port needs to understand the remote VTEP of the fake port
07:20:57 hi csatari
07:21:07 hi
07:21:22 Neutron asked us to move the L2 networking functionalities to L2GW
07:21:36 ok, i see
07:21:42 it's more complicated
07:22:35 do you think the fake port is more feasible with Gluon and an SDN controller backend?
07:23:43 in general, Gluon provides much more flexibility on a per-port basis
07:23:57 however, there is no L2 service in Gluon yet
07:24:04 I am afraid my network was broken
07:24:04 hello
07:24:25 and I don't have a full understanding of the problem yet
07:24:38 but we can analyze it
07:25:11 Gluon uses Neutron as the L2 backend
07:25:28 I remember there is a local SDN controller under Neutron for the L2 network?
07:25:47 hi csatari
07:26:59 As far as I know Gluon can use several backends.
07:27:23 However, it is not clear to me how Gluon selects which backend to use.
07:28:03 each port has an attribute indicating which backend API should be called
07:28:15 by default, it's Neutron
07:28:28 Okay.
07:28:33 yes, you can of course use either Neutron itself or an SDN controller such as ODL by means of the ML2 plugin
07:29:20 to use Gluon, you create a new networking API (called a proton) which you can use to create new network or port services
07:29:37 Gluon then only stores the mapping of a port to its respective backend
07:29:37 If the SDN controller can support the fake port directly, then there is no need to touch Neutron
07:30:18 that is part of the idea
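The port-to-backend mapping described above can be illustrated with a minimal Python sketch. All names here (PortBindingRegistry, bind, resolve, the proton name) are hypothetical illustrations, not Gluon's actual API; the only facts taken from the chat are that each port carries an attribute naming its backend and that the default backend is Neutron.

    # Hypothetical sketch of Gluon's port -> backend mapping, per the
    # discussion above: Gluon stores only the mapping, nothing else.

    DEFAULT_BACKEND = "neutron"  # stated default in the chat

    class PortBindingRegistry:
        """Stores only the port -> backend mapping."""

        def __init__(self):
            self._bindings = {}  # port_id -> backend name

        def bind(self, port_id, backend=DEFAULT_BACKEND):
            self._bindings[port_id] = backend

        def resolve(self, port_id):
            # Ports with no explicit binding fall back to Neutron.
            return self._bindings.get(port_id, DEFAULT_BACKEND)

    registry = PortBindingRegistry()
    registry.bind("port-1")                             # default: Neutron
    registry.bind("port-2", backend="l2-vtep-proton")   # hypothetical proton
    assert registry.resolve("port-2") == "l2-vtep-proton"
    assert registry.resolve("unknown-port") == "neutron"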
07:31:14 the issue for the fake port mechanism is how to respond to remote VM migration, because the VTEP will change if the VM is migrated.
07:32:38 that's the challenge I talked about at the summit
07:32:41 yes, you need some kind of coordination across data centers
07:33:29 https://docs.google.com/presentation/d/1Cv23dLAmSB57IpD-nt-TH5lrCehcoeiml7HpvgUWauo/edit#slide=id.g149036e5a0_6_201
07:35:19 to csatari: Georg and I are discussing this way to establish an L2 network: https://bugs.launchpad.net/neutron/+bug/1484005
07:37:36 okay
07:38:51 i have to say that the L2GW solution looks cleaner, but i am not very familiar with L2GW
07:40:25 yes, the L2GW could be a core switch controlled by the SDN controller; then the data path will work as usual for cross-data-center traffic
07:41:09 the L2GW community is inactive
07:41:35 we proposed one spec, but there are not enough core reviewers to give +2
07:43:12 I talked to Amando at the OPNFV summit, who initiated the L2GW project, but he is not working on this project anymore
07:43:44 oh, is that so?
07:43:44 the L2GW spec for cross-data-center L2 connection: https://review.openstack.org/#/c/270786/
07:43:57 i wasn't aware
07:44:59 he said he will talk to Sukhdev Kapur, who is currently the only core reviewer in L2GW
07:45:36 ok
07:46:00 off topic: i have to run to another meeting in 15min
07:46:08 ok
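The migration challenge discussed above can be sketched in a few lines: a fake port mirrors a remote port and carries its VTEP, and when the remote VM migrates, the new VTEP has to be pushed to every local port that reaches it. Everything below (FakePort, reprogram_local_ports, on_remote_migration) is an illustrative assumption, not an existing SDN controller API.

    # Hedged sketch of the fake-port / VM-migration problem. A real
    # backend would install flow or tunnel rules; printing stands in
    # for that here.

    class FakePort:
        """Local representation of a remote port and its VTEP endpoint."""

        def __init__(self, port_id, mac, remote_vtep_ip):
            self.port_id = port_id
            self.mac = mac
            self.remote_vtep_ip = remote_vtep_ip

    def reprogram_local_ports(local_ports, fake_port):
        # Each local port must know the remote VTEP of the fake port.
        for port in local_ports:
            print(f"port {port}: send traffic for {fake_port.mac} "
                  f"to VTEP {fake_port.remote_vtep_ip}")

    def on_remote_migration(fake_port, new_vtep_ip, local_ports):
        """The cross-DC coordination step: learn the new VTEP, then
        update every local port that must reach the migrated VM."""
        fake_port.remote_vtep_ip = new_vtep_ip
        reprogram_local_ports(local_ports, fake_port)

    fp = FakePort("remote-port-1", "fa:16:3e:00:00:01", "192.0.2.10")
    reprogram_local_ports(["local-1", "local-2"], fp)
    on_remote_migration(fp, "198.51.100.7", ["local-1", "local-2"])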
07:46:15 csatari: do you want to discuss your geo-redundancy use case?
07:46:25 Sure
07:46:33 so L2 networking is one common topic
07:46:53 we can continue to work on it together
07:47:16 Yes
07:47:17 to csatari: please
07:49:33 In the georedundancy use case we did not consider the L2 connection between the different datacenters.
07:49:54 L2 or L3 are optional
07:50:18 My original thinking was that one VNF runs in one datacenter/OpenStack domain.
07:51:15 I refer to the multisite project when a single VNF spans several datacenters.
07:52:35 would like to know your idea
07:53:44 The basic use case is to create a network connection between the two cloud cells/regions/instances.
07:55:00 When there is no need for a shared broadcast domain between the sites, this is a simple external network connection.
07:55:13 But still some configuration is needed.
07:55:20 you mean using a VPN for the network connection?
07:55:52 using floating IPs to talk to each other?
07:56:56 how do we isolate the traffic between tenants when using an external network?
07:58:21 I'm not even sure if we need to isolate the traffic of the tenants on the external network.
07:58:47 why wouldn't we need isolation?
07:59:41 then another tenant can use a floating IP (from the external network) to talk to your external IP
08:00:32 to Georg: +1
08:01:42 Do we have a requirement to always isolate the georedundancy-related traffic of the different VNFs when they are in a georedundant configuration?
08:02:27 I think this requirement is necessary
08:02:55 What is the reason for it?
08:03:13 what makes the cloud different from the old model is that
08:03:24 (I'm just asking for meat for my use case description :))
08:03:30 it's a multi-tenant shared infrastructure
08:04:06 so each tenant's E-W traffic (or say, internal traffic) should be isolated
08:04:29 I do not consider the georedundancy-related traffic to be VNF E-W traffic.
08:04:51 We are talking about two different VNFs now.
08:05:02 But they are redundant at the VNF level.
08:05:25 that's a different case.
08:06:04 Internal E-W traffic should be isolated. I agree.
08:06:46 whether the traffic among VNFs and PNFs should be isolated is up to the operator's decision
08:07:26 Yes, and in some cases there is a need to configure the underlying network.
08:07:58 When a new connection between two OpenStack cells/regions/instances is created.
08:08:24 if the operator wants to move everything to the cloud, and there is multi-tenant space separation for VNFs, then it is needed; otherwise, there is no need
08:08:31 it just works today, as with PNFs
08:09:07 joehuang, csatari: I am off to another meeting (in parallel)
08:09:21 ok, nice to talk to you, Georg
08:09:25 georgk: ok
08:10:18 to csatari: you are talking about how to configure an inter-connected external network for communication among VNFs and PNFs
08:10:31 Yes
08:10:37 all IP addresses should be visible in one space
08:11:29 More particularly, about the configuration that needs to be done in the VIMs hosting the VNFs.
08:12:43 understand
08:13:06 we can refer to current operators' infrastructure, especially IP pool management
08:15:12 okay
08:16:25 I will also add a note that cells are not the best way to scale OpenStack, due to the maintenance problem you mentioned.
08:20:10 hello, csatari, my network was broken for a while
08:22:10 hello, csatari, are you online?
08:24:50 ok, my network is not stable, I can send the chat to the mailing list and arrange more discussion as needed
08:25:19 hi, joehuang, I'm here.
13:59:18 ChrisPriceAB: Error: Can't start another meeting, one is in progress. Use #endmeeting first.
13:59:22 #topic roll call
13:59:27 #endmeeting
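A closing note on the tenant-isolation question raised in the discussion above: one option in stock Neutron, independent of the fake-port and L2GW approaches, is to avoid a globally shared external network and instead expose the inter-site external network to a single project via Neutron's RBAC extension (action "access_as_external"). The sketch below is a minimal illustration using openstacksdk; the cloud name, network name, and project ID are placeholders, and provider and subnet details are omitted.

    # Minimal sketch, assuming openstacksdk and Neutron's RBAC extension.
    import openstack

    conn = openstack.connect(cloud="site-a")  # placeholder cloud entry

    # Create the inter-site network without sharing it to every tenant.
    net = conn.network.create_network(name="inter-dc-ext")

    # Grant exactly one project the right to use it as an external
    # network, rather than making it visible to all tenants.
    conn.network.create_rbac_policy(
        object_type="network",
        object_id=net.id,
        action="access_as_external",
        target_project_id="TENANT_PROJECT_ID",  # placeholder
    )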