13:04:44 <bin_> #startmeeting Dovetail-2016-0701
13:04:44 <collabot`> Meeting started Fri Jul  1 13:04:44 2016 UTC.  The chair is bin_. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:04:44 <collabot`> Useful Commands: #action #agreed #help #info #idea #link #topic.
13:04:44 <collabot`> The meeting name has been set to 'dovetail_2016_0701'
13:05:21 <ChrisPriceAB> #info Chris Price
13:05:42 <bin_> #info Bin Hu
13:05:52 <bin_> #info Hongbo Tian
13:06:00 <ChrisPriceAB> bin can you #chair chrispriceab
13:08:36 <bin_> #chair chrispriceab
13:08:36 <collabot`> Current chairs: bin_ chrispriceab
13:09:10 <ChrisPriceAB> #topic Dovetail framework
13:09:33 <ChrisPriceAB> #info Hongbo presented a PowerPoint slide reflecting his thoughts on the Dovetail project
13:12:03 <ChrisPriceAB> #link https://wiki.opnfv.org/display/dovetail dovetail project description
13:14:41 <ChrisPriceAB> #info Hongbo outlines the need to address hardware in the Dovetail specification.
13:15:24 <ChrisPriceAB> #info There was a consensus on the call that this would leverage the Pharos specification and include a testing procedure associated with it.
13:19:15 <ChrisPriceAB> #info there was a discussion on evaluating available test frameworks to use for establishing the test suites
13:24:42 <ChrisPriceAB> #link http://www.etsi.org/deliver/etsi_gs/NFV/001_099/001/01.01.01_60/gs_NFV001v010101p.pdf potential use cases to establish test cases around
13:28:36 <ChrisPriceAB> #link http://www.etsi.org/deliver/etsi_gs/NFV-TST/001_099/001/01.01.01_60/gs_NFV-TST001v010101p.pdf predeployment test case descriptions
13:40:19 <ChrisPriceAB> #info the team identified the need to establish an initial test specification as a first step before further elaborating on the tools etc.
13:40:22 <ChrisPriceAB> #endmeeting
07:01:11 <joehuang> hello
07:01:28 <georgk> hi there
07:01:40 <joehuang> good to see you here
07:01:50 <georgk> i don't know if csatari is already connected
07:02:16 <joehuang> he is listed in the right pane
07:02:18 <georgk> maybe we need to wait for a few minutes
07:02:26 <joehuang> ok
07:02:38 <georgk> yeah, but he doesn't reply :-)
07:02:46 <georgk> at least not in the netready channel
07:09:09 <georgk> csatari: are you there?
07:15:08 <joehuang> hello
07:15:23 <georgk> hi
07:15:35 <georgk> maybe we can start
07:15:39 <georgk> because i have a question
07:15:44 <joehuang> ok
07:15:58 <georgk> hopefully csatari will still show up
07:16:09 <joehuang> I think so
07:16:28 <georgk> i read your L2 inter-DC requirements doc
07:16:51 <georgk> in which you propose to use fake ports
07:17:12 <georgk> i am interested from a Gluon perspective
07:17:39 <joehuang> this is more lightweight, but the Neutron community does not welcome it
07:17:40 <georgk> Gluon allows much more flexibility in terms of modeling network elements such as ports
07:17:57 <georgk> yes, i can imagine
07:18:11 <joehuang> understand
07:18:39 <joehuang> if we have a fake port representing the remote port and its VTEP info
07:18:44 <georgk> in Gluon, one could simply create a new port service which represents a VTEP
07:18:53 <joehuang> then the L2 networking will be much easier
07:19:21 <joehuang> yes
07:19:53 <georgk> what does the Neutron community propose instead? how would they solve it?
07:20:45 <csatari> Hi
07:20:47 <joehuang> then the SDN controller backend needs to put the fake port and the local ports in one L2 network, which means each local port needs to know the remote VTEP for the fake port
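A minimal sketch of the fake-port idea under discussion, assuming python-neutronclient. 'binding:profile' is a real (admin-only) Neutron port attribute, but the remote_vtep_ip/remote_mac keys inside it are hypothetical: they presume an SDN backend that reads them and programs a tunnel to the remote VTEP, which is not an existing Neutron extension.

```python
from keystoneauth1.identity import v3
from keystoneauth1 import session
from neutronclient.v2_0 import client

# Standard Keystone v3 authentication; endpoint and credentials are placeholders.
auth = v3.Password(auth_url='http://keystone:5000/v3',
                   username='admin', password='secret', project_name='admin',
                   user_domain_id='default', project_domain_id='default')
neutron = client.Client(session=session.Session(auth=auth))

net_id = neutron.list_networks(name='inter-dc-net')['networks'][0]['id']

# A "fake" port standing in for a VM that actually lives in the remote DC.
# The hypothetical binding:profile keys tell the local SDN backend where
# to tunnel traffic destined for this port.
fake_port = neutron.create_port({'port': {
    'network_id': net_id,
    'name': 'fake-port-remote-vm-1',
    'mac_address': 'fa:16:3e:aa:bb:cc',          # MAC of the remote VM's port
    'binding:profile': {'remote_vtep_ip': '203.0.113.10',
                        'remote_mac': 'fa:16:3e:aa:bb:cc'},
}})['port']
```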
07:20:57 <joehuang> hi csatari
07:21:07 <georgk> hi
07:21:22 <joehuang> Neutron asked us to move the L2 networking functionality to L2GW
07:21:36 <georgk> ok, i see
07:21:42 <joehuang> it's more complicated
07:22:35 <joehuang> do you think the fake port is more feasible with Gluon and an SDN controller backend?
07:23:43 <georgk> in general, Gluon provides much more flexibility on a per-port basis
07:23:57 <georgk> however, there is no L2 service in Gluon yet
07:24:04 <joehuang> I am afraid my network was broken
07:24:04 <joehuang> hello
07:24:15 <csatari> hello
07:24:25 <georgk> and I don't have a full understanding of the problem yet
07:24:38 <georgk> but we can analyze it
07:25:11 <joehuang> Gluon uses Neutron as the L2 backend
07:25:28 <joehuang> I remember there is one local SDN controller under Neutron for the L2 network?
07:25:47 <joehuang> hi csatari
07:26:59 <csatari> As far as I know Gluon can use several backends.
07:27:23 <csatari> However, it is not clear to me how Gluon selects which backend to use.
07:28:03 <joehuang> each port has an attribute indicating which backend API should be called
07:28:15 <joehuang> by default, it's Neutron
07:28:28 <csatari> Okay.
07:28:33 <georgk> yes, you can of course use either Neutron itself or an SDN controller such as ODL by means of the ML2 plugin
07:29:20 <georgk> to use Gluon, you create a new networking API (called a proton) which you can use to create new network or port services
07:29:37 <georgk> Gluon then only stores the mapping of a port to its respective backend
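A toy sketch of the dispatch model georgk describes, not actual Gluon code: Gluon keeps only a port-to-backend mapping and forwards calls, with Neutron as the default backend. The class and method names are illustrative.

```python
class NeutronBackend:
    def bind_port(self, port_id, host):
        print(f"binding {port_id} on {host} via Neutron")

class OdlProtonBackend:
    def bind_port(self, port_id, host):
        print(f"binding {port_id} on {host} via an ODL proton")

class PortDispatcher:
    """Stores which backend owns each port and forwards calls to it."""
    def __init__(self, default_backend):
        self.default = default_backend
        self.owner = {}                     # port_id -> backend instance

    def register(self, port_id, backend=None):
        self.owner[port_id] = backend or self.default   # Neutron by default

    def bind_port(self, port_id, host):
        self.owner.get(port_id, self.default).bind_port(port_id, host)

dispatcher = PortDispatcher(default_backend=NeutronBackend())
dispatcher.register("port-1")                       # falls back to Neutron
dispatcher.register("port-2", OdlProtonBackend())   # explicit proton backend
dispatcher.bind_port("port-2", host="compute-7")
```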
07:29:37 <joehuang> If the SDN controller can support the fake port directly, then there's no need to touch Neutron
07:30:18 <georgk> that is part of the idea
07:31:14 <joehuang> the issue for the fake port mechanism is how to respond to remote VM migration, since the VTEP will change if the VM is migrated.
07:32:38 <joehuang> that's the challenge I talked about at the summit
07:32:41 <georgk> yes, you need some kind of coordination across data centers
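Continuing the same hypothetical sketch: whatever cross-DC coordination exists (message bus, webhook, or polling is left open here) would have to push the migrated VM's new VTEP into every peer site's fake port, e.g.:

```python
def on_remote_vm_migrated(neutron, fake_port_id, new_vtep_ip):
    """Refresh the local fake port after the remote VM moved hosts.

    'neutron' is a neutronclient v2.0 Client; the binding:profile keys
    are the same hypothetical ones as in the earlier sketch.
    """
    neutron.update_port(fake_port_id, {'port': {
        'binding:profile': {'remote_vtep_ip': new_vtep_ip},
    }})
```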
07:33:29 <csatari> https://docs.google.com/presentation/d/1Cv23dLAmSB57IpD-nt-TH5lrCehcoeiml7HpvgUWauo/edit#slide=id.g149036e5a0_6_201
07:35:19 <joehuang> to csatari: Georg and I are discussing this way to establish an L2 network: https://bugs.launchpad.net/neutron/+bug/1484005
07:37:36 <csatari> okay
07:38:51 <georgk> i have to say that the l2gw solution looks cleaner, but i am not very familiar with l2gw
07:40:25 <joehuang> yes, the L2GW could be the core switch, controlled by the SDN controller; then the data path will work as usual for cross-data-center traffic
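For comparison, a rough sketch of driving the L2GW API directly over REST. The /v2.0/l2-gateways and /v2.0/l2-gateway-connections paths and payload shapes are as I recall them from the networking-l2gw project and should be verified against an actual deployment; the endpoint, token, and UUIDs are placeholders.

```python
import requests

NEUTRON = 'http://controller:9696/v2.0'    # assumed Neutron endpoint
HDRS = {'X-Auth-Token': 'KEYSTONE_TOKEN'}  # placeholder token

# Describe the gateway device (e.g. the core switch joehuang mentions)
# and the interface that carries the inter-DC traffic.
gw = requests.post(f'{NEUTRON}/l2-gateways', headers=HDRS, json={
    'l2_gateway': {
        'name': 'dc1-core-switch',
        'devices': [{'device_name': 'core-sw-1',
                     'interfaces': [{'name': 'eth3'}]}],
    }}).json()['l2_gateway']

# Bind a tenant network onto that gateway; the segmentation id maps the
# overlay network to the VLAN used between the two data centers.
requests.post(f'{NEUTRON}/l2-gateway-connections', headers=HDRS, json={
    'l2_gateway_connection': {
        'l2_gateway_id': gw['id'],
        'network_id': 'TENANT_NET_UUID',   # placeholder
        'segmentation_id': 1234,
    }})
```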
07:41:09 <joehuang> the L2GW community is inactive
07:41:35 <joehuang> we proposed one spec, but there are not enough core reviewers to give +2
07:43:12 <joehuang> I talked to Amando at the OPNFV summit, who initiated the L2GW project, but he is not working on this project anymore
07:43:44 <georgk> oh, it is?
07:43:44 <joehuang> the L2GW spec for cross data center L2 connection: https://review.openstack.org/#/c/270786/
07:43:57 <georgk> i wasn't aware
07:44:59 <joehuang> he said he would talk to Sukhdev Kapur, who is currently the only core reviewer in L2GW
07:45:36 <georgk> ok
07:46:00 <georgk> off topic: i have to run to another meeting in 15min
07:46:08 <joehuang> ok
07:46:15 <georgk> csatari: do you want to discuss your geo-redundancy use case?
07:46:25 <csatari> Sure
07:46:33 <joehuang> so L2 networking is one common topic
07:46:53 <joehuang> we can continue working on it together
07:47:16 <csatari> Yes
07:47:17 <joehuang> to csatari, please
07:49:33 <csatari> In the georedundancy use case we did not consider the L2 connection between the different datacenters.
07:49:54 <joehuang> L2 or L3 are optional
07:50:18 <csatari> My original thinking was that one VNF runs in one datacenter/OpenStack domain.
07:51:15 <csatari> I refer to the multisite project for the case when a single VNF spans several datacenters.
07:52:35 <joehuang> would like to know your idea
07:53:44 <csatari> The basic use case is to create a network connection between the two cloud cells/regions/instances.
07:55:00 <csatari> When there is no need for a shared broadcast domain between the sites, this is a simple external network connection.
07:55:13 <csatari> But still some configuration is needed.
07:55:20 <joehuang> you mean use VPN for the network connection?
07:55:52 <joehuang> using floating IP to talk with each other?
07:56:56 <joehuang> how do we isolate the traffic between tenants when using the external network?
07:58:21 <csatari> I'm not even sure if we need to isolate the traffic of the tenants on the external network.
07:58:47 <georgk> why wouldn't we need isolation?
07:59:41 <joehuang> then another tenant can use a floating IP (from the external network) to talk to your external IP
08:00:32 <joehuang> to Georg: +1
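One possible way to get per-tenant isolation without a flat shared external network is a dedicated inter-site network per geo-redundant VNF pair, shared with exactly one peer project via Neutron's RBAC extension (a real Neutron feature); the names, CIDR, and project ID below are placeholders.

```python
def create_isolated_georedundancy_net(neutron, peer_project_id):
    """One dedicated inter-site network per geo-redundant VNF pair.

    'neutron' is a neutronclient v2.0 Client. The network stays private,
    so other tenants' floating IPs cannot reach this traffic.
    """
    net = neutron.create_network({'network': {
        'name': 'georedundancy-vnf-a', 'shared': False,
    }})['network']
    neutron.create_subnet({'subnet': {
        'network_id': net['id'], 'ip_version': 4, 'cidr': '192.0.2.0/24',
    }})
    # Neutron RBAC: grant exactly one other project access to this network.
    neutron.create_rbac_policy({'rbac_policy': {
        'object_type': 'network',
        'object_id': net['id'],
        'action': 'access_as_shared',
        'target_tenant': peer_project_id,   # peer VNF's project UUID
    }})
    return net
```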
08:01:42 <csatari> Do we have a requirement to always isolate the georedundancy-related traffic of the different VNFs when they are in a georedundant configuration?
08:02:27 <joehuang> I think this requirement is necessary
08:02:55 <csatari> What is the reason for it?
08:03:13 <joehuang> what makes the cloud different from the old world is that
08:03:24 <csatari> (I'm just asking to get meat for my use case description :))
08:03:30 <joehuang> it's a multi-tenant shared infrastructure
08:04:06 <joehuang> so each tenant's E-W traffic (or say, internal traffic) should be isolated
08:04:29 <csatari> I do not consider the georedundancy-related traffic to be VNF E-W traffic.
08:04:51 <csatari> We are talking about two different VNFs now.
08:05:02 <csatari> But they are redundant at the VNF level.
08:05:25 <joehuang> that's a different case.
08:06:04 <csatari> Internal E-W traffic should be isolated. I agree.
08:06:46 <joehuang> whether the traffic among VNFs and PNFs should be isolated is up to the operator's decision
08:07:26 <csatari> Yes, and in some cases there is a need to configure the underlying network.
08:07:58 <csatari> When a new connection between two OpenStack cells/regions/instances is created.
08:08:24 <joehuang> if the operator wants to move everything to the cloud, and there is multi-tenant space separation for the VNFs, then it's needed; otherwise there is no need
08:08:31 <joehuang> it just works today the way PNFs do
08:09:07 <georgk> joehuang, csatari: I am off to another meeting (in parallel)
08:09:21 <joehuang> ok, nice to talk to you, Georg
08:09:25 <csatari> georgk: ok
08:10:18 <joehuang> to csatari: you are talking about how to configure an inter-connected external network for the communication among VNFs and PNFs
08:10:31 <csatari> Yes
08:10:37 <joehuang> all IP addresses should be visible in one space
08:11:29 <csatari> More specifically, about the configuration that needs to be done in the VIMs hosting the VNFs.
08:12:43 <joehuang> understand
08:13:06 <joehuang> we can refer to current operators' infrastructure, especially IP pool management
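As an illustration of that kind of IP pool management with stock Neutron, subnet pools let every site carve non-overlapping subnets from one shared prefix plan. This reuses the neutron client and net_id from the earlier sketch; the pool name and prefixes are invented.

```python
# Run once, centrally: one shared prefix plan for all sites.
pool = neutron.create_subnetpool({'subnetpool': {
    'name': 'inter-dc-pool',
    'prefixes': ['10.200.0.0/16'],
    'default_prefixlen': 24,
}})['subnetpool']

# Run per site/network: no explicit CIDR, so Neutron carves the next free
# /24 out of the pool and the inter-DC address space cannot overlap.
subnet = neutron.create_subnet({'subnet': {
    'network_id': net_id,
    'ip_version': 4,
    'subnetpool_id': pool['id'],
}})['subnet']
print(subnet['cidr'])   # e.g. 10.200.0.0/24
```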
08:15:12 <csatari> okay
08:16:25 <csatari> I will also add a note that cells are not the best way to scale OpenStack, due to the maintenance problem you mentioned.
08:20:10 <joehuang> hello, csatari, my network was broken for a while
08:22:10 <joehuang> hello, csatari, are you online?
08:24:50 <joehuang> ok, my network is not stable; I can send the chat to the mailing list and arrange more discussion as needed
08:25:19 <csatari> hi, joehuang, I'm here.
13:59:18 <collabot`> ChrisPriceAB: Error: Can't start another meeting, one is in progress.  Use #endmeeting first.
13:59:22 <ChrisPriceAB> #topic roll call
13:59:27 <ChrisPriceAB> #endmeeting