18:16:36 <regXboi> #startmeeting model/arch redux
18:16:36 <odl_meetbot> Meeting started Fri May 16 18:16:36 2014 UTC.  The chair is regXboi. Information about MeetBot at http://ci.openstack.org/meetbot.html.
18:16:36 <odl_meetbot> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:16:36 <odl_meetbot> The meeting name has been set to 'model_arch_redux'
18:16:57 <regXboi> #chair tbachman alagalah
18:16:57 <odl_meetbot> Current chairs: alagalah regXboi tbachman
18:17:18 <tbachman> #info Is creating a tenant in response to a new flow a good idea? answer: no
18:17:33 <tbachman> #info however, we still need a bulk operation to create tenants
18:18:23 <tbachman> #info orchestration systems (e.g. OpenStack) allow for the bulk creation of networks
18:18:28 <tbachman> #info readams asserts that's fine
18:19:20 <tbachman> #info tenant/EPG changes shouldn't be continuous operations
18:20:06 <tbachman> #info regXboi notes the M&A use case, which implies a running system that has a new bulk configuration dumped on it
18:22:29 <tbachman> #info time constant for convergence should be relatively short
18:24:37 <tbachman> #info regXboi would like to see the time constant for convergence on the order of seconds
18:24:56 <tbachman> #info and by time constant, that's the "1/e time"
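The "1/e time" above is the standard first-order time constant: after one time constant τ, roughly 63.2% of pending changes have converged. A minimal sketch (the exponential-convergence model is an assumption for illustration, not something specified in the meeting):

```python
import math

def converged_fraction(t: float, tau: float) -> float:
    """Fraction of pending changes applied after time t, assuming
    first-order (exponential) convergence with time constant tau."""
    return 1.0 - math.exp(-t / tau)

# After one time constant, ~63.2% of changes have converged;
# after three time constants, ~95%.
```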
18:25:08 <regXboi> yeah, i'm an engineer :)
18:25:12 <tbachman> lol
18:26:24 <tbachman> #info In a deployment where a renderer drives 50 physical switches running OF and creating L2 flows at 10k/second, a policy change that alters flows everywhere could push the convergence time to the order of minutes
18:26:38 <tbachman> #info but those are seen as "rare" occurrences
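A back-of-envelope check of the "minutes" figure, assuming the 10k flows/second rate is per switch and switches reprogram in parallel (the 1.2M flow count below is a hypothetical workload, not a number from the meeting):

```python
def reprogram_time_s(flows_to_update: int, flows_per_sec: int = 10_000) -> float:
    """Seconds for one switch to reprogram its flows at the quoted rate.
    If switches work in parallel, fleet-wide convergence tracks the
    slowest switch, not the sum over all 50 switches."""
    return flows_to_update / flows_per_sec

# Hypothetical: 1.2M flows per switch at 10k flows/s -> 120 s, i.e. minutes.
```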
18:27:21 <tbachman> #info It would be good if we could make some statement on the convergence time of some cases.
18:27:24 <regXboi> uchau: we are trying to get to your problem :)
18:27:28 <regXboi> or I am
18:27:38 <uchau> yes thank you :)
18:28:50 <alagalah> regXboi: mate, I used to be a voice eng/arch. I don't think I said "random"
18:29:05 <regXboi> alagalah: you are right - you said dynamic
18:29:14 * tbachman wonders what we're trying to talk about
18:29:24 <regXboi> but I wanted to point out that we can understand where we are going :)
18:29:29 <alagalah> ack
18:29:48 <uchau> i just want to know what the supported config rate is
18:30:26 <regXboi> uchau: I don't think we know because there isn't an implementation yet
18:30:43 <regXboi> and I know I didn't specify a config rate in my sizing draft
18:30:53 <regXboi> which is in https://wiki.opendaylight.org/view/Group_Policy:Scaling
18:31:01 <uchau> we're hearing that 200/sec is no, and human rate is what's expected
18:31:42 <tbachman> #info regXboi asks what call rate they're looking at for UC&C use case
18:31:59 <tbachman> #info uchau says about 200 flows/second, 10-100k sessions active at any one time
18:32:17 <tbachman> #info correction 200 sessions/second
18:32:54 <tbachman> maybe I mis-scribed. Maybe 10-100k EPs
18:33:51 <tbachman> #info alagalah asks for clarification of 10-100k number -- active, or total possible number
18:35:03 <alagalah> #info Actually what I said was clarify 10k-100k concurrent sessions or that number of endpoints
18:35:05 <tbachman> #info lost audio
18:35:13 <tbachman> alagalah: thx!
18:36:04 <tbachman> #info clarification on what a dynamic classifier is
18:39:02 <tbachman> #info The config says this is the description of how these endpoints in different EPGs are allowed to communicate
18:42:52 <tbachman> #info It's not a rate of change number that separates operational vs. config changes.
18:43:37 <tbachman> #info subject feature definition is a global configuration item
18:44:29 <tbachman> #info A rule is defined that consists of a classifier that's associated with the flow
18:44:40 <tbachman> #info the action would be "allow", "apply QoS"
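The rule structure described in the two lines above (a classifier associated with the flow, plus actions such as "allow" or "apply QoS") can be sketched as follows; the class names and the DSCP parameter are hypothetical illustrations, not the GBP model's actual YANG types:

```python
from dataclasses import dataclass, field

@dataclass
class Classifier:
    """Matches traffic -- e.g. a 'Lync flow' classifier."""
    name: str
    params: dict = field(default_factory=dict)

@dataclass
class Action:
    """What to do with matched traffic, e.g. 'allow' or 'apply-qos'."""
    kind: str
    params: dict = field(default_factory=dict)

@dataclass
class Rule:
    """A rule pairs one classifier with an ordered list of actions."""
    classifier: Classifier
    actions: list

allow_lync = Rule(
    classifier=Classifier("lync-flow"),
    actions=[Action("allow"), Action("apply-qos", {"dscp": 46})],
)
```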
18:46:12 <tbachman> #info The renderer gets a packet_in, it goes through policy resolution, and discovers that it needs to classify with the Lync flow classifier
18:46:28 <tbachman> #info readams recommends reading the policy resolution spec on the wiki
18:46:36 <tbachman> #info resolve the EPG of the EP, using the EPG repo
18:47:21 <tbachman> #link https://wiki.opendaylight.org/view/Group_Policy:Architecture/Policy_Model
18:47:33 <tbachman> #link https://wiki.opendaylight.org/view/Group_Policy:Architecture
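The resolution flow logged above (packet_in, resolve the EP's EPG via the EPG repo, then look up the rules for that EPG) can be sketched as a minimal lookup; the dict-based repo and the endpoint/EPG names are hypothetical, and the real resolution process is described in the policy resolution spec linked above:

```python
def resolve_policy(src_ep, epg_repo, policy):
    """On packet_in, resolve the endpoint's EPG, then return the rules
    that govern traffic from that EPG (empty if the EP is unknown)."""
    epg = epg_repo.get(src_ep)        # resolve the EP's group via the EPG repo
    if epg is None:
        return []                     # unknown endpoint: no rules apply
    return policy.get(epg, [])        # rules configured for that EPG

# Hypothetical data: one endpoint registered in the EPG repo.
epg_repo = {"10.0.0.5": "voice-endpoints"}
policy = {"voice-endpoints": ["allow", "apply-qos"]}
```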
18:48:34 <tbachman> #info there is something like an orchestration system that assigns EPs to EPGs
18:50:02 <tbachman> #info regXboi makes Lync call to self
18:50:14 <tbachman> #info policy was improperly specified
18:50:17 <tbachman> lol
18:50:19 <tbachman> sorry folks
18:50:41 <tbachman> #info renderer gets information, ensures that EP gets assigned to proper EPG
18:51:50 <tbachman> #info There's some mechanism for assigning/mapping an EP to its EPG
18:51:51 <regXboi> #info regXboi has 10 more minutes before his next meeting
18:52:47 <tbachman> #info Lync flow classifier has something backing it -- the Lync session info
18:54:25 <tbachman> #info The point is that the operational data can be treated differently than the configuration data.
18:55:15 <tbachman> #info there's going to be auditing, AAA, logging, etc.
18:56:44 * regXboi raises hand
18:58:37 <regXboi> bye all - have a great weekend
18:58:42 <tbachman> regXboi: you too!
19:01:00 <tbachman> #info Still a question of how an EP is mapped to an EPG -- readams says this is a longer conversation, but there's got to be some configuration of your network that does that
19:03:59 <tbachman> #info For example, in UC&C case, the devices making the calls can be the EPs
19:05:39 <tbachman> #info Lync session server pushes information about lync session into some storage, which triggers an event to the renderer
19:06:02 <tbachman> #info what's in the policy is how the application wants to treat these flows
19:06:25 <tbachman> #info For example, only allow calls to other parties in a specific region
19:06:46 <tbachman> #info the renderer listens to the policy repository and to the store that keeps the sessions
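The event-driven arrangement above (a Lync session server pushing session info into storage, and a renderer listening to both the policy repository and the session store) can be sketched with a tiny observable store; the interfaces and event payloads are hypothetical, not OpenDaylight DataBroker APIs:

```python
class Store:
    """Minimal observable store: every put() notifies all subscribers."""
    def __init__(self):
        self._data = {}
        self._listeners = []

    def subscribe(self, fn):
        self._listeners.append(fn)

    def put(self, key, value):
        self._data[key] = value
        for fn in self._listeners:
            fn(key, value)

class Renderer:
    """Listens to both the policy repository and the session store,
    recording each change so it can re-render affected flows."""
    def __init__(self, policy_repo, session_store):
        self.events = []
        policy_repo.subscribe(lambda k, v: self.events.append(("policy", k)))
        session_store.subscribe(lambda k, v: self.events.append(("session", k)))

policy_repo, session_store = Store(), Store()
renderer = Renderer(policy_repo, session_store)
policy_repo.put("voice-epg", {"action": "allow"})       # policy change
session_store.put("call-1", {"src": "10.0.0.5"})        # new Lync session
```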
19:08:35 <tbachman> #info uchau expresses concern about having per-application renderers
19:08:49 <tbachman> #info readams notes that the renderers could be generalized -- e.g. campus LAN renderer
19:16:15 <tbachman> #info the goal for UC&C is to support campus, enterprise, and datacenter
19:17:03 <tbachman> #info the ONF is wondering if GBP can support the use cases they're looking at
19:20:35 <tbachman> #action alagalah to send a diagram of what this may look like
19:20:42 <tbachman> #action uchau to provide link for UC&C
19:21:36 <tbachman> #info the policy model is extremely general, but the first implementation is OpenStack focused
19:25:41 <tbachman> #endmeeting