============================================
#opendaylight-group-policy: model/arch redux
============================================


Meeting started by regXboi at 18:16:36 UTC. The full logs are available at
http://meetings.opendaylight.org/opendaylight-group-policy/2014/model_arch_redux/opendaylight-group-policy-model_arch_redux.2014-05-16-18.16.log.html


Meeting summary
---------------
* Is creating a tenant in response to a new flow a good idea? answer: no (tbachman, 18:17:18)
* however, we still need a bulk operation to create tenants (tbachman, 18:17:33)
* orchestration systems (e.g. OpenStack) allow for the bulk creation of networks (tbachman, 18:18:23)
* readams asserts that's fine (tbachman, 18:18:28)
* tenant/EPG changes shouldn't be continuous operations (tbachman, 18:19:20)
* regXboi notes the M&A use case, which implies a running system onto which a new bulk of tenants has been dumped (tbachman, 18:20:06)
* the time constant for convergence should be relatively short (tbachman, 18:22:29)
* regXboi would like to see the time constant for convergence on the order of seconds (tbachman, 18:24:37)
* and by time constant, that means the "1/e time" (tbachman, 18:24:56)
* There are situations -- e.g. a renderer running on physical OpenFlow switches programming L2 flows at 10k/second, with 50 switches -- where a policy change could change flows everywhere, making for a convergence time on the order of minutes (tbachman, 18:26:24)
* but those are seen as "rare" occurrences (tbachman, 18:26:38)
* It would be good if we could make some statement on the convergence time for some cases (tbachman, 18:27:21)
* regXboi asks what call rate they're looking at for the UC&C use case (tbachman, 18:31:42)
* uchau says about 200 flows/second, 10-100k sessions active at any one time (tbachman, 18:31:59)
* correction: 200 sessions/second (tbachman, 18:32:17)
* alagalah asks for clarification of the 10-100k number -- active, or total possible number (tbachman, 18:33:51)
* Actually what I said was: clarify whether that's 10k-100k concurrent sessions or that number of endpoints (alagalah, 18:35:03)
* lost audio (tbachman, 18:35:05)
* clarification on what a dynamic classifier is (tbachman, 18:36:04)
* The config says this is the description of how these endpoints in different EPGs are allowed to communicate (tbachman, 18:39:02)
* It's not a rate-of-change number that separates operational vs. config changes (tbachman, 18:42:52)
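A rough sketch of the arithmetic behind the minutes-order convergence estimate above, assuming the 10k flows/second is a per-switch programming rate and picking a hypothetical count of flows touched by a fabric-wide policy change (neither the per-switch interpretation nor the flow count was stated in the meeting):

    # Back-of-the-envelope convergence estimate (illustrative only).
    # The 10k flows/sec rate and the 50-switch count come from the discussion
    # above; treating the rate as per-switch and the 1.5M flows-touched figure
    # are assumptions made for this sketch.
    FLOW_MOD_RATE_PER_SWITCH = 10_000      # flows/second a switch can program
    SWITCH_COUNT = 50
    FLOWS_TOUCHED_PER_SWITCH = 1_500_000   # hypothetical flows rewritten by a global policy change

    # Switches reprogram in parallel, so wall-clock convergence is bounded by
    # the busiest switch, not by the aggregate fabric-wide rate.
    total_flow_mods = SWITCH_COUNT * FLOWS_TOUCHED_PER_SWITCH
    seconds = FLOWS_TOUCHED_PER_SWITCH / FLOW_MOD_RATE_PER_SWITCH
    print(f"~{total_flow_mods:,} flow mods fabric-wide, "
          f"~{seconds:.0f} s (~{seconds / 60:.1f} min) to reconverge")

Under those assumptions the rare "change flows everywhere" case lands around a couple of minutes, while an ordinary change touching only a handful of flows per switch stays within regXboi's seconds-order target.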
* the subject feature definition is a global configuration item (tbachman, 18:43:37)
* A rule is defined that consists of a classifier that's associated with the flow (tbachman, 18:44:29)
* the action would be e.g. "allow" or "apply QoS" (tbachman, 18:44:40)
* The renderer gets a packet_in, goes through policy resolution, and discovers that it needs to classify with the Lync flow classifier (tbachman, 18:46:12)
* readams recommends reading the policy resolution spec on the wiki (tbachman, 18:46:28)
* resolve the EPG of the EP using the EPG repo (tbachman, 18:46:36)
* LINK: https://wiki.opendaylight.org/view/Group_Policy:Architecture/Policy_Model (tbachman, 18:47:21)
* LINK: https://wiki.opendaylight.org/view/Group_Policy:Architecture (tbachman, 18:47:33)
* there is something like an orchestration system that assigns EPs to EPGs (tbachman, 18:48:34)
* regXboi makes a Lync call to self (tbachman, 18:50:02)
* the policy was improperly specified (tbachman, 18:50:14)
* the renderer gets the information and ensures that the EP gets assigned to the proper EPG (tbachman, 18:50:41)
* There's some mechanism for assigning/mapping an EP to its EPG (tbachman, 18:51:50)
* regXboi has 10 more minutes before his next meeting (regXboi, 18:51:51)
* the Lync flow classifier has something backing it -- the Lync session info (tbachman, 18:52:47)
* The point is that the operational data can be treated differently than the configuration data (tbachman, 18:54:25)
* there's going to be auditing, AAA, logging, etc. (tbachman, 18:55:15)
* Still a question of how an EP is mapped to an EPG -- readams says this is a longer conversation, but there's got to be some configuration of your network that does that (tbachman, 19:01:00)
* For example, in the UC&C case, the devices making the calls can be the EPs (tbachman, 19:03:59)
* the Lync session server pushes information about the Lync session into some storage, which triggers an event to the renderer (tbachman, 19:05:39)
* what's in the policy is how the application wants to treat these flows (tbachman, 19:06:02)
* For example, only allow calls to other parties in a specific region (tbachman, 19:06:25)
* the renderer listens to the policy repository and to the store that keeps the sessions (tbachman, 19:06:46)
* uchau expresses concern about having per-application renderers (tbachman, 19:08:35)
* readams notes that the renderers could be generalized -- e.g. a campus LAN renderer (tbachman, 19:08:49)
* the goal for UC&C is to support campus, enterprise, and datacenter (tbachman, 19:16:15)
* the ONF is wondering if GBP can support the use cases they're looking at (tbachman, 19:17:03)
* ACTION: alagalah to send a diagram of what this may look like (tbachman, 19:20:35)
* ACTION: uchau to provide link for UC&C (tbachman, 19:20:42)
* the policy model is extremely general, but the first implementation is OpenStack-focused (tbachman, 19:21:36)

Meeting ended at 19:25:41 UTC.


Action items, by person
-----------------------
* alagalah
  * alagalah to send a diagram of what this may look like
* uchau
  * uchau to provide link for UC&C


People present (lines said)
---------------------------
* tbachman (57)
* regXboi (13)
* odl_meetbot (4)
* alagalah (3)
* uchau (3)


Generated by `MeetBot`_ 0.1.4