#opnfv-meeting: BGS weekly meeting

Meeting started by frankbrockners at 15:02:31 UTC (full logs).

Meeting summary

  1. administrivia (frankbrockners, 15:02:45)
    1. I can take it (rprakash, 15:04:23)

  2. Committer approval - Daniel Smith (dneary, 15:06:19)
    1. AGREED: Daniel Smith is approved as a committer for BGS (dneary, 15:06:35)
    2. Daniel will provide the glue between the activities of the different installers, and will be a committer (dneary, 15:07:10)

  3. Upcoming agenda (dneary, 15:07:17)
    1. we need to continue our meeting (rprakash, 15:07:33)
    2. Frank reminds everyone that Easter Monday is next weekend, and is a holiday for many regular participants. He suggests we should maintain the meeting at the same time. (dneary, 15:08:14)
    3. Dave proposes cancelling or moving the call (dneary, 15:08:25)
    4. Frank agreed we need to have the meeting; should it be Monday or Tuesday? (rprakash, 15:08:38)
    5. Bin suggests moving it to Tuesday 7th, 9am PDT (the hour after the TSC meeting) (dneary, 15:08:57)
    6. Tuesday 9 AM PDT is the suggestion (rprakash, 15:09:02)
    7. AGREED: Meeting moves to 9am PDT on Tuesday 7th April (dneary, 15:09:32)

  4. Work items for first release (dneary, 15:10:15)
    1. People are working towards a final system state, Frank asks if we have reached a final state yet? (dneary, 15:11:46)
    2. Network topology to be agreed for documentation (rprakash, 15:12:45)
    3. Not yet at a common conclusion (Szilard from Ericsson) (dneary, 15:13:31)
    5. https://wiki.opnfv.org/pharos/pharos_specification (rprakash, 15:15:01)
    6. for the first implementation it will be at LF, where there are two PODs (rprakash, 15:16:02)
    7. https://dl.dropboxusercontent.com/u/12773330/opnfv_hw_pics/OPNFVHardwareFront.jpg (rprakash, 15:16:18)
    8. Network design is more important for a simple LF implementation (Stefan from Ericsson); Chris supports that (rprakash, 15:19:30)
    9. let's continue the discussion through email to reach an agreement (rprakash, 15:20:22)
    10. storage, both single-node and high availability; Ceph connectivity is in question (rprakash, 15:21:32)
    11. BGS uses this for the artifacts repo: (rprakash, 15:23:09)
    12. http://artifacts.opnfv.org/ (rprakash, 15:23:15)
    13. Dan Radez says that the issue is that we have a line item - "we'll use Ceph for storage" - with no idea what that means for a deployment. Frank submits that someone needs to take the action to consolidate the various views into one common target platform (dneary, 15:23:40)
    14. BGS will have different storage requirements than Octopus for Ceph (rprakash, 15:24:09)
    15. ACTION: Peter Bandzi takes on the task of leading the discussion to converge on the reference target installation of OPNFV so that we can deploy to it next Thursday (dneary, 15:27:22)

  5. Common set of Puppet manifests for installation of common components (dneary, 15:27:54)
    1. trozet describes the Ceph configuration being used (one Ceph "controller" on each of the controller nodes, and OSDs on every node), and some continuing issues with VXLAN creation for OpenDaylight and OVS (dneary, 15:30:58)
    2. can use a draft version for the Ceph script, to allow experimentation rather than merging (rprakash, 15:32:48)
    3. Question: "where can I find the script you use to configure Ceph?" Frank suggests putting it into git, Tim will put it into Gerrit and mark it as "Draft", as per Dan Radez's suggestion (dneary, 15:32:50)
    4. Frank reiterates that common pieces used by both Fuel & Foreman need to be in the repo ASAP (dneary, 15:38:13)
    5. the common pieces shared across the different BGS tracks should be listed on the wiki (rprakash, 15:38:20)
    6. YAML files for the different vendor blades need to be consolidated by Peter Bandzi (rprakash, 15:40:51)
    7. need a common shim/wrapper for API calls to blade servers from the different hardware vendors (rprakash, 15:42:59)
    8. send the etherpad link for this to BGS asap for Peter to review (rprakash, 15:43:52)
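
The common shim/wrapper discussed in items 6-7 above could take roughly the following shape. This is only a minimal sketch: all class, method, and vendor names here are hypothetical illustrations, not anything agreed in the meeting.

```python
from abc import ABC, abstractmethod

class BladeDriver(ABC):
    """Common interface wrapping vendor-specific blade-server APIs."""

    @abstractmethod
    def power_on(self, blade_id: str) -> str: ...

    @abstractmethod
    def power_off(self, blade_id: str) -> str: ...

class HpIloDriver(BladeDriver):
    # Hypothetical backend; a real one would talk to the vendor's
    # management controller instead of returning strings.
    def power_on(self, blade_id): return f"ilo: powered on {blade_id}"
    def power_off(self, blade_id): return f"ilo: powered off {blade_id}"

class DellIdracDriver(BladeDriver):
    # Hypothetical backend for a second vendor.
    def power_on(self, blade_id): return f"idrac: powered on {blade_id}"
    def power_off(self, blade_id): return f"idrac: powered off {blade_id}"

DRIVERS = {"hp-ilo": HpIloDriver, "dell-idrac": DellIdracDriver}

def driver_for(node: dict) -> BladeDriver:
    """Pick the backend from a per-node hardware description --
    the kind of data the consolidated per-vendor YAML files would carry."""
    return DRIVERS[node["mgmt_type"]]()
```

Callers would then stay vendor-agnostic, e.g. `driver_for({"mgmt_type": "hp-ilo"}).power_on("blade-3")`, with the vendor choice living entirely in the consolidated YAML.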

  6. Contents of OPNFV release ISO for r1 (dneary, 15:45:06)
    1. Deployment automation and hardware adapters to be shared (rprakash, 15:45:06)
    2. Need something well defined from a release perspective - contains everything we need (OpenDaylight, OpenStack + dependencies, base OS, installer) (dneary, 15:45:49)
    3. Frank: Should install the jumphost off an ISO, and allow a user to deploy other hosts from there (dneary, 15:46:49)
    4. Tim: It doesn't make any sense to assume that provisioning host doesn't have internet access (dneary, 15:47:17)
    5. Chris: What we provide as a deliverable is different from what we use ourselves - want to enable consumers of what we are producing, potentially behind a firewall (dneary, 15:48:34)
    6. Tim asks if there's a proposal of which packages to include (dneary, 15:49:03)
    7. Dan says that a base OS installer plus OpenStack plus OpenDaylight plus all dependencies would be a huge image - bigger than a CD (dneary, 15:49:44)
    8. Frank says, do we want an all in one package that installs independent of having internet access? (dneary, 15:51:08)
    9. We need a deterministic install. If we are pulling upstream resources there is the likelihood that something will break for someone. (bryan_att, 15:51:27)
    10. At the least we need to pull defined versions of the resources as used in the final release build (bryan_att, 15:52:18)
    11. Dave says, if the goal is to enable an installation to someone who has no internet access, then other alternatives are possible - we can ship packages without that being a bootable install media (dneary, 15:55:28)
    12. e.g. from our own artifact repository if needed (bryan_att, 15:55:37)
    13. Bryan says that approach would be fragile (dneary, 15:55:46)
    14. I would not put too strong a focus on internet-limited labs / users, to the detriment of the community re a reliable install experience (bryan_att, 15:56:30)
    15. Chris says there's an expectation that we have an installable media - is there any reason not to do an ISO? (dneary, 15:57:00)
    16. for CI-focused labs / users, it may make sense to pull directly from the upstream but only if we are OK with the fact that things will break, in different ways for different users (bryan_att, 15:58:22)
    17. Dave says that there's a lot of decisions and work to happen for a bootable media - host OS, packages, installer, scripts - and it feels like some of those decisions have not been discussed yet (dneary, 15:59:37)
    18. Dan says the issue he has is the work involved in creating an ISO would endanger the other tasks that need to be done for the release (dneary, 16:00:21)
    19. I'm happy to be convinced that this is an overstated concern - looking for any evidence as to how we can achieve a deterministic install experience while dynamically pulling from the internet the upstream code (bryan_att, 16:01:37)
    20. Tim suggests that we could bring up an installer host, which would pull in all of the upstream stuff, in the interests of limiting the footprint and work involved (dneary, 16:02:19)
    21. AGREED: BGS team to continue to figure out if an install media is feasible as an approach, Chris to bring this discussion back to the board to set expectations for the release (dneary, 16:04:23)
    22. sounds good to me, if we can ensure a specific tested package version is installed (bryan_att, 16:06:26)
    23. AGREED: The target is an installable disk that installs the OS on the controller and compute nodes; additional packages can be downloaded from the Internet or a private mirror, as the case may be (rprakash, 16:08:37)
    24. Chris still wants to ensure we assess the feasibility of an ISO, as discussed (rprakash, 16:09:19)

Meeting ended at 16:10:57 UTC (full logs).

Action items

  1. Peter Bandzi takes on the task of leading the discussion to converge on the reference target installation of OPNFV so that we can deploy to it next Thursday

People present (lines said)

  1. dneary (61)
  2. rprakash (26)
  3. collabot (14)
  4. bryan_att (9)
  5. frankbrockners (6)
  6. fdegir (4)
  7. arnaud_orange (2)
  8. malla (1)
  9. ChrisPriceAB (0)
  10. pbandzi (0)

Generated by MeetBot 0.1.4.