#opnfv-meeting: BGS weekly meeting
Meeting started by frankbrockners at 15:02:31 UTC
(full logs).
Meeting summary
- administrivia (frankbrockners, 15:02:45)
- I can take it (rprakash, 15:04:23)
- Committer approval - Daniel Smith (dneary, 15:06:19)
- AGREED: Daniel Smith is approved as a committer for BGS (dneary, 15:06:35)
- Daniel will be the glue between the different activities for the different installers, and will be a committer (dneary, 15:07:10)
- Upcoming agenda (dneary, 15:07:17)
- we need to continue our meeting (rprakash, 15:07:33)
- Frank reminds everyone that Easter Monday is next weekend, and is a holiday for many regular participants. He suggests we should maintain the meeting at the same time. (dneary, 15:08:14)
- Dave proposes cancelling or moving the call (dneary, 15:08:25)
- Frank agreed; should we have the meeting on Monday or Tuesday? (rprakash, 15:08:38)
- Bin suggests moving it to Tuesday 7th, 9am PST (the hour after the TSC meeting) (dneary, 15:08:57)
- Tuesday 9 AM PDT is the suggestion (rprakash, 15:09:02)
- AGREED: Meeting moves to 9am PDT on Tuesday 7th April (dneary, 15:09:32)
- Work items for first release (dneary, 15:10:15)
- People are working towards a final system state; Frank asks if we have reached a final state yet (dneary, 15:11:46)
- Network topology to be agreed for documentation (rprakash, 15:12:45)
- Not yet at a common conclusion (Szilard from Ericsson) (dneary, 15:13:31)
- pharos:pharos_specification [Wiki] (rprakash, 15:13:53)
- https://wiki.opnfv.org/pharos/pharos_specification (rprakash, 15:15:01)
- for the first implementation it will be at LF, and there are two PODs there (rprakash, 15:16:02)
- https://dl.dropboxusercontent.com/u/12773330/opnfv_hw_pics/OPNFVHardwareFront.jpg (rprakash, 15:16:18)
- Design of the network is more important for a simple LF implementation (Stefan from Ericsson); Chris supports that (rprakash, 15:19:30)
- let's continue the discussion through email to get to an agreement (rprakash, 15:20:22)
- storage: single vs. high availability; Ceph connectivity in question (rprakash, 15:21:32)
- in BGS it is used for the artifacts repo (rprakash, 15:23:09)
- http://artifacts.opnfv.org/ (rprakash, 15:23:15)
- Dan Radez says that the issue is that we have a line item - "we'll use Ceph for storage" - with no idea what that means for a deployment. Frank submits that someone needs to take the action to consolidate the various views into one common target platform (dneary, 15:23:40)
- BGS will have different storage requirements for Ceph than Octopus (rprakash, 15:24:09)
- ACTION: Peter Bandzi takes on the task of leading the discussion to converge on the reference target installation of OPNFV so that we can deploy to it next Thursday (dneary, 15:27:22)
- Common set of Puppet manifests for installation of common components (dneary, 15:27:54)
- trozet describes the Ceph configuration being used (one Ceph "controller" on each of the controller nodes, and OSDs on every node), and some continuing issues with VXLAN creation for OpenDaylight and OVS (dneary, 15:30:58)
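The layout trozet describes can be sketched as a node-to-daemon role map. This is an illustrative sketch only - the helper and node names are hypothetical, not from the BGS repo, and it assumes the per-controller Ceph "controller" daemon is a monitor:

```python
# Hypothetical sketch of the described Ceph layout:
# a monitor ("mon") on each controller node, an OSD on every node.
def ceph_roles(controllers, computes):
    """Map each node name to the Ceph daemons it would run."""
    roles = {}
    for node in controllers:
        roles[node] = ["mon", "osd"]  # controllers run a monitor plus an OSD
    for node in computes:
        roles[node] = ["osd"]         # compute nodes run only an OSD
    return roles

layout = ceph_roles(["controller1", "controller2", "controller3"],
                    ["compute1", "compute2"])
for node, daemons in sorted(layout.items()):
    print(node, daemons)
```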
- Dan: use a draft version for Ceph, to allow experimentation rather than merging (rprakash, 15:32:48)
- Question: "where can I find the script you use to configure Ceph?" Frank suggests putting it into git; Tim will put it into Gerrit and mark it as "Draft", as per Dan Radez's suggestion (dneary, 15:32:50)
- Frank reiterates that common pieces used by both Fuel & Foreman need to be in the repo ASAP (dneary, 15:38:13)
- the most common pieces across the different BGS tracks should be listed on the wiki (rprakash, 15:38:20)
- YAML files for the different vendor blades need to be consolidated by Peter Bandzi (rprakash, 15:40:51)
- need a common shim/wrapper for API calls to the blade servers from different hardware vendors (rprakash, 15:42:59)
- send the etherpad link for this to BGS ASAP for Peter to review (rprakash, 15:43:52)
- Contents of OPNFV release ISO for r1 (dneary, 15:45:06)
- deployment automation and hardware adapters to be shared (rprakash, 15:45:06)
- Need something well defined from a release perspective - contains everything we need (OpenDaylight, OpenStack + dependencies, base OS, installer) (dneary, 15:45:49)
- Frank: Should install the jumphost off an ISO, and allow a user to deploy other hosts from there (dneary, 15:46:49)
- Tim: It doesn't make any sense to assume that the provisioning host doesn't have internet access (dneary, 15:47:17)
- Chris: What we provide as a deliverable is different from what we use ourselves - we want to enable consumers of what we are producing, potentially behind a firewall (dneary, 15:48:34)
- Tim asks if there's a proposal of which packages to include (dneary, 15:49:03)
- Dan says that a base OS installer plus OpenStack plus OpenDaylight plus all dependencies would be a huge image - bigger than a CD (dneary, 15:49:44)
- Frank asks: do we want an all-in-one package that installs independently of internet access? (dneary, 15:51:08)
- We need a deterministic install. If we are pulling upstream resources there is the likelihood that something will break for someone. (bryan_att, 15:51:27)
- At the least we need to pull defined versions of the resources as used in the final release build (bryan_att, 15:52:18)
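Bryan's "defined versions" point can be sketched as a checksum-pinned manifest check: each release artifact is pinned to an exact digest, and a download is verified against that pin before installing. A minimal sketch; the file names and manifest usage are hypothetical, not real OPNFV release data:

```python
# Minimal sketch of a deterministic pull: verify a downloaded artifact
# against a checksum pinned at release-build time.
import hashlib

def verify(path, expected_sha256):
    """Return True iff the file at `path` hashes to the pinned checksum."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256

# Hypothetical usage against a pinned manifest entry:
# if not verify("downloads/opendaylight.tar.gz", manifest["opendaylight.tar.gz"]):
#     raise RuntimeError("checksum mismatch - refusing to install")
```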
- Dave says, if the goal is to enable an installation for someone who has no internet access, then other alternatives are possible - we can ship packages without that being a bootable install medium (dneary, 15:55:28)
- e.g. from our own artifact repository if needed (bryan_att, 15:55:37)
- Bryan says that approach would be fragile (dneary, 15:55:46)
- I would not put too strong a focus on internet-limited labs / users, to the detriment of the community re a reliable install experience (bryan_att, 15:56:30)
- Chris says there's an expectation that we have an installable medium - is there any reason not to do an ISO? (dneary, 15:57:00)
- for CI-focused labs / users, it may make sense to pull directly from the upstream, but only if we are OK with the fact that things will break, in different ways for different users (bryan_att, 15:58:22)
- Dave says that there are a lot of decisions and work to happen for a bootable medium - host OS, packages, installer, scripts - and it feels like some of those decisions have not been discussed yet (dneary, 15:59:37)
- Dan says the issue he has is that the work involved in creating an ISO would endanger the other tasks that need to be done for the release (dneary, 16:00:21)
- I'm happy to be convinced that this is an overstated concern - looking for any evidence as to how we can achieve a deterministic install experience while dynamically pulling the upstream code from the internet (bryan_att, 16:01:37)
- Tim suggests that we could bring up an installer host, which would pull in all of the upstream stuff, in the interests of limiting the footprint and work involved (dneary, 16:02:19)
- AGREED: BGS team to continue to figure out if an install medium is feasible as an approach; Chris to bring this discussion back to the board to set expectations for the release (dneary, 16:04:23)
- sounds good to me, if we can ensure a specific tested package version is installed (bryan_att, 16:06:26)
- AGREED: The target is an installable disk that installs the OS on the controller and compute nodes; additional packages can be downloaded from the internet or a private mirror, as the case may be (rprakash, 16:08:37)
- Still, Chris wants to ensure we determine whether an ISO, as discussed, is feasible (rprakash, 16:09:19)
Meeting ended at 16:10:57 UTC
(full logs).
Action items
- Peter Bandzi takes on the task of leading the discussion to converge on the reference target installation of OPNFV so that we can deploy to it next Thursday
People present (lines said)
- dneary (61)
- rprakash (26)
- collabot (14)
- bryan_att (9)
- frankbrockners (6)
- fdegir (4)
- arnaud_orange (2)
- malla (1)
- ChrisPriceAB (0)
- pbandzi (0)
Generated by MeetBot 0.1.4.