17:03:35 <jamoluhrsen> #startmeeting integration
17:03:35 <odl_meetbot> Meeting started Thu Feb  8 17:03:35 2018 UTC.  The chair is jamoluhrsen. Information about MeetBot at http://ci.openstack.org/meetbot.html.
17:03:35 <odl_meetbot> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:03:35 <odl_meetbot> The meeting name has been set to 'integration'
17:04:40 <jamoluhrsen> #topic builder
17:04:40 <dfarrell07> LuisGomez:
17:06:09 <jamoluhrsen> #undo
17:06:09 <odl_meetbot> Removing item from minutes: <MeetBot.ircmeeting.items.Topic object at 0x2c36790>
17:06:20 <jamoluhrsen> #topic distribution
17:07:21 <jamoluhrsen> #info still some projects not yet added to the Oxygen distribution (8 projects)
17:08:16 <jamoluhrsen> #info the carbon distribution was not building for a while, but it worked today, so all branches seem to be good now
17:09:59 <jamoluhrsen> #info seems that many of the recent infra-related build problems have been overcome with larger VMs (more CPU and RAM)
17:13:23 <jamoluhrsen> #topic builder
17:13:47 <jamoluhrsen> #info some complaints coming in that we shouldn't be changing VMs (sizes, etc.) underneath projects
17:14:24 <zxiiro> #info Nexus disk ran out of inodes and caused downtime twice, on Saturday and again on Monday
17:14:24 <zxiiro> #info We will be performing maintenance on Monday to move to an XFS partition with 4x the amount of inodes
17:14:24 <zxiiro> #info We will be putting logs on a separate partition moving forward so that inode exhaustion in that directory does not affect the rest of the Nexus server in the future
17:14:24 <zxiiro> #info another idea is to tarball older logs to save on inodes (a sketch of this idea follows below)
17:14:24 <zxiiro> #info Related to this, we've been notified by Sonatype about a cross-site scripting (XSS) CVE which could allow code execution, so we expect to update Nexus as well
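A minimal sketch of the "tarball older logs" idea above, assuming a hypothetical /srv/nexus/logs layout and a 30-day cutoff (neither comes from the minutes); not the actual releng tooling:

    #!/usr/bin/env python
    """Compress per-job log directories older than a cutoff so each one
    costs a single inode instead of thousands. Paths and the cutoff are
    assumptions, not the real Nexus layout."""
    import os
    import shutil
    import tarfile
    import time

    LOG_ROOT = "/srv/nexus/logs"   # hypothetical log partition mount point
    CUTOFF_DAYS = 30               # assumed retention window

    def archive_old_log_dirs(root=LOG_ROOT, cutoff_days=CUTOFF_DAYS):
        cutoff = time.time() - cutoff_days * 86400
        for name in os.listdir(root):
            path = os.path.join(root, name)
            if not os.path.isdir(path) or os.path.getmtime(path) > cutoff:
                continue
            archive = path + ".tar.gz"
            # write the tarball first, then remove the directory tree,
            # collapsing many small files into one inode
            with tarfile.open(archive, "w:gz") as tar:
                tar.add(path, arcname=name)
            shutil.rmtree(path)

    if __name__ == "__main__":
        archive_old_log_dirs()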
17:17:11 <zxiiro> #info Confirmed that Vexxhost disk I/O is slow because the disks are backed by a networked Ceph cluster, not the local volume storage we had at Rackspace
17:17:11 <zxiiro> #info Vexxhost cloud network I/O was slow Monday morning due to hardware upgrades. Vexxhost heard our concerns that their cloud is noticeably slower than our previous provider and is looking into installing faster CPUs and faster disks in the cluster.
17:17:11 <zxiiro> #info Unfortunately, because the volume storage is backed by Ceph, when the new disks were added the cluster sync took up a lot of network I/O, causing all of our CSIT jobs on Monday to run on very slow network links.
17:18:13 <shague> jamoluhrsen: what's the link to the carbon int/distr job that you mentioned finally passed?
17:19:00 <jamoluhrsen> shague: 326
17:19:16 <shague> what job? I am having a brain fart
17:19:22 <jamoluhrsen> dist-test
17:19:51 <jamoluhrsen> shague: https://jenkins.opendaylight.org/releng/view/integration/job/integration-distribution-test-carbon/326/
17:20:47 <jamoluhrsen> #info vexxhost still has more migration/swapping to do so we need to find a sane time and warn everyone
17:21:28 <jamoluhrsen> #info dfarrell07 suggests that doing the upgrades/migrations could be when we are all at ONS.
17:22:44 <zxiiro> #info Wednesday we discovered a bug in our orphaned-nodes cleanup script: a race condition where a new node that had been created but not yet connected to Jenkins could be issued a delete command, causing its job to fail immediately as soon as it starts. This explains the seemingly random disappearance of servers.
17:22:44 <zxiiro> #info The orphaned-nodes script is now disabled; a better solution that takes creation time into account will be added in a future update (a sketch of that idea follows below).
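A minimal sketch of a creation-time-aware orphan cleanup as described above; the grace period and the lookup helpers are assumptions standing in for the real OpenStack and Jenkins queries:

    """Never delete a server that is not attached to Jenkins unless it is
    also older than a grace period, so freshly booted nodes that have not
    connected yet are left alone."""
    from datetime import datetime, timedelta, timezone

    GRACE_PERIOD = timedelta(minutes=30)   # assumed value, not from the minutes

    def list_cloud_servers():
        """Hypothetical: return [(server_id, created_at_datetime), ...] from the cloud."""
        return []

    def list_jenkins_nodes():
        """Hypothetical: return the set of node names currently attached to Jenkins."""
        return set()

    def delete_server(server_id):
        """Hypothetical: issue the cloud delete call."""
        print("would delete", server_id)

    def cleanup_orphans():
        attached = list_jenkins_nodes()
        now = datetime.now(timezone.utc)
        for server_id, created_at in list_cloud_servers():
            if server_id in attached:
                continue                      # still doing real work
            if now - created_at < GRACE_PERIOD:
                continue                      # probably still booting/connecting -- skip
            delete_server(server_id)          # old and unattached: a true orphan

    if __name__ == "__main__":
        cleanup_orphans()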
17:26:27 <zxiiro> #link Changed default flavor to give projects more CPU (but less RAM) https://git.opendaylight.org/gerrit/67871
17:26:27 <zxiiro> #info All validate-autorelease jobs were accidentally broken on Friday and releng was not aware until Tuesday. This should never have been possible in the first place, as this job should not be overridable; we've hardcoded it since.
17:26:27 <zxiiro> #info Some project merge jobs needed to be adjusted (and maybe more; please let us know)
17:27:52 <jamoluhrsen> #info still some projects are failing because of too-small build VMs, but we need to fix those one at a time
17:28:44 <jamoluhrsen> #topic release
17:29:15 <jamoluhrsen> #info carbon SR3 is looking better. need signoff on the failures
17:29:59 <jamoluhrsen> #link https://docs.google.com/spreadsheets/d/1VcB12FBiFV4GAEHZSspHBNxKI_9XugJp-6Qbbw20Omk/edit#gid=40307633 <-- csit failures for SR3
17:30:11 <jamoluhrsen> #action LuisGomez jamoluhrsen  to check as many jobs as possible
17:30:51 <jamoluhrsen> #info the TSC already voted that if we get all CSIT signoffs for SR3 we can release
17:31:30 <jamoluhrsen> #info no good nitrogen autorelease for a week (probably infra-related), but we want to freeze nitrogen this week and release next week
17:31:43 <jamoluhrsen> #info nitrogen is top priority for us
17:32:38 <jamoluhrsen> #info oxygen is still struggling with 8 projects not in the distribution yet. 3 don't even have a yangtools version bump patch that's passing
17:33:33 <jamoluhrsen> #info eman, snmp4sdn and vtn are the projects that don't have a passing yangtools version bump
17:34:19 <klou> #link https://git.opendaylight.org/gerrit/#/c/66633/ <- version bump patch for vtn
17:37:22 <jamoluhrsen> #info gbp, faas, jsonrpc, nic and unimgr are the projects that have done the yangtools bump (via tom) but are not in the distro yet; nic and unimgr don't have any activity toward that yet
17:37:58 <jamoluhrsen> #info still projects missing m2,m3,m4 statuses....
17:38:14 <jamoluhrsen> #topic packaging
17:38:44 <jamoluhrsen> #info still lots of activity around test coverage
17:39:04 <jamoluhrsen> #info found a legit ssh bug in odlparent because of these newly added tests
17:39:42 <jamoluhrsen> #info new puppet fixes/features coming in
17:40:03 <jamoluhrsen> #info some concern about the current email discussion about .cfg vs .xml files for odl configurations
17:40:41 <jamoluhrsen> #info the .deb pipeline was stale for a while; we requested some help and some people have stepped up
17:43:39 <jamoluhrsen> #info the #1 problem with .deb packaging is that the current pipeline doesn't work at all.
17:43:54 <jamoluhrsen> #info once the deb pipeline works, we want to turn on CI/CD for it
17:44:22 <jamoluhrsen> #info we have some test jobs that we use against the rpm packaging that we should easily be able to convert over to the deb side
17:46:43 <jamoluhrsen> zxiiro: can you link the page here for how to use the sandbox/jenkins ?
17:47:06 <zxiiro> #link http://docs.opendaylight.org/en/latest/submodules/releng/builder/docs/jenkins.html#jenkins-sandbox
17:48:09 <jamoluhrsen> dirty feet
17:48:19 <jamoluhrsen> wet hands
17:49:00 <mardim> mardim hello
17:49:09 <jamoluhrsen> hey
17:58:08 <jamoluhrsen> #topic misc
17:58:49 <jamoluhrsen> #info venkat wonders how we can test an image locally after we have made builder changes
17:58:56 <jamoluhrsen> #info zxiiro says we cannot do that easily
18:00:41 <zxiiro> #link https://github.com/opendaylight/releng-builder/blob/master/packer/vars/cloud-env.json.example
18:00:50 <jamoluhrsen> #info zxiiro says if we have a local openstack cloud we should be able to replicate these images with some basic steps (see link above)
18:01:44 <zxiiro> #link https://github.com/opendaylight/releng-builder/tree/master/packer
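A rough sketch of replicating the images on a local OpenStack cloud by driving packer from Python, per the notes above; the template filename is a placeholder and the vars file is assumed to be your own filled-in copy of cloud-env.json.example:

    """Validate and build a packer image against a private OpenStack cloud.
    Requires packer on PATH; template and vars paths are assumptions."""
    import subprocess

    VAR_FILE = "packer/vars/cloud-env.json"      # your filled-in copy of the .example file
    TEMPLATE = "packer/templates/builder.json"   # placeholder template name

    def build_image(template=TEMPLATE, var_file=VAR_FILE):
        # validate first, then build; both are standard packer subcommands
        subprocess.run(["packer", "validate", "-var-file=" + var_file, template], check=True)
        subprocess.run(["packer", "build", "-var-file=" + var_file, template], check=True)

    if __name__ == "__main__":
        build_image()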
18:04:34 <jamoluhrsen> #endmeeting