14:09:21 <Greg_E_> #startmeeting Fuel@OPNFV
14:09:21 <collabot> Meeting started Thu Sep  1 14:09:21 2016 UTC.  The chair is Greg_E_. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:09:21 <collabot> Useful Commands: #action #agreed #help #info #idea #link #topic.
14:09:21 <collabot> The meeting name has been set to 'fuel_opnfv'
14:09:52 <JonasB> #info Jonas Bjurel
14:09:53 <Greg_E_> #info Greg Elkinbard
14:11:08 <mskalski> #info Michal Skalski
14:11:24 <mskalski> NikoHermannsEric: how about '.value' on the end ?
14:11:41 <fzhadaev> #info Fedor Zhadaev
14:13:19 <__szilard_cserey> #info Szilard Cserey
14:13:39 <mskalski> https://build.opnfv.org/ci/view/fuel/job/functest-fuel-baremetal-daily-colorado/48/console
14:14:39 <mskalski> CREATE_FAILED  Error: resources.tosca.relationships.attachesto_1: Failed to attach volume dc1cd8fb-9cf3-42c2-bb3e-6a24ba354003 to server fded0b07-9da2-4d9c-aedc-fefd22ee1efb - Invalid input for field/attribute device. Value: /data. u'/data' does not match '(^/dev/x{0,1}
14:20:51 <__szilard_cserey> python error
14:21:03 <__szilard_cserey> >>> a = unicode('hello') >>> a u'hello' >>> b = 'Hello' >>> b 'Hello' >>> a == b False
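A note on the REPL paste above: the `False` there comes from the case difference (`'hello'` vs `'Hello'`), not from mixing `unicode` and `str`; in Python 2, a unicode and a byte string with identical ASCII content compare equal. A minimal sketch (Python 3 syntax):

```python
# The False in the paste is due to case, not the unicode/str mismatch:
# in Python 2, u'hello' == 'hello' is True for ASCII content.
a = u'hello'
b = 'Hello'
print(a == b)          # False: case differs
print(a == 'hello')    # True: identical content
```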
14:22:59 <__szilard_cserey> is this from Parser ?
14:29:42 <mskalski> __szilard_cserey: yes
14:39:00 <Greg_E_> #info core reviewers please make sure that you do not +2 anything that is a feature or improvement for Colorado at this point. We need to focus on stabilizing the branch prior to the release
14:39:12 <mskalski> __szilard_cserey: but isn't this error about value which does not match pattern? /data vs ^/dev/x{0,1}
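mskalski's reading matches the Heat error above: the volume attach fails because `/data` does not satisfy the device-name pattern. The pattern is truncated in the log, but even its visible prefix shows the mismatch:

```python
import re

# Visible fragment of the device-name validation pattern from the
# CREATE_FAILED error above (the full pattern is truncated in the log).
pattern = r'^/dev/x{0,1}'
print(bool(re.match(pattern, '/dev/vdb')))  # True: a /dev/... path matches
print(bool(re.match(pattern, '/data')))     # False: '/data' is rejected
```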
14:52:47 <fuel> #info David Chou
14:53:43 <mskalski> fix for above error https://gerrit.opnfv.org/gerrit/#/c/20149/
14:55:39 <NikoHermannsEric> mskalski: mhh not working
14:55:50 <NikoHermannsEric> condition: "settings:opendaylight.metadata.enable_l3_odl.value == false"
14:55:55 <NikoHermannsEric> no change at all
14:57:21 <mskalski> NikoHermannsEric: is it part of a larger piece of code? By not working do you mean that some field in a different plugin is not enabled/disabled for choosing?
14:58:08 <NikoHermannsEric> it is a new restriction for enable_bgpvpn
14:58:08 <NikoHermannsEric> - condition: "settings:opendaylight.metadata.enable_l3_odl.value == false"
14:58:08 <NikoHermannsEric> strict: false
14:58:08 <NikoHermannsEric> message: "ODL to manage L3 traffic enabled."
14:58:19 <NikoHermannsEric> I want to have odl controlling l3
14:59:06 <NikoHermannsEric> ok but do not care
14:59:13 <NikoHermannsEric> it is not so relevant
14:59:42 <NikoHermannsEric> thing what more concerns me is that I do not get rid of ovs bridge br-ex
15:00:03 <JonasB> https://gerrit.opnfv.org/gerrit/#/c/19555/
15:00:17 <NikoHermannsEric> somehow it is always created although I am removing it from network_scheme
15:05:15 <NikoHermannsEric> mskalski: ^
15:05:40 <NikoHermannsEric> thing what more concerns me is that I do not get rid of ovs bridge br-ex
15:05:55 <mskalski> NikoHermannsEric: I need to go now, but I can take a look later at this issue with br-ex, it is about your change for the odl plugin to switch to boron, right?
16:09:53 <NikoHermannsEric> yes
16:10:02 <NikoHermannsEric> mskalski: I will send you a mail tomorrow
16:10:09 <NikoHermannsEric> it is not that urgent, thanks
16:34:35 <fdegir> aricg: about kcmfornfv cloning
16:34:59 <fdegir> aricg: we had a look at it and it seems the gerrit is the reason
16:35:19 <fdegir> aricg: no matter how fast our connection is, it is slow to clone the repo
16:35:46 <fdegir> DanSmithEricsson: ^
16:37:03 <fdegir> aricg: starts at 80-100 KiB/s
16:37:10 <fdegir> aricg: and goes to about 300
16:37:34 <fdegir> aricg: but even with 300, it'd take about an hour to clone the repo which is too slow
16:43:15 <DanSmithEricsson> - maybe a question to Aric/LF - have we thought about the parallel scaling that LF needs on the repo servers, etc in relation to how we scale CI/CD and jobs?  As we add more scenarios, two branch mode, etc, load increases on the gerrit boxes..  for me, i tried to clone a bunch of different repos and i only get between 75-120 right now (3 different sites, all with 36 Mbps or greater connections, testing fine).
17:04:51 <jmorgan1> fdegir: why not have a local mirror? I think you can clean out some of the object files in a git repo to make cloning faster
17:05:38 <DanSmithEricsson> its an idea we have toyed with
17:05:45 <DanSmithEricsson> some impacts (we would have to fix):
17:06:04 <DanSmithEricsson> we would have to make the deploy/builder be able to switch and point to "latest"
17:06:14 <jmorgan1> i didn't see where the jenkins slave was, but a local mirror and pruning the repo might help
17:06:23 <DanSmithEricsson> we would have to automate something (a push receiver from gerrit) to ensure local copies of the entire repo would be there
17:06:33 <DanSmithEricsson> but 100% doable
17:06:41 <DanSmithEricsson> we have talked about this for upstream repos as well
17:06:44 <DanSmithEricsson> when doing builds
17:07:08 <DanSmithEricsson> i would also wonder about a gerrit repo that is so big it takes an hour even at 300K
17:07:17 <DanSmithEricsson> something doesn't seem right there
17:07:34 <jmorgan1> it's a clone of the RT kernel which is 1.1 GB is why
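A quick back-of-the-envelope check of the numbers above (a ~1.1 GB repo at ~300 KiB/s; the GiB/KiB interpretation is an assumption) confirms the "about an hour" estimate:

```python
# Rough sanity check of "about an hour to clone" for a ~1.1 GB repo at 300 KiB/s.
# Whether the chat means GB or GiB is an assumption; either way it is ~1 hour.
repo_bytes = 1.1 * 1024**3        # ~1.1 GiB
rate_bytes_per_s = 300 * 1024     # 300 KiB/s
minutes = repo_bytes / rate_bytes_per_s / 60
print(round(minutes))             # roughly 64 minutes
```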
17:07:51 <DanSmithEricsson> and we are carrying that binary in the OPNFV repo?
17:07:54 <jmorgan1> for kvm4nfv case
17:08:00 <DanSmithEricsson> that (to my understanding) should be sourced from upstream somewhere
17:08:17 <DanSmithEricsson> otherwise OPNFV repo is "hosting binaries" - something i thought we wanted to avoid
17:08:30 <DanSmithEricsson> and thus we don't load our gerrit, so when the build is called that kernel is pulled from source
17:08:35 <DanSmithEricsson> not a gateway (us) repo
17:08:41 <DanSmithEricsson> i could be wrong :)
17:12:04 <DanSmithEricsson> cause - i mean, if we can start to store stuff like that in repos - then it makes things lots easier :)
17:21:45 <DanSmithEricsson> a cool way (rather than setting up clones in each DC) would be to do a CPAN/fastestmirror style approach
17:22:00 <DanSmithEricsson> and have DCs provision a VM that holds a mirror (since we all have ingress points)
17:22:17 <DanSmithEricsson> and then for nodes from inside - (local to that mirror) it would be for sure first hop
17:22:40 <DanSmithEricsson> and then as well - i think we would find (for those off N America) you would have faster luck off peers sometimes
17:22:45 <DanSmithEricsson> its an idea anyway
17:55:54 <DanSmithEricsson> Hey jack - what about something like this?
17:55:55 <DanSmithEricsson> https://docs.google.com/file/d/0B5_-qRW_6quzaVVRdE9jZTB0TFU/edit?hl=en&forcehl=1
17:56:06 <DanSmithEricsson> sorry jmorgan1
17:57:12 <DanSmithEricsson> and then have a "job" that makes a copy of the repo
17:57:38 <DanSmithEricsson> since we are essentially in jenkins distro mode - there should be a way (through jenkins) to do this at slave level perhaps?
17:59:44 <DanSmithEricsson> here is an interesting thread on our issue https://gitlab.com/gitlab-org/gitlab-ee/issues/76 ... anyway - sounds like a discussion topic for the INFRA WG - 'cause we can go a couple of ways
20:33:24 <bramwelt> We're looking into mirroring Gerrit to at least GCe.
20:33:54 <bramwelt> I should say in the same environment  Jenkins is hosted in.
20:35:02 <bramwelt> fdegir: DanSmithEricsson: This is a bandwidth issue with the datacenter hosting the OPNFV Gerrit instance. Things won't be resolved until we reboot a router on Saturday.
20:35:25 <DanSmithEricsson> very cool
20:35:49 <DanSmithEricsson> what sort of speed do you think we "should be " getting in general?
20:42:44 <bramwelt> DanSmithEricsson: Whatever the previous one was. :)
20:46:22 <DanSmithEricsson> heheh
20:46:24 <DanSmithEricsson> ok
20:46:38 <DanSmithEricsson> you work for apple?
21:02:08 <bramwelt> hehe
21:03:41 <bramwelt> I don't know what the current bandwidth was/is, I just know it's going to either meet or exceed that since we won't be bottlenecked by the datacenter.
21:04:20 <jmorgan1> DanSmithEricsson: went to run some errand, sounds like a good discussion topic
21:05:50 <DanSmithEricsson> trevor - cool.. jack - it's a neat topic, yup - for the immediate term it seems a HW issue, but longer term something to think about for sure
09:55:08 <mskalski> fdegir: Hi, for the sfc scenario in the C release we need to use the not yet released odl Boron; since currently there is no persistent tarball we could refer to, I thought about uploading it to artifacts. I see how it is done for logs, but I wonder if this tarball will not be removed by some cleanup job?
09:56:12 <fdegir> mskalski: there was a flaw in automatic cleanup
09:57:18 <fdegir> mskalski: it should remove artifacts older than 10 days
09:57:38 <fdegir> mskalski: but leave the latest one no matter how old it is
09:58:04 <fdegir> mskalski: this part doesn't work as far as I know so it might get removed after 10 days
09:58:26 <fdegir> mskalski: if and only if the auto-removal is enabled for fuel artifacts
09:58:29 <fdegir> aricg: ^
09:58:52 <fdegir> mskalski: we can upload it in any case while aricg looks into it
09:59:38 <mskalski> fdegir: ok, will the fuel/colorado directory be ok for this?
10:00:03 <fdegir> mskalski: we can put it under both fuel/ and fuel/colorado/
10:00:15 <fdegir> mskalski: so you can use it for both branches if needed
10:00:53 <fdegir> mskalski: up to you
10:01:55 <mskalski> fdegir: it will be only needed for colorado release
10:02:07 <fdegir> mskalski: ok
10:02:25 <fdegir> mskalski: just send the location of the file and the name it should have on artifact repo
10:02:29 <fdegir> mskalski: and I upload it to there
10:04:37 <mskalski> fdegir: ah I just started to download it to pod2, and wanted to use upload code similar to the one used for log uploading, is that ok or does it require additional permissions?
10:05:15 <fdegir> mskalski: pod2 jumphost has the credentials so you can upload it
10:05:17 <fdegir> mskalski: just do
10:05:48 <fdegir> mskalski: gsutil cp <file to upload> gs://artifacts.opnfv.org/fuel/colorado/<filename>
10:05:54 <fdegir> as jenkins user
10:06:09 <mskalski> fdegir: ok will do
10:07:24 <fdegir> mskalski: it will take some time for the file to appear via http so you can verify the upload by
10:07:33 <fdegir> mskalski: gsutil ls gs://artifacts.opnfv.org/fuel/colorado/<filename>
10:17:41 <mskalski> fdegir: ok it's there http://artifacts.opnfv.org/fuel/colorado/distribution-karaf-0.5.0-Boron-RC2.tar.gz thanks for instructions
10:18:22 <fdegir> np
13:26:12 <veena> Hi, I have deployed an env enabling DPDK, and allocated HugePages for Nova and DPDK. Instance creation fails with "No valid host found error". I found a bug - https://bugs.launchpad.net/fuel/+bug/1575091, but it is mentioned as fuel 10.0 in the bug. I'm using fuel 9.0. Does 9.0 support dpdk?
13:26:39 <veena> Has anybody faced similar issue when compute nodes are configured for DPDK?
14:01:28 <aricg> mskalski: The cleanup job does not delve into directories, so if we create a directory fuel/foo/ or fuel/colorado/foo/ your artifact will persist
14:20:21 <veena> Hello all, any pointers to fix my DPDK issue on fuel 9 will be very helpful
14:44:34 <AlexAvadanii> veena: Hi! it looks like fuel 9.0 is using nova 13.0, which does not contain the fix you linked
14:50:55 <veena> AlexAvadanii, I saw that the fix is committed 7 weeks ago and the fuel iso I'm using is - [root@fuel ~]# cat /etc/fuel_build_id
14:50:56 <veena> OPNFV_FUEL_2016-08-08:00:04_cf58d9d488fde91a5177ae01363844da8ec8441c
14:51:11 <veena> AlexAvadanii, should I take the latest fuel iso?
15:09:23 <veena> AlexAvadanii, https://blueprints.launchpad.net/fuel/+spec/support-hugepages  is this different for fuel anf opnfv-fuel?
15:22:06 <AlexAvadanii> veena: it won't help, the nova we use comes from mirror.fuel-infra.org, which is still nova 13.0
15:22:32 <AlexAvadanii> veena: about the blueprint, that applies to Fuel@OPNFV too, but that does not imply the nova is patched
15:22:42 <AlexAvadanii> as far as I can tell, Fuel 9 will not get that fix
15:23:40 <AlexAvadanii> the best way forward I see is to add a patch for nova 13.0 via puppet during Fuel deployment
15:24:28 <AlexAvadanii> like we patch nova in Armband for direct kernel boot support, for example https://git.opnfv.org/cgit/armband/tree/patches/fuel-library/0010-nova-Fix-inject-for-direct-boot-with-part-table.patch
15:26:38 <veena> AlexAvadanii, Okay. I think we should have a patch for it
15:27:51 <veena> AlexAvadanii, by looking at https://openstack.nimeyo.com/81974/openstack-dev-fuel-fuel-9-0-is-released, fuel9.0 claims that it supports DPDK, should we raise a bug?
15:35:26 <AlexAvadanii> veena: well, Fuel does support DPDK, and Fuel@OPNFV actually tests DPDK, but I think it's without hugepage option
15:35:59 <AlexAvadanii> veena: not sure about raising a bug, if it affects x86 too, we could open one for Fuel@OPNFV, if it only affects arm targets, we should treat it in armband
15:38:02 <veena> AlexAvadanii, the support is not there in x86 also.
15:38:26 <veena> AlexAvadanii, is armband making any effort to apply patches for supporting DPDK?
15:56:33 <AlexAvadanii> veena: yes, we build dpdk 16.0x (currently 16.04, will be 16.07 in a few days), together with ovs-dpdk, but we ship only the generic version, no vendor specific dpdk packages yet
15:59:21 <AlexAvadanii> veena: I have to go now, will be back later, please send me an e-mail if you have further questions
16:48:51 <aricg> gerrit port 29418 will be down for 5 minutes while we switch to haproxy. email about this sent to infra-steering
16:49:39 <aricg> after the change, lf_pod* will be able to connect to http and 29418 from the public ip
17:05:01 <mskalski> aricg: Hi, and when I put this in fuel/colorado/file.tgz will it be purged? should I move it to fuel/colorado/odl ?
17:06:15 <aricg> mskalski: yes, please put it in a sub directory, dependencies or something like that.
17:25:40 <mskalski> aricg: this should be safe artifacts.opnfv.org/fuel/colorado/vendor/distribution-karaf-0.5.0-Boron-RC2.tar.gz  ?
17:26:52 <aricg> mskalski: yep
17:27:03 <mskalski> aricg: ok thanks
07:15:49 <fdegir> mskalski: ping
07:19:35 <mskalski> fdegir: Hi Fatih
07:26:57 <fdegir> mskalski: hi Michal
07:27:01 <fdegir> mskalski: have a quick question
07:27:20 <fdegir> mskalski: should we have some kind of priority between jobs for master and colorado?
07:27:29 <fdegir> mskalski: to reduce the queueing?
07:55:16 <mskalski> fdegir: agree that we now care more about colorado results than master, and it would be great to have those results faster, but when we change something it first goes to master and we probably should verify it there, and if we see improvement then move it to colorado
07:55:29 <mskalski> fdegir: maybe it is a stupid idea, but what if we disable master branch jobs completely and run them manually, with highest priority, when we merge something new? I am not even sure if that is possible..
07:55:48 <mskalski> fdegir: but I think we could try your approach, we may skip the phase where we wait for master results and rely on committers' reviews
07:56:26 <fdegir> mskalski: we can run master twice a week
07:56:51 <fdegir> mskalski: and leave the rest for colorado
07:57:35 <fdegir> mskalski: if it still doesn't help, we can reduce to once a week
07:58:11 <mskalski> fdegir: +1
07:58:26 <fdegir> mskalski: ok, will send a patch
08:43:02 <mskalski> AlexAvadanii: Hi are you here?
08:44:01 <pma> All intel pods/virtual are offline now, Are they in maintenance mode?
08:44:11 <mskalski> pma: yes
08:44:19 <pma> thanx
08:45:01 <mskalski> pma: http://lists.opnfv.org/pipermail/opnfv-tech-discuss/2016-September/012401.html
08:46:08 <pma> mskalski: thanx, I missed that
09:35:58 <AlexAvadanii> mskalski: Hi!
09:36:20 <AlexAvadanii> sorry, I forgot to check that mirror blacklist, I had it on a todo and completely forgot about it
10:30:34 <z0d> is it just me, or references should be links here:  http://artifacts.opnfv.org/fuel/review/19555/installationprocedure/installation.instruction.html
11:07:56 <mskalski> z0d: agree that it would be easier to follow the docs if the references were links; also not sure we need to repeat the url in a reference when it is already a url, the one at the bottom of the page, not in the text
11:10:04 <z0d> mskalski: they could be footnotes, so you can still click on them and they wouldn't take much space
11:18:18 <s_berg> Hi.
11:18:59 <mskalski> s_berg: Hello! how was your vacation?
11:19:01 <z0d> hey Stefan
11:19:27 <s_berg> mskalski: Oh, it was great - but hard to adjust back to "normal" life. :)
11:19:42 <s_berg> Hi z0d!
11:21:51 <s_berg> A question, about something that is benign: my controller cyclically (every five minutes) logs complaints about:
11:22:02 <s_berg> 11:15:01 node-2 liberasurecode[29928]: liberasurecode_backend_open: dynamic linking error libshss.so.1: cannot open shared object file: No such file or directory
11:22:23 <s_berg> Are we missing a package somewhere?
11:22:39 <mskalski> hmm is that from swift?
11:22:45 <s_berg> Yup.
11:25:12 <s_berg> Just wondered if someone ran into this before or if I should dig. :)
11:26:12 <s_berg> https://ask.openstack.org/en/question/93267/unable-to-start-swift-proxy/
11:26:39 <mskalski> here is a very old bug https://bugs.launchpad.net/fuel/+bug/1488575
11:29:25 <mskalski> and here more recent https://bugs.launchpad.net/fuel/+bug/1561608
11:32:45 <s_berg> mskalski: I'll read up on those, thanks!
11:49:06 <AlexAvadanii> s_berg, mskalski: erasurecode lib package was heavily reworked in fuel9, maybe it's just a missing dependency?
11:54:21 <mskalski> do I understand correctly that this is related to ceilometer middleware in swift proxy?
12:02:09 <s_berg> AlexAvadanii, mskalski: I think this may be related to a missing but proprietary backend: http://www.ntt.co.jp/news2015/1505e/150518a.html
12:02:36 <s_berg> I'll look further. Can't see any obvious backend configuration in /etc.
12:21:36 <AlexAvadanii> mskalski: I have a question about fuel-mirror: in build/config.mk we set the FUEL_MIRROR commit hash like https://git.opnfv.org/cgit/fuel/tree/build/config.mk#n24, but we override it just for the build-time repo operations in https://git.opnfv.org/cgit/fuel/tree/build/f_isoroot/f_repobuild/config.mk#n13, right?
12:22:52 <AlexAvadanii> the way Armband patch system works now is take the commit IDs from build/config.mk, apply some patches on top of them, and then pass the new HEAD to the build system; overriding both these sets (we use a hard "=", long story about var inheritance in our build system ...)
12:23:15 <mskalski> AlexAvadanii: that is true
12:23:27 <AlexAvadanii> is there a known problem in using the same commit ID?
12:26:10 <mskalski> AlexAvadanii: I did not test this, only thing is that for repobuild it points to the master branch, not sure if there were any changes in the dependencies in the meantime
12:26:32 <mskalski> which could break installation of the fuel master
12:27:03 <AlexAvadanii> mskalski: I can test this in Armband :)
12:27:37 <mskalski> I knew you would say that :) let us know how it behaves
12:35:27 <mskalski> Guys any of you tried to install fuel@opnfv from usb? Apparently there are issues with that, but I don't have hardware to test this now
12:36:36 <mskalski> hmm but maybe it will be possible to simulate this in kvm
14:08:18 <pma> We've an issue with ovs plugin (compiled on haswell+ processors w/ AVX2 flag gives "illegal instruction" on processors/envs w/o AVX2)
14:08:30 <pma> & following options to resolve it:
14:08:40 <pma> - exclude ericsson-virtual1 from fuel-virtual group
14:08:47 <pma> - build plugin on processors w/o AVX2 flag
14:08:55 <pma> - add -mno-avx2 flag directly to plugin
14:09:02 <pma> - build 2 versions of plugin, then check exec-platform out and choose proper version to run
14:10:45 <pma> What do you think, which of them looks more "straight"?
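Option 4 above (ship two plugin builds and pick one at run time) could hinge on a CPU-flag check like this sketch; it assumes an x86 Linux target where `/proc/cpuinfo` lists `avx2` in its flags line, and the function name is hypothetical:

```python
def has_avx2(cpuinfo_text):
    """Return True if an x86 Linux /proc/cpuinfo dump advertises AVX2.
    A wrapper script could read /proc/cpuinfo, call this, and then
    choose between the AVX2 build and the generic build of the plugin."""
    for line in cpuinfo_text.splitlines():
        if line.startswith('flags'):
            # flags line looks like: "flags : fpu vme ... avx avx2 ..."
            return 'avx2' in line.split(':', 1)[1].split()
    return False
```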
17:26:59 <AlexAvadanii> pma: I see nobody is commenting on the above; I personally think removing ericsson-virtual1 is the easiest solution, but in the long term, would it make sense to (re)build OVS (and maybe other plugins) on the target nodes during deployment/plugin install?
18:16:36 <__szilard_cserey> #opnfv-meeting
14:32:43 * DanSmithEricsson wave
14:33:53 <z0d> hey Dan
14:34:27 <DanSmithEricsson> Hey Peter.. how are you?
14:35:07 <z0d> doing ok. you?
14:35:26 <Guest66071> Hi
14:35:41 <fzhadaev> Hi
14:35:48 <z0d> hi Jonas
14:35:52 <z0d> hi Fedor
14:36:33 <dklenov> hi folks
14:38:33 <collabot> Greg_E_: Error: Can't start another meeting, one is in progress.  Use #endmeeting first.
14:38:48 <Greg_E_> #info Greg Elkinbard
14:39:32 <mskalski> Greg_E_: meeting must be open from last time, maybe we should close that one and start new?
14:39:45 <Greg_E_> #endmeeting