13:00:12 <fdegir> #startmeeting Cross Community CI
13:00:12 <collabot> Meeting started Wed Oct  4 13:00:12 2017 UTC.  The chair is fdegir. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:12 <collabot> Useful Commands: #action #agreed #help #info #idea #link #topic.
13:00:12 <collabot> The meeting name has been set to 'cross_community_ci'
13:00:22 <fdegir> anyone for XCI meeting?
13:00:59 <hwoarang> im here
13:01:10 <mbuil> fdegir: are there cookies?
13:01:16 <fdegir> in fact I do
13:01:18 <epalper> I'm in :)
13:01:23 <fdegir> we have kanelbulle day in Sweden
13:01:39 <mbuil> fdegir: ooooh!!!
13:01:55 <mbuil> nice
13:02:00 <mbuil> #info Manuel Buil
13:02:02 <fdegir> if you ever visit here (again), I'll buy you some
13:02:30 <fdegir> please info in your names
13:02:52 <fdegir> while you do that, here's the link to the agenda, which is very similar to last week's
13:03:01 <fdegir> #link https://etherpad.opnfv.org/p/xci-meetings
13:03:07 <epalper> #info Periyasamy Palanisamy
13:03:07 <OPNFV-Gerrit-Bot> Markos Chandras proposed releng-xci: xci: Bump bifrost SHA  https://gerrit.opnfv.org/gerrit/44021
13:03:17 <Taseer> #info Taseer
13:03:28 <hwoarang> #info Markos Chandras
13:03:34 <fdegir> first topic is
13:03:37 <fdegir> #topic Multi-distro support
13:03:59 <fdegir> #info Ubuntu and OpenSUSE work fine and work with CentOS is progressing
13:04:08 <fdegir> anything to add hwoarang ?
13:04:10 <hwoarang> nope
13:04:16 <durschatz> #info Dave Urschatz
13:04:47 <fdegir> #topic CI for XCI
13:05:00 <tinatsou> #info Tina Tsou
13:05:04 <fdegir> #info xci-verify got some cool stuff thanks to hwoarang
13:05:05 <ttallgren> #info Tapio Tallgren
13:05:22 <fdegir> #info The jobs run in clean VMs to ensure we don't get strange results
13:05:22 <David_Orange> #info David Blaisonneau
13:05:33 <fdegir> #info Ubuntu and OpenSUSE jobs are voting
13:05:46 <fdegir> ttallgren: you want to say something about Centos?
13:05:56 <ttallgren> Yes
13:06:00 <fdegir> please go ahead
13:06:00 <hwoarang> one patch needs review https://gerrit.opnfv.org/gerrit/#/c/44049/ which basically uses pre-built OS for the clean VM instead of building new images on every run
13:06:07 <hwoarang> makes the job 10 minutes faster ;p
13:06:33 <ttallgren> I got to the network configuration, made a trivial mistake, and spent a lot of time tracking it down
13:06:59 <fdegir> #info ttallgren is working on network configuration for Centos
13:07:37 <ttallgren> I could not take the network config from Ubuntu or OpenSUSE directly, so I rewrote it from scratch
13:07:37 <fdegir> ttallgren: anything else to add?
13:08:02 <hwoarang> ttallgren: the suse one shouldn't be so different compared to centos
13:08:08 <hwoarang> at least the ifcfg parts
13:08:20 <ttallgren> Just a question: why do we have only one interface for the VMs?
13:08:48 <fdegir> ttallgren: the sandbox is based on upstream spec for "test" deployments
13:09:00 <fdegir> ttallgren: having proper VMs with multiple network interfaces and so on is in the backlog
13:09:24 <ttallgren> Ok. Would be easier to just add interfaces, rather than adding VLAN interfaces
13:09:35 <fdegir> ttallgren: we also need to switch to our own VM creation since the current VMs are created by bifrost as "test" VMs
13:09:47 <fdegir> so multiple things to fix in order to have more proper setup
13:10:16 <timirnich> #info Tim Irnich
13:10:25 <ttallgren> Ok. I have been wondering when the real VMs come up
13:10:45 <hwoarang> i am not sure i follow
13:10:48 <fdegir> ttallgren: right after the release so we don't break it totally
13:10:52 <hwoarang> what are the real VMs?
13:11:17 <ttallgren> hwoarang: fdegir is referring to "test" VMs
13:11:19 <fdegir> hwoarang: the VMs we use now have only 1 interface etc. and the deployment is based on the test spec from osa
13:11:40 <fdegir> hwoarang: and these VMs are created using bifrost test script so they are not as good as they can be
13:11:48 <hwoarang> do you know what's missing?
13:11:53 <hwoarang> apart from the single NIC?
13:12:05 <fdegir> we can't control the spec per VM
13:12:13 <hwoarang> anything else?
13:12:14 <fdegir> they all get same cpu/ram/disk
13:12:29 <fdegir> need to look at what I've done for this part
13:12:35 <hwoarang> ok
13:12:52 <fdegir> ttallgren: so, some things are already known and will be fixed
13:13:07 <fdegir> but until that happens, whatever we need to do has to be based on what we have in the repo
13:13:55 <fdegir> #info A patch is waiting to be reviewed which uses pre-built OS for the clean VM instead of building new images on every run, cutting the time by 10 minutes
13:14:03 <fdegir> #link https://gerrit.opnfv.org/gerrit/#/c/44049/
13:14:08 <fdegir> #topic Testing
13:14:18 <fdegir> #info prepare-functest role has been merged
13:14:23 <fdegir> mbuil: what about run-tests?
13:14:41 <mbuil> fdegir: Regarding functest integration, I have two patches
13:14:50 <mbuil> fdegir: The 1st patch fixes a missing dependency and I think is ready to be reviewed: https://gerrit.opnfv.org/gerrit/#/c/44101/
13:15:00 <fdegir> #info A new playbook is currently being written for running tests; starting with functest
13:15:12 <fdegir> #info The patch fixes a missing dependency and ready for review
13:15:17 <fdegir> #link https://gerrit.opnfv.org/gerrit/#/c/44101/
13:15:41 <mbuil> fdegir: The 2nd patch sets everything up for functest and runs the functest container: https://gerrit.opnfv.org/gerrit/#/c/43635/ However, after discussing with ollivier and hwoarang I think it might be easier to create and run the docker container through the docker CLI instead of using ansible
13:15:55 <ttallgren> hwoarang: The CentOS networking configuration is complicated, since by default it uses NetworkManager and no one likes it. NetworkManager supports the ifcfg scripts and if there is no ifcfg, then it uses the "traditional" way. So I followed this: https://major.io/2017/04/13/openstack-ansible-on-centos-7-with-systemd-networkd/
13:16:14 <fdegir> #info The 2nd patch sets everything up for functest and runs the functest container
13:16:20 <fdegir> #link https://gerrit.opnfv.org/gerrit/#/c/43635/
13:16:39 <mbuil> fdegir: the reason is that the functest container does not upload functest logs and that will make things difficult if the test fails because we might not see the problem
13:17:09 <hwoarang> ttallgren: that's strange because upstream OSA supports regular ifcfg networking on CentOS. if you use systemd-networkd that means CentOS will do things completely differently from the rest of the distros, which will make the role hard to simplify
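The systemd-networkd route from the linked blog post comes down to one `.network` unit per interface. A minimal sketch, assuming a static address on `eth0` (the interface name and addresses are illustrative, and the unit is written to a temp path here instead of `/etc/systemd/network/`):

```shell
# Minimal systemd-networkd unit for a statically addressed interface.
# Written to a scratch dir for illustration; a real node would place it
# in /etc/systemd/network/ and restart systemd-networkd afterwards.
UNIT_DIR="/tmp/xci-networkd-example"
mkdir -p "$UNIT_DIR"
cat > "$UNIT_DIR/10-eth0.network" <<'EOF'
[Match]
Name=eth0

[Network]
Address=192.168.122.10/24
Gateway=192.168.122.1
DNS=8.8.8.8
EOF
cat "$UNIT_DIR/10-eth0.network"
```

On a real deployment the file would go under `/etc/systemd/network/`, followed by `systemctl restart systemd-networkd`.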
13:17:28 <fdegir> #info We will run the testing using docker CLI to ensure we can make what went wrong visible and upload the test logs
13:17:46 <fdegir> thx mbuil - we will talk about sfc in its own topic
13:17:50 <mbuil> fdegir: so for the time being, I suggest using the docker CLI "docker run...". That means the playbook sets everything up for functest and then we run the functest container with the normal CLI. Is that ok from the jjb perspective?
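As a sketch of that direction, the plain CLI invocation might look like the following; the image tag, env vars, and mount paths are assumptions rather than the agreed job definition, so the command is only assembled and echoed, not executed:

```shell
# Assemble the functest container invocation; every value below is
# illustrative, not the final jjb definition.
FUNCTEST_IMAGE="opnfv/functest:latest"
DEPLOY_SCENARIO="os-odl-sfc-noha"
DOCKER_CMD="docker run --rm \
  -e INSTALLER_TYPE=osa \
  -e DEPLOY_SCENARIO=${DEPLOY_SCENARIO} \
  -v ${HOME}/openrc:/home/opnfv/functest/conf/openrc \
  ${FUNCTEST_IMAGE}"
echo "${DOCKER_CMD}"
```

Driving the container from the CLI also makes it easy to volume-mount or `docker cp` the results directory, so the logs survive a failed run.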
13:18:17 <fdegir> mbuil: please fix it in a simple way and we get it into jjb
13:18:29 <mbuil> fdegir: yes, sir :)
13:18:40 <fdegir> mbuil: as I mentioned, this is probably temporary and we need to revisit this after release
13:18:58 <fdegir> #topic Documentation
13:19:09 <fdegir> #info Started moving documentation to docs.opnfv.org
13:19:23 <fdegir> #info User guide is available: docs.opnfv.org/en/latest/submodules/releng-xci/docs/xci-user-guide.html
13:19:42 <hwoarang> one question
13:19:53 <fdegir> #info Work with Overview, Developer Guide, and XCI way of working is in progress
13:19:55 <hwoarang> how does this documentation relate to the doc patches you submit to releng-xci ?
13:20:10 <fdegir> hwoarang: this documentation is generated from those patches
13:20:14 <hwoarang> ok
13:20:27 <fdegir> hwoarang: but the docs are available on docs.opnfv.org post-merge
13:20:41 <hwoarang> is there a way to generate such documentation locally to check how our patch would look like?
13:20:52 <fdegir> hwoarang: yes, there is
13:20:58 <fdegir> let me put the link
13:21:02 <hwoarang> ok thank you
13:21:36 <fdegir> #info You can test your doc patches locally and the guide below explains how it can be done
13:21:38 <fdegir> #link http://docs.opnfv.org/en/latest/how-to-use-docs/include-documentation.html#testing-build-documentation-locally
13:22:16 <fdegir> #info Also, jenkins posts comments to the change pointing to the generated documents, such as "Patch Set 2:
13:22:16 <fdegir> Document link(s):
13:22:16 <fdegir> http://artifacts.opnfv.org/releng-xci/review/43955/index.html"
13:22:51 <fdegir> but please note that the toolchain that generates the documents for non-merged patches is different from the one that generates them after merge
13:23:15 <fdegir> so expect small differences and it is better to test documents locally
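A rough sketch of the local build the linked guide describes; the paths follow the usual OPNFV docs layout but are assumptions, and the snippet degrades to a hint when sphinx is not installed:

```shell
# Build the docs tree locally to preview a patch before pushing.
# DOCS_SRC/DOCS_OUT are illustrative; see the linked guide for the
# project-specific layout and tooling.
DOCS_SRC="docs"
DOCS_OUT="docs/_build/html"
if command -v sphinx-build >/dev/null 2>&1 && [ -d "$DOCS_SRC" ]; then
  sphinx-build -b html "$DOCS_SRC" "$DOCS_OUT"
else
  echo "skipping: need sphinx-build (pip install sphinx) and a $DOCS_SRC tree"
fi
```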
13:23:28 <fdegir> #topic Bifrost jobs status
13:23:38 <mbuil> fdegir: I was planning to add a guide in SFC about how to deploy the SFC scenario with XCI but it might be worth trying to create one general "how to run scenarios" guide in xci and then I could just write the SFC details and refer to that releng-xci wiki page
13:23:58 <fdegir> mbuil: that will be part of developer and way of working guide
13:24:05 <fdegir> can the person who added the bifrost topic talk about it?
13:24:08 <hwoarang> yes
13:24:40 <hwoarang> so right now bifrost jobs are queuing but nothing executes them
13:24:41 <fdegir> hwoarang: please go ahead
13:24:41 <hwoarang> so
13:24:47 <hwoarang> 1) do we still want them?
13:25:11 <hwoarang> that's only for testing upstream bifrost patches. internal bifrost changes are being tested in the regular XCI jobs
13:25:40 <hwoarang> so the opnfv-bifrost-* jobs can be deleted right now. I want to know what happens to the openstack-bifrost-* ones
13:25:55 <fdegir> this depends on the hw availability so if we have enough machines, I think we should continue running openstack-bifrost-* ones
13:25:59 <hwoarang> we can resume them, and use the clean VM approach.
13:26:13 <fdegir> as far as I know the Bifrost PTL is looking into the feedback we give - when we give it
13:26:19 <hwoarang> ie just the first bit of the XCI job
13:26:49 <hwoarang> ok since we still need them I will work on enabling them again
13:27:23 <fdegir> #action hwoarang to remove opnfv-bifrost-* jobs, update openstack-bifrost-* jobs to use clean-VM approach and enable them when we have the machines
13:27:44 <hwoarang> i'm done
13:27:48 <fdegir> thx
13:28:01 <fdegir> #topic Release Readiness
13:28:09 <fdegir> #info Release has been delayed by 2 weeks so skipping the topic
13:28:56 <fdegir> #topic Scenario/Feature Status: os-odl-nofeature
13:29:13 <fdegir> epalper: I think the status is the same as last week - i.e. the change is still pending review?
13:29:26 <epalper> I've now moved the code changes related to the os-odl-nofeature scenario from releng-xci into the ansible-opendaylight repo
13:29:38 <epalper> #link https://git.opendaylight.org/gerrit/#/c/63938/
13:29:58 <fdegir> #info epalper moved changes related to the scenario from releng-xci into ansible-opendaylight repo
13:30:03 <epalper> will raise another patch on top of mbuil's releng-xci review for the integration.
13:30:06 <fdegir> epalper: that's cool!
13:30:14 <epalper> #link https://gerrit.opnfv.org/gerrit/#/c/43469/
13:30:19 <fdegir> epalper: but
13:30:31 <fdegir> epalper: I'm not sure if ODL will accept this type of stuff there
13:31:07 <fdegir> epalper: I mean having xci and/or osa things there
13:31:26 <epalper> I just raised the review. Let us wait for their review comments
13:31:40 <epalper> even sfc is from ODL :)
13:32:00 <fdegir> epalper: right - I will raise this as a review comment there since I know their ansible role is a proper/standard one
13:32:16 <fdegir> thx epalper - will come to ovs after sfc
13:32:21 <mbuil> epalper: I am also not sure because the SFC scenario includes non-ODL stuff: openstack, OVS...
13:32:26 <fdegir> #topic Scenario/Feature Status: os-odl-sfc
13:32:41 <fdegir> mbuil: sfc simply works, doesn't it?
13:33:07 <mbuil> fdegir: almost. I was having an issue with the credentials because of a problem with user_variables.yml
13:33:07 <fdegir> mbuil: anything else to mention?
13:33:16 <mbuil> fdegir: The first sfc scenario patch is merged: https://gerrit.opnfv.org/gerrit/#/c/43165/
13:33:28 <mbuil> fdegir: I have a second one in the oven. I am deploying with it right now to test if it works (after the user_variables.yml discussion we had): https://gerrit.opnfv.org/gerrit/#/c/44031/
13:33:36 <fdegir> #info First sfc scenario patch is merged: https://gerrit.opnfv.org/gerrit/#/c/43165/
13:34:12 <fdegir> #info Work with another patch is going on: https://gerrit.opnfv.org/gerrit/#/c/43165/
13:34:28 <mbuil> fdegir: you wrote the same link twice
13:34:39 <fdegir> #undo
13:34:39 <collabot> Removing item from minutes: <MeetBot.ircmeeting.items.Info object at 0x33a7690>
13:34:50 <fdegir> #info Work with another patch is going on: https://gerrit.opnfv.org/gerrit/#/c/44031/
13:34:52 <fdegir> thx mbuil
13:35:35 <fdegir> mbuil: I'm not adding user_variables.yml details as of yet until you verify the approach
13:35:49 <mbuil> fdegir: ok
13:36:20 <fdegir> #topic Scenario/Feature Status: os-nosdn-ovs
13:36:26 <fdegir> epalper: how about this scenario?
13:36:45 <epalper> raised a review in releng-xci: https://gerrit.opnfv.org/gerrit/#/c/43447/ and testing it now.
13:37:09 <epalper> might need to find the right repo to move this scenario to
13:37:24 <fdegir> #info A change has been sent for review and currently being tested locally: https://gerrit.opnfv.org/gerrit/#/c/43447/
13:37:43 <fdegir> epalper: until we find that, we can keep it next to os-odl-nofeature in releng-xci
13:37:59 <epalper> ok, sure
13:38:19 <fdegir> we can perhaps talk to the ovs team once we have something and they can take it if they want to be part of XCI
13:38:25 <fdegir> thx epalper
13:38:46 <fdegir> #topic Scenario/Feature Status: k8s-nosdn-nofeature
13:39:06 <fdegir> Stephen is not with us today due to Chinese public holidays so here is a short update I got from him
13:39:32 <fdegir> #info s3wong is evaluating kubespray to see what needs to be done to setup a simple k8s cluster using it
13:39:57 <fdegir> #info He added some details to the etherpad https://etherpad.opnfv.org/p/xci-k8s
13:40:26 <fdegir> #info Patches will be sent once he is back from holiday and done with first steps
13:40:36 <fdegir> David_Orange: anything from your side for kubernetes?
13:40:38 <OPNFV-Gerrit-Bot> Manuel Buil proposed releng-xci: Create the run-tests playbook  https://gerrit.opnfv.org/gerrit/43635
13:41:13 <David_Orange> fdegir: nothing for now, i was waiting for s3wong's input/first tests
13:41:31 <fdegir> David_Orange: right
13:41:52 <David_Orange> fdegir: but i can put it on the todo list, to test kubespray too
13:41:53 <fdegir> David_Orange: his notes give some idea about what he is doing so you can perhaps take a look and add what you think directly there
13:42:14 <fdegir> thx David_Orange
13:42:15 <David_Orange> fdegir: ok, i will
13:42:28 <fdegir> btw kubespray is just one option but a good looking one
13:42:41 <fdegir> if someone else has a better idea, please share it on the etherpad directly
13:42:58 <fdegir> #topic Scenario/Feature Status: Congress
13:43:05 <fdegir> Taseer: your turn
13:43:41 <fdegir> moving to ceph
13:43:45 <Taseer> yes, I am able to setup a congress container, and run through the pip install path
13:44:06 <fdegir> Taseer: good - you made it before I change the topic
13:44:16 <fdegir> #info Taseer is able to setup a congress container, and run through the pip install path
13:44:36 <Taseer> failing on adding the service to keystone
13:44:48 <fdegir> Taseer: ok
13:45:00 <fdegir> Taseer: have you looked at the code bryan_att provided?
13:45:07 <fdegir> Taseer: I suppose he's doing similar things there
13:45:45 <fdegir> #info Failure while adding the service to keystone - investigation ongoing
13:45:54 <Taseer> fdegir: yes, but he is doing it using bash commands; I need to follow the OSA convention
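For comparison, the bash route comes down to the standard keystone registration calls sketched below; under the OSA convention the same steps would be expressed with ansible keystone modules instead. The service description, region, and endpoint URL are illustrative, so the commands are only echoed here, not executed:

```shell
# Keystone registration steps as plain openstack CLI calls; echoed
# rather than executed since they need a live keystone endpoint.
for cmd in \
  'openstack service create --name congress --description "Congress policy service" policy' \
  'openstack endpoint create --region RegionOne policy public http://controller:1789'
do
  echo "$cmd"
done
```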
13:46:09 <fdegir> Taseer: ok
13:46:46 <fdegir> #topic Scenario/Feature Status: Ceph
13:47:04 <fdegir> Tianwei is also on holidays so here is a short update
13:47:35 <fdegir> #info Tianwei is working on integrating ceph: https://gerrit.opnfv.org/gerrit/#/c/42503/
13:47:48 <fdegir> #topic Baremetal status
13:47:57 <fdegir> David_Orange: I think you are still waiting some more reviews...
13:48:00 <David_Orange> testing on ONAP OpenLab: playbook runs are all green but glance can not write on the nfs mount point due to local rights (770 on this pod, 777 on pod1). Debug is ongoing
13:48:24 <David_Orange> fdegir: yes, but this can wait for a few days
13:48:24 <fdegir> #info David_Orange is testing it on ONAP OpenLab
13:48:40 <David_Orange> fdegir: i am cleaning the code to move playbook tasks into ansible roles
13:48:40 <fdegir> #info Playbook runs are all green but glance can not write on nfs mount point due to local rights (770 on this pod, 777 on pod1). Debug is ongoing
13:49:07 <fdegir> thx for that too and again sorry for lack of reviews
13:49:13 <David_Orange> fdegir: so it should be pushed in a few days
13:49:19 <fdegir> the release stuff messed up things for everyone
13:49:21 <David_Orange> a new deployment is planned on Orange Pod2 community lab (PDF/IDF writing is ongoing)
13:49:27 <David_Orange> fdegir: np
13:49:43 <fdegir> #info The code is being cleaned up and roles will be created
13:49:51 <fdegir> #info A new deployment is planned on Orange Pod2 community lab (PDF/IDF writing is ongoing)
13:50:09 <fdegir> David_Orange: anything else?
13:50:09 <David_Orange> this will give me another source for testing the ONAP OpenLab issue
13:50:22 <David_Orange> yes, i am also preparing a new openstack_user_config for nonha deployments using the 'test' template
13:50:39 <fdegir> #info Testing it on Orange POD2 will give chance to check the issue happening in ONAP OpenLab
13:50:52 <David_Orange> for those that want to play with it with fewer servers
13:51:07 <fdegir> David_Orange: for baremetal you mean?
13:51:18 <David_Orange> yes, 1 controller, n computes
13:51:31 <fdegir> David_Orange: ok - so you are creating flavors for baremetal
13:51:37 <fdegir> David_Orange: which is good
13:51:46 <David_Orange> fdegir: yes :)
13:52:01 <durschatz> David_Orange: are there multiple nics in bare metal deployments
13:52:13 <David_Orange> fdegir: if you have some recommendation on that i will take it
13:52:16 <fdegir> David_Orange: perhaps we can have aio, mini, noha, and ha so no matter if it is virtual or baremetal we have the same flavors
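Those flavor names match the XCI_FLAVOR convention already used on the virtual side; a sketch of how a deploy wrapper could validate the choice so virtual and baremetal share one set (only the flavor names come from releng-xci, the validation logic itself is illustrative):

```shell
# Accept one common flavor set for both virtual and baremetal deploys;
# defaults to aio when nothing is requested.
XCI_FLAVOR=${XCI_FLAVOR:-aio}
case "$XCI_FLAVOR" in
  aio|mini|noha|ha)
    echo "Deploying flavor: $XCI_FLAVOR"
    ;;
  *)
    echo "Unknown flavor: $XCI_FLAVOR (expected aio|mini|noha|ha)" >&2
    exit 1
    ;;
esac
```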
13:52:32 <tinatsou> #link https://wiki.opnfv.org/display/AUTO
13:52:45 <David_Orange> durschatz: yes, it is based on PDF/IDF, so for the current tests there are 3 interfaces
13:52:58 <David_Orange> durschatz: including vlan management
13:53:14 <durschatz> sweet!
13:53:50 <fdegir> #info Different flavors for baremetal will be available similar to the ones with virtual machines
13:53:56 <fdegir> thx David_Orange!
13:54:11 <fdegir> tinatsou: what about the link you put?
13:54:14 <David_Orange> i will have a few refactors to do with PDF/IDF updates, but it will be simple
13:54:52 <fdegir> moving to the last topic
13:54:54 <fdegir> #topic AoB
13:55:03 <fdegir> #info We are waiting for new machines to become available
13:55:05 <fdegir> jmorgan1: ^
13:55:40 <fdegir> #info Workaround for Jenkins slave connection from Intel lab works fine and it will be used until the firewall is configured by Intel corp IT
13:55:49 <fdegir> anyone has anything to add?
13:56:12 <fdegir> durschatz: any issues with the sandbox?
13:56:28 <durschatz> sandbox is working fine
13:56:30 <durschatz> also
13:56:50 <durschatz> I need to deploy ODL on a bare metal pod soon
13:57:01 <fdegir> durschatz: how soon?
13:57:20 <durschatz> may also approach Dave to test ONAP deployments
13:57:54 <fdegir> durschatz: as you can guess, we first have to get odl done on virtual machines
13:58:05 <durschatz> next week so I may just use apex unless something solid comes up with XCI
13:58:11 <durschatz> yes
13:58:27 <fdegir> David_Orange is our baremetal expert so it should just work fine
13:58:45 <fdegir> and one final note
13:58:59 <fdegir> #info Bifrost and OSA shas might be bumped soon
13:59:03 <durschatz> seriously though, should I use apex for now?
13:59:07 <fdegir> thank you all!
13:59:12 <fdegir> #endmeeting