14:27:45 #startmeeting armband 22Jul2016
14:27:45 Meeting started Fri Jul 22 14:27:45 2016 UTC. The chair is bobmonkman. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:27:45 Useful Commands: #action #agreed #help #info #idea #link #topic.
14:27:45 The meeting name has been set to 'armband_22jul2016'
14:27:57 #info Alexandru Avadanii (Enea)
14:28:08 Do we have Florin or can Alex give an update?
14:28:10 #info Florin Dumitrascu
14:28:19 I will let Alex start
14:28:27 #info this week we fixed live migration on ThunderX (actually Vijay from Cavium provided the patches, we just built and did a quick validation)
14:28:27 please use #info and give me a status
14:28:31 us a status
14:28:39 thx, continue
14:29:13 #info why is live migration needed?
14:29:30 #info latest armband (Mitaka) ISO deploys fine on all hardware we have (Cavium ThunderX, APM Mustang, AMD SoftIron), including live migration (new)
14:30:17 #info live migration is a standard feature in OpenStack/OPNFV, which is expected to work out of the box (basic checks of stack functionality include tests for it), but was not ready before on GICv3 systems (like ThunderX)
14:30:38 #info live migration is also a requirement for real-life use cases, where you want to move a VM from one node to another
14:30:53 #info or snapshot a VM and launch it again
14:31:24 #info OK, just wanted to clarify
14:31:33 #info also, recently we fixed some deployment limitations in Fuel@OPNFV, which now allows using mixed pods (instead of requiring 5 identical nodes in a pod, like before)
14:31:52 #info Great news on mixed pods
14:32:19 #info are we all good dedicating Pod 1 to main CI?
14:32:32 #info this allows us to have working deploys in CI for arm-pod1 (5 x ThunderX) + arm-pod2 (2 x ThunderX + 1 APM + 2 SoftIrons)
14:32:40 #info Cavium is sending 2 servers today and 3 in 2 weeks to replace the full pod
14:33:09 #info yes, I think turning arm-pod1 into a CI pod is the best approach, since installing the new nodes introduces some risk
14:33:33 #info we are very happy to hear about the 2 new nodes, especially since they are 2-socket nodes
14:33:58 #info I am going to order 3 SoftIrons for more pod capability, but let me know if it makes a big difference if I order 4 or 5 instead
14:34:45 #info (this is closer to dev work than to overview status, but it's a very important step for us) we now have re-entrant deploys (no manual action required to run the CI loop over and over again), which previously needed a little manual intervention to remove stale boot entries in the EFI menu
14:35:56 #info long story short, the latest ISO should already behave better than the Brahmaputra ISO does, live migration being the big thing
14:36:06 #info so, are we on track for functest completion and CI integration?
14:36:24 Tim, we are using IRC only today
14:36:34 GTM conflicts to work out
14:36:40 welcome
14:36:56 #info functest work is ongoing, the new healthcheck tests added in Fuel prevent us from having a fully successful run at the moment, but the problems we are facing seem to also affect Fuel@OPNFV
14:37:15 ciprian-barbu: do you want to go into details about functest?
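
(Editor's note: for readers unfamiliar with the live migration feature validated earlier in the log, the sketch below shows one way to trigger it programmatically with python-novaclient, which was the usual compute client at the time. The server name "demo-vm" and the credential handling are illustrative assumptions; this is not the validation script the team actually used.)

    # Minimal live-migration sketch (illustrative, not the team's script).
    # Assumes python-novaclient is installed and OS_* credentials exported.
    import os
    from novaclient import client

    nova = client.Client(
        "2",  # compute API version
        os.environ["OS_USERNAME"],
        os.environ["OS_PASSWORD"],
        os.environ["OS_TENANT_NAME"],
        os.environ["OS_AUTH_URL"],
    )

    server = nova.servers.find(name="demo-vm")  # hypothetical VM name
    # host=None lets the scheduler pick the target compute node;
    # block_migration=True avoids requiring shared storage between nodes.
    server.live_migrate(host=None, block_migration=True)
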
14:37:29 #info OK, I assume someone is interacting with the Fuel team to track progress
14:37:45 AlexAvadanii: sure, if you're done
14:37:45 #info meanwhile, we've enabled the BGPVPN plugin build at ISO build time, and today/early next week we will also enable OVS
14:37:51 #info Ciprian, a short rundown would be good
14:38:15 #info currently the functest jobs stop after the first test, which is called healthcheck
14:38:16 #info one last thing from me, we are preparing to switch to the 4.4 kernel, which should happen soon
14:38:44 #info this is a very simple script that does some basic routines to ensure the OpenStack components work fine
14:39:01 #info the problem is this test is hardcoded to use an x86 CirrOS image
14:39:10 #info I am currently working to fix this
14:39:25 #info @Ciprian - I thought we fixed the CirrOS issue in the B-release
14:39:58 #info OPNFV introduced this healthcheck test for Colorado, it did not exist for Brahmaputra
14:40:12 #info I see
14:40:29 #info can we work around this to execute other tests?
14:40:30 #info I was surprised to see it was written like this, since Jose, the author, should have been aware of our ARM pods not being able to run it
14:40:42 bobmonkman: we don't work around issues, we fix them ;)
14:41:10 #info :-)
14:41:15 #info yes, I did run tempest and rally by hand on a two-node pod and I can say nothing much changed
14:41:50 #info but I would like to run the whole suite on a 5-node HA setup with all the features, the one I used didn't even have ODL
14:42:21 #info OK, that is good, Ciprian, anything else we need to discuss on functest? Just keep us posted on the CirrOS issue and let me know if you need help
14:42:31 #info one other thing
14:43:16 Tim, are you prepared to say anything on Apex and the test build that Jim provided?
14:43:21 #info I have a patch in the upstream OpenStack Rally project that will allow us to solve a few failing testcases
14:43:31 we can talk about that if there is an update
14:43:55 #info a few of the failing testcases were caused by insufficient RAM, the change I upstreamed will allow us to configure it
14:44:00 #info very good, Ciprian, we need to be diligent to close those out over time
14:44:22 #info however, OPNFV has not updated the Rally version in their functest Docker image in a while, I will have to propose a patch for it
14:44:52 #info already talked to Jose about it, I'm currently testing with a manually built Docker image to make sure I will not break things
14:44:59 #info this should be ready next week
14:45:02 #info Morgan is out on holiday and not sure who is managing functest in the interim
14:45:05 #info and that's it on my side
14:45:30 #info it should be Jose Lausuch from Ericsson
14:45:36 #info @Ciprian - is that in regards to the CirrOS issue?
14:45:56 #info no, this is a different issue
14:47:14 #info for the notes, please clarify which issue you are working on with Jose
14:47:48 #info can anyone give an update on Yardstick?
14:47:56 #info sorry, I thought it was clear, I'm working with Jose on updating Rally with my change inside the functest Docker image; this will help solve some of the failing tempest testcases
14:48:26 thx Ciprian, cross messages and I got confused - that helps
14:48:47 bobmonkman: no problem, glad it helps
14:49:20 #info for Yardstick we were blocked for a while by not having manpower
14:50:01 #info but we will get back on it next week
14:50:59 #info OK, thx. I will continue to try and get info from the Apex and JOID teams on progress with alternative installers
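
(Editor's note: the healthcheck fix Ciprian mentions is not shown in this log; the sketch below only illustrates the kind of architecture-aware image selection being described, instead of hardcoding an x86 CirrOS image. The URL pattern mirrors the upstream CirrOS naming scheme, but per-architecture availability for this CirrOS version is an assumption.)

    # Illustrative sketch: pick a CirrOS image matching the host
    # architecture rather than hardcoding x86. Not the actual fix.
    import platform

    # Map Python's machine names to CirrOS build names (assumed mapping).
    ARCH_MAP = {
        "x86_64": "x86_64",
        "aarch64": "aarch64",
        "armv7l": "arm",
    }

    def cirros_image_url(version="0.3.4"):
        arch = ARCH_MAP.get(platform.machine(), "x86_64")
        # Hypothetical download location following the upstream scheme.
        url = "http://download.cirros-cloud.net/{v}/cirros-{v}-{a}-disk.img"
        return url.format(v=version, a=arch)

    print(cirros_image_url())
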
14:51:49 #info I am also working on planning for the OpenContrail controller and will keep us updated, but no news this week
14:52:29 #info OK, anything else we should discuss today?
14:52:42 #info I will add a quick update about vIMS
14:53:19 #info it seems to me we are working through new issues, but we believe we are on track for the 22 Sept Release 1?
14:53:29 Thx Florin
14:53:41 #info we are making progress with vIMS, the requirements for Cloudify are understood
14:54:17 #info currently we are working to port Cloudify Manager dependencies to ARM, around 9 dependencies have been identified
14:54:30 #info that is great news. I would like to get that solved and also be able to have our team internal to ARM reproduce it at some point
14:54:50 #info I have some initial findings about the feasibility of the Apex installer. Hopefully Tim can help me out
14:54:50 #info the Cloudify team has offered support, but until today this has not really materialized
14:55:25 #info that's the status for vIMS
14:55:27 #info thx Madhu... can u jot a couple of notes for the record here?
14:56:01 #info Florin: are you blocking on their help then?
14:56:20 #info no, I believe we can do the port ourselves
14:56:50 #info in case we really get stuck, we have someone to contact
14:56:55 #info ok, let's just take it one step at a time and continue to interact with them.
14:58:38 #info I have nothing else. If Madhu adds something in the notes I will capture it before I end the log. Madhu, can you please connect with me on email? Bob.monkman@arm.com
14:59:26 #info Madhu's connection got reset. He is trying to reconnect
14:59:26 #info thanks everyone, and I will work with the Dovetail team to work out a solution.
14:59:44 Ok, thanks Vijay
14:59:57 #info I would also like to work on the Yardstick issues ciprian-barbu mentioned.
15:00:09 Extremely sorry, my connection just dropped
15:00:23 #info the CentOS VM image we received is working nicely as a Cloudify base image, if anyone was wondering
15:00:24 I will discuss with ciprian-barbu for more details
15:00:25 #info that would be very helpful Vijay
15:00:42 Vijayendra: Paul (pava) here, who joined later, will be looking at it, you should sync
15:01:07 ciprian-barbu, Sure.
15:01:10 #info Ciprian: that is great news on the initial CentOS image
15:01:15 #info this is in a preliminary state for now, but Cavium and Enea are working on setting up packaging CI inside the Cavium lab
15:01:23 ciprian-barbu: ok
15:02:28 #info I'd like to give an update on the Apex installer. DIB currently supports only armhf and x86_64, so this might be a limitation
15:02:36 #info I think this is very helpful to have the lab replicated in our internal facilities, and ARM has a complete setup with the B-release as well. Now looking to run VNFs
15:03:43 #info Madhu: thx for this. We are going to have to work with the Apex/CentOS team on that one it seems
15:04:32 Madhu... can you connect with me on email to start a dialog?
15:05:23 Madhu, can u connect with me on email?
15:05:38 Sure, will do
15:06:13 Ok, I need to run, so I will end the log, but folks can continue to chat here if you like. Thanks all
15:06:22 #endmeeting
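
(Editor's note: on the DIB limitation Madhu raises at 15:02:28 — diskimage-builder, which Apex uses to build its images, reads the target architecture from the ARCH environment variable. A minimal sketch of a cross-architecture build invocation follows; the "arm64" value assumes elements patched or updated beyond the armhf/x86_64 support available at the time of this meeting, and the element list shown is illustrative.)

    # Sketch of invoking diskimage-builder for a non-x86 target.
    # "arm64" assumes aarch64 element support not yet present at the
    # time of this meeting; "centos7" and "vm" are standard DIB elements.
    import os
    import subprocess

    env = dict(os.environ, ARCH="arm64")  # DIB reads target arch from ARCH
    subprocess.check_call(
        ["disk-image-create", "-o", "overcloud-full", "centos7", "vm"],
        env=env,
    )
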