#opnfv-armband: armband 22Jul2016
Meeting started by bobmonkman at 14:27:45 UTC
Meeting summary
- Alexandru Avadanii (Enea) (AlexAvadanii,
14:27:57)
- Florin Dumitrascu (florind,
14:28:10)
- this week we fixed live migration on ThunderX
(actually Vijay from Cavium provided the patches, we just built and
did a quick validation) (AlexAvadanii,
14:28:27)
- why is live migration needed? (bobmonkman,
14:29:13)
- latest armband (Mitaka) ISO deploys fine on all
hardware we have (Cavium ThunderX, APM Mustang, AMD Softiron),
including live migration (new) (AlexAvadanii,
14:29:30)
- live migration is a standard feature in
Openstack/OPNFV, which is expected to work out of the box (basic
checks of stack functionality include tests for it), but was not
ready before on GICv3 systems (like ThunderX) (AlexAvadanii,
14:30:17)
- live migration is also a requirement for
real-life use cases, where you want to move a VM from a node to
another (AlexAvadanii,
14:30:38)
- or snapshot a VM and launch it again
(AlexAvadanii,
14:30:53)
- OK, just wanted to clarify (bobmonkman,
14:31:24)
- also, recently we fixed some deployment
limitations in Fuel@OPNFV, which now allow using mixed pods (instead
of requiring 5 identical nodes in a pod, like it was before)
(AlexAvadanii,
14:31:33)
- Great news on mixed pods (bobmonkman,
14:31:52)
- are we all good dedicating Pod 1 for CI
main? (bobmonkman,
14:32:19)
- this allows us to have working deploys in CI
for arm-pod1 (5 x thunderx) + arm-pod2 (2 x thunderx + 1 APM + 2
softirons) (AlexAvadanii,
14:32:32)
- Cavium is sending 2 servers today and 3 in 2
weeks to replace the full pod (bobmonkman,
14:32:40)
- yes, I think turning arm-pod1 into a CI pod is
the best approach, since installing the new nodes introduces some
risk (AlexAvadanii,
14:33:09)
- we are very happy to hear about the 2 new
nodes, especially since they are 2 socket nodes (AlexAvadanii,
14:33:33)
- I am going to order 3 SoftIrons for more pod
capability but let me know if it makes a big difference if I order 4
or 5 instead (bobmonkman,
14:33:58)
- (this is closer to dev work than to overview
status, but it's a very important step for us) we now have
re-entrant deploys (no manual action required to run the CI loop
over and over again), which previously needed a little manual
intervention to remove stale boot entries in EFI menu (AlexAvadanii,
14:34:45)
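The manual step that re-entrant deploys eliminate, removing stale boot entries from the EFI menu, can be sketched roughly as below. This is an illustrative sketch, not the actual Armband tooling: the entry label, the output format, and the helper name are assumptions; a real cleanup would feed each matched entry number to `efibootmgr -b <num> -B`.

```python
import re

def find_stale_entries(efibootmgr_output: str, label: str):
    """Return boot entry numbers whose label matches `label`.

    Parses lines like 'Boot0003* fuel-deploy' as printed by
    `efibootmgr`; the caller would then delete each match with
    `efibootmgr -b <num> -B` before starting the next deploy.
    """
    stale = []
    for line in efibootmgr_output.splitlines():
        m = re.match(r"Boot([0-9A-Fa-f]{4})\*?\s+(.*)", line.strip())
        if m and m.group(2).strip() == label:
            stale.append(m.group(1))
    return stale

# Hypothetical sample output used only for illustration
sample = """BootCurrent: 0001
Boot0001* ubuntu
Boot0003* fuel-deploy
Boot0007* fuel-deploy
"""
print(find_stale_entries(sample, "fuel-deploy"))  # ['0003', '0007']
```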
- long story short, latest ISO should already
behave better than brahmaputra ISO does, live migration being the
big thing (AlexAvadanii,
14:35:56)
- so, are we on track for functest completion
and CI integration? (bobmonkman,
14:36:06)
- functest work is ongoing, the new healthcheck
tests added in fuel prevent us from having a full successful run at
the moment, but the problems we are facing seem to also affect
Fuel@OPNFV (AlexAvadanii,
14:36:56)
- OK I assume someone is interacting with Fuel
team to track progress (bobmonkman,
14:37:29)
- meanwhile, we've enabled BGPVPN plugin build at
ISO build, and today/early next week we will also enable OVS
(AlexAvadanii,
14:37:45)
- Ciprian, a short run-down would be good
(bobmonkman,
14:37:51)
- currently the functest jobs stop after the
first test, which is called healthcheck (ciprian-barbu,
14:38:15)
- one last thing from me, we are preparing to
switch to 4.4 kernel, which should happen soon (AlexAvadanii,
14:38:16)
- this is a very simple script that does some
basic routines to ensure the openstack components work fine
(ciprian-barbu,
14:38:44)
- the problem is this test is hardcoded to use
an x86 Cirros image (ciprian-barbu,
14:39:01)
- I am currently working to fix this (ciprian-barbu,
14:39:10)
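The fix being worked on here, choosing a Cirros image that matches the pod architecture instead of a hardcoded x86 one, might look roughly like the sketch below. The URL pattern, pinned version, and mapping are illustrative assumptions, not the actual Functest code:

```python
import platform

CIRROS_VERSION = "0.3.4"  # hypothetical pinned version
# Map kernel machine names to the per-arch image names published
# on download.cirros-cloud.net (mapping assumed for illustration)
ARCH_MAP = {"x86_64": "x86_64", "aarch64": "aarch64", "arm64": "aarch64"}

def cirros_image_url(machine=None):
    """Build a per-architecture Cirros image URL instead of a
    hardcoded x86 one, falling back to x86_64 for unknown arches."""
    machine = machine or platform.machine()
    arch = ARCH_MAP.get(machine, "x86_64")
    return ("http://download.cirros-cloud.net/"
            f"{CIRROS_VERSION}/cirros-{CIRROS_VERSION}-{arch}-disk.img")

print(cirros_image_url("aarch64"))
```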
- @Ciprian - I thought we fixed the Cirros issue
in B-release (bobmonkman,
14:39:25)
- OPNFV introduced this healthcheck test for
Colorado, it did not exist for Brahmaputra (ciprian-barbu,
14:39:58)
- I see (bobmonkman,
14:40:12)
- can we work around this to execute other
tests? (bobmonkman,
14:40:29)
- I was surprised to see it was written like
this, since Jose, the author, should have been aware of our ARM pods
not being able to run it (ciprian-barbu,
14:40:30)
- :-) (bobmonkman,
14:41:10)
- yes, I did run tempest and rally by hand on a
two node POD and I can say nothing much changed (ciprian-barbu,
14:41:15)
- but I would like to run the whole suite on a
5-node HA setup with all the features, the one I used didn't even
have ODL (ciprian-barbu,
14:41:50)
- OK, that is good, Ciprian, anything else we need
to discuss on Functest? Just keep us posted on the Cirros issue and
let me know if you need help (bobmonkman,
14:42:21)
- one other thing (ciprian-barbu,
14:42:31)
- I have a patch in the upstream Openstack rally
project that will allow us to solve a few failing testcases
(ciprian-barbu,
14:43:21)
- a few of the failing testcases were caused by
insufficient RAM, the change I upstreamed will allow us to
configure it (ciprian-barbu,
14:43:55)
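The shape of that change, making the VM RAM configurable with a default instead of a fixed value, can be illustrated as below. The environment variable name and default value are assumptions for illustration, not the actual upstream rally patch:

```python
import os

DEFAULT_FLAVOR_RAM_MB = 512  # assumed previous hardcoded value

def flavor_ram_mb():
    """Return the flavor RAM (MB) to use for test VMs, overridable
    via an environment variable so platforms needing more memory
    can raise it without patching the testcases."""
    raw = os.environ.get("RALLY_FLAVOR_RAM_MB")  # hypothetical knob
    if raw is None:
        return DEFAULT_FLAVOR_RAM_MB
    try:
        return int(raw)
    except ValueError:
        return DEFAULT_FLAVOR_RAM_MB  # ignore malformed overrides

os.environ["RALLY_FLAVOR_RAM_MB"] = "1024"
print(flavor_ram_mb())  # 1024
```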
- very good, Ciprian, we need to be diligent to
close those out over time (bobmonkman,
14:44:00)
- however, OPNFV has not updated the rally
version in their docker functest image in a while, I will have to
propose a patch for it (ciprian-barbu,
14:44:22)
- already talked to Jose about it, I'm currently
testing with a manual built docker image to make sure I will not
break things (ciprian-barbu,
14:44:52)
- this should be ready next week (ciprian-barbu,
14:44:59)
- Morgan is out on holiday and not sure who is
managing Functest in the interim (bobmonkman,
14:45:02)
- and that's it on my side (ciprian-barbu,
14:45:05)
- it should be Jose Lausuch from Ericsson
(ciprian-barbu,
14:45:30)
- @Ciprian - is that in regards to the Cirros
issue? (bobmonkman,
14:45:36)
- no, this is a different issue (ciprian-barbu,
14:45:56)
- for the notes, please clarify which issue you
are working with Jose on (bobmonkman,
14:47:14)
- can anyone give an update on YardStick?
(bobmonkman,
14:47:48)
- sorry, I thought it was clear, I'm working with
Jose on updating rally with my change inside the functest docker
image; this will help solve some of the failing tempest
testcases (ciprian-barbu,
14:47:56)
- for Yardstick we were blocked for a while by
not having manpower (ciprian-barbu,
14:49:20)
- but we will get back on it next week
(ciprian-barbu,
14:50:01)
- OK, thx. I will continue to try and get info
from the Apex and JOID teams on progress with alternative
Installers (bobmonkman,
14:50:59)
- I am also working to plan for the open contrail
controller and keep us updated but no news this week (bobmonkman,
14:51:49)
- OK, anything else we should discuss
today? (bobmonkman,
14:52:29)
- I will add a quick update about vIMS
(florind,
14:52:42)
- it seems to me we are working through new issues
but we believe we are on track for the 22 Sept Release 1? (bobmonkman,
14:53:19)
- we are making progress with vIMS, the
requirements for Cloudify are understood (florind,
14:53:41)
- currently we are working to port Cloudify
Manager dependencies on ARM, around 9 dependencies have been
identified (florind,
14:54:17)
- that is great news. I would like to get that
solved and also be able to have our team internal to ARM reproduce
it at some point (bobmonkman,
14:54:30)
- I have some initial findings about feasibility
of Apex installer. Hopefully Tim can help me out (Madhu___,
14:54:50)
- Cloudify team has offered support, but until
today this has not really materialized (florind,
14:54:50)
- that's the status for vIMS (florind,
14:55:25)
- thx Madhu...can u jot a couple of notes for the
record here? (bobmonkman,
14:55:27)
- no, I believe we can do the port
ourselves (florind,
14:56:20)
- in case we really get stuck, we have someone to
contact (florind,
14:56:50)
- ok, let's just take it one step at a time and
continue to interact with them. (bobmonkman,
14:56:55)
- I have nothing else, If Madhu adds something in
the notes I will capture it before I end the log. Madhu, can you
please connect with me on email? Bob.monkman@arm.com (bobmonkman,
14:58:38)
- Madhu's connection got reset. He is trying to
reconnect (Vijayendra_,
14:59:26)
- thanks everyone and I will work with Dovetail
team to work out a solution. (bobmonkman,
14:59:26)
- I would also like to work on yardstick issues
ciprian-barbu mentioned. (Vijayendra_,
14:59:57)
- CentOS VM image we received is working nicely
as a cloudify base image, if anyone was wondering (AlexAvadanii,
15:00:23)
- that would be very helpful Vijay (bobmonkman,
15:00:25)
- Ciprian: that is great news on the initial
CentOS image (bobmonkman,
15:01:10)
- this is in preliminary state for now, but
Cavium and Enea are working on setting up packaging CI inside Cavium
lab (AlexAvadanii,
15:01:15)
- I'd like to update on the Apex installer. The
DIB currently supports only armhf and x86_64, so this might be a
limitation (Madhu111,
15:02:28)
- I think it is very helpful to have the lab
replicated in our internal facilities, and ARM has a complete setup
with B-release as well. Now looking to run VNFs (bobmonkman,
15:02:36)
- Madhu: thx for this. We are going to have to
work with the Apex/CentOS team on that one, it seems (bobmonkman,
15:03:43)
Meeting ended at 15:06:22 UTC
Action items
- (none)
People present (lines said)
- bobmonkman (50)
- ciprian-barbu (23)
- AlexAvadanii (19)
- florind (9)
- Vijayendra_ (4)
- collabot (3)
- Madhu111 (2)
- pava (1)
- Madhu (1)
- Madhu___ (1)
Generated by MeetBot 0.1.4.