#opnfv-armband: armband 22Jul2016

Meeting started by bobmonkman at 14:27:45 UTC (full logs).

Meeting summary

    1. Alexandru Avadanii (Enea) (AlexAvadanii, 14:27:57)
    2. Florin Dumitrascu (florind, 14:28:10)
    3. this week we fixed live migration on ThunderX (actually Vijay from Cavium provided the patches, we just built and did a quick validation) (AlexAvadanii, 14:28:27)
    4. why is live migration needed? (bobmonkman, 14:29:13)
    5. latest armband (Mitaka) ISO deploys fine on all hardware we have (Cavium ThunderX, APM Mustang, AMD Softiron), including live migration (new) (AlexAvadanii, 14:29:30)
    6. live migration is a standard feature in OpenStack/OPNFV, which is expected to work out of the box (basic checks of stack functionality include tests for it), but was not ready before on GICv3 systems (like ThunderX) (AlexAvadanii, 14:30:17)
    7. live migration is also a requirement for real-life use cases, where you want to move a VM from one node to another (AlexAvadanii, 14:30:38)
    8. or snapshot a VM and launch it again (AlexAvadanii, 14:30:53)
    9. OK, just wanted to clarify (bobmonkman, 14:31:24)
    10. also, recently we fixed some deployment limitations in Fuel@OPNFV, which now allow using mixed pods (instead of requiring 5 identical nodes in a pod, like it was before) (AlexAvadanii, 14:31:33)
    11. Great news on mixed pods (bobmonkman, 14:31:52)
    12. are we all good dedicating Pod 1 for CI main? (bobmonkman, 14:32:19)
    13. this allows us to have working deploys in CI for arm-pod1 (5 x thunderx) + arm-pod2 (2 x thunderx + 1 APM + 2 softirons) (AlexAvadanii, 14:32:32)
    14. Cavium is sending 2 servers today and 3 in 2 weeks to replace the full pod (bobmonkman, 14:32:40)
    15. yes, I think turning arm-pod1 into a CI pod is the best approach, since installing the new nodes introduces some risk (AlexAvadanii, 14:33:09)
    16. we are very happy to hear about the 2 new nodes, especially since they are 2 socket nodes (AlexAvadanii, 14:33:33)
    17. I am going to order 3 SoftIrons for more pod capability but let me know if it makes a big difference if I order 4 or 5 instead (bobmonkman, 14:33:58)
    18. (this is closer to dev work than to overview status, but it's a very important step for us) we now have re-entrant deploys (no manual action required to run the CI loop over and over again), which previously needed a little manual intervention to remove stale boot entries in EFI menu (AlexAvadanii, 14:34:45)
    19. long story short, the latest ISO should already behave better than the Brahmaputra ISO does, live migration being the big thing (AlexAvadanii, 14:35:56)
    20. so, are we on track for functest completion and CI integration? (bobmonkman, 14:36:06)
    21. functest work is ongoing; the new healthcheck tests added in Fuel prevent us from having a fully successful run at the moment, but the problems we are facing seem to also affect Fuel@OPNFV (AlexAvadanii, 14:36:56)
    22. OK I assume someone is interacting with Fuel team to track progress (bobmonkman, 14:37:29)
    23. meanwhile, we've enabled the BGPVPN plugin at ISO build time, and today/early next week we will also enable OVS (AlexAvadanii, 14:37:45)
    24. Ciprian, a short rundown would be good (bobmonkman, 14:37:51)
    25. currently the functest jobs stop after the first test, which is called healthcheck (ciprian-barbu, 14:38:15)
    26. one last thing from me, we are preparing to switch to 4.4 kernel, which should happen soon (AlexAvadanii, 14:38:16)
    27. this is a very simple script that does some basic routines to ensure the OpenStack components work fine (ciprian-barbu, 14:38:44)
    28. the problem is that this test is hardcoded to use the x86 cirros image (ciprian-barbu, 14:39:01)
    29. I am currently working to fix this (ciprian-barbu, 14:39:10)
    30. @Ciprian - I thought we fixed the cirros issue in B-release (bobmonkman, 14:39:25)
    31. OPNFV introduced this healthcheck test for Colorado, it did not exist for Brahmaputra (ciprian-barbu, 14:39:58)
    32. I see (bobmonkman, 14:40:12)
    33. can we work around this to execute other tests ? (bobmonkman, 14:40:29)
    34. I was surprised to see it was written like this, since Jose, the author should have been aware of our ARM pods not being able to run it (ciprian-barbu, 14:40:30)
    35. :-) (bobmonkman, 14:41:10)
    36. yes, I did run tempest and rally by hand on a two-node pod and I can say nothing much changed (ciprian-barbu, 14:41:15)
    37. but I would like to run the whole suite on a 5-node HA setup with all the features; the one I used didn't even have ODL (ciprian-barbu, 14:41:50)
    38. OK, that is good, Ciprian; anything else we need to discuss on Functest? Just keep us posted on the cirros issue and let me know if you need help (bobmonkman, 14:42:21)
    39. one other thing (ciprian-barbu, 14:42:31)
    40. I have a patch in the upstream Openstack rally project that will allow us to solve a few failing testcases (ciprian-barbu, 14:43:21)
    41. a few of the failing testcases were caused by insufficient RAM; the change I upstreamed will allow us to configure it (ciprian-barbu, 14:43:55)
    42. very good, Ciprian, we need to be diligent to close those out over time (bobmonkman, 14:44:00)
    43. however, OPNFV has not updated the rally version in their docker functest image in a while, I will have to propose a patch for it (ciprian-barbu, 14:44:22)
    44. already talked to Jose about it; I'm currently testing with a manually built Docker image to make sure I will not break things (ciprian-barbu, 14:44:52)
    45. this should be ready next week (ciprian-barbu, 14:44:59)
    46. Morgan is out on holiday and not sure who is managing Functest in the interim (bobmonkman, 14:45:02)
    47. and that's it on my side (ciprian-barbu, 14:45:05)
    48. it should be Jose Lausuch from Ericsson (ciprian-barbu, 14:45:30)
    49. @Ciprian - is that in regards to the cirros issue? (bobmonkman, 14:45:36)
    50. no, this is a different issue (ciprian-barbu, 14:45:56)
    51. for the notes, please clarify which issue you are working with Jose on (bobmonkman, 14:47:14)
    52. can anyone give an update on YardStick? (bobmonkman, 14:47:48)
    53. sorry, I thought it was clear, I'm working with Jose on updating rally with my change inside the functest docker image; this will help solve some of the failing tempest testcases (ciprian-barbu, 14:47:56)
    54. for Yardstick we were blocked for a while not having manpower (ciprian-barbu, 14:49:20)
    55. but we will get back on it next week (ciprian-barbu, 14:50:01)
    56. OK, thx. I will continue to try and get info from the Apex and JOID teams on progress with alternative Installers (bobmonkman, 14:50:59)
    57. I am also working on planning for the OpenContrail controller and will keep us updated, but no news this week (bobmonkman, 14:51:49)
    58. OK, anything else we should discuss today? (bobmonkman, 14:52:29)
    59. I will add a quick update about vIMS (florind, 14:52:42)
    60. it seems to me we are working through new issues, but we believe we are on track for the 22 Sept Release 1? (bobmonkman, 14:53:19)
    61. we are making progress with vIMS, the requirements for Cloudify are understood (florind, 14:53:41)
    62. currently we are working to port Cloudify Manager dependencies on ARM, around 9 dependencies have been identified (florind, 14:54:17)
    63. that is great news. I would like to get that solved and also be able to have our team internal to ARM reproduce it at some point (bobmonkman, 14:54:30)
    64. I have some initial findings about feasibility of Apex installer. Hopefully Tim can help me out (Madhu___, 14:54:50)
    65. Cloudify team has offered support, but until today this has not really materialized (florind, 14:54:50)
    66. that's the status for vIMS (florind, 14:55:25)
    67. thx Madhu...can u jot a couple of notes for the record here? (bobmonkman, 14:55:27)
    68. no, I believe we can do the port ourselves (florind, 14:56:20)
    69. in case we really get stuck, we have someone to contact (florind, 14:56:50)
    70. ok, let's just take it one step at a time and continue to interact with them. (bobmonkman, 14:56:55)
    71. I have nothing else, If Madhu adds something in the notes I will capture it before I end the log. Madhu, can you please connect with me on email? Bob.monkman@arm.com (bobmonkman, 14:58:38)
    72. Madhu's connection got reset. He is trying to reconnect (Vijayendra_, 14:59:26)
    73. thanks everyone and I will work with Dovetail team to work out a solution. (bobmonkman, 14:59:26)
    74. I would also like to work on the Yardstick issues ciprian-barbu mentioned. (Vijayendra_, 14:59:57)
    75. CentOS VM image we received is working nicely as a cloudify base image, if anyone was wondering (AlexAvadanii, 15:00:23)
    76. that would be very helpful Vijay (bobmonkman, 15:00:25)
    77. Ciprian: that is great news on the initial CentOS image (bobmonkman, 15:01:10)
    78. this is in preliminary state for now, but Cavium and Enea are working on setting up packaging CI inside Cavium lab (AlexAvadanii, 15:01:15)
    79. I'd like to update on the Apex installer. The DIB currently supports only armhf and x86_64, so this might be a limitation (Madhu111, 15:02:28)
    80. I think it is very helpful to have the lab replicated in our internal facilities, and ARM has a complete setup with B-release as well. Now looking to run VNFs (bobmonkman, 15:02:36)
    81. Madhu: thx for this. We are going to have to work with the Apex/CentOS team on that one, it seems (bobmonkman, 15:03:43)
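
Editor's note: the healthcheck failure discussed in items 28-29 comes down to the test hardcoding the x86_64 CirrOS image, which cannot boot on the ARM pods. A minimal sketch of the kind of fix discussed follows; the function name, mapping table, and version default are illustrative, not the actual functest patch.

```python
import platform

# Illustrative mapping from the machine string reported by the kernel to the
# architecture suffix used in CirrOS download filenames. This is a sketch of
# the idea only; the real functest change may use different names and logic.
CIRROS_ARCH = {
    "x86_64": "x86_64",
    "aarch64": "aarch64",
    "i686": "i386",
}


def cirros_image_name(version="0.3.4", machine=None):
    """Build a CirrOS disk-image filename matching the host architecture,
    instead of always returning the hardcoded x86_64 build."""
    machine = machine or platform.machine()
    arch = CIRROS_ARCH.get(machine, machine)  # pass unknown arches through
    return "cirros-{}-{}-disk.img".format(version, arch)
```

On an AArch64 pod this selects an aarch64 image name rather than the x86_64 one that caused the test to stop.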
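
Editor's note: items 40-41 describe an upstream Rally change that makes guest RAM configurable instead of hardcoded. A hedged sketch of that pattern is below; the option name "flavor_ram" and the 64 MB default are hypothetical placeholders, not Rally's actual parameters.

```python
# Sketch of making a previously hardcoded flavor RAM value configurable, as
# described for the upstream Rally change. The 64 MB default stands in for
# the old fixed value that was too small for some guests; AArch64 images in
# particular needed more RAM than the hardcoded setting allowed.
DEFAULT_FLAVOR_RAM_MB = 64  # hypothetical legacy default


def flavor_ram_mb(scenario_args):
    """Return the flavor RAM (in MB) for a test scenario, preferring an
    explicit per-scenario setting over the legacy hardcoded default."""
    return int(scenario_args.get("flavor_ram", DEFAULT_FLAVOR_RAM_MB))
```

With this shape, a pod that needs larger guests can pass `{"flavor_ram": 512}` in its scenario arguments while existing configurations keep the old behavior.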


Meeting ended at 15:06:22 UTC (full logs).

Action items

  1. (none)


People present (lines said)

  1. bobmonkman (50)
  2. ciprian-barbu (23)
  3. AlexAvadanii (19)
  4. florind (9)
  5. Vijayendra_ (4)
  6. collabot (3)
  7. Madhu111 (2)
  8. pava (1)
  9. Madhu (1)
  10. Madhu___ (1)


Generated by MeetBot 0.1.4.