17:04:10 #startmeeting JOID weekly 17:04:11 Meeting started Wed Feb 10 17:04:10 2016 UTC. The chair is arturt. Information about MeetBot at http://wiki.debian.org/MeetBot. 17:04:11 Useful Commands: #action #agreed #help #info #idea #link #topic. 17:04:11 The meeting name has been set to 'joid_weekly' 17:04:18 ahh 17:04:21 there we go 17:05:05 #info Narinder Gupta 17:05:09 #info David Blaisonneau 17:05:33 #info Samantha Jian-Pielak 17:05:51 #info Artur Tyloch 17:06:27 #chair iben_ 17:06:27 Current chairs: arturt iben_ 17:06:46 #link https://etherpad.opnfv.org/p/joid agenda 17:08:52 ionutbalutoiu: has been helping us with this: http://webchat.freenode.net/?channels=opnfv-meeting 17:09:07 #undo 17:09:07 Removing item from minutes: 17:09:08 woops - wrong link 17:09:32 #info ionutbalutoiu has been helping with JOID testing https://github.com/opnfv/opnfv-ravello-demo 17:10:20 #topic adgenda bashing 17:10:24 #topic new b release date 17:10:37 First week of March 17:10:41 #info see etherpad above 17:11:02 #topic release B readiness 17:13:03 #link https://wiki.opnfv.org/releases/brahmaputra/release_plan new release date feb 26 17:13:31 Can we add some sort of note about experience per ravello on ontrail and open-daylight? 17:14:20 *contrail 17:16:33 akash: that is already on the adgenda 17:16:40 okay thanks 17:16:51 #info TSC Voted to set the Brahmaputra “release deploy” date to Thursday, February 25 17:17:28 #topic Steps to B release https://etherpad.opnfv.org/p/steps_to_brahmaputra 17:18:39 #info successful deployment with ODL 17:18:59 #info ONOS charm - bug in charm narinder sent email to "the team" 17:19:57 #info NTP needs to be set for each environment - suggest to use the MAAS machine as ntp server 17:24:10 #topic contrail charm 17:24:17 #info Contrail charm - still failing on 2/10 with Liberty https://jira.opnfv.org/browse/JOID-25 17:24:38 #info Juniper OpenContrail https://jira.opnfv.org/browse/JOID-25 work in progress 17:28:33 #topic ONOS charm 17:28:58 #info ONOS charm - bug in charm narinder sent email to "the team" 17:29:05 #info not ready - Chinese new year team OoO 17:31:56 #info ONOS charm is stored in github but setup synced to launchpad bazzar https://github.com/opennetworkinglab/onos 17:32:30 #topic ODL charm 17:33:08 #info all ODL test are passing on orange pod 17:33:21 #info all (except 3 tests) ODL test are passing on orange pod 17:33:31 #info new ODL charm might fix the IPvsix team issue around L2 L3 mode - need to test 17:35:20 #link https://build.opnfv.org/ci/view/joid/ we reviewed the builds here 17:35:26 #topic Documentation update 17:36:14 #info doc: https://git.opnfv.org/cgit/joid/tree/docs/configguide 17:36:17 #link https://gerrit.opnfv.org/gerrit/#/c/5487/ 17:36:59 #info Bryan Sullivan 17:37:42 #info you can see the progress of the doc and patches here: #link https://gerrit.opnfv.org/gerrit/#/q/project:joid 17:49:55 #info discussion around workflow - how to use JOID to do parralel functional tests in the cloud then perform hardware based performance tests once functional tests have passed 17:51:34 sorry for asking so many questions - but the config export part of this has an unclear value to me; the running of multiple parallel CI/CD jobs in the cloud is clearly useful, but I don't see how exporting a Ravello-deployed config helps me in my local lab, because I still need to deploy using the JuJu deploy commands... 
17:55:34 OK, I think I understand - if someone developed a bundle tweak in Ravello testing they can export the bundle so we can use it in a local JOID deploy. That part is clear if true. 17:56:17 That's though just a juju export operation, right? And it doesn't need to include resource specifics e.g. MACs etc.\ 17:56:23 bryan_att: yes, it is core juju feature 17:57:33 #1 ravello feature for me is complete transparency on the JOID installer support for that environment, e.g. at most a flag that indicates the special power control etc needs to be used when deploying there. 17:58:22 "feature for me" means the #1 priority to be addressed through the Ravello project, and upstreamed to OPNFV. 17:58:44 We need to minimize the lifespan of a ravello fork 18:00:26 bryan_att: yes - agreed to minimize (or eliminate) any forks 18:00:53 ionutbalutoiu: and i have discussed and agreed to this 18:03:39 As I mentioned, TOSCA is our target for NSD/VNFD etc ingestion, but for now I understand for Canonical that JuJu bundles etc are the medium, and an export function for them would be useful explain how to do and use. 18:10:11 bryan_att: have you tried export function ? 18:10:25 Sorry i have to go. Bye 18:10:28 arturt: no, not yet 18:10:41 #info juju bundle export import https://jujucharms.com/docs/stable/charms-bundles 18:12:09 apart from service model you have also a machine specification, which allows you to set up specific machines and then to place units of your services on those machines however you wish. 18:12:14 Ravello export is probably different from Juju charm bundle export. 18:12:37 is there any Ravello export? 18:12:48 that's the blueprint, right? 18:13:05 but cannot export blueprint outside Ravello 18:13:15 ah, yeah 18:13:17 you can replicate blueprint on Ravello 18:13:20 ok 18:14:37 one thing I would like to discuss - the JuJu deploy step takes very long and it's not clear how to know what's going on... any help there would be great. 18:15:28 e.g. why is it taking so long, what happened when it times out (as it regularly does...), etc - how to debug 18:17:30 running "watch juju status --format tabular" on another terminal helps to see what's going on, which is started, installing packages, failed, etc. 18:18:17 bryan_att: usually we can deploy whole OpenStack in approx. 20min... 18:19:11 #info bug submitted for maas power type driver for ravello https://bugs.launchpad.net/maas/+bug/1544211 18:19:49 bryan_att: I can talk about the juju-deployer process: http://pastebin.ubuntu.com/15006924/ 18:22:03 where is the wiki page? 18:22:35 akash: you can see the work bryan_att did with joid here https://wiki.opnfv.org/copper/academy/joid 18:23:09 what troubleshooting steps can we take to observe the bottlenecks with JOID? 18:23:37 catbus1: thats what user guide should include 18:23:52 amt or IPMI to see teh console boot process 18:23:52 #action schedule JOID community session with Bryan to discuss user experience with Juju akash 18:24:00 centralized log collection 18:24:01 the troubshooting steps, we have some info on the configguide 18:24:16 verify NTP settings too 18:24:17 installguide 18:24:49 narindergupta: agreed, talking with bryan_att will help with the user guide. 18:25:06 catbus1: :) 18:25:54 let's schedule a session this week, today or tomorrow - we can use it for user guide 18:25:58 where in the log can we see a charm is being downloaded 18:26:22 can we enable hash tag progress reports on the console? 18:26:33 so we can tail -f the log to see progress? 
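A minimal sketch of the deploy-monitoring commands mentioned above (Juju 1.x style, as used elsewhere in this log; flags and log paths may differ on other releases):

    # In a second terminal on the jumphost: per-unit state (pending, installing packages, error).
    watch -n 10 juju status --format=tabular

    # Stream the consolidated charm/agent logs from the bootstrap node.
    juju debug-log

    # Or tail the raw log file on machine 0 directly.
    juju ssh 0 'sudo tail -f /var/log/juju/all-machines.log'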
18:28:14 bryan.sullivan@att.com 18:30:48 arturt: I am available this week 18:30:55 today or tomorrow 18:31:27 mailto:dave.urschatz@cengn.ca please invite me to meeting with Bryan 18:31:39 can we set up for next wednesday? 18:31:44 my week is slammed 18:31:56 yes for me 18:31:58 i was just about to send an invite and block up to 2 hours 18:32:13 catbus1: ^? 18:32:22 bryan_att: btw whats the issue r u facing this time? Can u run me juju status --format=tabular and send me output? 18:32:25 akash: that works for me too 18:32:46 it's not that I am anxious to work on the user guide. ;p 18:32:48 I can send the juju download logs. But doing "sudo grep download *.log" on all logs in the bootstrap VM I see that downloads started at 16:19 and finished at 16:20 (more than an hour ago) so I don't think that's the issue. 18:33:38 https://www.irccloud.com/pastebin/UCZQXRCe/it's%20getting%20there%2C%20but%20just%20*very*%20slowly... 18:36:01 In the last 20 minutes the juju UI went from 2 relations complete to almost all of them. I noticed that it just timed out, so I think somehow that may be allowing the relations to complete...? 18:36:13 https://www.irccloud.com/pastebin/IlJGKSpF/Here%20is%20the%20timeout%20notice. 18:36:32 bryan_att: the charm download starts in the beginning of joid deployment. 18:36:36 should be short 18:36:55 it was - about 2 minutes tops 18:42:50 bryan_att: this simply looks like a timeout of 2 hrs. May be little extra time needed in your environment. Or just wait deployment might fiinsh soon as timeout should not imapct any relation 18:43:59 narindergupta: looks like I had the keystone error I reported earlier. Maybe that's hanging the process. I can try the "juju resolved keystone/0" workaround 18:45:07 bryan_att: ok 18:45:07 narindergupta: I don't see why my environment should have performance issues - these are Intel i7 machines with 16GB RAM connected to a 100MB ethernet switch... the controller has two network interfaces... 18:46:14 bryan_att: not performance usually maas act as proxy cache 18:47:20 bryan_att: definetely full logs will help to understand 18:47:37 OK, should I just paste them here? 18:47:43 and which ones? 18:49:03 bryan_att: you can paste them to pastebin.ubuntu.com, if not confidential 18:49:16 ./deploy.sh logs starting from there so i can figure it out from time stamp 18:50:24 irccloud works too. 18:54:55 Here it is - the ubuntu pastebin only accepts text. The logs are too big. https://usercontent.irccloud-cdn.com/file/ie3mffYE/160210_blsaws_bootstrap_logs.tar.gz 19:04:42 I used the "juju resolved keystone/0" workaround and things got happier. But normally (2-3 times now) I have to enter the command twice to resolve all the issues (the keystone error seems to reappear?) https://www.irccloud.com/pastebin/NDu0Uygd/ 19:19:23 bryan_att: from the juju status output, keystone unit is ready. 19:20:03 yes, but only after I entered the command "juju resolved keystone/0" - see the previous status I posted where keystone was in "error" 19:20:17 ah, use juju resolved --retry keystone/0 19:20:44 using only juju resolved only makes the status look good, it doesn't do anything. 19:21:03 with '--retry' it will rerun the hook where it failed. 19:21:16 OK, thanks that's good to know 19:21:48 you may wonder why juju resolved exists, it's for killing units that are in error state. You can't kill units in error state, so get it in working state and kill it. 
19:22:49 bryan_att: sometimes the issue will resolve by itself after the re run, but if the error appears again, you can juju ssh keystone/0, and sudo -i as root to look into /var/log/juju/unit-keystone-0.log 19:23:28 find out where the last error is about, manually fix it, go back to the jumphost and re run the juju resolved --retry 19:24:58 Here are the errors from that log file https://www.irccloud.com/pastebin/7chcwf99/ 19:26:11 you need to look at the section above "2016-02-10 18:39:06 DEBUG juju.worker.uniter modes.go:31 [AGENT-STATUS] error: hook failed: "identity-service-relation-changed"" 19:27:04 can you copy and paste the section before this error message in the log? 19:27:26 ok, hang on 19:31:01 Here it is https://www.irccloud.com/pastebin/cLRXYQlF/ 19:32:33 it's too little info. 19:33:47 * catbus1-afk --> meeting 19:38:45 When you get back - here is more https://www.irccloud.com/pastebin/XsKD1nML/ 19:38:56 * bryan_att afk-lunch 19:48:45 bryan_att: this is same issue about admin roles as trying to create the Admin role and failed as admin already exist 19:49:15 and this is issue with keystone as well 19:49:25 ok, so a known issue? 19:49:38 yeah 19:50:22 in keystone service does not differentiate between admin and Admin 19:50:34 while keystone client does 19:51:18 differentate so send request to service and service failed to create that role and says duplicate 19:52:52 bryan_att: final success run of deply, functest and yardstick https://build.opnfv.org/ci/view/joid/job/joid-os-odl_l2-nofeature-ha-orange-pod2-daily-master/ 20:00:51 bryan_att: basically keystone can not create two roles admin and Admin 20:08:41 narindergupta: is this a keystone issue, or a JOID issue? Does it affect the other installers? 20:11:21 bryan_att: other installers uses admin as role by default and charms uses Admin so on request from functest team we changed in role to admin in service but looks like some services are also trying to create role Admin which failed in keystone because keystone does not differentiate between admin and Admin. Defeinitely a keystone bug but other installer may not encounter those as they uses admin for everything. 20:12:10 bryan_att: here is the bug already reported https://bugs.launchpad.net/charms/+source/keystone/+bug/1512984 20:13:04 bryan_att: comment no 10 I suspect that keystone sees 'admin' and 'Admin' as the same thing from a role name perspective; the problem is that the role created by default is currently all lowercase, whereas the role requested via swift is not - the code checks but is case sensitive.We should fix that, but the root cause of the lowercase role creation is bemusing - its default is 'Admin' in config, not 'admin' and that's used raw by the charm. 20:13:39 narindergupta: so what's our workaround in the meantime - change back to using "Admin" in the charms? 20:14:00 yes 20:14:01 because this appears to be affecting successful deployment 20:15:08 bryan_att: whenever relationship changes this occur i would prefer Admin in bundle. there is yaml file in ci/odl/juju-deployer 20:15:58 ok, are you going to issue a patch to change it back? Just wondering how long the issue will remain for JOID. 20:16:00 change admin to Admin at the end of file for all three deployments juno, kilo and liberty 20:16:06 then run ./clean.dh 20:16:15 ./clean.sh 20:16:20 OK, I can do that. 
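The admin/Admin workaround above, as a sketch against the joid tree (the bundle directory is the one narindergupta names; check the exact file names in your checkout before editing):

    cd joid/ci
    # Switch the keystone role options back to "Admin" (or comment them out,
    # since "Admin" is the charm default) in the juno, kilo and liberty bundles.
    sed -i 's/admin-role: admin/admin-role: Admin/g' odl/juju-deployer/*.yaml

    # Clean up and redeploy with the options used in this log.
    ./clean.sh
    ./deploy.sh -o liberty -s odl -t nonha -l attvirpod1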
20:16:57 admin-role: admin    keystone-admin-role: admin 20:17:20 those two options need a change from admin to Admin, or comment out both 20:17:26 as the default is Admin 20:18:45 then run ./deploy.sh -o liberty -s odl -t nonha -l attvirpod1 20:18:54 will restart the deployment. 20:23:13 bryan_att: at the end it seems we will be switching to admin by default, most likely in the charm release in 16.04, as per this bug. 21:27:59 narindergupta: when you say "change admin to Admin at the end of file for all three deployments juno, kilo and liberty", which file am I changing? 21:28:56 bryan_att: there are three files for odl https://gerrit.opnfv.org/gerrit/#/c/9697/1/ci/odl/juju-deployer/ovs-odl-nonha.yaml https://gerrit.opnfv.org/gerrit/#/c/9697/1/ci/odl/juju-deployer/ovs-odl-ha.yaml and https://gerrit.opnfv.org/gerrit/#/c/9697/1/ci/odl/juju-deployer/ovs-odl-tip.yaml 21:35:47 ok, starting the redeploy now 21:36:06 cool 23:50:49 narindergupta: this time it went through to the end, everything looks good so far. 70 minutes to deploy. 23:53:06 bryan_att: cool, yeah, that issue we need to work on 23:55:13 narindergupta: sometime I would like to learn how to set the services to be assigned specific IPs. Every time they get installed they have different addresses. In typical NFV deployments I think we will try to have everything consistent. 23:57:41 I still have the keystone error though. I was advised (by catbus1) to use the command "juju resolved --retry keystone/0" to retry the hook from where it failed. 00:14:16 bryan_att: that's strange, we need to understand: is it the same error, and is it occurring in your case? 00:14:57 looks like the same error - I could not log in to horizon until I entered the resolved command. 00:15:59 bryan_att: but this time we used Admin right 00:16:03 > 00:16:07 The --retry flag does not appear to have resolved it though - catbus1 said that without the --retry flag, resolved just ignores the issues 00:17:18 Yes, I used Admin for keystone (at the end of the file) 00:17:25 https://www.irccloud.com/pastebin/qCfxWSmC/ 05:10:01 bryan_att: will you retry commenting out both options in the file? Also, if you can send me bundles.yaml from joid/ci that would be great. 08:15:51 narindergupta: how may I help you? 13:38:12 hi David_Orange good morning 14:30:48 ashyoung: hi 14:31:09 narindergupta1: hi 14:31:53 ashyoung: do you know anyone who can change the onos charm? Looks like the team is out for Chinese New Year and deployments are failing 14:32:17 narindergupta1: yes 14:32:19 i have a fix but until it goes into the git repo i cannot run the install successfully 14:32:36 narindergupta1: can you help me out and provide me some details on what's failing? 14:32:43 Oh 14:32:57 Do you just need your fix checked in? 14:33:48 ashyoung: i need the fixes checked in 14:34:00 ok 14:34:05 I can help with that 14:34:32 http://bazaar.launchpad.net/~opnfv-team/charms/trusty/onos-controller/fixspace/changes/12?start_revid=12 14:34:37 contains the changes i need 14:34:48 there are two changes 14:35:36 Thanks! 14:35:52 I will get it taken care of right away 14:36:01 What's the current problem? 14:36:04 revs 11 and 12 need to be added 14:36:20 deployment failed because of an additional space introduced in the charm 14:36:37 and also the onos-controller charm install failed on config 14:36:49 and there are two patches for the two issues 14:37:05 got it 14:38:02 ashyoung: once you are able to merge into the git tree then i can sync in bazaar and run the deployment again.
14:38:15 understood 14:39:17 I'll get it done 14:39:40 ashyoung: thanks 14:40:17 My pleasure 16:05:59 narindergupta: hi 16:07:18 David_Orange: hi ok resize test cases passed now. but need to check with you around the glance api failed cases? 16:07:45 how can i help ou ? 16:08:13 last functest failed, but it seems to be a docker problem 16:08:18 need to know what command temptest runs and whether those passes manually in your pod or not? 16:08:24 oh ok 16:08:36 yesterday it passed 16:08:55 and it seems total 98% test cases are passing as per morgan 16:10:11 yes 16:10:41 ok how can i find the 2% failed cases and fix those too. 16:11:25 i look at them 16:12:49 thanks. 16:13:42 also need help on debugging failure cause on intel pods. It seems there might be deploy in changing the switch. But we can tell them it is issue with switch then intel might do it sooner 16:15:11 David_Orange: also i added few scenrios in joid like dfv, vpn, ipv6 etc.. and i am passing it through -f parameter in ./deploy.sh 16:15:24 can it be integreted as part of ci as well? 16:15:41 yes of course, i can work on that 16:15:47 thanks 16:15:58 do you also have something for dpdk ? 16:16:35 David_Orange: dpdk will be part of 16.04 LTS in main and we will be enabling it with 16.04 lts 16:16:53 it may be part of SR2 after xenial release. 16:17:19 ok 16:17:29 so not before 2 month 16:17:44 for experimental basis we can add 16:17:52 but it may not work 16:20:52 ok 16:21:13 what is dfv ? 16:22:42 narindergupta: for new parameters, can 2 params can be enabled at the same time ? 16:22:51 or 3 16:22:59 or more :) 16:27:22 sfv 16:27:28 sorry it is sfv 16:27:45 David_Orange: currently no 16:28:04 ok 16:28:12 David_Orange: but i can write a combination might do like ipv6dvr 16:28:18 ipv6sfc 16:28:21 and we can enables all those param for all sdn controllers 16:28:55 can you take more than one '-f' ? 16:28:56 well few for nosdn and few for odl 16:29:06 no i can not 16:29:13 only one right now 16:29:13 or a coma-dash separated list 16:29:31 David_Orange: currently no but we can in future 16:29:49 ok, so i only set one 16:29:53 as i need to enhance my code to accept that 16:29:55 ok 16:30:06 np 16:30:26 and they can be enabled for each scenarios ? 16:30:36 yes 16:30:44 sorry: s/scenario/sdncontroller/ 16:30:45 ha/nonha/tip 16:31:01 not necessary 16:31:15 like ipv6 is for all. But sfc only for odl 16:31:28 ok 16:31:40 and vpn ? 16:31:52 only for odl currently 16:32:17 ok: sfc (all) vpn (odl) ipv6 (odl) 16:32:31 and for nosdn cases ? 16:32:44 no ipv6 all 16:32:52 sfc and vpn only odl 16:33:00 and same for odl_l2 and odl_l3 16:34:10 David_Orange: also dvr for all 16:34:21 yes sorry 16:35:19 we have odl_l2 and l3 now ? 16:46:54 yes i am trying to enable is using dvr but not sure whether it will work or not. 16:48:33 do we have seperate test cases? 16:53:03 narindergupta: today we have only odl_l2, but i can prepare l2 and l3 16:57:05 narindergupta1: let me know for odl as i can push the patch 16:57:31 narindergupta1: do you also thought about OS API access ? 17:01:34 David_Orange: yes i am thinking about it and discussing internally. 17:02:12 ok 17:02:46 David_Orange: currently issue is containers can e only on admin network. To enabled containers with other host network we need to wait for MAAS 2.0 17:03:25 ok, and for scenario with fixed ip for all endpoints ? 
17:03:37 yes 17:03:59 this did not require a new network 17:04:38 for fixed ip no issues, as we can go for a different vip address which acts as the endpoint anyway, so maybe changing the vip in the bundle might help. 17:05:39 if all endpoints have fixed and known addresses i can set up the reverse proxy quickly 17:06:12 but we also need to set the publicurl to a dedicated fqdn, is it possible ? 17:09:41 narindergupta1: actually i transformed odl_l2 and odl_l3 to odl, do you want me to remove that and push odl_l2 odl_l3 to deploy.sh ? 17:10:21 David_Orange: means? 17:10:45 David_Orange: in that sense no, -s should be odl only 17:11:00 and -f can be odl_l2 or odl_l3 17:11:21 ok, so for you this is an option ? 17:11:42 correct 17:11:54 other installers set odl_l2 and odl_l3 in the controller part (os---[-]) 17:11:59 as i need to enable a profile in odl to enable this 17:12:25 ok 17:12:44 unfortunately we do not define it that way, as we have a single controller, odl, and in that the feature can be enabled for l2 and l3. 17:13:08 i will push you a patch 17:13:16 thanks 17:13:50 David_Orange: my question was for you: do we have separate test cases for l2 and l3? 17:14:30 no 17:15:54 narindergupta1: odl_l3 = odl_l2 + l3, true ? so i dont need to add a scenario name for odl_l2: os-odl_l2-odl_l2-ha 17:16:58 ok sounds good to me 17:17:21 David_Orange: i am not sure odl_l2 is enabled by default 17:17:50 as i can see in odl only l3 is enabled by default, but for l2 we need to enable the switch specifically 17:18:28 if we set odl as the sdn controller, odl_l2 is enabled by default, no ? 17:18:35 so i believe the naming of the default scenario is not right. It should be l3 by default and that is what we test 17:19:02 how to find it out? 17:20:10 currently we enable it using this article https://wiki.opendaylight.org/view/OpenStack_and_OpenDaylight 17:22:11 i 17:22:27 "OpenStack can use OpenDaylight as its network management provider through the Modular Layer 2 (ML2) north-bound plug-in" 17:22:44 today it is odl_l2, ip services are provided by neutron 17:23:20 ok yes thats what we have 17:23:44 i am wondering how to enable l3 then? 17:24:02 in that case i was confused, as there is an l2switch module in odl 17:24:14 and i was thinking that for l2 i need to enable that 17:24:53 is there any odl documentation which explains this in a better way? 17:24:54 by default odl is using ovs as the switch, it may be another switch 17:24:57 let me check 17:26:00 but does that mean that by enabling odl-l2switch we are enabling l2? 17:26:15 or does adding the ml2 plugin mean it is odl_l2 17:26:21 https://wiki.opendaylight.org/view/OpenDaylight_Controller:MD-SAL:L2_Switch 17:27:01 so my question is: to enable l2 do i need to enable this? 17:27:15 as far as i understand, enabling the ml2 plugin enables odl for the network layer (l2) 17:27:42 i dont think so, until now odl was working without it, no ? 17:27:45 ok then we support l2 and i can verify that, and for l3 enablement what should be done?
17:27:58 for l3 i dont know 17:28:37 l2switch seems to be much more an enhance l2 switch (you can keep it as mdsal option for example 17:28:59 our charm developer followed this and integreted https://wiki.opendaylight.org/view/OpenDaylight_Controller:MD-SAL:L2_Switch 17:29:08 sorry https://wiki.opendaylight.org/view/OpenStack_and_OpenDaylight 17:29:38 this is for l2 17:29:39 David_Orange: ok 17:30:07 ok sounds good then i need to correct hu bin as i telling him that l3 is enabled by default 17:30:21 now i need to figure it out what to do to enable l3 then 17:30:38 so that both l2 and l3 is enabled by odl 17:30:47 l3 is managed by neutron 17:31:08 ok so l3 need to be enabled in neutron 17:31:22 yes, i thinks so 17:31:43 i am far to be an odl expert, but this is my understanding 17:31:46 i think in neutron we already enabled it by default 17:32:03 until now odl was well running 17:32:09 so in that case we have both odl_l2 and l3 by default 17:32:39 should not we keep it simple until B release then work on all that new features ? 17:32:46 but i need confirmation so that i can tell the community how can i do that? 17:33:28 i can cal my colleage tomorrow, but i am not sure he is working (this is an holiday period here) 17:33:51 David_Orange: oh ok who else can guide me. 17:34:04 last time he check (1 month ago) we were using odl for layer 2 and neutron for l3 17:36:30 narindergupta1: dont know any else. 17:36:43 narindergupta1: scenario should be frozen from 2 or 3 weeks ago 17:37:43 we should wait to have a clean install of B release 4 times, froze Bramaputra then add those new features 17:39:11 narindergupta1: so i can add ipv6, sfc and so on, but it is not the way i would do it. But you are the boss :) 17:39:17 David_Orange: if neutron l3 then it is already enabled in joid. we are testing it. And it is enalbed by default 17:39:28 yes 17:39:58 today we are loop testing odl at l2 and neutron at l3 17:40:01 David_Orange: please add it and we will run if fails then won't release otherwise will get added 17:40:05 this is my understanding 17:40:22 and our tests are passing correct? 17:41:24 today functest says: odl on joid pod2 = 100%, no ? 17:41:47 i think morgan stated 98% 17:42:07 odl_l3 is much more an option 17:42:15 98% is on tempest no ? 17:42:20 overall 17:42:33 tempotest only 3 failed test cases related to glance 17:42:54 two are related to glance which needs to understand and one related to boot option. 17:43:07 as per viktor manuall verification worked for both 17:43:18 yes, but odl tests are all ok: 18 tests total, 18 passed, 0 failed 17:43:43 yes odl all were passed 17:43:44 so for odl we should not touch 17:44:02 David_Orange: please add an option in ci and i will run that test and sure it will pass. 17:44:22 odl_l3 option ? 17:44:34 yes do it please 17:44:55 and if neutron handles it then it is already there. 17:45:12 basically same deployment have both l2 and l3 enabled then 17:45:24 l2 by odl and l3 by neutron 17:48:12 https://gerrit.opnfv.org/gerrit/9821 17:50:58 narindergupta1: i have to go 17:51:14 David_Orange: one change you need to pass -f 17:51:35 it is passed line 147 17:52:03 narindergupta1: is it ok ? 17:52:15 y3es 17:52:25 ok, good, see you tomorrow 17:52:39 ok see you 17:53:38 David_Orange: sorry -f is missing 17:54:08 in line 147 and line 149 15:04:22 narindergupta: hi 15:04:32 David_Orange: hi 15:05:11 morgan is asking if we are ok to set pod2 as CI pod until B official release ? 
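Illustrative deploy.sh invocations for the feature flags agreed above: one -f value per run; ipv6 and dvr with any controller, sfc and vpn with ODL only. The flag spellings come from this discussion, so verify them against deploy.sh in your checkout before use:

    ./deploy.sh -o liberty -s nosdn -t ha    -l <labname> -f ipv6
    ./deploy.sh -o liberty -s odl   -t ha    -l <labname> -f dvr
    ./deploy.sh -o liberty -s odl   -t nonha -l <labname> -f sfc
    ./deploy.sh -o liberty -s odl   -t nonha -l <labname> -f vpn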
15:05:26 David_Orange: yes i am +1 for it 15:05:30 ok, nice 15:06:09 David_Orange: for maas i have to do minor adjustment for dhcp and static ip address though/ 15:06:09 narindergupta: have you seen my mail about ODL l2switch 15:06:20 still need to check 15:06:32 okok 15:06:37 and ok 15:07:26 Davidregarding the feature needs to be enabled 15:08:02 which one ? 15:08:10 no 1 15:08:13 #undo 15:08:16 yes 15:08:23 yeah we are enabling minimum now in Be 15:08:35 David for l2 switch nice to know 15:08:55 ok, so you are not enabling all features as in the doc, good 15:09:13 i will have more feedback in 10 days 15:11:04 and during my holidays, next week, if you need, you can send me a mail (if you want me to set the reverse proxy for public API access 15:11:52 i will not answer it in the hour, but try to check some time 15:16:33 David_Orange: sure david. 15:17:54 David_Orange: also for crating the reverse proxy need to work with you. Lets try something which works for all 15:18:07 yes of course 19:56:06 bryan_att: i found the issue with keystone charm it seems i applied the patch to ha install but leftout nonha and i am fixinf it now 20:06:27 bryan_att: fixed it you can give a retry 04:57:31 yuanyou: hi 04:57:54 narindergupta:hi 04:58:25 yuanyou: i am still finding the installation issues with onos. Will you please look into it? We have holiday tomorrow but will restart the build once you will fix it 04:58:48 yuanyou: this time it is config change on neutron-gateway 04:59:47 narindergupta: I am working on this ,but I don't know how to fix it. 05:00:35 what is the issue? Which script is failing? 05:01:10 as i fixed other issue and those were due to extra space and not having proper config() definitin 05:01:56 but other i have not idea how your team has implmented. Best way to look itnto the logs on the neutron-gateway unit and see what errors 05:02:00 and try to resolv 05:02:13 narindergupta: I only know config-changed failed,but i don't know which line failed 05:02:35 more errors can b find on failed unit 05:02:52 and check /var/log/juju/unit-neutron-gateway-0.log 05:03:01 on neutron-gateway unti 05:03:31 narindergupta:yes, I am deploying on my own environment 05:03:42 ok no problem 05:04:08 meanwhile on pod5 can u remove the auto run of onos until this issue fixes 05:04:34 and this is supposed to be stable ci lab but unfortunately its failing today on onos 05:04:45 and you can use intel pod6 though for development 05:05:23 narindergupta:ok,i will remove the auto run in releng 05:05:31 thanks 14:57:09 narindergupta: ping 14:57:17 jose_lausuch: pomg 14:57:20 whats up? 14:57:50 jose_lausuch: io have installed gsutil into pod5 for joid and submit the patch in master branch 14:58:00 will do same for other pods as well 14:58:49 narindergupta: great 14:58:52 fdegir: ping 14:59:08 can you help to install gsutil with proper credentials? (I have no clue what's needed) 14:59:33 yeah in orange pod5 i am seeeing upload error 14:59:49 where gsutil was installed 15:01:04 jose_lausuch: you were talking about image issue> 15:01:05 > 15:01:06 ? 15:01:20 narindergupta: flavor issue 15:01:38 https://build.opnfv.org/ci/view/functest/job/functest-joid-intel-pod5-daily-brahmaputra/45/console 15:01:38 jose_lausuch: what was that? 15:01:50 - ERROR - Flavor 'm1.small' not found. 
15:01:59 that is on joid intel pod 5 15:02:00 but 15:02:38 however 15:02:43 if you look above 15:02:46 but its there | 2 | m1.small | 1 | 2048 | | 20 | when we run 15:02:48 Flavors for user `admin` in tenant `admin`: 15:02:48 yeah 15:02:50 yes 15:02:54 so, that is strange 15:02:59 and also, lookin below 15:02:59 yeah 15:03:01 for promise test 15:03:11 Error [create_flavor(nova_client, 'promise-flavor', '512', '0', '1')]: ('Connection aborted.', BadStatusLine("''",)) 15:03:11 and by defaul t we do not deleted anything 15:03:15 cann't create flavor 15:03:35 that says connected aborted. 15:03:43 why was it aborted? 15:03:46 ya, casual.. 15:03:48 I dont know 15:03:59 it's just creating a flavor with those specifications 15:04:07 2016-02-14 17:14:41,689 - vPing_userdata- INFO - Flavor found 'm1.small' 15:04:18 requests.exceptions.ConnectionError: ('Connection aborted.', BadStatusLine("''",)) 15:04:22 yes, same error there 15:04:26 that is strnage 15:05:02 jose_lausuch: but i saw this passed in latest deployment 15:05:36 this is even worse: https://build.opnfv.org/ci/view/functest/job/functest-joid-intel-pod5-daily-brahmaputra/44/console 15:05:44 Error [create_glance_image(glance_client, 'functest-vping', '/home/opnfv/functest/data/cirros-0.3.4-x86_64-disk.img', 'True')]: : Failed to establish a new connection: [Errno 113] No route to host 15:06:19 so, we are not having stable results 15:06:35 jose_lausuch: and in pod6 it passed. https://build.opnfv.org/ci/view/joid/job/functest-joid-intel-pod6-daily-master/54/consoleFull 15:07:11 ya, that worked 15:07:17 but I'm looking at stable/brahmaputra branch 15:07:22 https://build.opnfv.org/ci/view/functest/job/functest-joid-intel-pod5-daily-brahmaputra/ 15:07:25 it looks like pointing me towards the pod stability as aborted connection is something called for networkign failure as 15:07:32 yes 15:07:38 looks like network failure or something 15:07:41 there is no difference in branch though 15:07:49 from installer prospective 15:08:46 jose_lausuch: if you will see in intel pod5 https://build.opnfv.org/ci/view/joid/job/functest-joid-intel-pod5-daily-brahmaputra/46/console 15:08:49 it passed 15:08:57 even in pod5 15:09:40 narindergupta: yes, the flavor thing worked there... 15:09:40 and temptest failed modt of time and not sure why? but same test passes in orange pod2 15:09:45 but 172 failures in tempest. 15:10:28 jose_lausuch: never got good results on intel pod5. Can you please help here., AS in oramge pod2 only 5 test cases failed 15:10:38 and test results were 98% 15:10:47 yes 15:10:52 I know 15:11:02 but why is this pod giving these bad results? 15:11:07 and also the job was aborted 15:11:10 while running 15:11:12 david think it could be networking issue in pod 15:11:21 do not know 15:11:34 that could explain it 15:11:58 jose_lausuch: but deployment always worked. 15:12:12 taking a look at this: 15:12:13 https://build.opnfv.org/ci/view/joid/job/functest-joid-intel-pod6-daily-master/ 15:12:26 all the latests jobs are aborted.. (grey) 15:12:50 yes 15:14:08 jose_lausuch: i am seeing this aborted issue for long time and it never passes and not sure what it does 15:14:15 may be you can debug. 15:14:30 and time out is 210 minutes 15:15:07 narindergupta: timeout for what? 15:15:13 for job 15:15:21 check the status 15:15:41 jose_lausuch: Build timed out (after 210 minutes). Marking the build as aborted. 15:16:36 narindergupta: ok.... something taking too long maybe... 
15:17:03 jose_lausuch: not sure it is stuck or taking too long 15:17:58 narindergupta: ya, I see now in our jjob a timeout of 210 sec... 15:17:59 wrappers: 15:17:59 - build-name: 15:17:59 name: '$BUILD_NUMBER Suite: $FUNCTEST_SUITE_NAME Scenario: $DEPLOY_SCENARIO' 15:17:59 - timeout: 15:17:59 timeout: 210 15:18:05 mmmm 15:18:12 ok 15:18:24 but it is not normal that it takes that long... 15:19:37 jose_lausuch: yes its not so need a debugging where it got stuck. 15:20:12 also yardstick was working until last week on orange pod2 but not anymore 15:20:56 ok 15:21:08 Im trying to figure out what is getting stuck 15:22:41 for example vIMS gets an error and takes almost 30 min 15:34:14 narindergupta: I have detected that the Cinder tests that does Rally, take 45 min 15:34:36 I have checked on other installers, and it takes only 20 min 15:35:06 oh ok could be in intel pod5 and pod6 we do not have extra hard disk 15:35:21 can u check on orang epod2 how much time it takes? 15:35:22 narindergupta: same in orange pod 2, aroound 20 min 15:35:33 and I guess the other rally tests will take longer too 15:35:45 the normal functest runtime is around 2.5 hr 15:35:51 this timedout after 3.5 hr... 15:35:55 something is bad there 15:35:57 that may be reason in intel pods no extra disk ssd disk so we are using the os disk 15:36:09 I'll check other tests 15:36:14 sure please 15:36:41 specially network specific. In worst case we can increase the time oput and test 15:38:30 for neutron test, normally it takes 10 min, and on intel pod 2 20+ minutes 15:38:38 so everything is slowly there.. 15:43:45 intel pod2 ? 15:43:56 so other pods also taking same time? 15:44:10 not specific to pod5 and pod6? 15:45:40 jose_lausuch: all pods i am seeing this https://build.opnfv.org/ci/view/joid/job/joid-verify-master/ do you think it is connectivity issue 15:45:41 ? 15:46:02 to linux foundation 15:47:09 narindergupta: you mean this? pending—Waiting for next available executor on intel-us-build-1 15:47:28 yeah 15:47:39 I dont know... 15:47:49 pod5, pod6 and orange pod2 shows the same status 15:49:55 ya... strange 16:09:42 jose_lausuch: anyway is temptest is time dependent? 16:10:32 narindergupta: what do you mean 16:11:01 i am seeing few test cases faied in intel pod5 and 6 16:11:21 so just wondering some temptest times out as well and marked as failed 16:11:36 just thinking loud? 16:12:33 we can check the logs 16:12:47 ah no, we cant, they are not pushed to artifacts 16:13:06 we can check them on the container 16:13:32 which container? 16:15:52 the functest docker container on the jumphost 17:16:03 narindergupta: ping 17:16:24 jose_lausuch: pong 17:16:36 narindergupta: can you tell me the IPs of the jumphost on intel-pod5 and 6? 17:16:46 I have the vpn 17:16:50 but dont know the ips 17:16:57 10.2.65.2 and 10.2.66.2 17:17:12 [pd5 and pod6 but i need to add your ssh keys for access. 
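To dig into the slow tempest runs discussed above, the logs live inside the functest container on the jumphost. The docker commands below are generic; the results path is an assumption based on the /home/opnfv/functest paths seen elsewhere in this log:

    docker ps                         # find the running functest container
    docker exec -it <container-id> bash
    ls /home/opnfv/functest/results   # assumed location of the tempest/rally output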
17:17:29 give me your ssh public keys 17:17:54 ok, let's do it with private chat 17:18:03 sure 17:18:47 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9I5Pyg3tND4sV9EoEW3jCqY+91IOdkNCAe8xscI1mRcVLlN7/0YHFCQLFX8q+lTAMWhguMkoUf4y0w6rMmDE0c59XKUNYUPHMPT6vEBbHz9JjCZdEhHGouDJmSAxS0PBLrv+nj+P9fhFVUdxf+pWzaCID8zZDfBq2k7KGtyREmV/l1jkIatDUjh5Hj0lenkYvH85nrAQaAWa3WoLieqj8Ve9ruoguhV6/IjWbUtU+JX/9FLyn10Wq+ArySIbDbwD2ajI//4E1XPDfztzjsscU3sSUw9vwaP78/1XOHPKeEvgd1UBIG4TzaTuRgLmsTtWar409sZ8QsPkE2CwkS4OB ejolaus@ejolaus-dev 17:18:52 oops, sorry :) 17:19:04 no worrues 17:19:40 ok now you can try using 10.2.66.2 intel pod6 17:19:45 with user jenki 17:19:53 sorry user jenkins 17:21:17 can I try first the pod5? I already have that vpn opened 17:21:29 ok just a moment then 17:23:26 please try now 17:23:40 narindergupta: I opened vpn also for pod6, and it works 17:23:42 Im in, thanks! 17:23:49 cool 17:25:09 I will run some test on pod5, is that ok? 17:25:29 is the deployment up? 17:26:21 root@ce0974c487f4:~# neutron net-list 17:26:21 Unable to establish connection to http://10.4.1.27:9696/v2.0/networks.json 17:26:33 what is this? that address is not showed in the endpoint list 17:27:31 just now checked looks like onos tried to installed ans habing issue with neutron 17:27:55 i can retart the odl deployment might take an hour if you are ok with that 17:27:55 ? 17:28:17 narindergupta: ok, but I might check tomorrow.. 17:28:34 overnight job will on that 17:28:53 and install will override again as this is ci pods 17:29:24 or you can look into it tomorrow morning your time 17:29:51 as onos job is scheduled at 8:00 AM CST i believe 17:31:25 ok 03:54:27 yuanyou: hi 03:54:57 yuanyou: i have send you an information and it seems you need to do charm sync of neutron-api-onos as well 04:53:27 narindergupta : yes,I saw it ,and I am test it . 04:53:42 yuanyou: ok 04:54:16 yuanyou: thanks also it is good habit to to do charmsync for your charms in case you are taking it from openstack cahrms 04:56:40 narindergupta: yes ,that will be fine 04:57:56 yuanyou: currently there won't be any onos build op into pod5 as functest team need this for debugging temptest failures. But we can use intel pod6 once you are able to correct the charm sucessfully? 05:01:24 narindergupta: yes, I see 13:11:46 narinderg_cfk: ping when you are up 13:30:07 hi jose_lausuch 13:30:28 I so you aborted a job after 4 hors.. 13:30:31 hours 13:31:04 jose_lausuch: yesternight i did not abort 13:32:02 jose_lausuch: and intel pod5 was success daily 13:32:14 https://build.opnfv.org/ci/view/joid/job/functest-joid-intel-pod5-daily-brahmaputra/52/console 13:32:53 jose_lausuch: so does for intel pod6 https://build.opnfv.org/ci/view/joid/job/functest-joid-intel-pod6-daily-master/55/console but need to see the failed test cases 13:34:23 jose_lausuch: but yardstick test cases failed 13:42:52 jose_lausuch: it seems increasing timeout completed atleast the test in all labs. 13:43:06 narinderg_cfk: I mean this one, the previous one 13:43:06 https://build.opnfv.org/ci/view/joid/job/functest-joid-intel-pod5-daily-brahmaputra/51/console 13:43:15 narinderg_cfk: anyway, the blue ball took 5 hr... 13:43:38 narinderg_cfk: the problem is cinder 13:43:43 jose_lausuch: correct. 
yes above aborted because i wanted to run overnight builkd 13:43:46 | cinder | 52:41 | 50 | 98.82% | 13:44:03 ok one hour 13:44:42 I will compare this 13:44:46 with another installer 13:45:30 in orange pod2 only 19.31 13:45:49 | cinder | 19:31 | 50 | 100.00% | 13:46:11 but they have ssds 13:47:06 jose_lausuch: but heat test cases are failing on all pods | heat | 03:52 | 2 | 7.69% | 13:47:42 narinderg_cfk: http://pastebin.com/raw/kFdKVDRQ 13:47:51 look at keystone 13:47:58 took 3 hours... that is the real problem 13:48:00 and cinder 1 hr 13:49:19 i think all service took extra time 13:49:33 for cinder I can understand that there are not SSDs 13:49:40 correct 13:49:40 but for keystone?? 13:49:47 3 hrs?? 13:49:54 there is a network issue for sure 13:50:38 yeah looks like 13:51:06 jose_lausuch: how to figure it out? 13:51:28 narinderg_cfk: I'm trying to login to the pod, but I am not sure how to start troubleshooting this... 13:53:16 jose_lausuch: i think we need help 13:55:47 jose_lausuch: lkets discuss in pharos channel please join opnfv-pharos 14:33:55 jose_lausuch: ok so manual working of openstack is fast enough but performance detriot during test. can u check the logs on thesystem what point it took more time 14:36:28 narindergupta: sorry, need to waity, Im reporting in the release meetning 15:46:04 jose_lausuch: i am back again sorry it was power outage. 15:46:24 narindergupta: no prob 15:49:23 catbus1: hi 15:53:23 narindergupta: Im back on intel pod 5 15:53:27 running test by test 15:53:42 jose_lausuch: ok 15:58:17 we have yet another problem 15:59:20 whats the issue? 16:00:50 now vping doesnt work 16:01:57 narindergupta: JOID uses the same network segment for public and admin network... 16:02:28 jose_lausuch: there are different network 16:02:43 if I do keystone endpoint-list 16:03:03 jose_lausuch: yeah for endlist we are defining the public segment seperate 16:03:06 http://hastebin.com/odovurufog.sm 16:03:10 its all on admin network 16:03:16 10.4.1.0/24 for all 16:03:22 ah 16:03:37 for endpoints only 16:03:38 I wonder if that could also be an issue 16:04:10 is there a joid gui of the deployment? 16:04:20 yes it is on admin entwork 16:04:42 is there later on isolation for storage/public/admin networks ? 16:05:48 there is isolation of data, public and admin 16:05:52 ok, I see that 10.2.65.0/24 is the public range 16:06:00 correct 16:06:02 ok 16:06:09 then, ignore my comment :) 16:06:12 :) 16:06:25 vping failed 16:06:38 I will run it again and not clean the instances 16:06:39 whats the reason? 16:06:43 so that you can login to pod5 and check 16:06:47 cannot ping the floating ip 16:07:33 which port flatin ip created? 16:07:40 ext-net or somewhere else 16:08:08 ya 16:08:15 can you login to the deployment? 16:10:08 yeah i am logging in 16:10:12 nova list 16:10:19 and then you'll see there are 2 VMs 16:10:23 one of them with a floating ip 16:10:40 you can check too neutron floatingip-list 16:14:28 which subnet? 16:14:54 let me create the router as i used to do and retry 16:16:12 narindergupta: there is already a router 16:18:01 jose_lausuch: i cna not figure out the n why ping is not working. 
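The vPing checks used above, gathered in one place (instance names are the ones functest creates in this log; run with the OpenStack admin credentials loaded):

    nova list                               # both vPing VMs, one with a floating IP
    neutron floatingip-list                 # confirm the floating IP association
    ping -c 3 <floating-ip>                 # reachable from the jumphost?

    # Did the guest actually get a DHCP lease?  The cirros console shows
    # "Sending discover... / No lease, failing" when it did not.
    nova console-log opnfv-vping-2 | grep -A 16 'Starting network'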
I know when i test is after fresh install it works 16:18:20 I Assigned a new floating ip to the first vm 16:18:22 and that is pingable 16:18:38 hun thats wiered 16:18:56 very 16:19:03 do a nova list 16:19:09 I will remove floating ip from vm 2 16:19:12 and assign it again 16:19:42 yeah i can see 16:19:59 .84 pings but not .83 16:21:48 narindergupta: nova console-log opnfv-vping-2 16:22:46 jose_lausuch: i am in 16:22:54 nova console-log opnfv-vping-2|grep 'ifconfig' -A 16 16:23:00 the second VM didnt get the ip 16:23:33 udhcpc (v1.20.1) started 16:23:33 Sending discover... 16:23:33 Sending discover... 16:23:33 Sending discover... 16:23:33 Usage: /sbin/cirros-dhcpc 16:23:34 No lease, failing 16:23:34 WARN: /etc/rc3.d/S40-network failed 16:23:35 cirros-ds 'net' up at 181.97 16:23:35 checking http://169.254.169.254/2009-04-04/instance-id 16:25:16 jose_lausuch: could it be dhcp issue? 16:25:29 I dont know 16:25:31 Im trying another thing 16:27:46 ok 16:27:56 how can I access horizon? 16:28:51 you can do X redirect through ssh 16:29:07 and start the firefox on the jumphost 16:29:40 there is vncserver on 10.4.0.255 as well 16:29:56 password is ubuntu 16:30:21 vip for dashboard is 10.4.1.21 16:30:29 narindergupta: the second VM doesnt get the ip from the dhcp... 16:30:48 but floating ip pings? 16:31:42 also nova lsit shows | ffc42fc4-065f-4cfe-8276-cbbb126752be | opnfv-vping-2 | ACTIVE | - | Running | vping-net=192.168.130.4, 10.2.65.87 | 16:31:59 which means it got the ip somehow. 16:32:25 assigned but dhco looks like not giving the ip on request. 16:32:58 narindergupta: I use port forwarding, so I open firefox on my local env :) 16:33:32 narindergupta: assigning the ip is not a problem, but if you check the console-log you'll see that dhcp doesnt work 16:33:44 narindergupta: what is the user/password for horizon? 16:34:13 admin openstack 16:34:47 ok thanks, taht works 16:34:51 Sending discover... 16:34:51 Usage: /sbin/cirros-dhcpc 16:34:51 No lease, failing 16:34:51 WARN: /etc/rc3.d/S40-network failed 16:34:51 cirros-ds 'net' up at 181.25 16:35:10 do you think it could be image issue? 16:35:17 why image? 16:35:21 the first VM gets the ip correctly 16:35:37 it gives the usage error 16:36:16 nova console-log opnfv-vping-1|grep 'Starting network...' -A 10 16:36:23 usage? 16:36:53 Usage: /sbin/cirros-dhcpc 16:36:53 (10:34:49 AM) narindergupta: No lease, failing 16:37:59 let me check on neutron-gateway node 16:41:21 ok 16:42:26 dhcp lease does not show ip its there for .3 16:42:31 but not for .4 16:42:48 that's bad :) 16:43:06 i know looks like request did not reached to dhcp 16:44:51 jose_lausuch: even neutron logs no sign of .4 while .3 its there 16:45:03 do you know mac address of the interface i can search 16:45:21 for 2nd vm 16:45:27 yes 16:45:57 eth0 Link encap:Ethernet HWaddr FA:16:3E:57:24:56 16:47:11 no request with this mac 16:48:27 can u try to create one more vm? 16:48:42 lets see how does it behave with the same interface? 16:48:43 yes 16:50:07 narindergupta: done 16:50:12 ok i am capturing the dhcp agent log 16:50:18 called test-vm 16:50:41 Starting network... 16:50:41 udhcpc (v1.20.1) started 16:50:41 Sending discover... 16:51:57 again sending discover 16:52:01 no leases?? 16:53:46 i am seeing this in one of dhcp agent log 2016-02-16 07:32:40.432 12656 ERROR neutron.agent.dhcp.agent RemoteError: Remote error: IpAddressGenerationFailure No more IP addresses available on network 303fd1aa-10fd-4f73-b8c1-475fdd8f0a09. 16:53:46 not now but issue was seen earlier 16:53:47 Sending discover... 
16:53:47 Sending discover... 16:53:47 Sending discover... 16:53:47 Usage: /sbin/cirros-dhcpc 16:53:47 No lease, failing 16:53:47 WARN: /etc/rc3.d/S40-network failed 16:53:57 aha! 16:54:00 interesting 16:54:01 looks like some how related 16:54:24 but it doesnt make sense 16:54:31 | vping-subnet | 192.168.130.0/24 | {"start": "192.168.130.2", "end": "192.168.130.254"} | 16:54:36 the range is quite wide! 16:55:16 can u check whether on dashboard dhc agent services are up 16:55:17 ? 16:55:29 node6-control Enabled Up 16:55:30 yes 16:55:56 yeah it matches here on the node 16:56:02 (d4c55165-70d2) 16:56:02 16:56:02 192.168.130.2 16:56:02 network:dhcp Active UP 16:56:05 that is the port 17:00:27 hun so all services are up ports are up 17:01:54 yes 17:03:08 i think its worth to clear the network including bridges and router and recreate it again and check some time due to no leases available its stuck there and is waiting for leases to avilable 17:03:28 ok 17:03:30 I will do that 17:04:49 thanks 17:06:12 done 17:06:19 I will run the same test, but on a different network 17:06:25 192.168.40.0/24 for example 17:09:00 narindergupta: can you check again? 17:09:17 narindergupta: it worked... 17:09:33 Feb 16 17:08:59 node6-control dnsmasq-dhcp[2860]: DHCPACK(ns-f2fff5fd-05) 192.168.140.4 fa:16:3e:f7:06:3e host-192-168-140-4 17:09:54 yeah i can verify in syslog i am getting dhcp assigned ip 17:10:09 now the VMs are trying to ping each other 17:10:14 ok 17:12:39 something is slow or doesnt work 17:14:08 whats happening 17:14:21 not sure, the test is hanging at some point 17:14:24 I will abort it 17:14:59 their ping time was 130 s 17:15:06 earlier in the lab 17:16:12 ya, not sure 17:16:17 if I run it manually it works 17:16:29 ya 17:16:33 so what was the problem? 17:16:36 I would like to retest it 17:16:42 with the same network 17:23:02 narindergupta: you know what? 17:23:05 now it doesnt work 17:23:23 vm2 doesnt get an ip from dhcp 17:23:39 how is that possible? 17:23:49 I removed and created the network again, with the same range 17:23:56 the first vm got an ip 17:24:03 but same issue with vm2 17:24:52 narindergupta: I am creating another VM manually 17:24:57 can you check the dhcp logs? 17:40:57 no sign of ip 17:41:11 jose_lausuch: no sign of new ip 17:41:56 but i have following host listed 17:42:08 fa:16:3e:85:08:81,host-192-168-140-1.openstacklocal.,192.168.140.1 17:42:08 fa:16:3e:98:97:81,host-192-168-140-2.openstacklocal.,192.168.140.2 17:42:08 fa:16:3e:3d:99:fa,host-192-168-140-3.openstacklocal.,192.168.140.3 17:42:08 fa:16:3e:11:53:88,host-192-168-140-4.openstacklocal.,192.168.140.4 17:42:08 fa:16:3e:6b:8c:a6,host-192-168-140-5.openstacklocal,192.168.140.5 17:43:27 that is strange 17:44:43 yeah in leases file its not there but in host file it exist 17:45:48 and i have liberty version of neutron-dhcp-agent 17:46:06 what ODL version is that? 17:46:11 Be 17:46:47 i think we should try the same test on orange pod2 in case something works there? 18:28:36 catbus1: any update on the user guide? 02:16:00 narindergupta: I found clues as to why MAAS is not creating the nodes. See the log entry pasted below 02:16:47 https://www.irccloud.com/pastebin/tpmje5Ft/ 02:17:46 narindergupta: when I change the parameter power_parameters_power_address to power_address and send the same command from the shell, it works. 
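Server-side DHCP checks matching the exchange above, run on the node hosting the neutron DHCP agent (node6-control in this log); the file locations are the stock neutron/dnsmasq paths and may differ on your deploy:

    juju ssh neutron-gateway/0                            # or ssh to the control node running the dhcp agent

    grep dnsmasq-dhcp /var/log/syslog | tail              # DHCPDISCOVER / DHCPACK per MAC
    sudo grep -i error /var/log/neutron/dhcp-agent.log | tail

    # dnsmasq's per-network host and lease files (network UUID from `neutron net-list`).
    sudo cat /var/lib/neutron/dhcp/<network-uuid>/host    # expected MAC -> IP mappings
    sudo cat /var/lib/neutron/dhcp/<network-uuid>/leases  # leases actually handed out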
02:18:18 controlnodeid=`maas maas nodes new autodetect_nodegroup='yes' name='node1-control' tags='control' hostname='node1-control' power_type='virsh' mac_addresses=$node1controlmac power_address='qemu+ssh://'$USER'@192.168.122.1/system' architecture='amd64/generic' power_parameters_power_id='node1-control' | grep system_id | cut -d '"' -f 4 ` 02:19:21 Above is the change I tried for this in 02-maasdeploy.sh, which was the only place I saw this parameter name... but that did not change what was sent. So there is definitely a bug somewhere still. 02:20:16 note also that resending the command as above still did not set eth1 to auto on the controller... I had to do that manually 04:07:57 yuanyou: 04:08:29 i am not seeing any change in the charms related to neutron-api-onos from the charm sync 04:41:03 yuanyou: on intel pod6 i can verify that after doing the charm sync we were able to create the network. 06:12:56 narindergupta: I had synchronized in my local environment, but there is some error, so I didn't commit the changes, and I should do more tests. 15:56:25 bryan_att: on your intel NUCs can you try to redeploy, as i believe the nonha issue with keystone should be fixed now. 15:57:01 ok, the last deploy timed out last night. Did you see my notes from yesterday on the power_parameters? 16:03:58 narindergupta: I have the joid-walk meeting today at 1PM PST with your team. I'd like to get a successful deploy before then. I can restart the JuJu deploy now but I think the issues I reported last night would be good to address asap also. I think we can fix whatever issue is requiring me to manually create the machines in MAAS. 16:28:42 narindergupta: I just recloned the repo and am restarting the maas deploy. 16:28:54 thanks 16:29:04 bryan_att: you still need to add the nodes manually 16:29:27 narindergupta: did you see my earlier note about the bug I found? 16:29:36 i am working with the maas-deployer team on building maas-deployer so that fixes will be available soon 16:29:56 no not yet 16:30:03 i still have to check 16:32:38 I can get the nodes created through the command "ssh -i /home/opnfv/.ssh/id_maas -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=quiet ubuntu@192.168.10.3 maas maas nodes new autodetect_nodegroup='yes' name='node2-compute' tags='compute' hostname='node2-compute' power_type='ether_wake' mac_addresses='B8:AE:ED:76:C5:ED' power_address='B8:AE:ED:76:C5:ED' architecture='amd64/generic'" 16:33:04 when I change the parameter power_parameters_power_address to power_address and send the same command from the shell, it works. 16:51:00 ok 16:58:34 yeah i made changes in maas-deployer accordingly and hopefully the next build of maas-deployer will fix it 17:05:01 arturt: Error: Can't start another meeting, one is in progress. Use #endmeeting first. 17:05:19 #endmeeting
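For reference, the manual MAAS enlistment workaround described above, restated as one command (MAAS 1.x CLI; the MAC addresses and the ether_wake power type are from bryan_att's NUC lab, so substitute your own):

    ssh -i /home/opnfv/.ssh/id_maas -o UserKnownHostsFile=/dev/null \
        -o StrictHostKeyChecking=no ubuntu@192.168.10.3 \
        maas maas nodes new \
            autodetect_nodegroup='yes' \
            name='node2-compute' hostname='node2-compute' tags='compute' \
            architecture='amd64/generic' \
            mac_addresses='B8:AE:ED:76:C5:ED' \
            power_type='ether_wake' \
            power_address='B8:AE:ED:76:C5:ED'   # power_address, not power_parameters_power_address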