07:08:22 <joehuang> #startmeeting multisite
07:08:22 <collabot`> Meeting started Thu Apr 14 07:08:22 2016 UTC.  The chair is joehuang. Information about MeetBot at http://wiki.debian.org/MeetBot.
07:08:22 <collabot`> Useful Commands: #action #agreed #help #info #idea #link #topic.
07:08:22 <collabot`> The meeting name has been set to 'multisite'
07:08:29 <joehuang> yes
07:08:33 <joehuang> sure will
07:08:43 <SAshish> I was busy finalizing tempest and the summit visa paperwork
07:08:55 <SAshish> have to check the VPN credentials
07:09:12 <joehuang> ok, so you will also attend Austin summit
07:09:26 <SAshish> it is not confirmed yet :)
07:09:46 <joehuang> For devstack, first download devstack
07:09:59 <SAshish> okay
07:10:09 <joehuang> http://docs.openstack.org/developer/devstack/
07:10:12 <joehuang> it's simple
07:11:07 <joehuang> and then copy https://github.com/openstack/kingbird/blob/master/devstack/local.conf.sample into the devstack folder you downloaded and rename it to local.conf
07:12:44 <SAshish> Dimitri has pinged me
07:12:59 <SAshish> there is some problem with his IRC. He will join soon
07:13:16 <joehuang> ok, no problem
07:13:40 <SAshish> okay. so this local.conf is enough?
07:13:45 <SAshish> then run stack.sh?
07:13:50 <joehuang> yes
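A minimal sketch of the devstack flow described above (repository URLs and paths here are illustrative assumptions, not from the meeting):

    # clone devstack
    git clone https://git.openstack.org/openstack-dev/devstack
    cd devstack
    # fetch the kingbird sample config and save it as local.conf
    curl -o local.conf https://raw.githubusercontent.com/openstack/kingbird/master/devstack/local.conf.sample
    # run the installer; the kingbird code is pulled from git automatically
    ./stack.sh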
07:13:55 <sorantis> hey
07:14:00 <sorantis> finally joined
07:14:00 <joehuang> hey
07:14:05 <sorantis> #info dimitri
07:14:12 <Malla> #info malla
07:14:14 <SAshish> so the code is picked up from github and installed
07:14:15 <joehuang> #info joehuang
07:14:19 <SAshish> #info Ashish
07:14:31 <joehuang> The code will be picked up automatically
07:14:46 <joehuang> Hi, Dimitri, and Malla
07:14:52 <SAshish> Hi All
07:15:06 <joehuang> Ashish and I just talked about how to use devstack for functional testing
07:15:24 <joehuang> I just fixed the bug and devstack works now
07:15:40 <joehuang> but the patch needs to be reviewed and merged for you to use devstack
07:15:48 <SAshish> yes
07:16:03 <joehuang> #link https://review.openstack.org/#/c/304368/
07:16:27 <joehuang> while trying to fix the devstack issue, I found another issue that must be addressed first
07:16:44 <joehuang> #link https://review.openstack.org/#/c/305593/
07:17:13 <joehuang> the tables cannot be synced to the database because the command is not executable
07:17:28 <sorantis> hm
07:17:42 <sorantis> good catch
07:17:45 <joehuang> so please review these two patches
07:18:18 <SAshish> KINGBIRD_ENGINE=$KINGBIRD_DIR/kingbird/cmd/engine.py KINGBIRD_ENGINE_CONF=$KINGBIRD_CONF_DIR/engine.conf KINGBIRD_ENGINE_LISTEN_ADDRESS=${KINGBIRD_JD_LISTEN_ADDRESS:-0.0.0.0}
07:18:26 <SAshish> we don't have engine.conf, right?
07:18:32 <SAshish> KINGBIRD_ENGINE_CONF=$KINGBIRD_CONF_DIR/engine.conf
07:18:33 <joehuang> After these two patches, curl requests are routed correctly
07:18:47 <sorantis> no, we don’t
07:18:52 <sorantis> it’s kingbird.conf
07:18:54 <joehuang> it's created during the devstack setup
07:19:01 <joehuang> it's the same
07:19:27 <joehuang> ok, we can rename these two confs to the same name, then the devstack scripts need to be updated
07:19:47 <SAshish> yeah. reviewed
07:20:20 <SAshish> Joe, I have addressed your comment on the tempest patch
07:20:25 <joehuang> the file is created during devstack setup, not generated with tox -egenconfig
07:20:27 <SAshish> to create a project for the tests
07:20:31 <joehuang> ok
07:20:36 <joehuang> great
07:20:42 <SAshish> please review
07:20:53 <SAshish> #link https://review.openstack.org/#/c/304179/
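For context, creating a separate project for the tests (as discussed for the patch) could look roughly like this with the openstack CLI; the names and the role below are placeholders, not taken from the patch:

    # create an isolated project and user for the kingbird tempest tests
    openstack project create kingbird_test_project
    openstack user create --project kingbird_test_project --password secret kingbird_test_user
    # role name depends on the deployment (e.g. Member or _member_)
    openstack role add --project kingbird_test_project --user kingbird_test_user Member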
07:21:11 <SAshish> okay. so with tox it will create kingbird.conf
07:21:17 <SAshish> with devstack it creates engine.conf and api.conf
07:21:18 <SAshish> ?
07:21:32 <SAshish> we need to have the same for both tox and devstack
07:23:10 <joehuang111> hello, my previous connection seems to be lost
07:23:38 <joehuang111> hi
07:24:28 <sorantis> ashish, yes we need to align, and we need to change the config filenames to kingbird, just like we have for a regular kingbird setup
07:25:12 <sorantis> hi joe
07:25:32 <SAshish> yes, have commented on Joe's patch, Joe will update that
07:25:36 <sorantis> I just said “yes we need to align, and we need to change the config filenames to kingbird, just like we have for a regular kingbird setup”
07:25:45 <joehuang_> hi, just come back
07:25:54 <SAshish> Hi
07:26:01 <joehuang_> agree, I'll change after the meeting
07:26:21 <joehuang_> just use the same kingbird.conf
07:26:25 <SAshish> yeah
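A rough illustration of the agreed rename in the devstack settings, based on the KINGBIRD_ENGINE_CONF variable quoted earlier (the other variable names are assumptions, and the actual patch may differ):

    # use a single kingbird.conf for both the api and engine services
    KINGBIRD_CONF=$KINGBIRD_CONF_DIR/kingbird.conf
    KINGBIRD_ENGINE_CONF=$KINGBIRD_CONF_DIR/kingbird.conf   # was engine.conf
    KINGBIRD_API_CONF=$KINGBIRD_CONF_DIR/kingbird.conf      # was api.conf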
07:26:57 <joehuang_> ok, I got the pgp from the Intel lab, but no time yet to verify whether it works
07:27:05 <joehuang_> try to run kingbird first
07:27:42 <sorantis> we have the intel lab till 24 I think
07:27:46 <sorantis> April that is
07:28:03 <sorantis> sorry 22.
07:28:07 <joehuang_> You mean the lab will be shutdown after 24?
07:28:13 <joehuang_> You mean the lab will be shutdown after 22?
07:28:15 <sorantis> I’ve asked jack if we get to keep the environment after that
07:28:31 <sorantis> I think it will be on maintenance for some time, and then it’ll be back
07:28:44 <sorantis> Jack’s intention is to put back the HDDs into the new servers
07:28:53 <joehuang_> ok
07:29:04 <joehuang_> how long until it comes back?
07:29:11 <sorantis> that I don’t know
07:29:31 <sorantis> it shouldn’t be too long
07:29:49 <joehuang_> I got VPN access to the Huawei Shanghai lab, two blades, one works, the other one can't be accessed
07:30:09 <sorantis> try to access the Intel lab
07:30:15 <joehuang_> I'll
07:30:19 <sorantis> the environment is already in place there
07:30:35 <sorantis> you can ping me when you’ve accessed and I can show you how to navigate there
07:30:38 <joehuang_> Yes
07:30:47 <joehuang_> that's great
07:31:05 <sorantis> Ashish, I suggest you also request a vpn access
07:31:08 <joehuang_> I also think we don't need too many environments
07:31:18 <sorantis> it’s plain openstack
07:31:25 <sorantis> two openstacks with shared keystone to be precise
07:31:26 <joehuang_> otherwise, lab work will take too much effort
07:31:34 <SAshish> yes. I will do it today
07:32:09 <sorantis> Joe, agree, that’s why for now let’s use it for our immediate needs
07:32:27 <sorantis> Keystone there is not configured for Fernet tokens yet
07:32:45 <sorantis> There is a Galera cluster for controllers
07:32:53 <sorantis> hence for the Keystone database
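For reference, moving that Keystone to Fernet tokens would typically involve something like the following (a generic sketch, not specific to the Intel lab setup):

    # keystone.conf:
    #   [token]
    #   provider = fernet
    # create the key repository, then distribute it to the other clustered controllers
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone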
07:32:58 <joehuang_> And I found that if we integrate tempest into Functest and the installer, then the CI job will work automatically
07:33:11 <joehuang_> most work will be on the installer part for the integration
07:33:25 <sorantis> how’s that?
07:33:46 <joehuang_> because Functest is already in the CI daily job
07:34:11 <joehuang_> so if we integrate tempest in the Functest, it'll be part of Functest
07:34:27 <sorantis> will functest create two regions?
07:34:33 <joehuang_> and Functest will call the installer to install all packages first
07:34:49 <joehuang_> no, that's something the installer doesn't do today
07:35:11 <joehuang_> so, the big part is in the installer
07:35:23 <joehuang_> to support multi-region installation
07:35:38 <sorantis> what installer are we talking about?
07:36:03 <sorantis> I know there’s Fuel@OPNFV and some others
07:36:31 <joehuang_> we need to select one for this release, we don't have enough resources to work on all of them
07:36:40 <joehuang_> I assume Fuel for the first one
07:36:55 <sorantis> correct
07:37:06 <sorantis> this will be quite easy with Fuel
07:37:18 <sorantis> however the tricky bit would be the postinstallation configuration
07:37:24 <sorantis> such as centralizing Keystone
07:37:50 <joehuang_> yes
07:37:51 <sorantis> while Fuel can create two OpenStack environments, it currently doesn’t know how to share one Keystone in multiple envs
07:38:05 <sorantis> I’ll speak with the Fuel@OPNFV PTLs about it
07:38:30 <joehuang_> great, it should be a feature of Fuel
07:38:39 <SAshish> the latest Fuel release came out last month
07:38:52 <joehuang_> but the fuel plugin we have to do ourselves
07:38:57 <SAshish> not sure if they have this
07:39:04 <sorantis> no, they don’t
07:39:08 <SAshish> okay
07:39:11 <sorantis> but maybe they are willing to help us
07:39:29 <sorantis> in fact it will also benefit them by and large
07:39:39 <joehuang_> for sure
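As a rough sketch of the post-installation step discussed above, centralizing Keystone mostly means registering the second environment's services under their own region in the shared Keystone and pointing those services at it; hostnames, ports and region names below are placeholders:

    # in the shared keystone, register the second environment's endpoints under a new region
    openstack endpoint create --region RegionTwo compute public http://controller2:8774/v2.1/%\(tenant_id\)s
    openstack endpoint create --region RegionTwo network public http://controller2:9696
    # then reconfigure the second environment's services to authenticate against the shared keystone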
07:39:59 <joehuang_> ok, that's the information I would like to share and discuss
07:40:14 <joehuang_> how do I change my nickname back?
07:40:21 <SAshish> lol
07:40:47 <SAshish> Dimitri
07:40:59 <SAshish> have you done "Submitting package information"
07:41:09 <SAshish> on pypi.org for kingbird
07:41:10 <SAshish> ?
07:41:14 <sorantis> not yet
07:41:18 <sorantis> there’s no release yet
07:41:39 <joehuang> to support pip install, we need this
07:41:48 <SAshish> okay. will do it
07:42:09 <joehuang> #action register package in pypi.org
07:42:10 <SAshish> might need your help in filling in the details
07:42:22 <sorantis> let’s finish with the present commits and tag the release
07:42:23 <SAshish> definitely need help
07:42:29 <sorantis> then I’ll register it in pypi.org
07:42:40 <joehuang> will help as needed
07:42:58 <joehuang> before release tagging
07:42:59 <SAshish> there is a lot of information that we need to fill in there
07:43:45 <joehuang> let's review and merge the tempest test cases first
07:43:50 <SAshish> now, if the current commits are through
07:43:54 <SAshish> then are we good to go?
07:44:07 <SAshish> I mean development wise for mitaka tagging?
07:44:17 <joehuang> and the reported bugs
07:44:21 <SAshish> yeah
07:44:33 <SAshish> there are commits for each reported bug, right?
07:46:12 <joehuang> yes
07:46:12 <SAshish> cool
07:46:12 <joehuang> as for the default configuration patch, I'll abandon it since Dimitri thinks it's not needed
07:46:12 <sorantis> I wonder
07:46:12 <sorantis> if we have added the openstackci user to the pypi kingbird space
07:46:12 <sorantis> then shouldn’t it be published automatically?
07:46:12 <joehuang> but we need to register in pypi.org first
07:46:12 <SAshish> https://pypi.python.org/pypi/kingbird/1.0.0
07:46:37 <sorantis> https://pypi.python.org/pypi/kingbird
07:46:37 <sorantis> you mean this?
07:46:37 <SAshish> yes
07:46:39 <SAshish> Package Index Owner: openstackci, sorantis
07:46:50 <joehuang> It should work like this, but not sure
07:47:19 <SAshish> but many things are missing
07:47:44 <sorantis> just replaced the version number
07:47:44 <joehuang> no package
07:47:44 <SAshish> license, download link
07:47:44 <sorantis> Ashish, what exactly?
07:47:44 <SAshish> Name, Version, Author, Author email, Maintainer, Maintainer email, Home page, License
07:47:47 <SAshish> Summary, Description
07:47:57 <SAshish> Keywords, Platform, Download URL, Hidden, Bugtrack URL, Requires, Provides
07:48:12 <sorantis> that yes
07:48:17 <sorantis> I’ve just read: "New packages without any releases need to be manually registered on PyPI."
07:48:42 <joehuang> that's what Ashish mentioned
07:48:43 <sorantis> ok
07:48:46 <SAshish> yeah
07:48:50 <sorantis> Let’s register when we have a tag
07:49:07 <joehuang> before the tag, dimitri
07:49:51 <joehuang> before the release, so that the package corresponds with the tag
07:49:59 <sorantis> ok
07:50:02 <sorantis> never done this
07:50:09 <sorantis> will do it this week
07:50:14 <joehuang> me too
07:50:16 <SAshish> me too :)
07:50:36 <SAshish> Guys, one more thing.
07:52:14 <joehuang> ok, let's conclude the meeting
07:52:14 <joehuang> please
07:52:14 <SAshish> I wanted to tell you this since last week, but forgot
07:52:14 <SAshish> nova keypair doesn't take tenant as an argument
07:52:14 <SAshish> and also, keypair usage is not listed in nova limits
07:52:14 <SAshish> right now we are doing a len(nova.keypairs.list())
07:52:14 <SAshish> nova ==> client object created with the admin user
07:52:26 <SAshish> so for all tenants, it will give the count of keypairs in the admin tenant
07:52:29 <joehuang> http://developer.openstack.org/api-ref-compute-v2.1.html#keypairs-v2.1
07:53:11 <SAshish> oh, it is not in the CLI for sure
07:53:25 <SAshish> even the API call doesn't take any filter
07:53:32 <joehuang> the CLI sometimes lags behind the API
07:54:00 <SAshish> but even now I am sure that
07:54:16 <joehuang> It's based on tenant
07:54:21 <SAshish> no
07:54:29 <SAshish> it is based on the current nova client object
07:54:43 <SAshish> i.e. the credentials with which the nova client object was created
07:55:05 <joehuang> You mean user?
07:55:06 <SAshish> # /v2.1/​{tenant_id}​/os-keypairs
07:55:17 <SAshish> here also, I feel you can change tenant_id
07:55:22 <SAshish> this is admin tenant Id
07:55:27 <SAshish> Lists keypairs that are associated with the account.
07:55:33 <joehuang> no, don't need admin api
07:55:41 <joehuang> can be general tenant
07:55:51 <SAshish> admin or any ID with which token is generated
07:55:53 <SAshish> Lists keypairs that are associated with the account.
07:56:06 <joehuang> you can use demo tenant in devstack to see if it works
07:56:08 <SAshish> account == the account with which token is created
07:56:54 <joehuang> the account is user_id
07:57:14 <SAshish> It will give us the count for the user with which the client object is created
07:57:20 <SAshish> see,
07:57:22 <SAshish> Eg:
07:57:29 <SAshish> like with nova list, we have search options
07:57:39 <SAshish> so that we can list servers for any tenant
07:57:44 <SAshish> we don't have the same for keypair
07:58:03 <joehuang> OK, understood, you mean there is no way to list other users' keypairs
07:58:04 <SAshish> keypair only lists the current account/user's keypairs
07:58:09 <SAshish> exactly
07:58:24 <joehuang> this is an issue
07:58:27 <SAshish> It has to be there
07:58:48 <SAshish> yeah. there has to be a bug on this
07:58:52 <joehuang> Even admin isn't able to list other users' keypairs
07:59:03 <SAshish> exactly
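To make the limitation concrete, a quick illustration (hostname and variables are placeholders; at the time of this discussion neither call accepts a tenant or user filter, as noted above):

    # even with admin credentials, only the calling user's keypairs are returned
    nova keypair-list
    # the API behaves the same way: the result is scoped to the account behind the token
    curl -H "X-Auth-Token: $TOKEN" http://controller:8774/v2.1/$TENANT_ID/os-keypairs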
07:59:14 <joehuang> We can discuss this in Austin with some Nova cores
07:59:47 <joehuang> to see if this is an issue, but from a security perspective, I guess they intend it to be this way
07:59:48 <SAshish> but for our release?
08:00:06 <joehuang> use other quota items
08:00:06 <sorantis> we mention it in the release notes
08:00:15 <SAshish> okay.
08:00:16 <SAshish> fine
08:00:19 <joehuang> for test purposes
08:00:31 <joehuang> and this one, just as Dimitri proposed
08:00:31 <SAshish> yeah. have used ram, cores and instances count
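For reference, the ram, cores and instances usage figures are available from the limits API, e.g. (option support may vary by client version):

    # absolute limits include totalRAMUsed, totalCoresUsed and totalInstancesUsed
    nova absolute-limits --tenant <tenant-id>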
08:00:31 <sorantis> ok
08:00:38 <sorantis> let’s close the meeting
08:00:43 <joehuang> ok
08:00:49 <joehuang> thanks for attending the meeting
08:00:49 <SAshish> let's review and close the commits
08:00:52 <joehuang> see you next time
08:00:53 <SAshish> thanks guys
08:00:55 <SAshish> sure
08:00:59 <joehuang> #endmeeting