08:00:32 <morgan_orange> #startmeeting Functest weekly meeting May 24th
08:00:32 <collabot`> Meeting started Tue May 24 08:00:32 2016 UTC.  The chair is morgan_orange. Information about MeetBot at http://wiki.debian.org/MeetBot.
08:00:32 <collabot`> Useful Commands: #action #agreed #help #info #idea #link #topic.
08:00:32 <collabot`> The meeting name has been set to 'functest_weekly_meeting_may_24th'
08:00:39 <morgan_orange> #topic call roll
08:00:41 <lhinds> #info lhinds
08:00:42 <morgan_orange> #info Morgan Richomme
08:00:48 <juhak> #info Juha Kosonen
08:00:48 <viktor_nokia> #info Viktor Tikkanen
08:00:55 <raghavendrachari> #info RaghavendraChari
08:00:55 <morgan_orange> #info agenda https://wiki.opnfv.org/display/functest/Functest+Meeting
08:01:10 <JuhaHaapa> #info Juha Haapavirta
08:01:10 <CG_Nokia> #info CG_Nokia (Colum Gaynor)
08:01:10 <morgan_orange> #topic action point follow up
08:01:18 <morgan_orange> #link http://ircbot.wl.linuxfoundation.org/meetings/opnfv-testperf/2016/opnfv-testperf.2016-05-17-08.00.html
08:01:30 <jose_lausuch> #info Jose Lausuch
08:01:44 <morgan_orange> #info AP1 ok: question on pep8 support transmitted
08:02:07 <morgan_orange> #info AP2 viktor_nokia did we increase the timeout value?
08:02:37 <viktor_nokia> not yet
08:02:43 <morgan_orange> #info AP3 any ref on the Jira ticket on Apex about the timeout issue
08:03:05 <viktor_nokia> but we have now a Jira ticket
08:03:29 <viktor_nokia> https://jira.opnfv.org/browse/APEX-149
08:03:59 <morgan_orange> #info AP3 JIRA created https://jira.opnfv.org/browse/APEX-149
08:04:27 <morgan_orange> #info AP4 lhinds David_Orange any sync regarding sec group tests => maybe talk later in the slot on sec tests
08:04:57 <morgan_orange> #info AP5: there were exchanges on the chan on the way to retrieve dynamically IP for security tests
08:05:13 <lhinds> Not yet spoken with David; I plan to get this scap stuff implemented and then speak with him towards the end of the week, I hope.
08:05:19 <morgan_orange> #info AP6 ok: I added security in the feature section, maybe not the right place
08:05:20 <SerenaFeng> #info SerenaFeng
08:05:43 <lhinds> also working with Tim Rozet on testing in POD7 Apex
08:05:43 <morgan_orange> jose_lausuch: we have a dedicated section for Tempest, Rally and the VNFs
08:06:20 <morgan_orange> would it make sense to put Tempest (full) and Rally (full) in one common category (OpenStack extended?)
08:06:32 <morgan_orange> where would you see security scan test in our tiers?
08:06:38 <David_Orange> #info David Blaisonneau
08:07:05 <lhinds> is that question for me morgan_orange ?
08:07:16 <morgan_orange> for everybody...
08:07:28 <morgan_orange> today I put it in feature section (Tier 3)
08:07:41 <lhinds> I think it's worth it, there will be a lot of info on configuring the test setups
08:07:47 <lhinds> but happy either way
08:07:56 <morgan_orange> #link https://wiki.opnfv.org/display/SWREL/Test+Release+Criteria
08:08:10 <jose_lausuch> morgan_orange: that makes sense, and I wanted to propose it as well
08:08:16 <jose_lausuch> Openstack something
08:08:46 <jose_lausuch> we can think of the name later
08:09:03 <morgan_orange> #action refactoring of the refactoring: group Tempest (full) and Rally (full) in a category Openstack XXXX (name to be found)
08:09:18 <jose_lausuch> action me
08:09:27 <jose_lausuch> it's just about changing the testcases.yaml
08:09:28 <morgan_orange> this category could be reused for long-duration functional tests towards the infrastructure
08:09:44 <juhak> for Tempest and Rally, separate success criteria are defined for full and smoke
08:09:46 <morgan_orange> #action jose_lausuch group Tempest (full) and rally (full) in 1 category
08:09:53 <juhak> in the run scripts the criteria is currently hardcoded to 90% when pushing the summary
08:10:01 <juhak> maybe it would be good to define the value elsewhere, e.g. in testcases.yaml?
08:10:32 <jose_lausuch> juhak: good point, we can put it in testcases.yaml
08:10:48 <morgan_orange> good points, I think we can easily update the yaml file to indicate the criteria (the "if status == ..." check)
08:10:54 <jose_lausuch> #action jose_lausuch: propose success criteria on the testcases handling
08:11:12 <jose_lausuch> its about modifying the testcases.yaml and the tier handler
08:11:16 <jose_lausuch> tier builder
08:11:24 <jose_lausuch> in particular the Testcase Class :)
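[The success-criteria change discussed above can be sketched as follows. This is a minimal illustration under assumptions, not the actual Functest tier builder: the criteria string format and both function names are invented; only the 90% threshold comes from the discussion.]

```python
import re

def parse_criteria(criteria_str, default=90):
    """Extract a percentage threshold from a criteria string
    such as 'success_rate == 90%' declared in testcases.yaml."""
    m = re.search(r'(\d+)\s*%', criteria_str or '')
    return int(m.group(1)) if m else default

def get_status(success_rate, criteria_str):
    """Return PASS/FAIL by comparing the run's success rate to the
    yaml-declared threshold instead of a hardcoded 90%."""
    return 'PASS' if success_rate >= parse_criteria(criteria_str) else 'FAIL'

print(get_status(92.5, 'success_rate == 90%'))  # PASS
print(get_status(85.0, 'success_rate == 90%'))  # FAIL
```

[Moving the threshold into testcases.yaml means each case can declare its own criteria while the tier handler applies one generic check.]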
08:11:41 <morgan_orange> #info AP7: presentation for the design summit initiated
08:12:03 <morgan_orange> #info link https://gerrit.opnfv.org/gerrit/#/c/14517/
08:12:26 <jose_lausuch> morgan_orange: I've seen your email to Sofia
08:12:29 <morgan_orange> #info open docs/com/pres/summit-Berlin.html
08:12:32 <morgan_orange> yep
08:12:36 <jose_lausuch> I think we should wait to get an answer
08:12:43 <jose_lausuch> that might also fit to all projects
08:12:48 <morgan_orange> the framework could be in opnfvdocs (useful for all the projects)
08:12:52 <jose_lausuch> and then you become a committer in opnfvdocs :)
08:13:03 <morgan_orange> I put it this time in Functest repos because last time the patch had been pending for months
08:13:17 <morgan_orange> wait for Sofia's answer before merging, then I could submit the patch in opnfvdocs
08:13:40 <morgan_orange> anyway people can have a look already. Some pictures are missing...
08:13:45 <enikher> #info Nikolas Hermanns
08:14:05 <morgan_orange> for instance I do not have Nikolas's picture :)
08:14:09 <jose_lausuch> morgan_orange: :) 20622 lines
08:14:19 <morgan_orange> missing also Mei-mei and Cedric
08:14:46 <morgan_orange> jose_lausuch: I could only push the html, the css and the images and make ref to the reveal framework upstream
08:14:55 <morgan_orange> I will discuss it with Sofia
08:15:07 <morgan_orange> I am not competing for the number of lines committed :)
08:15:27 <jose_lausuch> no no
08:15:44 <morgan_orange> #topic Security suite
08:15:45 <jose_lausuch> but it looks you spent some time with that commit, good effort!
08:16:02 <morgan_orange> lhinds:  update on the scan test case
08:17:50 <morgan_orange> lhinds: still there?
08:17:57 <lhinds> Functionally everything is there now. You can set up profiles for each node type (compute, control, nagios etc), packages are installed, scan is run, and reports are now downloaded to the artifacts directory.
08:18:16 <lhinds> I am currently working with Tim Rozet to test this on POD 7
08:18:21 <morgan_orange> #info src lhinds Functionally everything is there now. You can set up profiles for each node type (compute, control, nagios etc), packages are installed, scan is run, and reports are now downloaded to the artifacts directory.
08:18:26 <jose_lausuch> lhinds: feel free to #info the info :)
08:18:40 <morgan_orange> #info currently working with Tim Rozet to test this on POD 7
08:18:56 <morgan_orange> so you managed your access to the different nodes
08:19:01 <lhinds> #info main thing I need to verify is getting the IP addresses from the undercloud. This is easy (using Nova)...I just want to verify end to end
08:19:36 <lhinds> morgan_orange, yes.. I have that working now. I just want to get it going on an opnfv apex deployment, as been using my home lab for now
08:19:48 <morgan_orange> lhinds: ok, it means that the security scan will be (at least at the beginning ) only runnable on Apex, right?
08:20:26 <morgan_orange> so let's test the integration on Intel POD7, then we can help on the automation
08:20:29 <lhinds> morgan_orange, yes. Plan is to do the other installers, but as there is no unified connection method, they each need their own connect modules
08:20:42 <morgan_orange> oh yes...
08:20:44 <lhinds> It's a pain, and Genesis is meant to be fixing this from what I have seen
08:20:58 <morgan_orange> oh yes ...
08:21:04 <lhinds> I need to learn juju which has its own thing as well.
08:21:11 <morgan_orange> but let's work on Apex scenario first
08:21:22 <lhinds> Hopefully, at the summit people will get interested and join in and fill gaps.
08:21:36 <morgan_orange> it should be possible to test it on the other labs afterwards or use the Design summit to do it
08:21:49 <lhinds> anyone is welcome to get involved, even if playing with the scans and suggesting different checks to make
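[On retrieving node IPs from the undercloud via Nova (mentioned by lhinds above): with python-novaclient, each Server object exposes a `networks` dict mapping network name to a list of addresses. The extraction step might look like the sketch below; the 'ctlplane' network name is an Apex/TripleO assumption and the sample data is invented.]

```python
def node_ips(servers, network='ctlplane'):
    """Map node name -> first IP on the given undercloud network.
    'ctlplane' is the usual TripleO provisioning network name (assumed)."""
    return {s['name']: s['networks'][network][0]
            for s in servers if network in s['networks']}

# With python-novaclient this would be fed from something like:
#   nova = novaclient.client.Client(2, session=sess)
#   servers = [{'name': s.name, 'networks': s.networks}
#              for s in nova.servers.list()]
sample = [
    {'name': 'overcloud-controller-0', 'networks': {'ctlplane': ['192.0.2.10']}},
    {'name': 'overcloud-novacompute-0', 'networks': {'ctlplane': ['192.0.2.11']}},
]
print(node_ips(sample))
```

[The per-installer connect modules mentioned above would each produce this same name-to-IP mapping for the scanner.]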
08:21:51 <morgan_orange> so jose_lausuch we have to create the security scan in the testcases.yaml
08:21:58 <jose_lausuch> yes
08:22:08 <jose_lausuch> lhinds: can also do that when the tests are ready
08:22:12 <jose_lausuch> I would say the same commit
08:22:15 <morgan_orange> #action morgan_orange add security_scan as testcase in the DB, in the config
08:22:37 <lhinds> jose_lausuch, morgan_orange I use pyaml, so it's easy for me to fit into the parent functest yaml file
08:23:09 <lhinds> I have one ini file I use for the scan settings, but I could port to yaml there as well.
08:23:19 <lhinds> suggest you guys look when I push and we can decide from there
08:23:30 <morgan_orange> you can keep your ini file, the yaml is used to describe the testcase
08:23:42 <morgan_orange> https://git.opnfv.org/cgit/functest/tree/ci/testcases.yaml
08:23:44 <lhinds> as I thought, sounds good to me then
08:23:50 <morgan_orange> it is just declarative
08:23:57 <morgan_orange> we should agree on the name of the testcase
08:24:08 <morgan_orange> security_scan?
08:24:25 <lhinds> SECScan ?
08:24:32 <lhinds> morgan_orange, yours :)
08:24:36 <lhinds> security_scan
08:25:05 <lhinds> #info security_scan
08:25:10 <morgan_orange> ok
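[lhinds keeps the scan settings in an ini file alongside the declarative testcases.yaml entry. A minimal sketch of reading such settings with Python's stdlib configparser — every section and key name here is hypothetical, not the actual security_scan configuration:]

```python
import configparser
import io

# Hypothetical scan settings: section and key names are illustrative only.
SCAN_INI = """
[undercloud]
user = stack
remotekey = /home/stack/.ssh/id_rsa

[scan]
profile = rhel7-standard
report_dir = /home/opnfv/functest/results/security_scan
"""

config = configparser.ConfigParser()
config.read_file(io.StringIO(SCAN_INI))

profile = config.get('scan', 'profile')
print(profile)  # rhel7-standard
```

[Keeping scanner internals in the ini and only the declarative testcase entry in testcases.yaml matches the split agreed above.]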
08:25:37 <morgan_orange> thanks lhinds it seems that we are not very far from the automation, it should be possible before the summit on Apex, which would be great
08:25:54 <morgan_orange> #topic Flash test status
08:25:59 <enikher> hey
08:25:59 <lhinds> morgan_orange, yep. That is key for me, as I will have it as a topic at my talk :)
08:26:10 <morgan_orange> enikher: any update
08:26:28 <enikher> yes, the prototype looks good
08:26:41 <enikher> the next thing todo would be to include it into Functest
08:27:17 <morgan_orange> #info Flash test proto looks good
08:27:23 <morgan_orange> #info next step Functest integration
08:27:24 <enikher> we did not manage to fully sync with yardstick
08:27:44 <morgan_orange> ok transition time for yardstick....
08:27:55 <morgan_orange> for functest you probably can see with jose_lausuch
08:28:05 <enikher> ok
08:28:08 <morgan_orange> where shall we declare flash test in our testcases
08:28:12 <jose_lausuch> morgan_orange: we are sitting next to each other :)
08:28:27 <morgan_orange> I know...
08:29:02 <enikher> thats it for the moment
08:29:05 <jose_lausuch> lhinds: you were asking the other day for flash test
08:29:13 <enikher> we lack a bit of time at the moment
08:29:25 <enikher> I do ...
08:29:27 <enikher> :-)
08:29:30 <lhinds> jose_lausuch, I think I am covered now, but would still be interested in seeing the prototype
08:29:37 <lhinds> jose_lausuch, might be useful for fuel
08:29:43 <jose_lausuch> lhinds: ok
08:29:59 <morgan_orange> could be the way to generalize to all the installers...
08:30:25 <morgan_orange> shall we plan a GTM next week for some demos? (security_scan, flash test and the API (next topic))
08:30:36 <jose_lausuch> morgan_orange: +1
08:30:53 <enikher> morgan_orange: not sure if I will manage to show flash-test
08:31:02 <enikher> I don't have a setup for this at the moment
08:31:29 <morgan_orange> ok we could at least show security_scan and the API and do flash test later.
08:31:43 <lhinds> morgan_orange, I will be able to show the test happening on my lab, in fact hopefully on POD 7, but if not certainly on my own env
08:31:48 <morgan_orange> when you say you have no setup, you do not have labs to test?
08:31:55 <SerenaFeng> ok
08:32:25 <lhinds> morgan_orange, I do have POD 7 (it took a while to get my VPN), but Tim needs to change something for me.
08:32:50 <lhinds> so I am underway there, just delayed from a few things.. but its moving forward now
08:32:51 <morgan_orange> ok and enikher do you have a lab where you can test flash tests
08:33:17 <morgan_orange> SerenaFeng: can test everything on her laptop :)
08:33:44 <morgan_orange> ok anyway, let's see the status on friday, we could maybe show some demos or postpone depending on the status
08:33:53 <morgan_orange> #topic Test API Status
08:34:16 <morgan_orange> SerenaFeng: it is up to you
08:34:59 <SerenaFeng> for now, only dashboard APIs left
08:35:14 <SerenaFeng> and I have a question, I have send you an email
08:35:27 <SerenaFeng> It will be finished today
08:35:34 <morgan_orange> just to clarify for everybody, SerenaFeng is refactoring the test API that we are using to declare pods/projects/cases/results/dashboard
08:35:55 <jose_lausuch> morgan_orange: what about putting duration out of details?
08:36:09 <morgan_orange> she developed swagger/tornado addons to have automatic documentation + unit tests (as we need some stability here)
08:36:17 <SerenaFeng> I will put version number to the url path next time
08:36:21 <morgan_orange> jose_lausuch: yep there are questions on the evolution of the models (as we are refactoring)
08:36:30 <SerenaFeng> then I can work on swagger work
08:36:49 <morgan_orange> if you see any change in the model, it is time to indicate it
08:37:04 <morgan_orange> I agree that start/stop in the fields would be nice
08:37:25 <morgan_orange> it means that the start info must be stored at the test level at the beginning of the tests
08:37:43 <jose_lausuch> yes
08:37:53 <jose_lausuch> but this can be done out of the test cases
08:37:53 <morgan_orange> it means also that the duration put in some testcases will be useless (but we may keep it for the dashboard - some tests are using it)
08:37:58 <jose_lausuch> meaning, in the framework as such
08:38:20 <morgan_orange> but it makes sense to have a start and a stop, we added a status and a Trust indicator also for colorado
08:38:36 <morgan_orange> #action morgan_orange add start/stop fields in the API
08:38:54 <jose_lausuch> yes
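[A minimal sketch of what storing start/stop at the framework level could look like. The field names follow the discussion (start/stop, status, duration derived rather than stored in details) but are assumptions, not the final API schema:]

```python
import datetime

def start_record(case_name):
    # start_date stored by the framework when the test begins,
    # outside the test case itself
    return {'case_name': case_name,
            'start_date': datetime.datetime.utcnow().isoformat()}

def stop_record(record, status):
    # stop_date + status filled in when the test ends; duration can
    # then be derived instead of living inside each test's 'details'
    record['stop_date'] = datetime.datetime.utcnow().isoformat()
    record['criteria'] = status
    return record

result = stop_record(start_record('vping_ssh'), 'PASS')
print(sorted(result))  # ['case_name', 'criteria', 'start_date', 'stop_date']
```

[Doing this in the framework (the tier handler) rather than in each test case avoids duplicating the timing code, as jose_lausuch notes.]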
08:39:06 <morgan_orange> I also think that another param to retrieve the N last results of a given test in a given configuration would make sense
08:39:08 <jose_lausuch> the more generic we do it the better
08:39:12 <jose_lausuch> to avoid duplicating code as well
08:39:20 <jose_lausuch> yes
08:39:34 <morgan_orange> for the moment we have Period that allows us to retrieve the results from the last Period days
08:40:10 <morgan_orange> #action morgan_orange add a param to be able to retrieve last results (last occurence not test over last days)
08:40:35 <morgan_orange> i remember also someone asking for the possibility to request results over a given time window
08:40:51 <jose_lausuch> yes, but I didnt fully understand that
08:40:57 <morgan_orange> BTW jose_lausuch do you use the pod table for the infra GUI
08:41:15 <jose_lausuch> what do you mean?
08:41:30 <morgan_orange> jose_lausuch: get/results/(from 5/6/2016 to 1/8/2016)
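[A sketch of how such result queries might be built against the test API. Only the `period` parameter exists today; `last` (last N occurrences) is the proposed addition and is purely hypothetical here, as is the exact URL shape.]

```python
from urllib.parse import urlencode

BASE = 'http://testresults.opnfv.org/testapi/results'

def results_url(case, installer=None, period=None, last=None):
    """Build a results query. 'period' = results from the last N days
    (exists today); 'last' = last N occurrences (proposed, hypothetical)."""
    params = {'case': case}
    if installer:
        params['installer'] = installer
    if period:
        params['period'] = period
    if last:
        params['last'] = last
    # sort for a deterministic query string
    return BASE + '?' + urlencode(sorted(params.items()))

url = results_url('tempest_smoke_serial', installer='apex', last=10)
print(url)
# http://testresults.opnfv.org/testapi/results?case=tempest_smoke_serial&installer=apex&last=10
```

[A time-window request like the get/results/(from ... to ...) idea above would just be two more date parameters on the same endpoint.]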
08:41:52 <morgan_orange> in the GUI you shared in the infra management, you have a list of PODs
08:42:07 <jose_lausuch> ah yes
08:42:12 <morgan_orange> I assume you use another DB, not the testresults.opnfv.org/testapi/pods
08:42:15 <jose_lausuch> no no
08:42:19 <jose_lausuch> I use a mysql local db
08:42:24 <jose_lausuch> and dummy entries
08:42:26 <jose_lausuch> not the official pods
08:42:58 <jose_lausuch> well, they are official, but it's just info I put manually in the DB
08:42:58 <jose_lausuch> :)
08:43:08 <morgan_orange> ok I understand, would it make sense (in the future...) to have only one DB with PODs declaration?
08:43:18 <jose_lausuch> yes, and I would say the pharos one
08:43:24 <morgan_orange> for the test results, it is a way for us to control the test collection
08:43:28 <jose_lausuch> I have some ideas to have a lot of info for the pods
08:43:34 <morgan_orange> if the pod is not declared we do not accept the results
08:43:47 <jose_lausuch> I'd say that's something post-colorado
08:44:01 <morgan_orange> for the moment in the DB the model is poor http://testresults.opnfv.org/testapi/pods
08:44:16 <morgan_orange> creation_date, name, mode, details
08:45:12 <jose_lausuch> yes
08:45:14 <morgan_orange> back to SerenaFeng and her great work, you said that dashboard should be integrated soon, any estimation for the swagger work?
08:45:25 <jose_lausuch> and we also need to distinguish in pharos the type of pod
08:45:32 <jose_lausuch> ci pod, dev pod, single node
08:45:50 <SerenaFeng> it will be ready before the end of next week
08:45:52 <morgan_orange> #action morgan_orange add type in pod description in the DB
08:46:25 <jose_lausuch> I imagine for after colorado, when the pharos DB is ready and 100% functional and trustable we switch to get the pods from that one
08:46:28 <SerenaFeng> and all the installer?
08:46:40 <SerenaFeng> and also the installer of the pod?
08:46:56 <morgan_orange> jose_lausuch: OK makes sense
08:47:19 <morgan_orange> SerenaFeng: in theory the PODs must not be specialized
08:47:27 <morgan_orange> so pod and installer should not be linked
08:47:38 <SerenaFeng> and I think we should connect project with pod, just like testcase with project
08:47:38 <morgan_orange> we make this connection through the test results
08:47:49 <morgan_orange> when we push a result we indicate the pod, the installer, the scenario
08:48:29 <morgan_orange> not sure to follow the link pod / project
08:49:02 <SerenaFeng> all the link is done in testcase
08:49:04 <morgan_orange> jose_lausuch: BTW the pharos DB could be this DB, it is managed in releng...
08:49:16 <jose_lausuch> which db?
08:49:20 <SerenaFeng> so no need to connect pod with project
08:49:28 <morgan_orange> jose_lausuch: the mongo DB
08:49:31 <jose_lausuch> mmmm
08:49:34 <jose_lausuch> not sure
08:49:48 <jose_lausuch> for pharos info I think it makes more sense some sql based
08:49:52 <morgan_orange> jose_lausuch: ok
08:49:59 <jose_lausuch> so relational
08:50:06 <morgan_orange> ok
08:50:26 <jose_lausuch> we can also discuss it this afternoon
08:50:30 <jose_lausuch> we have a meeting about that
08:50:36 <morgan_orange> once API refactored, we will have some integration work / data already in the DB
08:50:49 <SerenaFeng> Oh, by the way, I see lots of ***2Dashboard.py, do I need to write unit tests for them all?
08:51:04 <morgan_orange> SerenaFeng: no it is too specific
08:51:12 <SerenaFeng> ok,
08:51:55 <morgan_orange> historically, the ***2dashboard.py were a way to post process the results in order to provide a graphable version of the post processed results
08:52:26 <morgan_orange> each *** is supposed to know what it wants to display in the dashboard
08:52:31 <morgan_orange> so it is really test specific
08:52:41 <morgan_orange> if unit tests are needed, they must be done per ****
08:52:41 <SerenaFeng> yeah, each one is a unique structure
08:52:45 <morgan_orange> not at the framework level
08:53:07 <morgan_orange> #topic Sprint #8
08:53:16 <morgan_orange> #info Sprint started last week for 3 weeks
08:53:36 <morgan_orange> it should be over a week before the Summit...
08:54:02 <morgan_orange> #info Colorado roadmap has been shared by the TSC to the board
08:54:18 <morgan_orange> #info release will be mid of September, with first "freeze" beginning of July
08:54:27 <jose_lausuch> morgan_orange: I think we could create all the sprints at the beginning
08:54:32 <morgan_orange> I think we should have our internal features ready for this date
08:54:45 <jose_lausuch> and if someone thinks that a task cannot be made this sprint, it has to be moved to the proper sprint
08:55:19 <morgan_orange> ok
08:55:40 <morgan_orange> I think that is what we did for Colorado
08:55:53 <morgan_orange> Here as we had in theory only 2 Sprints, we did not really care
08:56:00 <morgan_orange> but it will be cleaner to do it this way
08:56:25 <morgan_orange> #info for D rivers, create all the sprints at the beginning and invite people to place their Jiras in the appropriate Sprint
08:56:56 <morgan_orange> #link https://jira.opnfv.org/secure/RapidBoard.jspa?rapidView=59
08:57:04 <morgan_orange> so for the moment everything is in the Sprint
08:57:19 <morgan_orange> probably need to create a Jira for Flash test integration
08:57:38 <morgan_orange> any issues with the JIRas?
08:57:51 <morgan_orange> risk/problem/concern/doubt/...
08:58:41 <SerenaFeng> I created a Releng task about adding zte-pod1 to the functest dashboard
08:58:43 <jose_lausuch> so far not
08:59:00 <jose_lausuch> maybe about this one https://jira.opnfv.org/browse/FUNCTEST-157
08:59:04 <jose_lausuch> not sure if I'll manage
08:59:05 <morgan_orange> SerenaFeng: OK it is just one param to add
08:59:07 <SerenaFeng> https://jira.opnfv.org/browse/RELENG-112
08:59:12 <jose_lausuch> it's some work :)
08:59:13 <morgan_orange> note that the dashboard was only for brahmaputra
08:59:32 <SerenaFeng> I have already pushed a patch for review
08:59:35 <morgan_orange> I planned to adapt it to Colorado, but I need to change things in ****2dashboard.py as the data model changed a little bit
08:59:45 <morgan_orange> SerenaFeng: you can assign the Jira to me
08:59:54 <SerenaFeng> https://gerrit.opnfv.org/gerrit/#/c/14537/
09:00:38 <SerenaFeng> I already changed the code, maybe I can abandon the gerrit and assign it to you?
09:00:43 <morgan_orange> ok I already did some modifications after pod renaming, directly on the web server but I will have a look at that
09:00:58 <morgan_orange> do not abandon it, I will merge it
09:01:07 <morgan_orange> #topic AoB
09:01:08 <SerenaFeng> ok, thank you
09:01:12 <morgan_orange> we are already late...
09:01:18 <morgan_orange> just some stuff to share
09:01:31 <morgan_orange> in the weekly test meeting, we had 2 discussions last time
09:01:42 <morgan_orange> Test collection from a scenario
09:01:58 <morgan_orange> frankbrockners: detailed all the tests he would like to integrate for the vpp scenario
09:02:36 <morgan_orange> I answered by mail that the list he mentioned for functest would be fine (as by default we try to run everything and only restrict tests in case of constraint)
09:02:57 <morgan_orange> anyway it could make sense to think of an API to allow people to generate their own customized list
09:03:16 <morgan_orange> probably not in Colorado but in the future. This could be shared with all the test projects
09:03:22 <frankbrockners> thanks morgan_orange
09:03:41 <frankbrockners> you might not even need an API - a simple config file would be good enough
09:04:02 <frankbrockners> that way a scenario owner could decide which tests to run - even in CI/CD
09:04:30 <frankbrockners> it could well be that you know that 90% of your tests already work - and you're interested in the results of say 1 or two specific tests
09:04:47 <frankbrockners> if you had a config file - one could focus the testing on those two only
09:04:47 <morgan_orange> we already have this file, we just need a script that could eventually overwrite it in the container. Today we generate this file automatically based on the scenario name and the static description of the cases
09:05:01 <frankbrockners> as a consequence - we would greatly cut down test execution time
09:05:21 <frankbrockners> great - that is good news
09:05:29 <morgan_orange> I think for manual processing it is just a question of documentation
09:05:35 <morgan_orange> as you can already do it
09:06:04 <morgan_orange> for automation, it should be possible to specify a file rather than rely on the one dynamically built
09:06:19 <frankbrockners> how would I modify it without changing the jenkins job?
09:07:41 <frankbrockners> agreed - as part of the jjb you should have a pointer to where to retrieve the test config from
09:07:47 <morgan_orange> we have to think on the better way,  for teh moment I imagine we could modify  'onceĆ  teh jenkins job in order to allow a custom list rather than the default
09:08:06 <frankbrockners> could be on a OPNFV git - could be somewhere else... just mget it from somewhere
09:08:16 <morgan_orange> ok let's continue offline
09:08:36 <frankbrockners> the issue is that few folks have access to jjb
09:08:45 <morgan_orange> last point was on the Summit presentation, I invite everybody to read it
09:08:53 <morgan_orange> any other points you want to share?
09:08:57 <frankbrockners> morgan_orange - ok - sorry for the distraction here
09:09:47 <morgan_orange> frankbrockners: I have to go.. :)
09:10:02 <morgan_orange> thanks everybody
09:10:06 <morgan_orange> enjoy your functest week
09:10:10 <morgan_orange> #endmeeting