15:03:51 <farheen_att> #startmeeting architecture committee
15:03:51 <collabot`> Meeting started Wed Jan 22 15:03:51 2020 UTC.  The chair is farheen_att. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:03:51 <collabot`> Useful Commands: #action #agreed #help #info #idea #link #topic.
15:03:51 <collabot`> The meeting name has been set to 'architecture_committee'
15:03:56 <farheen_att> #topic agenda
15:08:43 <farheen_att> #info Download user experience requested by Philippe D.
15:09:04 <farheen_att> #info No portal presence.
15:09:21 <farheen_att> #info If anyone wants to discuss topics contact Manoop
15:09:35 <farheen_att> #info Nat - Vasu, on the Clio point release, who is still pending?
15:10:05 <farheen_att> #info Deployment, DS, and ML Workbench are outstanding
15:10:34 <farheen_att> #info Justin - Deployment model usage tracking.  Having issues, but no one is assigned to the deployment client anymore.
15:11:45 <farheen_att> #info Santosh has committer access.  Vineet also has committer access.
15:12:05 <farheen_att> #info Vineet can help with merge requests in the deployment client.
15:13:00 <farheen_att> #info Manoop - Clio maintenance release is our top priority.
15:13:38 <farheen_att> #topic Sayee - Sprint 1 new feature that the DS/ML Workbench team is working on.
15:14:47 <farheen_att> #info Sayee - ML Workbench enhancement.  We are adding deployment of a model from ML Workbench.
15:15:34 <farheen_att> #info You can create and associate a notebook and NiFi pipeline to a project.  A model has to be onboarded on Acumos to be associated.  A blue Deploy icon has been added.
15:16:13 <farheen_att> #info This is similar to the portal version.  You can give a predictor name.  Once complete, a deployment is in process.  A Jenkins job is created where the model is being deployed.
15:16:28 <farheen_att> #info Do you differentiate between models and predictors?
15:16:38 <farheen_att> #info This is for trained models.
15:17:31 <farheen_att> #info Sayee - A predictor is a running model.  If you want, you can scale it to one or two pods.  Currently that is not being tested.  We want to see where scaling is needed by customers.
15:18:10 <farheen_att> #info You can check the deployment in the UI.  It is a Jenkins job.
15:18:51 <farheen_att> #info You can redeploy the model.  There are no restrictions on the number of times you can deploy a model.
15:19:55 <farheen_att> #info If I have my own Jenkins job and deploy in Azure, can I link it here?  We want to provide this, but we will see if it is needed.  This is a way a data admin can manage their models across many clusters and troubleshoot as needed.
15:20:18 <farheen_att> #info Manoop - Is this feature represented as model deployment from ML Workbench?
15:20:38 <farheen_att> #info Yes, a list of predictors that can be deployed to multiple clusters.
15:21:21 <farheen_att> #info Nat - This has a resource constraint.  This is a huge gap and is going to be an issue.
15:22:08 <farheen_att> #info Manoop - Deploying the model: do you have a goal for which environment?
15:22:12 <farheen_att> #info Kubernetes (k8s).
15:22:35 <farheen_att> #info Justin got it working.
15:22:58 <farheen_att> #info Ken - Having trouble getting deployment to work.
15:23:41 <farheen_att> #info Justin offered to support Ken.
15:24:03 <farheen_att> #info Ken - Recommending a code review.
15:24:34 <farheen_att> #info Sayee - It needs to be scrutinized.  It needs a lot of work.  Sometimes demos don't work.
15:25:23 <farheen_att> #info Reuben - We need to start planning to support re-training.  With licensed and distributed models, I can plug in my own data.  We need to extend this to training and evaluation.
15:25:37 <farheen_att> #info Sayee - I couldn't get Wenting's time.
15:26:58 <farheen_att> #info The model builder can re-train.  Reuben will talk to Wenting and Sayee about the re-training aspect.  Sayee will have a separate store for data sets and training.
15:27:45 <farheen_att> #info Reuben wants to bring in IBM face detection.  We want to be able to run it here (demo of local ML Workbench).
15:28:15 <farheen_att> #info I can deploy at run time.  Another option is to run in one pod.
15:28:52 <farheen_att> #info When you deploy a model as a predictor, you can then evaluate the model.
15:29:09 <farheen_att> #action Sayee will bring one single draft.
15:30:35 <farheen_att> #info Reuben cautions against putting forward a single draft alone.  MLflow deals with a model registry trained with different data sets and allows selection of what should be deployed where.
15:31:29 <farheen_att> #info Reuben - we want to look at the logic of other projects before designing our own.
15:32:56 <farheen_att> #info One way of increasing the value of Acumos is to make sure that we have inter-operability.  If we can make this inter-operability a requirement of all the projects, not only do we get the synergy, but we also get the benefit of bringing in expertise, such as the ML Workflow team, who understand model version management.  We will get more resources.
15:33:21 <farheen_att> #link https://wiki.lfai.foundation/display/DL/ML+Workflow+Committee?preview=/10518537/18481275/ML%20stack_v10.pptx
15:33:38 <farheen_att> #info Nat sharing the LF ecosystem.
15:35:46 <farheen_att> #info Can we learn from these sub-projects?
15:36:19 <farheen_att> #info Nat - Yes, I know the person who already has a deployment.  They focus on Spark and Python.
15:36:56 <farheen_att> #info Sayee - we can work with these teams and leverage already developed resources.
15:37:34 <farheen_att> #info Reuben - Angel is being used predominantly with Chinese companies.
15:37:57 <farheen_att> #info Manoop - Is someone at the LF level reaching out to these projects?
15:39:07 <farheen_att> #info Nat - I can introduce the people I know with Guy and Philippe.  This is the TAG committee (LFAI).  Angel has graduated.  ONNX also.
15:39:21 <farheen_att> #info Manoop can we review with Angel?
15:40:02 <farheen_att> #info You can clearly see the overlap across the projects.  Should we bring this to discuss with PTLs, rather than in general meetings?
15:40:36 <farheen_att> #info Reuben - we should know which projects are doing what. We should have an easy way to access these resources.
15:41:19 <farheen_att> #info Guy and Philippe have discussed deploying a model, as far as ML Workbench and the serving pipeline are concerned.
15:41:45 <farheen_att> #action Nat will introduce the ML Workbench team to the Angel project.
15:42:05 <farheen_att> #info Sayee will bring them to this call.
15:42:30 <farheen_att> #info Manoop - We don't want a marketing call; we want to be specific.
15:42:42 <farheen_att> #info How can we leverage these features from open source?
15:50:39 <farheen_att> #action Reuben will bring information about his contacts on the community meeting next Wednesday.
15:51:08 <farheen_att> #topic Philippe - UX model deployment
15:52:16 <farheen_att> #info Philippe created a user story about downloading multiple artifacts.  Today you have to download artifacts one at a time.  ACUMOS-3679
15:53:57 <farheen_att> #info There is enough time to move this user story into an epic.  We would like to add docker image.  Today when you download the image it takes time.
15:54:00 <farheen_att> #link https://wiki.acumos.org/display/MOB/Enhance+UX+when+downloading+artifacts
15:54:51 <farheen_att> #info should we work on it in this release or save it for the next release?
15:56:52 <farheen_att> #info Manoop - Removing artifacts was addressed in past releases; some that are internally used are hidden.  Manoop will check.  About selecting artifacts: what is the motivation?  Downloading one zip file is easier than multiple clicks and saves time.  It is easier to select the artifacts and download them with one click.
16:07:21 <collabot`> farheen: Error: Can't start another meeting, one is in progress.  Use #endmeeting first.
16:07:33 <farheen> #endmeeting