14:02:59 <farheen_att> #startmeeting Architecture Committee
14:03:00 <collabot`> Meeting started Wed Sep 11 14:02:59 2019 UTC.  The chair is farheen_att. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:03:00 <collabot`> Useful Commands: #action #agreed #help #info #idea #link #topic.
14:03:00 <collabot`> The meeting name has been set to 'architecture_committee'
14:04:45 <farheen_att> #topic Licensing
14:07:10 <farheen_att> #info Working on things that are difficult to demo because we still need the front-end pieces. We created the API to interact with the portal and have handed over what the portal team can deliver on their side. LUM is what we demonstrated on the Community Call yesterday. It isn't a great user demo, but we will review the enhancements that have
14:07:11 <farheen_att> been done. We will review the detailed APIs.
14:07:55 <farheen_att> #info Cut the demo short for the offshore on-boarding team, who have to leave early.
14:08:37 <farheen_att> #topic On-boarding: Priya, Guy - Java Spark on-boarding
14:11:38 <farheen_att> #link https://wiki.acumos.org/display/MOB/Acumos+spark
14:13:01 <farheen_att> #info Priya: today we are assuming that you are using the Spark ML library.
14:13:29 <farheen_att> #info With respect to the Spark lib: does the lib have to be packaged as a k8s container?
14:13:32 <farheen_att> #info yes
14:14:11 <farheen_att> #info With the current microservice generation method we put everything inside the container.
14:14:22 <farheen_att> #info So every container will contain the Spark lib?
14:14:50 <farheen_att> #info It will be packaged as part of the jar.
14:15:03 <farheen_att> #info The model docker image will be independent of the Spark engine.
14:15:16 <farheen_att> #info So I can have a Spark cluster outside for my model to run on?
14:15:20 <farheen_att> #info correct
14:16:57 <farheen_att> #info Regarding the mode of deployment there are two ways: client mode and cluster mode. We are considering standalone Spark for this version of the implementation. The APIs used are different based on which cluster manager is being used as part of the deployment.
14:18:01 <farheen_att> #info We don't really know which one should be used, and we wanted to verify based on configuration: YARN or K8s. Today we are focusing on standalone deployment of Spark.
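For illustration, a minimal sketch (not from the meeting) of what standalone mode means for the model code: the Spark engine runs outside the model's docker image, and the image only carries the Spark client libraries needed to reach a master URL. The host name here is an assumption.

```java
import org.apache.spark.sql.SparkSession;

public class StandaloneSparkExample {
    public static void main(String[] args) {
        // Hypothetical standalone master URL; the engine itself is not
        // packaged in the model image, only the Spark client libraries.
        SparkSession spark = SparkSession.builder()
                .appName("acumos-model")
                .master("spark://spark-master:7077")
                .getOrCreate();
        System.out.println("Connected to Spark " + spark.version());
        spark.stop();
    }
}
```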
14:19:35 <farheen_att> #info The goal is to be able to on-board Spark models, but for Clio we are limiting it to Java Spark-based on-boarding.
14:20:06 <farheen_att> #info The Java Spark-based model will remain a docker image that a modeler will deploy?
14:20:25 <farheen_att> #info It is the same process; microservice generation is the same as for any other type of model.
14:20:52 <farheen_att> #info How does it differ?
14:21:11 <farheen_att> #info The APIs used are different; whatever APIs are used to invoke the instances are different.
14:22:41 <farheen_att> #info There are two types of libraries available today: MLlib (RDD-based) and Spark ML (DataFrame-based), available from Spark 2.3 onwards.
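A small illustrative snippet (an assumption, not shown in the meeting) of where the DataFrame-based API lives; the older RDD-based API is under org.apache.spark.mllib:

```java
// DataFrame-based "Spark ML" API (org.apache.spark.ml), as opposed to the
// older RDD-based org.apache.spark.mllib API.
import org.apache.spark.ml.classification.LogisticRegression;

public class SparkMlApiExample {
    public static void main(String[] args) {
        // An Estimator can be configured without a running Spark cluster.
        LogisticRegression lr = new LogisticRegression().setMaxIter(10);
        System.out.println(lr.explainParams());
    }
}
```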
14:24:13 <farheen_att> #info There are two ways to create the docker image. 1. For Python-based models today we deploy with all the Keras dependencies contained. But in the case of Spark, the engine is going to be outside the docker image. The docker image won't have the Spark engine, only the Spark libs to talk to the Spark instance.
14:25:22 <farheen_att> #info It is important to have a clear input and output. Spark ML models may be reading from a Hadoop cluster or a different data source as of today. The Spark ML model will define a clear input and output per protobuf.
14:25:38 <farheen_att> #info Whatever is required has to be specified.
14:26:16 <farheen_att> #info Can we not have a single container? A model consuming resources would take a performance hit.
14:26:46 <farheen_att> #info Agreed; the Spark engine will be independent.
14:27:11 <farheen_att> #info Don't be tightly coupled with protobuf.
14:28:28 <farheen_att> #info Long term we should evolve: JSON and Parquet file support should be there. If you look at customers, they are using other file formats. My recommendation is to keep it generic.
14:28:57 <farheen_att> #info We want to support both protobuf and JSON.
14:30:07 <farheen_att> #info For now we will support only protobuf, but eventually we will support other types: v1 protobuf, v2 JSON, and then others.
14:32:16 <farheen_att> #info These are the APIs that the model runner will support. Invoking the model follows a sequence. The Spark config you have can be different from what is in the deployed format, so the config is accepted and submitted at run time. Other models are self-contained, but in the case of Spark the engine is outside the
14:32:17 <farheen_att> model, so the user has to provide it. It may or may not be available in the deployment environment; if not, the user has to enable it.
14:33:21 <farheen_att> #info The data source and output both come from the same response. It is independent of which file system is used. These are considerations for invoking the model. The docker image will be submitting the job, but the input/output will be outside of the request/response.
14:34:20 <farheen_att> #info This is the view where we thought Kafka and Spark would go together. We will need a Kafka pipeline to support Spark.
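As one way such a pipeline could look (a sketch under assumptions: broker kafka:9092, topic model-input, and the spark-sql-kafka-0-10 connector on the classpath), Spark Structured Streaming can read directly from Kafka:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class KafkaSparkPipeline {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("kafka-to-spark")
                .master("local[*]") // placeholder; a real pipeline targets the cluster
                .getOrCreate();

        // Hypothetical broker and topic names.
        Dataset<Row> stream = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "kafka:9092")
                .option("subscribe", "model-input")
                .load();

        // Each Kafka record arrives as binary key/value columns.
        stream.selectExpr("CAST(value AS STRING)")
                .writeStream()
                .format("console")
                .start()
                .awaitTermination();
    }
}
```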
14:35:01 <farheen_att> #info The Java client is impacted by this change.
14:35:18 <farheen_att> #topic Licensing (resumed)
14:35:28 <farheen_att> #info Michelle, Justin
14:36:15 <farheen_att> #info We've completed the APIs for the license profile. We can now review the APIs for this sprint.
14:39:32 <farheen_att> #info We're starting with Acumos-3336, which is an API manager. In this demo we are using JUnit tests, with different JSON files checked in as resources, a license validator, and the warnings. Once you upload your license profile, you will be able to call this API and get feedback instead of downstream problems. This is the back end that reads the
14:39:33 <farheen_att> JSON file, uses the license profile validator, and validates. If it's a bad document then you will get a list of errors. For example, if we are missing the license name, we return a warning to the user that it is missing a license name. They are not the nicest messages, but it does tell you what is missing.
14:41:17 <farheen_att> #info The same applies for other missing variables. This is the validation of the structure of the document in the unit test, and when it does pass, it passes as valid license JSON. When you call this from the API perspective we wrap it under the profile class. There are 3 inputs: give it a string, an input stream, or a JSON node if you've already done the parsing of the JSON, for
14:41:17 <farheen_att> Acumos-3336.
14:41:24 <farheen_att> #info questions?
14:41:47 <farheen_att> #info Michelle: we showed the schema validation API to make sure it is captured.
14:43:05 <farheen_att> #info Please open the response. So you will have multiple messages; is there any status? We will check the field; if it's a success we won't do anything.
14:45:31 <farheen_att> #info Tausif: I thought we would be calling this during on-boarding, but there is a different flow for just the license file, so you want the portal to call it when there is a license.
14:45:41 <farheen_att> #info Yes, we want to catch issues up front.
14:45:55 <farheen_att> #info Will there be an option to edit online, or do they have to re-upload the file?
14:46:55 <farheen_att> #info Yes, they have to upload again. The editor uses the schema; you can also upload without using the editor. The editor is baked in.
14:47:02 <farheen_att> #info Is the format JSON?
14:47:07 <farheen_att> #info Yes.
14:47:16 <farheen_att> #info Stored as a blob in...?
14:47:20 <farheen_att> #info Nexus.
14:47:28 <farheen_att> #info Can it be hacked?
14:47:53 <farheen_att> #info Only the system component touches it; users do not access Nexus.
14:51:32 <farheen_att> #info To create a custom license requirement you have to read ReadTheDocs.
14:52:12 <farheen_att> #info We made sure that all of our Java APIs are updated to Java 11. We were able to update.
14:52:20 <farheen_att> #info Do you use the docker base images?
14:58:17 <farheen_att> #info We helped the portal team update to OpenJ9; it wasn't licensing specific. There were issues. There are two libs: the license manager client library (LMCL), and we helped them with the LUM code. In order to interface with LUM there is a code generator that produces a client connecting to LUM, and LMCL consumes that. I wanted to show you that; it wasn't a Jira.
14:59:07 <farheen_att> #info Demonstrated how to register the software with LUM using the solution ID. It reaches out to CDS and Nexus to gather the information based on the solution ID.
15:00:00 <farheen_att> #info We're trying to simplify how clients interface with LUM. We construct the SWID tag (revision ID) so the mapping is consistent in the LMCL.
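A hypothetical illustration of that mapping; the exact tag format is an assumption, not confirmed in the meeting:

```java
public class SwidTagMapper {
    // Hypothetical construction: derive the SWID tag from the solution and
    // revision IDs so LMCL and LUM always agree on the same mapping.
    static String swidTag(String solutionId, String revisionId) {
        return solutionId + "_" + revisionId;
    }

    public static void main(String[] args) {
        System.out.println(swidTag("sol-1234", "rev-5678")); // illustrative IDs
    }
}
```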
15:00:13 <farheen_att> #info Demonstrated the library working, as a bonus.
15:05:47 <farheen_att> #action Murali: coordinate a full end-to-end demo with portal; have a joint session in your scrum or theirs.
15:06:39 <farheen_att> #action Manoop: add license review to the next call.
15:06:45 <farheen_att> #endmeeting