Goal: extend the data wizardry so that PL can accept a second data file (necessarily identical in structure etc. to the initial one, e.g. CSV), run the trained model against that data, and (see the sketch after this list):
a) create outputs (images, other artefacts) as appropriate by running the trained model against this file
b) duplicate the second data file and save into it:
b.1) model outputs - paths to image outputs, prediction values (e.g. logistic probabilities), labels…
b.2) additional info (e.g. object detection rectangles, probabilities…)
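To make the intended workflow concrete, here is a minimal sketch of running a trained model against a second CSV and saving a copy of that CSV with output columns appended. The export format, helper name, column names and paths below are illustrative assumptions, not PL's actual API.

```python
# Minimal sketch: batch inference over a second CSV + augmented copy.
# Assumes the trained model was exported as a Keras SavedModel and that
# the CSV contains the same feature columns the model was trained on.
import pandas as pd
import tensorflow as tf

def predict_csv(model_path, input_csv, output_csv, feature_columns):
    model = tf.keras.models.load_model(model_path)  # assumed export format
    df = pd.read_csv(input_csv)

    # a) run the trained model against the new data
    preds = model.predict(df[feature_columns].to_numpy())

    # b) duplicate the second data file and save outputs into the copy
    out = df.copy()
    # b.1) model outputs, e.g. predicted label index for a classifier
    out["prediction"] = preds.argmax(axis=1)
    # b.2) additional info, e.g. probability of the predicted class
    out["probability"] = preds.max(axis=1)

    out.to_csv(output_csv, index=False)

# Hypothetical usage (paths and column names are placeholders):
predict_csv("exported_model/", "second_data.csv",
            "second_data_with_outputs.csv",
            feature_columns=["x1", "x2", "x3"])
```

For image or object-detection models the appended columns would instead hold paths to generated output images and serialized detection rectangles, but the copy-and-append pattern is the same.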
Rationale: currently a trained model has to be saved and served by a TF server, which creates an extra barrier to deriving value from PL’s visual modelling capabilities. The technical skills and resource access required to deploy a model are not at the same level as those required to build a model in PL - but what is the value of building a model, other than learning how, if it cannot easily be used?
NB “models” are idealisations for particular purposes; it is only when they are used in real life that their strengths and weaknesses (e.g. hidden assumptions) become readily apparent. Smoothing the path to using PL models would also enable iterative development by users, and make their models easier to share.