
XV. Load and run models II (Model)

carlosuc3m edited this page Sep 13, 2023 · 2 revisions

Running a model in JDLL is intended to be simple and generic. The library deals with the different Deep Learning frameworks internally, allowing the user to load and run every model in a uniform manner.

Loading refers to bringing a DL model into computer memory so that it can perform predictions; running the model, or making inference, is the actual event of making the predictions. Both loading and running a model are time-consuming tasks. However, once a model has been loaded, it can be run repeatedly without being loaded again, until it is unloaded from memory or another model is loaded.

This page provides a step-by-step guide on how to load and run a Deep Learning model. First note that in order to load a model, the required DL engine must already be defined by an EngineInfo instance. Unless the model to be loaded is in the Bioimage.io format, every model requires an EngineInfo instance, so if you are not familiar with it, please first read the Wiki page dedicated to it. In order to get more information about running a model, please click here.

io.bioimage.modelrunner.model.Model

General description

Once the engine to load the model has been defined by an EngineInfo instance, the only missing step is to provide the location of the model.

The location of the model is used to create an instance of a Model object, which can then be used to load and run the model.

The instantiation of a Model object requires:

  • Path to the model folder. The model folder is the folder that contains the files that define the model of interest. The files vary from one framework to another: the model folder contains the .pth/.pt file in the case of Pytorch/Torchscript, the .onnx file for Onnx models, and the variables folder and .pb file for Tensorflow.

  • Path to the model source file. This field is not needed for Tensorflow models at the moment. It is the path to the exact file that contains the model: for Pytorch it is the path to the .pt/.pth file and for Onnx, the path to the .onnx file.

  • EngineInfo instance. An instance of the EngineInfo class that contains the information of a DL framework that can be used to run the DL model. The DL framework needs to be compatible with the model. For more information about the EngineInfo class, click here.
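The folder-layout conventions above can be illustrated with a small helper that guesses the framework from the file names found in a model folder. This is a hypothetical sketch for illustration only; it is not part of the JDLL API.

```java
import java.util.List;

public class FrameworkGuess {
    // Hypothetical helper (not part of JDLL): guess the DL framework from the
    // file names found in a model folder, following the layout described above.
    public static String guessFramework(List<String> fileNames) {
        boolean hasPb = false, hasVariables = false;
        for (String name : fileNames) {
            String n = name.toLowerCase();
            // Pytorch/Torchscript models are identified by a .pt/.pth file.
            if (n.endsWith(".pt") || n.endsWith(".pth")) return "torchscript";
            // Onnx models are identified by a .onnx file.
            if (n.endsWith(".onnx")) return "onnx";
            // Tensorflow models need both a .pb file and a variables folder.
            if (n.endsWith(".pb")) hasPb = true;
            if (n.equals("variables")) hasVariables = true;
        }
        return (hasPb && hasVariables) ? "tensorflow" : "unknown";
    }
}
```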

Note that if the model of interest adheres to the Bioimage.io format (it contains an rdf.yaml file in the model folder), creating a Model instance is much easier. No information about the source file or engine (no EngineInfo instance) is needed, as all that information is expected to be in the rdf.yaml file. For Bioimage.io models the only argument needed is the model folder, which should contain the model files and the rdf.yaml file.

Static methods

Model.createDeepLearningModel( String modelFolder, String modelSource, EngineInfo engineInfo )

Method that creates a JDLL DL model that can be loaded and run.

  • modelFolder: path to the directory housing the files that define the desired DL model. The model folder contains the .pth/.pt file in the case of Pytorch/Torchscript, the .onnx file for Onnx models, and the variables folder and .pb file for Tensorflow.

  • modelSource: path to the file that contains the DL model. Not relevant for Tensorflow. For Pytorch it is the path to the .pt/.pth file and for Onnx, the path to the .onnx file.

  • engineInfo: instance of the EngineInfo class that contains the information about the engine that is going to be used to load and run the model. The engine needs to be compatible with the model. For more info about EngineInfo click here.

Below is a complete example of how to create a Model. The example includes the instantiation of the EngineInfo object.

// First instantiate the EngineInfo object with the framework name, the version
// and the engines directory
String framework = "torchscript";
String enginesDir = "C:\\Users\\carlos\\icy\\engines";
String version = "1.9.0";
EngineInfo engineInfo = EngineInfo.defineCompatibleDLEngineCPU(framework, version, enginesDir);

// Now define the model
String modelFolder = "C:\\Users\\carlos\\icy\\models\\pytorch_model";
String modelSource = "C:\\Users\\carlos\\icy\\models\\pytorch_model\\torchscript-weights.pt";

Model model = Model.createDeepLearningModel(modelFolder, modelSource, engineInfo);

// The model can now be loaded into memory
model.load();
System.out.println("Great success!");

Output:

Great success!

Model.createDeepLearningModel( String modelFolder, String modelSource, EngineInfo engineInfo, ClassLoader classloader)

Almost the same method as Model.createDeepLearningModel( String modelFolder, String modelSource, EngineInfo engineInfo). The only difference is that this method allows choosing the parent ClassLoader for the engine. JDLL creates a separate ChildFirst-ParentLast CustomClassLoader for each of the engines loaded to avoid conflicts between them. In order to have access to the classes of the main ClassLoader, the ChildFirst-ParentLast CustomClassLoader needs a parent. If no classloader argument is provided, the parent ClassLoader will be the Thread's context ClassLoader (Thread.currentThread().getContextClassLoader()).
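The child-first (parent-last) delegation described above can be sketched in plain Java. The class below is an illustrative minimal version, not JDLL's actual CustomClassLoader: it tries its own URLs first and only falls back to the parent when a class is not found, while always delegating core java.* classes to the parent.

```java
import java.net.URL;
import java.net.URLClassLoader;

// Minimal sketch of a child-first (parent-last) ClassLoader, illustrating the
// delegation order JDLL uses to isolate each engine. NOT JDLL's actual class.
public class ChildFirstClassLoader extends URLClassLoader {

    public ChildFirstClassLoader(URL[] urls, ClassLoader parent) {
        super(urls, parent);
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        synchronized (getClassLoadingLock(name)) {
            // Reuse a class this loader has already defined.
            Class<?> c = findLoadedClass(name);
            if (c == null && name.startsWith("java.")) {
                // Core classes must always come from the parent.
                c = super.loadClass(name, false);
            }
            if (c == null) {
                try {
                    // Child first: look in this loader's own URLs before the parent.
                    c = findClass(name);
                } catch (ClassNotFoundException e) {
                    // Parent last: fall back to the parent ClassLoader.
                    c = super.loadClass(name, false);
                }
            }
            if (resolve) resolveClass(c);
            return c;
        }
    }
}
```

With an inverted delegation order like this, two engines loaded by two different child-first loaders can each carry their own copy of conflicting dependencies without clashing.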

The classloader argument is usually not needed, but for some software, such as Icy, that manages ClassLoaders in a custom way, it is necessary.

  • modelFolder: path to the directory housing the files that define the desired DL model. The model folder contains the .pth/.pt file in the case of Pytorch/Torchscript, the .onnx file for Onnx models, and the variables folder and .pb file for Tensorflow.

  • modelSource: path to the file that contains the DL model. Not relevant for Tensorflow. For Pytorch it is the path to the .pt/.pth file and for Onnx, the path to the .onnx file.

  • engineInfo: instance of the EngineInfo class that contains the information about the engine that is going to be used to load and run the model. The engine needs to be compatible with the model. For more info about EngineInfo click here.

  • classloader: ClassLoader that needs to be used as the parent ClassLoader to load the wanted engine. It is usually the ClassLoader where the ImgLib2 class has been loaded.

Below is a complete example of how to create a Model. The example includes the instantiation of the EngineInfo object.

// First instantiate the EngineInfo object with the framework name, the version
// and the engines directory
String framework = "torchscript";
String enginesDir = "C:\\Users\\carlos\\icy\\engines";
String version = "1.9.0";
EngineInfo engineInfo = EngineInfo.defineCompatibleDLEngineCPU(framework, version, enginesDir);

// Now define the model
String modelFolder = "C:\\Users\\carlos\\icy\\models\\pytorch_model";
String modelSource = "C:\\Users\\carlos\\icy\\models\\pytorch_model\\torchscript-weights.pt";

Model model = Model.createDeepLearningModel(modelFolder, modelSource, engineInfo, Thread.currentThread().getContextClassLoader());

// The model can now be loaded into memory
model.load();
System.out.println("Great success!");

Output:

Great success!

Model.createBioimageioModel(String bmzModelFolder, String enginesFolder)

Creates a Model instance for a Bioimage.io model. This method does not need the previous creation of an EngineInfo object because all the information needed to load the model is contained in the rdf.yaml file required by the Bioimage.io model format.

For this method, the engine associated with the model is not required to be exactly the same as the one defined in the rdf.yaml file; it just needs to be compatible. As long as the engine is from the same DL framework and shares the same major version, the model will be created correctly. If the exact engine defined in the rdf.yaml file is required, use the method Model.createBioimageioModelWithExactWeigths(String bmzModelFolder, String enginesFolder).

The safest option is always to load the exact engine defined for the model. However, DL frameworks put a lot of effort into backwards compatibility, so using a close version will work in the majority of cases. In addition, loading every model with its exact engine implies having all the engines installed.
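The "same framework, same major version" compatibility rule can be expressed as a tiny version check. The helper below is a hypothetical illustration, not part of the JDLL API.

```java
public class EngineCompat {
    // Sketch of the compatibility rule described above: two engine versions of
    // the same framework are considered compatible when their major version
    // numbers match (e.g. 1.13.1 and 1.9.1, but not 2.11.0 and 1.15.0).
    // Illustrative only; not part of the JDLL API.
    public static boolean sameMajorVersion(String wanted, String installed) {
        String majorWanted = wanted.split("\\.")[0];
        String majorInstalled = installed.split("\\.")[0];
        return majorWanted.equals(majorInstalled);
    }
}
```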

  • bmzModelFolder: path to the folder containing the Bioimage.io model. The Bioimage.io model folder is the directory that hosts the rdf.yaml file and the files that contain the DL model (.pth/.pt file in the case of Pytorch/Torchscript, .onnx file for Onnx models and the variables folder and .pb file for Tensorflow).

  • enginesFolder: directory where all the engines are installed. In the image below the argument would be "C:\\Users\\carlos\\icy\\engines":

[Image: contents of the engines directory]

Example of loading a Bioimage.io model. The model used for this example can be found here. The rdf.yaml file of the model requires Pytorch 1.13.1 to load it. The enginesDir of the example is represented by the image above. As can be seen, Pytorch 1.13.1 is not installed; however, there is a compatible Pytorch (Pytorch 1.9.1) which will be able to load the model. Note that this is not the best practice, as Pytorch 1.9.1 is older and several subversions away from 1.13.1. Even though it might work, it is always advisable to have the latest versions of each DL framework installed.

String bmzModelFolder = "C:\\Users\\carlos\\icy\\models\\Neuron Segmentation in EM (Membrane Prediction)";
String enginesDir = "C:\\Users\\carlos\\icy\\engines";

Model model = Model.createBioimageioModel(bmzModelFolder, enginesDir);

model.load();
System.out.println("Great success!");

Output:

Great success!

Now, an example where the model will not be created and an exception will be thrown because no engine compatible with the model is installed. The example uses the placid-llama model from the Bioimage.io. The selected model requires Tensorflow 2 and, as the image above shows, there is no engine compatible with Tensorflow 2 installed, only Pytorch and Tensorflow 1, which is not compatible because of the different major version.

String bmzModelFolder = "C:\\Users\\carlos\\icy\\models\\B. Sutilist bacteria segmentation - Widefield microscopy - 2D UNet";
String enginesDir = "C:\\Users\\carlos\\icy\\engines";

Model model = Model.createBioimageioModel(bmzModelFolder, enginesDir);

Output:

IOException: Please install a compatible engine with the model weights. To be compatible the engine has to be of the same framework and the major version needs to be the same. The model weights are: [tensorflow 2.11.0, keras 2.11.0]

Model.createBioimageioModelWithExactWeigths(String bmzModelFolder, String enginesFolder)

Similar to Model.createBioimageioModel(String bmzModelFolder, String enginesFolder), this method creates a Model instance from a Bioimage.io model without needing to instantiate an EngineInfo object to define the engine needed to load the model.

The only difference is that this method will only load the model with an engine that has been defined in its rdf.yaml file. If none of the exact engines (same DL framework, same version) is installed, the method will throw an exception. Thus, if the exact engine is not installed, or if a previously loaded engine blocks loading the new one, an error will occur.

  • bmzModelFolder: path to the folder containing the Bioimage.io model. The Bioimage.io model folder is the directory that hosts the rdf.yaml file and the files that contain the DL model (.pth/.pt file in the case of Pytorch/Torchscript, .onnx file for Onnx models and the variables folder and .pb file for Tensorflow).

  • enginesFolder: directory where all the engines are installed. In the image below the argument would be "C:\\Users\\carlos\\icy\\engines":

[Image: contents of the engines directory]

Example of loading a Bioimage.io model with exact weights. The model used for this example can be found here. The rdf.yaml file of the model requires Tensorflow 1.15 to load it. The enginesDir of the example is represented by the image above. As can be seen, Tensorflow 1.15 is installed, so loading should work.

String bmzModelFolder = "C:\\Users\\carlos\\icy\\models\\Neuron Segmentation in 2D EM (Membrane)";
String enginesDir = "C:\\Users\\carlos\\icy\\engines";

Model model = Model.createBioimageioModelWithExactWeigths(bmzModelFolder, enginesDir);

model.load();
System.out.println("Great success!");

Output:

Great success!

Example of not being able to load a Bioimage.io model because the exact weights are not installed. The model used for this example can be found here. The rdf.yaml file of the model requires Pytorch 1.13.1 to load it. The enginesDir of the example is represented by the image above. As can be seen, Pytorch 1.13.1 is not installed, thus trying to load this model with Model.createBioimageioModelWithExactWeigths(String bmzModelFolder, String enginesDir) will throw an exception.

String bmzModelFolder = "C:\\Users\\carlos\\icy\\models\\Neuron Segmentation in EM (Membrane Prediction)";
String enginesDir = "C:\\Users\\carlos\\icy\\engines";

Model model = Model.createBioimageioModelWithExactWeigths(bmzModelFolder, enginesDir);

Output:

IOException: Please install the engines defined by the model weights. The model weights are: torchscript 1.13.1

Non-static methods (Loading a model, running a model and closing the model)

model.load()

Once the Model instance has been created for the wanted model, this method moves the model from storage into memory so that it can be used to make inference and predictions. Once it has been loaded, it can be used several times to make inference, as long as it is not closed.

model.runModel( List< Tensor < ? > > inTensors, List< Tensor < ? > > outTensors )

Once the model has been loaded, it can be used to run inference on JDLL tensors. If the model has been closed (unloaded), this method cannot be used. For more information about how to create JDLL tensors, click here.

  • inTensors: list of input tensors.
  • outTensors: list of output tensors; they can be provided empty or already allocated.

model.closeModel()

Unloads the model from memory and frees the resources allocated by it. Once the model has been closed, it needs to be loaded again before it can be used.
