I. List of supported engines
Currently JDLL supports 3 Deep Learning frameworks:
- Tensorflow 1 and 2. The tags used to refer to this framework are `tensorflow` or `tensorflow_saved_model_bundle` (the second one is the one used by Bioimage.io).
- Pytorch 1 and 2. The tags used to refer to this framework are `pytorch` or `torchscript` (the second one is the one used by Bioimage.io).
- Onnx. The tag used to refer to this framework is `onnx`.
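As a quick illustration of the naming above, the Bioimage.io weight-format names can be mapped to the JDLL framework tags with a plain lookup. This is only a sketch: the strings come straight from the list above, while the class name and the `main` method are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: map the Bioimage.io weight-format names to the
// framework tags that JDLL uses (values taken from the list above).
public class FrameworkTags {
	private static final Map<String, String> BIOIMAGEIO_TO_JDLL = new HashMap<>();
	static {
		BIOIMAGEIO_TO_JDLL.put("tensorflow_saved_model_bundle", "tensorflow");
		BIOIMAGEIO_TO_JDLL.put("torchscript", "pytorch");
		BIOIMAGEIO_TO_JDLL.put("onnx", "onnx");
	}

	public static void main(String[] args) {
		// e.g. an rdf.yaml declaring "torchscript" weights corresponds to the "pytorch" tag
		System.out.println(BIOIMAGEIO_TO_JDLL.get("torchscript"));
	}
}
```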
DL framework | Tag in JDLL | Versions for Windows-x86_64 | Versions for Linux-x86_64 | Versions for MacOS-x86_64 | Versions for Linux-arm64 | Versions for MacOS-arm64 |
---|---|---|---|---|---|---|
Tensorflow | `tensorflow` or `tensorflow_saved_model_bundle` | 1.12.0, 1.13.1, 1.14.0, 1.15.0, 2.3.1, 2.4.1, 2.7.0, 2.7.1, 2.7.4, 2.10.1 | 1.12.0, 1.13.1, 1.14.0, 1.15.0, 2.3.1, 2.4.1, 2.7.0, 2.7.1, 2.7.4, 2.10.1 | 1.12.0, 1.13.1, 1.14.0, 1.15.0, 2.3.1, 2.4.1, 2.7.0, 2.7.1, 2.7.4, 2.10.1 | 2.7.0 | 2.7.0 |
Pytorch | `pytorch` or `torchscript` | 1.7.1, 1.8.1, 1.9.0, 1.9.1, 1.10.0, 1.11.0, 1.12.1, 1.13.0, 1.13.1, 2.0.0 | 1.7.1, 1.8.1, 1.9.0, 1.9.1, 1.10.0, 1.11.0, 1.12.1, 1.13.0, 1.13.1, 2.0.0 | 1.7.1, 1.8.1, 1.9.0, 1.9.1, 1.10.0, 1.11.0, 1.12.1, 1.13.0, 1.13.1, 2.0.0 | 1.11.0, 1.12.1, 1.13.0, 1.13.1, 2.0.0 | 1.11.0, 1.12.1, 1.13.0, 1.13.1, 2.0.0 |
Onnx* | `onnx` | 8(1.3.1), 9(1.4.0), 10(1.5.2), 11(1.6.0), 12(1.7.0), 13(1.8.1), 14(1.9.0), 15(1.10.0), 16(1.11.0), 17(1.12.1), 18(1.13.1) | 8(1.3.1), 9(1.4.0), 10(1.5.2), 11(1.6.0), 12(1.7.0), 13(1.8.1), 14(1.9.0), 15(1.10.0), 16(1.11.0), 17(1.12.1), 18(1.13.1) | 8(1.3.1), 9(1.4.0), 10(1.5.2), 11(1.6.0), 12(1.7.0), 13(1.8.1), 14(1.9.0), 15(1.10.0), 16(1.11.0), 17(1.12.1), 18(1.13.1) | 17(1.12.1), 18(1.13.1) | 17(1.12.1), 18(1.13.1) |
*For the Onnx versions columns, the number outside the parentheses refers to the opset version (the one used in the Bioimage.io rdf.yaml) and the number in parentheses is the actual library version.
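As an illustration, the opset number is what one would pass as the version when querying for an Onnx engine with the AvailableEngines helper shown in the examples further below. This is only a hedged sketch: it assumes that the opset number, which is how the table above and the Bioimage.io rdf.yaml refer to Onnx versions, is also the version string JDLL expects.

```java
// Sketch (assumption): for onnx, the opset number from the table above is
// passed as the version when querying the engines available for this OS.
String framework = "onnx";
String version = "18";   // opset 18, listed as 18(1.13.1) in the table
Boolean cpu = true;
Boolean gpu = true;
List<DeepLearningVersion> onnxEngines = 
	AvailableEngines.getEnginesForOsByParams(framework, version, cpu, gpu);
System.out.println("Onnx engines found for this OS: " + onnxEngines.size());
```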
MacOS systems do not support GPU acceleration, although the arm64 chips in newer Macs greatly accelerate the computation. For Windows and Linux-x86_64, on the other hand, there is an engine with GPU support for every DL framework and version.
Looking at the table above, one can observe that for some frameworks, such as Tensorflow, not every Python version has an associated Java API. JDLL nevertheless tries to provide support for every existing model. It is assumed that the Java API of a given version is largely compatible with the nearby Python versions that do not have a corresponding Java API of their own.
This is why, for example, when searching for Tensorflow 2.7.2 among the available engines, the result will return the Tensorflow 2.7.4 version. JDLL assumes that Tensorflow 2.7.4 is equivalent to Tensorflow 2.7.2, so they can be used interchangeably. The list of interchangeable versions can be found here.
Here are a couple of examples illustrating this JDLL behaviour.
The first one checks whether JDLL has any engine that supports Tensorflow 2.7.2 on both CPU and GPU. The example uses methods explained in the Wiki section: VIV. Engine Management II (AvailableEngines).
```java
String framework = "tensorflow";
String version = "2.7.2";
Boolean cpu = true;
Boolean gpu = true;
List<DeepLearningVersion> engineList = 
	AvailableEngines.getEnginesForOsByParams(framework, version, cpu, gpu);
System.out.println("Number of engines found: " + engineList.size());
System.out.println("Framework of the engine: " + engineList.get(0).getFramework());
System.out.println("Python version of the DL framework of the engine: " + engineList.get(0).getPythonVersion());
// getVersion() is assumed to return the Java API version of the engine
System.out.println("Java API version of the DL framework of the engine: " + engineList.get(0).getVersion());
```
Output:
```
Number of engines found: 1
Framework of the engine: tensorflow
Python version of the DL framework of the engine: 2.7.4
Java API version of the DL framework of the engine: 0.4.2
```
It can be seen that the only engine returned does not have the same DL framework Python version as the one requested (2.7.2 != 2.7.4); however, JDLL assumes they are interchangeable, so it returns it. Looking at the JSON file with the interchangeable versions, one can observe that both use the same Java API version: 0.4.2.
The second example shows how an `EngineInfo` instance can be created for Tensorflow 2.7.2, which, according to the table above, does not have an exact Java API available in JDLL. All the methods and classes used in the example are explained in detail in the Wiki section: XIV. Load and run models I (EngineInfo).
```java
String framework = "tensorflow";
String version = "2.7.2";
Boolean cpu = true;
Boolean gpu = true;
EngineInfo engineInfo = EngineInfo.defineDLEngine(framework, version, cpu, gpu);
System.out.println(engineInfo.getFramework());
System.out.println(engineInfo.getVersion());
```
Output:
```
tensorflow
2.7.4
```
Again it can be observed that JDLL considers Tensorflow 2.7.2 and 2.7.4 equivalent.
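Once the `EngineInfo` instance has been resolved, it is normally handed to the model-loading API covered in the Wiki section XIV. Load and run models I (EngineInfo). The sketch below assumes that API; the model paths are placeholders, not real files.

```java
// Sketch: pass the resolved EngineInfo to the model-loading API described in
// "XIV. Load and run models I (EngineInfo)". The paths are hypothetical placeholders.
String modelFolder = "/path/to/model";            // hypothetical model directory
String modelSource = "/path/to/model/weights.pb"; // hypothetical weights file
Model model = Model.createDeepLearningModel(modelFolder, modelSource, engineInfo);
model.loadModel();
// ... run inference on the loaded model (see the Wiki section above) ...
```

For reference, the table below lists, for each Java API version shipped with JDLL, the framework versions that are treated as interchangeable, together with the minimum Java version required by the engine.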
DL framework | API version | Minimum Java version required | Interchangeable versions |
---|---|---|---|
tensorflow | 2.10.1 | 11 | 2.11.0, 2.10.1, 2.10.0, 2.9.4, 2.9.3, 2.9.2, 2.9.1, 2.9.0, 2.8.5, 2.8.4, 2.8.3, 2.8.2, 2.8.1, 2.8.0 |
tensorflow | 2.7.4 | 8 | 2.7.4, 2.7.2 |
tensorflow | 2.7.1 | 8 | 2.7.1 |
tensorflow | 2.7.0 | 8 | 2.7.0, 2.6.0, 2.6.1, 2.6.2, 2.6.3, 2.5.2, 2.5.1, 2.5.0 |
tensorflow | 2.4.1 | 8 | 2.4.1, 2.4.4, 2.4.3, 2.4.2, 2.4.0 |
tensorflow | 2.3.1 | 8 | 2.3.1, 2.3.4, 2.3.3, 2.3.2, 2.3.0, 2.2.3, 2.2.2, 2.2.1, 2.2.0, 2.1.4, 2.1.3, 2.1.2, 2.1.1, 2.1.0, 2.0.4, 2.0.3, 2.0.2, 2.0.1, 2.0.0 |
tensorflow | 1.15.0 | 8 | 1.15.0, 1.15.5, 1.15.4, 1.15.3, 1.15.2, 1.15.1 |
tensorflow | 1.14.0 | 8 | 1.14.0 |
tensorflow | 1.13.1 | 8 | 1.13.1, 1.13.2 |
tensorflow | 1.12.0 | 8 | 1.12.0, 1.12.3, 1.12.2, 1.12.1, 1.12.0 |
pytorch | 2.0.0 | 8 | 2.0.0 |
pytorch | 1.13.1 | 8 | 1.13.1 |
pytorch | 1.13.0 | 8 | 1.13.0 |
pytorch | 1.12.1 | 8 | 1.12.1 |
pytorch | 1.11.0 | 8 | 1.11.0 |
pytorch | 1.10.0 | 8 | 1.10.0 |
pytorch | 1.9.1 | 8 | 1.9.1 |
pytorch | 1.9.0 | 8 | 1.9.0 |
pytorch | 1.8.1 | 8 | 1.8.1, 1.8.0 |
pytorch | 1.7.1 | 8 | 1.7.1, 1.7.0, 1.6.0, 1.5.1, 1.5.0 |
onnx | 18(1.13.1) | 8 | 18(1.13.1) |
onnx | 17(1.12.1) | 8 | 17(1.12.1) |
onnx | 16(1.11.0) | 8 | 16(1.11.0) |
onnx | 15(1.10.0) | 8 | 15(1.10.0) |
onnx | 14(1.9.0) | 8 | 14(1.9.0) |
onnx | 13(1.8.1) | 8 | 13(1.8.1) |
onnx | 12(1.7.0) | 8 | 12(1.7.0) |
onnx | 11(1.6.0) | 8 | 11(1.6.0) |
onnx | 10(1.5.2) | 8 | 10(1.5.2) |
onnx | 9(1.4.0) | 8 | 9(1.4.0) |
onnx | 8(1.3.1) | 8 | 8(1.3.1), 7(1.2), 6(1.1.2) |