So, I got pulled into JDLL by the Spanish folks in Prague.
I was looking at the readme, and the first thing I thought is that I might help you folks with some automation: a template ready to clone and start playing with.
It's based on Kotlin (and some or all of it in Gradle). I'm pretty confident I could provide something along these lines:
```kotlin
// 0. Setting up JDLL
// no need, just clone the template repo

// 1. Downloading a model (optional)
downloadModel {
    // enum, statically typed
    model = Model.`B. Sutilist bacteria segmentation - Widefield microscopy - 2D UNet`
    // set with some default value, but customizable
    // dst = projectDir / "models"
}

// 2. Installing DL engines
// we may also implement all the necessary logic expressing the compatibility among
// the different DL frameworks, OSes and architectures, printing errors if incompatible
// or warnings if a best-effort try is being made
framework {
    // if engine, cpu and gpu are not specified, then
    // `EngineInstall::installEnginesForModelByNameinDir` will be called
    // engine = Tensorflow.`2.0` // also enum, statically typed
    // cpu = true
    // gpu = true
    // set with some default value, but customizable
    installationDir = projectDir / "engines"
}
// will automatically fail if `!installed`

// 3. Creating the tensors
val img1 = model.create<FloatType>() // [1, 512, 512, 1] inferred from `model`
tensor {
    input = build(model.inputs.bxyc, img1) // "input_1" might be inferred
    outputEmpty = buildEmptyTensor(model.outputs.bxyc) // "conv2d_19" might be inferred
    outputBlankTensor = buildBlankTensor<FloatType>(model.outputs.bxyc) // [1, 512, 512, 3] inferred
}

// 4. Loading the model
dlEngine { // or dlCompatibleEngine {
    framework = TensorFlow.`2.7.0`
    cpu = true
    gpu = true
    // engineDir inferred
}

// the rest of the steps can be created and executed automatically
// everything gets inferred:
// - model load
// - model run
// - cleanup
```
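For concreteness, a block like `framework { }` could be backed by a plain Kotlin builder (a lambda with receiver). This is only a hypothetical sketch of that pattern: the `Engine` enum, `FrameworkSpec` class, and the eventual delegation to JDLL's installer are assumptions, not existing API.

```kotlin
import java.io.File

// Hypothetical engine enum; real entries would mirror JDLL's supported frameworks.
enum class Engine { TENSORFLOW_2_7_0, PYTORCH_1_13_1 }

// Receiver class for the `framework { }` block, with conventional defaults.
class FrameworkSpec(projectDir: File) {
    var engine: Engine? = null              // null -> infer from the model
    var cpu: Boolean = true                 // convention: CPU support on by default
    var gpu: Boolean = true                 // convention: try GPU, warn if unavailable
    var installationDir: File = File(projectDir, "engines")
}

// DSL entry point: apply the user's configuration on top of the defaults,
// then the result would be handed to something like
// `EngineInstall::installEnginesForModelByNameinDir`.
fun framework(projectDir: File, configure: FrameworkSpec.() -> Unit): FrameworkSpec =
    FrameworkSpec(projectDir).apply(configure)

fun main() {
    // Only override what differs from the conventions.
    val spec = framework(File(".")) {
        installationDir = File("engines")
    }
    println("cpu=${spec.cpu} gpu=${spec.gpu} dir=${spec.installationDir.name}")
}
```

The nice property of this pattern is that every unset field silently keeps its convention, which is exactly what makes the blocks in the proposal optional.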
Following the Gradle philosophy of "convention over configuration", we could assume conventions for `framework` and make that step completely optional as well. Something similar for `cpu`/`gpu` = true.
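Taken to its limit, the fully-conventional happy path could collapse to a single block; this is a hypothetical fragment assuming every other step has a sensible default:

```kotlin
// Only the model is named; engine choice, cpu/gpu, directories,
// tensor shapes, load/run/cleanup would all fall back to conventions.
downloadModel {
    model = Model.`B. Sutilist bacteria segmentation - Widefield microscopy - 2D UNet`
}
```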
It's a little hacky, thrown together to get something ASAP, and it works right up until `Model::createDeepLearningModel` is called, because at that point the classloader concept has to be fixed/reworked. But the idea is that
you can massively reduce the required code (compared to the original) and make it a true script (right now it essentially runs at Gradle configuration time, with manual caching of the engine and model).
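One way to get out of configuration time would be to wrap the download/install work in registered tasks, since a task body registered this way only runs at execution time. A minimal sketch using the standard Gradle Kotlin DSL (the `downloadModel` task name and its body are made up for illustration):

```kotlin
// build.gradle.kts
// Nothing inside doLast {} runs during configuration;
// it executes only when `./gradlew downloadModel` is requested,
// and Gradle's own up-to-date/caching machinery can replace the manual caching.
tasks.register("downloadModel") {
    doLast {
        // download + cache the model here instead of at configuration time
        println("downloading model…")
    }
}
```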