5.1 Workflows (Custom workflow)
This section follows on from workflows and will demonstrate how you can generate custom workflows and apply them to your deskewed data within napari-lattice! We will use examples to demonstrate this.
Image analysis workflows are usually bespoke, and although we can generate segmented objects using pyclesperanto-assistant, what if we'd like to perform shape measurements as well?
In the previous section, we performed segmentation using:
- gaussian blur
- voronoi-otsu labelling
Now, we need to measure the shape properties of every labelled object across time. We will use the napari-workflows API to design a workflow where we call scikit-image regionprops at the end to perform the measurements.
For executable code and more notes, please refer to this notebook.
We will create a workflow to perform the above segmentation:
# import pyclesperanto (for the processing functions) and the Workflow class
import pyclesperanto_prototype as cle
from napari_workflows import Workflow

# We initialise a workflow
segmentation_workflow = Workflow()

# Set the task for gaussian blur first
input_image = "input_img"

# To set a task, we use the set method
segmentation_workflow.set("gaussian",
                          cle.gaussian_blur,
                          source=input_image,
                          sigma_x=1,
                          sigma_y=1,
                          sigma_z=0)

# The second task uses the output from above, so set the task name 'gaussian' as its source
segmentation_workflow.set("voronoi-otsu",
                          cle.voronoi_otsu_labeling,
                          source="gaussian",
                          spot_sigma=14,
                          outline_sigma=0)
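At this point you can already inspect the task graph; printing a napari-workflows Workflow object typically lists each registered task with its function and arguments, which is a quick optional check before adding more steps:
# optional: print the workflow to see the registered tasks
print(segmentation_workflow)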
This only gives us the label images. To extract regionprops, we can write a custom function and save it as a .py file:
from skimage.measure import regionprops_table
import numpy as np

def measure_region_properties(label_img):
    # regionprops_table expects a numpy array, so convert the label image first
    label_np = np.array(label_img)
    measurements = regionprops_table(label_np,
                                     properties=('area',
                                                 'centroid',
                                                 'axis_major_length',
                                                 'axis_minor_length'))
    return measurements, label_np
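regionprops_table returns a plain dictionary of column arrays. If you want to inspect or save these measurements yourself outside napari-lattice, one option (purely illustrative, using pandas and a made-up file name, and assuming measure_region_properties from above is in scope) is to wrap the dictionary in a DataFrame; the tiny synthetic label image below is only there to make the snippet self-contained:
import numpy as np
import pandas as pd

# a small synthetic 3D label image with two objects, just to illustrate the output format
label_img = np.zeros((10, 10, 10), dtype=int)
label_img[2:5, 2:5, 2:5] = 1
label_img[6:9, 6:9, 6:9] = 2

measurements, label_np = measure_region_properties(label_img)
df = pd.DataFrame(measurements)                     # one row per label
df.to_csv("regionprops_example.csv", index=False)   # illustrative file name
print(df)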
The measure_region_properties function above is saved as measure_regionprops.py. Now we can import it within our workflow and set it as a task, where the output from voronoi-otsu (a label image) is the input for regionprops:
import measure_regionprops

segmentation_workflow.set("region_props",
                          measure_regionprops.measure_region_properties,
                          "voronoi-otsu")
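Before saving, you can sanity-check the workflow locally: assign an image to the input name and request the final task, and napari-workflows resolves the whole chain (gaussian -> voronoi-otsu -> region_props). The random volume below is only a placeholder to confirm the graph runs; substitute a real deskewed stack:
import numpy as np

# assign data to the workflow input name stored in input_image ("input_img")
test_img = np.random.random((16, 256, 256))   # placeholder ZYX volume
segmentation_workflow.set(input_image, test_img)

# requesting a task executes it together with everything it depends on
measurements, labels = segmentation_workflow.get("region_props")
print(measurements.keys())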
Save this workflow:
from napari_workflows import _io_yaml_v1
_io_yaml_v1.save_workflow("regionprops_workflow.yml", segmentation_workflow)
To use the segmentation workflow with the custom Python module measure_regionprops, we need to ensure that both the regionprops_workflow.yml file and measure_regionprops.py are in the same folder. The workflow is then ready to be used within napari-lattice.
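If you want to verify the saved file before moving into napari-lattice, the same _io_yaml_v1 module can load it back (this assumes a load_workflow function alongside save_workflow, and measure_regionprops.py must be importable from the working directory so the region_props task can be reconstructed):
from napari_workflows import _io_yaml_v1

# reload the YAML; printing the workflow shows the reconstructed tasks
reloaded_workflow = _io_yaml_v1.load_workflow("regionprops_workflow.yml")
print(reloaded_workflow)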
To apply channel-specific operations, refer to this section.
Open napari-lattice and initialize the plugin. You can try it with the test dataset from here. You can use an ROI from Crop and Deskew, or just test it on the whole image. To learn how to import workflows and apply them to your data, instructions can be found here. If you are Previewing the workflow, napari-lattice will perform the segmentation and save the regionprops table as a csv file in the same folder as the workflow. If you are Applying and Saving using a channel and time range, it will save the images and the table as a csv file in the specified output folder.
Cellpose is a very popular generalist cell segmentation algorithm. It now has a 'human-in-the-loop' training feature, making it relatively easy to generate custom models.
Let us generate cell segmentation workflows using Cellpose for LLS data. You can install cellpose using its napari plugin, via Plugins -> Install/Uninstall Plugins and searching for cellpose. The installation may take a while. Once installed, to use the GPU for prediction, exit napari and then follow the GPU configuration instructions on the cellpose GitHub page.
Link to Cellpose workflow notebook
To use cellpose in workflows, create a Python file like the following and save it as cellpose_1.py. You can use any name, but make sure to use that name when calling the module from your workflow:
### cellpose_1.py
import numpy as np
from cellpose import models

def predict_cellpose(img, model_type: str = "cyto"):
    model = models.Cellpose(gpu=True, model_type=model_type)
    channels = [0, 0]
    img = np.array(img)
    masks, flows, styles, diams = model.eval(img, flow_threshold=None, channels=channels, diameter=25, do_3D=True)
    return masks
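The function can be tried on its own before wiring it into a workflow. The random volume below is only a smoke test: Cellpose will download the cyto model weights on first use, and random data will not give meaningful masks, so swap in a real deskewed stack:
import numpy as np
from cellpose_1 import predict_cellpose

test_img = np.random.random((32, 128, 128))   # placeholder ZYX volume
masks = predict_cellpose(test_img, model_type="cyto")
print(masks.shape, masks.max())               # label image; max = number of detected objects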
For details on how to use the cellpose API, refer to their documentation: https://cellpose.readthedocs.io/en/latest/index.html.
We define a workflow using the predict_cellpose function from the cellpose_1.py module.
from napari_workflows import Workflow
from cellpose_1 import predict_cellpose

# We initialise a workflow
cellpose_workflow = Workflow()

# define the cellpose prediction task
input_arg = "input"
task_name = "cellpose"
cellpose_workflow.set(task_name, predict_cellpose, input_arg, model_type="cyto")
We use the cyto model here, but we can pass any model we like. Save the workflow:
from napari_workflows import _io_yaml_v1
_io_yaml_v1.save_workflow("cellpose_1_workflow.yml",cellpose_workflow)
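Since model_type is just a keyword argument on the task, switching to another of Cellpose's built-in models (for example nuclei) only needs that argument changed; the workflow and file names below are illustrative:
from napari_workflows import Workflow, _io_yaml_v1
from cellpose_1 import predict_cellpose

# a variant workflow that runs the built-in 'nuclei' model instead of 'cyto'
nuclei_workflow = Workflow()
nuclei_workflow.set("cellpose", predict_cellpose, "input", model_type="nuclei")
_io_yaml_v1.save_workflow("cellpose_nuclei_workflow.yml", nuclei_workflow)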
To use a different model or parameters for each channel, refer to this section.
Start napari-lattice and import your LLS data. To learn how to import workflows and apply them to your data, instructions can be found here.
More often than not, the image processing algorithm used, such as a threshold or a deep learning model, is specific to a particular channel of the microscopy image. It is possible to design workflows with channel-specific operations. This is particularly useful for batch operations.
napari-lattice has a module named config:
from napari_lattice import config
Within your custom function, you can specify:
# indexing starts at zero
if config.channel == 0:
    # APPLY channel 0 specific function here
    ...
elif config.channel == 1:
    # APPLY channel 1 specific function here
    ...
Similarly, this can also be done for time:
if config.time == x:
    # APPLY time x specific function here
    ...
elif config.time == y:
    # APPLY time y specific function here
    ...
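Putting this together, a channel-aware custom function for a workflow might look like the sketch below. This is only a sketch under the assumptions above: config.channel is expected to be populated by napari-lattice as it iterates over channels, the module and function names are made up, and the parameter values are arbitrary:
# channel_aware_segmentation.py (hypothetical module name)
import numpy as np
import pyclesperanto_prototype as cle
from napari_lattice import config

def segment_by_channel(img):
    img = np.asarray(img)
    # indexing starts at zero
    if config.channel == 0:
        # e.g. a nuclei-like channel: larger spot_sigma
        return cle.voronoi_otsu_labeling(img, spot_sigma=14, outline_sigma=0)
    elif config.channel == 1:
        # e.g. a cytoplasm-like channel: smaller spot_sigma, some outline smoothing
        return cle.voronoi_otsu_labeling(img, spot_sigma=5, outline_sigma=1)
    # default for any other channel
    return cle.voronoi_otsu_labeling(img, spot_sigma=10, outline_sigma=0)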