Here we provide our implementation of CM recognition at stage III. We adopt the pix2pix model to predict cTnT fluorescent labels from day-12 live-cell bright-field images. The code for training and testing the pix2pix model is adapted from the official implementation, junyanz/pytorch-CycleGAN-and-pix2pix (https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix).
Download the datasets from the following links and unzip them at `./pix2pix/datasets/`:
- `CM.zip` (https://drive.google.com/file/d/1aH4ASfTFt5GivGyNydaiP4AYY0s6SjwV/view?usp=sharing): 35 paired whole-well bright-field and cTnT fluorescence images for training and 36 for testing. The training and testing sets are from the same cell lines.
- `CM_new_cell_lines.zip` (https://drive.google.com/file/d/1bB7OoehPhM5GnAWCBEXKUnAY9Xi3vhV1/view?usp=sharing): 62 paired whole-well bright-field and cTnT fluorescence images for testing, from three additional cell lines.
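If you prefer a scripted download, the following is a minimal sketch assuming the third-party `gdown` package (not part of this repository); downloading the archives manually from the links above works just as well.

```python
# Optional download helper (our sketch, assuming gdown is installed:
# pip install gdown). fuzzy=True lets gdown accept Drive share links.
import zipfile
import gdown

urls = {
    "CM.zip": "https://drive.google.com/file/d/1aH4ASfTFt5GivGyNydaiP4AYY0s6SjwV/view?usp=sharing",
    "CM_new_cell_lines.zip": "https://drive.google.com/file/d/1bB7OoehPhM5GnAWCBEXKUnAY9Xi3vhV1/view?usp=sharing",
}
for out, url in urls.items():
    gdown.download(url=url, output=out, fuzzy=True)
    with zipfile.ZipFile(out) as zf:
        zf.extractall("./pix2pix/datasets/")  # unzip into the datasets folder
```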
`./pix2pix/datasets/(CM|CM_new_cell_lines)/A/(train|test)/*.png` are bright-field images (1536×1536 pixels), and `.../B/(train|test)/*.png` are the corresponding fluorescence images (1536×1536 pixels).
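As a quick check before combining the images, here is a minimal sketch (ours, not part of the repository) that verifies every bright-field image has a same-named fluorescence counterpart:

```python
# Sanity check: A/ and B/ should contain identically named paired files.
from pathlib import Path

root = Path("./pix2pix/datasets/CM")  # or CM_new_cell_lines
for split in ("train", "test"):
    a = {p.name for p in (root / "A" / split).glob("*.png")}
    b = {p.name for p in (root / "B" / split).glob("*.png")}
    unpaired = a ^ b  # file names present on one side only
    print(f"{split}: {len(a)} bright-field, {len(b)} fluorescence, "
          f"{len(unpaired)} unpaired")
```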
If you want to train and test the models, please run

```
cd pix2pix/datasets
python combine_A_and_B.py --fold_A ./CM/A --fold_B ./CM/B --fold_AB ./CM/ --no_multiprocessing
python combine_A_and_B.py --fold_A ./CM_new_cell_lines/A --fold_B ./CM_new_cell_lines/B --fold_AB ./CM_new_cell_lines/ --no_multiprocessing
cd ..
```
These commands concatenate the bright-field and fluorescence images side by side into `.../(CM|CM_new_cell_lines)/(train|test)/*.png` (1536×3072 pixels).
You can also prepare your custom dataset in a similar way.
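In essence, `combine_A_and_B.py` does the following for each image pair (a minimal sketch with illustrative file names), which you can replicate when assembling a custom dataset by hand:

```python
# Concatenate a bright-field image and its fluorescence image side by side.
import numpy as np
from PIL import Image

im_A = np.array(Image.open("CM/A/train/example.png"))  # bright-field, 1536x1536
im_B = np.array(Image.open("CM/B/train/example.png"))  # fluorescence, 1536x1536
im_AB = np.concatenate([im_A, im_B], axis=1)           # side by side, 1536x3072
Image.fromarray(im_AB).save("CM/train/example.png")
```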
To train the pix2pix model, run the following two commands in sequence:
```
python train.py --dataroot ./datasets/CM --name brightfield2fluorescence --model pix2pix --input_nc 1 --output_nc 1 --load_size 1536 --crop_size 256 --lr 2e-4 --n_epochs 1000 --n_epochs_decay 0 --norm instance --netD n_layers --n_layers_D 1 --batch_size 16 --direction AtoB --save_epoch_freq 100 --dataset_mode aligned --use_resize_conv --seed 1234
python train.py --dataroot ./datasets/CM --name brightfield2fluorescence --model pix2pix --input_nc 1 --output_nc 1 --load_size 1536 --crop_size 256 --lr 2e-4 --n_epochs 1000 --n_epochs_decay 1000 --norm instance --netD n_layers --n_layers_D 1 --batch_size 16 --direction AtoB --save_epoch_freq 100 --dataset_mode aligned --use_resize_conv --no_adversarial_loss --epoch 1000 --epoch_count 1001 --continue_train --seed 5678
```
Note that we modified the code so that it only supports gray-scale input and output. Note also that in the last 1000 epochs of training (the second command), the adversarial loss is disabled to encourage higher fidelity of the fluorescence reconstruction.
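Schematically, the generator objective in the two phases is as follows (a sketch, assuming the standard pix2pix loss with its default L1 weight of 100; names are illustrative):

```python
# Two-phase generator objective: GAN + L1 for epochs 1-1000,
# then L1 only for epochs 1001-2000 (--no_adversarial_loss).
import torch
import torch.nn as nn

l1_loss = nn.L1Loss()
gan_loss = nn.BCEWithLogitsLoss()  # pix2pix's default 'vanilla' GAN loss

def generator_loss(fake_B, real_B, disc_logits_on_fake,
                   lambda_L1=100.0, use_adversarial=True):
    loss = lambda_L1 * l1_loss(fake_B, real_B)         # reconstruction term
    if use_adversarial:                                # epochs 1-1000
        target = torch.ones_like(disc_logits_on_fake)  # "fool the discriminator"
        loss = loss + gan_loss(disc_logits_on_fake, target)
    return loss                                        # epochs 1001-2000: L1 only
```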
We added two options to our code:
- `--use_resize_conv`: replace the transposed convolutional layers with resize-convolution.
- `--no_adversarial_loss`: disable the adversarial loss (i.e., disable the discriminator).
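For reference, resize-convolution upsamples by interpolation and then applies an ordinary convolution, avoiding the checkerboard artifacts that transposed convolutions can produce; a minimal sketch (our illustration, not the exact layer in the code):

```python
# Resize-convolution: interpolate up, then convolve. Doubles spatial size,
# like nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1).
import torch
import torch.nn as nn

def resize_conv(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode="nearest"),         # resize step
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),  # convolution step
    )

x = torch.randn(1, 64, 128, 128)
print(resize_conv(64, 32)(x).shape)  # torch.Size([1, 32, 256, 256])
```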
You can use `python train.py --help` to see the meaning of the other options.
You may use visdom to visualize the loss curves and example generated images during training. Just run `visdom` in a command-line window and open `localhost:8097` in a web browser. By default, 8097 is the visdom port number; it can be changed with the `--display_port` option.
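For example (other training options as above; the port passed to `--display_port` must match the one the visdom server listens on):

```
visdom                                    # starts the visdom server (default port 8097)
python train.py --display_port 8097 ...   # remaining options as in the training commands above
```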
If you want to use our pretrained model, download it from https://drive.google.com/file/d/1JqZQfDAh43lne4IU9zKqtHl5bgsuKzEW/view?usp=sharing and unzip it so that the generator weights are located at `./pix2pix/checkpoints/brightfield2fluorescence/latest_net_G.pth`.
To test the pix2pix model, please run the following command:

```
python test.py --dataroot ./datasets/CM --name brightfield2fluorescence --model pix2pix --direction AtoB --input_nc 1 --output_nc 1 --load_size 1536 --crop_size 1536 --use_resize_conv --eval --num_test 1000
```

or

```
python test.py --dataroot ./datasets/CM_new_cell_lines --name brightfield2fluorescence --model pix2pix --direction AtoB --input_nc 1 --output_nc 1 --load_size 1536 --crop_size 1536 --use_resize_conv --eval --num_test 1000
```
Note that the last option, `--num_test`, specifies the maximum total number of images processed by the model.
The results will be saved to `./pix2pix/results/brightfield2fluorescence/test_latest/images/*_fake_B.png`. We also provide the exact samples generated by our trained model in the dataset folders `./pix2pix/datasets/(CM|CM_new_cell_lines)/B_predicted/`.
Pixel-level comparison between true and predicted fluorescence images is implemented in the Matlab script `./evaluation/pixel_correlation.m`. It computes Pearson's r and a heatmap for a given pair of true and predicted images (Fig. 3d). The colormap can be modified for better visualization.
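For readers without Matlab, an equivalent computation in Python might look like the following sketch (our rendering, not the repository script; file names are illustrative, and we take the heatmap to be a 2D density of true vs. predicted intensities):

```python
# Pixel-level Pearson's r and a density heatmap for one image pair.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
from PIL import Image

true_px = np.asarray(Image.open("true_fluorescence.png"), dtype=np.float64).ravel()
pred_px = np.asarray(Image.open("predicted_fluorescence.png"), dtype=np.float64).ravel()

r = np.corrcoef(true_px, pred_px)[0, 1]  # Pearson's r over all pixels
print(f"pixel-level Pearson's r = {r:.3f}")

plt.hist2d(true_px, pred_px, bins=100, norm=LogNorm())  # density heatmap
plt.xlabel("true intensity")
plt.ylabel("predicted intensity")
plt.show()
```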
Image-level comparison is implemented in the Jupyter notebook `image_correlation.ipynb`. It computes the Differentiation Efficiency Index (i.e., the total fluorescence intensity) for the true and predicted fluorescence images and reports Pearson's r across images (Fig. 3e,f; Supplementary Fig. S4d).
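In essence, the computation is the following (a sketch of our understanding, assuming the predicted images in `B_predicted/` share file names with the true images; adjust to the actual naming if it differs):

```python
# Image-level comparison: Differentiation Efficiency Index per image,
# then Pearson's r across the test set.
import numpy as np
from PIL import Image
from pathlib import Path

def dei(path):
    """Differentiation Efficiency Index: total fluorescence intensity."""
    return float(np.asarray(Image.open(path), dtype=np.float64).sum())

true_dir = Path("./pix2pix/datasets/CM/B/test")       # true fluorescence
pred_dir = Path("./pix2pix/datasets/CM/B_predicted")  # model predictions
names = sorted(p.name for p in true_dir.glob("*.png"))

true_dei = [dei(true_dir / n) for n in names]
pred_dei = [dei(pred_dir / n) for n in names]
print(f"image-level Pearson's r = {np.corrcoef(true_dei, pred_dei)[0, 1]:.3f}")
```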