Code for training and benchmarking morphology-appropriate representation learning methods, associated with the following manuscript:
Interpretable representation learning for 3D multi-piece intracellular structures using point clouds
Ritvik Vasan, Alexandra J. Ferrante, Antoine Borensztejn, Christopher L. Frick, Nathalie Gaudreault, Saurabh S. Mogre, Benjamin Morris, Guilherme G. Pires, Susanne M. Rafelski, Julie A. Theriot, Matheus P. Viana
bioRxiv 2024.07.25.605164; doi: https://doi.org/10.1101/2024.07.25.605164
Our analysis is organized as follows.

1. Single cell images
2. Preprocessing (result: pointclouds and SDFs)
   - Punctate structures
     - Alignment, masking, and registration
     - Generate pointclouds
   - Polymorphic structures: Generate SDFs
3. Model training (result: checkpoint)
4. Model inference (results: embeddings, model cost statistics)
5. Interpretability analysis (results: figures)
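For punctate structures, step 2 produces a pointcloud from each segmented single-cell image. As a rough illustration of the idea (not the pipeline's exact sampling scheme; `sample_pointcloud` and its parameters are hypothetical), one can sample voxel coordinates from the segmentation, weighted by image intensity:

```python
import numpy as np

def sample_pointcloud(intensity: np.ndarray, mask: np.ndarray,
                      n_points: int = 2048, seed: int = 0) -> np.ndarray:
    """Sample voxel coordinates inside a segmented structure,
    weighted by image intensity. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    coords = np.argwhere(mask)                 # (N, 3) voxel coordinates
    weights = intensity[mask].astype(float)    # same C-order as argwhere
    weights /= weights.sum()
    idx = rng.choice(len(coords), size=n_points, replace=True, p=weights)
    return coords[idx].astype(float)

# Toy example: a uniformly bright cube inside a 32^3 volume
vol = np.zeros((32, 32, 32))
vol[8:24, 8:24, 8:24] = 1.0
pc = sample_pointcloud(vol, vol > 0.5)
print(pc.shape)  # (2048, 3)
```

The resulting `(n_points, 3)` array is the kind of input a point-cloud autoencoder consumes; the actual preprocessing in this repository is documented in the preprocessing docs referenced below.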
Continue below for guidance on using these models on your own data. If you'd like to reproduce this analysis on our data, check out the following documentation.
- Main usage documentation for reproducing the figures in the paper from published pointclouds and SDFs, including model training and inference (steps 3-5).
- Preprocessing documentation for generating pointclouds and SDFs from our input movies (step 2).
- Development documentation for guidance on working with the code in this repository.
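For polymorphic structures, the preprocessing step represents each shape as a signed distance field (SDF). A minimal sketch of how an SDF can be derived from a binary segmentation, using Euclidean distance transforms (this is a standard construction, not necessarily the exact method used in this repository; `mask_to_sdf` is a hypothetical helper):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def mask_to_sdf(mask: np.ndarray) -> np.ndarray:
    """Signed distance field from a binary segmentation:
    positive outside the object, negative inside, ~0 at the surface."""
    outside = distance_transform_edt(~mask)  # distance to object, for background voxels
    inside = distance_transform_edt(mask)    # distance to background, for object voxels
    return outside - inside

# Toy example: a cube inside a 32^3 volume
mask = np.zeros((32, 32, 32), dtype=bool)
mask[8:24, 8:24, 8:24] = True
sdf = mask_to_sdf(mask)
```

Voxels deep inside the cube get negative values and voxels far from it get positive values, which gives a smooth, differentiable-friendly shape representation compared to the binary mask itself.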
Coming soon
Allen Institute for Cell Science ([email protected])