TotalSegmentator method added in volume.from_nifti() #77

Open · wants to merge 31 commits into base branch `multi-organ-seg`

Commits (31):
- `69339cb` Update volume.py — wu-qiyuan, Jun 3, 2022
- `ff89413` Update use_nnunet.py — wu-qiyuan, Jun 3, 2022
- `a916f4c` Update use_nnunet.py — wu-qiyuan, Jun 3, 2022
- `a68edf8` Update volume.py — wu-qiyuan, Jun 3, 2022
- `83e5593` Update use_nnunet.py — wu-qiyuan, Jun 3, 2022
- `3ceac2e` Update use_nnunet.py — wu-qiyuan, Jun 3, 2022
- `8146044` Update use_nnunet.py — wu-qiyuan, Jun 3, 2022
- `cc7be4b` Create Readme.md — Zhiyuan-Ding, Jun 5, 2022
- `1b38acc` Update Readme.md — Zhiyuan-Ding, Jun 5, 2022
- `ccb1662` Add files via upload — Zhiyuan-Ding, Jun 5, 2022
- `ebab616` Update Readme.md — Zhiyuan-Ding, Jun 5, 2022
- `9601786` Merge pull request #3 from Ding515/multi-organ-seg — wu-qiyuan, Jun 7, 2022
- `078ff2f` Create random.py — wu-qiyuan, Jun 20, 2022
- `fcb349f` Update __init__.py — wu-qiyuan, Jun 20, 2022
- `2753b21` Update __init__.py — wu-qiyuan, Jun 20, 2022
- `c33118a` Update camera_projection.py — wu-qiyuan, Jun 20, 2022
- `85aeec4` Update core.py — wu-qiyuan, Jun 20, 2022
- `4a99ff7` Create exceptions — wu-qiyuan, Jun 20, 2022
- `9889b25` Rename exceptions to exceptions.py — wu-qiyuan, Jun 20, 2022
- `3fbc1bc` Update material_coefficients.py — wu-qiyuan, Aug 9, 2022
- `7da58b5` Update material_coefficients.py — wu-qiyuan, Aug 10, 2022
- `a489235` Update use_nnunet.py — wu-qiyuan, Aug 10, 2022
- `8cebb59` Update material_coefficients.py — wu-qiyuan, Aug 10, 2022
- `d0af595` Update material_coefficients.py — wu-qiyuan, Aug 28, 2022
- `dcc1809` Update volume.py — wu-qiyuan, Aug 29, 2022
- `77d1a8f` Update volume.py — wu-qiyuan, Aug 30, 2022
- `b509398` Update volume.py — wu-qiyuan, Aug 30, 2022
- `20624a3` Update volume.py — wu-qiyuan, Aug 30, 2022
- `ba48fe9` Update use_nnunet.py — wu-qiyuan, Sep 5, 2022
- `48a0769` Update volume.py — wu-qiyuan, Sep 5, 2022
- `10f8cb2` Update README.md — wu-qiyuan, May 20, 2023
35 changes: 35 additions & 0 deletions README.md
@@ -255,3 +255,38 @@ https://github.com/mattmacy/vnet.pytorch
F. Milletari, N. Navab, S-A. Ahmadi. V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation. arXiv:160604797. 2016.

We gratefully acknowledge the support of the NVIDIA Corporation with the donation of the GPUs used for this research.

## Supplementary materials

Scripts for the virtual environment:
`source project_env/bin/activate`
`deactivate`

Convert the Prostate dataset into the correct format with
`nnUNet_convert_decathlon_task -i /xxx/Task05_Prostate`
Note that `Task05_Prostate` must be the folder containing the three subfolders `imagesTr`, `labelsTr`, and `imagesTs`!

3D full-resolution U-Net:
`nnUNet_predict -i $nnUNet_raw_data_base/nnUNet_raw_data/Task003_Liver/imagesTs/ -o nnU_OUTPUT_Task03 -t 3 -m 3d_fullres`
Note: the task ID and input/output paths must be changed for different cases.

Image processing in Python:
https://github.com/fitushar/3D-Medical-Imaging-Preprocessing-All-you-need

DICOM to NIfTI:
`/home/qiyuan/Downloads/MRIcroGL/Resources/dcm2niix -f "Ped01-ref" -p y -z y "output_dir" "/media/qiyuan/My Passport/Segmentation/data/Pediatric-CT-SEG/manifest-1645994167898/Pediatric-CT-SEG/Pediatric-CT-SEG-018B687C/10-11-2008-NA-CT-72580/2.000000-RTSTRUCT-58813"`

Scripts to ssh into the lab server:
`ssh [email protected]`

`scp -r [email protected]:/srv/data1/sean/torso_mid_result/ts_mesh /media/qiyuan/My_Passport/Segmentation/data`

`scp -r /home/qiyuan/environments/project_env [email protected]:/home/sean/anaconda3/envs`

`scp -r /home/qiyuan/Downloads/activate [email protected]:/home/sean/anaconda3/envs/project_env/bin`

`scp -r /home/qiyuan/Documents/run_server.py [email protected]:/home/sean/cis2/nnUNet`

`/home/qiyuan/Documents`

`git remote add origin https://github.com/Ding515/3D-CT-Segmentation.git`
`git push -u`
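The `nnUNet_predict` call above can be wrapped in a small helper so the task ID and paths are not hard-coded for each case. A minimal sketch — the helper name and its defaults are illustrative, not part of this repository:

```python
import shlex

def build_nnunet_predict_cmd(input_dir: str, output_dir: str,
                             task_id: int, model: str = "3d_fullres") -> str:
    """Assemble the nnUNet_predict command line for one task.

    Mirrors the README example:
    nnUNet_predict -i <imagesTs> -o <output> -t <task id> -m 3d_fullres
    """
    args = [
        "nnUNet_predict",
        "-i", input_dir,
        "-o", output_dir,
        "-t", str(task_id),
        "-m", model,
    ]
    # shlex.join quotes any token with shell-special characters.
    return shlex.join(args)

cmd = build_nnunet_predict_cmd(
    "$nnUNet_raw_data_base/nnUNet_raw_data/Task003_Liver/imagesTs/",
    "nnU_OUTPUT_Task03",
    3,
)
print(cmd)
```

The resulting string can be passed to `subprocess.run(cmd, shell=True)` or printed for manual use; building it in one place makes the per-case changes (task ID, paths) explicit.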
10 changes: 10 additions & 0 deletions ct-org/Readme.md
@@ -0,0 +1,10 @@
# Instruction for CT-ORG based mask generation

CT-ORG is a five-class abdominal organ segmentation model, introduced in [CT-ORG, a new dataset for multiple organ segmentation in computed tomography](https://www.nature.com/articles/s41597-020-00715-8). The trained network is packaged as a Docker image at [this link](https://github.com/bbrister/ct_organ_seg_docker).

## Mask generation steps
1. Install the pre-trained models from the [link above](https://github.com/bbrister/ct_organ_seg_docker).
2. Run `org_mask_batch.py`. (Currently the I/O paths are still hard-coded in the file; before publication this should be moved to a command-line interface.)
3. In some cases the original Docker setup will not work, for the following reasons:
   - For CPU/GPU issues, adjust the `--gpus` flag (e.g. `--gpus gpu_index`) in the `./docker/run_docker_container.py` line `sudo docker run --gpus all -v $HOST_SHARED:$CONTAINER_SHARED -t $IMAGE $INFILE $OUTFILE`.
   - For `IOError: CRC check failed`: this is caused by the `nibabel` or NIfTI data version; change `get_data` to `get_fdata`.
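Step 2's note about moving the I/O paths to the command line could look like the following sketch; the argument names are illustrative, not the script's current interface:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Hypothetical CLI for org_mask_batch.py, replacing the hard-coded paths."""
    parser = argparse.ArgumentParser(description="Batch CT-ORG mask generation")
    parser.add_argument("--data", required=True, help="folder of input CT volumes")
    parser.add_argument("--shared", required=True, help="Docker shared folder")
    parser.add_argument("--out", required=True, help="folder for output masks")
    return parser

# Parse the same paths the script currently hard-codes.
args = build_parser().parse_args([
    "--data", "/home/sean/torso_mid_result/data",
    "--shared", "/home/sean/cis2/ct_organ_seg_docker/shared",
    "--out", "/home/sean/torso_mid_result/org_mask",
])
print(args.out)
```

With this, the batch loop below would read `args.data`, `args.shared`, and `args.out` instead of module-level constants.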
32 changes: 32 additions & 0 deletions ct-org/org_mask_batch.py
@@ -0,0 +1,32 @@
# -*- coding: utf-8 -*-
"""
Created on Fri Apr 29 08:28:03 2022

@author: Ding
"""

import os
import shutil

# Paths: raw CT volumes, the Docker container's shared folder, and the output mask folder.
data_loading_path = '/home/sean/torso_mid_result/data'
docker_shared_path = '/home/sean/cis2/ct_organ_seg_docker/shared'
org_mask_path = '/home/sean/torso_mid_result/org_mask'

file_list = os.listdir(data_loading_path)
existing_file_list = os.listdir(org_mask_path)

for case_name in file_list:
    if case_name in existing_file_list:
        continue  # skip cases that already have a mask
    current_file_path = os.path.join(data_loading_path, case_name)
    shared_file_path = os.path.join(docker_shared_path, case_name)
    # Copy the case into the Docker shared folder so the container can see it.
    shutil.copyfile(current_file_path, shared_file_path)

    mask_name = 'seg_' + case_name
    command_line = 'sh run_docker_container.sh ' + case_name + ' ' + mask_name
    return_code = os.system(command_line)
    print(return_code)

    # Move the generated mask out of the shared folder and clean up the input copy.
    shutil.move(os.path.join(docker_shared_path, mask_name),
                os.path.join(org_mask_path, case_name))
    os.remove(shared_file_path)
31 changes: 29 additions & 2 deletions deepdrr/geo/__init__.py
@@ -18,44 +18,71 @@

from .core import (
HomogeneousObject,
HomogeneousPointOrVector,
PointOrVector,
get_data,
Point,
Vector,
Point2D,
Point3D,
Vector2D,
Vector3D,
Line,
HyperPlane,
Line2D,
Plane,
Line3D,
point,
vector,
line,
plane,
p,
v,
l,
pl,
Transform,
FrameTransform,
frame_transform,
RAS_from_LPS,
LPS_from_RAS,
)
from .exceptions import JoinError, MeetError
from .camera_intrinsic_transform import CameraIntrinsicTransform
from .camera_projection import CameraProjection
from scipy.spatial.transform import Rotation
from .random import spherical_uniform

__all__ = [
"HomogeneousObject",
"HomogeneousPointOrVector",
"PointOrVector",
"get_data",
"Point",
"Point2D",
"Point3D",
"Vector",
"Vector2D",
"Vector3D",
"Line",
"HyperPlane",
"Line2D",
"Plane",
"Line3D",
"point",
"vector",
"line",
"plane",
"p",
"v",
"l",
"pl",
"Transform",
"FrameTransform",
"frame_transform",
"RAS_from_LPS",
"LPS_from_RAS",
"JoinError",
"MeetError",
"CameraIntrinsicTransform",
"CameraProjection",
"Rotation",
"spherical_uniform",
]
44 changes: 32 additions & 12 deletions deepdrr/geo/camera_projection.py
@@ -1,17 +1,12 @@
from typing import Union, Optional, Any, TYPE_CHECKING
import numpy as np

from .core import Transform, FrameTransform, point, Point3D, get_data
from .core import Transform, FrameTransform, point, Point3D, get_data, Plane
from .camera_intrinsic_transform import CameraIntrinsicTransform
from ..vol import AnyVolume

# if TYPE_CHECKING:
# from ..vol import AnyVolume
# else:
# AnyVolume = Any


# TODO(killeen): CameraProjection never calls super().__init__() and thus has no self.data attribute.
# TODO: reorganize geo so you have primitives.py and transforms.py. Have separate classes for each type of transform?


class CameraProjection(Transform):
@@ -24,7 +19,9 @@ def __init__(
intrinsic: Union[CameraIntrinsicTransform, np.ndarray],
extrinsic: Union[FrameTransform, np.ndarray],
) -> None:
"""A generic camera projection.
"""A class for instantiating camera projections.

The object itself contains the "index_from_world" transform, or P = K[R|t].

A helpful resource for this is:
- http://wwwmayr.in.tum.de/konferenzen/MB-Jass2006/courses/1/slides/h-1-5.pdf
@@ -47,6 +44,30 @@ def __init__(
if isinstance(extrinsic, FrameTransform)
else FrameTransform(extrinsic)
)
index_from_world = self.index_from_camera3d @ self.camera3d_from_world
super().__init__(
get_data(index_from_world), _inv=get_data(index_from_world.inv)
)

@property
def index_from_world(self) -> Transform:
return self

@classmethod
def from_krt(
cls, K: np.ndarray, R: np.ndarray, t: np.ndarray
) -> "CameraProjection":
"""Create a CameraProjection from a camera intrinsic matrix and extrinsic matrix.

Args:
K (np.ndarray): the camera intrinsic matrix.
R (np.ndarray): the rotation matrix of the camera extrinsics.
t (np.ndarray): the camera extrinsic translation vector.

Returns:
CameraProjection: the camera projection.
"""
return cls(intrinsic=K, extrinsic=FrameTransform.from_rt(R, t))

@classmethod
def from_rtk(
@@ -87,10 +108,6 @@ def index_from_camera3d(self) -> Transform:
def camera3d_from_index(self) -> Transform:
return self.index_from_camera3d.inv

@property
def index_from_world(self) -> Transform:
return self.index_from_camera3d @ self.camera3d_from_world

@property
def world_from_index(self) -> Transform:
"""Gets the world-space vector between the source in world and the given point in index space."""
@@ -127,6 +144,9 @@ def get_center_in_world(self) -> Point3D:
Point3D: the center of the camera in world coordinates.
"""

# TODO: can also get the center from the intersection of three planes formed
# by self.data.

world_from_camera3d = self.camera3d_from_world.inv
return world_from_camera3d(point(0, 0, 0))
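The `index_from_world` composition P = K[R|t] that this class now builds in `__init__` can be checked numerically with plain numpy. A minimal sketch with toy matrices (the values are illustrative, not taken from the library):

```python
import numpy as np

# Toy intrinsic matrix K: focal lengths fx = fy = 1000, principal point (256, 256).
K = np.array([
    [1000.0,    0.0, 256.0],
    [   0.0, 1000.0, 256.0],
    [   0.0,    0.0,   1.0],
])

# Extrinsics: identity rotation, world origin placed 500 units in front of the camera.
R = np.eye(3)
t = np.array([0.0, 0.0, 500.0])

# Projection matrix P = K [R | t], shape (3, 4).
P = K @ np.hstack([R, t[:, None]])

# Project a world point given in homogeneous coordinates, then dehomogenize.
x_world = np.array([10.0, 0.0, 0.0, 1.0])
u, v, w = P @ x_world
pixel = (u / w, v / w)
print(pixel)  # (276.0, 256.0): 20 px right of the principal point

# The camera center C is the null direction of P: C = -R^T t, so P @ [C; 1] = 0,
# which is what get_center_in_world recovers via the inverse extrinsics.
C = -R.T @ t
assert np.allclose(P @ np.append(C, 1.0), 0.0)
```

The last two lines mirror the TODO in `get_center_in_world`: the center can be read off the projection matrix itself rather than computed from the inverse extrinsic transform.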
