diff --git a/README.md b/README.md
index 396c83ee..94335bb9 100644
--- a/README.md
+++ b/README.md
@@ -16,8 +16,8 @@ Imaging datasets in cancer research are growing exponentially in both quantity a
| Branch | Test status |
| ------ | ------------- |
-| master | ![tests](https://github.com/Dana-Farber-AIOS/pathml/actions/workflows/tests-conda.yml/badge.svg?branch=master) |
-| dev | ![tests](https://github.com/Dana-Farber-AIOS/pathml/actions/workflows/tests-conda.yml/badge.svg?branch=dev) |
+| master | ![tests](https://github.com/Dana-Farber-AIOS/pathml/actions/workflows/tests-linux.yml/badge.svg?branch=master) |
+| dev | ![tests](https://github.com/Dana-Farber-AIOS/pathml/actions/workflows/tests-linux.yml/badge.svg?branch=dev) |
diff --git a/examples/InferenceOnnx_tutorial.ipynb b/examples/InferenceOnnx_tutorial.ipynb
index b4445554..eaea1011 100644
--- a/examples/InferenceOnnx_tutorial.ipynb
+++ b/examples/InferenceOnnx_tutorial.ipynb
@@ -12,19 +12,23 @@
"\n",
"## Introduction\n",
"\n",
- "This notebook is a tutorial on how to use the future ONNX `inference` feature in PathML. \n",
+ "This notebook is a tutorial on how to use the upcoming ONNX `inference` feature in PathML. The intended use case for this API is to build a model in HaloAI or similar software, export it to ONNX, and run it at scale with PathML. \n",
"\n",
"Some notes:\n",
+ "\n",
"- The ONNX inference pipeline uses the existing PathML Pipeline and Transforms infrastructure.\n",
" - ONNX labels are saved to a `pathml.core.slide_data.SlideData` object as `tiles`.\n",
" - Users can iterate over the tiles as they would when using this feature for preprocessing. \n",
+ "\n",
"- Preprocessing images before inference\n",
" - Users will need to create their own bespoke `pathml.preprocessing.transforms.transform` method to preprocess images before inference if necessary.\n",
" - A guide on how to create preprocessing pipelines is [here](https://pathml.readthedocs.io/en/latest/creating_pipelines.html). \n",
" - A guide on how to run preprocessing pipelines is [here](https://pathml.readthedocs.io/en/latest/running_pipelines.html). \n",
+ "\n",
"- ONNX Model Initializers \n",
" - ONNX models often have neural network initializers stored in the input graph. This means that the user is expected to specify initializer values when running inference. To solve this issue, we have a function that removes the network initializers from the input graph. This functions is adopted from the `onnxruntime` [github](https://github.com/microsoft/onnxruntime/blob/main/tools/python/remove_initializer_from_input.py). \n",
" - We also have a function that checks if the initializers have been removed from the input graph before running inference. Both of these functions are described more below. \n",
+ "\n",
"- When using a model stored remotely on HuggingFace, the model is *downloaded locally* before being used. The user will need to delete the model after running `Pipeline` with a method that comes with the model class. An example of how to do this is below. \n",
"\n",
"## Quick Sample Code\n",
@@ -194,14 +198,7 @@
" - This is the base class for all Inference classes for ONNX modeling\n",
" - Each instance of a class also comes with a `model_card` which specifies certain details of the model in dictionary form. The default parameters are:\n",
" - ```python \n",
- " self.model_card = {\n",
- " 'name' : None, \n",
- " 'num_classes' : None,\n",
- " 'model_type' : None, \n",
- " 'notes' : None, \n",
- " 'model_input_notes': None, \n",
- " 'model_output_notes' : None,\n",
- " 'citation': None } \n",
+ "      self.model_card = {'name': None, 'num_classes': None, 'model_type': None, 'notes': None, 'model_input_notes': None, 'model_output_notes': None, 'citation': None} \n",
" ``` \n",
" - Model cards are where important information about the model should be kept. Since they are in dictionary form, the user can add keys and values as they see fit. \n",
" - This class also has getter and setter functions to adjust the `model_card`. Certain functions include `get_model_card`, `set_name`, `set_num_classes`, etc. \n",
@@ -233,13 +230,7 @@
" - Pocock J, Graham S, Vu QD, Jahanifar M, Deshpande S, Hadjigeorghiou G, Shephard A, Bashir RM, Bilal M, Lu W, Epstein D. TIAToolbox as an end-to-end library for advanced tissue image analytics. Communications medicine. 2022 Sep 24;2(1):120.\n",
" - Its `model_card` is:\n",
" - ```python \n",
- " {'name': 'Tiabox HoverNet Test',\n",
- " 'num_classes': 5,\n",
- " 'model_type': 'Segmentation',\n",
- " 'notes': None,\n",
- " 'model_input_notes': 'Accepts tiles of 256 x 256',\n",
- " 'model_output_notes': None,\n",
- " 'citation': 'Pocock J, Graham S, Vu QD, Jahanifar M, Deshpande S, Hadjigeorghiou G, Shephard A, Bashir RM, Bilal M, Lu W, Epstein D. TIAToolbox as an end-to-end library for advanced tissue image analytics. Communications medicine. 2022 Sep 24;2(1):120.'}\n",
+ "      {'name': 'Tiabox HoverNet Test', 'num_classes': 5, 'model_type': 'Segmentation', 'notes': None, 'model_input_notes': 'Accepts tiles of 256 x 256', 'model_output_notes': None, 'citation': 'Pocock J, Graham S, Vu QD, Jahanifar M, Deshpande S, Hadjigeorghiou G, Shephard A, Bashir RM, Bilal M, Lu W, Epstein D. TIAToolbox as an end-to-end library for advanced tissue image analytics. Communications medicine. 2022 Sep 24;2(1):120.'}\n",
" ```\n",
"\n",
"
\n",
@@ -251,13 +242,8 @@
" - Greenwald NF, Miller G, Moen E, Kong A, Kagel A, Dougherty T, Fullaway CC, McIntosh BJ, Leow KX, Schwartz MS, Pavelchek C. Whole-cell segmentation of tissue images with human-level performance using large-scale data annotation and deep learning. Nature biotechnology. 2022 Apr;40(4):555-65.\n",
" - Its `model_card` is:\n",
" - ```python \n",
- " {'name': \"Deepcell's Mesmer\",\n",
- " 'num_classes': 3,\n",
- " 'model_type': 'Segmentation',\n",
- " 'notes': None,\n",
- " 'model_input_notes': 'Accepts tiles of 256 x 256',\n",
- " 'model_output_notes': None,\n",
- " 'citation': 'Greenwald NF, Miller G, Moen E, Kong A, Kagel A, Dougherty T, Fullaway CC, McIntosh BJ, Leow KX, Schwartz MS, Pavelchek C. Whole-cell segmentation of tissue images with human-level performance using large-scale data annotation and deep learning. Nature biotechnology. 2022 Apr;40(4):555-65.'}"
+ "      {'name': \"Deepcell's Mesmer\", 'num_classes': 3, 'model_type': 'Segmentation', 'notes': None, 'model_input_notes': 'Accepts tiles of 256 x 256', 'model_output_notes': None, 'citation': 'Greenwald NF, Miller G, Moen E, Kong A, Kagel A, Dougherty T, Fullaway CC, McIntosh BJ, Leow KX, Schwartz MS, Pavelchek C. Whole-cell segmentation of tissue images with human-level performance using large-scale data annotation and deep learning. Nature biotechnology. 2022 Apr;40(4):555-65.'}\n",
+ " ```"
]
},
{