diff --git a/README.md b/README.md index 94335bb9..97fd5a38 100644 --- a/README.md +++ b/README.md @@ -5,6 +5,8 @@ [![codecov](https://codecov.io/gh/Dana-Farber-AIOS/pathml/branch/master/graph/badge.svg?token=UHSQPTM28Y)](https://codecov.io/gh/Dana-Farber-AIOS/pathml) [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black) [![PyPI version](https://img.shields.io/pypi/v/pathml)](https://pypi.org/project/pathml/) +![tests](https://github.com/Dana-Farber-AIOS/pathml/actions/workflows/tests-linux.yml/badge.svg?branch=master) +![dev-tests](https://github.com/Dana-Farber-AIOS/pathml/actions/workflows/tests-linux.yml/badge.svg?branch=dev) ⭐ **PathML's objective is to lower the barrier to entry to digital pathology** @@ -14,25 +16,45 @@ Imaging datasets in cancer research are growing exponentially in both quantity a docker pull pathml/pathml && docker run -it -p 8888:8888 pathml/pathml -| Branch | Test status | -| ------ | ------------- | -| master | ![tests](https://github.com/Dana-Farber-AIOS/pathml/actions/workflows/tests-linux.yml/badge.svg?branch=master) | -| dev | ![tests](https://github.com/Dana-Farber-AIOS/pathml/actions/workflows/tests-linux.yml/badge.svg?branch=dev) | +Done, what analyses can I write now? πŸ‘‰
 **[πŸ”¬πŸ€– Click here to launch your PathML Digital Pathology Assistant πŸŽ“](https://chat.openai.com/g/g-L1IbnIIVt-digital-pathology-assistant-v3-0)** 
 
- + + + + + +
+ +This AI will: + +- πŸ€– write digital pathology analyses for you +- πŸ”¬ walk you through the code, step-by-step +- πŸŽ“ be your teacher, as you embark on your digital pathology journey ❀️ + +More information [here](./ai-digital-pathology-assistant-v3) and usage examples [here](https://github.com/Dana-Farber-AIOS/pathml/blob/master/examples/talk_to_pathml.ipynb) + +
- -**View [documentation](https://pathml.readthedocs.io/en/latest/)** +πŸ“– **Official PathML Documentation** + +View the official [PathML Documentation on readthedocs](https://pathml.readthedocs.io/en/latest/) + +πŸ”₯ **Examples! Examples! Examples!** + +[↴ Jump to the gallery of examples below](#3-examples) + +
+ + -:construction: the `dev` branch is under active development, with experimental features, bug fixes, and refactors that may happen at any time! -Stable versions are available as tagged releases on GitHub, or as versioned releases on PyPI + -# Installation +# 1. Installation `PathML` is an advanced tool for pathology image analysis. Below are simplified instructions to help you install PathML on your system. Whether you're a user or a developer, follow these steps to get started. -## 1. Prerequisites +## 1.1 Prerequisites We recommend using [Conda](https://conda.io/projects/conda/en/latest/user-guide/getting-started.html#) for managing your environments. @@ -84,9 +106,9 @@ For Windows users, an alternative to using `vcpkg` is to download and use pre-bu - Extract the archive to your desired location, e.g., `C:\OpenSlide\`. -## 2. PathML Installation Methods +## 1.2 PathML Installation Methods -### 2.1 Install with pip (Recommended for Users) +### 1.2.1 Install with pip (Recommended for Users) #### Create and Activate Conda Environment ```` @@ -103,7 +125,7 @@ conda install -c conda-forge 'openjdk<=18.0' pip install pathml ```` -### 2.2 Install from Source (Recommended for Developers) +### 1.2.2 Install from Source (Recommended for Developers) #### Clone repository ```` @@ -133,7 +155,7 @@ conda activate pathml pip install -e . ```` -### 2.3 Use Docker Container +### 1.2.3 Use Docker Container First, download or build the PathML Docker container: @@ -170,7 +192,7 @@ Note that these instructions assume that there are no other processes using port Please refer to the `Docker run` [documentation](https://docs.docker.com/engine/reference/run/) for further instructions on accessing the container, e.g. for mounting volumes to access files on a local machine from within the container. 
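Whichever installation route you choose, a quick sanity check can confirm that the main prerequisites are visible before moving on. The sketch below is illustrative only (the `check_environment` helper is not part of PathML); it merely reports what the current Python interpreter can see:

```python
# Illustrative sanity check for a PathML environment (not part of PathML itself).
# It only inspects the interpreter; it does not import pathml.
import importlib.util
import os

def check_environment() -> dict:
    """Report whether the main PathML prerequisites are visible."""
    return {
        "pathml_installed": importlib.util.find_spec("pathml") is not None,
        "openslide_installed": importlib.util.find_spec("openslide") is not None,
        "java_home_set": "JAVA_HOME" in os.environ,  # needed by the Java/BioFormats backend
    }

if __name__ == "__main__":
    for name, ok in check_environment().items():
        print(f"{name}: {ok}")
```

If any entry reports `False`, revisit the corresponding step above (pip install, OpenSlide, or the JAVA_HOME setup described in the Jupyter section).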
-### 2.4 Use Google Colab +### 1.2.4 Use Google Colab To get PathML running in a Colab environment: @@ -188,7 +210,7 @@ os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-17-openjdk-amd64" *Thanks to all of our open-source collaborators for helping maintain these installation instructions!* *Please open an issue for any bugs or other problems during the installation process.* -## 3. Import PathML +## 1.3 Import PathML After you have installed all necessary dependencies and PathML itself, import it using the following command: @@ -220,7 +242,7 @@ This code snippet ensures that the OpenSlide DLLs are correctly found by Python If you encounter any DLL load failures, verify that the OpenSlide `bin` directory is correctly added to your `PATH`. -## CUDA +## 1.4 CUDA To use GPU acceleration for model training or other tasks, you must install CUDA. This guide should work, but for the most up-to-date instructions, refer to the [official PyTorch installation instructions](https://pytorch.org/get-started/locally/). @@ -244,11 +266,11 @@ After installing PyTorch, optionally verify successful PyTorch installation with python -c "import torch; print(torch.cuda.is_available())" ```` -## Using with Jupyter +# 2. Using with Jupyter (optional) Jupyter notebooks are a convenient way to work interactively. To use `PathML` in Jupyter notebooks: -### Set JAVA_HOME environment variable +## 2.1 Set JAVA_HOME environment variable PathML relies on Java to enable support for reading a wide range of file formats. Before using `PathML` in Jupyter, you may need to manually set the `JAVA_HOME` environment variable @@ -261,7 +283,7 @@ specifying the path to Java.
To do so: os.environ["JAVA_HOME"] = "/opt/conda/envs/pathml" # change path as needed ```` -### Register environment as an IPython kernel +## 2.2 Register environment as an IPython kernel ```` conda activate pathml conda install ipykernel @@ -269,43 +291,59 @@ python -m ipykernel install --user --name=pathml ```` This makes the pathml environment available as a kernel in jupyter lab or notebook. +# 3. Examples + +Now that you are all set with ``PathML`` installation, let's get started with some analyses you can easily replicate: + + + + + + +
+ +1. [Load over 160+ different types of pathology images using PathML](https://github.com/Dana-Farber-AIOS/pathml/blob/master/examples/loading_images_vignette.ipynb) +2. [H&E Stain Deconvolution and Color Normalization](https://github.com/Dana-Farber-AIOS/pathml/blob/master/examples/stain_normalization.ipynb) +3. [Brightfield imaging pipeline: load an image, preprocess it on a local cluster, and get it ready for machine learning analyses in PyTorch](https://github.com/Dana-Farber-AIOS/pathml/blob/master/examples/workflow_HE_vignette.ipynb) +4. [Multiparametric Imaging: Quickstart & single-cell quantification](https://github.com/Dana-Farber-AIOS/pathml/blob/master/examples/multiplex_if.ipynb) +5. [Multiparametric Imaging: CODEX & nuclei quantization](https://github.com/Dana-Farber-AIOS/pathml/blob/master/examples/codex.ipynb) +6. [Train HoVer-Net model to perform nucleus detection and classification, using data from the PanNuke dataset](https://github.com/Dana-Farber-AIOS/pathml/blob/master/examples/train_hovernet.ipynb) +7. [Gallery of PathML preprocessing and transformations](https://github.com/Dana-Farber-AIOS/pathml/blob/master/examples/pathml_gallery.ipynb) +8. [Use the new Graph API to construct cell and tissue graphs from pathology images](https://github.com/Dana-Farber-AIOS/pathml/blob/master/examples/construct_graphs.ipynb) +9. [Train HACTNet model to perform cancer sub-typing using graphs constructed from the BRACS dataset](https://github.com/Dana-Farber-AIOS/pathml/blob/master/examples/train_hactnet.ipynb) +10. [Perform reconstruction of tiles obtained from pathology images using Tile Stitching](https://github.com/Dana-Farber-AIOS/pathml/blob/master/examples/tile_stitching.ipynb) +11. [Create an ONNX model in HaloAI or similar software, export it, and run it at scale using PathML](https://github.com/Dana-Farber-AIOS/pathml/blob/master/examples/InferenceOnnx_tutorial.ipynb) +12.
[Step-by-step process used to analyze the Whole Slide Images (WSIs) of Non-Small Cell Lung Cancer (NSCLC) samples as published in the Journal of Clinical Oncology](https://github.com/Dana-Farber-AIOS/pathml/blob/master/examples/Graph_Analysis_NSCLC.ipynb) +13. [Talk to the PathML Digital Pathology Assistant](https://github.com/Dana-Farber-AIOS/pathml/blob/master/examples/talk_to_pathml.ipynb) -# Contributing - -``PathML`` is an open source project. Consider contributing to benefit the entire community! - -There are many ways to contribute to `PathML`, including: - -* Submitting bug reports -* Submitting feature requests -* Writing documentation and examples -* Fixing bugs -* Writing code for new features -* Sharing workflows -* Sharing trained model parameters -* Sharing ``PathML`` with colleagues, students, etc. + + -See [contributing](https://github.com/Dana-Farber-AIOS/pathml/blob/master/CONTRIBUTING.rst) for more details. +
-# Citing +# 4. Citing & known uses -If you use `PathML` please cite: +If you use ``PathML`` please cite: - [**J. Rosenthal et al., "Building tools for machine learning and artificial intelligence in cancer research: best practices and a case study with the PathML toolkit for computational pathology." Molecular Cancer Research, 2022.**](https://doi.org/10.1158/1541-7786.MCR-21-0665) -So far, PathML was used in the following manuscripts: +So far, **PathML** was referenced in 20+ manuscripts: -- [J. Linares et al. **Molecular Cell** 2021](https://www.cell.com/molecular-cell/fulltext/S1097-2765(21)00729-2) -- [A. Shmatko et al. **Nature Cancer** 2022](https://www.nature.com/articles/s43018-022-00436-4) -- [J. Pocock et al. **Nature Communications Medicine** 2022](https://www.nature.com/articles/s43856-022-00186-5) -- [S. Orsulic et al. **Frontiers in Oncology** 2022](https://www.frontiersin.org/articles/10.3389/fonc.2022.924945/full) -- [D. Brundage et al. **arXiv** 2022](https://arxiv.org/abs/2203.13888) -- [A. Marcolini et al. **SoftwareX** 2022](https://www.sciencedirect.com/science/article/pii/S2352711022001558) -- [M. Rahman et al. **Bioengineering** 2022](https://www.mdpi.com/2306-5354/9/8/335) -- [C. Lama et al. **bioRxiv** 2022](https://www.biorxiv.org/content/10.1101/2022.09.28.509751v1.full) -- the list continues [**here πŸ”— for 2023 and onwards**](https://scholar.google.com/scholar?oi=bibs&hl=en&cites=1157052756975292108) +- [H. Pakula et al. **Nature Communications**, 2024](https://www.nature.com/articles/s41467-023-44210-1) +- [B. Ricciuti et al. **Journal of Clinical Oncology**, 2024](https://ascopubs.org/doi/full/10.1200/JCO.23.00580) +- [A. Song et al. **Nature Reviews Bioengineering**, 2023](https://www.nature.com/articles/s44222-023-00096-8) +- [I. Virshup et al. **Nature Bioengineering**, 2023](https://www.nature.com/articles/s41587-023-01733-8) +- [A. Karargyris et al. 
**Nature Machine Intelligence**, 2023](https://www.nature.com/articles/s42256-023-00652-2) +- [S. Pati et al. **Nature Communications Engineering**, 2023](https://www.nature.com/articles/s44172-023-00066-3) +- [C. Gorman et al. **Nature Communications**, 2023](https://www.nature.com/articles/s41467-023-37224-2) +- [J. Nyman et al. **Cell Reports Medicine**, 2023](https://doi.org/10.1016/j.xcrm.2023.101189) +- [A. Shmatko et al. **Nature Cancer**, 2022](https://www.nature.com/articles/s43018-022-00436-4) +- [J. Pocock et al. **Nature Communications Medicine**, 2022](https://www.nature.com/articles/s43856-022-00186-5) +- [S. Orsulic et al. **Frontiers in Oncology**, 2022](https://www.frontiersin.org/articles/10.3389/fonc.2022.924945/full) +- [J. Linares et al. **Molecular Cell**, 2021](https://doi.org/10.1016/j.molcel.2021.08.039) +- the list continues [**here** **πŸ”—**](https://scholar.google.com/scholar?oi=bibs&hl=en&cites=1157052756975292108) -# Users +# 5. Users
This is where in the world our most enthusiastic supporters are located:

@@ -320,17 +358,35 @@ and this is where they work: Source: https://ossinsight.io/analyze/Dana-Farber-AIOS/pathml#people -# License +# 6. Contributing + +``PathML`` is an open source project. Consider contributing to benefit the entire community! + +There are many ways to contribute to `PathML`, including: + +* Submitting bug reports +* Submitting feature requests +* Writing documentation and examples +* Fixing bugs +* Writing code for new features +* Sharing workflows +* Sharing trained model parameters +* Sharing ``PathML`` with colleagues, students, etc. + +See [contributing](https://github.com/Dana-Farber-AIOS/pathml/blob/master/CONTRIBUTING.rst) for more details. + + +# 7. License The GNU GPL v2 version of PathML is made available via Open Source licensing. The user is free to use, modify, and distribute under the terms of the GNU General Public License version 2. Commercial license options are also available. -# Contact +# 8. Contact Questions? Comments? Suggestions? Get in touch! [pathml@dfci.harvard.edu](mailto:pathml@dfci.harvard.edu) - + \ No newline at end of file diff --git a/ai-digital-pathology-assistant-v3/README.md b/ai-digital-pathology-assistant-v3/README.md new file mode 100644 index 00000000..dcab668d --- /dev/null +++ b/ai-digital-pathology-assistant-v3/README.md @@ -0,0 +1,31 @@ +![image](https://github.com/Dana-Farber-AIOS/pathml/assets/25375373/0e1c7adf-6510-4733-bf38-93c7be1c73cc) + +# Digital Pathology Assistant v3.0 + +πŸ‘‹ Say _Hi!_ to your new digital pathology AI + +It will help you: + +- install PathML +- write digital pathology analyses +- be your teacher as you embark on your digital pathology journey + +## Run it! + +πŸ‘‰
 **[πŸ”¬πŸ€– Click here to launch our AI πŸŽ“ ‴](https://chat.openai.com/g/g-L1IbnIIVt-digital-pathology-assistant-v3-0)** 
 
+ +## Examples + +### Use this AI to get started with PathML: ![Screen Recording 2024-04-06 at 10 47 21 AM](https://github.com/Dana-Farber-AIOS/pathml/assets/25375373/cfc4969b-8000-4fc4-b1b9-19a2279ba980) + +### Or to learn how to load your WSI image and count the nuclei in it: ![Screen Recording 2024-04-06 at 10 53 46 AM](https://github.com/Dana-Farber-AIOS/pathml/assets/25375373/a225fadd-e019-485d-959f-6d0c39218f5b) + +## Recreate it + +[Here](./src) you will find all material needed to re-create our Digital Pathology Assistant, which is a custom OpenAI GPT available to all ChatGPT Plus users for research purposes. + +## Note + +This AI is **NOT** intended for clinical use. diff --git a/ai-digital-pathology-assistant-v3/src/DESC.txt b/ai-digital-pathology-assistant-v3/src/DESC.txt new file mode 100644 index 00000000..41086be4 --- /dev/null +++ b/ai-digital-pathology-assistant-v3/src/DESC.txt @@ -0,0 +1 @@ +Specify your requirements in plain English and I'll provide PathML and Python code for your use-case diff --git a/ai-digital-pathology-assistant-v3/src/DPTv3.png b/ai-digital-pathology-assistant-v3/src/DPTv3.png new file mode 100644 index 00000000..c26deb99 Binary files /dev/null and b/ai-digital-pathology-assistant-v3/src/DPTv3.png differ diff --git a/ai-digital-pathology-assistant-v3/src/INSTRUCTIONS.txt b/ai-digital-pathology-assistant-v3/src/INSTRUCTIONS.txt new file mode 100644 index 00000000..37183ef6 --- /dev/null +++ b/ai-digital-pathology-assistant-v3/src/INSTRUCTIONS.txt @@ -0,0 +1,60 @@ +You are the Digital Pathology Assistant created by the folks at www.pathml.org + +Use the PathML documentation to generate Python code that uses the pathml library for the use-cases presented by the user. + +There are plenty of acronyms, such as 'mIF', which stands for 'multiparametric imaging'; 'multiplex immunofluorescence', 'multiparametric immunofluorescence' and 'multiplex IF' are all synonyms of 'mIF'.
Images of this type should be read in PathML using the MultiparametricSlide or CODEXSlide classes. 'Vectra Polaris' or 'polaris' is a type of 'mIF'. Also, 'HE' is a synonym of 'H&E', which stands for 'hematoxylin and eosin'. Also, 'transforms' is a synonym of 'transformations', and both refer to the Preprocessing API of PathML. + +In terms of segmentation, the HoVerNet model should be used only for H&E images, and SegmentMIF (which is based on the Mesmer model) should be used only for mIF images. If you are not sure whether an image is multiparametric or not, you can ask the user. + +All mIF analyses require an extra step before you can segment: VectraPolaris requires you to run CollapseRunsVectra before segmentation, and CODEXSlide and MultiparametricSlide require you to run CollapseRunsCODEX before any segmentation. + +When you need to consult the PathML online documentation, use your browser tool. The PathML online documentation URL structure typically includes a protocol ('https://'), followed by the domain name ('https://pathml.readthedocs.io/'). For searching specific topics, the URL contains a path ('/en/latest/search.html') which leads to the search functionality of the website. The search terms and filters are specified in the query string, which starts with a '?'. For example, in 'https://pathml.readthedocs.io/en/latest/search.html?q=load+the+image&check_keywords=yes&area=default', 'q=load+the+image' indicates the search term ('load the image'), and '&check_keywords=yes&area=default' specifies other parameters that must be kept unaltered for the search to work. + +Each response must start with 'πŸ”¬πŸ€– Digital Pathology Assistant v3 Initiated πŸŽ“.'
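The URL scheme described above lends itself to a small helper. The sketch below is illustrative (the `docs_search_url` function is ours, not part of PathML), but the base URL and the fixed query parameters are taken verbatim from the instructions:

```python
# Build a PathML readthedocs search URL from a free-text query.
# urlencode() encodes spaces as '+', matching the 'q=load+the+image' example above.
from urllib.parse import urlencode

def docs_search_url(query: str) -> str:
    """Return the documentation search URL for a plain-English query."""
    base = "https://pathml.readthedocs.io/en/latest/search.html"
    # check_keywords and area must stay unaltered for the search to work
    params = urlencode({"q": query, "check_keywords": "yes", "area": "default"})
    return f"{base}?{params}"

print(docs_search_url("load the image"))
# -> https://pathml.readthedocs.io/en/latest/search.html?q=load+the+image&check_keywords=yes&area=default
```

The same helper reproduces the 'codex pipeline' closing-link example given below.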
+ +Each response must be summarized in 2 or 3 words and you should conclude each interaction saying 'πŸŽ“...more details can be found at https://pathml.readthedocs.io/en/latest/search.html?q=xxx', where 'xxx' should be replaced with those 2 or 3 words you identified as the summary of the interaction, with the space character ' ' replaced by the plus sign '+'. So for instance if you understood that an interaction is about 'codex pipeline', you should end the interaction saying 'πŸŽ“...more details can be found at https://pathml.readthedocs.io/en/latest/search.html?q=codex+pipeline'. Do not share this summary with the user though. + +Format using markdown but not emojis. + +When unsure, you can consult the online documentation for PathML at https://pathml.readthedocs.io/en/latest/ + +More about installing PathML: After creating the PathML conda environment and before suggesting `pip install pathml`, the user needs to first install openslide and openjdk. openslide installation instructions depend on the operating system (Linux: `sudo apt-get install openslide-tools g++ gcc libblas-dev liblapack-dev`. Mac: `brew install openslide`. Windows: `vcpkg install openslide`). OpenJDK can be installed with the same command across all operating systems: `conda install openjdk==8.0.152`. + +SlideData is the central class in PathML for representing a whole-slide image. +class pathml.core.SlideData(filepath, name=None, masks=None, tiles=None, labels=None, backend=None, slide_type=None, stain=None, platform=None, tma=None, rgb=None, volumetric=None, time_series=None, counts=None, dtype=None) + SlideData class parameters: + filepath (str) – Path to file on disk. + name (str, optional) – name of slide. If None, and a filepath is provided, name defaults to filepath.
+ masks (pathml.core.Masks, optional) – object containing {key, mask} pairs + tiles (pathml.core.Tiles, optional) – object containing {coordinates, tile} pairs + labels (collections.OrderedDict, optional) – dictionary containing {key, label} pairs + backend (str, optional) – backend to use for interfacing with slide on disk. Must be one of {β€œOpenSlide”, β€œBioFormats”, β€œDICOM”, β€œh5path”} (case-insensitive). Note that for supported image formats, OpenSlide performance can be significantly better than BioFormats. Consider specifying backend = "openslide" when possible. If None, and a filepath is provided, tries to infer the correct backend from the file extension. Defaults to None. + slide_type (pathml.core.SlideType, optional) – slide type specification. Must be a SlideType object. Alternatively, slide type can be specified by using the parameters stain, tma, rgb, volumetric, and time_series. + stain (str, optional) – Flag indicating type of slide stain. Must be one of [β€˜HE’, β€˜IHC’, β€˜Fluor’]. Defaults to None. Ignored if slide_type is specified. + platform (str, optional) – Flag indicating the imaging platform (e.g. CODEX, Vectra, etc.). Defaults to None. Ignored if slide_type is specified. + tma (bool, optional) – Flag indicating whether the image is a tissue microarray (TMA). Defaults to False. Ignored if slide_type is specified. + rgb (bool, optional) – Flag indicating whether the image is in RGB color. Defaults to None. Ignored if slide_type is specified. + volumetric (bool, optional) – Flag indicating whether the image is volumetric. Defaults to None. Ignored if slide_type is specified. + time_series (bool, optional) – Flag indicating whether the image is a time series. Defaults to None. Ignored if slide_type is specified. 
+ counts (anndata.AnnData) – object containing counts matrix associated with image quantification + +Convenience SlideData Classes: +class pathml.core.HESlide(*args, **kwargs) + Convenience class to load a SlideData object for H&E slides. Passes through all arguments to SlideData(), along with slide_type = types.HE flag. Refer to SlideData for full documentation. +class pathml.core.VectraSlide(*args, **kwargs) + Convenience class to load a SlideData object for Vectra (Polaris) slides. Passes through all arguments to SlideData(), along with slide_type = types.Vectra flag and default backend = "bioformats". Refer to SlideData for full documentation. +class pathml.core.MultiparametricSlide(*args, **kwargs) + Convenience class to load a SlideData object for multiparametric immunofluorescence slides. Passes through all arguments to SlideData(), along with slide_type = types.IF flag and default backend = "bioformats". Refer to SlideData for full documentation. +class pathml.core.CODEXSlide(*args, **kwargs) + Convenience class to load a SlideData object from Akoya Biosciences CODEX format. Passes through all arguments to SlideData(), along with slide_type = types.CODEX flag and default backend = "bioformats". Refer to SlideData for full documentation. + +Slide Types: +class pathml.core.SlideType(stain=None, platform=None, tma=None, rgb=None, volumetric=None, time_series=None) + SlideType objects define types based on a set of image parameters. + Parameters: + stain (str, optional) – One of [β€˜HE’, β€˜IHC’, β€˜Fluor’]. Flag indicating type of slide stain. Defaults to None. + platform (str, optional) – Flag indicating the imaging platform (e.g. CODEX, Vectra, etc.). + tma (bool, optional) – Flag indicating whether the slide is a tissue microarray (TMA). Defaults to False. + rgb (bool, optional) – Flag indicating whether image is in RGB color. Defaults to False. + volumetric (bool, optional) – Flag indicating whether image is volumetric. Defaults to False. 
+ time_series (bool, optional) – Flag indicating whether image is time-series. Defaults to False. diff --git a/ai-digital-pathology-assistant-v3/src/KB-settings.png b/ai-digital-pathology-assistant-v3/src/KB-settings.png new file mode 100644 index 00000000..d4bbb3b3 Binary files /dev/null and b/ai-digital-pathology-assistant-v3/src/KB-settings.png differ diff --git a/ai-digital-pathology-assistant-v3/src/README.pdf b/ai-digital-pathology-assistant-v3/src/README.pdf new file mode 100644 index 00000000..721d4c0a Binary files /dev/null and b/ai-digital-pathology-assistant-v3/src/README.pdf differ diff --git a/ai-digital-pathology-assistant-v3/src/STARTERS.txt b/ai-digital-pathology-assistant-v3/src/STARTERS.txt new file mode 100644 index 00000000..7395101f --- /dev/null +++ b/ai-digital-pathology-assistant-v3/src/STARTERS.txt @@ -0,0 +1,11 @@ +How do I load a wsi image? + +How do I segment all nuclei from a wsi image? + +How do I run a PathML analysis on a cluster? + +How do I analyze a codex image? + +How do I install PathML? + +What's the fastest way to get a PathML analysis environment up and running? 
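The class-selection rules quoted above (mIF synonyms map to MultiparametricSlide, CODEX to CODEXSlide, Vectra Polaris to VectraSlide, H&E to HESlide) can be condensed into a small dispatch sketch. The class names come from the API reference above; the `pick_slide_class` helper itself is only an illustration, not part of PathML:

```python
# Illustrative dispatch table for the slide-class rules in INSTRUCTIONS.txt.
# Only the returned class names come from the PathML API reference; the
# helper and its synonym sets are an assumption for demonstration.
MIF_SYNONYMS = {
    "mif", "multiplex immunofluorescence", "multiparametric immunofluorescence",
    "multiplex if", "multiparametric imaging",
}

def pick_slide_class(description: str) -> str:
    """Map a plain-English slide description to a PathML convenience class name."""
    text = description.strip().lower()
    if text == "codex":
        return "CODEXSlide"
    if text in {"vectra polaris", "polaris", "vectra"}:
        return "VectraSlide"
    if text in {"he", "h&e", "hematoxylin and eosin"}:
        return "HESlide"
    if text in MIF_SYNONYMS:
        return "MultiparametricSlide"
    return "SlideData"  # fall back to the general-purpose class

print(pick_slide_class("H&E"))           # HESlide
print(pick_slide_class("polaris"))       # VectraSlide
print(pick_slide_class("multiplex IF"))  # MultiparametricSlide
```

Each convenience class passes its arguments through to `SlideData()`, so falling back to `SlideData` with an explicit `slide_type` is always an option.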
diff --git a/ai-digital-pathology-assistant-v3/src/Screen Recording 2024-04-06 at 10.47.21 AM.gif b/ai-digital-pathology-assistant-v3/src/Screen Recording 2024-04-06 at 10.47.21 AM.gif new file mode 100644 index 00000000..89f21518 Binary files /dev/null and b/ai-digital-pathology-assistant-v3/src/Screen Recording 2024-04-06 at 10.47.21 AM.gif differ diff --git a/ai-digital-pathology-assistant-v3/src/Screen Recording 2024-04-06 at 10.53.46 AM.gif b/ai-digital-pathology-assistant-v3/src/Screen Recording 2024-04-06 at 10.53.46 AM.gif new file mode 100644 index 00000000..d3491e08 Binary files /dev/null and b/ai-digital-pathology-assistant-v3/src/Screen Recording 2024-04-06 at 10.53.46 AM.gif differ diff --git a/ai-digital-pathology-assistant-v3/src/api_reference_merged.pdf b/ai-digital-pathology-assistant-v3/src/api_reference_merged.pdf new file mode 100644 index 00000000..c0898cb4 Binary files /dev/null and b/ai-digital-pathology-assistant-v3/src/api_reference_merged.pdf differ diff --git a/ai-digital-pathology-assistant-v3/src/examples_merged.pdf b/ai-digital-pathology-assistant-v3/src/examples_merged.pdf new file mode 100644 index 00000000..1a07994f Binary files /dev/null and b/ai-digital-pathology-assistant-v3/src/examples_merged.pdf differ diff --git a/ai-digital-pathology-assistant-v3/src/main_merged.pdf b/ai-digital-pathology-assistant-v3/src/main_merged.pdf new file mode 100644 index 00000000..a124cf41 Binary files /dev/null and b/ai-digital-pathology-assistant-v3/src/main_merged.pdf differ diff --git a/examples/talk_to_pathml.ipynb b/examples/talk_to_pathml.ipynb index 2314839c..f013da3e 100644 --- a/examples/talk_to_pathml.ipynb +++ b/examples/talk_to_pathml.ipynb @@ -15,7 +15,7 @@ "id": "c4f09515-97fb-4a7a-8fcb-1eba0d631d21", "metadata": {}, "source": [ - "We leveraged the recent progress in medical Large Language Models (LLMs) to create a new chat interface for those who would like to get started with PathML for advanced image analysis. 
This was implemented by injecting all PathML examples and documentation into a Retrieval Augmented Generation (RAG) system based on GPT-4 capabilities. Our β€œDigital Pathology Assistant” prototype, available [here](https://chat.openai.com/g/g-4YBcZ3iYS-digital-pathology-assistant-v0-1), can be leveraged to build advanced end-to-end computational pipelines for specific use-cases. \n", + "We leveraged the recent progress in medical Large Language Models (LLMs) to create a new chat interface for those who would like to get started with PathML for advanced image analysis. This was implemented by injecting all PathML examples and documentation into a Retrieval Augmented Generation (RAG) system based on GPT-4 capabilities. Our β€œDigital Pathology Assistant” prototype, available [here](https://chat.openai.com/g/g-L1IbnIIVt-digital-pathology-assistant-v3-0), can be leveraged to build advanced end-to-end computational pipelines for specific use-cases. \n", "\n", "In this notebook, we report specific examples of how it can be used to generate specific computational pipelines for preprocessing and analyzing different types of multiplexed images. " ]