This Jupyter notebook demonstrates how artificial neural networks (ANNs) can be applied to image segmentation problems. Segmentation, in this context, refers to assigning a discrete label to each pixel or region of an image; segmentation models can then identify and locate features of interest. This notebook contains a simple application to self-driving cars, in which we train a segmentation model to identify important features in dashcam footage, as well as a more advanced example, based on the work of Coney et al. (2023), that identifies and characterises trapped lee waves over the UK.
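To make the idea concrete, here is a minimal, purely illustrative sketch of what a per-pixel label mask looks like (the class labels and their meanings are hypothetical, not taken from the notebook):

```python
from collections import Counter

# Toy 4x4 segmentation mask: each pixel is assigned a discrete class label.
# Labels here are illustrative only, e.g. 0 = background, 1 = road, 2 = vehicle.
mask = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [1, 2, 2, 1],
    [1, 2, 2, 1],
]

# A segmentation model outputs an array of this shape: one label per pixel.
pixel_counts = Counter(label for row in mask for label in row)
print(pixel_counts)  # Counter({1: 8, 0: 4, 2: 4})
```

The notebook's models produce exactly this kind of output, only at full image resolution and with labels learned from training data.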
Binder and Colab buttons
These buttons launch this tutorial in Binder (CPU) or Google Colab (GPU).
Running locally
If you're already familiar with Git, Anaconda and virtual environments, the required environment is specified in unet.yml, and the commands below will create it, activate it and launch the notebook. The .yml file has been tested on the latest Linux, macOS and Windows operating systems.
git clone [email protected]:cemac/LIFD_ImageSegmentation.git
cd LIFD_ImageSegmentation
conda env create -f unet.yml
conda activate unet
jupyter-notebook
This notebook is designed to run on a laptop, with no special hardware required. However, training neural networks can take a long time (hours) without dedicated GPU hardware. If you have a GPU, a local installation, as outlined in the repository's howtorun and jupyter_notebooks sections, is recommended. Otherwise, online compute platforms that offer GPU access (e.g. Google Colab) are strongly recommended.
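If you are unsure whether your machine has a usable NVIDIA GPU, a quick heuristic check (an assumption on our part, not part of the notebook itself) is to look for the nvidia-smi driver utility on your PATH:

```python
import shutil

def gpu_likely_available():
    """Heuristic: nvidia-smi on PATH suggests an NVIDIA driver (and GPU) is installed."""
    return shutil.which("nvidia-smi") is not None

print("GPU likely available" if gpu_likely_available() else "CPU only - consider Colab")
```

This only detects the driver tooling; your deep learning framework must still be installed with GPU support for training to actually use the device.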
LIFD_ENV_ML_NOTEBOOKS by CEMAC are licensed under a Creative Commons Attribution 4.0 International License.
Thanks to Jonathan Coney for making available the code on which this notebook is based. This tutorial is part of the LIFD_ENV_ML_NOTEBOOKS series. Please refer to the parent repository for full acknowledgements.