This exemplar explains and demonstrates the steps required to go from image-based data to a finished Convolutional Neural Network (CNN) pipeline that can be used to extract relevant information from the images. While demonstrating how to solve this machine learning problem, I will also explain how to prototype code in Jupyter notebooks. I will start by explaining how to analyse the statistics of the data to create appropriate training, validation and testing sets; here I will emphasise the importance of uniform parameter spaces. The exemplar will then go through the process of setting up the architecture of the network and how to train it. Once the network is trained, I will discuss the possible next steps and which of them is the most appropriate. Finally, I will go through how to convert the code prototyped in Jupyter notebooks into a usable package.
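As a taste of the first step, here is a minimal sketch of splitting a dataset into training, validation and testing sets with a reproducible shuffle. The function name, fractions and the use of integer indices as stand-in samples are illustrative assumptions, not the exemplar's actual code or dataset.

```python
import random


def split_dataset(samples, val_frac=0.1, test_frac=0.1, seed=0):
    """Shuffle samples reproducibly, then carve off test and validation sets.

    This is an illustrative sketch; the real exemplar also checks that the
    parameter space is covered uniformly by each split.
    """
    samples = list(samples)
    random.Random(seed).shuffle(samples)  # fixed seed => reproducible split
    n = len(samples)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = samples[:n_test]
    val = samples[n_test:n_test + n_val]
    train = samples[n_test + n_val:]
    return train, val, test


# Placeholder "samples": 1000 integer indices standing in for images.
train, val, test = split_dataset(range(1000))
print(len(train), len(val), len(test))  # 800 100 100
```

Shuffling before slicing matters: simulated datasets are often generated by sweeping parameters in order, so an unshuffled split would give each set a different region of parameter space.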
- Use a Jupyter Lab notebook to prototype code
- Use TensorFlow to create a CNN that infers parameters from simulated images
- Convert the prototyped code into a runnable script that can be scaled up to run on a system such as the HPC
| Task | Time |
| --- | --- |
| Reading | 3 hours |
| Practising | 3 hours |
- Familiarity with Python 3
- Have used Jupyter Lab before
- Basic command line knowledge
- 4GB of disk space for datasets
- Python 3.11 or newer
- Access to the HPC (optional)
.
├── docs
├── notebooks
│ ├── ReCoDE.ipynb
│ └── ex2
├── src
│ ├── file1.py
│ ├── file2.cpp
│ ├── ...
│ └── data
├── app
├── main
├── test
└── requirements.txt
This project is licensed under the BSD-3-Clause license.