Contains all course materials from the HPML group.

Course environment: https://jupyter.snellius.surf.nl/jhssrf014/
- Introduction to Deep Learning
- Using the PyTorch framework
- Fully connected networks, convolutional networks, and Transformers (time permitting); see the minimal PyTorch sketch below this list
- Software installations on HPC systems
- Packed file formats for Machine Learning
- Parallel computing for deep learning
- Hardware features (e.g. Tensor Cores) and software features (e.g. low-level deep learning libraries) that accelerate deep learning
- Profiling PyTorch with TensorBoard
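
As a taste of the PyTorch and fully connected network topics above, the sketch below defines and trains a small fully connected network on random data. It is illustrative only and not taken from the course notebooks; the dataset, layer sizes, and hyperparameters are placeholders.

```python
# Minimal, illustrative sketch of a fully connected network in PyTorch.
# Not from the course notebooks; data, sizes, and hyperparameters are placeholders.
import torch
from torch import nn

# Toy data: 256 random samples with 20 features, 3 classes.
X = torch.randn(256, 20)
y = torch.randint(0, 3, (256,))

# A small fully connected (multi-layer perceptron) model.
model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Linear(64, 3),
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# A few epochs of full-batch training.
for epoch in range(5):
    optimizer.zero_grad()
    logits = model(X)
    loss = loss_fn(logits, y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

A realistic training loop would instead iterate over mini-batches from a `DataLoader` and evaluate on held-out data; the hands-on notebooks work through those steps.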

Day 1:
09:30 – 09:45 Welcome and course overview (Lars Veefkind)
09:45 – 10:30 Introduction to ML & DL basic principles (Lars Veefkind)
10:30 – 10:50 Introduction to PyTorch (notebook) (Lars Veefkind)
10:50 – 11:05 Coffee break
11:05 – 11:45 Hands-on: Fully connected network (Lars Veefkind)
11:45 – 12:00 Recap hands-on
12:00 – 13:00 Lunch break
13:00 – 14:00 Convolutional neural networks (Lars Veefkind)
14:00 – 14:45 Hands-on: Convolutional neural networks (Lars Veefkind)
14:45 – 15:00 Recap hands-on
15:00 – 15:15 Coffee break
15:15 – 16:00 Self-attention / Transformers (Robert Jan Schlimbach)
16:00 – 16:45 Hands-on/demo notebook: Transformers
16:45 – 17:00 Questions, wrap-up

Day 2:
09:30 – 10:45 Software installations on HPC systems (Robert Jan Schlimbach)
10:45 – 11:00 Coffee break
11:00 – 11:30 Packed file formats (Robert Jan Schlimbach)
11:30 – 12:15 Hands-on: Packed file formats
12:15 – 13:15 Lunch break
13:15 – 14:45 Parallel Computing for Deep Learning (Lars Veefkind)
14:45 – 15:00 Coffee break
15:00 – 15:45 Hardware and software features to accelerate deep learning (Lars Veefkind)
15:45 – 16:45 Profiling to understand your neural network’s performance (Robert Jan Schlimbach)
16:45 – 17:00 Questions, wrap-up