Here are the lectures, exercises, and additional course materials corresponding to the spring semester 2018 course at ETH Zurich, 227-0966-00L: Quantitative Big Imaging.
The lectures have been prepared and given by Kevin Mader and associated guest lecturers. Please note that the Lecture Slides and PDF do not contain source code; this is only available in the handout file. Some of the lectures will be recorded and placed on YouTube on the QBI Playlist. The lectures are meant to be followed in chronological order, and each lecture has corresponding hands-on exercises in the exercises section.
The course focuses on the challenging task of extracting robust, quantitative metrics from imaging data and is intended to bridge the gap between pure signal processing and the experimental science of imaging. It emphasizes techniques, scalability, and science-driven analysis.
- Ability to compare qualitative and quantitative methods and name situations where each would be appropriate
- Awareness of the standard process of image processing, the steps involved and the normal order in which they take place
- Ability to create and evaluate quantitative metrics to compare the success of different approaches/processes/workflows
- Appreciation of automation and which steps it is most appropriate for
- The relationship between automation and reproducibility for analysis
- Awareness of the function enhancement serves and the most commonly used methods
- Knowledge of limitations and new problems created when using/overusing these techniques
- Awareness of different types of segmentation approaches and strengths of each
- Understanding of when to use automatic methods and when they might fail
- Knowledge of which types of metrics are easily calculated for shapes in 2D and 3D
- Ability to describe a physical measurement problem in terms of shape metrics
- Awareness of common metrics and how they are computed for arbitrary shapes
- Awareness of common statistical techniques for hypothesis testing
- Ability to design basic experiments to test a hypothesis
- Ability to analyze and critique poorly designed imaging experiments
- Familiarity with vocabulary, tools, and main concepts of big data
- Awareness of the differences between normal and big data approaches
- Ability to explain MapReduce and apply it to a simple problem
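As a quick taste of the last objective, here is a minimal, pure-Python sketch of the MapReduce pattern applied to word counting (the function names and toy documents are illustrative, not part of the course material):

```python
from itertools import groupby

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document
    return [(word, 1) for doc in documents for word in doc.split()]

def reduce_phase(pairs):
    # Shuffle: sort/group the pairs by key (word), then
    # Reduce: sum the counts within each group
    pairs = sorted(pairs)
    return {word: sum(count for _, count in group)
            for word, group in groupby(pairs, key=lambda kv: kv[0])}

docs = ["big imaging", "big data big imaging"]
counts = reduce_phase(map_phase(docs))
print(counts)  # {'big': 3, 'data': 1, 'imaging': 2}
```

The same map/shuffle/reduce structure scales to clusters because the map and reduce steps are independent per document and per key.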
The course is designed with both advanced undergraduate and graduate level students in mind. Ideally students will have some familiarity with basic manipulation and programming in languages like Python (Matlab or R are also reasonable starting points). Much of the material is available as visual workflows in a tool called KNIME, although these are less up to date than the Python material. Interested students who are worried about their skill level in this regard are encouraged to contact Kevin Mader directly ([email protected]).
- Students with very diverse academic backgrounds have done well in the course (Informatics to Art History to Agriculture).
- Successful students typically spent a few hours a week working on the exercises to really understand the material.
- More advanced students who are already very familiar with Python, C++, or Java are also encouraged to take the course and will have the opportunity to develop more of their own tools or explore topics like machine learning in more detail.
For communication, discussions, and questions, we will be trying out Slack this year. You can sign up under the following link. It isn't mandatory, but it seems to be an effective way to engage collaboratively: How scientists use Slack.
- Slides (static) Lecture Handout
- Part 2: Slides (static) Lecture Handout
- Lecture Video
- Old Lecture Slides Old Lecture Handout as PDF
- Slides (static) Lecture Handout
- Lecture Video
- Old Lecture Video: Part 1, Part 2, Part 3, Part 4
- Old Lecture Slides Old Lecture Handout
- Lecture Video (silent, technical problems); for voice-over use the Old Lecture Video
- High Content Screening Slides - Michael Prummer / Nexus / Roche
- Slides (static) Lecture Handout
- Lecture Video
- Old Lecture Slides Old Lecture Handout as PDF
- Old Lecture Video: Part 1 and Part 2
Javier Montoya / Computer Vision / ScopeM
Presented by Aurelien Lucchi in Data Analytics Lab in D-INFK at ETHZ
The exercises are based on the lectures and take place in the same room after the lecture completes. They are designed to offer a tiered level of understanding based on the background of the student. We will (for most lectures) take advantage of an open-source tool called KNIME (www.knime.org), with example workflows here (https://www.knime.org/example-workflows). The basic exercises will require adding blocks in a workflow and adjusting parameters, while more advanced students will be able to write their own snippets, blocks, or plugins to accomplish more complex tasks. The exercises from two years ago (available here) are done entirely in ImageJ and Matlab for students who would prefer to stay in those environments (not recommended).
The exercises will be supported by Amogha Pandeshwar and Kevin Mader. There will be office hours in ETZ H75 on Thursdays from 14:00 to 15:00 or by appointment.
The exercises will be available on Kaggle as 'Datasets', and we will be using mybinder as stated above. For those interested, there will be an option to use GitHub Classroom to turn in assignments (make sure your @student.ethz.ch address is linked to your GitHub account).
- For all exercises it is important to start from the provided starting data
- Starting Data
- The KNIME or workflow based exercises are here
- KNIME Exercises
For students experienced in Python, there are Binder notebooks:
- Demonstration view or binder
- Non Local Means view or binder
- Exercises 1-3 (Exercises/02-files/Exercises1-3) view or binder
- Exercises 4 (Exercises/02-files/Exercise4) view or binder
- Setup Jupyter on the D61 Machines
- You can get started on Kaggle (no installation required just register)
- Online Dataset
- Online Kernel for Exercises 1-3 and Exercise 4
- Additionally there is a competition on Image Enhancement
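As a warm-up for the enhancement exercises, this short sketch (using a hypothetical noisy phantom, not the course data) compares two common filters from `scipy.ndimage` by mean squared error against the clean image:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

# Hypothetical test image: a bright square corrupted with Gaussian noise
rng = np.random.default_rng(42)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0
noisy = clean + rng.normal(0, 0.3, clean.shape)

# Two common enhancement filters: Gaussian (linear smoothing)
# and median (better at preserving edges)
gauss = gaussian_filter(noisy, sigma=2)
med = median_filter(noisy, size=5)

# Mean squared error vs. the clean image should drop after filtering
mse = lambda a: float(np.mean((a - clean) ** 2))
print(mse(noisy), mse(gauss), mse(med))
```

Comparing such a quantitative error metric across filters (rather than judging by eye) is exactly the kind of evaluation the enhancement lecture asks for.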
- KNIME Exercises
- Old Workflows
- Python Fossil Segmentation Exercises or binder
- Python Nerve Segmentation Exercises or binder or Kaggle
- Kernel for Ultrasound Segmentation - Exercises
- Kernel for Superpixels on PETCT
- Kernel for K-Means on Temporal/Video Data
- Advanced Kernel Predicting Malignancy using Superpixels
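A minimal segmentation sketch in the spirit of these exercises, using an automatic (Otsu) threshold followed by connected-component labeling from scikit-image (the phantom image is hypothetical, not one of the course datasets):

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label

# Hypothetical phantom: two bright blobs on a dark, slightly noisy background
rng = np.random.default_rng(0)
img = rng.normal(0.1, 0.02, (64, 64))
img[10:20, 10:20] += 1.0
img[40:55, 30:50] += 1.0

# Automatic global threshold (Otsu), then connected-component labeling
mask = img > threshold_otsu(img)
labels = label(mask)
print(labels.max())  # number of connected objects found
```

Automatic thresholds like Otsu's work well when foreground and background intensities are well separated, and fail on uneven illumination, which is one of the pitfalls the lecture discusses.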
- Multispectral / Hyperspectral Data
- KNIME Exercises
- Kaggle Texture Analysis scikit-image
- Kaggle Radiomics Analysis
- Paraview Curvature
- Python Exercises
- Battery Dataset The battery dataset along with Kernels for basic preprocessing and analysis.
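To illustrate the kind of shape metrics these exercises compute, here is a small sketch with `skimage.measure.regionprops` on a synthetic disk (the image and the circularity formula choice are illustrative assumptions):

```python
import numpy as np
from skimage.measure import label, regionprops

# Hypothetical binary image containing one solid disk of radius 15
yy, xx = np.mgrid[:64, :64]
disk = ((yy - 32) ** 2 + (xx - 32) ** 2) <= 15 ** 2

# regionprops computes per-object metrics from a labeled image
props = regionprops(label(disk.astype(int)))[0]

# Circularity 4*pi*A/P^2 is 1 for an ideal circle, smaller for
# elongated or rough shapes (digitization makes it approximate)
circularity = 4 * np.pi * props.area / props.perimeter ** 2
print(props.area, circularity)
```

The same call also exposes metrics like eccentricity, orientation, and bounding box, which map directly onto the 2D/3D shape metrics discussed in the lecture.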
- KNIME Exercises
- IPython Notebook (Under development)
- Kaggle Street Network
- Kaggle Electron Microscopy Segmentation
- Kaggle Python Notebook
- Kaggle R Notebook
- KNIME Exercises
- Kaggle Neuron Tracking
- Kaggle X-Ray Scan Registration
- Registration Tutorial: Slides or Interactive - By Duncan Betts
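A compact sketch of one registration idea from the tutorial, estimating a rigid translation with phase correlation in pure NumPy (the test image and shift are made up for illustration):

```python
import numpy as np

def phase_correlation_shift(fixed, moving):
    # Estimate the integer translation between two images from the
    # normalized cross-power spectrum (phase correlation)
    F, M = np.fft.fft2(fixed), np.fft.fft2(moving)
    cross = F * np.conj(M)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative values
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Hypothetical example: shift a square by (3, 5) and recover the offset
img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0
moved = np.roll(img, (3, 5), axis=(0, 1))
print(phase_correlation_shift(moved, img))  # → (3, 5)
```

Real registration problems (deformable motion, multimodal intensities) need the full toolkits covered in the exercises, such as SimpleITK; this only shows the translation-only special case.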
- KNIME Exercises
- C. Elegans Dataset on Kaggle R Notebook or Python Notebook
- Lung Segmentation Rule-based Image Processing and Simple Neural Network
- High Content Screening with C. Elegans
- Goal is looking at what metrics accurately indicate living or dead worms and building a simple predictive model
- Kaggle Overview
- Shape Analysis
- Processing in R
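The "simple predictive model" step above can be sketched with scikit-learn; the feature table here (worm length and a curvature score, with separable live/dead classes) is entirely synthetic and only stands in for the real metrics extracted in the exercise:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per worm: (length, curvature);
# assume straight, long worms tend to be dead (1), curled ones alive (0)
rng = np.random.default_rng(1)
alive = np.column_stack([rng.normal(40, 5, 50), rng.normal(0.8, 0.1, 50)])
dead = np.column_stack([rng.normal(70, 5, 50), rng.normal(0.2, 0.1, 50)])
X = np.vstack([alive, dead])
y = np.repeat([0, 1], 50)

# Fit a logistic regression classifier and report training accuracy
clf = LogisticRegression().fit(X, y)
print(clf.score(X, y))
```

In the actual exercise the interesting part is deciding which metrics to feed in and validating on held-out worms, not the one-line model fit.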
- KNIME / Spark Exercises
- Tensorflow DAG Notebook Filtering
- Kaggle DAG Notebook for Filtering
- Block-based 3D Image Analysis in Dask
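The block-based pattern that Dask automates can be sketched by hand in plain NumPy; this toy example (a hypothetical image and tile size) computes a global mean one tile at a time, the same way out-of-core tools combine partial results:

```python
import numpy as np

def blocked_mean(image, block=(32, 32)):
    # Process a large image one tile at a time and combine partial
    # results; dask.array applies this pattern automatically, so data
    # never has to fit in memory all at once
    total, count = 0.0, 0
    for i in range(0, image.shape[0], block[0]):
        for j in range(0, image.shape[1], block[1]):
            tile = image[i:i + block[0], j:j + block[1]]
            total += tile.sum()
            count += tile.size
    return total / count

img = np.arange(128 * 128, dtype=float).reshape(128, 128)
print(blocked_mean(img), img.mean())  # the two means agree
```

With `dask.array` the equivalent is a one-liner over a chunked array, and the scheduler parallelizes the per-tile work across cores or a cluster.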
- Create an issue (on the group site that everyone can see and respond to, requires a Github account), issues from last year
- Provide anonymous feedback on the course here
- Or send direct email (slightly less anonymous feedback) to Kevin
The final examination (as originally stated in the course material) will be a 30 minute oral exam covering the material of the course and its applications to real systems. Students who present a project will have the option to use it for some of the real-systems questions (provided they have sent their slides to Kevin after the presentation and bring a printed copy to the exam, including several image slices if not already in the slides). The exam will cover all the lecture material from Image Enhancement to Scaling Up (the guest lecture will not be covered). Several example questions (not exhaustive) have been collected which might be helpful for preparation.
- Overview of possible projects
- Here you signup for your project with team members and a short title and description
The course, slides and exercises are primarily done using Python 3.6 and Jupyter Notebook 5.5. The binder/repo2docker-compatible environment (https://github.com/jupyter/repo2docker) can be found at binder/environment.yml. A full copy of the environment at the time the class was given is available in the wiki file. As many of these packages are frequently updated, we have also made a copy of the docker image produced by repo2docker, uploaded to Docker Hub at https://hub.docker.com/r/kmader/qbi2018/.
The packages required for all lectures:
- numpy
- matplotlib
- scipy
- scikit-image
- scikit-learn
- ipyvolume
For the machine learning and big data lectures, a few additional packages are required:
- tensorflow
- pytorch
- opencv
- dask
- dask_ndmeasure
- dask_ndmorph
- dask_ndfilter
For the image registration lecture and medical image data:
- itk
- SimpleITK
- itkwidgets
- Course Wiki (for questions, answers, and discussions; we use the old one)
- Main Page
- Data Science/Python Introduction Handbook
- ETH Deep Learning Course taught in the Fall Semester, also uses Python but with a much more intensive mathematical grounding and less focus on images.
- FastAI Deep Learning Course and Part 2 for a very practically focused introduction to Deep Learning using the Python skills developed in QBI.
- Reproducible Research
- Coursera Course
- Course and Tools in R
- Performance Computing Courses
- High Performance Computing for Science and Engineering (HPCSE) I
- Programming Massively Parallel Processors with CUDA
- Introduction to Machine Learning (EPFL)