"Computer Vision" , "ImageNet", "Fei Fei Li" are analogous, I love the idea of taking CS231n. All the memories, with my experience with Vision and working for "Inceptionism and Residualism in the Classification of Breast Fine-Needle Aspiration Cytology Cell Samples". GoogLeNet, ResNet , all the emotions with "Visiting the Stanford Vision Lab". Thank You ! I would love to go through CS231n again, in a much more detailed manner, steering through Mathematics. Let's get started, CS231n here I come :)
CS231n course lecture videos from Spring 2017 | 2017 course website
Grading: Assignment #1: 15%, Assignment #2: 15%, Assignment #3: 15%, Midterm: 15%, Final Project: 40%
Computer Vision has become ubiquitous in our society, with applications in search, image understanding, apps, mapping, medicine, drones, and self-driving cars. Core to many of these applications are visual recognition tasks such as image classification, localization and detection. Recent developments in neural network (aka "deep learning") approaches have greatly advanced the performance of these state-of-the-art visual recognition systems. This course is a deep dive into details of the deep learning architectures with a focus on learning end-to-end models for these tasks, particularly image classification. During the 10-week course, students will learn to implement, train and debug their own neural networks and gain a detailed understanding of cutting-edge research in computer vision. The final assignment will involve training a multi-million parameter convolutional neural network and applying it on the largest image classification dataset (ImageNet). We will focus on teaching how to set up the problem of image recognition, the learning algorithms (e.g. backpropagation), practical engineering tricks for training and fine-tuning the networks and guide the students through hands-on assignments and a final course project. Much of the background and materials of this course will be drawn from the ImageNet Challenge.
- Setup Instructions
- Python / Numpy Tutorial
- IPython Notebook Tutorial
- Google Cloud Tutorial
- AWS Tutorial
- Image Classification: Data-driven Approach, k-Nearest Neighbor, train/val/test splits (L1/L2 distances, hyperparameter search, cross-validation; see the k-NN sketch after this list)
- Linear classification: Support Vector Machine, Softmax (parametric approach, bias trick, hinge loss, cross-entropy loss, L2 regularization, web demo; see the loss sketch after this list)
- Optimization: Stochastic Gradient Descent (optimization landscapes, local search, learning rate, analytic/numerical gradient)
- Backpropagation, Intuitions (chain rule interpretation, real-valued circuits, patterns in gradient flow)
- Neural Networks Part 1: Setting up the Architecture (model of a biological neuron, activation functions, neural net architecture, representational power)
- Neural Networks Part 2: Setting up the Data and the Loss (preprocessing, weight initialization, batch normalization, regularization (L2/dropout), loss functions)
- Neural Networks Part 3: Learning and Evaluation (gradient checks, sanity checks, babysitting the learning process, momentum (+Nesterov), second-order methods, Adagrad/RMSprop, hyperparameter optimization, model ensembles; see the gradient-check sketch after this list)
- Putting it together: Minimal Neural Network Case Study (minimal 2D toy data example)
- Convolutional Neural Networks: Architectures, Convolution / Pooling Layers (layers, spatial arrangement, layer patterns, layer sizing patterns, AlexNet/ZFNet/VGGNet case studies, computational considerations; see the layer-sizing sketch after this list)
- Understanding and Visualizing Convolutional Neural Networks (tSNE embeddings, deconvnets, data gradients, fooling ConvNets, human comparisons)
- Transfer Learning and Fine-tuning Convolutional Neural Networks (see the fine-tuning sketch after this list)
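The k-Nearest Neighbor baseline from the image-classification note comes down to a distance matrix and a majority vote. Here is a minimal NumPy sketch of that idea; the function name `knn_predict`, the toy data, and the choice of k are mine for illustration, not code from the assignments.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5, metric='l2'):
    """Predict labels for X_test by majority vote over the k nearest training rows."""
    if metric == 'l2':
        # Vectorized squared L2 distances via (a - b)^2 = a^2 + b^2 - 2ab.
        dists = (np.sum(X_test**2, axis=1, keepdims=True)
                 + np.sum(X_train**2, axis=1)
                 - 2 * X_test.dot(X_train.T))
    else:  # 'l1'
        dists = np.abs(X_test[:, None, :] - X_train[None, :, :]).sum(axis=2)
    nearest = np.argsort(dists, axis=1)[:, :k]   # indices of the k closest training rows
    votes = y_train[nearest]                     # (num_test, k) neighbor labels
    return np.array([np.bincount(v).argmax() for v in votes])

# Toy usage: 100 random "images" of 3072 pixels (32x32x3), 10 classes.
rng = np.random.default_rng(0)
X_train, y_train = rng.standard_normal((100, 3072)), rng.integers(0, 10, 100)
X_test = rng.standard_normal((5, 3072))
print(knn_predict(X_train, y_train, X_test, k=5))
```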
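The linear-classification note contrasts the SVM hinge loss with the softmax cross-entropy loss for the same score function s = XW. A short NumPy sketch of both, with L2 regularization folded in; the names `svm_loss`/`softmax_loss` and the toy shapes are illustrative assumptions, not the assignment interfaces.

```python
import numpy as np

def svm_loss(W, X, y, reg=1e-3, delta=1.0):
    """Multiclass SVM (hinge) loss for a linear classifier s = XW.
    W: (D, C) weights, X: (N, D) data, y: (N,) integer class labels."""
    N = X.shape[0]
    scores = X.dot(W)                                   # (N, C) class scores
    correct = scores[np.arange(N), y][:, None]          # (N, 1) correct-class scores
    margins = np.maximum(0, scores - correct + delta)   # hinge margins
    margins[np.arange(N), y] = 0                        # correct class contributes nothing
    return margins.sum() / N + reg * np.sum(W * W)

def softmax_loss(W, X, y, reg=1e-3):
    """Softmax (cross-entropy) loss for the same linear classifier."""
    N = X.shape[0]
    scores = X.dot(W)
    scores -= scores.max(axis=1, keepdims=True)         # shift for numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(N), y].mean() + reg * np.sum(W * W)

# Toy check: 4 examples with 5 features and 3 classes.
rng = np.random.default_rng(0)
W = 0.01 * rng.standard_normal((5, 3))
X = rng.standard_normal((4, 5))
y = rng.integers(0, 3, size=4)
print(svm_loss(W, X, y), softmax_loss(W, X, y))
```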
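The optimization and learning-and-evaluation notes rely on two routines worth having in muscle memory: a centered-difference numerical gradient for gradient checking, and the SGD+momentum update. A rough sketch of both, assuming a generic scalar `loss_fn(W)`; the step size h and the hyperparameters are placeholder values.

```python
import numpy as np

def numerical_gradient(loss_fn, W, h=1e-5):
    """Centered-difference numerical gradient of a scalar loss with respect to W."""
    grad = np.zeros_like(W)
    it = np.nditer(W, flags=['multi_index'])
    while not it.finished:
        idx = it.multi_index
        old = W[idx]
        W[idx] = old + h; fp = loss_fn(W)   # f at W + h along this coordinate
        W[idx] = old - h; fm = loss_fn(W)   # f at W - h
        W[idx] = old                        # restore the original value
        grad[idx] = (fp - fm) / (2 * h)
        it.iternext()
    return grad

def sgd_momentum(W, dW, v, lr=1e-2, mu=0.9):
    """One SGD+momentum step; returns the updated weights and velocity."""
    v = mu * v - lr * dW                    # integrate the velocity
    return W + v, v

# Gradient check on a quadratic loss whose analytic gradient is known (2 * W).
W = np.random.randn(3, 4)
loss_fn = lambda W: np.sum(W ** 2)
num, ana = numerical_gradient(loss_fn, W), 2 * W
rel_err = np.max(np.abs(num - ana) / (np.abs(num) + np.abs(ana)))
print('max relative error:', rel_err)       # should be tiny (around 1e-8 or smaller)

# One parameter update with momentum.
v = np.zeros_like(W)
W, v = sgd_momentum(W, ana, v)
```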
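The "spatial arrangement" part of the ConvNet note reduces to one formula: with input width W, receptive field F, stride S, and zero-padding P, the output has (W - F + 2P)/S + 1 positions. A tiny sketch of that arithmetic, using the AlexNet-style conv1 numbers from the notes (227x227x3 input, 96 filters of size 11, stride 4, no padding) as a worked example:

```python
def conv_output_size(W, F, S=1, P=0):
    """Output spatial size of a conv layer: (W - F + 2P)/S + 1."""
    assert (W - F + 2 * P) % S == 0, "filter does not tile the input cleanly"
    return (W - F + 2 * P) // S + 1

def conv_param_count(F, C_in, C_out):
    """Weights plus biases for C_out filters of size F x F x C_in."""
    return F * F * C_in * C_out + C_out

# AlexNet-style first conv layer: 227x227x3 input, 96 filters of 11x11, stride 4, pad 0.
out = conv_output_size(227, 11, S=4, P=0)   # -> 55, i.e. a 55x55x96 activation volume
params = conv_param_count(11, 3, 96)        # -> 34944 learnable parameters
print(out, params)
```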
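For the transfer-learning and fine-tuning note, the common recipe is to take an ImageNet-pretrained backbone, freeze its features, and train only a new final layer. A rough PyTorch sketch of that recipe (assuming a recent torchvision), not the course's assignment code; the 10-class head, the learning rate, and the dummy batch are placeholders.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Load an ImageNet-pretrained ResNet-18 as the backbone.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze every pretrained parameter so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with one sized for the new task
# (10 classes is a placeholder for whatever the target dataset needs).
model.fc = nn.Linear(model.fc.in_features, 10)

# Optimize only the parameters of the new head.
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
x = torch.randn(4, 3, 224, 224)
y = torch.randint(0, 10, (4,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```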
Computer Vision Nanodegree (Udacity) | OpenCV | colah.github.io | awesome-cv | awesome-Deep Vision | cs231n summary
EXAM: 2017 Sample Midterm, Solution
FINAL PROJECT | Past Project
This is it: this project needs to be awesome. The past CS231n projects are so awesome. All the information on conferences, datasets, and posters can be found here. As part of CS231n, I did " ".