CS221 is "the" intro AI class at Stanford, and [ this playlist ] on YouTube lists the video lectures of CS221 Autumn 2018-19 (I guess someone uploaded the videos without knowing the terms of taking the class). Having access to the video lectures is great; it makes going through the slides easier. Since I didn't pay for the course (I am not a full-time Stanford student), the difference is that you don't get to ask the TAs questions, submit the projects, or get any feedback, but you do get access to the notes and slides from the course website, and you get to learn CS221 (and that's what matters most). Also, I was lucky to have access to the CS221 Piazza class (the CS221 doubt-clearing channel) since I had a Stanford email account (I was a Stanford Visiting Student). All in all, if you want to learn: stay truthful, learn the contents well, be curious, and maintain the Honor Code. CS221 is exciting!
Grade structure: Homework - 60%, Exam - 20%, Final Project - 20%
[Schedule] ; [Coursework] ; [CS221 2017-18 Autumn Class]
What do web search, speech recognition, face recognition, machine translation, autonomous driving, and automatic scheduling have in common? These are all complex real-world problems, and the goal of artificial intelligence (AI) is to tackle these with rigorous mathematical tools. In this course, we will learn the foundational principles that drive these applications and practice implementing some of these systems. Specific topics include machine learning, search, game playing, Markov decision processes, constraint satisfaction, graphical models, and logic. The main goal of the course is to equip us with the tools to tackle new AI problems we might encounter in life.
Books: Artificial Intelligence: A Modern Approach / AIMA-pdf, The Elements of Statistical Learning - pdf | AIMA is a great book. Here are the pseudocode algorithms, the AIMA code repo, and resources from the book.
β 1. Foundations β« ( zip )
β 2. Sentiment classification β« ( zip )
β 3. Text reconstruction β« ( zip )
β 6. Course scheduling β« ( zip )
β 7. Car tracking β« ( zip )
β 8. Language and logic β« ( zip )
@ Paper Projects | Guidelines | MIT 6.034 Artificial Intelligence
π
Overview of course, Optimization [ slide1p ], [ slide6p ]
β N.O.T.E.S
π
Linear classification, Loss minimization, Stochastic gradient descent [ slide1p ] , [ slide6p ]
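To make the lecture topics concrete, here is a minimal sketch of stochastic gradient descent on the hinge loss for a linear classifier. The toy dataset, learning rate, and epoch count are my own assumptions for illustration, not taken from the course materials:

```python
import random

def sgd_hinge(data, dims, eta=0.1, epochs=100, seed=0):
    """Train w to minimize hinge loss max(0, 1 - y * (w . x)) via SGD."""
    random.seed(seed)
    w = [0.0] * dims
    for _ in range(epochs):
        x, y = random.choice(data)  # pick one training example at random
        margin = y * sum(wi * xi for wi, xi in zip(w, x))
        if margin < 1:  # subgradient is nonzero only inside the margin
            for i in range(dims):
                w[i] += eta * y * x[i]
    return w

# Tiny linearly separable toy set: label +1 when x0 > x1, else -1.
data = [([2.0, 1.0], 1), ([1.0, 2.0], -1), ([3.0, 0.0], 1), ([0.0, 3.0], -1)]
w = sgd_hinge(data, dims=2)
```

Each update nudges `w` toward a misclassified (or low-margin) example's direction, which is exactly the "loss minimization via SGD" recipe from the lecture.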
π
Section: optimization, probability, Python (review) [ slide ]
π
Features and non-linearity, Neural networks, nearest neighbors [ slide1p ] , [ slide6p ]
π
Generalization, Unsupervised learning, K-means [ slide1p ],[ slide6p ]
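A bare-bones k-means sketch (1-D points for brevity; the data and initial centroids are made up, not course material):

```python
def kmeans(points, centroids, iters=10):
    """Plain k-means: alternate assignment and centroid-update steps."""
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in centroids]
        for p in points:
            j = min(range(len(centroids)), key=lambda j: (p - centroids[j]) ** 2)
            clusters[j].append(p)
        # Update step: each centroid moves to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return centroids

centroids = kmeans([1.0, 1.2, 0.8, 9.0, 9.5, 8.5], centroids=[0.0, 10.0])
```

Both steps monotonically decrease the reconstruction loss, which is why k-means always converges (to a local optimum).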
π
Section: Backpropagation and SciKit Learn [ slide ]
β N.O.T.E.S
π
Tree search, Dynamic programming, uniform cost search [ slide1p ] , [ slide6p ]
π
A*, consistent heuristics, Relaxation [ slide1p ] , [ slide6p ]
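A compact A* sketch on a toy grid (the grid and heuristic are my own example). With a consistent heuristic, the first time a node is popped its g-cost is optimal; setting h ≡ 0 recovers uniform cost search:

```python
import heapq

def astar(start, goal, neighbors, h):
    """A* search: pop the node with the lowest f = g + h(node)."""
    frontier = [(h(start), 0, start)]
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue  # stale heap entry, a cheaper path was found later
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt))
    return None

# Toy 3x3 grid: move right or up with unit cost; Manhattan distance is consistent.
def neighbors(p):
    x, y = p
    return [((x + dx, y + dy), 1) for dx, dy in [(1, 0), (0, 1)]
            if x + dx <= 2 and y + dy <= 2]

cost = astar((0, 0), (2, 2), neighbors, lambda p: (2 - p[0]) + (2 - p[1]))
```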
π
Section: UCS, Dynamic Programming, A* [ slide ]
β N.O.T.E.S
π
Policy evaluation, policy improvement, Policy iteration, value iteration [ slide1p ] , [ slide6p ]
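A minimal value iteration sketch on a made-up two-state MDP (the "stay or quit" setup below is my own toy, loosely in the style of the lecture examples):

```python
def value_iteration(states, actions, T, R, gamma=0.9, eps=1e-6):
    """Value iteration: V(s) <- max_a sum_{s'} T(s,a,s') * (R(s,a,s') + gamma*V(s'))."""
    V = {s: 0.0 for s in states}
    while True:
        V_new = {}
        for s in states:
            acts = actions(s)
            if not acts:
                V_new[s] = 0.0  # terminal state has no future reward
            else:
                V_new[s] = max(sum(p * (R(s, a, s2) + gamma * V[s2])
                                   for s2, p in T(s, a)) for a in acts)
        if max(abs(V_new[s] - V[s]) for s in states) < eps:
            return V_new
        V = V_new

# Toy MDP: in state 'in', 'stay' earns 2 forever, 'quit' earns 10 once then ends.
def T(s, a):
    return {("in", "stay"): [("in", 1.0)], ("in", "quit"): [("end", 1.0)]}[(s, a)]

def R(s, a, s2):
    return {("in", "stay"): 2.0, ("in", "quit"): 10.0}[(s, a)]

V = value_iteration(["in", "end"],
                    lambda s: ["stay", "quit"] if s == "in" else [], T, R)
```

With gamma = 0.9, staying is worth 2/(1 - 0.9) = 20 > 10, so the optimal value of `in` converges to 20.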
π
Reinforcement learning, Monte Carlo, SARSA, Q-learning, Exploration/exploitation, function approximation [ slide1p ] , [ slide6p ]
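A tabular Q-learning sketch with epsilon-greedy exploration on a tiny 4-state chain (the environment and hyperparameters are my own toy, not from the homework):

```python
import random

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Q-learning on a 4-state chain: reward 1 for stepping into state 3."""
    random.seed(seed)
    Q = {(s, a): 0.0 for s in range(4) for a in (-1, +1)}
    for _ in range(episodes):
        s = 0
        while s != 3:
            # Epsilon-greedy: explore with prob eps, else act greedily on Q.
            a = random.choice((-1, +1)) if random.random() < eps else \
                max((-1, +1), key=lambda a: Q[(s, a)])
            s2 = min(max(s + a, 0), 3)  # deterministic chain dynamics
            r = 1.0 if s2 == 3 else 0.0
            future = 0.0 if s2 == 3 else max(Q[(s2, -1)], Q[(s2, +1)])
            Q[(s, a)] += alpha * (r + gamma * future - Q[(s, a)])  # TD update
            s = s2
    return Q

Q = q_learning()
```

Note the off-policy flavor: the target uses the max over next actions regardless of which action the epsilon-greedy behavior actually takes (SARSA would use the taken action instead).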
π
Section: deep reinforcement learning [ slide ]
β N.O.T.E.S
π
Minimax, expectimax, Evaluation functions, Alpha-beta pruning [ slide1p ] , [ slide6p ]
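A minimax-with-alpha-beta sketch on the classic three-branch textbook tree (the tree encoding below is my own illustration):

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, value):
    """Minimax with alpha-beta pruning; children/value define the game tree."""
    kids = children(node)
    if depth == 0 or not kids:
        return value(node)
    if maximizing:
        best = float("-inf")
        for c in kids:
            best = max(best, alphabeta(c, depth - 1, alpha, beta, False,
                                       children, value))
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # beta cutoff: the min player will avoid this branch
        return best
    else:
        best = float("inf")
        for c in kids:
            best = min(best, alphabeta(c, depth - 1, alpha, beta, True,
                                       children, value))
            beta = min(beta, best)
            if alpha >= beta:
                break  # alpha cutoff: the max player already has better
        return best

# Leaves [[3, 12, 8], [2, 4, 6], [14, 5, 2]] -> minimax value 3.
tree = {"root": ["a", "b", "c"], "a": [3, 12, 8], "b": [2, 4, 6], "c": [14, 5, 2]}
val = alphabeta("root", 2, float("-inf"), float("inf"), True,
                lambda n: tree.get(n, []) if isinstance(n, str) else [],
                lambda n: n)
```

Pruning never changes the value returned at the root; it only skips branches a rational opponent would never allow.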
π
TD learning, Game theory [ slide1p ] , [ slide6p ]
π
Section: AlphaZero [ slide ]
β N.O.T.E.S
π
Factor graphs, Backtracking search, Dynamic ordering, arc consistency [ slide1p ] , [ slide6p ]
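A plain backtracking-search sketch for CSPs, on a small map-coloring instance (the adjacency graph and constraint encoding are my own example):

```python
def backtrack(assignment, variables, domains, consistent):
    """Assign variables one at a time; prune any assignment that violates
    a constraint, and undo (backtrack) when a branch dead-ends."""
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for val in domains[var]:
        if consistent(var, val, assignment):
            assignment[var] = val
            result = backtrack(assignment, variables, domains, consistent)
            if result is not None:
                return result
            del assignment[var]  # undo and try the next value
    return None

# Map coloring: adjacent regions must get different colors.
adj = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
       "SA": ["WA", "NT", "Q"], "Q": ["NT", "SA"]}
domains = {v: ["red", "green", "blue"] for v in adj}
consistent = lambda var, val, asg: all(asg.get(n) != val for n in adj[var])
coloring = backtrack({}, list(adj), domains, consistent)
```

Dynamic ordering and arc consistency from the lecture are refinements of exactly this skeleton: smarter choices of `var`/`val` and extra pruning of the domains.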
π
Beam search, local search, Conditional independence, variable elimination [ slide1p ] , [ slide6p ]
π
Section: CSPs [ slide ]
β N.O.T.E.S
π
Bayesian inference, Marginal independence, Hidden Markov models [ slide1p ] , [ slide6p ]
π
Forward-backward, Gibbs sampling, Particle filtering [ slide1p ] , [ slide6p ]
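The forward pass of forward-backward can be sketched in a few lines; below it runs on the AIMA-style umbrella world (hidden rain, observed umbrella), which is my choice of example:

```python
def forward(obs, states, start, trans, emit):
    """Forward pass: alpha_t(s) = P(e_1..e_t, S_t = s), computed recursively."""
    alpha = {s: start[s] * emit[s][obs[0]] for s in states}
    for e in obs[1:]:
        # alpha'(s2) = P(e | s2) * sum_s1 alpha(s1) * P(s2 | s1)
        alpha = {s2: emit[s2][e] * sum(alpha[s1] * trans[s1][s2] for s1 in states)
                 for s2 in states}
    return alpha

states = ["rain", "sun"]
start = {"rain": 0.5, "sun": 0.5}
trans = {"rain": {"rain": 0.7, "sun": 0.3}, "sun": {"rain": 0.3, "sun": 0.7}}
emit = {"rain": {"umb": 0.9, "no": 0.1}, "sun": {"umb": 0.2, "no": 0.8}}
alpha = forward(["umb", "umb"], states, start, trans, emit)
p_rain = alpha["rain"] / (alpha["rain"] + alpha["sun"])  # filtered P(rain | e_1:2)
```

Normalizing the alphas gives the filtered posterior; the backward pass (and Gibbs sampling or particle filtering as approximations) extends this to smoothing.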
π
Section: Bayesian networks [ slide ]
π
Learning Bayesian networks, Laplace smoothing, Expectation Maximization [ slide1p ] , [ slide6p ] , [ supplementary ]
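Laplace smoothing is small enough to show in full; the counts and vocabulary below are a made-up example:

```python
def laplace_estimate(counts, vocab, lam=1.0):
    """Add-lambda (Laplace) smoothing: p(x) = (count(x) + lam) / (N + lam*|vocab|),
    so outcomes never seen in the data still get nonzero probability."""
    total = sum(counts.get(x, 0) for x in vocab)
    denom = total + lam * len(vocab)
    return {x: (counts.get(x, 0) + lam) / denom for x in vocab}

# "c" was never observed, yet receives probability 1/7 rather than 0.
p = laplace_estimate({"a": 3, "b": 1}, vocab=["a", "b", "c"])
```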
β N.O.T.E.S
π
Syntax versus semantics, Propositional logic, Horn clauses [ slide1p ] , [ slide6p ]
π
First-order logic, Resolution [ slide1p ] , [ slide6p ]
β N.O.T.E.S
π
Deep learning, autoencoders, CNNs, RNNs [ slide1p ] , [ slide6p ]
π
Section: semantic parsing (advanced), Higher-order logics, Markov logic, Semantic parsing [ slide ]
π
Summary, future of AI [ slide1p ] , [ slide6p ]
β N.O.T.E.S
β Exam Papers - F2017, F2016, F2015, M2014, M2013, F2012, M2012, PracticeM1: Solution, PracticeM2: Solution
β· My Solutions for CS221 Exams - 2017, 2016, 2015
PSets π Search: Solution | Variables: Solution | [221@2013] | Project e.g | e.gII
-
I would recommend understanding the contents of STATS 202: Data Mining and Analysis, CS103: Mathematical Foundations of Computing, and CS109: Probability for Computer Scientists before starting AI; these cover the mathematics behind AI. Also read Decision Making Under Uncertainty: Theory and Application by Mykel J. Kochenderfer (pdf), which serves as the coursebook for CS238: Decision Making under Uncertainty. CS246: Mining Massive Data Sets is important for AI as well.
-
Udacity's Artificial Intelligence and AI Programming with Python Nanodegree
FINAL PROJECT | Past Final Projects
The marks distribution is Homework - 60%, Exam - 20%, and Final Project - 20%, so the final project matters. I genuinely enjoy going through the past CS221 posters; they are exciting. The final project I made is "AI playing Mario: A Reinforcement Learning Approach"; here is the implementation/code, and the poster of the project is here: