Optimus

In the final project for my optimization class, I implemented gradient descent, momentum, AdaGrad, Adam, and accelerated gradient, with hyperparameter tuning. I used them to optimize the Booth, Beale, Rosenbrock, and Ackley functions, in each case finding a solution within a Euclidean distance of less than 10⁻¹⁰ of the analytic solution. Note that answer.py (written by Suvansh Sanjeev) was provided to me and left unedited, and OptimizationAlgorithmsProject.ipynb served as a template in which I wrote the optimization algorithms; the report is entirely my own work. The main deliverable here is therefore OptimizationProjectReport.pdf, but I have uploaded the other files for posterity.
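To give a flavor of the kind of algorithm implemented in the notebook, here is a minimal, self-contained sketch of gradient descent with heavy-ball momentum applied to the Booth function. This is illustrative only and is not the notebook's code; the starting point, learning rate, momentum coefficient, and iteration count are assumptions chosen so the example converges, not the tuned hyperparameters from the project.

```python
import numpy as np

def booth(x):
    """Booth function; global minimum f = 0 at (1, 3)."""
    return (x[0] + 2 * x[1] - 7) ** 2 + (2 * x[0] + x[1] - 5) ** 2

def booth_grad(x):
    """Analytic gradient of the Booth function."""
    return np.array([
        2 * (x[0] + 2 * x[1] - 7) + 4 * (2 * x[0] + x[1] - 5),
        4 * (x[0] + 2 * x[1] - 7) + 2 * (2 * x[0] + x[1] - 5),
    ])

def momentum_descent(grad, x0, lr=0.05, beta=0.8, iters=2000):
    """Gradient descent with heavy-ball momentum (illustrative hyperparameters)."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(iters):
        v = beta * v - lr * grad(x)  # accumulate a decaying velocity
        x = x + v                    # step along the velocity
    return x

x_star = momentum_descent(booth_grad, x0=[0.0, 0.0])
print(x_star)                                         # approximately [1. 3.]
print(np.linalg.norm(x_star - np.array([1.0, 3.0])))  # distance to the analytic minimum
```

The other methods in the project (AdaGrad, Adam, accelerated gradient) follow the same loop structure, differing only in how the update step is computed from the gradient history.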

As a fun fact, during this project I noticed that the staff solution for one of the algorithms we were asked to implement was inefficient: it took longer to converge than my implementation, and on the provided example with the given hyperparameters it actually diverged to NaN. The course staff were generous enough to give me a shoutout (and, I think, extra credit) for finding the bug and suggesting a fix.