Python porting questions #1
Thank you for using the project. I am sorry, but I cannot understand your question. Could you provide a clearer question so that I can answer it? I am looking forward to hearing from you soon. Regards, Hiro
To be clearer, here is what I'm trying to do with this method: I have an arbitrary function with n parameters. The input is [X1, X2, ..., Xn] and the output is a single float (y). What I am trying to figure out is how to adapt the code: whatever computes the gradient cannot simply build an n-by-n tensor with every possible y, as that would be impossible. This problem doesn't have the usual mapping that we can see on line 226 of your obfgs code (the goal being that the optimizer would work on any solution).
Hi Alex, thank you for your question. I have not yet understood your question clearly, but here is my answer. As you might already understand, the optimization solver (such as obfgs.m) and the problem are defined completely separately. As in the demo.m file in the project,

problem = logistic_regression(data.x_train, data.y_train, data.x_test, data.y_test);

defines the "problem". Then, the solver is executed with this problem as

[w_svrg, info_svrg] = svrg(problem, options);

Thus, the solver, which is svrg in this demo.m, binds itself to the problem (the logistic regression problem in this example). Therefore, when the solver file calls something like

f_val = problem.cost(w);

it calculates the cost value defined in the "problem" definition (the logistic regression problem in this example). These are not pure MATLAB functions or libraries. Does this make it clearer in the first place? Thanks, Hiro
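For the Python port, the same separation can be kept. Below is a minimal sketch of the pattern, not code from SGDLibrary: the QuadraticProblem class, the gradient_descent loop, and all parameter names are illustrative. The point is that the problem object and the solver meet only through cost and grad.

import numpy as np

class QuadraticProblem:
    # Plays the role of the MATLAB "problem" struct: the solver only
    # ever calls problem.cost(w) and problem.grad(w).
    def __init__(self, A, b):
        self.A, self.b = A, b

    def cost(self, w):
        return 0.5 * w @ self.A @ w - self.b @ w

    def grad(self, w):
        return self.A @ w - self.b

def gradient_descent(problem, w0, step=0.1, iters=100):
    # Stand-in for a solver such as svrg or obfgs: it is bound to the
    # problem only through the cost/grad interface.
    w = w0
    for _ in range(iters):
        w = w - step * problem.grad(w)
    return w, problem.cost(w)

problem = QuadraticProblem(np.eye(3), np.ones(3))
w_opt, f_val = gradient_descent(problem, np.zeros(3))  # w_opt -> [1, 1, 1]

Any solver written against this interface runs unchanged on any problem object that provides cost and grad, which mirrors how svrg(problem, options) works in the MATLAB code.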
Greetings Hiro, yes, this makes more sense. I am currently trying to find a way to move from predefined functions (such as problem.cost, problem.grad and problem.full_grad) to a general solver. This would mean that for a function that takes a vector of length n as input, it would output a scalar:

y2 = foo([5, 7, 6, 2, 10])

My goal would be to optimize foo in order to obtain the largest or smallest value as the output. For this to work I would need a way to adapt the current code. Do you have any idea how this could be done? Thanks again for your precious time.
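One possible way to adapt the code for such a black-box foo, not taken from this thread but a common workaround, is to wrap foo in a problem-like object whose grad is a finite-difference estimate. This needs only n + 1 evaluations of foo per gradient, not an n-by-n tensor of outputs. All names below (make_blackbox_problem, eps, maximize) are illustrative:

import numpy as np

def make_blackbox_problem(foo, n, eps=1e-6, maximize=False):
    # Wraps an arbitrary scalar-valued foo(x), x of length n, behind the
    # cost/grad interface that a gradient-based solver expects.
    sign = -1.0 if maximize else 1.0   # maximizing foo == minimizing -foo

    class BlackBoxProblem:
        def cost(self, w):
            return sign * foo(w)

        def grad(self, w):
            # Forward-difference gradient estimate: n + 1 calls to foo.
            f0 = self.cost(w)
            g = np.empty(n)
            for i in range(n):
                w_step = w.copy()
                w_step[i] += eps
                g[i] = (self.cost(w_step) - f0) / eps
            return g

        full_grad = grad  # no mini-batching for a black-box function

    return BlackBoxProblem()

# Toy usage: maximize foo starting from the vector in the question above.
foo = lambda x: -np.sum((x - 3.0) ** 2)
problem = make_blackbox_problem(foo, n=5, maximize=True)
w = np.array([5.0, 7.0, 6.0, 2.0, 10.0])
for _ in range(200):
    w = w - 0.1 * problem.grad(w)   # plain gradient descent on the cost
# w is now close to [3, 3, 3, 3, 3], where foo is largest

Note that finite differences assume foo is smooth and reasonably cheap to evaluate; for noisy or expensive functions, a derivative-free method may be a better fit.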
Greetings,
First, I would like to thank you for creating such a nice project with so many optimization algorithms!
I'm currently trying to port the online limited-memory BFGS algorithm to Python (yes, SciPy has many algorithms coded in Fortran, but it's lacking some that I would like to test, namely this one).
I've got most of it done, but I'm confused about the gradient descent part that uses the weights on the input.
I'm not familiar with MATLAB, and I want to know whether .cost, .grad, or anything else called on the input (the variable problem) are methods defined in the problem code itself, or MATLAB built-ins.
P.S.: Thanks for all your hard work!
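To make the answer above concrete for the port: problem.cost and problem.grad are not MATLAB built-ins; the problem definition returns a struct-like object whose fields hold function handles. A rough Python analogue of that struct-of-function-handles pattern, with an illustrative least_squares_problem as the problem definition, could use closures on a SimpleNamespace:

from types import SimpleNamespace
import numpy as np

def least_squares_problem(A, b):
    # Mirrors the MATLAB pattern: return a "struct" whose fields are
    # function handles (here, closures over A and b).
    cost = lambda w: 0.5 * np.sum((A @ w - b) ** 2)
    grad = lambda w: A.T @ (A @ w - b)
    return SimpleNamespace(cost=cost, grad=grad, full_grad=grad)

problem = least_squares_problem(np.eye(2), np.array([1.0, 2.0]))
print(problem.cost(np.zeros(2)))  # attribute lookup on the object, not a built-in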