Proper epsilon for complex? #115
Comments
To be clear, you want to do finite differencing on ℂ → ℂ functions?
Yes.
To clarify: ℂ → ℂ functions, which are sometimes just R^2n → R^2n packed into ℂ^N. @AshtonSBradley
Not in favour of this. For instance, JuliaNLSolvers/Optim.jl#435 would silently fail, because (unless I'm mistaken) the finite-differencing of a C^n → R function would be well-defined but wrong (it would be in R^n). As I mentioned in JuliaDiff/ForwardDiff.jl#157, I think the way to go is to define only those derivatives which make mathematical sense, i.e. functions C^n → R. What do you mean by C → C functions that are R^2n packed in C^n? In that case the Jacobian should be 2n × 2n and cannot be represented by an n × n complex matrix, right?

Edit: to be clear, what I'm in favour of is that finite_difference assumes (and checks) a C^n → R function, computes the finite differences as an R^2n → R function (i.e. 2n function calls), and reassembles the result as a C^n vector. This is the only approach I can see that makes sense; the other possibility is to return a doubled-size Jacobian (i.e. converting input and output to R^2n), but that conversion to reals should rather be done in the caller.
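For concreteness, here is a minimal sketch of that approach in Julia, assuming a central-difference scheme with a fixed step size; `fdgrad` and `h` are illustrative names, not an existing API in this package:

```julia
# Sketch: finite-difference "gradient" of a C^n -> R function, computed as an
# R^2n -> R problem and reassembled into a C^n vector (fdgrad is illustrative).
function fdgrad(f, x::AbstractVector{<:Complex}; h = sqrt(eps(Float64)))
    g = similar(x)
    for i in eachindex(x)
        e = zeros(eltype(x), length(x)); e[i] = 1
        dre = (f(x .+ h .* e) - f(x .- h .* e)) / (2h)            # d/d real(x[i])
        dim = (f(x .+ im * h .* e) - f(x .- im * h .* e)) / (2h)  # d/d imag(x[i])
        g[i] = complex(dre, dim)
    end
    return g  # central differences: 4n calls; forward differences would use ~2n
end

# Example with a genuine C^n -> R function:
fdgrad(z -> sum(abs2, z), ComplexF64[1 + 2im, 3 - 1im])   # ≈ [2 + 4im, 6 - 2im]
```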
It's the same result as doing this: JuliaDiff/DualNumbers.jl#29 (comment). ℂ → ℂ is the most standard case, so it should have first-class treatment.
What about a keyword argument to enable this? It doubles the number of function calls and is unnecessary in most cases, but it's something that should be handled correctly.
Thinking about this more: probably default to the safe behaviour, with a keyword argument to opt into the faster path (…).
Note that this, and many performance issues, have been solved by DiffEqDiffTools.jl, which is more of a dev library than a user-facing finite-differencing package, but it can be incorporated into packages in the same way and provides these features along with a hefty speedup.
It seems that the finite difference routines would "just work" if `epsilon` was changed to the "correct value". Unless there's some kind of norm argument, I would propose changing each part using the current epsilon: for example, if `epsilon1` is from `real(x)` and `epsilon2` is from `imag(x)`, then `epsilon = epsilon1 + im*epsilon2`. Would this be fine? This definition would make it safe for the case where a user is just "packing" floats into complex numbers, which can be common in physical applications. If this is acceptable I'll make a PR.
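As a rough illustration of the proposed definition, here is a sketch assuming the common relative-step heuristic `max(|v|, 1) * sqrt(eps())` per real component; `complex_epsilon` and `relstep` are placeholder names, not existing functions:

```julia
# Sketch of the proposed step for a complex x: derive a step from the real
# part and one from the imaginary part, then combine them as epsilon1 + im*epsilon2.
relstep(v::Real) = max(abs(v), one(v)) * sqrt(eps(typeof(float(v))))

function complex_epsilon(x::Complex)
    epsilon1 = relstep(real(x))   # step based on real(x)
    epsilon2 = relstep(imag(x))   # step based on imag(x)
    return epsilon1 + im * epsilon2
end

# With this epsilon a forward difference perturbs both parts at once, which
# covers the "floats packed into complex numbers" case; for a holomorphic f
# the usual quotient still approximates f'(x):
x = 1.0 + 2.0im
h = complex_epsilon(x)
(sin(x + h) - sin(x)) / h   # ≈ cos(x)
```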