
Fix LHA VFNS SV benchmark 3 #222

Merged: 22 commits, Mar 31, 2023
Conversation

felixhekhorn
Contributor

Closes #215 - because I'm crazy, yet another alternative implementation of #218

  • it is really built on top of Fix LHA VFNS SV benchmark 2 #219 (as can be seen from the history), but the last commit changes the strategy all over again
  • also this PR restores the v0.10 benchmark
  • it implements Eq. (2.41) of Pegasus (and should, in fact, be as a whole identical with that implementation)
  • it deprecates the fact_scale argument to a_s (this is a consequence of the former, if you wish) (though in practice I don't see that warning inside the benchmarks)
  • it implements the LHS of Eq. (3.33) of MHOU (i.e. Eq. (3.34)) with the displaced alpha point (as requested by @andreab1997 )
  • it introduces a strange flip in the L definition (inside evolution_operator)
  • it does not implement the scheme B, yet
  • although I didn't touch the unit tests they are still passing, but one benchmark is failing ...
  • the meaning of $\xi_F$ in Pegasus and LHA was flipped - I didn't run an APFEL test yet, to see whether we also gained a flip there ...

PS: @andreab1997 you see with enough juggling I can find another solution 🙃
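The flipped log mentioned in the bullets above can be illustrated with a toy sketch. This is a hypothetical LO illustration, not eko's actual evolution_operator code: the names sv_exponentiated_kernel, gamma0, a_s, and xif2 are all illustrative, and the only point is where the sign of L enters.

```python
import numpy as np

# Hypothetical toy sketch (NOT eko's actual implementation): at LO the
# exponentiated scale-variation scheme multiplies the evolution kernel by
# exp(a_s * gamma0 * L), where L carries the flipped sign discussed above.
def sv_exponentiated_kernel(gamma0, a_s, xif2):
    L = -np.log(xif2)  # the "strange flip" in the L definition
    return np.exp(a_s * gamma0 * L)
```

At xif2 = 1 the kernel reduces to 1, so central-scale results are untouched; only the sign convention of L for xif2 != 1 is at stake.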

@felixhekhorn felixhekhorn added the bug Something isn't working label Mar 7, 2023
@alecandido
Member

alecandido commented Mar 7, 2023

I'd like to see:

  • tests and benchmarks passing
  • docs explaining the strategy
  • strange things explained, as much as possible

Sorry to be annoying, but now that we have arrived at iteration 3, we certainly need more care than before...

@andreab1997
Contributor

Closes #215 - because I'm crazy, yet another alternative implementation of #218

  • it is really built on top of Fix LHA VFNS SV benchmark 2 #219 (as can be seen from the history), but the last commit changes the strategy all over again
  • also this PR restores the v0.10 benchmark
  • it implements Eq. (2.41) of Pegasus (and should, in fact, be as a whole identical with that implementation)
  • it deprecates the fact_scale argument to a_s (this is a consequence of the former, if you wish) (though in practice I don't see that warning inside the benchmarks)

About this I am confused: I understood that fact_scale was used for the resummation scales, so I agree that in this case we should not use it, but why deprecate it?

  • it implements the LHS of Eq. (3.33) of MHOU (i.e. Eq. (3.34)) with the displaced alpha point (as requested by @andreab1997 )

Yes, thanks. BTW, in this way we are also moving the thresholds according to xif, right?
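The threshold question above can be made concrete with a small sketch. This is a hypothetical helper, not eko's API: if the heavy-quark matching thresholds follow the factorization scale, each squared threshold m_q^2 is displaced to xif2 * m_q^2, which is one possible reading of the MHOU prescription discussed in this thread.

```python
# Hypothetical helper (names illustrative, not eko's API): displace each
# heavy-quark matching threshold m_q^2 by the factorization-scale ratio
# xif2, so matching happens at xif2 * m_q^2 instead of m_q^2.
def displaced_matching_scales(masses2, xif2):
    return [xif2 * m2 for m2 in masses2]
```

Whether the matchings should move with xif at all is exactly the choice debated below; the sketch only shows what "moving them" would mean numerically.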

  • it introduces a strange flip in the L definition (inside evolution_operator)

This is probably the thing that convinces me the least. Of course I understand that it is needed (I also found the same when playing with this), but we should find out why.

  • it does not implement the scheme B, yet

About this, I believe that once we understand scheme A (as maybe we do now) the generalization to scheme B should be trivial (i.e. the evolution should be exactly the same).

  • although I didn't touch the unit tests they are still passing, but one benchmark is failing ...
  • the meaning of ξF in Pegasus and LHA was flipped - I didn't run an APFEL test yet, to see whether we also gained a flip there ...

PS: @andreab1997 you see with enough juggling I can find another solution 🙃

@alecandido
Member

About this I am confused: I understood that fact_scale was used for the resummation scales, so I agree that in this case we should not use it, but why deprecate it?

I agree this should be a further degree of freedom: we will use it in one and only one way, so you could set it to None by default and encode the choice in the default behavior.
But at the same time, we can keep a value there that, if not None, runs over the orthogonal direction, and use it to reproduce the resummation-scale results (we are not desperately interested, but not doing it is one thing, not being able to is another).
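The None-by-default pattern suggested here can be sketched as follows. The signature is illustrative, not eko's real a_s: passing None picks the single default behavior, while an explicit value still works but emits a DeprecationWarning.

```python
import warnings

# Sketch of the deprecation pattern discussed above (the name and
# signature are illustrative, not eko's real API): fact_scale defaults
# to None, the default branch encodes the one choice we actually use,
# and an explicit value still works but warns.
def resolve_coupling_scale(scale_to, fact_scale=None):
    if fact_scale is None:
        return scale_to  # default: couple at the evolution scale itself
    warnings.warn(
        "fact_scale is deprecated; the factorization scale is handled "
        "inside the evolution operator",
        DeprecationWarning,
        stacklevel=2,
    )
    return fact_scale
```

This keeps the orthogonal direction reachable (pass a value, get the warning) without making it part of the default flow.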

This is probably the thing that convinces me the least. Of course I understand that it is needed (I also found the same when playing with this), but we should find out why.

That's exactly what I was referring to above: at this point, a numerical proof is not a proof at all; we should understand things analytically.

@andreab1997
Contributor

About this I am confused: I understood that fact_scale was used for the resummation scales, so I agree that in this case we should not use it, but why deprecate it?

I agree this should be a further degree of freedom: we will use it in one and only one way, so you could set it to None by default and encode the choice in the default behavior. But at the same time, we can keep a value there that, if not None, runs over the orthogonal direction, and use it to reproduce the resummation-scale results (we are not desperately interested, but not doing it is one thing, not being able to is another).

This is probably the thing that convinces me the least. Of course I understand that it is needed (I also found the same when playing with this), but we should find out why.

That's exactly what I was referring to above: at this point, a numerical proof is not a proof at all; we should understand things analytically.

Yes, about this the problem is that there is no analytical proof, because no one has ever explained in detail how all these SV schemes should work. Trying to understand from the MHOU paper, Felix and I concluded that this is what should be done, but:

  1. There are details that are never explained in the paper (for example, whether we need to move the matchings according to xif or not). These are most likely choices, but still we need to choose what to do.
  2. As a consequence of 1, it is not guaranteed that we can reproduce results from other codes, because they may do something different.

Resolved review threads:

  • benchmarks/apfel_bench.py
  • benchmarks/lha_paper_bench.py
  • src/eko/couplings.py
  • src/eko/evolution_operator/__init__.py (outdated)
  • src/ekomark/benchmark/external/LHA_utils.py (outdated)
  • src/ekomark/navigator/navigator.py (outdated)
@alecandido
Member

@andreab1997 we should work out what the space of consistent choices is (even if it is not spelled out in any paper, it is fixed by pQFT + renormalization + collinear subtraction). Once we know which alternatives are consistent, and we are left with choices, then we can choose, and we can test whether the other codes are consistent with any of them.

In particular, are LHA + PEGASUS + APFEL consistent among themselves? They should all implement the same scheme (i.e. exponentiated), so if they make consistent choices they should compute the same numbers.

This was referenced Mar 9, 2023
@giacomomagni giacomomagni linked an issue Mar 14, 2023 that may be closed by this pull request
@andreab1997
Contributor

Just to understand, what is missing here?

@andreab1997
Contributor

Is the last commit (a1f57b7) related to the change of sign in L=-np.log(self.xif2)?

@andreab1997
Contributor

Can I do something to help here? I believe we want this merged in order to release a new tag before #220 is merged, right?

@andreab1997 andreab1997 mentioned this pull request Mar 29, 2023
@felixhekhorn
Contributor Author

@andreab1997 in the end I rewrote the docs in 8b52917 🙃

@felixhekhorn
Contributor Author

This is ready to be merged.

Resolved review threads:

  • src/eko/scale_variations/__init__.py (outdated)
  • tests/eko/scale_variations/test_expanded.py
@felixhekhorn felixhekhorn merged commit 769a019 into master Mar 31, 2023
@felixhekhorn felixhekhorn deleted the fix-couplings-evol-sv-3 branch March 31, 2023 16:01
Successfully merging this pull request may close these issues:

  • LHA VFNS SV is broken
  • Remove renormalization scale mentions in the code

4 participants