
Fix objective.compile in Benchmarks #1483

Open · wants to merge 10 commits into base: master
Conversation

@YigitElma (Collaborator) commented Dec 20, 2024

Some benchmarks were using obj.compile and jac_scaled_error together, but obj.compile only compiles jac_scaled and compute_scaled. This caused some benchmarks to have very different min/max values, e.g.:

[screenshot: benchmark timings showing a large min/max spread]

With the change it is more consistent:

[screenshot: benchmark timings after the fix]
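The mismatch can be illustrated with a minimal JAX sketch (the two functions below are illustrative stand-ins, not DESC's actual objective code): jitting one function does not warm the compilation cache for a different function, so a benchmark that "compiles" one set of methods but then times another still pays compilation cost inside the timed region.

```python
# Sketch only: compiling one jitted function does not pre-compile another.
import jax
import jax.numpy as jnp

@jax.jit
def compute_scaled(x):
    return jnp.sum(x**2)          # stand-in for obj.compute_scaled

@jax.jit
def compute_scaled_error(x):
    return jnp.sum((x - 1.0)**2)  # stand-in for obj.compute_scaled_error

x = jnp.ones(3)

# "Compile" step: only compute_scaled gets traced and compiled here.
compute_scaled(x).block_until_ready()

# The first call to compute_scaled_error still triggers a fresh
# trace/compile, which is what skewed the benchmark min/max values.
compute_scaled_error(x).block_until_ready()
```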


@YigitElma YigitElma added easy Short and simple to code or review skip_changelog No need to update changelog on this PR labels Dec 20, 2024
github-actions bot commented Dec 20, 2024

| benchmark_name | dt(%) | dt(s) | t_new(s) | t_old(s) |
| -------------- | ----- | ----- | -------- | -------- |
| test_build_transform_fft_lowres | +0.15 +/- 1.90 | +7.76e-04 +/- 9.78e-03 | 5.17e-01 +/- 8.7e-03 | 5.16e-01 +/- 4.5e-03 |
| test_equilibrium_init_medres | +0.17 +/- 0.90 | +6.75e-03 +/- 3.65e-02 | 4.05e+00 +/- 2.7e-02 | 4.05e+00 +/- 2.4e-02 |
| test_equilibrium_init_highres | -0.33 +/- 0.77 | -1.75e-02 +/- 4.12e-02 | 5.30e+00 +/- 3.6e-02 | 5.32e+00 +/- 2.0e-02 |
| test_objective_compile_dshape_current | +0.24 +/- 4.47 | +9.77e-03 +/- 1.79e-01 | 4.02e+00 +/- 1.3e-01 | 4.01e+00 +/- 1.2e-01 |
| test_objective_compute_dshape_current | +0.04 +/- 1.25 | +2.20e-06 +/- 6.36e-05 | 5.11e-03 +/- 4.1e-05 | 5.11e-03 +/- 4.9e-05 |
| test_objective_jac_dshape_current | -1.04 +/- 7.97 | -4.39e-04 +/- 3.38e-03 | 4.19e-02 +/- 2.5e-03 | 4.24e-02 +/- 2.2e-03 |
| test_perturb_2 | -0.53 +/- 1.28 | -1.02e-01 +/- 2.47e-01 | 1.92e+01 +/- 2.3e-01 | 1.93e+01 +/- 7.9e-02 |
| test_proximal_freeb_jac | -1.78 +/- 1.27 | -1.32e-01 +/- 9.41e-02 | 7.29e+00 +/- 8.4e-02 | 7.42e+00 +/- 4.3e-02 |
| test_solve_fixed_iter | +0.10 +/- 2.04 | +3.19e-02 +/- 6.36e-01 | 3.12e+01 +/- 5.0e-01 | 3.12e+01 +/- 3.9e-01 |
| test_LinearConstraintProjection_build | -1.05 +/- 1.81 | -1.44e-01 +/- 2.48e-01 | 1.36e+01 +/- 1.7e-01 | 1.37e+01 +/- 1.8e-01 |
| test_build_transform_fft_midres | +0.22 +/- 4.97 | +1.37e-03 +/- 3.03e-02 | 6.11e-01 +/- 2.8e-02 | 6.10e-01 +/- 1.1e-02 |
| test_build_transform_fft_highres | -0.56 +/- 4.81 | -5.55e-03 +/- 4.73e-02 | 9.78e-01 +/- 4.7e-02 | 9.83e-01 +/- 7.5e-03 |
| test_equilibrium_init_lowres | +0.34 +/- 2.63 | +1.30e-02 +/- 1.02e-01 | 3.87e+00 +/- 8.9e-02 | 3.86e+00 +/- 5.0e-02 |
| test_objective_compile_atf | -0.75 +/- 3.10 | -6.22e-02 +/- 2.57e-01 | 8.24e+00 +/- 2.0e-01 | 8.30e+00 +/- 1.7e-01 |
| test_objective_compute_atf | -0.32 +/- 2.16 | -5.05e-05 +/- 3.41e-04 | 1.57e-02 +/- 2.6e-04 | 1.58e-02 +/- 2.3e-04 |
| test_objective_jac_atf | +0.87 +/- 3.74 | +1.70e-02 +/- 7.29e-02 | 1.97e+00 +/- 5.2e-02 | 1.95e+00 +/- 5.1e-02 |
| test_perturb_1 | +0.44 +/- 1.52 | +6.45e-02 +/- 2.24e-01 | 1.48e+01 +/- 1.3e-01 | 1.47e+01 +/- 1.8e-01 |
| test_proximal_jac_atf | +0.01 +/- 0.67 | +1.06e-03 +/- 5.54e-02 | 8.32e+00 +/- 2.7e-02 | 8.32e+00 +/- 4.8e-02 |
| test_proximal_freeb_compute | -0.41 +/- 1.04 | -8.30e-04 +/- 2.09e-03 | 2.00e-01 +/- 1.8e-03 | 2.01e-01 +/- 1.1e-03 |
| test_solve_fixed_iter_compiled | +1.06 +/- 3.10 | +2.15e-01 +/- 6.33e-01 | 2.06e+01 +/- 2.6e-01 | 2.04e+01 +/- 5.8e-01 |


codecov bot commented Dec 20, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 95.61%. Comparing base (34dbac0) to head (a4ab21e).

Additional details and impacted files
@@           Coverage Diff           @@
##           master    #1483   +/-   ##
=======================================
  Coverage   95.60%   95.61%           
=======================================
  Files          98       98           
  Lines       25417    25417           
=======================================
+ Hits        24300    24302    +2     
+ Misses       1117     1115    -2     
| Files with missing lines | Coverage Δ |
| ------------------------ | ---------- |
| desc/objectives/objective_funs.py | 94.75% <100.00%> (ø) |

... and 1 file with indirect coverage changes

review-notebook-app bot: Check out this pull request on ReviewNB to see visual diffs & provide feedback on Jupyter Notebooks.

@YigitElma YigitElma requested review from a team, rahulgaur104, f0uriest, ddudt, dpanici, kianorr, sinaatalay and unalmis and removed request for a team December 20, 2024 20:34
sinaatalay previously approved these changes Dec 20, 2024

@sinaatalay (Member) left a comment:
This PR looks good to me, although others should review it as I am looking at it with only my Python and GitHub knowledge, without DESC knowledge.

  • The workflows (benchmark.yaml, notebook_tests.yaml, and regression_tests.yaml) work the same way as their previous versions, except that a new step, Action Details, is added, which moves all the debugging-related commands to a separate step. It makes sense.
  • There is a change in the desc.objectives.objective_funs.ObjectiveFunction.compile method, which uses the compute_scaled_error method instead of compute_scaled in the lsq and all modes. This hasn't been explained in the PR or commit messages. Maybe @YigitElma should explain it, but I am sure it's okay.
  • The documentation is updated and seems okay.
  • Tests haven't been changed algorithmically (except for changing the rounds in benchmarks) but have been cleaned up. It looks better; I don't see any errors.

docs/performance_tips.rst (outdated; resolved)
@@ -131,67 +131,53 @@ def build():
     N = 25
     _ = Equilibrium(L=L, M=M, N=N)

-    benchmark.pedantic(build, setup=setup, iterations=1, rounds=50)
+    benchmark.pedantic(build, setup=setup, iterations=1, rounds=10)
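For context, a rough sketch of what the setup/iterations/rounds knobs of benchmark.pedantic mean (a simplified stand-in for pytest-benchmark's behavior, not its actual implementation): setup runs before every round but is excluded from the timing, and only the target is measured.

```python
# Simplified model of benchmark.pedantic(target, setup=..., iterations=..., rounds=...).
import time

def pedantic(target, setup, iterations=1, rounds=10):
    timings = []
    for _ in range(rounds):
        setup()                       # untimed per-round preparation
        start = time.perf_counter()
        for _ in range(iterations):
            target()                  # only this part is measured
        timings.append((time.perf_counter() - start) / iterations)
    return timings

# Count calls to verify setup and target each run once per round.
calls = {"setup": 0, "build": 0}
timings = pedantic(
    target=lambda: calls.__setitem__("build", calls["build"] + 1),
    setup=lambda: calls.__setitem__("setup", calls["setup"] + 1),
    iterations=1,
    rounds=10,
)
```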
Member:
Reducing the number of rounds will make the statistics noisier and may lead to more false positives.

YigitElma (Collaborator, Author):
I agree. For example, in the last benchmark run, this PR shows a speed improvement for perturb, but it shouldn't. We can decide on the exact number of rounds, but my intention is to balance the time spent on tests. Previously, these equilibrium initialization tests took more time than the fixed_iter_solve and perturb tests. Given that the benchmark workflow started to take around 50 minutes, I wanted to reduce the rounds from 50, which is a bit overkill.
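A back-of-the-envelope check of the noise trade-off both comments are weighing (assuming roughly independent rounds, so the standard error of the mean timing scales as σ/√n):

```python
# Dropping from 50 to 10 rounds inflates the standard error of the
# reported mean by sqrt(50/10) ≈ 2.24x, which is why fewer rounds
# make spurious dt(%) swings more likely.
import math

def stderr_ratio(rounds_old, rounds_new):
    """Factor by which the standard error grows when rounds shrink."""
    return math.sqrt(rounds_old / rounds_new)

ratio = stderr_ratio(50, 10)
```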



 @pytest.mark.slow
 @pytest.mark.benchmark
 def test_proximal_freeb_compute(benchmark):
     """Benchmark computing free boundary objective with proximal constraint."""
-    jax.clear_caches()
Member:
why remove this?

YigitElma (Collaborator, Author):
This doesn't make much difference (I can re-add it), but technically only run is benchmarked, so clearing the cache here doesn't serve much purpose.
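A small sketch of the point being made (hypothetical function names; assumes jax is installed): if the jitted function is warmed up in the untimed setup step, the timed run never sees a cold cache, so a jax.clear_caches() at the top of the test has little effect on the measured time.

```python
# Sketch: clear_caches() outside the timed region is largely inert
# when setup re-warms the compilation cache before run is timed.
import jax
import jax.numpy as jnp

@jax.jit
def run_fn(x):               # hypothetical stand-in for the benchmarked compute
    return jnp.sin(x).sum()

jax.clear_caches()           # cold cache at the top of the "test"

def setup():
    # Warm-up call: compilation happens here, in untimed setup.
    run_fn(jnp.ones(4)).block_until_ready()

def run():
    # Timed call: the function is already compiled.
    return run_fn(jnp.ones(4)).block_until_ready()

setup()
result = run()
```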


-def setup():
+def run():
Member:
Moving everything to the run function changes what it's actually profiling; this now includes a bunch of other stuff besides building the linear constraints. Is that what we want?

YigitElma (Collaborator, Author):
The extra stuff is just building the individual constraints and objectives, right? Previously, this was effectively just benchmarking factorize_linear_constraints. Do we usually pass built constraints to LinearConstraintProjection? If so, I can revert it.

Labels
easy Short and simple to code or review skip_changelog No need to update changelog on this PR
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Export compiled objectives for common equilibrium resolutions?
3 participants