
BigFloat improvements #136

Open · nsajko wants to merge 3 commits into master
Conversation

@nsajko (Contributor) commented Apr 16, 2023

See the commit messages of the two commits

@nsajko (Contributor, Author) commented Apr 16, 2023

The errors seem to be related to MOI and unrelated to these changes.

@odow (Contributor) commented Apr 20, 2023

You can comment out the failing tests for now: #137

@nsajko (Contributor, Author) commented Apr 20, 2023

Locally all tests pass with the changes in #137

nsajko added 3 commits April 20, 2023 21:48
Use the `BigFloat` dot product from MutableArithmetics in HSD code.

Helps with the performance of `BigFloat` arithmetic. The change
shouldn't affect other arithmetics, and it's written so that it would be
easy to extend to another mutable arithmetic besides `BigFloat`, if
necessary and if that type supports MutableArithmetics.

Apart from improving performance, this change could possibly also
benefit LP problems with numerical issues (when using `BigFloat`),
because the MA dot product uses a summation algorithm that's more
accurate than naive summation.

A performance experiment is presented in the commit message of the
following commit. The conclusion is that this commit improves run time
only slightly, and the same goes for allocations.
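For orientation, here is a minimal sketch of the kind of element-type dispatch this commit introduces. The helper name below is made up for illustration, and `MA.operate(LinearAlgebra.dot, x, y)` is assumed to be the MutableArithmetics entry point for the buffered dot product; the actual code lives in `src/IPM/HSD/dot_for_mutable.jl`.

```julia
import LinearAlgebra
import MutableArithmetics
const MA = MutableArithmetics

# Generic fallback: other arithmetics keep the stock dot product,
# so they are unaffected by the change.
mutable_dot(x::AbstractVector, y::AbstractVector) = LinearAlgebra.dot(x, y)

# BigFloat path: let MutableArithmetics accumulate the result in place,
# avoiding a fresh BigFloat allocation for every partial product.
# Supporting another mutable arithmetic would just mean adding a method
# like this one for that type.
mutable_dot(x::AbstractVector{BigFloat}, y::AbstractVector{BigFloat}) =
    MA.operate(LinearAlgebra.dot, x, y)
```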
For a logical vector `l`, we now do essentially `dot(a[l], b[l])`,
instead of `dot(a .* l, b .* l)` as before.

This is as suggested here:
ds4dm#122 (comment)
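As a rough illustration of the change (the variable names are illustrative, not taken from the Tulip source):

```julia
using LinearAlgebra

a = rand(BigFloat, 6)
b = rand(BigFloat, 6)
l = Bool[1, 0, 1, 1, 0, 1]  # logical mask over the entries of interest

# Before: builds two full-length BigFloat vectors, masked-out entries
# included, then sums over all six products (four useful, two zero).
dot(a .* l, b .* l)

# After: slices out only the selected entries, so fewer BigFloat
# allocations and no multiplications by the mask; the result is the same.
dot(a[l], b[l])
```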

Helps with the performance of `BigFloat` arithmetic.

A Julia script and a Unix shell script were used to conduct an
experiment assessing the impact of this commit and the previous one on
performance. The scripts and the resulting CSV file follow.

The benchmark experiment is conducted, and the CSV produced, by removing
sources of system load on the machine and then running the shell script
four times: once with the `init_csv` command and three times with the
`run` command, each time with a different commit of the Tulip git
repository checked out.

Unix (Bourne) shell script:
```sh
#!/bin/sh

set -u

command=$1

julia_opts='-O3 --min-optlevel=3 --heap-size-hint=5G --depwarn=error --warn-overwrite=yes'

script=tulip_benchmark.jl

case "$command" in

init_csv)
  # Print the CSV header row.
  printf '%s,%s,%s,%s\n' 'Tulip version' estimator 'measurement type' value
  ;;

run)
  # Benchmark the currently checked-out Tulip commit, labelling the rows
  # with the version string given as the second argument.
  tulip_version=$2
  $PATH_TO_JULIA_BIN $julia_opts "$script" "$tulip_version"
  ;;

*)
  # Unknown command: report the error on stderr and fail.
  printf '%s\n' error >&2
  exit 1
  ;;

esac
```
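For example, assuming the shell script above is saved as `benchmark.sh` (that filename, the Julia path, and the output filename are illustrative), the CSV could be assembled roughly like this:

```sh
export PATH_TO_JULIA_BIN="$HOME/julia/bin/julia"

# Write the header row once.
sh benchmark.sh init_csv > results.csv

# Then, for each of the three Tulip commits under comparison, check out
# the commit in the Tulip repo and append its rows, e.g.:
sh benchmark.sh run 'v0.9.5' >> results.csv
```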

Julia script:
```julia
const benchmark_seconds = 500
const polynomial_degree = 20
setprecision(BigFloat, 12 * 2^7)
using BenchmarkTools
import
  FindMinimaxPolynomial,  # v0.2.3
  Tulip,
  MathOptInterface
const FMP = FindMinimaxPolynomial
const MMX = FMP.Minimax
const PPTI = FMP.PolynomialPassingThroughIntervals
const NE = FMP.NumericalErrorTypes
const to_poly = FMP.ToSparsePolynomial.to_sparse_polynomial
const mmx = MMX.minimax_polynomial
const error_type_relative = NE.RelativeError()
const MOI = MathOptInterface
const itv_max_err = FMP.ApproximateInfinityNorm.interval_max_err

function make_lp()
  lp = Tulip.Optimizer{BigFloat}()

  # Remove iteration limit just in case
  MOI.set(lp, MOI.RawOptimizerAttribute("IPM_IterationsLimit"), 2000)

  # Disable presolve, speeds things up
  #MOI.set(lp, MOI.RawOptimizerAttribute("Presolve_Level"), 0)

  lp
end

const itv = (-big"2.0"^-3, big"45.0")

odd_monomials(n::Int) = 1:2:n

sind_mmx(n::Int) =
  mmx(
    make_lp,
    sind,
    (itv,),
    odd_monomials(n),

    # Small factor to have less variance in the results
    initial_perturb_factor = 1//(2^20),

    # We're benchmarking LP, so disable other stuff
    worst_segments_density = 5,
    worst_segments_breadth_limit = 2,
    worst_segments_depth_ratio = 1/2,

    # Exit right after the first step
    exit_condition = true,
  )

function report(estimator; benchmark, benchmark_name)
  b = estimator(benchmark)
  println("$benchmark_name,$estimator,time,$(b.time)")
  println("$benchmark_name,$estimator,gctime,$(b.gctime)")
  println("$benchmark_name,$estimator,memory,$(b.memory)")
  println("$benchmark_name,$estimator,allocs,$(b.allocs)")
end

function report(;benchmark, benchmark_name)
  for quantile in (minimum, median, maximum)
    report(
      quantile,
      benchmark = benchmark,
      benchmark_name = benchmark_name,
    )
  end
end

report(
  benchmark = (@benchmark sind_mmx(polynomial_degree) seconds=benchmark_seconds),
  benchmark_name = first(ARGS),
)
```

CSV results:
```csv
Tulip version,estimator,measurement type,value
v0.9.5,minimum,time,2.63759609e8
v0.9.5,minimum,gctime,3.4505713e7
v0.9.5,minimum,memory,495299296
v0.9.5,minimum,allocs,3629874
v0.9.5,median,time,4.2594161e8
v0.9.5,median,gctime,1.3250321e8
v0.9.5,median,memory,495299296
v0.9.5,median,allocs,3629874
v0.9.5,maximum,time,4.55935021e8
v0.9.5,maximum,gctime,1.40426286e8
v0.9.5,maximum,memory,495299296
v0.9.5,maximum,allocs,3629874
MutableArithmetics for IPM/HSD,minimum,time,2.57993117e8
MutableArithmetics for IPM/HSD,minimum,gctime,2.9403466e7
MutableArithmetics for IPM/HSD,minimum,memory,442052896
MutableArithmetics for IPM/HSD,minimum,allocs,3238720
MutableArithmetics for IPM/HSD,median,time,4.22323273e8
MutableArithmetics for IPM/HSD,median,gctime,1.282365305e8
MutableArithmetics for IPM/HSD,median,memory,442052896
MutableArithmetics for IPM/HSD,median,allocs,3238720
MutableArithmetics for IPM/HSD,maximum,time,4.56330849e8
MutableArithmetics for IPM/HSD,maximum,gctime,1.57061172e8
MutableArithmetics for IPM/HSD,maximum,memory,442052896
MutableArithmetics for IPM/HSD,maximum,allocs,3238720
IPM/HSD: use logical slicing ...,minimum,time,2.40996648e8
IPM/HSD: use logical slicing ...,minimum,gctime,2.5335783e7
IPM/HSD: use logical slicing ...,minimum,memory,386588512
IPM/HSD: use logical slicing ...,minimum,allocs,2833356
IPM/HSD: use logical slicing ...,median,time,3.76039574e8
IPM/HSD: use logical slicing ...,median,gctime,1.06930941e8
IPM/HSD: use logical slicing ...,median,memory,386588512
IPM/HSD: use logical slicing ...,median,allocs,2833356
IPM/HSD: use logical slicing ...,maximum,time,4.00347376e8
IPM/HSD: use logical slicing ...,maximum,gctime,1.27260987e8
IPM/HSD: use logical slicing ...,maximum,memory,386588512
IPM/HSD: use logical slicing ...,maximum,allocs,2833356
```

Fixes ds4dm#122
@nsajko (Contributor, Author) commented Apr 20, 2023

Added the compat entry for MA to Project.toml and fixed a typo in a commit message.

@codecov-commenter commented Apr 20, 2023

Codecov Report

Merging #136 (9bbe799) into master (a0032b5) will increase coverage by 0.16%.
The diff coverage is 100.00%.

```diff
@@            Coverage Diff             @@
##           master     #136      +/-   ##
==========================================
+ Coverage   89.02%   89.18%   +0.16%     
==========================================
  Files          43       44       +1     
  Lines        2751     2783      +32     
==========================================
+ Hits         2449     2482      +33     
+ Misses        302      301       -1     
```

| Impacted Files | Coverage Δ |
| --- | --- |
| src/IPM/HSD/HSD.jl | 87.50% <100.00%> (+0.19%) ⬆️ |
| src/IPM/HSD/dot_for_mutable.jl | 100.00% <100.00%> (ø) |
| src/IPM/HSD/step.jl | 94.80% <100.00%> (+0.06%) ⬆️ |
| src/Tulip.jl | 100.00% <100.00%> (ø) |

... and 1 file with indirect coverage changes

