Methodology for our comparisons vs. Groute #31

Open
jowens opened this issue Apr 3, 2017 · 4 comments

@jowens (Contributor) commented Apr 3, 2017

As we noted in our email communications, we think the fairest comparison between two graph frameworks is one made against the best-performing version of each that was available at the time the comparison was made. For Gunrock today, that would be the 0.4 release (10 November 2016). We recognize this version was not available when the Groute paper was submitted (although it would have been appropriate for the camera-ready version), so we ran comparisons against a Gunrock version dated July 11, 2016 (6eb6db5d09620701bf127c5acb13143f4d8de394). Yuechao notes that to build this version, we "need to comment out the lp related includes in tests/pr/test_pr.cu, line 33 to line 35, otherwise the build will fail".

In our group, we generally run primitives multiple times within a single binary launch and report the average time (Graph500 does this, for instance). We think the most important aspect is simply to run it more than once to mitigate any startup effects. In our comparisons, we use --iteration-num=32.
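As a rough sketch of what that measurement scheme looks like (the primitive call below is a stand-in, not Gunrock's actual API):

```python
import time

def run_primitive(graph, source):
    # Stand-in for one run of a primitive (e.g. BFS from `source`);
    # in the real harness this would launch the GPU kernel(s).
    time.sleep(0.001)

def average_runtime(graph, source, iterations=32):
    # Run the primitive `iterations` times within one binary launch and
    # report the average. Running more than once mitigates one-time
    # startup effects that would dominate a single measurement.
    elapsed = []
    for _ in range(iterations):
        start = time.perf_counter()
        run_primitive(graph, source)
        elapsed.append(time.perf_counter() - start)
    return sum(elapsed) / len(elapsed)

print(average_runtime(graph=None, source=0))
```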

By default, we use a source vertex of 0, and depending on the test, we have used both 0-source and random-source in our publications. Getting good performance with a randomized source is harder, but it avoids overtuning. In our comparisons, we use source 0, as Groute does.

@jowens (Contributor, Author) commented Apr 3, 2017

This methodology applies to the performance measurements we summarize in #32 and #33.

@sree314 commented Apr 6, 2017

A comment on your methodology (which is, of course, different from what we do):

"we generally run primitives multiple times within a single binary launch and report the average time"

I generally avoid this because it is a common cause of systematic errors in measurement.

Now, running multiple times and taking the average is an estimation procedure for the population mean. This procedure assumes the samples are i.i.d.

Here are the runtimes of the individual samples from the K80+METIS, non-idempotent, non-DO (https://github.com/gunrock/io/blob/master/gunrock-output/20170303/bfs_k80x2_metis_soc-LiveJournal1.txt) data in #32, in order of iterations:

61.02, 46.54, 46.41, 43.97, 42.10, 42.20, 42.07, 40.22, 38.91, 38.85, 38.92, 37.24, 36.25, 36.10, 36.08, 36.05, 35.41, 35.07, 35.03, 35.06, 34.77, 34.43, 34.47, 34.42, 34.41, 34.49, 34.30, 34.40, 33.80, 34.49, 34.49, 34.46

The trend of clearly decreasing runtimes for what should be random samples from the same population is worrying. You'll see this pattern in all of your data (it's more evident in your multi-GPU runs).

Is your procedure estimating the population mean correctly? I.e. is the average you compute using this procedure comparable to the population mean?
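To make the concern concrete with the numbers above (a back-of-the-envelope check, not a formal statistical test):

```python
# Per-iteration BFS timings (ms) quoted above, in iteration order.
samples = [61.02, 46.54, 46.41, 43.97, 42.10, 42.20, 42.07, 40.22,
           38.91, 38.85, 38.92, 37.24, 36.25, 36.10, 36.08, 36.05,
           35.41, 35.07, 35.03, 35.06, 34.77, 34.43, 34.47, 34.42,
           34.41, 34.49, 34.30, 34.40, 33.80, 34.49, 34.49, 34.46]

n = len(samples)
mean_all = sum(samples) / n
first_half = sum(samples[:n // 2]) / (n // 2)
second_half = sum(samples[n // 2:]) / (n - n // 2)

# Least-squares slope of runtime vs. iteration index; for i.i.d. samples
# drawn from a fixed population this should be indistinguishable from zero.
x_bar = (n - 1) / 2
slope = (sum((i - x_bar) * (y - mean_all) for i, y in enumerate(samples))
         / sum((i - x_bar) ** 2 for i in range(n)))

print(f"mean of all 32 runs   : {mean_all:.2f} ms")
print(f"mean of first 16 runs : {first_half:.2f} ms")
print(f"mean of last 16 runs  : {second_half:.2f} ms")
print(f"slope per iteration   : {slope:+.3f} ms")
```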

@sgpyc (Member) commented Apr 10, 2017

Thanks for pointing that out; I do see such variance in the running times. Further investigation suggests it is more a matter of variance (possibly with a small decreasing trend) than a steady decrease.

The attached spreadsheet (20170410.xlsx) shows the per-iteration running time, normalized against the average running time excluding the first run, for different {GPU generation, number of GPUs, primitive, graph, partitioner} combinations (a sketch of this normalization follows the list below). From what I observed:

  1. The timings do go up and down;
  2. Some experiments show a decreasing running average, but the decrease is mostly bounded within -10%;
  3. Some running conditions (say, {single GPU, K40 or P100, PR, road_usa}) may give more stable running times;
  4. The partitioning method does not appear to change the trend;
  5. It would almost certainly be incorrect to take only the first run, as it reflects the warmup effect (on average, the first run takes 37% longer than the rest);
  6. The decreasing trend is more likely to appear on {K80, M60} than on {K40, P100}.
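For reference, here is roughly what that normalization computes (a minimal sketch; `timings` is any list of per-iteration running times):

```python
def normalize(timings):
    # Normalize per-iteration timings against the average that excludes
    # the first run, and report how much longer the first run took than
    # that steady-state average (e.g. 0.37 means 37% longer).
    steady_avg = sum(timings[1:]) / (len(timings) - 1)
    normalized = [t / steady_avg for t in timings]
    first_run_overhead = timings[0] / steady_avg - 1.0
    return normalized, first_run_overhead
```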

I still don't know the actual condition(s) and reason(s) behind this trend; here are my guesses:

  1. the power management of GPUs. The GPU hardware + driver may dynamically change its clock speed, both to protect the chip from overheating and to get better performance when possible. The fact that the dual-chip GPUs (K80 & M60) are far more likely than the single-chip GPUs (K40 and P100) to show the decreasing trend makes me think this may be the main reason;

  2. cache effects on the GPU.

  3. CPU-side optimizations, especially thread binding or memory binding to a die / core. The experiments are all run on dual-CPU machines, and the running time may decrease once data are cached on or allocated to the CPU closer to the GPU handling a specific workload.

Currently I think 2) and 3) are less likely than 1), but all of them point to lower-level optimizations. From what I can tell, it seems far less likely that systematic errors are the cause.

@sree314 commented Apr 11, 2017

Hi sgpyc,

Things like power management, cache effects, and optimizations are systematic errors. It may help to think of them as "systematic bias".

In general, if the behaviour of later runs is affected by earlier runs, then your observations are not independent of each other, and their average is not a good estimator of the population mean.

I would advise figuring out exactly why the trend exists and controlling for it (for example, using nvcc -gencode to avoid JIT overhead, using nvidia-smi to disable power management, or pinning threads to CPUs manually).

If you do this, the average computed by running (say) BFS n times from the shell should not be significantly different from running n repetitions of BFS from within breadth_first_search.
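One way to check that (a sketch only; the binary path, graph arguments, and output format below are hypothetical, and only --iteration-num comes from this thread):

```python
import re
import statistics
import subprocess

BINARY = "./bin/test_bfs"                            # hypothetical binary path
GRAPH_ARGS = ["market", "soc-LiveJournal1.mtx", "--src=0"]   # hypothetical arguments
TIME_RE = re.compile(r"elapsed:\s*([0-9.]+)\s*ms")   # hypothetical output format

def parse_times(output):
    # Extract all per-iteration elapsed times (ms) printed by the binary.
    return [float(m) for m in TIME_RE.findall(output)]

# (a) n separate launches, one iteration each: every run pays startup costs,
#     but the samples are closer to independent.
per_launch = []
for _ in range(32):
    out = subprocess.run([BINARY, *GRAPH_ARGS, "--iteration-num=1"],
                         capture_output=True, text=True, check=True).stdout
    per_launch.append(parse_times(out)[0])

# (b) one launch with 32 internal repetitions: once warmup, clock boosting,
#     thread placement, etc. are controlled for, this average should not
#     differ significantly from (a).
out = subprocess.run([BINARY, *GRAPH_ARGS, "--iteration-num=32"],
                     capture_output=True, text=True, check=True).stdout
in_process = parse_times(out)

print("mean, 32 separate launches:", statistics.mean(per_launch))
print("mean, 32 in-process runs  :", statistics.mean(in_process))
```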

Hope this helps!
