
GTX1080 CUDA issues #8

Closed
psteinb opened this issue Jul 25, 2016 · 12 comments

@psteinb

psteinb commented Jul 25, 2016

I wanted to benchmark a GTX 1080 with CUDA 8.0.27 under CentOS 7.2.1511. The gpu-stream-cuda app behaves normally with the default parameters.
Strangely enough, though, when I provide more than the default number of elements in the array:

$ gpu-stream-cuda --arraysize 67108864

the copy kernel dispatch throws CUDA API error 0xb, which is "invalid argument". I tracked the problem down to [this line of code](https://github.com/UoB-HPC/GPU-STREAM/blob/master/CUDAStream.cu#L112):

template <class T>
void CUDAStream<T>::copy()
{
  copy_kernel<<<array_size/TBSIZE, TBSIZE>>>(d_a, d_c);
  check_error();
  cudaDeviceSynchronize();
  check_error();
}

Strangely enough, if I look at the value of array_size/TBSIZE, it is in a plausible range: array_size/TBSIZE = 65536.

Does anyone have an idea where this is coming from? (As this is a release-candidate CUDA, I see no problem forwarding this issue to NVIDIA.)

@psteinb
Author

psteinb commented Jul 25, 2016

This happens with CUDA 7.5 on a K80 as well.

@michaelboulton

I think that the maximum number of blocks (the grid dimension) is 65535, not 65536. Might that be the issue?

@psteinb
Author

psteinb commented Jul 25, 2016

Are you sure? deviceQuery from the CUDA 8 SDK reports something else:

Device 0: "GeForce GTX 1080"
  CUDA Driver Version / Runtime Version          8.0 / 8.0
  CUDA Capability Major/Minor version number:    6.1
  Total amount of global memory:                 8113 MBytes (8507555840 bytes)
  (20) Multiprocessors, (128) CUDA Cores/MP:     2560 CUDA Cores
  GPU Max Clock rate:                            1848 MHz (1.85 GHz)
  Memory Clock rate:                             5005 Mhz
  Memory Bus Width:                              256-bit
  L2 Cache Size:                                 2097152 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 3 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

To be more precise:

Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)

@michaelboulton

You're right! I think I did so much programming on compute 2.x that I forgot they ever changed it...
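For reference, the arithmetic behind the failure can be sketched directly. Assuming GPU-STREAM's TBSIZE of 1024 threads per block (taken from the repository source), 67108864 elements need a grid x-dimension of 65536 blocks: one more than the compute 2.x limit of 65535, but far below the 2^31 - 1 allowed from compute capability 3.0 onwards. A quick sketch of the check:

```python
# Mirrors the launch in CUDAStream.cu:
#   copy_kernel<<<array_size/TBSIZE, TBSIZE>>>(d_a, d_c)
TBSIZE = 1024                      # threads per block used by GPU-STREAM
array_size = 67108864              # --arraysize from the failing run

blocks = array_size // TBSIZE      # grid x-dimension of the launch

MAX_GRID_X_SM2X = 65535            # max grid.x for compute capability 2.x
MAX_GRID_X_SM3X = 2**31 - 1        # max grid.x for compute capability >= 3.0

print(blocks)                      # 65536
print(blocks <= MAX_GRID_X_SM2X)   # False -> "invalid argument" on sm_2x code
print(blocks <= MAX_GRID_X_SM3X)   # True  -> fine when built for sm_60/sm_61
```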

@psteinb
Author

psteinb commented Jul 26, 2016

I took the vectorAdd example from the CUDA 8 SDK and changed the numbers to match those cited above, see here. The code runs fine on the 1080!

@psteinb
Author

psteinb commented Jul 26, 2016

I got it ... the problem is that you do not generate PTX/SASS for the architecture at hand but use the default nvcc options. If I inject architecture-specific options into the nvcc compilation step, gpu-stream-cuda runs through fine:

$ cmake -DCUDA_NVCC_FLAGS="-gencode arch=compute_20,code=sm_20 -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_60,code=compute_60"  ..
...
$ ./gpu-stream-cuda --arraysize 67108864
GPU-STREAM
Version: 2.0
Implementation: CUDA
Running kernels 100 times
Precision: double
Array size: 536.9 MB (=0.5 GB)
Total size: 1610.6 MB (=1.6 GB)
Using CUDA device GeForce GTX 1080
Driver: 8000
Function    MBytes/sec  Min (sec)   Max         Average     
Copy        237328.459  0.00452     0.00455     0.00453     
Mul         236882.317  0.00453     0.00455     0.00454     
Add         243057.902  0.00663     0.00666     0.00664     
Triad       243114.255  0.00662     0.00666     0.00664
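As a sanity check on the sizes reported above: 67108864 double-precision elements occupy 67108864 × 8 B ≈ 536.9 MB per array, and the three STREAM arrays together give the 1610.6 MB total. In sketch form:

```python
array_size = 67108864          # elements per array (--arraysize)
bytes_per_elem = 8             # double precision, as reported by the run

one_array_mb = array_size * bytes_per_elem / 1e6
total_mb = 3 * one_array_mb    # STREAM uses three arrays: a, b, c

print(round(one_array_mb, 1))  # 536.9
print(round(total_mb, 1))      # 1610.6
```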

FWIW, this issue can be closed.

@tomdeakin
Contributor

Thanks for highlighting this; it's useful to know about this behaviour. Closing, as it's not an issue with the code itself.

@psteinb
Author

psteinb commented Aug 1, 2016

Are you guys planning to include the respective nvcc flags in the CMake file, or to document this on the wiki/landing page?

@tomdeakin
Contributor

This is probably a change we won't add to the CMake file, because it would tune the build for specific hardware, and we want the "vanilla" code to be as neutral as possible. We are looking to add tuned versions of some of the models to the repo, and we would put this change into a tuned CUDA version.

I'll reopen the issue and mark as won't fix so the bug doesn't get forgotten. This is the same as we did with #1.

@tomdeakin tomdeakin reopened this Aug 1, 2016
@tomdeakin tomdeakin changed the title 64M elements break kernel launch GTX1080 issues Aug 1, 2016
@tomdeakin tomdeakin changed the title GTX1080 issues GTX1080 CUDA issues Aug 1, 2016
@psteinb
Author

psteinb commented Aug 1, 2016 via email

@tomdeakin
Contributor

The motivation behind this code is to explore the performance of simple STREAM on a variety of architectures across a variety of programming models, and we're focusing on 'out of the box' performance. We modelled it on STREAM itself, which doesn't do any special tuning. The original STREAM benchmark provides the ability for tuned versions to be added, which is something we are planning to do. Our results show that on the GPUs no special tuning is required to get close to theoretical peak performance.

It is surprising that nvcc has this behaviour by default. You normally don't need to run at full capacity to see the best bandwidth on the GPUs though.

@tomdeakin
Contributor

The build system has been revised in v3.1. You can now pass the architecture flag easily:

make -f CUDA.make EXTRA_FLAGS="-arch=sm_61"
