Lecture 8

SMs, Cores and Warps

GPU specs:

<<  ./deviceQuery
>>  ./DeviceQuery/Debug/deviceQuery Starting...


Device 0: "GeForce RTX 2070"
  CUDA Driver Version / Runtime Version          10.1 / 10.1
  CUDA Capability Major/Minor version number:    7.5
  Total amount of global memory:                 7982 MBytes (8370061312 bytes)
  (36) Multiprocessors, ( 64) CUDA Cores/MP:     2304 CUDA Cores
  GPU Max Clock rate:                            1710 MHz (1.71 GHz)
  Memory Clock rate:                             7001 Mhz
  Memory Bus Width:                              256-bit
  L2 Cache Size:                                 4194304 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  1024
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 3 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 11 / 0
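
The same fields can be queried programmatically. A minimal sketch (assuming at least one visible CUDA device) using cudaGetDeviceProperties:

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Query the properties of device 0 (assumed present).
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    printf("Device name:            %s\n", prop.name);
    printf("Compute capability:     %d.%d\n", prop.major, prop.minor);
    printf("Multiprocessors (SMs):  %d\n", prop.multiProcessorCount);
    printf("Warp size:              %d\n", prop.warpSize);
    printf("Max threads per block:  %d\n", prop.maxThreadsPerBlock);
    printf("Shared memory / block:  %zu bytes\n", prop.sharedMemPerBlock);
    return 0;
}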

Note:

Key details from printout above:

When a kernel is invoked with a configuration of Grid, Blocks and Threads
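
As a reminder of the launch syntax, a hypothetical kernel myKernel covering a 1024 x 1024 problem might be configured as in the sketch below (the kernel, its arguments and the sizes are illustrative only):

#include <cuda_runtime.h>

// Illustrative kernel: each thread doubles one element of a width x height array.
__global__ void myKernel(float *data, int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height)
        data[y * width + x] *= 2.0f;
}

void launchExample(float *d_data) {
    // 2D grid of 2D blocks: 16 x 16 = 256 threads per block,
    // and enough blocks to cover the whole 1024 x 1024 problem.
    dim3 threadsPerBlock(16, 16);
    dim3 blocksPerGrid((1024 + 15) / 16, (1024 + 15) / 16);
    myKernel<<<blocksPerGrid, threadsPerBlock>>>(d_data, 1024, 1024);
}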

Synchronisation

More on Warps

Warp Execution and divergence

A whole warp is handled by a single controller. Consider what happens if some threads A in a warp take one branch (1) of an if statement and other threads B take the other branch (2).
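
Because all 32 threads of a warp share one instruction stream, the hardware executes the two paths one after the other, masking off the threads that did not take the current path, so the warp pays for both branches. A minimal sketch of such a divergent kernel (the kernel and its data are illustrative):

__global__ void divergentKernel(float *data) {
    unsigned int t = threadIdx.x;
    if (t % 2 == 0) {            // branch 1: even-numbered threads (A)
        data[t] = data[t] * 2.0f;
    } else {                     // branch 2: odd-numbered threads (B)
        data[t] = data[t] + 1.0f;
    }
    // Even and odd threads sit in the same warp, so every warp executes
    // both paths in turn, with half of its threads idle each time.
}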

Loops where different threads in a warp execute different numbers of iterations also cause divergence: the warp keeps issuing iterations until the thread needing the most iterations has finished, so the threads that conceptually need the fewest iterations take just as long as the thread needing the most (they are simply masked off for the extra passes).
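
A sketch of such a loop (illustrative names): each thread iterates a different number of times, so the warp runs for as many iterations as its slowest thread.

__global__ void loopDivergence(float *data, const int *counts) {
    unsigned int t = threadIdx.x;
    // counts[t] differs between threads, so threads in the same warp leave
    // the loop at different times; the warp keeps issuing iterations until
    // the thread with the largest count is done.
    for (int i = 0; i < counts[t]; ++i) {
        data[t] += 1.0f;
    }
}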

Divergence and Reduction

Now we will look at the consequences of divergence for the process of reducing a set of numbers to one value, e.g. $\sum_{x=1}^{1024} x$.

Diagrammatically, this would look like:

This could be implemented as follows:

__shared__ float partialSum[];             // shared-memory buffer, assumed already loaded with this block's values
...
uint t = threadIdx.x;
for (uint stride = 1; stride < blockDim.x; stride *= 2){
  __syncthreads();                         // make the previous step's writes visible to all threads
  if (t % (2 * stride) == 0){              // active threads are scattered across the block,
    partialSum[t] += partialSum[t+stride]; // so almost every warp diverges at every step
  }
}

Diagrammatically, an alternative, more efficient method (which keeps the active threads contiguous, so that whole warps become idle together) could look like:

with the corresponding code looking like:

__shared__ float partialSum[];             // shared-memory buffer, assumed already loaded with this block's values
...
uint t = threadIdx.x;
for (uint stride = blockDim.x/2; stride > 0; stride >>= 1){
  __syncthreads();                         // make the previous step's writes visible to all threads
  if (t < stride){                         // active threads are contiguous, so whole warps are either
    partialSum[t] += partialSum[t+stride]; // fully active or fully idle and little divergence occurs
  }
}
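
Putting the improved pattern into a complete kernel, including the load into shared memory and one output per block, might look roughly like the sketch below (reduceKernel, input and blockSums are illustrative names; blockDim.x is assumed to be a power of two):

// Sketch only: one partial sum per block; a second pass (or the host) adds the block sums.
__global__ void reduceKernel(const float *input, float *blockSums, unsigned int n)
{
    extern __shared__ float partialSum[];            // sized at launch: blockDim.x floats
    unsigned int t = threadIdx.x;
    unsigned int i = blockIdx.x * blockDim.x + t;

    partialSum[t] = (i < n) ? input[i] : 0.0f;       // load this block's slice, padding with zeros

    for (unsigned int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        __syncthreads();
        if (t < stride)
            partialSum[t] += partialSum[t + stride];
    }

    if (t == 0)                                      // thread 0 now holds the block's total
        blockSums[blockIdx.x] = partialSum[0];
}

// Possible launch: 1024 threads per block and a matching shared-memory size.
// reduceKernel<<<numBlocks, 1024, 1024 * sizeof(float)>>>(d_input, d_blockSums, n);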

Memory Bandwidth as a Performance Barrier

Applying this to our parallel reduction:
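
A rough, back-of-the-envelope sketch using the figures from the deviceQuery output above (assuming the reported 7001 MHz memory clock is doubled by the DDR transfer, and counting a fused multiply-add as two operations):

\text{peak bandwidth} \approx 7001\times10^{6} \times 2 \times \tfrac{256}{8}\ \text{bytes/s} \approx 448\ \text{GB/s}

\text{peak arithmetic} \approx 2304 \times 1.71\times10^{9} \times 2 \approx 7.9\ \text{TFLOP/s}

Reducing N floats performs roughly N-1 additions but must stream about 4N bytes in from global memory, i.e. around 0.25 FLOP per byte, so the achievable rate is capped at roughly 448 GB/s x 0.25 FLOP/B ≈ 112 GFLOP/s, far below the arithmetic peak: even a reduction with no divergence at all is limited by memory bandwidth rather than by the CUDA cores.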