
cuda - Do contiguous threads need to access contiguous data - Stack Overflow


CUDA code:

__global__ void matrixAdd1(int *d_a, int *d_b, int *d_c, int width, int height) {
  // Calculate the row and column indices for this thread
  // RECOMMENDED VERSION:
  int row = blockIdx.y * blockDim.y + threadIdx.y;
  int col = blockIdx.x * blockDim.x + threadIdx.x;
  // ALTERNATIVE (commented out so the kernel compiles; swap in these
  // two lines to test the other mapping):
  // int col = blockIdx.y * blockDim.y + threadIdx.y;
  // int row = blockIdx.x * blockDim.x + threadIdx.x;

  // Make sure we don't go out of bounds
  if (row < height && col < width) {
    // Calculate the linear index for the matrices
    int idx = row * width + col;

    // Perform the addition
    d_c[idx] = d_a[idx] + d_b[idx];
  }
}
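For a side-by-side benchmark it is cleaner to put the swapped mapping in its own kernel rather than editing the first one. A sketch (the name `matrixAdd2` is mine, not from the original post):

```cuda
// Swapped mapping: threadIdx.x, the fastest-varying thread index within
// a warp, now selects the row. Consecutive threads in a warp therefore
// access addresses that are `width` ints apart, so their global-memory
// accesses cannot be coalesced into wide transactions.
__global__ void matrixAdd2(int *d_a, int *d_b, int *d_c, int width, int height) {
  int col = blockIdx.y * blockDim.y + threadIdx.y;
  int row = blockIdx.x * blockDim.x + threadIdx.x;

  if (row < height && col < width) {
    int idx = row * width + col;
    d_c[idx] = d_a[idx] + d_b[idx];
  }
}
```

Both kernels compute identical results; only the thread-to-data mapping differs, which isolates the coalescing effect.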

I tested the two ways of computing the data index on arrays with millions of elements, repeated a number of times, and measured (EDIT: only a small) performance difference. (Grace Hopper GPU)

The block size used is 16×16.

Should there be a performance difference? Can I demonstrate a "right" and "wrong" way to index?

EDIT: It turns out that GPUs have the same performance problems with "streaming" operations that CPUs have. I edited both code variants to essentially repeat the addition operation a few times. Now there is a clear factor-of-3 performance difference.
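The repeated-addition change is roughly the following (a sketch; `REPEATS`, the kernel name, and the exact loop body are illustrative, not the original code):

```cuda
// Inflate the work per element so a single streamed pass through memory
// no longer dominates the runtime. The loop count REPEATS is arbitrary.
#define REPEATS 8

__global__ void matrixAddRepeated(int *d_a, int *d_b, int *d_c,
                                  int width, int height) {
  int row = blockIdx.y * blockDim.y + threadIdx.y;
  int col = blockIdx.x * blockDim.x + threadIdx.x;

  if (row < height && col < width) {
    int idx = row * width + col;
    int sum = 0;
    // Repeat the addition; the compiler must keep the loads because
    // the sum depends on them each iteration.
    for (int r = 0; r < REPEATS; r++)
      sum += d_a[idx] + d_b[idx];
    d_c[idx] = sum;
  }
}
```

The same swapped-index variant of this kernel then shows the coalescing penalty much more clearly than the single-pass version.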

Using the profiler I get the following stats.

Right way:

    ----------------------- ----------- ------------
    Metric Name             Metric Unit Metric Value
    ----------------------- ----------- ------------
    DRAM Frequency                  GHz         2.62
    SM Frequency                    GHz         1.51
    Elapsed Cycles                cycle      272,148
    Memory Throughput                 %        69.18
    DRAM Throughput                   %        40.12
    Duration                         us       179.58
    L1/TEX Cache Throughput           %        66.13
    L2 Cache Throughput               %        87.64
    SM Active Cycles              cycle   263,722.43
    Compute (SM) Throughput           %        48.22
    ----------------------- ----------- ------------

Wrong way:

    ----------------------- ----------- ------------
    Metric Name             Metric Unit Metric Value
    ----------------------- ----------- ------------
    DRAM Frequency                  GHz         2.62
    SM Frequency                    GHz         1.53
    Elapsed Cycles                cycle      743,618
    Memory Throughput                 %        66.57
    DRAM Throughput                   %        14.89
    Duration                         us       486.11
    L1/TEX Cache Throughput           %        60.71
    L2 Cache Throughput               %        66.57
    SM Active Cycles              cycle   739,352.92
    Compute (SM) Throughput           %        17.57
    ----------------------- ----------- ------------

It is interesting that, of the measures expressed as percentages, only the DRAM throughput is significantly affected. I'll have to learn more about the architecture to understand why this is the case. Somewhat surprisingly, cache behavior doesn't seem to play a role here.
