
hpc - How can I get processes onto different ranks, not only rank 0? - Stack Overflow


I ran into a problem when trying to bind processes with Intel MPI.

#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <sched.h>
#include <mpi.h>
#include <omp.h>
#include <sys/syscall.h>

/* Convert a cpu_set_t affinity mask to a compact list string such as "0-3,8".
   Borrowed from util-linux-2.13-pre7/schedutils/taskset.c */
static char *cpuset_to_cstr(cpu_set_t *mask, char *str)
{
  char *ptr = str;
  int i, j, entry_made = 0;
  for (i = 0; i < CPU_SETSIZE; i++) {
    if (CPU_ISSET(i, mask)) {
      int run = 0;
      entry_made = 1;
      for (j = i + 1; j < CPU_SETSIZE; j++) {
        if (CPU_ISSET(j, mask)) run++;
        else break;
      }
      if (!run)
        sprintf(ptr, "%d,", i);
      else if (run == 1) {
        sprintf(ptr, "%d,%d,", i, i + 1);
        i++;
      } else {
        sprintf(ptr, "%d-%d,", i, i + run);
        i += run;
      }
      while (*ptr != 0) ptr++;
    }
  }
  ptr -= entry_made;
  *ptr = 0;
  return(str);
}

int main(int argc, char *argv[])
{
  int rank, thread;
  cpu_set_t coremask;
  char clbuf[7 * CPU_SETSIZE], hnbuf[64];

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  memset(clbuf, 0, sizeof(clbuf));
  memset(hnbuf, 0, sizeof(hnbuf));
  (void)gethostname(hnbuf, sizeof(hnbuf));
  /* Each OpenMP thread queries and prints the affinity mask it runs under. */
  #pragma omp parallel private(thread, coremask, clbuf)
  {
    thread = omp_get_thread_num();
    (void)sched_getaffinity(0, sizeof(coremask), &coremask);
    cpuset_to_cstr(&coremask, clbuf);
    #pragma omp barrier
    printf("Hello from rank %d, thread %d, on %s. (core affinity = %s)\n",
            rank, thread, hnbuf, clbuf);
  }
/*  sleep(60); */
  MPI_Finalize();
  return(0);
}
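
For reference, a binary like this has to be built with the compiler wrapper of the same MPI implementation whose mpiexec launches it. A sketch of the usual Intel MPI build line (mpiicc and -qopenmp are the standard Intel names; the exact wrapper depends on the installed toolchain):

mpiicc -qopenmp -o test-bind test-bind.c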

This code tests core affinity. However, when I used Intel MPI to run the executable with 8 processes across two nodes (4 processes per node), every process reported rank 0. Even after binding the processes to physical cores and setting OMP_NUM_THREADS=4 on each node, they all still came up as rank 0.
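
One way to narrow this down is to also print the communicator size; this is a sketch reusing the rank and hnbuf variables from the code above, placed right after MPI_Comm_rank. If every process reports a size of 1, each one was started as an independent singleton, which usually means the mpiexec doing the launching does not belong to the MPI library the binary is linked against:

  int size;
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  printf("rank %d of %d on %s\n", rank, size, hnbuf);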

This is my shell command (with OMP_NUM_THREADS exported beforehand):

export OMP_NUM_THREADS=4
mpiexec -genv I_MPI_PIN=1 -genv I_MPI_PIN_CELL=core -f hosts.txt -np 8 -ppn 4 ./test-bind

And hosts.txt:

node1:4
node2:4
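
A hybrid MPI+OpenMP launch under Intel MPI is usually written with the OpenMP variable forwarded to the remote nodes via -genv and the pinning domain sized to the thread count; a sketch, assuming I_MPI_PIN_DOMAIN is available in this Intel MPI version:

export OMP_NUM_THREADS=4
mpiexec -genv OMP_NUM_THREADS=4 -genv I_MPI_PIN_DOMAIN=omp -f hosts.txt -np 8 -ppn 4 ./test-bind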

And this is what happened: (screenshot of the test-bind output, every process reporting rank 0)

Could anyone find the mistake? I also have the same question for Open MPI.
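
For the Open MPI side, the equivalent launch is usually expressed with mapping and binding options rather than I_MPI_* variables; a sketch, assuming Open MPI 1.8+ option names (the ppr and pe modifiers are version dependent):

export OMP_NUM_THREADS=4
mpirun -hostfile hosts.txt -np 8 --map-by ppr:4:node:pe=4 --bind-to core ./test-bind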
