
multithreading - How to nest mpi processes with python - Stack Overflow


I am using mpi4py with OpenMPI on Ubuntu, and I also need this to run on RedHat. I am trying to run compute_ldos() for a relatively large set of parameters, and I want to use all available cores on my node to make this faster. Right now I am calling compute_ldos on a single process for each parameter combination (embarrassingly parallel, I believe it's called?). compute_ldos will use all the processes it is given, and I need to use multiple processes per call to reduce memory usage. However, I lose efficiency if I assign too many to a single compute_ldos call. How do I assign multiple processes to each worker?

from mpi4py import MPI
import numpy as np

def worker_process(distances, rank):
    """Worker function executed by each MPI process."""
    return [compute_ldos(dist[0], dist[1], dist[2], rank) for dist in distances]

def main():
    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()
    start = 50
    end = 100
    res = [20, 10, 5, 5]
    num_angles = 10

    # Build a piecewise distance grid with a different resolution per segment
    distances = []
    points = np.linspace(start, end, len(res) + 1)

    for idx, point in enumerate(points[:-1]):
        distances.extend(np.linspace(point, points[idx + 1], res[idx])[:-1])

    distances.append(end)

    # num_angles samples around the circle, i.e. a step of 360 / num_angles degrees
    # (the original had num_angles / 360, which would produce thousands of angles)
    angle_sample = np.arange(0, 360, 360 / num_angles)

    distances = [[x, angle, False] for x in distances for angle in angle_sample]
    distances.append([0, 0, True])

    # Distribute distances across MPI ranks
    distances_split = np.array_split(distances, size)[rank]

    setup_stdout(rank)

    # Each MPI process runs its own worker function
    procs = worker_process(distances_split, rank)

if __name__ == "__main__":
    main()

compute_ldos calls a meep function to run an FDTD simulation.
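One common way to get "multiple processes per worker" (not from the original post, but a standard MPI pattern) is to split MPI.COMM_WORLD into sub-communicators with comm.Split, so each group of a few ranks cooperates on one compute_ldos call while the groups march through different parameter combinations in parallel. The group size procs_per_task below is a hypothetical parameter; this is a minimal sketch of just the rank-to-group mapping that would be fed to comm.Split:

```python
# Sketch: map each world rank to a sub-communicator "color" so that
# groups of `procs_per_task` ranks cooperate on one compute_ldos call.
# With mpi4py this pair would be passed as comm.Split(color, key).

def group_assignment(rank, procs_per_task):
    """Return (color, key) for comm.Split: `color` selects which
    sub-communicator this rank joins, `key` orders ranks inside it
    (local rank 0 becomes the group leader)."""
    color = rank // procs_per_task   # task group this rank belongs to
    key = rank % procs_per_task      # local rank within the group
    return color, key

# Example: 8 world ranks, 4 processes per compute_ldos call -> 2 groups.
assignments = [group_assignment(r, 4) for r in range(8)]
# Ranks 0-3 land in group 0, ranks 4-7 in group 1.
```

With mpi4py, each rank would then call `sub = comm.Split(color, key)` and hand the resulting sub-communicator to the simulation, splitting the parameter list across groups (colors) rather than across individual ranks; the leader of each group can gather results at the end.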
