
How can I use CUDA streams in llamacpp?


I see support for CUDA streams in the CUDA ggml implementation (for example, here .cpp/blob/master/ggml/src/ggml-cuda/softmax.cu#L172 and here .cpp/blob/master/ggml/src/ggml-cuda/common.cuh#L674), but it seems to go unused in practice, since ggml_backend_cuda_context.stream() always returns stream #0: .cpp/blob/master/ggml/src/ggml-cuda/common.cuh#L683

Am I right that there is no way to use multiple CUDA streams in llamacpp?
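For background, this is the kind of multi-stream concurrency the question is asking about, sketched in plain CUDA. The scale kernel, buffer count, and sizes below are hypothetical and for illustration only; this is not llama.cpp code. The point is that kernels launched on distinct non-default streams may overlap on the GPU, whereas launching everything on a single stream (as the linked stream() accessor implies) serializes them:

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Hypothetical kernel, used only to have some independent work per stream.
__global__ void scale(float *x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main() {
    const int n = 1 << 20;
    const int n_streams = 4;
    cudaStream_t streams[n_streams];
    float *buf[n_streams];

    for (int s = 0; s < n_streams; ++s) {
        cudaStreamCreate(&streams[s]);
        cudaMalloc(&buf[s], n * sizeof(float));
    }

    // Each launch targets a different non-default stream, so the
    // kernels are allowed to execute concurrently on the device.
    for (int s = 0; s < n_streams; ++s) {
        scale<<<(n + 255) / 256, 256, 0, streams[s]>>>(buf[s], 2.0f, n);
    }

    cudaDeviceSynchronize();

    for (int s = 0; s < n_streams; ++s) {
        cudaStreamDestroy(streams[s]);
        cudaFree(buf[s]);
    }
    printf("done\n");
    return 0;
}
```

In ggml's CUDA backend the analogous handle is the cudaStream_t returned by ggml_backend_cuda_context.stream(); the question's observation is that, per the link above, it always hands back stream #0, so all kernels end up on one stream.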
