llama - How can I get my GPU to work with an Ollama model connected to an Agent in LangFlow? - Stack Overflow

I am working in LangFlow and have this basic design:

  1. Chat Input connected to Agent (Input).
  2. Ollama (Llama3, Tool Model Enabled) connected to Agent (Language Model).
  3. Agent (Response) connected to Chat Output.

When I test in the Playground and ask a basic question, it takes almost two minutes to respond.

I have gotten Ollama (model Llama3) to work with my system's GPU (an NVIDIA 4060) in VS Code, but I haven't figured out how to apply the CUDA settings in LangFlow.
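For context, a sketch of where GPU settings actually live: LangFlow's Ollama component is just a client of the Ollama server's REST API (by default at `http://localhost:11434`), so GPU offloading is decided by the Ollama server, not by LangFlow. One knob Ollama does expose per request is the `num_gpu` option (the number of model layers to offload to the GPU; `0` forces CPU-only). The helper name `build_generate_payload` and the layer count `33` below are illustrative assumptions, not anything from LangFlow itself:

```python
import json

def build_generate_payload(model: str, prompt: str, gpu_layers: int) -> dict:
    """Build a body for Ollama's /api/generate endpoint that requests
    `gpu_layers` layers be offloaded to the GPU (num_gpu=0 forces CPU)."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {"num_gpu": gpu_layers},
    }

# Example payload; you would POST this to <base_url>/api/generate.
payload = build_generate_payload("llama3", "Hello", gpu_layers=33)
print(json.dumps(payload, indent=2))
```

If a payload like this responds quickly outside LangFlow (and `ollama ps` reports the model on GPU), the slowdown is likely in the LangFlow component's configuration (e.g. pointing at a different base URL or a CPU-only Ollama instance) rather than in CUDA itself.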
