Difference between torch.as_tensor() and torch.asarray()

What I understand from the docs is that both torch.as_tensor() and torch.asarray() return a tensor that shares memory with the input data when possible, and return a copy otherwise. I noticed only two differences in the parameters:

  • I can explicitly pass copy=False into torch.asarray() to require shared memory and get an exception if a copy is unavoidable, or I can pass copy=True to require a copy.
  • I can specify requires_grad in torch.asarray().

So does torch.asarray() just offer more capabilities than torch.as_tensor()?

But if I just want shared memory when possible, which should I use: torch.asarray() or torch.as_tensor()? Is there any difference in performance or anything else?


asked Jan 18 at 21:42 by Denis Shafarenko

2 Answers


“So does torch.asarray() just offer more capabilities than torch.as_tensor()?”

Yes, that's basically it.

torch.as_tensor automatically tries to share memory and preserve autograd information, while torch.asarray gives you explicit control over data copying and autograd behavior (via its copy= and requires_grad= parameters).

If you want shared memory/autograd by default, I would just use as_tensor. To my knowledge, there is no performance difference between the two, provided the same memory/autograd sharing parameters are used.

To give a little more context, I think the real "difference" between torch.asarray() and torch.as_tensor() is that the former follows the standard Array API while the latter is PyTorch-specific. In other words, it's not that these two functions are meant for different use cases; there is simply a standard and a non-standard way to do the same thing.
