How to specify gradient computation path in a neural network in PyTorch - Stack Overflow

I want to implement a neural network in PyTorch where gradients are not computed over all of the weights. For example, say I have an MLP with three layers. I want half of the nodes in the last layer to have their backpropagation computed all the way back to the first layer, while the other half of the last layer have their gradients computed only back to the middle layer. I would be grateful for any help. Thanks.
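A minimal sketch of one way to achieve this, assuming a three-layer MLP and using `torch.Tensor.detach()` to cut the gradient path. The idea is to split the last layer into two heads and run the middle layer twice: once on the first layer's activation, and once on a detached copy of it. Gradients from the second head then flow through the middle layer but stop before the first layer. All class and attribute names here (`PartialBackpropMLP`, `head_full`, `head_short`) are hypothetical, not from the original question:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialBackpropMLP(nn.Module):
    # Hypothetical sketch: half the output units backprop to fc1,
    # the other half only as far back as fc2 (the middle layer).
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        assert out_dim % 2 == 0, "out_dim is split evenly between the two heads"
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, hidden_dim)
        # The "last layer" is modeled as two heads of out_dim // 2 units each.
        self.head_full = nn.Linear(hidden_dim, out_dim // 2)   # grads reach fc1
        self.head_short = nn.Linear(hidden_dim, out_dim // 2)  # grads stop at fc2

    def forward(self, x):
        h1 = F.relu(self.fc1(x))
        # Full path: gradients flow back through fc2 into fc1.
        h2_full = F.relu(self.fc2(h1))
        # Short path: detaching h1 blocks gradients from this head
        # from propagating past the middle layer into fc1.
        h2_short = F.relu(self.fc2(h1.detach()))
        out_full = self.head_full(h2_full)
        out_short = self.head_short(h2_short)
        # Concatenate so the result looks like one last layer of out_dim units.
        return torch.cat([out_full, out_short], dim=-1)

model = PartialBackpropMLP(in_dim=10, hidden_dim=32, out_dim=8)
x = torch.randn(4, 10)
model(x).sum().backward()
# fc1.weight.grad receives contributions only from head_full;
# fc2 and both heads receive gradients as usual.
print(model.fc1.weight.grad.abs().sum())  # driven solely by the full path
```

Note that `fc2` is applied twice with shared weights, so its gradient accumulates contributions from both heads; if the short head's gradients should instead stop before the middle layer's weights as well, detaching `h2` for that head (rather than `h1`) would do it.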