I converted the PyTorch model to ONNX as shown below. I don't have much idea how performance would differ between formats (any suggestions on that are welcome).
import torch
from model_core import Two_Stream_Net
# Load your model
model = Two_Stream_Net()
checkpoint = torch.load(
    '/media/node9/hdd/face-fery-detection/src/weights_finetune_clean/checkpoint_epoch_140.pth',
    map_location='cpu')
# Load only the model state dict
model.load_state_dict(checkpoint['model_state_dict'])
# Set model to evaluation mode
model.eval()
# Create dummy input with 4 channels instead of 3
dummy_input = torch.randn(1, 4, 256, 256) # Changed from 3 to 4 channels
# For ONNX export:
torch.onnx.export(
    model,
    dummy_input,
    "two_stream_net.onnx",
    export_params=True,
    opset_version=12,
    do_constant_folding=True,
    input_names=['input'],
    output_names=['output', 'features', 'attention_map'],
    dynamic_axes={'input': {0: 'batch_size'},
                  'output': {0: 'batch_size'},
                  'features': {0: 'batch_size'},
                  'attention_map': {0: 'batch_size'}}
)
Now, how can I convert it to TensorRT and TorchScript? And what is the difference between trace and script in TorchScript?
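For context, here is a minimal sketch of what I understand the trace/script difference to be, on a toy module (not Two_Stream_Net) with data-dependent control flow:

```python
import torch

class Gate(torch.nn.Module):
    # The branch taken depends on the input's values.
    def forward(self, x):
        if x.sum() > 0:
            return x * 2
        return x * -1

m = Gate()

# trace() runs the module once on an example input and records only the ops
# that actually executed, so the branch taken during tracing gets baked in
# (PyTorch emits a TracerWarning about this).
traced = torch.jit.trace(m, torch.ones(2))

# script() compiles the Python source itself, so the if/else is preserved.
scripted = torch.jit.script(m)

neg = -torch.ones(2)
print(traced(neg))    # tensor([-2., -2.])  <- wrong branch baked in
print(scripted(neg))  # tensor([1., 1.])    <- correct branch chosen at runtime
```

So my rough understanding is: tracing suffices for purely feed-forward models, while scripting is needed when the forward pass has input-dependent branches or loops. Is that right, and which one applies to a model like this?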