Hi. I'm trying to use torch2trt_dynamic to convert models from PyTorch to TensorRT and ran into an unexpected issue.
To reproduce it I use a LeNet5-like network:
from torch2trt_dynamic import torch2trt_dynamic
import torch
from lenet5 import LeNet5

model = LeNet5()
model_path = 'models/lenet5_mnist.pt'
model_to_save = 'models/lenet5_mnist_trt.pt'

# load the model and restore its parameters with load_state_dict
model.load_state_dict(torch.load(model_path))
model.eval().cuda()

# create example data
x = torch.ones((1, 1, 32, 32)).cuda()

# convert to TensorRT, feeding the sample data as input
opt_shape_param = [
    [
        [5, 1, 16, 16],  # min
        [5, 1, 32, 32],  # opt
        [5, 1, 64, 64]   # max
    ]
]
model_trt = torch2trt_dynamic(model, [x], fp16_mode=False, opt_shape_param=opt_shape_param)

x = torch.rand(1, 1, 32, 32).cuda()
with torch.no_grad():
    y = model(x)
    y_trt = model_trt(x)

# check the output against PyTorch
print(torch.max(torch.abs(y - y_trt)))
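For completeness, lenet5.py is not included above; a minimal sketch of the LeNet5 I am converting (the exact definition may differ slightly, but it follows the classic layout with a hard-coded flatten before the first fully connected layer) looks like this:

import torch
import torch.nn as nn

class LeNet5(nn.Module):
    # Classic LeNet5: the 16 * 5 * 5 flatten assumes a 32x32 input
    # (32 -> conv5 -> 28 -> pool -> 14 -> conv5 -> 10 -> pool -> 5).
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, kernel_size=5)
        self.conv2 = nn.Conv2d(6, 16, kernel_size=5)
        self.pool = nn.MaxPool2d(2)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, num_classes)

    def forward(self, x):
        x = self.pool(torch.relu(self.conv1(x)))
        x = self.pool(torch.relu(self.conv2(x)))
        x = x.view(x.size(0), -1)  # flattened size is fixed only for 32x32 inputs
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        return self.fc3(x)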
Running this, I get the following error:
[TensorRT] ERROR: (Unnamed Layer* 31) [Convolution]: number of channels in input tensor to a convolution layer must not be dynamic
[TensorRT] ERROR: Builder failed while analyzing shapes.
Traceback (most recent call last):
File "example_converter.py", line 39, in <module>
y_trt = model_trt(x)
I tried to work around it and found that the conversion succeeds if I make min, opt, and max identical in opt_shape_param:
opt_shape_param = [
    [
        [5, 1, 32, 32],  # min
        [5, 1, 32, 32],  # opt
        [5, 1, 32, 32]   # max
    ]
]
@grimoire So I do not understand why this error occurs, given that your own example opt_shape_param also uses different heights and widths for min, opt, and max. What could be causing it?
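My own guess (not verified) is that the hard-coded flatten is to blame: with dynamic height and width, the number of flattened features feeding fc1 is no longer a build-time constant, and if that layer ends up as a convolution in the TensorRT graph, its input channel count becomes dynamic, which would match the error above. If that is the cause, a possible untested workaround would be to force a fixed spatial size before the flatten, for example with adaptive pooling (assuming torch2trt_dynamic can convert AdaptiveAvgPool2d); a sketch, building on the LeNet5 layout above:

class LeNet5Dynamic(LeNet5):
    # Hypothetical variant: force a fixed 5x5 feature map before the flatten
    # so the feature count stays 16 * 5 * 5 for any input height/width.
    def __init__(self, num_classes=10):
        super().__init__(num_classes)
        self.adaptive_pool = nn.AdaptiveAvgPool2d((5, 5))

    def forward(self, x):
        x = self.pool(torch.relu(self.conv1(x)))
        x = self.pool(torch.relu(self.conv2(x)))
        x = self.adaptive_pool(x)   # always 16 x 5 x 5, regardless of input size
        x = x.view(x.size(0), -1)
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        return self.fc3(x)

With a change like this, the 16 * 5 * 5 flatten would hold for any input size covered by opt_shape_param, but I have not confirmed that this is actually what breaks the conversion.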
TensorRT Version: 7.1.3
GPU Type: GeForce RTX 2080 Ti
Nvidia Driver Version: 470.103.01
CUDA Version: 10.2
CUDNN Version: 8.0.5
Operating System + Version: Ubuntu 18.04
Python Version: 3.6.9
PyTorch Version: 1.8.1