fix: expand rank mismatch on symbolic shapes; add regression test #4018
lordaarush wants to merge 1 commit into pytorch:main
Conversation
…regression test. Fixes pytorch#3972
Thanks for the PR. I don't think the above fix addresses the issue, since dynamic shapes are already handled in prepend_ones. Also, current_rank should now be the rank of the shape it is expanded to.
try:
    current_rank = len(input_t.shape)
except Exception:
    current_rank = shape_rank
In the test case below there will be 0 computational nodes that depend on runtime input, since the shape values will be constant. You could make them dynamic to invoke the converter.
I went through the above issue and it looks like the root cause is dims being 10 here, which is not permitted in TRT. It can handle at most 8 dims: https://docs.nvidia.com/deeplearning/tensorrt/latest/_static/c-api/classnvinfer1_1_1_dims64.html
PyTorch decomposes repeat into unsqueeze -> expand -> permute -> reshape.
But for a 5D tensor, layer.reshape_dims = new_shape fails here, since Dims can't hold 10 entries, and it fails in the dynamic case too at layer.set_input(1, reshape_dim_layer.get_output(0)). Hence input_tensor.shape would come back invalid.
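A rough standalone illustration of that decomposition (plain PyTorch with dummy shapes; the actual decomposition tables in PyTorch may differ in details) shows how a 5D repeat passes through a rank-10 intermediate:

```python
# Sketch of the unsqueeze -> expand -> reshape decomposition of repeat:
# each input dim gets an extra axis for its repeat factor, so a 5D input
# goes through a rank-10 intermediate, which exceeds TRT's 8-dim limit.
import torch

x = torch.randn(2, 3, 4, 5, 6)       # 5D input (dummy sizes)
repeats = (7, 1, 1, 1, 1)

unsqueezed = x
expanded_shape = []
for i, (r, s) in enumerate(zip(repeats, x.shape)):
    unsqueezed = unsqueezed.unsqueeze(2 * i)   # insert an axis per repeat factor
    expanded_shape += [r, s]

print(unsqueezed.dim())               # 10 -> more than TensorRT's 8 max dims

out = unsqueezed.expand(*expanded_shape).reshape(
    [r * s for r, s in zip(repeats, x.shape)]
)
assert torch.equal(out, x.repeat(*repeats))
```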
The original example would work with only 4 dimensions, instead of the 5 dims in emb_t = self.pos_emb_t[: pe_size[0]][None, :, None, None, :].repeat(batch_size, 1, pe_size[1], pe_size[2], 1).
A WAR would be to replace repeat with expand without broadcasting the dimension, or to use the tile operation. I need to look into this more.
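For the pattern above, a minimal sketch of the expand-based workaround (with made-up sizes for pos_emb_t and pe_size) could look like:

```python
# Sketch of the suggested WAR (dummy sizes assumed): expand broadcasts the
# singleton dims as a view, so the repeat decomposition and its rank-10
# intermediate are never produced.
import torch

pos_emb_t = torch.randn(64, 128)          # hypothetical [max_len, dim] table
batch_size, pe_size = 2, (16, 8, 8)

# Original pattern: repeat on a 5D tensor.
emb_repeat = pos_emb_t[: pe_size[0]][None, :, None, None, :].repeat(
    batch_size, 1, pe_size[1], pe_size[2], 1
)

# Workaround: expand the singleton dims instead of repeating them.
emb_expand = pos_emb_t[: pe_size[0]][None, :, None, None, :].expand(
    batch_size, pe_size[0], pe_size[1], pe_size[2], -1
)

assert torch.equal(emb_repeat, emb_expand)
```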
Ideally, if this is a converter fix, the test case below should live in https://github.com/pytorch/TensorRT/tree/main/tests/py/dynamo/conversion, so the current location would not work.
Thanks for reviewing this. It looks like I misunderstood what was actually failing and ended up fixing the wrong thing. I see the issue much more clearly now, and I'll take another pass at it with the correct context and follow up if I find something useful.
Description
This PR fixes a failure in the Torch-TensorRT dynamo converter when lowering repeat() patterns that internally get rewritten into expand() under dynamic or symbolic shapes. In these situations, len(input_t.shape) may fail because symbolic dimensions are not standard Python sequences, leading to errors such as:
ValueError: len() should return >= 0
This was triggered by positional embedding patterns like:
x[..., None, None, :].repeat(batch, 1, H, W, 1)
Root cause:
After padding singleton dimensions, the converter asserted:
assert len(input_t.shape) == shape_rank
However, calling len() on symbolic shapes can fail or return invalid results, causing the converter to raise Python-level errors during lowering.
Fix:
Replaces the unsafe assert with a robust rank check that wraps len(input_t.shape) in a try/except and falls back to shape_rank when the symbolic shape cannot be measured.
This preserves all existing expand() semantics. There are no behavior changes for static shapes.
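In sketch form (simplified; the exact converter source may differ slightly), the change is:

```python
# Before: a hard assert that raises a Python-level error for symbolic shapes
#
#     assert len(input_t.shape) == shape_rank
#
# After: the rank lookup is guarded and falls back to the target rank when
# the symbolic shape cannot be measured with len().
try:
    current_rank = len(input_t.shape)
except Exception:
    current_rank = shape_rank

assert current_rank == shape_rank, (
    f"expand rank mismatch: got rank {current_rank}, expected {shape_rank}"
)
```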
A minimal regression test has been added at:
tests/dynamo/test_repeat_expand_repro.py
It reproduces the issue in #3972 and is skipped automatically when CUDA + Torch-TensorRT are unavailable.
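A sketch of what such a test could look like (hypothetical module, sizes, and compile options; the committed test may differ):

```python
# Hypothetical sketch of tests/dynamo/test_repeat_expand_repro.py: compile a
# module using the positional-embedding repeat pattern and compare against
# eager output. Skipped when CUDA or torch_tensorrt is unavailable.
import pytest
import torch

torch_tensorrt = pytest.importorskip("torch_tensorrt")


@pytest.mark.skipif(not torch.cuda.is_available(), reason="CUDA is required")
def test_repeat_expand_repro():
    class PosEmb(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.pos_emb_t = torch.nn.Parameter(torch.randn(16, 32))

        def forward(self, x):
            b, n, h, w, _ = x.shape
            emb = self.pos_emb_t[:n][None, :, None, None, :].repeat(b, 1, h, w, 1)
            return x + emb

    model = PosEmb().cuda().eval()
    x = torch.randn(2, 16, 8, 8, 32, device="cuda")
    compiled = torch_tensorrt.compile(model, ir="dynamo", inputs=[x], min_block_size=1)
    torch.testing.assert_close(compiled(x), model(x), rtol=1e-3, atol=1e-3)
```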
Fixes: #3972
Type of change
Checklist: