Conversation
Codecov Report

✅ All modified and coverable lines are covered by tests.

Coverage Diff:

|          |   main |  #2088 | +/- |
|----------|-------:|-------:|----:|
| Coverage | 49.67% | 49.67% |     |
| Files    |    339 |    339 |     |
| Lines    |  12998 |  12998 |     |
| Branches |   1906 |   1906 |     |
| Hits     |   6457 |   6457 |     |
| Misses   |   6090 |   6090 |     |
| Partials |    451 |    451 |     |
Flags with carried forward coverage won't be shown.
You can temporarily use this Docker image for testing.
Hey, thanks for this. I wanted to know how to correctly send multiple bboxes for keypoint-detection inference. I created a dict for each bbox and collected them into a list:

```python
bbox_list = [{'bbox': bbox} for bbox in bboxes.tolist()]
bbox = {
    'type': 'PoseBbox',
    'value': bbox_list
}
```
Also, what does this mean?
Could you show the visualized result with the bboxes? Does the inference result with a single bbox look right?
For batch inference with mmdeploy, you can refer to #839 (comment). Triton server supports both a dynamic batcher and a sequence batcher, but the mmdeploy backend only supports the dynamic batcher. You can add these lines to config.pbtxt, with allow_ragged_batch and … In summary, to use the mmdeploy Triton backend with batch inference, you have to:
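For illustration, a minimal config.pbtxt sketch of the kind of dynamic-batching setup described above; the backend name, tensor name, data type, and dims are hypothetical placeholders, not taken from this PR:

```
backend: "mmdeploy"            # hypothetical backend name
max_batch_size: 8

input [
  {
    name: "input_img"          # placeholder tensor name
    data_type: TYPE_UINT8
    dims: [ -1, -1, 3 ]
    allow_ragged_batch: true   # lets differently-shaped inputs share a batch
  }
]

dynamic_batching {
  max_queue_delay_microseconds: 100   # wait briefly so larger batches can form
}
```

With allow_ragged_batch, the dynamic batcher can group requests whose input shapes differ; the backend is then responsible for handling the ragged batch.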
I am not sure this works. I don't see any improvements when I do this after checking with … It supports batching in the … I can see better improvements by launching multiple model instances using: … I think dynamic_batcher depends on sequence_batching, but since each request is handled separately in …
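For reference, launching multiple model instances in Triton is configured through instance_group in config.pbtxt. A minimal sketch; the instance count and GPU assignment are illustrative, not the values used in this thread:

```
instance_group [
  {
    count: 2       # run two copies of the model concurrently
    kind: KIND_GPU
    gpus: [ 0 ]    # pin both instances to GPU 0 (illustrative)
  }
]
```

Each instance handles requests independently, which can raise throughput even when batching itself brings no gain.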


Motivation

Support model serving.

Modification

- Add Triton custom backend
- Add demo
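As a rough illustration of what such a serving demo looks like from the client side, here is a minimal sketch using Triton's Python HTTP client; the model name, tensor names, shape, and data type are hypothetical placeholders rather than the ones added by this PR:

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a locally running Triton server (default HTTP port).
client = httpclient.InferenceServerClient(url="localhost:8000")

# Build the request; "model" and "input_img" are placeholder names.
image = np.zeros((480, 640, 3), dtype=np.uint8)  # dummy HWC image
infer_input = httpclient.InferInput("input_img", list(image.shape), "UINT8")
infer_input.set_data_from_numpy(image)

# Run inference and read back a placeholder output tensor.
response = client.infer(model_name="model", inputs=[infer_input])
result = response.as_numpy("output")
print(None if result is None else result.shape)
```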