### 🚀 The feature, motivation and pitch

See title: currently, prefill and decode are not split in the FlashInfer backend.

### Alternatives

_No response_

### Additional context

_No response_

### Before submitting a new issue...

- [x] Make sure you already searched for relevant issues, and checked the [documentation](https://nvidia.github.io/TensorRT-LLM/) and [examples](https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples) for answers to frequently asked questions.