Hi Protenix team,
Thanks for the Protenix v0.7.0 release; it works great for inference.
Quick question: can any of the following inference-acceleration flags also be used during training, or are they strictly inference-only?
--enable_cache
--enable_fusion
--enable_tf32
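
For context on the TF32 part of the question: assuming Protenix is PyTorch-based, a sketch of how TF32 can be toggled directly in a training script via PyTorch's standard backend switches, independently of any Protenix CLI flag (this is generic PyTorch, not Protenix's own mechanism):

```python
# Sketch, assuming a PyTorch training script. These global switches enable
# TF32 math on Ampere-or-newer GPUs; they are safe to set on CPU-only
# builds as well (they simply have no effect there).
import torch

torch.backends.cuda.matmul.allow_tf32 = True  # TF32 for matrix multiplies
torch.backends.cudnn.allow_tf32 = True        # TF32 for cuDNN convolutions
```

So even if `--enable_tf32` itself turns out to be inference-only, TF32 training should still be reachable this way, but confirmation from you would be appreciated.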
Thanks for the clarification!