[TMM] Official implementation of Segmenting Anything in the Dark via Depth Perception.
Set up the conda environment by following the instructions in HQ-SAM.
Download the pretrained SAM weights from the official SAM repository and place them in the folder ./train/pretrained_checkpoint/.
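For example, the expected layout can be prepared as follows (a sketch: the URL below is the official SAM ViT-H checkpoint release; adjust the variant if the training script expects a different backbone):

```shell
# Create the checkpoint folder the training scripts look in.
mkdir -p train/pretrained_checkpoint

# Download a SAM checkpoint into it (ViT-H shown; ~2.4 GB, so run when ready):
# wget -P train/pretrained_checkpoint \
#   https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth

# Verify the folder exists before launching training.
ls train/pretrained_checkpoint
```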
A training job can be launched using:

```shell
cd train
bash script_train_adapter_vit_enc_sem.sh
```

If you find this repo useful in your research or refer to the provided baseline results, please star ⭐ this repository and consider citing 📝:
```bibtex
@article{liu2025segmenting,
  title={Segmenting anything in the dark via depth perception},
  author={Liu, Peng and Deng, Jinhong and Duan, Lixin and Li, Wen and Lv, Fengmao},
  journal={IEEE Transactions on Multimedia},
  year={2025},
  publisher={IEEE}
}
```