Segmenting Anything in the Dark via Depth Perception

[TMM] Official implementation of Segmenting Anything in the Dark via Depth Perception.

Training

To set up the conda environment, follow the instructions in HQ-SAM.
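As a rough sketch, an environment in the spirit of the HQ-SAM setup might be created as follows; the environment name and package list here are assumptions, so defer to the HQ-SAM README for the exact versions:

```shell
# Sketch only -- environment name and packages are assumptions;
# follow HQ-SAM's README for the authoritative versions.
conda create -n dpsam python=3.8 -y
conda activate dpsam
pip install torch torchvision
pip install opencv-python pycocotools matplotlib
```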

Download the pretrained weights provided by SAM and put them in the folder ./train/pretrained_checkpoint/.
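For example, the expected layout can be prepared as below; the ViT-H checkpoint name in the comment comes from the official segment-anything release, and the other backbone variants (vit_l, vit_b) have their own files:

```shell
# Create the folder the training scripts read the checkpoint from.
mkdir -p train/pretrained_checkpoint
# Place the SAM checkpoint here, e.g. (official segment-anything release):
#   wget -P train/pretrained_checkpoint \
#     https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth
ls train/pretrained_checkpoint
```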

A training job can be launched using:

```shell
cd train
bash script_train_adapter_vit_enc_sem.sh
```

Citation

If you find this repo useful in your research or refer to the provided baseline results, please star ⭐ this repository and consider citing 📝:

```bibtex
@article{liu2025segmenting,
  title={Segmenting anything in the dark via depth perception},
  author={Liu, Peng and Deng, Jinhong and Duan, Lixin and Li, Wen and Lv, Fengmao},
  journal={IEEE Transactions on Multimedia},
  year={2025},
  publisher={IEEE}
}
```

Acknowledgments

  • Thanks to HQ-SAM and SAM for their public code and released models.
