Unified Image Restoration and Enhancement: Degradation Calibrated Cycle Reconstruction Diffusion Model
This is the official implementation code for CycleRDM.
[2025.3.16] Added the raindrop removal task and updated the restoration results for the existing tasks.
[2024.12.23] Released the pretrained weights for the four task models separately.
[2024.12.23] Released test code for four image restoration tasks: image deraining, image denoising, image dehazing, and image inpainting.
[2024.12.19] Added dataset links for training and testing of various tasks.
- OS: Ubuntu 22.04
- GPU: NVIDIA GPU with CUDA 12.1
- Python: 3.9
Clone Repo
git clone https://github.com/hejh8/CycleRDM.git
cd CycleRDM
Create Conda Environment and Install Dependencies:
# the environment name below is only an example
conda create -n CycleRDM python=3.9
conda activate CycleRDM
pip install -r requirements.txt
Prepare the training and test datasets following the Dataset subsection of our paper, organized as shown below (a small helper sketch for the `train.txt`/`test.txt` list files follows the layout):
#### for training dataset ####
data
|--dehazing
|  |--train
|  |  |--LQ/*.png
|  |  |--GT/*.png
|  |  |--train.txt
|  |--test
|  |  |--LQ/*.png
|  |  |--GT/*.png
|  |  |--test.txt
|--deblurring
|--denoising
|--deraining
|--raindrop
|--inpainting
|--low-light
|--underwater
|--backlight
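If you need to (re)generate the `train.txt`/`test.txt` list files yourself, the following is a minimal sketch of the idea. The pair-per-line format and the example paths are assumptions, not the official tooling; check `datasets.py` for the exact format CycleRDM expects.

```python
import os

def write_list_file(split_dir, list_name):
    """Write one '<LQ path> <GT path>' pair per line.

    NOTE: this pair-per-line format is an assumption; check datasets.py
    for the exact list format CycleRDM expects.
    """
    lq_dir = os.path.join(split_dir, "LQ")
    gt_dir = os.path.join(split_dir, "GT")
    names = sorted(f for f in os.listdir(lq_dir) if f.endswith(".png"))
    with open(os.path.join(split_dir, list_name), "w") as fh:
        for name in names:
            fh.write(f"{os.path.join(lq_dir, name)} {os.path.join(gt_dir, name)}\n")

# example: rebuild the dehazing list files
write_list_file("data/dehazing/train", "train.txt")
write_list_file("data/dehazing/test", "test.txt")
```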
Then go into the `CycleRDM/config` directory and modify the `Task_train.yml` and `Task_test.yml` settings to suit your needs.
For training, we selected at most 500 images for each task. You can select more training data as you wish through the following script (a sketch of the selection logic is shown after the command):
cd CycleRDM/scripts
python3 Random_select_data.py
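For reference, the selection boils down to randomly sampling matching LQ/GT filenames. The snippet below is only a sketch of that idea, not the actual `Random_select_data.py`; the paths, the destination folders, and `num_samples` are assumptions.

```python
import os
import random
import shutil

# assumed paths and sample count; adjust to your dataset layout
src_lq, src_gt = "data/deraining/train/LQ", "data/deraining/train/GT"
dst_lq, dst_gt = "data/deraining/train_500/LQ", "data/deraining/train_500/GT"
num_samples = 500

os.makedirs(dst_lq, exist_ok=True)
os.makedirs(dst_gt, exist_ok=True)

# sample matching LQ/GT pairs so the paired structure is preserved
names = sorted(f for f in os.listdir(src_lq) if f.endswith(".png"))
for name in random.sample(names, min(num_samples, len(names))):
    shutil.copy(os.path.join(src_lq, name), os.path.join(dst_lq, name))
    shutil.copy(os.path.join(src_gt, name), os.path.join(dst_gt, name))
```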
| Image Restoration Task | deblurring | dehazing | deraining | raindrop | denoising | inpainting |
|---|---|---|---|---|---|---|
| Datasets | BSD | RESIDE-6k | Rain100H: train, test | RainDrop | CBSD68 | CelebaHQ-256 |
We follow the settings of DA-CLIP. For the denoising task, you can use the script below to generate LQ images; for the image inpainting task, you can generate LQ images by adding facial occlusions via the corresponding script (a sketch of the noise-synthesis idea follows the commands):
cd CycleRDM/scripts
# denoising
python denoising_LQ.py
# inpainting
python mask_inpaint.py
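For intuition, generating a noisy LQ set usually amounts to adding synthetic Gaussian noise to the clean GT images. The snippet below is only a sketch of that idea, not the actual `denoising_LQ.py`; the paths and the noise level `sigma` are assumptions.

```python
import os
import numpy as np
from PIL import Image

gt_dir, lq_dir = "data/denoising/train/GT", "data/denoising/train/LQ"  # assumed paths
sigma = 50  # assumed noise level on the 0-255 scale

os.makedirs(lq_dir, exist_ok=True)
for name in sorted(os.listdir(gt_dir)):
    if not name.endswith(".png"):
        continue
    clean = np.array(Image.open(os.path.join(gt_dir, name)), dtype=np.float32)
    # additive white Gaussian noise, clipped back to the valid pixel range
    noisy = clean + np.random.normal(0, sigma, clean.shape)
    Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8)).save(os.path.join(lq_dir, name))
```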
| Image Enhancement Task | low-light | underwater | backlight |
|---|---|---|---|
| Datasets | LOLv1, LOLv2 | LSUI | Backlit300 |
We will be releasing the training code shortly.
You can download our pretrained models from [Google Drive]. Pretrained models for all tasks will be released in the future.
Before performing the following steps, please download our pretrained models first. To evaluate our method on image restoration, modify the benchmark path and the model path in `test_ir.py` and `datasets.py` according to your environment, and then run:
cd CycleRDM/image_ir
python test_ir.py
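If you also want to score the restored images against the ground truth, a generic PSNR/SSIM evaluation sketch using scikit-image is shown below. The result and GT directories are assumptions, and this is not part of the official evaluation code.

```python
import os
import numpy as np
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

result_dir, gt_dir = "results/deraining", "data/deraining/test/GT"  # assumed paths

psnrs, ssims = [], []
for name in sorted(os.listdir(result_dir)):
    if not name.endswith(".png"):
        continue
    out = np.array(Image.open(os.path.join(result_dir, name)).convert("RGB"))
    gt = np.array(Image.open(os.path.join(gt_dir, name)).convert("RGB"))
    psnrs.append(peak_signal_noise_ratio(gt, out, data_range=255))
    ssims.append(structural_similarity(gt, out, channel_axis=2, data_range=255))

print(f"PSNR: {np.mean(psnrs):.2f} dB, SSIM: {np.mean(ssims):.4f}")
```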
🙁 Since most degradation datasets contain only a single degradation label per image, our current model has not been trained to recover multiple degradations in the same scene. Although we have demonstrated the stability and generalization of CycleRDM when extended to new degradation tasks, this limitation has prevented us from effectively exploring recovery in realistic mixed-degradation scenes.
Acknowledgment: Our CycleRDM is based on CFWD and DiffLL. Thanks for their code!
If our code helps your research or work, please consider citing our paper. The BibTeX entry is as follows:
@article{xue2024unified,
title={Unified Image Restoration and Enhancement: Degradation Calibrated Cycle Reconstruction Diffusion Model},
author={Xue, Minglong and He, Jinhong and Palaiahnakote, Shivakumara and Zhou, Mingliang},
journal={arXiv preprint arXiv:2412.14630},
year={2024}
}
If you have any questions, please contact us.