Jianyi Wang, Kelvin C.K. Chan, Chen Change Loy
S-Lab, Nanyang Technological University
- Website release
- Code release
The environment setup is the same as MMEditing, which currently uses an old version.
conda create -n clipiqa python=3.6
pip install torch==1.8.1+cu101 torchvision==0.9.1+cu101 torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html # For CUDA 10.1
pip install -r requirements.txt
pip install -v -e .
Test CLIP-IQA on KonIQ-10k
python demo/clipiqa_koniq_demo.py
Test CLIP-IQA on Live-iWT
python demo/clipiqa_liveiwt_demo.py
# Distributed training is supported, as in MMEditing
python tools/train.py configs/clipiqa/clipiqa_coop_koniq.py
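The training above learns the prompts rather than hand-writing them: CLIP-IQA+ follows CoOp, keeping CLIP frozen and optimizing learnable context vectors for the antonym prompt pair against quality labels. A toy NumPy sketch of that idea, where everything (dimensions, temperature, the finite-difference optimizer, the single target label) is illustrative and not the repo's implementation, which trains real CLIP text-token embeddings with backpropagation:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Frozen stand-ins for CLIP features: a unit-norm image embedding and
# fixed "class" tokens for the positive / negative prompt.
img = rng.normal(size=dim)
img /= np.linalg.norm(img)
cls_tok = rng.normal(size=(2, dim))

ctx = np.zeros((2, dim))          # learnable context, one per prompt (toy CoOp)
TARGET, LR, EPS = 0.8, 0.2, 1e-4  # normalized quality label, step size, FD epsilon

def score(ctx):
    """Antonym-pair softmax score; temperature 1 keeps the toy objective smooth."""
    prompts = cls_tok + ctx
    prompts /= np.linalg.norm(prompts, axis=1, keepdims=True)
    logits = prompts @ img
    e = np.exp(logits - logits.max())
    return e[0] / e.sum()

def loss(ctx):
    return (score(ctx) - TARGET) ** 2

init_loss = loss(ctx)
for _ in range(300):  # finite-difference gradient descent on the context only
    grad = np.zeros_like(ctx)
    for idx in np.ndindex(ctx.shape):
        d = np.zeros_like(ctx)
        d[idx] = EPS
        grad[idx] = (loss(ctx + d) - loss(ctx - d)) / (2 * EPS)
    ctx -= LR * grad
```

Only `ctx` is updated; the image feature and prompt class tokens stay frozen, which is the key property the CoOp-style training shares with this toy.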
Test CLIP-IQA+ on KonIQ-10k (Checkpoint)
python demo/clipiqa_koniq_demo.py --config configs/clipiqa/clipiqa_coop_koniq.py --checkpoint ./iter_80000.pth
[Note] You may change the prompts for different datasets; please refer to the config files for details.
[Note] For testing on a single image, please refer to here for details.
For more evaluation results, please refer to our paper.
If our work is useful for your research, please consider citing:
@inproceedings{wang2022exploring,
author = {Wang, Jianyi and Chan, Kelvin CK and Loy, Chen Change},
title = {Exploring CLIP for Assessing the Look and Feel of Images},
booktitle = {AAAI},
year = {2023}
}
This project is licensed under NTU S-Lab License 1.0. Redistribution and use should follow this license.
This project is based on MMEditing and CLIP. Thanks for their awesome works.
If you have any questions, please feel free to reach out to me at [email protected].