
Semantic-Guided-Low-Light-Image-Enhancement

This is the official PyTorch implementation of our paper "Semantic-Guided Zero-Shot Learning for Low-Light Image/Video Enhancement".

Updates

Abstract

Low-light images challenge both human perception and computer vision algorithms. It is crucial to make algorithms robust enough to enlighten low-light images for computational photography and computer vision applications such as real-time detection and segmentation tasks. This paper proposes a semantic-guided zero-shot low-light enhancement network which is trained in the absence of paired images, unpaired datasets, and segmentation annotation. Firstly, we design an efficient enhancement factor extraction network using depthwise separable convolution. Secondly, we propose a recurrent image enhancement network for progressively enhancing the low-light image. Finally, we introduce an unsupervised semantic segmentation network for preserving the semantic information. Extensive experiments on various benchmark datasets and a low-light video demonstrate that our model outperforms the previous state-of-the-art qualitatively and quantitatively. We further discuss the benefits of the proposed method for low-light detection and segmentation.

Model Architecture

Click the following link to see the model architecture in PDF format.

Model Architecture

Sample Results

1. Low-Light Video Frames

From left to right, and from top to bottom: Dark, Retinex [1], KinD [2], EnlightenGAN [3], Zero-DCE [4], Ours.

2. Low-Light Images (Real-World)

From left to right, and from top to bottom: Dark, PIE [5], LIME [6], Retinex [1], MBLLEN [7], KinD [2], Zero-DCE [4], Ours.

Get Started

1. Requirements

2. Prepare Datasets

Testing Dataset

Training Dataset

NOTE: If you don't have a BaiduYun account, you can download both the training and the testing datasets via Google Drive.

After preparation, the data folders should look like this:

data/
├── test_data/
│   ├── lowCUT/
│   ├── BDD/
│   ├── Cityscapes/
│   ├── DICM/
│   ├── LIME/
│   ├── LOL/
│   ├── MEF/
│   ├── NPE/
│   └── VV/
└── train_data/
    └── ...

3. Training from Scratch

To train the model:

python train.py \
  --lowlight_images_path path/to/train_images \
  --snapshots_folder path/to/save_weights

Example (train from scratch):

python train.py \
  --lowlight_images_path data/train_data \
  --snapshots_folder weight/

4. Resume Training

To resume training from a checkpoint:

python train.py \
  --lowlight_images_path path/to/train_images \
  --snapshots_folder path/to/save_weights \
  --load_pretrain True \
  --pretrain_dir path/to/checkpoint.pth

Example (resume from Epoch99.pth):

python train.py \
  --lowlight_images_path data/train_data \
  --snapshots_folder weight/ \
  --load_pretrain True \
  --pretrain_dir weight/Epoch99.pth

5. Testing

NOTE: Please delete all readme.txt files in the data folder to avoid model inference errors.
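If you prefer not to remove them by hand, a minimal sketch like the following clears them out (it assumes the data/ layout shown above; this helper script is not part of the repository):

```python
# remove_readmes.py -- illustrative sketch, not part of the original repository.
# Deletes every readme.txt under data/ so only image files remain for inference.
from pathlib import Path

def remove_readmes(root: str = "data") -> None:
    removed = 0
    for txt in Path(root).rglob("readme.txt"):
        txt.unlink()  # delete the stray text file
        removed += 1
        print(f"removed {txt}")
    print(f"done: {removed} readme.txt file(s) removed")

if __name__ == "__main__":
    remove_readmes()
```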

To test the model:

python test.py \
  --input_dir path/to/your_input_images \
  --weight_dir path/to/pretrained_model.pth \
  --test_dir path/to/output_folder

Example:

python test.py \
  --input_dir data/test_data/lowCUT \
  --weight_dir weight/Epoch99.pth \
  --test_dir test_output

6. Testing on Videos

To test the model on a video (MP4 format), run the following in a terminal:

bash test_video.sh

There are five hyperparameters in demo/make_video.py for video testing. See the following explanation.
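demo/make_video.py is the source of truth for video testing. As a rough illustration of the frame-to-video step such a script performs, the sketch below stitches a folder of enhanced frames back into an MP4 with OpenCV; the folder name, FPS, and output path are assumptions, not the script's actual defaults.

```python
# make_video_sketch.py -- illustrative only; see demo/make_video.py for the real script.
# Stitches a folder of enhanced frames (PNG or JPG) back into an MP4 clip.
import glob
import cv2

def frames_to_video(frame_dir: str, out_path: str = "enhanced.mp4", fps: int = 30) -> None:
    frames = sorted(glob.glob(f"{frame_dir}/*.png")) or sorted(glob.glob(f"{frame_dir}/*.jpg"))
    if not frames:
        raise FileNotFoundError(f"no frames found in {frame_dir}")
    h, w = cv2.imread(frames[0]).shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for f in frames:
        writer.write(cv2.imread(f))  # frames must all share the same resolution
    writer.release()

if __name__ == "__main__":
    # Assumed paths: frames produced by test.py in test_output/, written to enhanced.mp4.
    frames_to_video("test_output", "enhanced.mp4", fps=30)
```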

Hyperparameters

| Name | Type | Default |
|------|------|---------|
| lowlight_images_path | str | data/train_data/ |
| lr | float | 1e-3 |
| weight_decay | float | 1e-3 |
| grad_clip_norm | float | 0.1 |
| num_epochs | int | 100 |
| train_batch_size | int | 6 |
| val_batch_size | int | 8 |
| num_workers | int | 4 |
| display_iter | int | 10 |
| snapshot_iter | int | 10 |
| scale_factor | int | 1 |
| snapshots_folder | str | weight/ |
| load_pretrain | bool | False |
| pretrain_dir | str | weight/Epoch99.pth |
| num_of_SegClass | int | 21 |
| conv_type | str | dsc |
| patch_size | int | 4 |
| exp_level | float | 0.6 |
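These names map naturally onto command-line flags of train.py. The argparse sketch below is a reconstruction from the table above (names, types, and defaults only), not the repository's actual parser:

```python
# hparams_sketch.py -- argparse reconstruction of the table above; the real definitions live in train.py.
import argparse

def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(description="Semantic-guided zero-shot low-light enhancement (sketch)")
    p.add_argument("--lowlight_images_path", type=str, default="data/train_data/")
    p.add_argument("--lr", type=float, default=1e-3)
    p.add_argument("--weight_decay", type=float, default=1e-3)
    p.add_argument("--grad_clip_norm", type=float, default=0.1)
    p.add_argument("--num_epochs", type=int, default=100)
    p.add_argument("--train_batch_size", type=int, default=6)
    p.add_argument("--val_batch_size", type=int, default=8)
    p.add_argument("--num_workers", type=int, default=4)
    p.add_argument("--display_iter", type=int, default=10)
    p.add_argument("--snapshot_iter", type=int, default=10)
    p.add_argument("--scale_factor", type=int, default=1)
    p.add_argument("--snapshots_folder", type=str, default="weight/")
    # note: argparse's type=bool treats any non-empty string as True; passing "--load_pretrain True" works
    p.add_argument("--load_pretrain", type=bool, default=False)
    p.add_argument("--pretrain_dir", type=str, default="weight/Epoch99.pth")
    p.add_argument("--num_of_SegClass", type=int, default=21)
    p.add_argument("--conv_type", type=str, default="dsc")  # "dsc" = depthwise separable convolution
    p.add_argument("--patch_size", type=int, default=4)
    p.add_argument("--exp_level", type=float, default=0.6)
    return p

if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args)
```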

TODO List

Others

Please contact zhengsh@kean.edu if you have any questions. This repository is heavily based on Zero-DCE. Thanks to its authors for sharing the code!

Citations

Please cite the following paper if you find this repository helpful.

@inproceedings{zheng2022semantic,
  title={Semantic-guided zero-shot learning for low-light image/video enhancement},
  author={Zheng, Shen and Gupta, Gaurav},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={581--590},
  year={2022}
}

References

[1] Wei, Chen, et al. "Deep Retinex decomposition for low-light enhancement." arXiv preprint arXiv:1808.04560 (2018).

[2] Zhang, Yonghua, Jiawan Zhang, and Xiaojie Guo. "Kindling the darkness: A practical low-light image enhancer." Proceedings of the 27th ACM international conference on multimedia. 2019.

[3] Jiang, Yifan, et al. "EnlightenGAN: Deep light enhancement without paired supervision." IEEE Transactions on Image Processing 30 (2021): 2340-2349.

[4] Guo, Chunle, et al. "Zero-reference deep curve estimation for low-light image enhancement." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.

[5] Fu, Xueyang, et al. "A probabilistic method for image enhancement with simultaneous illumination and reflectance estimation." IEEE Transactions on Image Processing 24.12 (2015): 4965-4977.

[6] Guo, Xiaojie, Yu Li, and Haibin Ling. "LIME: Low-light image enhancement via illumination map estimation." IEEE Transactions on Image Processing 26.2 (2016): 982-993.

[7] Lv, Feifan, et al. "MBLLEN: Low-Light Image/Video Enhancement Using CNNs." BMVC. 2018.
