Reading: IDBP-CNN-IA — Image-Adapted Denoising CNNs: Incorporating External and Internal Learning (Super Resolution & Image Restoration)


Sik-Ho Tsang
4 min read · Jul 25, 2020

In this story, Super-Resolution via Image-Adapted Denoising CNNs: Incorporating External and Internal Learning (IDBP-CNN-IA), by Tel Aviv University, is briefly presented.

  • External learning: the CNN is trained on an external dataset, and the trained CNN is then applied to images outside that dataset.
  • Internal learning: the CNN is trained using only the current image.
  • In this paper, IDBP-CNN-IA combines both approaches to achieve better results.

This is a letter published in 2019 IEEE Signal Processing Letters (SPL), where SPL has a high impact factor of 3.268. (Sik-Ho Tsang @ Medium)


  1. IDBP-CNN-IA Overall Scheme
  2. Experimental Results

1. IDBP-CNN-IA Overall Scheme

1.1. With External Learning: IDBP-CNN

  • IDBP: Iterative Denoising and Backward Projections.
  • IDBP-CNN: A set of CNN denoisers proposed and trained in IRCNN.
  • This set is composed of 25 CNNs, each trained for a different noise level; together they span the noise-level range [0, 50].
  • The IDBP-CNN uses a fixed number of 30 iterations.
  • In each iteration, a suitable CNN denoiser (i.e., the one associated with the noise level σe + δk) is used. After 30 iterations, an estimate of the high-resolution image x is obtained.
  • (There is a section talking about the iteration optimization problem which is highly related to signal processing, please feel free to read the paper if interested.)
  • (Please also feel free to read IRCNN if interested.)
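The iterate-and-denoise loop above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `back_project` and the `denoisers` list are hypothetical stand-ins, and the mapping from a noise level to one of the 25 IRCNN denoisers is an assumed linear binning over [0, 50].

```python
import numpy as np

NUM_DENOISERS = 25    # each trained for a different noise level in [0, 50]
NUM_ITERATIONS = 30   # fixed iteration count used by IDBP-CNN

def select_denoiser(sigma):
    """Assumed mapping: clip the effective noise level to [0, 50] and
    bin it linearly onto the 25 denoiser indices."""
    level = min(max(sigma, 0.0), 50.0)
    return int(round(level / 50.0 * (NUM_DENOISERS - 1)))

def idbp_sketch(y, denoisers, back_project, sigma_e, delta_k):
    """One possible shape of the IDBP loop: back-project for observation
    consistency, then denoise with the CNN matched to sigma_e + delta_k."""
    x = y.copy()
    for _ in range(NUM_ITERATIONS):
        x = back_project(x, y)                    # hypothetical projection step
        idx = select_denoiser(sigma_e + delta_k)  # denoiser for this noise level
        x = denoisers[idx](x)                     # apply the matched denoiser
    return x
```

With identity denoisers and a pass-through projection, the loop simply returns the input, which makes the control flow easy to verify in isolation.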

1.2. With Also Internal Learning: IDBP-CNN-IA

  • IA: Image-Adaptive
  • In IDBP-CNN-IA, a single change is made: the CNN denoisers are obtained by fine-tuning the pre-trained denoisers using the LR image.
  • Patches with side length uniformly chosen from {34, 40, 50} are extracted from the LR image y; these serve as the "ground truth".
  • To enrich this "training set", data augmentation is performed: downscaling y to 0.9 of its size with probability 0.5, mirror reflections in the vertical and horizontal directions with uniform probability, and 4 rotations {0°, 90°, 180°, 270°}, again with uniform probability.
  • The L1 loss is used.
  • The mini-batch size is 32.
  • 320 iterations are used.
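The fine-tuning data pipeline above can be sketched as follows. This is only an illustration of the sampling and augmentation scheme as described: the nearest-neighbour downscaling is a stand-in for whatever resampling the authors actually use, and the function names are hypothetical.

```python
import random
import numpy as np

PATCH_SIZES = (34, 40, 50)   # patch side lengths, sampled uniformly

def augment(img, rng):
    """Augmentation as described in the paper: 0.9x downscale with
    probability 0.5, random mirrors, and a random 90-degree rotation."""
    if rng.random() < 0.5:
        h, w = img.shape[:2]
        # nearest-neighbour stand-in for the actual downscaling
        rows = np.linspace(0, h - 1, int(h * 0.9)).astype(int)
        cols = np.linspace(0, w - 1, int(w * 0.9)).astype(int)
        img = img[np.ix_(rows, cols)]
    if rng.random() < 0.5:
        img = img[:, ::-1]               # horizontal mirror
    if rng.random() < 0.5:
        img = img[::-1, :]               # vertical mirror
    return np.rot90(img, k=rng.randrange(4))  # rotation from {0, 90, 180, 270}

def sample_patch(lr_image, rng):
    """Extract one 'ground truth' patch of a uniformly chosen size from y."""
    img = augment(lr_image, rng)
    size = rng.choice(PATCH_SIZES)
    h, w = img.shape[:2]
    top = rng.randrange(h - size + 1)
    left = rng.randrange(w - size + 1)
    return img[top:top + size, left:left + size]

def l1_loss(pred, target):
    """L1 loss used for the 320 fine-tuning iterations."""
    return np.abs(pred - target).mean()
```

In an actual fine-tuning run, 32 such patches would form each mini-batch, with a synthetically degraded copy of the patch as the network input and the patch itself as the target.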

1.3. Internal Learning by Fine-Tuning

  • The fine-tuning time for a single denoiser is small and independent of the image size and the desired SR scale factor.
  • However, if the fine-tuning is done for every denoiser, the inference run-time becomes very large.
  • For this reason, in the reported results, authors fine-tune only the last two CNN denoisers, and thus only moderately increase the inference run-time compared to the baseline IDBP-CNN.
Figure: (a) SR ×2 with bicubic kernel; (b) SR ×3 with Gaussian kernel, on Set5
  • IDBP-CNN: Obtains the lowest average PSNR.
  • IDBP-CNN-IA-earlier: Fine-tuning is also performed for the denoisers used in earlier iterations.
  • This configuration significantly increases run-time and improves the results (as expected), but not always significantly, presumably because the high-noise-level denoisers used in early iterations mainly improve coarse details.
  • IDBP-CNN-IA: Fine-tuning only at the last two CNN denoisers.
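The run-time trade-off above comes down to how many distinct denoisers the final iterations touch. A minimal sketch, assuming a hypothetical decreasing noise-level schedule and the same linear binning onto 25 denoisers:

```python
import numpy as np

def noise_schedule(sigma_start, sigma_end, steps=30):
    """A hypothetical, monotonically decreasing noise-level schedule
    over the 30 IDBP iterations (the paper's actual schedule may differ)."""
    return np.linspace(sigma_start, sigma_end, steps)

def denoisers_to_finetune(schedule, num_levels=25, num_last=2):
    """Distinct denoiser indices hit by the final `num_last` iterations --
    the only denoisers IDBP-CNN-IA fine-tunes, keeping run-time moderate."""
    idx = np.round(np.clip(schedule, 0, 50) / 50 * (num_levels - 1)).astype(int)
    return sorted(set(idx[-num_last:].tolist()))
```

Fine-tuning at most two denoisers instead of all 25 is what keeps the image-adapted variant only moderately slower than the baseline IDBP-CNN.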

2. Experimental Results

2.1. Ideal Cases

Super-resolution results (average PSNR in dB) for ideal (noiseless) observation model with bicubic and Gaussian downscaling kernels.
  • IDBP-CNN-IA outperforms all other model-flexible methods, such as IRCNN and ZSSR, as shown above. It also obtains the overall best results in the Gaussian kernel case.
  • Regarding inference run-time, IDBP-CNN requires 20s per image. Its image-adapted version requires 100s, which is only a moderate increase and is significantly faster than ZSSR, which requires 150s in its fastest version.

2.2. Non-Ideal Cases

Super-resolution results (average PSNR in dB) for 8 estimated (inexact) non-ideal downscaling kernels and scale factor of 2.
  • IDBP-CNN performs well, and its IA version further improves on it, outperforming EDSR+, RCAN, IRCNN, and ZSSR.

2.3. Low-Quality Real LR Images

Visual Comparison
  • By making the CNN denoisers image-adaptive (IA), a more accurate image reconstruction with fewer artifacts is obtained.

This is the 22nd story this month.


