Reading: CNNLF — Residual Convolutional Neural Network Based In-Loop Filter (AVS3 Codec Filtering)
8.66% and 8.75% BD-Rate Reduction on the Y Component Under RA and LD Configurations, Respectively
In this story, Residual Convolutional Neural Network Based In-Loop Filter with Intra and Inter Frames Processed Respectively for AVS3 (CNNLF), by Tencent, is presented. The proposed network is named CNNLF in the paper. I read this because I work on video coding research. In this paper:
- A deep residual convolutional neural network based in-loop filter is proposed to suppress compression artifacts for the third generation of Audio Video Standard (AVS3).
This is a paper in 2020 ICMEW. (Sik-Ho Tsang @ Medium)
- CNNLF: Network Architecture
- AVS3 Implementation
- Experimental Results
1. CNNLF: Network Architecture
- It is found that the residual block and residual-in-residual structure can obviously improve the capability of a plain network with little added complexity.
- Skip connections and residual learning are used to accelerate information transfer, and they also make the network focus more on the compression distortion.
- Apart from the head and the tail convolutional layers, the network contains M=10 residual blocks, where each layer has N=64 input channels and N output channels. Batch normalization layers are removed from the residual blocks.
- The YUV420 block is converted into a YUV444 block before being fed into the network.
- A QP map is also fed into the network together with the reconstructed frame.
- Thus, the proposed model can suppress compression-related artifacts at different QPs, with no need to train multiple models for multiple QP bands.
- A weighted average of the L1 losses over the Y, U, and V components is used as the training loss,
- where a1 is greater than a2 and a3, since the chroma components are smoother and converge more easily than the luma component.
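The architecture described above can be sketched as follows. This is a minimal, hedged sketch assuming standard 3×3 convolutions and ReLU activations, with the QP map concatenated as a fourth input channel; kernel sizes, activation choices, and the loss weight values are assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual block without batch normalization (N in/out channels)."""
    def __init__(self, n=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(n, n, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(n, n, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # local skip connection

class CNNLF(nn.Module):
    """Head conv -> M=10 residual blocks -> tail conv, with global residual."""
    def __init__(self, m=10, n=64):
        super().__init__()
        # Input: YUV444 reconstruction (3 ch) + QP map (1 ch) = 4 channels.
        self.head = nn.Conv2d(4, n, 3, padding=1)
        self.blocks = nn.Sequential(*[ResBlock(n) for _ in range(m)])
        self.tail = nn.Conv2d(n, 3, 3, padding=1)

    def forward(self, rec, qp_map):
        x = torch.cat([rec, qp_map], dim=1)
        f = self.blocks(self.head(x))
        # Global residual learning: the network predicts the compression
        # distortion, which is added back to the reconstruction.
        return rec + self.tail(f)

def weighted_l1_loss(pred, target, a=(0.7, 0.15, 0.15)):
    """Weighted average of per-component (Y, U, V) L1 losses.

    a1 > a2, a3 because chroma is smoother and converges more easily than
    luma; these particular weight values are illustrative assumptions.
    """
    l1 = (pred - target).abs().mean(dim=(0, 2, 3))  # per-channel L1
    w = torch.tensor(a, device=pred.device)
    return (w * l1).sum()
```

A single model trained this way can cover several QP bands, since the QP map conditions the filtering strength at inference time.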
2. AVS3 Implementation
- Two models are trained to process intra and inter frames respectively.
- (There is analysis/argument for the distortion difference between intra and inter frames. Please feel free to read the paper if interested.)
- HPM5.0 is used.
- DIV2K is used for training.
- The proposed in-loop filter replaces the traditional deblocking filter (DF) and sample adaptive offset (SAO) filters.
- Frame-level and CTU-level RDO are performed to decide whether the proposed filter is used or not.
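The CTU-level on/off decision can be sketched as a standard rate-distortion comparison, J = D + λ·R: the filtered block is kept only when its RD cost beats the unfiltered one. The function name, SSE distortion measure, and the one-bit flag cost are illustrative assumptions, not details from the paper.

```python
def ctu_rdo_decision(orig, unfiltered, filtered, lam, flag_bits=1):
    """Choose between the filtered and unfiltered CTU by RD cost J = D + lam*R.

    orig, unfiltered, filtered: flat sample lists for one CTU (illustrative).
    lam: Lagrange multiplier; flag_bits: assumed cost of the on/off flag.
    """
    def sse(a, b):
        # Sum of squared errors as the distortion term D.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # Signalling the flag costs roughly one bit either way, so the rate
    # term is equal and the choice reduces to comparing distortions.
    j_off = sse(orig, unfiltered) + lam * flag_bits
    j_on = sse(orig, filtered) + lam * flag_bits
    return ("on", filtered) if j_on < j_off else ("off", unfiltered)
```

Frame-level RDO works the same way, accumulating the cost over all CTUs before deciding whether the filter is enabled for the whole frame.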
3. Experimental Results
3.1. BD-Rate Reduction
- Under both RA and LD configurations, the proposed method achieves a larger improvement than the other methods, with moderate model size and computational complexity.
3.2. Subjective Quality
- The proposed method also improves subjective quality.
3.3. Generalization Ability
- Thanks to the QP map, the proposed model can suppress compression artifacts related to different QPs, with no need to train multiple models for multiple QP bands.
This is the 3rd story in this month.
JPEG [ARCNN] [RED-Net] [DnCNN] [Li ICME’17] [MemNet] [MWCNN]
HEVC [Lin DCC’16] [IFCNN] [VRCNN] [DCAD] [MMS-net] [DRN] [Lee ICCE’18] [DS-CNN] [CNNF] [RHCNN] [VRCNN-ext] [S-CNN & C-CNN] [MLSDRN] [ARTN] [Double-Input CNN] [CNNIF & CNNMC] [B-DRRN] [Residual-VRN] [Liu PCS’19] [DIA_Net] [RRCNN] [QE-CNN] [Jia TIP’19] [EDCNN] [VRCNN-BN] [MACNN]
AVS3 [Lin PCS’19] [CNNLF]
VVC [AResNet] [Lu CVPRW’19] [Wang APSIPA ASC’19] [ADCNN] [DRCNN]