Summary: My Paper Reading Lists, Tutorials & Sharings

From Image Classification, Object Detection, Natural Language Processing (NLP), Self-Supervised Learning, Semi-Supervised Learning, Vision-Language, Generative Adversarial Network (GAN) to …

Sik-Ho Tsang
11 min read · Mar 15, 2020

Since the full list is too long to post in each story, my paper readings, tutorials, and sharings are collected here for convenience, and this list will be updated from time to time.

Actually, I only write about what I've learnt. Reading a paper can take hours or days; sometimes it is quite a luxury to read one at all. I hope to dig out the important points of each paper, or to help you read papers at a faster pace. If there are papers you're particularly interested in, it's better to read the originals for more detailed explanations. If anything here is wrong, please also tell me. Thank you. (Sik-Ho Tsang @ Medium)

  • Thanks everyone for reading my stories.
  • Your claps are also important for me to continue writing!

1. Computer Vision

1.1. Image Classification

1989-1998 [LeNet] 2010–2014 [ReLU] [AlexNet & CaffeNet] [Dropout] [Maxout] [NIN] [ZFNet] [SPPNet] [Distillation] 2015 [VGGNet] [Highway] [PReLU-Net] [STN] [DeepImage] [GoogLeNet / Inception-v1] [BN-Inception / Inception-v2] [All-CNN] [RCNN] 2016 [SqueezeNet] [Inception-v3] [ResNet] [Pre-Activation ResNet] [RiR] [Stochastic Depth] [WRN] [Trimps-Soushen] [GELU] [Layer Norm, LN] [Weight Norm, WN] [ELU] [Veit NIPS’16] 2017 [Inception-v4] [Xception] [MobileNetV1] [Shake-Shake] [Cutout] [FractalNet] [PolyNet] [ResNeXt] [DenseNet] [PyramidNet] [DRN] [DPN] [Residual Attention Network] [IGCNet / IGCV1] [Deep Roots] [CWN] [RevNet] 2018 [RoR] [DMRNet / DFN-MR] [MSDNet] [ShuffleNet V1] [SENet] [NASNet] [MobileNetV2] [CondenseNet] [IGCV2] [IGCV3] [FishNet] [SqueezeNext] [ENAS] [PNASNet] [ShuffleNet V2] [BAM] [CBAM] [MorphNet] [NetAdapt] [mixup] [DropBlock] [Group Norm (GN)] [Pelee & PeleeNet] [DLA] [Swish] [CoordConv] 2019 [ResNet-38] [AmoebaNet] [ESPNetv2] [MnasNet] [Single-Path NAS] [DARTS] [ProxylessNAS] [MobileNetV3] [FBNet] [ShakeDrop] [CutMix] [MixConv] [EfficientNet] [ABN] [SKNet] [CB Loss] [AutoAugment, AA] [BagNet] [Stylized-ImageNet] [FixRes] [SASA] [SE-WRN] [SGELU] [ImageNet-V2] [Bag of Tricks, ResNet-D] [PBA] [Fast AutoAugment (FAA)] [Switch Norm (SN)] [SNIP] 2020 [Random Erasing (RE)] [SAOL] [AdderNet] [FixEfficientNet] [BiT] [RandAugment] [ImageNet-ReaL] [ciFAIR] [ResNeSt] [Batch Augment, BA] [Mish] [WS, BCN] [AdvProp] [RegNet] [SAN] [Cordonnier ICLR’20] [ICMLM] [Self-Training] [SupCon] [Open Images] [Axial-DeepLab] [GhostNet] [ECA-Net] [MobileNeXt] [Dynamic ReLU] [Teacher Assistant (TA)] 2021 [Learned Resizer] [Vision Transformer, ViT] [ResNet Strikes Back] [DeiT] [EfficientNetV2] [MLP-Mixer] [T2T-ViT] [Swin Transformer] [CaiT] [ResMLP] [ResNet-RS] [NFNet] [PVTv1] [CvT] [HaloNet] [TNT] [CoAtNet] [Focal Transformer] [TResNet] [CPVT] [Twins] [Exemplar-v1, Exemplar-v2] [RepVGG] [V-MoE] [ImageNet-21K Pretraining] [Do You Even Need Attention?] 
[ResTv1] [ViL] [ReLabel] [MixToken / LV-ViT] [gMLP] [MViTv1] [CLIP] [GFNet] [Res2Net] [Sharpness-Aware Minimization (SAM)] [Transformer-LS] [R-Drop] [ParNet] [LeViT] [BotNet] [CrossViT] [Tuli CogSci’21] [Coordinate Attention (CA)] [DeepViT] [PiT] [HRNetV2, HRNetV2p] [Raghu NeurIPS’21] [SoftPool] [Complement Cross Entropy (CCE)] 2022 [ConvNeXt V1] [PVTv2] [ViT-G] [AS-MLP] [ResTv2] [CSWin Transformer] [Pale Transformer] [Sparse MLP] [MViTv2] [S²-MLP] [CycleMLP] [MobileOne] [GC ViT] [VAN] [ACMix] [CVNets] [MobileViTv1] [RepMLP] [RepLKNet] [MetaFormer, PoolFormer] [Swin Transformer V2] [hMLP] [DeiT III] [GhostNetV2] [C-GhostNet & G-GhostNet] [AlterNet] [DHVT] [CrossFormer] [DynaMixer] [FocalNet] [WideNet] [CMT] [EfficientFormer] [MobileViTv3] [Model Soups] [ViT-SAM & Mixer-SAM] 2023 [Vision Permutator (ViP)] [ConvMixer] [CrossFormer++] [FastViT] [EfficientFormerV2] [MobileViTv2] [ConvNeXt V2] [SwiftFormer] [OpenCLIP] [SLaK] [EfficientViT] [Log RGB] 2024 [FasterViT] [CAS-ViT] [TinySaver] [Fast Vision Transformer (FViT)] [MogaNet] [RDNet] [Logarithmic Lenses]

1.2. Unsupervised/Self-Supervised Learning

1993 [de Sa NIPS’93] 2008–2010 [Stacked Denoising Autoencoders] 2014 [Exemplar-CNN] 2015 [Context Prediction] [Wang ICCV’15] 2016 [Context Encoders] [Colorization] [Jigsaw Puzzles] 2017 [L³-Net] [Split-Brain Auto] [Motion Masks] [Doersch ICCV’17] [TextTopicNet] [Counting] 2018 [RotNet/Image Rotations] [DeepCluster] [CPCv1] [Instance Discrimination] [Spot Artifacts] 2019 [Ye CVPR’19] [S⁴L] [Goyal ICCV’19] [Rubik’s Cube] [AET] [Deep InfoMax (DIM)] [AMDIM] [Local Aggregation (LA)] [DeeperCluster] 2020 [CMC] [MoCo] [CPCv2] [PIRL] [SimCLR] [MoCo v2] [iGPT] [BoWNet] [BYOL] [SimCLRv2] [BYOL+GN+WS] [CompRess] [MoCo v2+Distillation] [SeLa] [SwAV] 2021 [MoCo v3] [SimSiam] [DINO] [Exemplar-v1, Exemplar-v2] [Barlow Twins] [W-MSE] [SimSiam+AL] [BYOL+LP] [SEED] [SEER] [SplitMask] [SimReg] [MoCLR, DnC] 2022 [BEiT] [BEiT V2] [Masked Autoencoders (MAE)] [DiT] [SimMIM] [LDBM] [data2vec] [SEER 10B, RG-10B] [iBOT]

1.3. Pretraining or Weakly/Semi-Supervised Learning

2004 [Entropy Minimization, EntMin] 2013 [Pseudo-Label (PL)] 2015 [Ladder Network, Γ-Model] 2016 [Sajjadi NIPS’16] [Improved DCGAN, Inception Score] 2017 [Mean Teacher] [PATE & PATE-G] [Π-Model, Temporal Ensembling] 2018 [WSL] [Oliver NeurIPS’18] 2019 [VAT] [Billion-Scale] [Label Propagation] [Rethinking ImageNet Pre-training] [MixMatch] [SWA & Fast SWA] [S⁴L] [Kolesnikov CVPR’19] 2020 [BiT] [Noisy Student] [SimCLRv2] [UDA] [ReMixMatch] [FixMatch] [Self-Training] 2021 [Curriculum Labeling (CL)] [Su CVPR’21] [Exemplar-v1, Exemplar-v2] [SimPLE] [BYOL+LP]

1.4. Object Detection

2014 [OverFeat] [R-CNN] 2015 [Fast R-CNN] [Faster R-CNN] [MR-CNN & S-CNN] [DeepID-Net] 2016 [OHEM] [CRAFT] [R-FCN] [ION] [MultiPathNet] [Hikvision] [GBD-Net / GBD-v1 & GBD-v2] [SSD] [YOLOv1] 2017 [NoC] [G-RMI] [TDM] [DSSD] [YOLOv2 / YOLO9000] [FPN] [RetinaNet] [DCNv1] [Light-Head R-CNN] [DSOD] [CoupleNet] 2018 [YOLOv3] [Cascade R-CNN] [MegDet] [StairNet] [RefineDet] [CornerNet] [Pelee & PeleeNet] [SiLU] [FRN, SCUT-HEAD] 2019 [DCNv2] [Rethinking ImageNet Pre-training] [GRF-DSOD & GRF-SSD] [CenterNet] [Grid R-CNN] [NAS-FPN] [ASFF] [Bag of Freebies] [VoVNet/OSANet] [FCOS] [GIoU] 2020 [EfficientDet] [CSPNet] [YOLOv4] [SpineNet] [DETR] [Mish] [PP-YOLO] [Open Images] [YOLOv5] [CornerNet-Lite] [ATSS] 2021 [Scaled-YOLOv4] [PVTv1] [Deformable DETR] [HRNetV2, HRNetV2p] [MDETR] [TPH-YOLOv5] [YOLOX] [TOOD] [ViT-YOLO] [YOLOS] [PP-YOLOv2] [CenterNet2] [ORE] 2022 [Pix2Seq] [MViTv2] [SF-YOLOv5] [GLIP] [TPH-YOLOv5++] [YOLOv6] [ViDT] [ViTDet] [PP-YOLOE] [YOLO-Ret] 2023 [YOLOv7] [YOLOv8] [Lite DETR] [YOLOv8 for Helmet Violation Detection] [YOLOv8 for Flying Object Detection] 2024 [YOLOv9] [YOLOv10] [RT-DETR]

1.5. Semantic Segmentation / Scene Parsing / Instance Segmentation / Panoptic Segmentation

2014–2015 [SDS] [FCN] [DeconvNet] [DeepLabv1 & DeepLabv2] [CRF-RNN] [SegNet] [DPN] [Hypercolumn] [DeepMask] [DecoupledNet] [Weakly-Supervised EM] 2016 [ENet] [ParseNet] [DilatedNet] [Cityscapes] [SharpMask] [MultiPathNet] [MNC] [InstanceFCN] [TransferNet] 2017 [DRN] [RefineNet] [ERFNet] [GCN] [PSPNet] [DeepLabv3] [LC] [FC-DenseNet] [IDW-CNN] [DIS] [SDN] [Cascade-SegNet & Cascade-DilatedNet] [FCIS] [Mask R-CNN] [SPN] [FCN + Outfit Filter + CRF] 2018 [ESPNet] [ResNet-DUC-HDC] [DeepLabv3+] [PAN] [DFN] [EncNet] [DLA] [Non-Local Neural Networks] [UPerNet] [PSANet] [Probabilistic U-Net] [MaskLab] [PANet] [Mask X R-CNN] [PersonLab] [ResUNet] [TernausNet] 2019 [ResNet-38] [C3] [ESPNetv2] [ADE20K] [Semantic FPN, Panoptic FPN] [Auto-DeepLab] [DANet] [Improved U-Net] [Gated-SCNN] [Recurrent U-Net (R-UNet)] [EFCN] [DCNv2] [Rethinking ImageNet Pre-training] [HTC] [YOLACT] [MS R-CNN] [PS] [UPSNet] [DeeperLab] [Bellver CVPRW’19] 2020 [DRRN Zhang JNCA’20] [Trans10K, TransLab] [CCNet] [Open Images] [DETR] [Panoptic-DeepLab] [Axial-DeepLab] [Zhang JNCA’20] [CenterMask] 2021 [PVTv1] [SETR] [Trans10K-v2, Trans2Seg] [Copy-Paste] [HRNetV2, HRNetV2p] [Lite-HRNet] 2022 [YOLACT++] 2023 [Segment Anything Model (SAM)] [FastSAM] [MobileSAM]

1.6. Face Recognition

2005 [Chopra CVPR’05] 2010 [ReLU] 2014 [DeepFace] [DeepID2] [CASIANet] 2015 [FaceNet] 2016 [N-pair-mc Loss]

1.7. Human Pose Estimation

2014–2015 [DeepPose] [Tompson NIPS’14] [Tompson CVPR’15] 2016 [CPM] [FCGN] [IEF] [DeepCut & DeeperCut] [Newell ECCV’16 & Newell POCV’16] 2017 [G-RMI] [CMUPose & OpenPose] [Mask R-CNN] [RMPE] 2018 [PersonLab] [CPN] 2019 [OpenPose] [HRNetV1] 2020 [A-HRNet] [Dynamic ReLU] 2021 [HRNetV2, HRNetV2p] [Lite-HRNet]

1.8. Video Classification / Action Recognition

2014 [Deep Video] [Two-Stream ConvNet] 2015 [DevNet] [C3D] [LRCN] 2016 [TSN] 2017 [Temporal Modeling Approaches] [4 Temporal Modeling Approaches] [P3D] [I3D] [Something Something] 2018 [Non-Local Neural Networks] [S3D, S3D-G] 2019 [VideoBERT] [Moments in Time] 2021 [MViTv1] [MViTv2] [SoftPool]

1.9. Weakly Supervised Object Localization (WSOL)

2014 [Backprop] 2016 [CAM] 2017 [Grad-CAM] [Hide-and-Seek] 2018 [Grad-CAM++] [ACoL] [SPG] 2019 [CutMix] [ADL] 2020 [Evaluating WSOL Right] [SAOL]

1.10. Visualization

2002 [SNE] 2006 [Autoencoder] [DrLIM] 2007 [UNI-SNE] 2008 [t-SNE] 2016 [CAM] 2017 [Grad-CAM] 2018 [Grad-CAM++] [Loss Landscape]

1.11. Data-Centric AI

2021 [SimSiam+AL] [BYOL+LP] 2022 [Small is the New Big] [DataPerf]

2. Natural Language Processing (NLP)

2.1. Language Model (LM)

2007 [Bengio TNN’07] 2013 [Word2Vec] [NCE] [Negative Sampling] 2014 [GloVe] [Doc2Vec] [DT-RNN, DOT-RNN, sRNN] 2015 [Skip-Thought] [IRNN] [ConvLSTM] 2016 [GCNN/GLU] [context2vec] [Jozefowicz arXiv’16] [LSTM-Char-CNN] [Layer Norm, LN] 2017 [TagLM] [CoVe] [MoE] [fastText] 2018 [GLUE] [T-DMCA] [GPT, GPT-1] [ELMo] 2019 [T64] [Transformer-XL] [BERT] [RoBERTa] [GPT-2] [DistilBERT] [MT-DNN] [Sparse Transformer] [SuperGLUE] [FAIRSEQ] [XLNet] [XLM] [UniLM] [ERNIE 1.0] [SciBERT] 2020 [ALBERT] [T5] [Pre-LN Transformer] [MobileBERT] [TinyBERT] [BART] [Longformer] [ELECTRA] [Megatron-LM] [SpanBERT] [UniLMv2] [DeFINE] [BIGBIRD] [ReGLU, GEGLU & SwiGLU] [ERNIE 2.0] [XLM-R] [ERNIE-Doc] [Linformer] [MiniLM] 2021 [Performer] [gMLP] [Roformer] [PPBERT] [DeBERTa] [DeLighT] [Transformer-LS] [R-Drop] [mT5] [ERNIE 3.0] [nmT5] [C4] [MMLU] [FastFormer] 2022 [GLM] [Switch Transformers] [WideNet] [MoEBERT] [X-MoE] [sMLP] [LinkBERT, BioLinkBERT] [AlphaCode] [Block-wise Dynamic Quantization] 2023 [ERNIE-Code] [Grouped-Query Attention (GQA)]

2.2. Large Language Model (LLM)

2020 [GPT-3] 2021 [Jurassic-1] [Gopher] [Codex] [ERNIE 3.0 Titan] 2022 [GPT-NeoX-20B] [GPT-3.5, InstructGPT, ChatGPT] [MT-NLG 530B] [Chinchilla] [PaLM] [AlexaTM] [BLOOM] [AlexaTM 20B] [OPT] [LaMDA] [Galactica] [DeepSpeed-MoE] [GLaM] 2023 [GPT-4] [LLaMA] [Koala] [BloombergGPT] [GLM-130B] [UL2] [PaLM 2] [Llama 2] [MultiMedQA, HealthSearchQA, Med-PaLM] [Med-PaLM 2] [Flan 2022, Flan-T5] [AlphaCode 2] [Mistral 7B] [Alpaca] [Inflection-1] 2024 [Nemotron-4 15B]

2.3. LM Tuning / Prompting

2019 [BERT for Text Classification] 2020 [Human Feedback Model] 2021 [T5+LM, Prompt Tuning] [Prefix-Tuning] 2022 [GPT-3.5, InstructGPT] [LoRA] [Chain-of-Thought Prompting] [T0] [FLAN] [UL2R, U-PaLM] [Flan-PaLM] [Tk-INSTRUCT] 2023 [LIMA] [SELF-INSTRUCT] [Self-Consistency] [Med-PaLM 2] [QLoRA, Guanaco] [Alpaca] 2024 [LLaMA-Adapter]

2.4. Neural Machine Translation (NMT)

2013 [Translation Matrix] 2014 [Seq2Seq] [RNN Encoder-Decoder] 2015 [Attention Decoder/RNNSearch] 2016 [GNMT] [ByteNet] [Deep-ED & Deep-Att] [Byte Pair Encoding (BPE)] [Back Translation] 2017 [ConvS2S] [Transformer] [MoE] [GMNMT] [CoVe] [PBT] 2018 [Shaw NAACL’18] [CSLS] [Back Translation+Sampling] [UNMT] [SentencePiece] 2019 [AdaNorm] [GPT-2] [Pre-Norm Transformer] [FAIRSEQ] [XLM] [Multi-Query Attention (MQA)] 2020 [Batch Augment, BA] [GPT-3] [T5] [Pre-LN Transformer] [OpenNMT] [DeFINE] [MUTE] [BERTScore] 2021 [ResMLP] [GPKD] [Roformer] [DeLighT] [R-Drop] [GShard] 2022 [DeepNet] [PaLM] [BLOOM] [AlexaTM 20B] 2023 [Grouped-Query Attention (GQA)]

2.5. Summarization

2018 [T-DMCA] 2020 [Human Feedback Model] 2022 [GPT-3.5, InstructGPT] 2023 [DetectGPT]

2.6. Sentence Embedding / Dense Text Retrieval

2017 [InferSent] 2018 [Universal Sentence Encoder (USE)] 2019 [Sentence-BERT (SBERT)] 2020 [Multilingual Sentence-BERT] [Retrieval-Augmented Generation (RAG)] [Dense Passage Retriever (DPR)] [IS-BERT] 2021 [Fusion-in-Decoder] [Augmented SBERT (AugSBERT)] [SimCSE] [ANCE] 2022 [E5] 2024 [Multilingual E5] [E5 Mistral 7B]

2.7. Question Answering (QA)

2016 [SQuAD 1.0/1.1] 2017 [Dynamic Coattention Network (DCN)] 2018 [SQuAD 2.0] [QuAC]

3. Time-Series / Speech / Audio / Acoustic Signal Processing

3.1. Acoustic Model / Automatic Speech Recognition (ASR) / Speech-to-Text (STT)

1991 [MoE] 1997 [Bidirectional RNN (BRNN)] 2005 [Bidirectional LSTM (BLSTM)] 2006 [Connectionist Temporal Classification (CTC)] 2012 [TED-LIUM] 2013 [SGD+CR] [Leaky ReLU] 2014 [GRU] [Deep KWS] [TED-LIUM 2] 2015 [LibriSpeech] [ARSG] 2016 [Listen, Attend and Spell (LAS)] [WaveNet] [Wav2Letter] 2017 [CNN for KWS] 2018 [Speech Commands] [TED-LIUM 3] 2019 [SpecAugment] [Cnv Cxt Tsf] 2020 [FAIRSEQ S2T] [PANNs] [Conformer] [SpecAugment & Adaptive Masking] [Multilingual LibriSpeech (MLS)] 2023 [Whisper]

3.2. Sound Classification / Audio Tagging / Sound Event Detection (SED)

2015 [ESC-50, ESC-10, ESC-US] 2017 [AudioSet / Audio Set] [M3, M5, M11, M18, M34-res (DaiNet)] [Sample-Level DCNN (LeeNet)] 2021 [Audio Spectrogram Transformer (AST)] 2022 [CyTex]

3.3. Self-Supervised Learning

2019 [wav2vec] 2020 [wav2vec 2.0] 2021 [HuBERT] 2023 [Masked Modeling Duo (M2D)]

3.4. Semi-Supervised Learning

2020 [Noisy Student Training (NST)] [Conformer XL & Conformer XXL]

3.5. Time Series Classification

2020 [InceptionTime]

4. Foundation Model, Large Multimodal Model (LMM), or Multimodal Signal Processing (Audio / Vision / Language)

4.1. Foundation Model (Text, Visual & Audio)

2023 [Gemini]

4.2. Visual/Vision/Video Language Model (VLM)

2017 [Visual Genome (VG)] 2018 [Conceptual Captions] 2019 [VideoBERT] [VisualBERT] [LXMERT] [ViLBERT] 2020 [ConVIRT] [VL-BERT] [OSCAR] 2021 [CLIP] [VinVL] [ALIGN] [VirTex] [ALBEF] [Conceptual 12M (CC12M)] [MDETR] [Florence] 2022 [FILIP] [Wukong] [LiT] [Flamingo] [FLAVA] [SimVLM] [VLMo] [BEiT-3] [GLIP] [CoOp] [CoCoOp] 2023 [GPT-4] [GPT-4V(ision)] [MultiModal-CoT] [CoCa] [Florence-2] [PaLI] [PaLI-X] [OpenCLIP] 2024 [MiniGPT-4]

4.3. Text-to-Image Generation

2016 [GAN-CLS, GAN-INT, GAN-CLS-INT] 2017 [StackGAN-v1] 2018 [StackGAN++, StackGAN-v2] 2021 [DALL·E]

4.4. Image Captioning

2015 [m-RNN] [R-CNN+BRNN] [Show and Tell/NIC] [Show, Attend and Tell] [LRCN] 2017 [Visual N-Grams] 2018 [Conceptual Captions]

4.5. Video Captioning

2015 [LRCN] 2017 [Something Something] 2019 [VideoBERT]

5. Image Generation Related

5.1. Generative Adversarial Network (GAN)

Image Synthesis: 2014 [GAN] [CGAN] 2015 [LAPGAN] 2016 [AAE] [DCGAN] [CoGAN] [VAE-GAN] [InfoGAN] [Improved DCGAN, Inception Score] 2017 [SimGAN] [BiGAN] [ALI] [LSGAN] [EBGAN] [PBT] [WGAN] [WGAN-GP] [TTUR, Fréchet Inception Distance (FID)] [StackGAN-v1] [AC-GAN] 2018 [SNGAN] [StackGAN++, StackGAN-v2] [Progressive GAN] 2019 [SAGAN] [BigGAN] [BigBiGAN] 2020 [GAN Overview]
Text-to-Image Generation: 2016 [GAN-CLS, GAN-INT, GAN-CLS-INT] 2017 [StackGAN-v1] 2018 [StackGAN++, StackGAN-v2]
Image-to-Image Translation: 2017 [Pix2Pix] [UNIT] [CycleGAN] 2018 [MUNIT] [StarGAN] [pix2pixHD] [SaGAN] [Mask Contrastive-GAN]
Style Transfer: 2016 [GAN-CLS, GAN-INT, GAN-CLS-INT] 2019 [StyleGAN]
Machine Translation: 2018 [UNMT]
Super Resolution: 2017 [SRGAN & SRResNet] [EnhanceNet] 2018 [ESRGAN]
Blur Detection: 2019 [DMENet]
Medical Imaging: 2018 [cGAN-AutoEnc & cGAN-Unet] 2019 [cGAN+AC+CAW] 2020 [cGAN JESWA’20]
Heart Sound Classification: 2020 [GAN for Normal Heart Sound Synthesis]
Snore Sound Classification: 2021 [Snore-GAN]
Camera Tampering Detection: 2019 [Mantini’s VISAPP’19]
Video Coding: 2018 [VC-LAPGAN] 2020 [Zhu TMM’20] 2021 [Zhong ELECGJ’21]

5.2. Image Generation

2018 [Image Transformer] 2021 [Performer]

5.3. Style Transfer

2016 [Artistic Style Transfer] [Image Style Transfer] [Perceptual Loss] [GAN-CLS, GAN-INT, GAN-CLS-INT] [Texture Network] [Instance Norm (IN)] 2017 [StyleNet] [AdaIN] 2019 [StyleGAN]

6. Image Reconstruction Related

6.1. Single Image Super Resolution (SISR)

2014–2016 [SRCNN] 2016 [FSRCNN] [VDSR] [ESPCN] [RED-Net] [DRCN] [Perceptual Loss] 2017 [DnCNN] [DRRN] [LapSRN & MS-LapSRN] [MemNet] [IRCNN] [WDRN / WavResNet] [SRDenseNet] [SRGAN & SRResNet] [SelNet] [CNF] [BT-SRN] [EDSR & MDSR] [EnhanceNet] 2018 [MWCNN] [MDesNet] [RDN] [SRMD & SRMDNF] [DBPN & D-DBPN] [RCAN] [ESRGAN] [CARN] [IDN] [ZSSR] [MSRN] [Image Transformer] 2019 [SR+STN] [IDBP-CNN-IA] [SRFBN] [OISR] 2020 [PRLSR] [CSFN & CSFN-M]

6.2. Image Restoration

2008 [Jain NIPS’08] 2016 [RED-Net] [GDN] 2017 [DnCNN] [MemNet] [IRCNN] [WDRN / WavResNet] 2018 [MWCNN] 2019 [IDBP-CNN-IA]

6.3. Video Super Resolution (VSR)

2017 [STMC / VESPCN] 2018 [VSR-DUF / DUF] 2019 [EDVR]

6.4. Video Frame Interpolation / Extrapolation

2016 [Mathieu ICLR’16] 2017 [AdaConv] [SepConv] 2020 [DSepConv] 2021 [SepConv++]


Written by Sik-Ho Tsang

PhD, Researcher. I share what I learn. :) Linktree: https://linktr.ee/shtsang for Twitter, LinkedIn, etc.
