Each image is treated as its own class and projected onto the unit hypersphere

Unsupervised Feature Learning via Non-Parametric Instance Discrimination
Instance Discrimination, by UC Berkeley / ICSI, Chinese University of Hong Kong, and Amazon Rekognition
2018 CVPR, Over 1100 Citations (Sik-Ho Tsang @ Medium)
Unsupervised Learning, Deep Metric Learning, Self-Supervised Learning, Semi-Supervised Learning, Image Classification, Object Detection

  • The authors start by asking a question…
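
To make the idea concrete, here is a minimal PyTorch sketch of non-parametric instance discrimination as I understand it: embeddings are L2-normalized onto the unit hypersphere and each image is scored against a memory bank of per-instance features with a temperature-scaled softmax. The function name, the 0.5 update momentum, and the direct cross-entropy over all instances are illustrative; the paper itself approximates this softmax with noise-contrastive estimation.

```python
import torch
import torch.nn.functional as F

def instance_discrimination_loss(features, indices, memory_bank, temperature=0.07):
    """Non-parametric softmax over instances (illustrative sketch).

    features:    (B, D) backbone embeddings for the current batch
    indices:     (B,) dataset indices of those images (each image = its own class)
    memory_bank: (N, D) buffer of stored per-image features (no gradient needed)
    """
    # Project embeddings onto the unit hypersphere.
    v = F.normalize(features, dim=1)
    bank = F.normalize(memory_bank, dim=1)

    # Similarity of each batch embedding to every instance in the dataset.
    logits = v @ bank.t() / temperature          # (B, N)

    # The "correct class" of image i is instance i itself.
    loss = F.cross_entropy(logits, indices)

    # Slowly update the memory bank with the fresh embeddings.
    with torch.no_grad():
        memory_bank[indices] = F.normalize(
            0.5 * memory_bank[indices] + 0.5 * v, dim=1)
    return loss
```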

Assign Pseudo Labels to Unlabeled Data Using Label Propagation

Toy example of label propagation on manifolds. Triangles denote labeled training data and circles denote unlabeled training data…

Label Propagation for Deep Semi-supervised Learning
Label Propagation, by Czech Technical University in Prague, and Univ Rennes
2019 CVPR, Over 200 Citations (Sik-Ho Tsang @ Medium)
Semi-Supervised Learning, Pseudo Label, Image Classification

  • A new iterative process is proposed, in which a transductive label propagation method is employed that is based…
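
For intuition, below is a small NumPy sketch of the classical diffusion-style label propagation that this line of work builds on: build a kNN affinity graph over the features, symmetrically normalize it, and iterate Z ← αSZ + (1 − α)Y. The actual paper runs this on CNN features, solves the linear system with conjugate gradient, and weights the resulting pseudo-labels by certainty; the function below, its parameter values, and the dense affinity matrix are only illustrative.

```python
import numpy as np

def propagate_labels(features, labels, alpha=0.99, k=50, n_iter=20):
    """Diffusion-style label propagation on a kNN graph (illustrative sketch).

    features: (N, D) embeddings of labeled + unlabeled samples
    labels:   (N,) class ids for labeled samples, -1 for unlabeled
    Returns soft pseudo-label scores of shape (N, C).
    """
    n = features.shape[0]
    classes = np.unique(labels[labels >= 0])

    # Cosine-similarity kNN affinity matrix with zero diagonal.
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ feats.T
    np.fill_diagonal(sim, 0)
    W = np.zeros_like(sim)
    for i in range(n):
        nn = np.argsort(sim[i])[-k:]
        W[i, nn] = np.maximum(sim[i, nn], 0)
    W = 0.5 * (W + W.T)                             # symmetrize

    # Symmetric normalization S = D^(-1/2) W D^(-1/2).
    d = W.sum(axis=1) + 1e-8
    S = W / np.sqrt(d[:, None] * d[None, :])

    # One-hot matrix Y for labeled points, zeros for unlabeled ones.
    Y = np.zeros((n, len(classes)))
    for j, c in enumerate(classes):
        Y[labels == c, j] = 1

    # Iterate Z <- alpha * S @ Z + (1 - alpha) * Y, approximating (I - alpha*S)^-1 Y.
    Z = Y.copy()
    for _ in range(n_iter):
        Z = alpha * (S @ Z) + (1 - alpha) * Y
    return Z
```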

Pseudo Labels for Unlabeled Data

Pseudo-Label for Unlabeled Data

Pseudo-Label: The Simple and Efficient Semi-Supervised Learning Method for Deep Neural Networks
Pseudo-Label (PL), by Nangman Computing
2013 ICLRW, Over 1500 Citations (Sik-Ho Tsang @ Medium)
Semi-Supervised Learning, Pseudo Label, Image Classification

  • Unlabeled data is labelled by a network trained with supervised learning; this is the so-called pseudo-labeling.
  • The network is then trained using…
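
As a rough PyTorch sketch of the training step (the function and argument names are illustrative, not the paper's code): the labeled batch gets the usual cross-entropy, the unlabeled batch is labeled with the network's own argmax predictions, and the two terms are combined with a weight α(t) that is ramped up over training.

```python
import torch
import torch.nn.functional as F

def pseudo_label_step(model, x_labeled, y_labeled, x_unlabeled, alpha_t):
    """One training step of pseudo-labeling (illustrative sketch).

    alpha_t: weight of the unlabeled term, typically ramped up over epochs.
    """
    # Supervised cross-entropy on the labeled batch.
    loss_sup = F.cross_entropy(model(x_labeled), y_labeled)

    # Pseudo-labels: the class with maximum predicted probability, no gradient.
    with torch.no_grad():
        pseudo = model(x_unlabeled).argmax(dim=1)

    # Treat the pseudo-labels as if they were true labels.
    loss_unsup = F.cross_entropy(model(x_unlabeled), pseudo)

    return loss_sup + alpha_t * loss_unsup
```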

Obtain Images Using Instagram Hashtags for Weakly Supervised Pretraining

Images Obtained Using an Instagram Hashtag: https://www.instagram.com/explore/tags/brownbear/

Exploring the Limits of Weakly Supervised Pretraining
WSL, by Facebook
2018 ECCV, Over 700 Citations (Sik-Ho Tsang @ Medium)
Weakly-Supervised Learning, Image Classification, Object Detection

  • Datasets are expensive to collect and annotate.
  • A unique study of transfer learning is conducted with large convolutional networks trained to predict hashtags on billions…
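
A hedged sketch of what hashtag pretraining could look like as a loss: each image's hashtags are turned into a soft target that spreads probability uniformly over its k tags, and a softmax cross-entropy is taken against it. The helper below is illustrative only; the paper's exact objective, hashtag vocabulary handling, and resampling scheme are not reproduced here.

```python
import torch
import torch.nn.functional as F

def hashtag_pretrain_loss(logits, hashtag_ids_per_image):
    """Soft-target cross-entropy over a hashtag vocabulary (illustrative sketch).

    logits: (B, H) scores over the hashtag vocabulary
    hashtag_ids_per_image: list of B lists, the hashtags attached to each image
    Each image's target distributes probability 1/k over its k hashtags.
    """
    B, H = logits.shape
    target = torch.zeros(B, H, device=logits.device)
    for i, tags in enumerate(hashtag_ids_per_image):
        target[i, tags] = 1.0 / len(tags)
    log_probs = F.log_softmax(logits, dim=1)
    return -(target * log_probs).sum(dim=1).mean()
```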

Teacher Student Model for Semi-Supervised Learning Using 1 Billion Unlabeled Images

Semi-Supervised Learning Procedures

Billion-Scale Semi-Supervised Learning for Image Classification
Billion-Scale, by Facebook AI
2019 arXiv, Over 200 Citations (Sik-Ho Tsang @ Medium)
Teacher Student Model, Semi-Supervised Learning, Image Classification, Video Classification

  • A semi-supervised learning approach based on the teacher/student paradigm is proposed, which leverages a large collection of unlabelled images (up to 1 billion). By…
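
The selection step, as I understand it, ranks the unlabeled pool per class by the teacher's scores and keeps the top-K images of each class as pseudo-labeled data for the student (an image may be picked for several classes). The sketch below is illustrative; K, the function name, and the surrounding pipeline (student pretraining on the pseudo-labeled data, then fine-tuning on the labeled set) are simplified.

```python
import torch

def select_topk_per_class(teacher_probs, k_per_class):
    """Build a pseudo-labeled set from teacher predictions (illustrative sketch).

    teacher_probs: (N, C) softmax scores of the teacher on the unlabeled pool
    Returns (image_indices, pseudo_labels) with up to k images per class.
    """
    image_indices, pseudo_labels = [], []
    num_classes = teacher_probs.shape[1]
    for c in range(num_classes):
        # Rank the whole unlabeled pool by the teacher's confidence for class c.
        topk = torch.topk(teacher_probs[:, c], k_per_class).indices
        image_indices.append(topk)
        pseudo_labels.append(torch.full((k_per_class,), c, dtype=torch.long))
    return torch.cat(image_indices), torch.cat(pseudo_labels)

# The student is then trained on this pseudo-labeled data and
# afterwards fine-tuned on the original labeled set.
```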

Squeeze-and-Excitation (SE) Attention Applied onto Wide Residual Networks (WRN)

Squeeze-and-Excitation Wide Residual Networks in Image Classification
SE-WRN, by Wuhan University of Technology, Hubei Province Key Laboratory of Transportation Internet of Things, and Wuhan University
2019 ICIP (Sik-Ho Tsang @ Medium)

  • The SE block from SENet is applied to Wide Residual Networks (WRN), where global covariance pooling (GVP) is used, and…
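
For reference, a minimal PyTorch sketch of a standard SE block as it would wrap a WRN residual branch: squeeze with global average pooling, excite with a small bottleneck MLP, and rescale the channels. The reduction ratio is illustrative, and the article's global covariance pooling variant is not reproduced here.

```python
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: global pooling -> bottleneck MLP -> channel gates."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: (B, C, H, W) -> (B, C, 1, 1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # excitation: per-channel weights in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # recalibrate the channels of the branch
```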

Apply FixRes onto EfficientNet for Additional Results

FixEfficientNet (orange curve) surpasses all EfficientNet models, including the models trained with Noisy Student (red…

Fixing the train-test resolution discrepancy: FixEfficientNet
FixEfficientNet, by Facebook AI Research
2020 arXiv v5, Over 200 Citations (Sik-Ho Tsang @ Medium)

Outline

  1. FixEfficientNet
  2. Experimental Results
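
The FixRes recipe behind FixEfficientNet, roughly: because RandomResizedCrop at training time makes objects appear larger than the Resize + CenterCrop used at test time, the model is evaluated at a higher test resolution and then briefly fine-tuned at that resolution, updating only the last layers. The torchvision-style sketch below assumes a model exposing a `classifier` head; the resolutions and names are illustrative.

```python
import torch.nn as nn
from torchvision import transforms

# Train-time and test-time preprocessing deliberately use different resolutions:
# FixRes evaluates at a higher test resolution and then fine-tunes only the head
# at that resolution to compensate for the apparent-object-size shift.
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
test_tf = transforms.Compose([
    transforms.Resize(320),          # larger test resolution (illustrative value)
    transforms.CenterCrop(320),
    transforms.ToTensor(),
])

def prepare_for_finetune(model):
    """Freeze the backbone; leave only the classifier head trainable (sketch)."""
    for p in model.parameters():
        p.requires_grad = False
    for p in model.classifier.parameters():   # assumes an EfficientNet-style `classifier` head
        p.requires_grad = True
    return model
```

Keeping the model in train() mode during this short fine-tuning also lets the BatchNorm running statistics adapt to the new resolution, which is part of the reported benefit.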

Self-Attention Blocks Replace Convolutional Blocks in ResNet

Similar Accuracy with Much Fewer Parameters and FLOPs Compared with ResNet-50

Stand-Alone Self-Attention in Vision Models
Ramachandran’s NeurIPS’19, by Google Research, Brain Team
2019 NeurIPS, Over 400 Citations (Sik-Ho Tsang @ Medium)
Self-Attention, Image Classification, Object Detection

  • Conventionally, attention blocks are built on top of convolutions.
  • The self-attention block is…
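
A minimal PyTorch sketch of the kind of local self-attention layer that can stand in for a k×k convolution: every pixel's query attends over a k×k neighborhood of keys and values gathered with unfold. The relative positional embeddings and multi-head details of the paper are omitted; the class and parameter names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalSelfAttention2d(nn.Module):
    """Pixel-wise attention over a k x k neighborhood, replacing a k x k convolution (sketch)."""
    def __init__(self, in_channels, out_channels, kernel_size=7):
        super().__init__()
        self.k = kernel_size
        self.query = nn.Conv2d(in_channels, out_channels, 1)
        self.key = nn.Conv2d(in_channels, out_channels, 1)
        self.value = nn.Conv2d(in_channels, out_channels, 1)

    def forward(self, x):
        b, _, h, w = x.shape
        pad = self.k // 2
        q = self.query(x)                                 # (B, C, H, W)
        k = F.unfold(self.key(x), self.k, padding=pad)    # (B, C*k*k, H*W)
        v = F.unfold(self.value(x), self.k, padding=pad)  # (B, C*k*k, H*W)

        c = q.shape[1]
        q = q.reshape(b, c, 1, h * w)                     # one query per pixel
        k = k.reshape(b, c, self.k * self.k, h * w)
        v = v.reshape(b, c, self.k * self.k, h * w)

        # Attention weights over each pixel's k*k neighborhood.
        attn = torch.softmax((q * k).sum(dim=1, keepdim=True), dim=2)  # (B, 1, k*k, H*W)
        out = (attn * v).sum(dim=2)                       # (B, C, H*W)
        return out.reshape(b, c, h, w)
```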

Pretrain Using Multiple Pretext Tasks to Improve Downstream Task Accuracy

Multi-task Self-Supervised Visual Learning
Doersch ICCV’17, by DeepMind, and VGG, University of Oxford
2017 ICCV, Over 400 Citations (Sik-Ho Tsang @ Medium)
Self-Supervised Learning, Representation Learning, Image Classification, Object Detection, Depth Prediction

  • In this paper, four different self-supervised tasks are combined to jointly train a ResNet-101 network.
  • Lasso regularization is also applied…
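
A hedged sketch of the general setup: the per-task pretext losses are summed, and an L1 (lasso) penalty is placed on a matrix of task-by-block combination weights so that each task relies on a sparse subset of trunk blocks. The code below only shows the loss aggregation with dummy scalar losses standing in for the four pretext tasks; the trunk, heads, and the way the combination weights mix block features are not reproduced here.

```python
import torch
import torch.nn as nn

# Illustrative: a (num_tasks x num_blocks) matrix that mixes trunk-block features
# per task.  The lasso (L1) penalty pushes each task to read from only a few blocks.
combine = nn.Parameter(torch.ones(4, 4))     # 4 pretext tasks x 4 trunk blocks (illustrative)

def joint_loss(per_task_losses, combine, lasso_coeff=1e-3):
    """Sum the pretext-task losses and add the lasso penalty (illustrative sketch)."""
    return sum(per_task_losses.values()) + lasso_coeff * combine.abs().sum()

# Dummy scalar losses standing in for the four pretext tasks.
losses = {
    "relative_position": torch.tensor(0.8),
    "colorization": torch.tensor(1.2),
    "exemplar": torch.tensor(0.5),
    "motion_segmentation": torch.tensor(0.9),
}
print(joint_loss(losses, combine))
```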

Sik-Ho Tsang

PhD, Researcher. I share what I've learnt and done. :) My LinkedIn: https://www.linkedin.com/in/sh-tsang/, My Paper Reading List: https://bit.ly/33TDhxG
