Review — Symmetrical Gaussian Error Linear Units (SGELUs)
SGELU, by Southeast University Nanjing and Jiangsu Smartwin Electronics Technology Co., Ltd.
2019 arXiv (Sik-Ho Tsang @ Medium)
Image Classification, Activation Function
- SGELU is obtained by integrating the stochastic-regularizer property of the Gaussian Error Linear Unit (GELU) with a symmetrical characteristic.
Outline
- SGELU Formulation
- Experimental Results
1. SGELU Formulation
- The GELU activation function can be represented by:
GELU(x) = x · Φ(x) = 0.5 · x · (1 + erf(x / √2))
- where erf(·) represents the Gaussian error function, that is:
erf(x) = (2 / √π) ∫₀ˣ exp(−t²) dt
- Since GELU represents the nonlinearity using a stochastic regularizer on the input, namely the cumulative distribution function derived from the Gaussian error function, it has shown advantages over other functions, e.g., ReLU and ELU.
- However, most activation functions do not fully exploit negative inputs. Taking this into account, the stochastic regularizer is applied to the input while the negative half-axis is also exploited, and a novel Symmetrical Gaussian Error Linear Unit (SGELU) is proposed, which can be represented by:
SGELU(x) = α · x · erf(x / √2)
- in which α represents the hyper-parameter.
- For ReLU, if z (input) is negative, the gradient is zero and thus the weight stops updating.
- For ELU, if z is negative, the gradient is positive but small. The weight is updated to a larger value and moves towards the positive direction at a relatively slow rate.
- For GELU, if z is negative, the gradient is positive in most cases, or very close to zero when z is “very” negative, which pushes the weight towards a smaller value. Eventually, the weight stops updating.
SGELU can update its weights symmetrically in two directions, along both the positive and negative half-axes. In other words, SGELU is a two-to-one mapping between input and output, while the others are one-to-one mappings (see the sketch below).
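To make the two formulations concrete, here is a minimal NumPy/SciPy sketch of GELU and SGELU. It assumes the forms written above (GELU(x) = x·Φ(x) and SGELU(x) = α·x·erf(x/√2)); the function names and the default α = 0.1 are illustrative choices, not the authors' code.

```python
import numpy as np
from scipy.special import erf

def gelu(x):
    # GELU(x) = x * Phi(x) = 0.5 * x * (1 + erf(x / sqrt(2)))
    return 0.5 * x * (1.0 + erf(x / np.sqrt(2.0)))

def sgelu(x, alpha=0.1):
    # Assumed SGELU form: alpha * x * erf(x / sqrt(2)).
    # Both x and erf(x / sqrt(2)) are odd, so their product is even:
    # sgelu(-x) == sgelu(x), i.e. a two-to-one input-to-output mapping.
    return alpha * x * erf(x / np.sqrt(2.0))

x = np.linspace(-4.0, 4.0, 9)
print(np.allclose(sgelu(x), sgelu(-x)))  # True: symmetric about the y-axis
```

The symmetry check at the end illustrates the two-to-one mapping discussed above.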
2. Experimental Results
2.1. MNIST Classification
- A fully connected SGELU neural network with α = 0.1 is trained and compared with similar networks using GELU and LiSHT. Each network is 8 layers deep and 128 neurons wide, and is trained for 50 epochs with a batch size of 128 using the Adam optimizer (see the sketch after this subsection).
SGELU is more accurate than GELU and LiSHT.
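As a concrete reading of the setup above, the following is a PyTorch sketch of the 8-layer, 128-neuron fully connected classifier with an SGELU activation. It assumes the α·x·erf(x/√2) form given earlier; the module names, default Adam hyper-parameters, and the omitted data pipeline are assumptions, not the paper's exact code.

```python
import torch
import torch.nn as nn

class SGELU(nn.Module):
    """Assumed SGELU form: alpha * x * erf(x / sqrt(2))."""
    def __init__(self, alpha: float = 0.1):
        super().__init__()
        self.alpha = alpha

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.alpha * x * torch.erf(x / 2.0 ** 0.5)

def make_mlp(depth: int = 8, width: int = 128) -> nn.Sequential:
    # 8 fully connected layers, 128 neurons wide, for flattened 28x28 MNIST digits.
    layers, in_dim = [], 28 * 28
    for _ in range(depth):
        layers += [nn.Linear(in_dim, width), SGELU(alpha=0.1)]
        in_dim = width
    layers.append(nn.Linear(in_dim, 10))  # 10 output classes
    return nn.Sequential(*layers)

model = make_mlp()
optimizer = torch.optim.Adam(model.parameters())  # Adam, as in the paper
# Train for 50 epochs with batch size 128 (data loading and loop omitted here).
```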
2.2. MNIST Autoencoder
- A simple network with only one encoder layer and one decoder layer is constructed and trained, in which 128 neurons are chosen for each layer (sketched below).
SGELU significantly outperforms GELU and LiSHT.
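Below is a sketch of the corresponding one-encoder-layer / one-decoder-layer autoencoder with 128 neurons per layer. The SGELU module repeats the assumed α·x·erf(x/√2) form, and the reconstruction loss is an assumption, since the post does not specify it.

```python
import torch
import torch.nn as nn

class SGELU(nn.Module):
    # Assumed SGELU form: alpha * x * erf(x / sqrt(2))
    def __init__(self, alpha: float = 0.1):
        super().__init__()
        self.alpha = alpha

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.alpha * x * torch.erf(x / 2.0 ** 0.5)

# One encoder layer and one decoder layer, 128 neurons each,
# reconstructing flattened 28x28 MNIST images.
autoencoder = nn.Sequential(
    nn.Linear(28 * 28, 128),   # encoder
    SGELU(alpha=0.1),
    nn.Linear(128, 28 * 28),   # decoder
)
loss_fn = nn.MSELoss()  # reconstruction loss; an assumption, not stated in the post
```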
Reference
[2019 arXiv] [SGELU]
Symmetrical Gaussian Error Linear Units (SGELUs)
Image Classification
1989–2018 … 2019: [ResNet-38] [AmoebaNet] [ESPNetv2] [MnasNet] [Single-Path NAS] [DARTS] [ProxylessNAS] [MobileNetV3] [FBNet] [ShakeDrop] [CutMix] [MixConv] [EfficientNet] [ABN] [SKNet] [CB Loss] [AutoAugment, AA] [BagNet] [Stylized-ImageNet] [FixRes] [Ramachandran’s NeurIPS’19] [SE-WRN] [SGELU]
2020: [Random Erasing (RE)] [SAOL] [AdderNet] [FixEfficientNet]
2021: [Learned Resizer]