# Review — GIoU: Generalized Intersection over Union

## Generalized IoU for Object Detection

Generalized Intersection over Union: A Metric and A Loss for Bounding Box Regression, GIoU, by Stanford University, The University of Adelaide, and Aibee Inc., 2019 CVPR, Over 1200 Citations (@ Medium)

Object Detection

- The weakness of Intersection over Union (IoU) is addressed, and **Generalized IoU (GIoU)** is proposed.
- YOLOv3, Faster R-CNN, and Mask R-CNN trained using GIoU obtain better performance.

# Outline

1. **Intersection over Union (IoU)**
2. **Generalized IoU (GIoU)**
3. **Experimental Results**

**1. Intersection over Union (IoU)**

- Intersection over Union (IoU) for **comparing the similarity between two arbitrary shapes (volumes)** *A*, *B* ⊆ *S* ∈ *R^n* is attained by:

    IoU = |*A* ∩ *B*| / |*A* ∪ *B*|

- IoU can be treated as a distance, e.g. **L_IoU = 1 − IoU**, which is a **metric** [9].
- L_IoU fulfills all properties of a metric, namely **non-negativity, identity of indiscernibles, symmetry**, and the **triangle inequality**.
- IoU is **invariant to the scale** of the problem. This means that the similarity between two arbitrary shapes *A* and *B* is independent of the scale of their space *S*.

However, IoU has a **major weakness**: if |*A* ∩ *B*| = 0, then IoU(*A*, *B*) = 0. In this case, IoU does not reflect whether the two shapes are in the vicinity of each other or very far apart.
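The weakness is easy to see numerically. Below is a minimal Python sketch of IoU for axis-aligned boxes (the function name `iou` and the `(x1, y1, x2, y2)` corner convention are illustrative, not from the paper): two non-overlapping box pairs get the same IoU of 0 regardless of how far apart they are.

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle (degenerates to zero area if the boxes are disjoint).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two non-overlapping pairs: one nearby, one far away.
near = iou((0, 0, 2, 2), (2.1, 0, 4.1, 2))   # boxes almost touching
far  = iou((0, 0, 2, 2), (100, 0, 102, 2))   # boxes far apart
# Both are 0.0 — IoU cannot distinguish a near miss from a far miss.
```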

- A general extension to IoU, namely Generalized Intersection over Union (GIoU), is proposed.

**2. Generalized IoU (GIoU)**

For two arbitrary convex shapes (volumes) *A*, *B* ⊆ *S* ∈ *R^n*, we first find the smallest convex shape *C* ⊆ *S* ∈ *R^n* enclosing both *A* and *B*. Then we calculate the ratio between the volume (area) occupied by *C* excluding *A* and *B* and the total volume (area) occupied by *C*, and subtract this ratio from the IoU:

    GIoU = IoU − |*C* \ (*A* ∪ *B*)| / |*C*|
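For axis-aligned boxes, the smallest enclosing convex shape *C* is simply the smallest box containing both corners. A minimal Python sketch (the function name `giou` and the corner convention are illustrative, not the authors' implementation):

```python
def giou(box_a, box_b):
    """GIoU = IoU − |C \\ (A ∪ B)| / |C| for axis-aligned boxes (x1, y1, x2, y2)."""
    # Intersection rectangle (zero area if the boxes are disjoint).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    # Smallest enclosing axis-aligned box C.
    cx1, cy1 = min(box_a[0], box_b[0]), min(box_a[1], box_b[1])
    cx2, cy2 = max(box_a[2], box_b[2]), max(box_a[3], box_b[3])
    c_area = (cx2 - cx1) * (cy2 - cy1)
    return inter / union - (c_area - union) / c_area

# Unlike IoU, GIoU separates a near miss from a far miss:
near = giou((0, 0, 2, 2), (2.1, 0, 4.1, 2))   # slightly below 0
far  = giou((0, 0, 2, 2), (100, 0, 102, 2))   # close to −1
```

Note that for disjoint boxes the IoU term is 0, so GIoU becomes increasingly negative as the enclosing box grows relative to the union, which is exactly the behavior listed in the properties below.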

- Similar to IoU, **GIoU as a distance**, e.g. **L_GIoU = 1 − GIoU**, holds all properties of a metric such as non-negativity, identity of indiscernibles, symmetry, and the triangle inequality.
- Similar to IoU, GIoU is invariant to the scale of the problem.
- GIoU is always a lower bound for IoU. (Please read the paper for the proof.)
- Similar to IoU, the value 1 occurs only when the two objects overlap perfectly.
- The GIoU value asymptotically converges to −1 when the ratio between the region occupied by the two shapes, |*A* ∪ *B*|, and the volume (area) of the enclosing shape, |*C*|, tends to zero.

In summary, this generalization keeps the major properties of IoU while rectifying its weakness. Therefore, GIoU can be a proper substitute for IoU in all performance measures used in 2D/3D computer vision tasks.

# 3. Experimental Results

## 3.1. YOLOv3

- To train YOLOv3 using IoU and GIoU losses, the bounding-box regression MSE loss is simply replaced with the L_IoU and L_GIoU losses.
- Only a very minimal effort was made to regularize these new regression losses against the MSE classification loss.
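The replacement described above can be sketched as a batched loss in NumPy; the function name `giou_loss` and the `(N, 4)` corner layout are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def giou_loss(pred, target):
    """L_GIoU = 1 − GIoU for batches of boxes of shape (N, 4) as (x1, y1, x2, y2).

    A minimal NumPy sketch of the regression loss that replaces the MSE
    bounding-box loss; the layout is assumed, not taken from the paper's code.
    """
    # Intersection areas, clipped to zero for disjoint pairs.
    ix1 = np.maximum(pred[:, 0], target[:, 0])
    iy1 = np.maximum(pred[:, 1], target[:, 1])
    ix2 = np.minimum(pred[:, 2], target[:, 2])
    iy2 = np.minimum(pred[:, 3], target[:, 3])
    inter = np.clip(ix2 - ix1, 0, None) * np.clip(iy2 - iy1, 0, None)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter
    # Smallest enclosing boxes C for each pair.
    cx1 = np.minimum(pred[:, 0], target[:, 0])
    cy1 = np.minimum(pred[:, 1], target[:, 1])
    cx2 = np.maximum(pred[:, 2], target[:, 2])
    cy2 = np.maximum(pred[:, 3], target[:, 3])
    c_area = (cx2 - cx1) * (cy2 - cy1)
    giou = inter / union - (c_area - union) / c_area
    return 1.0 - giou  # per-box loss; 0 for a perfect match, approaches 2 for far misses
```

Because GIoU is non-zero for disjoint boxes, this loss still provides a gradient signal when the predicted box does not overlap the target at all, which is where MSE- or IoU-based regression is weakest.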

The results show a consistent improvement in performance for YOLOv3 when it is trained using L_GIoU as the regression loss.

- **(a)**: The **localization accuracy** for YOLOv3 **significantly improves when the L_GIoU loss is used.**
- **(b)**: However, with the current naïve tuning of the regularization parameters balancing the bounding-box loss vs. the classification loss, the **classification scores may not be optimal compared to the baseline.**

## 3.2. Faster R-CNN and Mask R-CNN

- To train Faster R-CNN and Mask R-CNN using IoU and GIoU losses, their ℓ1-smooth loss in the final bounding-box refinement stage is replaced with the L_IoU and L_GIoU losses.
- The L_IoU and L_GIoU losses are simply multiplied by a factor of 10 for all experiments.

Incorporating L_IoU as the regression loss can slightly improve the performance of Faster R-CNN on this benchmark.

Training Faster R-CNN and Mask R-CNN using L_GIoU as the bounding-box regression loss consistently improves their performance compared to their own regression loss (ℓ1-smooth).

## 3.3. Qualitative Results

## Reference

[2019 CVPR] [GIoU] Generalized Intersection over Union: A Metric and A Loss for Bounding Box Regression

## Object Detection

- **2014–2018 … 2019**: [DCNv2] [Rethinking ImageNet Pre-training] [GRF-DSOD & GRF-SSD] [CenterNet] [Grid R-CNN] [NAS-FPN] [ASFF] [Bag of Freebies] [VoVNet/OSANet] [FCOS] [GIoU]
- **2020**: [EfficientDet] [CSPNet] [YOLOv4] [SpineNet]
- **2021**: [Scaled-YOLOv4]