Brief Review — InceptionTime: Finding AlexNet for Time Series Classification

Inception Module for Time Series Data

Sik-Ho Tsang
3 min read · Oct 24, 2024

InceptionTime: Finding AlexNet for Time Series Classification
InceptionTime, by Université Haute-Alsace, Monash University, and Université Bretagne Sud
2020 DMKD, Over 1200 Citations (Sik-Ho Tsang @ Medium)


  • HIVE-COTE takes more than 8 days to learn from a small dataset with N = 1500 time series of short length T = 46 for Time Series Classification (TSC).
  • InceptionTime, an ensemble of deep Convolutional Neural Network (CNN) models inspired by the Inception-v4 architecture, is introduced for TSC. It learns from 1,500 time series in one hour, and from 8M time series in 13 hours.

Outline

  1. InceptionTime
  2. Results

1. InceptionTime

1.1. Time Series Classification (TSC)

  • A Multivariate Time Series (MTS) X = [X1, X2, …, XT] consists of T ordered elements Xi, each with M dimensions. A dataset D is a collection of such series paired with class labels. The task of TSC consists of learning a classifier on D that maps from the space of possible inputs to a probability distribution over the classes Y.
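
In symbols (a compact restatement of the setup above; the sample superscripts and the classifier symbol f are notation added here for clarity):

```latex
X = [X_1, X_2, \dots, X_T], \quad X_i \in \mathbb{R}^M
\qquad
D = \{ (X^{(1)}, Y^{(1)}), \dots, (X^{(N)}, Y^{(N)}) \}
\qquad
f : X \longmapsto \hat{p} \in [0,1]^{|\mathcal{Y}|}, \quad \textstyle\sum_{c} \hat{p}_c = 1
```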

1.2. InceptionTime

[Fig. 1: Inception network architecture]
  • The proposed model InceptionTime consists of an ensemble of 5 different Inception networks initialized randomly.
  • An Inception network classifier contains two different residual blocks, as opposed to ResNet, which comprises three.
  • Each residual block is in turn composed of three Inception modules rather than traditional fully convolutional layers.
  • Following these residual blocks, a Global Average Pooling (GAP) layer is used that averages the output multivariate time series over the whole time dimension.
  • Finally, a traditional fully-connected softmax layer is used, with a number of neurons equal to the number of classes in the dataset.
  • Fig. 1 above depicts an Inception network’s architecture, showing 6 Inception modules stacked one after the other (a minimal code sketch of this skeleton follows below).
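
As a rough illustration, here is a minimal Keras sketch of that skeleton: two residual blocks of three Inception modules each, followed by GAP and a softmax layer. A plain Conv1D stands in for the Inception module so the snippet runs on its own (the real module is sketched in Section 1.3); the function names and filter counts are mine, not the authors' code.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def inception_module(x):
    # Stand-in for the real Inception module (see Section 1.3 below).
    return layers.Conv1D(128, kernel_size=3, padding="same",
                         activation="relu")(x)

def build_inception_network(input_shape, n_classes):
    inputs = layers.Input(shape=input_shape)      # shape = (T, M)
    x, shortcut = inputs, inputs
    for _ in range(2):                            # two residual blocks ...
        for _ in range(3):                        # ... of three modules each
            x = inception_module(x)
        # Linear shortcut: a length-1 conv matches channel counts before adding.
        s = layers.Conv1D(int(x.shape[-1]), kernel_size=1,
                          padding="same", use_bias=False)(shortcut)
        s = layers.BatchNormalization()(s)
        x = layers.Activation("relu")(layers.Add()([x, s]))
        shortcut = x
    x = layers.GlobalAveragePooling1D()(x)        # average over the time axis
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)

# InceptionTime itself averages the softmax outputs of 5 such networks,
# each trained from a different random initialization.
```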

1.3. Inception Module

[Figure: Inside the Inception module]
  • The first major component of the Inception module is called the “bottleneck” layer: it slides filters of length 1 over the input MTS, reducing its dimensionality and hence the number of parameters.
  • The second major component of the Inception module slides multiple filters of different lengths simultaneously over the same input time series: three convolutions with filter lengths l ∈ {10, 20, 40} are applied to the input MTS.
  • Another parallel MaxPooling operation is also introduced, followed by a bottleneck layer to reduce the dimensionality.
  • Finally, the outputs of the independent parallel convolution/MaxPooling branches are concatenated to form the output MTS (see the sketch after this list).
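
A minimal Keras sketch of these four steps, assuming the paper's default hyperparameters (32 filters per branch, bottleneck size 32); the function name and keyword arguments are mine:

```python
import tensorflow as tf
from tensorflow.keras import layers

def inception_module(x, n_filters=32, bottleneck_size=32,
                     kernel_sizes=(10, 20, 40)):
    # 1) Bottleneck: length-1 convolutions shrink the channel dimension,
    #    cutting parameters before the long convolutions below.
    bottleneck = layers.Conv1D(bottleneck_size, kernel_size=1,
                               padding="same", use_bias=False)(x)
    # 2) Filters of lengths {10, 20, 40} slide over the same input in parallel.
    branches = [layers.Conv1D(n_filters, kernel_size=k, padding="same",
                              use_bias=False)(bottleneck)
                for k in kernel_sizes]
    # 3) Parallel MaxPooling branch, followed by its own bottleneck.
    pooled = layers.MaxPooling1D(pool_size=3, strides=1, padding="same")(x)
    branches.append(layers.Conv1D(n_filters, kernel_size=1, padding="same",
                                  use_bias=False)(pooled))
    # 4) Concatenate every branch along the channel axis to form the output MTS.
    out = layers.Concatenate(axis=-1)(branches)
    out = layers.BatchNormalization()(out)
    return layers.Activation("relu")(out)
```

Concatenating branches with filter lengths 10, 20, and 40 lets a single module detect short and long patterns at once, which is the multi-scale idea carried over from Inception.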

2. Results

[Figure: Accuracy plot]
  • 85 datasets of the UCR archive are used for evaluation.
  • HIVE-COTE is the method proposed in “The hierarchical vote collective of transformation-based ensembles for time series classification” (2016).

The results show a Win/Tie/Loss of 40/6/39 in favor of InceptionTime.

  • Fig. 7: InceptionTime’s complexity increases almost linearly with the time series’ length, unlike HIVE-COTE, whose execution is almost two orders of magnitude slower. InceptionTime is significantly faster when dealing with long time series.
  • Fig. 8: InceptionTime is an order of magnitude faster than HIVE-COTE as the training set size increases.
[Figure: Accuracy against training set size]

With InceptionTime, accuracy continues to increase for larger training set sizes, on which HIVE-COTE would take 100 times longer to run.


Written by Sik-Ho Tsang

PhD, Researcher. I share what I learn. :) Linktree: https://linktr.ee/shtsang for Twitter, LinkedIn, etc.