[19.11] GhostNet
Ghost in Feature Maps
GhostNet: More Features from Cheap Operations
After the introduction of MobileNet, many researchers continued to work on improving the performance of lightweight networks.
This paper specifically addresses the problem of "feature map redundancy."
Defining the Problem
Feature maps in convolutional networks typically contain a great deal of redundant information: many maps are near-duplicates of others, yet each one is computed with a full convolution, which wastes computation and memory.
The figure above gives a simple example: several feature maps look almost identical to one another, so computing each of them independently contributes little additional information.
Solving the Problem
Generally, there are two common approaches: compressing an existing model (e.g., pruning or quantization) and designing a more compact architecture from the start. GhostNet takes the second route.
Model Architecture
In GhostNet, the authors propose a new building block, the Ghost module, as shown above.
Unlike a standard convolutional layer, the Ghost module operates in two steps:
- First, an ordinary convolution with a reduced number of output channels is applied to the input, producing a small set of "base" (intrinsic) feature maps A.
- Then, cheap linear operations are applied to each channel of A to generate the "ghost" feature maps B.
Finally, the base feature maps A and the ghost feature maps B are concatenated along the channel dimension to form the module's output.
These linear operations are much simpler than convolution operations, significantly reducing the computational load of GhostNet.
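To make the savings concrete, here is a rough cost comparison, using notation that is assumed here rather than quoted from the text: $c$ input channels, $n$ output channels, an $h' \times w'$ output, a $k \times k$ primary kernel, a $d \times d$ cheap-operation kernel, and $s$ output maps produced per base map.

$$
\text{ordinary conv: } n \, h' w' \, c \, k^2,
\qquad
\text{Ghost module: } \frac{n}{s} \, h' w' \, c \, k^2 + (s-1)\,\frac{n}{s} \, h' w' \, d^2 .
$$

$$
\text{speedup } r \;=\; \frac{n \, h' w' \, c \, k^2}{\frac{n}{s} h' w' c k^2 + (s-1)\frac{n}{s} h' w' d^2}
\;=\; \frac{s \, c \, k^2}{c \, k^2 + (s-1)\, d^2}
\;\approx\; s \quad (\text{when } c \gg s \text{ and } d \approx k).
$$

So with $s = 2$, i.e., half of the output maps coming from the cheap operation, the module needs roughly half the FLOPs of an ordinary convolution producing the same output size.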
The "Cheap" linear operation of each channel mentioned in the paper is actually grouped convolution. This method can reduce the amount of calculation and improve efficiency.