MobileNet-V3

Background

  MobileNet-V3 is Google's follow-up to MobileNet-V2, proposed in 2019, and it improves on MobileNet-V2's results. It comes in two versions, MobileNet-V3-Large and MobileNet-V3-Small, targeting deployments with different resource budgets.

[Figure: MobileNet-V3]

Key Features of MobileNet-V3

  Retains MobileNet-V2's depthwise separable convolutions (Separable Convolution) and residual structure.
  Introduces the SE (Squeeze-and-Excitation) block, a lightweight attention mechanism.
  Optimizes MobileNet-V2's head: in MobileNet-V2 the second layer yields a 112x112x32 feature map, while MobileNet-V3 needs only 112x112x16 to preserve accuracy, which also speeds up inference.
  Optimizes MobileNet-V2's tail: MobileNet-V2 applies a 1x1 convolution to the 7x7 feature map to raise the channel count and only then performs global average pooling; MobileNet-V3 pools the 7x7 feature map first and then applies the 1x1 channel-raising convolution, cutting that convolution's computation by a factor of 49 (7x7), as sketched after this list.
  Uses h-swish and ReLU6 side by side as activations, which speeds up inference; h-swish(x) = x * ReLU6(x + 3) / 6 approximates swish with cheap piecewise-linear operations.
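
A quick back-of-the-envelope check of the tail optimization, using the 960-to-1280 1x1 convolution from the implementation below (a minimal sketch; the layer sizes are the ones used later in this post):

h = w = 7                          # spatial size of the final feature map
c_in, c_out = 960, 1280            # channels of the tail 1x1 convolution

# MobileNet-V2 ordering: 1x1 conv on the 7x7 map, then global average pooling.
macs_v2 = h * w * c_in * c_out     # multiply-accumulates of the conv

# MobileNet-V3 ordering: pool first, then the same 1x1 conv on a 1x1 map.
macs_v3 = 1 * 1 * c_in * c_out

print(macs_v2 // macs_v3)          # 49 == 7 * 7; the parameter count is unchanged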

Separable Convolution

[Figure: depthwise convolution and pointwise convolution (from Xception)]
  Separable Convolution (depthwise separable convolution) merges the two convolutions shown above into a single operation.
  Step 1: DepthwiseConv, which convolves each channel independently.
  Step 2: PointwiseConv, a 1x1 convolution on the result of step 1 that fuses the channels.
  Its main effect is to greatly reduce the network's parameter count while still allowing any suitable number of output channels: step 1 is what cuts the parameters, and step 2 adjusts the channel count, so combining the two yields the depthwise separable convolution, as quantified in the sketch below.
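
To make the saving concrete, here is a minimal Keras sketch (the input size and channel counts are illustrative assumptions) comparing the parameter count of a standard 3x3 convolution with its depthwise separable counterpart:

import tensorflow.keras as keras

inputs = keras.layers.Input((56, 56, 64))

# Standard convolution: 3*3*64*128 + 128 = 73,856 parameters.
standard = keras.layers.Conv2D(128, (3, 3), padding='same')(inputs)

# Step 1 (DepthwiseConv): 3*3*64 + 64 = 640 parameters.
x = keras.layers.DepthwiseConv2D((3, 3), padding='same')(inputs)
# Step 2 (PointwiseConv): 1*1*64*128 + 128 = 8,320 parameters.
separable = keras.layers.Conv2D(128, (1, 1), padding='same')(x)

keras.Model(inputs, [standard, separable]).summary()  # ~73.9K vs. ~9.0K params, roughly 8x fewer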

Squeeze-and-Excitation

[Figure: Squeeze-and-Excitation block (SENet)]
  Squeeze-and-Excitation, also known as feature recalibration or a channel attention mechanism, learns the importance of each feature channel automatically, then uses those importances to strengthen useful features and suppress features that contribute little to the task at hand.
  First comes the Squeeze step: global pooling provides a global receptive field, and the output dimension matches the number of input feature channels, characterizing the global distribution of responses over the channels.
  Then comes the Excitation step: fully connected layers generate a weight for each feature channel, modeling the correlations between channels. The resulting weights can be read as each channel's importance after feature selection; they are applied to the original features by channel-wise multiplication, recalibrating the features along the channel dimension. A minimal sketch follows.
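
A minimal SE-block sketch in Keras (the reduction ratio of 4 matches the implementation below; note that the original SENet gates with sigmoid and MobileNet-V3 with hard-sigmoid, so the 'hard_sigmoid' here is an assumption, not a copy of the code later in this post, which gates with h-swish):

import tensorflow.keras as keras

def se_block_sketch(x, filters):
    s = keras.layers.GlobalAveragePooling2D()(x)                    # Squeeze: one value per channel
    s = keras.layers.Dense(filters // 4, activation='relu')(s)      # Excitation: bottleneck FC
    s = keras.layers.Dense(filters, activation='hard_sigmoid')(s)   # per-channel weights in [0, 1]
    s = keras.layers.Reshape((1, 1, filters))(s)
    return keras.layers.Multiply()([x, s])                          # reweight the original features

inputs = keras.layers.Input((28, 28, 40))
keras.Model(inputs, se_block_sketch(inputs, 40)).summary()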

MobileNet-V3 Network Architectures at Different Scales

[Figure: MobileNet-V3-Large and MobileNet-V3-Small architecture tables]

MobileNet-V3 Architecture Diagram

[Figure: MobileNet-V3 architecture diagram]

TensorFlow 2.0 Implementation

from functools import reduce
import tensorflow.keras as keras


def compose(*funcs):
    # Chain layer calls left to right: compose(f, g)(x) == g(f(x)).
    if funcs:
        return reduce(lambda f, g: lambda *a, **kw: g(f(*a, **kw)), funcs)
    else:
        raise ValueError('Composition of empty sequence not supported.')


class H_Swish(keras.layers.Layer):
    # h-swish(x) = x * ReLU6(x + 3) / 6, a cheap piecewise-linear swish.
    def __init__(self, name='h_swish'):
        super(H_Swish, self).__init__()
        self._name = name

    def call(self, inputs, **kwargs):
        return inputs * keras.activations.relu(inputs + 3, max_value=6) / 6


def se_block(x, filters, name):
    # Squeeze-and-Excitation: global pooling -> bottleneck FC (filters // 4)
    # -> FC back to `filters` channel weights, multiplied onto the input.
    shortcut = x
    x = compose(keras.layers.GlobalAveragePooling2D(name='{}_global_averagepool'.format(name)),
                keras.layers.Dense(filters // 4, name='{}_dense1'.format(name)),
                keras.layers.ReLU(6, name='{}_relu6'.format(name)),
                keras.layers.Dense(filters, name='{}_dense2'.format(name)),
                H_Swish(name='{}_h_swish'.format(name)),
                keras.layers.Reshape((1, 1, filters), name='{}_reshape'.format(name)))(x)
    x = keras.layers.Multiply(name='{}_multiply'.format(name))([x, shortcut])

    return x


def bneck(x, filters, up_dim, kernel_size, strides, squeeze, activation, name):
    # Bottleneck block: 1x1 expansion to up_dim, depthwise conv, optional SE,
    # then a linear 1x1 projection to `filters`; residual add when shapes allow.
    shortcut = x
    x = compose(Conv_Bn_Relu6(up_dim, (1, 1), (1, 1), 'same', name='{}_conv_bn_{}'.format(name, activation)),
                Conv_Bn_Relu6(None, kernel_size, strides, 'same', name='{}_depthwiseconv_bn_{}'.format(name, activation)))(x)

    if squeeze:
        x = se_block(x, up_dim, name='{}_se_block'.format(name))

    # No activation suffix in the name, so this projection stays linear.
    x = Conv_Bn_Relu6(filters, (1, 1), (1, 1), 'same', name='{}_conv_bn'.format(name))(x)

    if shortcut.shape[-1] == filters and strides == (1, 1):
        x = keras.layers.Add(name='{}_add'.format(name))([x, shortcut])

    return x


class Conv_Bn_Relu6(keras.layers.Layer):
    # Conv (or depthwise conv) + BatchNorm + optional activation; the layer
    # name selects the variant: 'depthwise' -> DepthwiseConv2D, and a
    # 'h_swish' / 'relu6' suffix picks the activation (none otherwise).
    def __init__(self, filters, kernel_size, strides, padding, name):
        super(Conv_Bn_Relu6, self).__init__()
        self._name = name
        self.block = keras.Sequential()
        if name.find('depthwise') == -1:
            self.block.add(keras.layers.Conv2D(filters, kernel_size, strides, padding=padding))
        else:
            self.block.add(keras.layers.DepthwiseConv2D(kernel_size, strides, padding=padding))
        self.block.add(keras.layers.BatchNormalization())
        if name.find('h_swish') != -1:
            self.block.add(H_Swish())
        elif name.find('relu6') != -1:
            self.block.add(keras.layers.ReLU(6))

    def call(self, inputs, **kwargs):
        return self.block(inputs)


def mobilenet_v3(input_shape):
    # MobileNet-V3-Large for 1000-class classification.
    input_tensor = keras.layers.Input(input_shape, name='input')
    x = input_tensor

    # Head: a 3x3 stride-2 conv with only 16 filters (MobileNet-V2 used 32).
    x = Conv_Bn_Relu6(16, (3, 3), (2, 2), 'same', name='conv_bn_h_swish1')(x)

    x = bneck(x, 16, 16, (3, 3), (1, 1), squeeze=False, activation='relu6', name='bneck1')

    x = bneck(x, 24, 64, (3, 3), (2, 2), squeeze=False, activation='relu6', name='bneck2_1')
    x = bneck(x, 24, 72, (3, 3), (1, 1), squeeze=False, activation='relu6', name='bneck2_2')

    x = bneck(x, 40, 72, (5, 5), (2, 2), squeeze=True, activation='relu6', name='bneck3_1')
    x = bneck(x, 40, 120, (5, 5), (1, 1), squeeze=True, activation='relu6', name='bneck3_2')
    x = bneck(x, 40, 120, (5, 5), (1, 1), squeeze=True, activation='relu6', name='bneck3_3')

    x = bneck(x, 80, 240, (3, 3), (2, 2), squeeze=False, activation='h_swish', name='bneck4_1')
    x = bneck(x, 80, 200, (3, 3), (1, 1), squeeze=False, activation='h_swish', name='bneck4_2')
    x = bneck(x, 80, 184, (3, 3), (1, 1), squeeze=False, activation='h_swish', name='bneck4_3')
    x = bneck(x, 80, 184, (3, 3), (1, 1), squeeze=False, activation='h_swish', name='bneck4_4')

    x = bneck(x, 112, 480, (3, 3), (1, 1), squeeze=True, activation='h_swish', name='bneck5_1')
    x = bneck(x, 112, 672, (3, 3), (1, 1), squeeze=True, activation='h_swish', name='bneck5_2')

    x = bneck(x, 160, 672, (5, 5), (2, 2), squeeze=True, activation='h_swish', name='bneck6_1')
    x = bneck(x, 160, 960, (5, 5), (1, 1), squeeze=True, activation='h_swish', name='bneck6_2')
    x = bneck(x, 160, 960, (5, 5), (1, 1), squeeze=True, activation='h_swish', name='bneck6_3')

    # Tail: the 7x7 map is pooled before the 1x1 conv to 1280, so that
    # conv (and the classifier) run on a 1x1 map instead of a 7x7 one.
    x = compose(Conv_Bn_Relu6(960, (1, 1), (1, 1), 'same', name='conv_bn_h_swish2'),
                keras.layers.GlobalAveragePooling2D(name='global_averagepool'),
                keras.layers.Reshape((1, 1, 960), name='reshape1'),
                Conv_Bn_Relu6(1280, (1, 1), (1, 1), 'same', name='conv_bn_h_swish3'),
                keras.layers.Conv2D(1000, (1, 1), activation='softmax', name='conv'),
                keras.layers.Reshape((1000,), name='reshape2'))(x)

    model = keras.Model(input_tensor, x, name='MobileNet-V3')

    return model


if __name__ == '__main__':
    model = mobilenet_v3(input_shape=(224, 224, 3))
    model.build(input_shape=(None, 224, 224, 3))
    model.summary()

[Figure: model.summary() output for MobileNet-V3]

MobileNet-V3 Summary

  MobileNet-V3 is a sophisticated lightweight deep learning network. As the summary above shows, the model has about 5M parameters. Built on top of MobileNet-V2 with a number of clever tricks (SE blocks, h-swish, and the redesigned head and tail), it achieves better results.
