PSPNet

Background

  PSPNet was proposed by the Chinese University of Hong Kong and SenseTime. It won the 2016 ImageNet Scene Parsing Challenge and was published at CVPR 2017. By aggregating context with a pyramid pooling module, it performs image segmentation efficiently and is an effective semantic segmentation model.

Key Features of PSPNet

  The feature extraction network is a ResNet with atrous convolutions, trained with an auxiliary loss (AL): an extra head, typically a few convolutional and fully connected layers ending in a classifier, is attached after an intermediate layer, and its loss is weighted by a factor below 1 (a minimal sketch follows this list). This auxiliary supervision helps alleviate the vanishing-gradient problem in deep neural networks.
  A pyramid pooling module aggregates information: pooling layers with different kernel sizes capture image information at different scales, and the branches are then concatenated to fuse the information.
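
  The paper applies the auxiliary loss after an intermediate stage of ResNet101 (res4b22) with a weight of 0.4. Below is a minimal, illustrative sketch of how such a down-weighted auxiliary head can be wired up in Keras; the tiny two-layer "backbone" and the layer names are placeholders, not the real network:

import tensorflow as tf
import tensorflow.keras as keras

# Placeholder backbone: `mid_feature` stands in for an intermediate stage
# (res4b22 in the paper) and `backbone_out` for the final feature map.
inputs = keras.layers.Input(shape=(473, 473, 3))
mid_feature = keras.layers.Conv2D(64, 3, 2, 'same', activation='relu')(inputs)
backbone_out = keras.layers.Conv2D(128, 3, 2, 'same', activation='relu')(mid_feature)

main_out = keras.layers.Conv2D(21, 1, padding='same', activation='softmax', name='main')(backbone_out)
aux_out = keras.layers.Conv2D(21, 1, padding='same', activation='softmax', name='aux')(mid_feature)

model = keras.Model(inputs, [main_out, aux_out])
# The auxiliary loss is weighted below 1 (0.4 in the paper); it only steers
# training, and the auxiliary branch is discarded at inference time.
model.compile(optimizer='adam',
              loss={'main': 'categorical_crossentropy', 'aux': 'categorical_crossentropy'},
              loss_weights={'main': 1.0, 'aux': 0.4})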

Atrous Convolutions vs. Ordinary Convolutions

  Atrous convolutions, also known as dilated convolutions, add a dilation rate parameter to the convolutional layer that defines the spacing between kernel elements; an ordinary convolution is simply the special case with dilation rate = 1.
  Advantages: the receptive field is enlarged without introducing extra parameters (see the sketch below). Since neighboring pixels tend to carry largely redundant information, a larger receptive field can capture multi-scale information, which is very important in vision tasks; achieving the same effect by increasing the resolution or using larger kernels would greatly increase the number of parameters.
  Disadvantages: because atrous convolution samples the input in a checkerboard-like pattern, it can produce gridding (checkerboard) artifacts; see checkerboard visualizations. If the dilation rate is too large, the sampled values are no longer correlated and local information may be lost.
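
  A minimal sketch that checks the claim above: with the same 3x3 kernel, dilation rate 2 covers a 5x5 region while the output shape and the parameter count stay unchanged:

import tensorflow as tf
import tensorflow.keras as keras

x = tf.zeros((1, 32, 32, 1))
normal = keras.layers.Conv2D(8, (3, 3), padding='same', dilation_rate=1)
dilated = keras.layers.Conv2D(8, (3, 3), padding='same', dilation_rate=2)  # 3x3 taps spread over a 5x5 area

print(normal(x).shape, dilated(x).shape)              # both (1, 32, 32, 8)
print(normal.count_params(), dilated.count_params())  # both 80: no extra parameters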

PSPNet Architecture Diagram

[Figure: PSPNet architecture]

TensorFlow 2.0 Implementation

from functools import reduce
import tensorflow as tf
import tensorflow.keras as keras


def compose(*funcs):
    """Compose an arbitrary number of functions, applied left to right."""
    if funcs:
        return reduce(lambda f, g: lambda *a, **kw: g(f(*a, **kw)), funcs)
    else:
        raise ValueError('Composition of empty sequence not supported.')


class Conv_Bn_ReLU(keras.layers.Layer):
    """Conv2D -> BatchNorm -> ReLU block."""

    def __init__(self, filters, kernel_size, strides, name, dilation_rate=(1, 1)):
        super(Conv_Bn_ReLU, self).__init__(name=name)
        self.blocks = keras.Sequential()
        self.blocks.add(keras.layers.Conv2D(filters, kernel_size, strides, padding='same',
                                            dilation_rate=dilation_rate))
        self.blocks.add(keras.layers.BatchNormalization())
        self.blocks.add(keras.layers.ReLU())

    def call(self, inputs, **kwargs):
        output = self.blocks(inputs)
        return output


def res_block(x, filters, strides, name, dilation_rate=(1, 1)):
    """Bottleneck residual block; 'conv_block' names get a projection shortcut."""
    shortcut = x

    x = compose(Conv_Bn_ReLU(filters // 4, (1, 1), (1, 1), name='{}_conv_bn_relu1'.format(name)),
                Conv_Bn_ReLU(filters // 4, (3, 3), strides, name='{}_conv_bn_relu2'.format(name),
                             dilation_rate=dilation_rate),
                keras.layers.Conv2D(filters, (1, 1), name='{}_conv3'.format(name)),
                keras.layers.BatchNormalization(name='{}_bn3'.format(name)))(x)
    if name.find('conv_block') != -1:
        shortcut = keras.layers.Conv2D(filters, (1, 1), strides,
                                       name='{}_shortcut_conv'.format(name))(shortcut)

    output = keras.layers.Add(name='{}_add'.format(name))([x, shortcut])
    output = keras.layers.ReLU(name='{}_relu'.format(name))(output)
    return output


def psp_block(x, name):
    """Pyramid pooling: on a 60x60 feature map these pool sizes yield 1x1, 2x2,
    3x3 and 6x6 bins, as in the paper. NOTE: the paper uses average pooling for
    the pyramid bins; max pooling is kept here as in the original code."""
    p1 = compose(keras.layers.MaxPool2D((60, 60), name='{}_part1_maxpool'.format(name)),
                 Conv_Bn_ReLU(512, (1, 1), (1, 1), name='{}_part1_conv_bn_relu'.format(name)))(x)
    p2 = compose(keras.layers.MaxPool2D((30, 30), name='{}_part2_maxpool'.format(name)),
                 Conv_Bn_ReLU(512, (1, 1), (1, 1), name='{}_part2_conv_bn_relu'.format(name)))(x)
    p3 = compose(keras.layers.MaxPool2D((20, 20), name='{}_part3_maxpool'.format(name)),
                 Conv_Bn_ReLU(512, (1, 1), (1, 1), name='{}_part3_conv_bn_relu'.format(name)))(x)
    p4 = compose(keras.layers.MaxPool2D((10, 10), name='{}_part4_maxpool'.format(name)),
                 Conv_Bn_ReLU(512, (1, 1), (1, 1), name='{}_part4_conv_bn_relu'.format(name)))(x)
    # Upsample every branch back to the input resolution and fuse by concatenation.
    input_size = (x.shape[1], x.shape[2])
    p1 = tf.image.resize(p1, input_size, name='{}_resize1'.format(name))
    p2 = tf.image.resize(p2, input_size, name='{}_resize2'.format(name))
    p3 = tf.image.resize(p3, input_size, name='{}_resize3'.format(name))
    p4 = tf.image.resize(p4, input_size, name='{}_resize4'.format(name))
    output = keras.layers.Concatenate(name='{}_concatenate'.format(name))([p1, p2, p3, p4, x])
    return output


def pspnet(input_shape):
    input_tensor = keras.layers.Input(shape=input_shape, name='input')
    x = input_tensor

    # Stem: overall stride 4 after the max pool.
    x = compose(Conv_Bn_ReLU(64, (3, 3), (2, 2), name='conv_bn_relu1'),
                Conv_Bn_ReLU(64, (3, 3), (1, 1), name='conv_bn_relu2'),
                Conv_Bn_ReLU(128, (3, 3), (1, 1), name='conv_bn_relu3'),
                keras.layers.MaxPool2D((3, 3), (2, 2), 'same', name='maxpool1'))(x)

    # Dilated ResNet101 body: stages 3 and 4 keep stride 1 and use dilation
    # instead, so the backbone output stride stays at 8.
    filters = [256, 512, 1024, 2048]
    strides = [(1, 1), (2, 2), (1, 1), (1, 1)]
    dilation_rate = [(1, 1), (1, 1), (2, 2), (4, 4)]
    times = [2, 3, 22, 2]
    for i in range(len(filters)):
        x = res_block(x, filters[i], strides=strides[i], name='conv_block{}'.format(i + 1))
        for j in range(times[i]):
            x = res_block(x, filters[i], strides=(1, 1),
                          name='identity_block{}_{}'.format(i + 1, j + 1),
                          dilation_rate=dilation_rate[i])

    x = psp_block(x, name='psp_block')

    # Classifier head: 21 channels for the PASCAL VOC classes (20 + background).
    x = compose(Conv_Bn_ReLU(512, (1, 1), (1, 1), name='conv_bn_relu4'),
                keras.layers.Dropout(0.1, name='dropout'),
                keras.layers.Conv2D(21, (1, 1), (1, 1), 'same', name='conv5'))(x)
    x = tf.image.resize(x, (input_shape[0], input_shape[1]), name='resize')
    output = keras.layers.Softmax(name='softmax')(x)

    model = keras.Model(input_tensor, output, name='PSPNet')
    return model


if __name__ == '__main__':
    model = pspnet(input_shape=(473, 473, 3))
    model.build(input_shape=(None, 473, 473, 3))
    model.summary()
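
  As a quick sanity check (a sketch assuming the pspnet definition above), running the model on a dummy batch should yield one softmax probability per pixel for each of the 21 classes:

import tensorflow as tf

model = pspnet(input_shape=(473, 473, 3))
probs = model(tf.zeros((1, 473, 473, 3)))
print(probs.shape)  # expected: (1, 473, 473, 21)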

Complete Walkthrough on the Shape Dataset

Directory Layout

  • project
    • shape
      • train_imgs (training set images)
      • train_mask (training set masks)
      • test_imgs (test set images)
    • PSPNet_weight (model weights)
    • PSPNet_test_result (test set results)
    • PSPNet.py

Walkthrough Steps

  1. Running a semantic segmentation experiment is relatively simple: the training inputs are images, and so are the labels. The label images must first be encoded into class information that matches the network's output dimensions, going from (batch_size, height, width, 1) to (batch_size, height, width, num_class + 1): for each pixel, the channel of its class is set to 1 and every other channel to 0. The network input is thus (batch_size, height, width, 3) and its output (batch_size, height, width, num_class + 1).
  2. Design the loss function; in simple cases a cross-entropy loss already gives good results.
  3. Build the network, choose suitable hyperparameters, and train.
  4. At prediction time, decode the network output (the inverse of the encoding): for each pixel, the channel with the largest value gives the class (a minimal numpy sketch of this encode/decode round trip follows this list).
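
  A minimal numpy sketch of the round trip described in steps 1 and 4, using a made-up mask whose pixel values are the raw class indices (as in the dataset below):

import numpy as np

num_class = 4                                            # 3 shapes + background
mask = np.random.randint(0, num_class, size=(128, 128))  # toy label map

# Step 1: encode (height, width) class indices into (height, width, num_class)
one_hot = np.zeros((128, 128, num_class))
for c in range(num_class):
    one_hot[:, :, c] = (mask == c).astype(float)

# Step 4: decode a (height, width, num_class) prediction back to class indices
decoded = np.argmax(one_hot, axis=-1)
assert (decoded == mask).all()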

Tips

  1. The number of classes is set to the actual number of classes + 1, the extra one being the background class. This dataset has 3 classes, so the final channel count is 4, with each channel predicting one class. A Softmax is taken along the channel axis and the index of the maximum is the prediction: index 0 is background, 1 is circle, 2 is triangle, and 3 is square.
  2. PSPNet uses only the last layer of ResNet101; borrowing UNet's idea, outputs from several layers can be used to fuse features at multiple scales.
  3. Weight checkpointing, learning-rate decay, and early stopping are all configured (via Keras callbacks in the code below).
  4. The yield keyword is used to produce a generator, so the whole dataset never has to be held in memory at once, which saves a great deal of memory.
  5. The 1000 samples are split into 800 training, 100 validation, and 100 test samples; feel free to change the split.
  6. Watch the dimension transformations and the common numpy and tensorflow operations, otherwise the code may be hard to follow.
  7. The pooling kernels in the pyramid pooling module can be adjusted as needed. In the paper the module's input is 60x60, allowing 60x60, 30x30, 20x20, and 10x10 pooling kernels; on this simple dataset the input is 8x8, so I chose 8x8, 4x4, 2x2, and 1x1 kernels.
  8. PSPNet's feature extraction network is ResNet101; in this walkthrough I use ResNet50 instead. You can consult the material on feature extraction networks, try other backbones, and compare their parameter counts, speed, and final results.
  9. The auxiliary loss (AL) mentioned in the paper is applied while building the ResNet101 feature extractor; for simplicity, plain ResNet50 is used here without it.
  10. The input images can first be normalized to [0, 1] or [-1, 1]; since network parameters are generally small, normalized inputs keep the computation well behaved and speed up convergence.
  11. Practical applications usually also need dataset resizing and augmentation. For simplicity nothing elaborate is done here, but in your own projects remember to resize or pad the images as required, and apply rotations, contrast enhancement, affine transforms, and so on to make the model more robust (a minimal sketch follows this list). Also note that real-world images are not necessarily named sequentially, so pay attention to how file names are read.
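
  For tip 11, here is a minimal augmentation sketch of my own (not part of the original pipeline) that keeps image and mask geometrically aligned; the parameter values are purely illustrative:

import tensorflow as tf

def augment(image, mask):
    """image: (h, w, 3) float tensor; mask: (h, w, num_class) one-hot tensor."""
    # Geometric transforms must be applied identically to image and mask.
    if tf.random.uniform(()) > 0.5:
        image = tf.image.flip_left_right(image)
        mask = tf.image.flip_left_right(mask)
    # Photometric changes affect the image only.
    image = tf.image.random_brightness(image, max_delta=0.1)
    image = tf.image.random_contrast(image, lower=0.9, upper=1.1)
    return image, mask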

Complete Code

import os
from functools import reduce
import numpy as np
import cv2 as cv
import tensorflow as tf
import tensorflow.keras as keras


def compose(*funcs):
    """Compose an arbitrary number of functions, applied left to right."""
    if funcs:
        return reduce(lambda f, g: lambda *a, **kw: g(f(*a, **kw)), funcs)
    else:
        raise ValueError('Composition of empty sequence not supported.')


class Conv_Bn_ReLU(keras.layers.Layer):
    """Conv2D -> BatchNorm -> ReLU block."""

    def __init__(self, filters, kernel_size, strides, name, dilation_rate=(1, 1)):
        super(Conv_Bn_ReLU, self).__init__(name=name)
        self.blocks = keras.Sequential()
        self.blocks.add(keras.layers.Conv2D(filters, kernel_size, strides, padding='same',
                                            dilation_rate=dilation_rate))
        self.blocks.add(keras.layers.BatchNormalization())
        self.blocks.add(keras.layers.ReLU())

    def call(self, inputs, **kwargs):
        output = self.blocks(inputs)
        return output


def res_block(x, filters, strides, name, dilation_rate=(1, 1)):
    """Bottleneck residual block; 'conv_block' names get a projection shortcut."""
    shortcut = x

    x = compose(Conv_Bn_ReLU(filters // 4, (1, 1), (1, 1), name='{}_conv_bn_relu1'.format(name)),
                Conv_Bn_ReLU(filters // 4, (3, 3), strides, name='{}_conv_bn_relu2'.format(name),
                             dilation_rate=dilation_rate),
                keras.layers.Conv2D(filters, (1, 1), name='{}_conv3'.format(name)),
                keras.layers.BatchNormalization(name='{}_bn3'.format(name)))(x)
    if name.find('conv_block') != -1:
        shortcut = keras.layers.Conv2D(filters, (1, 1), strides,
                                       name='{}_shortcut_conv'.format(name))(shortcut)

    output = keras.layers.Add(name='{}_add'.format(name))([x, shortcut])
    output = keras.layers.ReLU(name='{}_relu'.format(name))(output)
    return output


def psp_block(x, name):
    """Pyramid pooling with pool sizes that scale with the feature map,
    giving 1x1, 2x2, 4x4 and 8x8 bins here (see tip 7)."""
    p1 = compose(keras.layers.MaxPool2D((x.shape[1], x.shape[2]), name='{}_part1_maxpool'.format(name)),
                 Conv_Bn_ReLU(x.shape[-1] // 4, (1, 1), (1, 1), name='{}_part1_conv_bn_relu'.format(name)))(x)
    p2 = compose(keras.layers.MaxPool2D((x.shape[1] // 2, x.shape[2] // 2), name='{}_part2_maxpool'.format(name)),
                 Conv_Bn_ReLU(x.shape[-1] // 4, (1, 1), (1, 1), name='{}_part2_conv_bn_relu'.format(name)))(x)
    p3 = compose(keras.layers.MaxPool2D((x.shape[1] // 4, x.shape[2] // 4), name='{}_part3_maxpool'.format(name)),
                 Conv_Bn_ReLU(x.shape[-1] // 4, (1, 1), (1, 1), name='{}_part3_conv_bn_relu'.format(name)))(x)
    p4 = compose(keras.layers.MaxPool2D((x.shape[1] // 8, x.shape[2] // 8), name='{}_part4_maxpool'.format(name)),
                 Conv_Bn_ReLU(x.shape[-1] // 4, (1, 1), (1, 1), name='{}_part4_conv_bn_relu'.format(name)))(x)
    # Upsample every branch back to the input resolution and fuse by concatenation.
    input_size = (x.shape[1], x.shape[2])
    p1 = tf.image.resize(p1, input_size, name='{}_resize1'.format(name))
    p2 = tf.image.resize(p2, input_size, name='{}_resize2'.format(name))
    p3 = tf.image.resize(p3, input_size, name='{}_resize3'.format(name))
    p4 = tf.image.resize(p4, input_size, name='{}_resize4'.format(name))
    output = keras.layers.Concatenate(name='{}_concatenate'.format(name))([p1, p2, p3, p4, x])
    return output


def small_pspnet(input_shape):
    # num_class is read from the global scope (set under __main__ below).
    input_tensor = keras.layers.Input(shape=input_shape, name='input')
    x = input_tensor

    x = compose(Conv_Bn_ReLU(32, (3, 3), (2, 2), name='conv_bn_relu1'),
                Conv_Bn_ReLU(32, (3, 3), (1, 1), name='conv_bn_relu2'),
                Conv_Bn_ReLU(64, (3, 3), (1, 1), name='conv_bn_relu3'),
                keras.layers.MaxPool2D((3, 3), (2, 2), 'same', name='maxpool1'))(x)

    x1 = res_block(x, 128, strides=(1, 1), name='conv_block1')
    for j in range(2):
        x1 = res_block(x1, 128, strides=(1, 1), name='identity_block1_{}'.format(j + 1), dilation_rate=(1, 1))

    x2 = res_block(x1, 256, strides=(2, 2), name='conv_block2')
    for j in range(2):
        x2 = res_block(x2, 256, strides=(1, 1), name='identity_block2_{}'.format(j + 1), dilation_rate=(1, 1))

    x3 = res_block(x2, 512, strides=(2, 2), name='conv_block3')
    for j in range(2):
        x3 = res_block(x3, 512, strides=(1, 1), name='identity_block3_{}'.format(j + 1), dilation_rate=(1, 1))

    x4 = res_block(x3, 1024, strides=(1, 1), name='conv_block4')
    for j in range(2):
        x4 = res_block(x4, 1024, strides=(1, 1), name='identity_block4_{}'.format(j + 1), dilation_rate=(2, 2))

    # Multi-scale fusion in the UNet spirit (see tip 2): pyramid-pool the deepest
    # features, upsample, and fuse them with the shallower x2 features.
    psp4 = psp_block(x4, name='psp_block4')
    upsampling4 = keras.layers.UpSampling2D(name='upsampling4')(psp4)
    y2 = keras.layers.Concatenate(name='concatenate4')([x2, upsampling4])
    y2 = Conv_Bn_ReLU(512, (1, 1), (1, 1), name='conv_bn_relu4')(y2)
    psp2 = psp_block(y2, name='psp_block2')

    y = compose(Conv_Bn_ReLU(128, (1, 1), (1, 1), name='conv_bn_relu5'),
                keras.layers.Dropout(0.1, name='dropout'),
                keras.layers.Conv2D(num_class, (1, 1), (1, 1), 'same', name='conv6'))(psp2)
    y = tf.image.resize(y, (input_shape[0], input_shape[1]), name='resize')
    output = keras.layers.Softmax(name='softmax')(y)

    model = keras.Model(input_tensor, output, name='Small_PSPNet')
    return model


def generate_arrays_from_file(train_data, batch_size):
    # Total number of samples
    n = len(train_data)
    i = 0
    while 1:
        X_train = []
        Y_train = []
        # Collect one batch of data
        for _ in range(batch_size):
            if i == 0:
                np.random.shuffle(train_data)
            # Read the input image and normalize it to [-1, 1]
            img = cv.imread(os.path.join(imgs_path, str(train_data[i]) + '.jpg'))
            img = img / 127.5 - 1
            X_train.append(img)

            # Read the mask image and one-hot encode it along the channel axis
            img = cv.imread(os.path.join(mask_path, str(train_data[i]) + '.png'))
            seg_labels = np.zeros((img_size[0], img_size[1], num_class))
            for c in range(num_class):
                seg_labels[:, :, c] = (img[:, :, 0] == c).astype(int)
            Y_train.append(seg_labels)

            # Wrap around (and reshuffle) after a full pass over the data
            i = (i + 1) % n
        yield tf.constant(X_train), tf.constant(Y_train)


if __name__ == '__main__':
    # Number of classes, including background
    num_class = 4
    train_data = list(range(800))
    validation_data = list(range(800, 900))
    test_data = range(900, 1000)
    epochs = 50
    batch_size = 16
    tf.random.set_seed(22)
    img_size = (128, 128)
    colors = [[0, 0, 0], [0, 0, 128], [0, 128, 0], [128, 0, 0]]

    mask_path = r'.\shape\train_mask'
    imgs_path = r'.\shape\train_imgs'
    test_path = r'.\shape\test_imgs'
    save_path = r'.\PSPNet_test_result'
    weight_path = r'.\PSPNet_weight'

    try:
        os.mkdir(save_path)
    except FileExistsError:
        print(save_path + ' already exists')

    try:
        os.mkdir(weight_path)
    except FileExistsError:
        print(weight_path + ' already exists')

    model = small_pspnet(input_shape=(img_size[0], img_size[1], 3))
    model.build(input_shape=(None, img_size[0], img_size[1], 3))
    model.summary()

    optimizer = keras.optimizers.Adam(learning_rate=1e-3)
    # Categorical cross-entropy matches the one-hot labels and softmax output.
    loss = keras.losses.CategoricalCrossentropy()

    model.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])

    # Checkpointing: save the best weights, checked every 3 epochs
    # (the `period` argument is deprecated in newer TF in favor of save_freq)
    checkpoint_period = keras.callbacks.ModelCheckpoint(
        os.path.join(weight_path, 'ep{epoch:03d}-loss{loss:.3f}-val_loss{val_loss:.3f}.h5'),
        monitor='val_loss',
        save_weights_only=True,
        save_best_only=True,
        period=3
    )

    # Learning-rate schedule: if val_loss has not improved for 3 epochs,
    # halve the learning rate and keep training
    reduce_lr = keras.callbacks.ReduceLROnPlateau(
        monitor='val_loss',
        factor=0.5,
        patience=3,
        verbose=1
    )

    # Early stopping: when val_loss stops improving, the model is
    # essentially trained and we can stop
    early_stopping = keras.callbacks.EarlyStopping(
        monitor='val_loss',
        min_delta=0,
        patience=10,
        verbose=1
    )

    # Model.fit accepts Python generators in TF 2.x (fit_generator is deprecated)
    model.fit(generate_arrays_from_file(train_data, batch_size),
              steps_per_epoch=max(1, len(train_data) // batch_size),
              validation_data=generate_arrays_from_file(validation_data, batch_size),
              validation_steps=max(1, len(validation_data) // batch_size),
              epochs=epochs,
              callbacks=[checkpoint_period, reduce_lr, early_stopping])

    # Predict on the test set and colorize the per-pixel argmax
    for name in test_data:
        test_img_path = os.path.join(test_path, str(name) + '.jpg')
        save_img_path = os.path.join(save_path, str(name) + '.png')
        test_img = cv.imread(test_img_path)
        test_img = tf.constant([test_img / 127.5 - 1])
        test_mask = model.predict(test_img)
        test_mask = np.reshape(test_mask, (img_size[0], img_size[1], num_class))
        test_mask = np.argmax(test_mask, axis=-1)
        seg_img = np.zeros((img_size[0], img_size[1], 3))
        for c in range(num_class):
            seg_img[:, :, 0] += ((test_mask == c) * (colors[c][0]))
            seg_img[:, :, 1] += ((test_mask == c) * (colors[c][1]))
            seg_img[:, :, 2] += ((test_mask == c) * (colors[c][2]))
        seg_img = seg_img.astype(np.uint8)
        cv.imwrite(save_img_path, seg_img)

Results

[Figure: PSPNet results]

PSPNet Summary

  PSPNet is an efficient semantic segmentation network; as the figure above shows, the model has about 49M parameters. Unlike SegNet and UNet, PSPNet does not have a strongly symmetric encoder-decoder structure: during encoding, pyramid pooling kernels of different sizes fuse features at different scales, while decoding recovers the spatial resolution with a simple resize. This design had an important influence on later deep learning networks.
