LSGAN

Background

  LSGAN (Least Squares Generative Adversarial Networks), proposed in 2016, analyzed how the cross-entropy loss used in the original GAN can converge very slowly in its saturated regions, and proposed replacing it with a least-squares loss instead, producing a generative adversarial network that is more stable, converges faster, and generates higher-quality samples.
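For reference, the least-squares objectives from the paper can be written as follows, where a and b are the target codings for fake and real samples seen by the discriminator, and c is the value the generator wants the discriminator to assign to fakes (a common choice is a = 0, b = c = 1, which matches the 0/1 labels used in the code below):

```latex
\min_D V(D) = \tfrac{1}{2}\,\mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[(D(x) - b)^2\right]
            + \tfrac{1}{2}\,\mathbb{E}_{z \sim p_z}\!\left[(D(G(z)) - a)^2\right]

\min_G V(G) = \tfrac{1}{2}\,\mathbb{E}_{z \sim p_z}\!\left[(D(G(z)) - c)^2\right]
```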


LSGAN Characteristics

  The GAN network architecture is kept unchanged; the only modifications are removing the final sigmoid from the discriminator and replacing the binary cross-entropy loss with mean squared error.
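To make the two changes concrete, here is a minimal sketch (the logits and labels below are made-up illustrative values, not outputs of this post's network) comparing the two discriminator heads: vanilla GAN pushes the raw output through a sigmoid and scores it with binary cross-entropy, while LSGAN keeps the raw linear output and scores it directly with mean squared error.

```python
import tensorflow as tf

# Raw (pre-activation) discriminator outputs for two samples
logits = tf.constant([[2.0], [-1.5]])
labels = tf.constant([[1.0], [0.0]])   # real = 1, fake = 0

# Vanilla GAN head: sigmoid + binary cross-entropy (from_logits applies the sigmoid)
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)(labels, logits)

# LSGAN head: no sigmoid at all, just mean squared error against the labels
mse = tf.keras.losses.MeanSquaredError()(labels, logits)

print(float(bce), float(mse))  # mse = ((2-1)^2 + (-1.5-0)^2) / 2 = 1.625
```

Note that the least-squares loss still penalizes confidently wrong outputs far from their targets, but its gradient does not saturate the way the sigmoid-cross-entropy combination does for samples on the correct side of the decision boundary.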

LSGAN Architecture Diagrams

(generator architecture diagram)
(discriminator architecture diagram)

TensorFlow 2.0 Implementation

import os
import numpy as np
import cv2 as cv
from functools import reduce
import tensorflow as tf
import tensorflow.keras as keras


def compose(*funcs):
    """Compose layer calls left-to-right: compose(f, g)(x) == g(f(x))."""
    if funcs:
        return reduce(lambda f, g: lambda *a, **kw: g(f(*a, **kw)), funcs)
    else:
        raise ValueError('Composition of empty sequence not supported.')


def generator(input_shape):
    """Fully connected generator: noise vector -> 28x28x1 image in [-1, 1]."""
    input_tensor = keras.layers.Input(input_shape, name='input')
    x = input_tensor

    x = compose(keras.layers.Dense(256, activation='relu', name='dense_relu1'),
                keras.layers.BatchNormalization(momentum=0.8, name='bn1'),
                keras.layers.Dense(512, activation='relu', name='dense_relu2'),
                keras.layers.BatchNormalization(momentum=0.8, name='bn2'),
                keras.layers.Dense(1024, activation='relu', name='dense_relu3'),
                keras.layers.BatchNormalization(momentum=0.8, name='bn3'),
                keras.layers.Dense(784, activation='tanh', name='dense_tanh'),
                keras.layers.Reshape((28, 28, 1), name='reshape'))(x)

    model = keras.Model(input_tensor, x, name='LSGAN-Generator')

    return model


def discriminator(input_shape):
    """Fully connected discriminator; note the final Dense has NO sigmoid."""
    input_tensor = keras.layers.Input(input_shape, name='input')
    x = input_tensor

    x = compose(keras.layers.Flatten(name='flatten'),
                keras.layers.Dense(512, activation='relu', name='dense_relu1'),
                keras.layers.Dense(256, activation='relu', name='dense_relu2'),
                keras.layers.Dense(1, name='dense'))(x)

    model = keras.Model(input_tensor, x, name='LSGAN-Discriminator')

    return model


def lsgan(input_shape, model_g, model_d):
    """Stack G and a frozen D into the combined model used to train G."""
    input_tensor = keras.layers.Input(input_shape, name='input')
    x = input_tensor

    x = model_g(x)
    model_d.trainable = False    # freeze D before compiling the combined model
    x = model_d(x)

    model = keras.Model(input_tensor, x, name='LSGAN')

    return model


def save_picture(image, save_path, picture_num):
    """Tile picture_num x picture_num generated images into one grid image."""
    image = ((image + 1) * 127.5).astype(np.uint8)    # [-1, 1] -> [0, 255]
    image = np.concatenate([image[i * picture_num:(i + 1) * picture_num] for i in range(picture_num)], axis=2)
    image = np.concatenate([image[i] for i in range(picture_num)], axis=0)
    cv.imwrite(save_path, image)


if __name__ == '__main__':
    (x, _), (_, _) = keras.datasets.mnist.load_data()
    batch_size = 256
    epochs = 20
    tf.random.set_seed(22)
    save_path = os.path.join('.', 'lsgan')
    if not os.path.exists(save_path):
        os.makedirs(save_path)

    # Normalize pixel values to [-1, 1] to match the generator's tanh output
    x = x[..., np.newaxis].astype(np.float32) / 127.5 - 1
    x = tf.data.Dataset.from_tensor_slices(x).batch(batch_size)

    optimizer = keras.optimizers.Adam(0.0002, 0.5)

    real_dmse = keras.metrics.MeanSquaredError()
    fake_dmse = keras.metrics.MeanSquaredError()
    gmse = keras.metrics.MeanSquaredError()

    # Compile D first so its own training step is fixed before it is frozen
    # inside the combined model
    model_d = discriminator(input_shape=(28, 28, 1))
    model_d.compile(optimizer=optimizer, loss='mse')

    model_g = generator(input_shape=(100,))
    model_g.summary()
    keras.utils.plot_model(model_g, 'LSGAN-generator.png', show_shapes=True, show_layer_names=True)

    model_d.summary()
    keras.utils.plot_model(model_d, 'LSGAN-discriminator.png', show_shapes=True, show_layer_names=True)

    model = lsgan(input_shape=(100,), model_g=model_g, model_d=model_d)
    model.compile(optimizer=optimizer, loss='mse')
    model.summary()
    keras.utils.plot_model(model, 'LSGAN.png', show_shapes=True, show_layer_names=True)

    for epoch in range(epochs):
        x_db = iter(x.shuffle(10000))

        for step, real_image in enumerate(x_db):
            noise = np.random.normal(0, 1, (real_image.shape[0], 100))
            fake_image = model_g(noise)

            # Track how far D's outputs are from the least-squares targets
            real_dmse(np.ones((real_image.shape[0], 1)), model_d(real_image))
            fake_dmse(np.zeros((real_image.shape[0], 1)), model_d(fake_image))
            gmse(np.ones((real_image.shape[0], 1)), model(noise))

            # Train D on real (target 1) and fake (target 0), then G (target 1)
            real_dloss = model_d.train_on_batch(real_image, np.ones((real_image.shape[0], 1)))
            fake_dloss = model_d.train_on_batch(fake_image, np.zeros((real_image.shape[0], 1)))
            gloss = model.train_on_batch(noise, np.ones((real_image.shape[0], 1)))

            if step % 20 == 0:
                print('epoch = {}, step = {}, real_dmse = {}, fake_dmse = {}, gmse = {}'.format(
                    epoch, step, real_dmse.result(), fake_dmse.result(), gmse.result()))
                real_dmse.reset_states()
                fake_dmse.reset_states()
                gmse.reset_states()
                fake_data = np.random.normal(0, 1, (100, 100))
                fake_image = model_g(fake_data)
                save_picture(fake_image.numpy(), os.path.join(save_path, 'epoch{}_step{}.jpg'.format(epoch, step)), 10)


Model Results

(grid of sample images generated during training)

Tips

  1. Normalize the input images to [0, 1] or [-1, 1] first; since network parameters are generally small, normalized inputs make the computation better-behaved and speed up convergence.
  2. Pay attention to the dimension transformations and the common NumPy/TensorFlow operations used here; otherwise the code may be hard to follow.
  3. Consider adding weight checkpointing, learning-rate decay, and early stopping.
  4. LSGAN is very sensitive to the network architecture, the optimizer parameters, and the layers' hyperparameters; when results are poor the cause is hard to pin down, and diagnosing it may require considerable hands-on engineering experience.
  5. Create the discriminator first and compile it, which fixes the discriminator's own training behavior. Then, when building the combined model, set the discriminator's trainable attribute to False so it will not be trained there; this does not affect the previously compiled discriminator. You can inspect this through the model's _collection_collected_trainable_weights attribute: if it is empty the model will not be trained, otherwise it will. After compile, this attribute is fixed; no matter how trainable is changed afterwards, training is unaffected as long as you do not recompile.
  6. The LSGAN in this post modifies vanilla GAN by removing the sigmoid and changing the loss function; you can of course try the same change on DCGAN, CGAN, and other models.
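Tip 5 can be verified with a small sketch (the toy single-Dense models below are stand-ins for G and D, not this post's networks): because D's trainable flag is set to False before the combined model is compiled, training the combined model updates only G and leaves D's weights untouched.

```python
import numpy as np
from tensorflow import keras

# Toy stand-ins for the discriminator and generator
d = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
d.compile(optimizer='sgd', loss='mse')          # D's own training step is fixed here

d.trainable = False                             # freeze D for the combined model
g = keras.Sequential([keras.layers.Dense(4, input_shape=(2,))])
combined = keras.Sequential([g, d])
combined.compile(optimizer='sgd', loss='mse')   # captures D as frozen

w_before = [w.copy() for w in d.get_weights()]
combined.train_on_batch(np.ones((8, 2)), np.ones((8, 1)))   # updates G only
w_after = d.get_weights()
# w_before and w_after are identical: the combined model never touches D
```

Conversely, D can still be trained directly through its own train_on_batch, because its compile happened before the flag was flipped; this is exactly the pattern the training loop above relies on.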

LSGAN Summary

  The LSGAN paper contains extensive analysis of the loss function; I am no expert, so I will not elaborate on the math here, as that might only confuse readers further, and anyone interested can look up the relevant material online. Because LSGAN leaves the network structure essentially unchanged, only swapping out an activation and the loss function, its parameter count is identical to that of vanilla GAN.
