Preface
Today we walk through the key parts of the PSPNet paper and its code. PSPNet's segmentation results are not quite as strong as DeepLabv3's, but DeepLabv3 was built by combining some of PSPNet's ideas. Let's take a look at PSPNet.
Original paper
https://arxiv.org/pdf/1612.01105.pdf
GitHub code
https://github.com/yanjingke/pspnet-keras
Background
1. FCN
A CNN usually follows its convolutional layers with several fully connected layers, mapping the feature maps produced by the convolutions into a fixed-length feature vector. This typical CNN structure suits image-level classification and regression, since the network is expected to output class probabilities for the whole input image; AlexNet, for example, ends with a 1000-dimensional vector giving the probability that the input belongs to each class.
FCN instead classifies the image at the pixel level, solving segmentation at the semantic level. Unlike a classic CNN, which uses fully connected layers after the convolutions to get a fixed-length feature vector for classification, an FCN accepts inputs of arbitrary size and uses a deconvolution layer to upsample the last convolutional layer's feature map (with bilinear interpolation) back to the input image's size. It can therefore make a prediction for every pixel while preserving the spatial information of the original input, and finally classifies each pixel on the upsampled feature map.
2. Depthwise separable convolution
(The figure referred to here actually illustrates dilated, or atrous, convolution, which adjusts the filters' field-of-view and which the code below also uses via DepthwiseConv2D's dilation_rate: a standard 3x3 kernel has a 3x3 receptive field; inserting zeros between its elements turns it into a dilated convolution in which only 3x3 weights actually take part in the computation, yet the receptive field grows to 5x5.)
So how does depthwise separable convolution reduce computation?
Suppose a 3×3 convolutional layer has 16 input channels and 32 output channels. Concretely, each of 32 3×3 kernels traverses all 16 input channels to produce the 32 output channels, which takes 16×32×3×3 = 4608 parameters.
With depthwise separable convolution, 16 3×3 kernels each traverse one of the 16 input channels, producing 16 feature maps. Then, before merging, 32 1×1 kernels traverse those 16 feature maps, for a total of 16×3×3 + 16×32×1×1 = 656 parameters.
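The parameter counts above can be verified with a short, self-contained sketch (the 3×3, 16-in/32-out layer is just the worked example from the text, not a layer of any particular network):

```python
def conv_params(k, c_in, c_out):
    # standard convolution: every output channel filters all input channels
    return c_in * c_out * k * k

def separable_params(k, c_in, c_out):
    # depthwise part: one k x k filter per input channel
    # pointwise part: c_out 1x1 filters over c_in channels
    return c_in * k * k + c_in * c_out

print(conv_params(3, 16, 32))       # 4608
print(separable_params(3, 16, 32))  # 656
```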
3. mobilenetv2
在3x3网络结构前利用1x1卷积降维,在3x3网络结构后,利用1x1卷积升维,相比直接使用3x3网络卷积效果更好,参数更少,先进行压缩,再进行扩张。而在MobileNetV2网络部分,其采用Inverted residuals结构,在3x3网络结构前利用1x1卷积升维,在3x3网络结构后,利用1x1卷积降维,先进行扩张,再进行压缩。
为了避免Relu对特征的破坏,在在3x3网络结构前利用1x1卷积升维,在3x3网络结构后,再利用1x1卷积降维后,不再进行Relu6层,直接进行残差网络的加法。
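A minimal sketch of the channel flow through one inverted residual block; the concrete numbers (16 input channels, expansion 6, 24 output channels) are illustrative, matching the expansion=6 blocks used later in the code:

```python
def inverted_residual_channels(c_in, expansion, c_out):
    expanded = c_in * expansion  # 1x1 expand, followed by ReLU6
    depthwise = expanded         # 3x3 depthwise conv keeps the channel count
    projected = c_out            # 1x1 linear projection, no ReLU6 afterwards
    return expanded, depthwise, projected

# 16 input channels, expansion factor 6, projected to 24 channels
print(inverted_residual_channels(16, 6, 24))  # (96, 96, 24)
```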
What is PSPNet?
PSPNet is an excellent semantic segmentation model. Its strengths:
1. The proposed Pyramid Pooling Module aggregates context from different regions, improving the network's ability to capture global information. It fuses features at multiple scales: high-level features carry strong semantics while low-level features keep more detail.
2. It pools at four pyramid levels; the paper uses kernel sizes of 1×1, 2×2, 3×3 and 6×6, corresponding to the green, blue, orange and red outputs in the paper's figure. The PSP module then fuses these four pyramid scales of features: the first (red) row is the coarsest, global pooling producing a single bin, and the following three rows are pooled features at finer scales. To keep the weight of the global feature balanced, if the pyramid has N levels, a 1×1 convolution after each level reduces that level's channels to 1/N of the original. Bilinear interpolation then restores each level to its pre-pooling size, and everything is concatenated.
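For the MobileNetV2 backbone used later (a 320-channel feature map and N = 4 pyramid levels), the 1/N channel reduction works out as follows; this mirrors the out_channel // 4 reduction in the code:

```python
out_channel = 320      # channels of the backbone feature map
levels = [1, 2, 3, 6]  # pyramid pooling factors, N = 4
per_level = out_channel // len(levels)          # each level reduced to 1/N
concat = out_channel + per_level * len(levels)  # original + 4 pooled levels
print(per_level, concat)  # 80 640
```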
Code walkthrough
Backbone network
The original paper mainly uses ResNet50. The code discussed here provides two backbones for training and prediction; since I am fond of MobileNetV2, this post uses MobileNetV2 throughout. If you prefer ResNet50, you can study that code on your own.
This MobileNet feature extractor uses four downsamplings (compressions of height and width). From the references I have checked, five downsamplings are generally avoided; three or four are typical.
f4 serves as the input of the auxiliary branch.
from keras.models import Model
from keras import layers
from keras.layers import Input
from keras.layers import Lambda
from keras.layers import Activation
from keras.layers import Concatenate
from keras.layers import Add
from keras.layers import Dropout
from keras.layers import BatchNormalization
from keras.layers import Conv2D
from keras.layers import DepthwiseConv2D
from keras.layers import ZeroPadding2D
from keras.layers import GlobalAveragePooling2D
from keras.activations import relu
def _make_divisible(v, divisor, min_value=None):
    if min_value is None:
        min_value = divisor
    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
    # make sure that rounding down does not go below 90% of v
    if new_v < 0.9 * v:
        new_v += divisor
    return new_v

def relu6(x):
    return relu(x, max_value=6)

def _inverted_res_block(inputs, expansion, stride, alpha, filters, block_id, skip_connection, rate=1):
    in_channels = inputs.shape[-1].value  # inputs._keras_shape[-1]
    pointwise_conv_filters = int(filters * alpha)
    pointwise_filters = _make_divisible(pointwise_conv_filters, 8)
    x = inputs
    prefix = 'expanded_conv_{}_'.format(block_id)
    if block_id:
        # Expand
        x = Conv2D(expansion * in_channels, kernel_size=1, padding='same',
                   use_bias=False, activation=None,
                   name=prefix + 'expand')(x)
        x = BatchNormalization(epsilon=1e-3, momentum=0.999,
                               name=prefix + 'expand_BN')(x)
        x = Activation(relu6, name=prefix + 'expand_relu')(x)
    else:
        prefix = 'expanded_conv_'
    # Depthwise
    x = DepthwiseConv2D(kernel_size=3, strides=stride, activation=None,
                        use_bias=False, padding='same', dilation_rate=(rate, rate),
                        name=prefix + 'depthwise')(x)
    x = BatchNormalization(epsilon=1e-3, momentum=0.999,
                           name=prefix + 'depthwise_BN')(x)
    x = Activation(relu6, name=prefix + 'depthwise_relu')(x)
    # Project (linear, no ReLU6)
    x = Conv2D(pointwise_filters, kernel_size=1, padding='same',
               use_bias=False, activation=None,
               name=prefix + 'project')(x)
    x = BatchNormalization(epsilon=1e-3, momentum=0.999,
                           name=prefix + 'project_BN')(x)
    if skip_connection:
        return Add(name=prefix + 'add')([inputs, x])
    # if in_channels == pointwise_filters and stride == 1:
    #     return Add(name='res_connect_' + str(block_id))([inputs, x])
    return x

def get_mobilenet_encoder(inputs_size, downsample_factor=8):
    if downsample_factor == 16:
        block4_dilation = 1
        block5_dilation = 2
        block4_stride = 2
    elif downsample_factor == 8:
        block4_dilation = 2
        block5_dilation = 4
        block4_stride = 1
    else:
        raise ValueError('Unsupported factor - `{}`, Use 8 or 16.'.format(downsample_factor))
    # 473,473,3
    inputs = Input(shape=inputs_size)
    alpha = 1.0
    first_block_filters = _make_divisible(32 * alpha, 8)
    # 473,473,3 -> 237,237,32
    x = Conv2D(first_block_filters, kernel_size=3, strides=(2, 2),
               padding='same', use_bias=False, name='Conv')(inputs)
    x = BatchNormalization(epsilon=1e-3, momentum=0.999, name='Conv_BN')(x)
    x = Activation(relu6, name='Conv_Relu6')(x)
    # 237,237,32 -> 237,237,16
    x = _inverted_res_block(x, filters=16, alpha=alpha, stride=1,
                            expansion=1, block_id=0, skip_connection=False)
    #---------------------------------------------------------------#
    # 237,237,16 -> 119,119,24
    x = _inverted_res_block(x, filters=24, alpha=alpha, stride=2,
                            expansion=6, block_id=1, skip_connection=False)
    x = _inverted_res_block(x, filters=24, alpha=alpha, stride=1,
                            expansion=6, block_id=2, skip_connection=True)
    #---------------------------------------------------------------#
    # 119,119,24 -> 60,60,32
    x = _inverted_res_block(x, filters=32, alpha=alpha, stride=2,
                            expansion=6, block_id=3, skip_connection=False)
    x = _inverted_res_block(x, filters=32, alpha=alpha, stride=1,
                            expansion=6, block_id=4, skip_connection=True)
    x = _inverted_res_block(x, filters=32, alpha=alpha, stride=1,
                            expansion=6, block_id=5, skip_connection=True)
    #---------------------------------------------------------------#
    # 60,60,32 -> 30,30,64
    x = _inverted_res_block(x, filters=64, alpha=alpha, stride=block4_stride,
                            expansion=6, block_id=6, skip_connection=False)
    x = _inverted_res_block(x, filters=64, alpha=alpha, stride=1, rate=block4_dilation,
                            expansion=6, block_id=7, skip_connection=True)
    x = _inverted_res_block(x, filters=64, alpha=alpha, stride=1, rate=block4_dilation,
                            expansion=6, block_id=8, skip_connection=True)
    x = _inverted_res_block(x, filters=64, alpha=alpha, stride=1, rate=block4_dilation,
                            expansion=6, block_id=9, skip_connection=True)
    # 30,30,64 -> 30,30,96
    x = _inverted_res_block(x, filters=96, alpha=alpha, stride=1, rate=block4_dilation,
                            expansion=6, block_id=10, skip_connection=False)
    x = _inverted_res_block(x, filters=96, alpha=alpha, stride=1, rate=block4_dilation,
                            expansion=6, block_id=11, skip_connection=True)
    x = _inverted_res_block(x, filters=96, alpha=alpha, stride=1, rate=block4_dilation,
                            expansion=6, block_id=12, skip_connection=True)
    # f4 feeds the auxiliary training branch
    f4 = x
    #---------------------------------------------------------------#
    # 30,30,96 -> 30,30,160 -> 30,30,320
    x = _inverted_res_block(x, filters=160, alpha=alpha, stride=1, rate=block4_dilation,
                            expansion=6, block_id=13, skip_connection=False)
    x = _inverted_res_block(x, filters=160, alpha=alpha, stride=1, rate=block5_dilation,
                            expansion=6, block_id=14, skip_connection=True)
    x = _inverted_res_block(x, filters=160, alpha=alpha, stride=1, rate=block5_dilation,
                            expansion=6, block_id=15, skip_connection=True)
    x = _inverted_res_block(x, filters=320, alpha=alpha, stride=1, rate=block5_dilation,
                            expansion=6, block_id=16, skip_connection=False)
    f5 = x
    return inputs, f4, f5
Pyramid pooling (PSP) module
The paper proposes a hierarchical global prior that contains information at different scales from different sub-regions, called the pyramid pooling module.
The module fuses features at four pyramid scales: the first (red) row is the coarsest, global pooling producing a single bin, and the following three rows are pooled features at finer scales. To keep the weight of the global feature balanced, if the pyramid has N levels, a 1×1 convolution after each level reduces that level's channels to 1/N of the original. Bilinear interpolation then restores each level to its pre-pooling size, and everything is concatenated.
The pooling kernel sizes of the pyramid levels are configurable and depend on the input fed to the pyramid. The paper uses four levels with kernel sizes 1×1, 2×2, 3×3 and 6×6.
Suppose the feature map entering the PSP structure is 30x30x320, so its height and width are both 30. To divide it into a 6x6 grid of regions, we only need the average pooling's stride = 30/6 = 5 and kernel_size = 30/6 = 5; the average pooling then amounts to splitting the feature layer into 6x6 regions and averaging inside each region.
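The pool_size/stride arithmetic can be checked directly (30 is the example feature-map size from the text, and the rounding mirrors the np.round used in the code):

```python
import numpy as np

def psp_pool_sizes(h, pool_factors):
    # pool_size == stride, so factor f splits an h x h map into f x f regions
    return [int(np.round(float(h) / f)) for f in pool_factors]

print(psp_pool_sizes(30, [1, 2, 3, 6]))  # [30, 15, 10, 5]
```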
def pool_block(feats, pool_factor, out_channel):
    h = K.int_shape(feats)[1]
    w = K.int_shape(feats)[2]
    # pool_size = strides: 30/1=30, 30/2=15, 30/3=10, 30/6=5
    pool_size = strides = [int(np.round(float(h) / pool_factor)),
                           int(np.round(float(w) / pool_factor))]
    # average-pool the map into pool_factor x pool_factor regions
    x = AveragePooling2D(pool_size, data_format=IMAGE_ORDERING, strides=strides, padding='same')(feats)
    # 1x1 convolution to reduce the channels
    x = Conv2D(out_channel // 4, (1, 1), data_format=IMAGE_ORDERING, padding='same', use_bias=False)(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    # bilinear resize back to the pre-pooling size
    x = Lambda(lambda x: tf.image.resize_images(x, (K.int_shape(feats)[1], K.int_shape(feats)[2]), align_corners=True))(x)
    return x
def pspnet(n_classes, inputs_size, downsample_factor=8, backbone='mobilenet', aux_branch=True):
    if backbone == "mobilenet":
        img_input, f4, o = get_mobilenet_encoder(inputs_size, downsample_factor=downsample_factor)
        out_channel = 320
    elif backbone == "resnet50":
        img_input, f4, o = get_resnet50_encoder(inputs_size, downsample_factor=downsample_factor)
        out_channel = 2048
    else:
        raise ValueError('Unsupported backbone - `{}`, Use mobilenet, resnet50.'.format(backbone))
    #-------------------------------------#
    #   PSP module: pool over regions
    #-------------------------------------#
    pool_factors = [1, 2, 3, 6]
    pool_outs = [o]
    for p in pool_factors:
        pooled = pool_block(o, p, out_channel)
        pool_outs.append(pooled)
    # concatenate: 60x60x(out_channel*2)
    o = Concatenate(axis=MERGE_AXIS)(pool_outs)
Getting the prediction
Getting the prediction. If the auxiliary branch is enabled, there are two outputs.
The steps are:
1. A 3×3 convolution compresses the channels to a quarter.
2. A 1×1 convolution adjusts the channel count to 20+1 (20 VOC classes plus 1 for background).
3. A resize restores the map to the input image's height and width.
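The channel counts along these steps, using the numbers from the MobileNetV2 version of the code (out_channel = 320, VOC's 21 classes, 473x473 input):

```python
out_channel = 320                  # backbone output channels
concat_channels = out_channel * 2  # after the PSP concat: 320 + 4 * 80 = 640
step1 = out_channel // 4           # 3x3 conv output channels in the code
n_classes = 20 + 1                 # 1x1 conv output: 20 VOC classes + background
input_hw = (473, 473)              # final resize target: the input image size
print(concat_channels, step1, n_classes, input_hw)
```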
from keras.models import *
from keras.layers import *
from nets.mobilenetv2 import get_mobilenet_encoder
from nets.resnet50 import get_resnet50_encoder
import tensorflow as tf
IMAGE_ORDERING = 'channels_last'
MERGE_AXIS = -1
import numpy as np              # used by pool_block's rounding
from keras import backend as K  # K.int_shape, K.clip, K.epsilon below

def resize_image(inp, s, data_format):
    return Lambda(lambda x: tf.image.resize_images(x, (K.int_shape(x)[1]*s[0], K.int_shape(x)[2]*s[1])))(inp)

def pool_block(feats, pool_factor, out_channel):
    h = K.int_shape(feats)[1]
    w = K.int_shape(feats)[2]
    # pool_size = strides: 30/1=30, 30/2=15, 30/3=10, 30/6=5
    pool_size = strides = [int(np.round(float(h) / pool_factor)),
                           int(np.round(float(w) / pool_factor))]
    # average-pool the map into pool_factor x pool_factor regions
    x = AveragePooling2D(pool_size, data_format=IMAGE_ORDERING, strides=strides, padding='same')(feats)
    # 1x1 convolution to reduce the channels
    x = Conv2D(out_channel // 4, (1, 1), data_format=IMAGE_ORDERING, padding='same', use_bias=False)(x)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    # bilinear resize back to the pre-pooling size
    x = Lambda(lambda x: tf.image.resize_images(x, (K.int_shape(feats)[1], K.int_shape(feats)[2]), align_corners=True))(x)
    return x

def pspnet(n_classes, inputs_size, downsample_factor=8, backbone='mobilenet', aux_branch=True):
    if backbone == "mobilenet":
        img_input, f4, o = get_mobilenet_encoder(inputs_size, downsample_factor=downsample_factor)
        out_channel = 320
    elif backbone == "resnet50":
        img_input, f4, o = get_resnet50_encoder(inputs_size, downsample_factor=downsample_factor)
        out_channel = 2048
    else:
        raise ValueError('Unsupported backbone - `{}`, Use mobilenet, resnet50.'.format(backbone))
    #-------------------------------------#
    #   PSP module: pool over regions
    #-------------------------------------#
    pool_factors = [1, 2, 3, 6]
    pool_outs = [o]
    for p in pool_factors:
        pooled = pool_block(o, p, out_channel)
        pool_outs.append(pooled)
    # concatenate: 60x60x(out_channel*2)
    o = Concatenate(axis=MERGE_AXIS)(pool_outs)
    #-------------------------------------#
    #   turn the features into a prediction
    #-------------------------------------#
    # 3x3 convolution: 60x60x(out_channel//4)
    o = Conv2D(out_channel // 4, (3, 3), data_format=IMAGE_ORDERING, padding='same', use_bias=False)(o)
    o = BatchNormalization()(o)
    o = Activation('relu')(o)
    # dropout against overfitting
    o = Dropout(0.1)(o)
    # 60x60x21
    o = Conv2D(n_classes, (1, 1), data_format=IMAGE_ORDERING, padding='same')(o)
    # resize to [473,473,n_classes]
    o = Lambda(lambda x: tf.image.resize_images(x, (inputs_size[1], inputs_size[0]), align_corners=True))(o)
    # probability of each class for every pixel
    o = Activation("softmax", name="main")(o)
    if aux_branch:
        f4 = Conv2D(out_channel // 8, (3, 3), data_format=IMAGE_ORDERING, padding='same', use_bias=False)(f4)
        f4 = BatchNormalization()(f4)
        f4 = Activation('relu')(f4)
        # dropout against overfitting
        f4 = Dropout(0.1)(f4)
        # 60x60x21
        f4 = Conv2D(n_classes, (1, 1), data_format=IMAGE_ORDERING, padding='same')(f4)
        # resize to [473,473,n_classes]
        f4 = Lambda(lambda x: tf.image.resize_images(x, (inputs_size[1], inputs_size[0]), align_corners=True))(f4)
        f4 = Activation("softmax", name="aux")(f4)
        model = Model(img_input, [f4, o])
        return model
    else:
        model = Model(img_input, [o])
        return model
Loss computation
The loss combines Cross Entropy Loss and Dice Loss.
Cross Entropy Loss is the ordinary cross-entropy loss, applied per pixel.
For Dice Loss, let X be the predicted segmentation map and Y the ground-truth map. The Dice coefficient 2|X∩Y| / (|X| + |Y|) grows as prediction and ground truth overlap more, so the loss 1 - Dice is better the smaller it is. In the soft version:
(1) |X∩Y| is approximated by the element-wise product of the predicted map and the GT map,
(2) summed over all of its elements.
If we train with Dice Loss, the total loss is Dice Loss plus Cross Entropy Loss.
The Dice-with-CE loss code:
def dice_loss_with_CE(beta=1, smooth=1e-5):
    def _dice_loss_with_CE(y_true, y_pred):
        y_pred = K.clip(y_pred, K.epsilon(), 1.0 - K.epsilon())
        # cross-entropy term (the last channel of y_true is excluded)
        CE_loss = - y_true[..., :-1] * K.log(y_pred)
        CE_loss = K.mean(K.sum(CE_loss, axis=-1))
        # soft Dice (F-beta) term
        tp = K.sum(y_true[..., :-1] * y_pred, axis=[0, 1, 2])
        fp = K.sum(y_pred, axis=[0, 1, 2]) - tp
        fn = K.sum(y_true[..., :-1], axis=[0, 1, 2]) - tp
        score = ((1 + beta ** 2) * tp + smooth) / ((1 + beta ** 2) * tp + beta ** 2 * fn + fp + smooth)
        score = tf.reduce_mean(score)
        dice_loss = 1 - score
        dice_loss = tf.Print(dice_loss, [dice_loss, CE_loss])
        return CE_loss + dice_loss
    return _dice_loss_with_CE
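A numpy sanity check of the Dice term above, with beta = 1 and toy 1-D masks (with beta = 1 the tp/fp/fn formula reduces to 2|X∩Y| / (|X| + |Y|)):

```python
import numpy as np

def dice_loss(y_true, y_pred, beta=1, smooth=1e-5):
    # same tp/fp/fn form as the Keras loss, on plain arrays
    tp = np.sum(y_true * y_pred)
    fp = np.sum(y_pred) - tp
    fn = np.sum(y_true) - tp
    score = ((1 + beta ** 2) * tp + smooth) / ((1 + beta ** 2) * tp + beta ** 2 * fn + fp + smooth)
    return 1 - score

y_true = np.array([1., 1., 0., 0.])
y_pred = np.array([1., 0., 0., 0.])  # finds one of the two foreground pixels
print(round(dice_loss(y_true, y_pred), 3))  # 0.333
```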
Using only Cross Entropy Loss:
def CE():
    def _CE(y_true, y_pred):
        y_pred = K.clip(y_pred, K.epsilon(), 1.0 - K.epsilon())
        CE_loss = - y_true[..., :-1] * K.log(y_pred)
        CE_loss = K.mean(K.sum(CE_loss, axis=-1))
        # CE_loss = tf.Print(CE_loss, [CE_loss])
        return CE_loss
    return _CE
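A quick numpy check that the CE term is the per-pixel value of -Σ y_true · log(y_pred), averaged over pixels (toy one-pixel, three-class example):

```python
import numpy as np

def ce(y_true, y_pred, eps=1e-7):
    # per-pixel cross entropy, averaged over all pixels
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return float(np.mean(np.sum(-y_true * np.log(y_pred), axis=-1)))

y_true = np.array([[0., 1., 0.]])
y_pred = np.array([[0.1, 0.8, 0.1]])
print(round(ce(y_true, y_pred), 4))  # -log(0.8) = 0.2231
```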
Source: blog.csdn.net. Author: 快了的程序猿小可哥. Copyright belongs to the original author; please contact the author before reposting.
Original link: blog.csdn.net/qq_35914625/article/details/108318288