Preface
Back in 2015, SSD emerged as a dark horse whose contribution opened a new era for one-stage deep learning detectors. Today I will walk through SSD, the first solid object detection algorithm I studied half a year ago. PS: looking back, getting started really was hard.
Paper
https://arxiv.org/abs/1512.02325
GitHub
https://github.com/yanjingke/ssd-keras
Advantages of SSD
Multi-scale prediction: predictions at different scales are generated from feature maps of different scales, and aspect ratios are used to make the per-box predictions explicit. From 2015 to today, a large fraction of detection algorithms have been built on this idea, which is why SSD opened a new era in object detection.
Preliminaries
a[::-1]
a[::-1] is equivalent to a[-1:-len(a)-1:-1], i.e. it copies the list from the last element to the first — a reversed copy.
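A quick sketch of that slice in action:

a = [1, 2, 3, 4, 5]
print(a[::-1])             # [5, 4, 3, 2, 1]
print(a[-1:-len(a)-1:-1])  # [5, 4, 3, 2, 1], the equivalent explicit slice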
SSD anchors
Full code: see VisionForPriorBox.py in my GitHub repository.
SSD designs its default boxes so that specific feature maps learn to respond to objects at specific scales. Suppose we use m feature maps for prediction. The default-box scale for each feature map is computed as:

$$s_k = s_{\min} + \frac{s_{\max} - s_{\min}}{m - 1}(k - 1), \qquad k \in [1, m]$$

This corresponds to anchor_sizes in the code below.
Here $s_k$ is the ratio of the prior-box size on feature layer k to the input image size, and $s_{\max}$ and $s_{\min}$ are the largest and smallest ratios.
SSD has six feature layers used for classification and regression,
namely feat_layers=['block4', 'block7', 'block8', 'block9', 'block10', 'block11'].
m is the number of feature layers; you might expect the denominator to be 6-1=5, but it is actually 5-1=4, because the author takes the first layer's scale S1 out and sets it separately.
k is the index of the feature layer; note that k ranges from 1 to m, i.e. 1 to 6.
With that, we can start computing.
In the actual computation, $s_k$ is computed in percent with the step rounded down, so it differs slightly from the formula above:

$$s_k = 100\,s_{\min} + \left\lfloor \frac{100\,(s_{\max} - s_{\min})}{m - 1} \right\rfloor (k - 1)$$
So plugging in m = 5 and (Smin, Smax) = (0.2, 0.9), i.e. anchor_size_bounds=[0.20, 0.90],
we get S1~S6 = 20, 37, 54, 71, 88, 105 — each step simply adds 17.
Dividing by 100 turns these back into ratios:
0.2, 0.37, 0.54, 0.71, 0.88, 1.05
Since these are ratios, we multiply each by the input size 300 to get the sizes on the original image:
60, 111, 162, 213, 264, 315.
Finally, you may ask: where does the 30 come from? That is S1': S1' = 0.5 * S1 = 0.1, and 0.1 * 300 = 30.
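A minimal sketch reproducing this arithmetic in plain Python (independent of the repository code):

import math

m = 5                                     # layers covered by the formula (S1 is handled separately)
smin, smax = 0.20, 0.90                   # anchor_size_bounds
img_size = 300

step = math.floor((smax - smin) * 100 / (m - 1))        # floor(70 / 4) = 17
scales = [smin * 100 + step * k for k in range(m + 1)]  # [20, 37, 54, 71, 88, 105]
sizes = [s / 100 * img_size for s in scales]            # [60, 111, 162, 213, 264, 315]
sizes = [0.5 * sizes[0]] + sizes                        # prepend S1' = 0.5 * S1 -> 30
print(sizes)  # [30.0, 60.0, 111.0, 162.0, 213.0, 264.0, 315.0]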
Now that anchor_sizes is clear: this parameter directly determines the box size on each feature layer. Boxes are small near the input and large near the output, so SSD's shallow layers detect small objects and its deep layers detect large objects. The actual anchor boxes are computed from anchor_sizes together with anchor_ratios.
First, where the anchor boxes come from. Suppose the current layer is block4, so its feat_shape is (38, 38).
You can think of this as 38x38 grid cells, each of which predicts a few anchor boxes — here each cell has 4 prior boxes.
default_params = SSDParams(
    img_shape=(300, 300),
    num_classes=21,
    no_annotation_label=21,
    feat_layers=['block4', 'block7', 'block8', 'block9', 'block10', 'block11'],
    feat_shapes=[(38, 38), (19, 19), (10, 10), (5, 5), (3, 3), (1, 1)],
    anchor_size_bounds=[0.20, 0.90],
    anchor_sizes=[(30., 60.),
                  (60., 111.),
                  (111., 162.),
                  (162., 213.),
                  (213., 264.),
                  (264., 315.)],
    anchor_ratios=[[1, 1, 2, .5],
                   [2, .5, 3, 1./3],
                   [2, .5, 3, 1./3],
                   [2, .5, 3, 1./3],
                   [1, 1, 2, .5],
                   [1, 1, 2, .5]],
    anchor_steps=[8, 16, 32, 64, 100, 300],
    anchor_offset=0.5,
    normalizations=[20, -1, -1, -1, -1, -1],
    prior_scaling=[0.1, 0.1, 0.2, 0.2]
)
anchor_ratios=[[1, 1, 2, .5],
               [2, .5, 3, 1./3],
               [2, .5, 3, 1./3],
               [2, .5, 3, 1./3],
               [1, 1, 2, .5],
               [1, 1, 2, .5]],
anchor_sizes=[(30., 60.),
              (60., 111.),
              (111., 162.),
              (162., 213.),
              (213., 264.),
              (264., 315.)],
Notice that the first layer and the last two layers list only two distinct aspect ratios, so those three layers get only 4 boxes each.
Take the 38x38 layer: each of its cells has just 4 boxes — one small and one large square, plus two rectangles.
Its anchor_ratio is anchor_ratios[0],
PS: that is [1, 1, 2, .5],
and its anchor_size is anchor_sizes[0],
which is (30, 60).
The small square has side min_size = 30; the large square has side sqrt(min_size * max_size) = sqrt(30 * 60) ≈ 42.4 (PS: min_size=30, max_size=60).
Then come the two rectangles. This is where anchor_ratios steps in, since the rectangle width and height follow a formula:
w = min_size * sqrt(ratio) and h = min_size / sqrt(ratio), where ratio runs over the values in [1, 1, 2, .5].
The relevant code:
# Add first anchor boxes with ratio=1.
# sizes is (30, 60); img_shape is the input image size (300, 300)
h[0] = sizes[0] / img_shape[0]
w[0] = sizes[0] / img_shape[1]
di = 1
if len(sizes) > 1:
    h[1] = math.sqrt(sizes[0] * sizes[1]) / img_shape[0]
    w[1] = math.sqrt(sizes[0] * sizes[1]) / img_shape[1]
    di += 1
# r runs over the ratios, here [2, .5, 3, 1./3]
for i, r in enumerate(ratios):
    h[i+di] = sizes[0] / img_shape[0] / math.sqrt(r)
    w[i+di] = sizes[0] / img_shape[1] * math.sqrt(r)
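To make the four boxes concrete, here is the arithmetic for block4 (min_size=30, max_size=60) as a small standalone sketch, not repository code:

import math

min_size, max_size = 30.0, 60.0
small_square = (min_size, min_size)                                 # (30.0, 30.0)
big_square = (math.sqrt(min_size * max_size),) * 2                  # (~42.4, ~42.4)
rect_2 = (min_size * math.sqrt(2), min_size / math.sqrt(2))         # w ~42.4, h ~21.2
rect_half = (min_size * math.sqrt(0.5), min_size / math.sqrt(0.5))  # w ~21.2, h ~42.4
print(small_square, big_square, rect_2, rect_half)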
Summary:
anchor_sizes directly determines the box size on each feature layer: boxes are small near the input and large near the output, so SSD uses its shallow layers to detect small objects and its deep layers to detect large objects.
SSD code walkthrough
SSD's main contribution is the use of default boxes of different scales on different output layers for class-score prediction and box-offset localization.
1. Prediction
First, let's get to know VGG16; its structure is as follows:
1. The original image is resized to (224,224,3).
2. conv1: two [3,3] convolutions with 64 filters, output (224,224,64); then 2x2 max pooling, output (112,112,64).
3. conv2: two [3,3] convolutions with 128 filters, output (112,112,128); then 2x2 max pooling, output (56,56,128).
4. conv3: three [3,3] convolutions with 256 filters, output (56,56,256); then 2x2 max pooling, output (28,28,256).
5. conv4: three [3,3] convolutions with 512 filters, output (28,28,512); then 2x2 max pooling, output (14,14,512).
6. conv5: three [3,3] convolutions with 512 filters, output (14,14,512); then 2x2 max pooling, output (7,7,512).
7. A convolution simulating a fully connected layer (equivalent effect), output (1,1,4096); done twice.
8. A final convolution simulating a fully connected layer, output (1,1,1000).
The 1000-dim output at the end is the per-class prediction.
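To illustrate steps 7 and 8 — how a convolution can simulate a fully connected layer — here is a standalone Keras sketch; the layer names are my own and it is not part of the repository:

from keras.layers import Conv2D, Input
from keras.models import Model

# On a (7,7,512) feature map, a 7x7 convolution with 4096 filters acts
# exactly like a Dense layer with 4096 units.
inp = Input(shape=(7, 7, 512))
x = Conv2D(4096, kernel_size=(7, 7), activation='relu', name='fc6_as_conv')(inp)   # -> (1,1,4096)
x = Conv2D(4096, kernel_size=(1, 1), activation='relu', name='fc7_as_conv')(x)     # -> (1,1,4096)
x = Conv2D(1000, kernel_size=(1, 1), activation='softmax', name='fc8_as_conv')(x)  # -> (1,1,1000)
Model(inp, x).summary()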
In SSD, the classification part of VGG is reworked: the fully connected layers are replaced with convolutions, and dilated (atrous) convolution is used to enlarge the receptive field. This concerns fc6 and fc7.
In summary:
1. VGG16's FC6 and FC7 layers are converted into convolutional layers.
2. All Dropout layers and FC8 are removed.
3. Conv6, Conv7, Conv8, and Conv9 are added.
PS: in VGG16, FC6, FC7, and FC8 are the layers responsible for classification.
Code:
import keras.backend as K
from keras.layers import Activation
from keras.layers import Conv2D
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import GlobalAveragePooling2D
from keras.layers import Input
from keras.layers import MaxPooling2D
from keras.layers import merge, concatenate
from keras.layers import Reshape
from keras.layers import ZeroPadding2D
from keras.models import Model
def VGG16(input_tensor):
    #---------------------------- backbone feature extractor: start ---------------------------#
    # net dictionary holding the SSD graph
    net = {}
    # Block 1
    net['input'] = input_tensor
    # 300,300,3 -> 150,150,64
    net['conv1_1'] = Conv2D(64, kernel_size=(3,3), activation='relu', padding='same', name='conv1_1')(net['input'])
    net['conv1_2'] = Conv2D(64, kernel_size=(3,3), activation='relu', padding='same', name='conv1_2')(net['conv1_1'])
    net['pool1'] = MaxPooling2D((2, 2), strides=(2, 2), padding='same', name='pool1')(net['conv1_2'])
    # Block 2
    # 150,150,64 -> 75,75,128
    net['conv2_1'] = Conv2D(128, kernel_size=(3,3), activation='relu', padding='same', name='conv2_1')(net['pool1'])
    net['conv2_2'] = Conv2D(128, kernel_size=(3,3), activation='relu', padding='same', name='conv2_2')(net['conv2_1'])
    net['pool2'] = MaxPooling2D((2, 2), strides=(2, 2), padding='same', name='pool2')(net['conv2_2'])
    # Block 3
    # 75,75,128 -> 38,38,256
    net['conv3_1'] = Conv2D(256, kernel_size=(3,3), activation='relu', padding='same', name='conv3_1')(net['pool2'])
    net['conv3_2'] = Conv2D(256, kernel_size=(3,3), activation='relu', padding='same', name='conv3_2')(net['conv3_1'])
    net['conv3_3'] = Conv2D(256, kernel_size=(3,3), activation='relu', padding='same', name='conv3_3')(net['conv3_2'])
    net['pool3'] = MaxPooling2D((2, 2), strides=(2, 2), padding='same', name='pool3')(net['conv3_3'])
    # Block 4
    # 38,38,256 -> 19,19,512
    net['conv4_1'] = Conv2D(512, kernel_size=(3,3), activation='relu', padding='same', name='conv4_1')(net['pool3'])
    net['conv4_2'] = Conv2D(512, kernel_size=(3,3), activation='relu', padding='same', name='conv4_2')(net['conv4_1'])
    net['conv4_3'] = Conv2D(512, kernel_size=(3,3), activation='relu', padding='same', name='conv4_3')(net['conv4_2'])
    net['pool4'] = MaxPooling2D((2, 2), strides=(2, 2), padding='same', name='pool4')(net['conv4_3'])
    # Block 5
    # 19,19,512 -> 19,19,512
    net['conv5_1'] = Conv2D(512, kernel_size=(3,3), activation='relu', padding='same', name='conv5_1')(net['pool4'])
    net['conv5_2'] = Conv2D(512, kernel_size=(3,3), activation='relu', padding='same', name='conv5_2')(net['conv5_1'])
    net['conv5_3'] = Conv2D(512, kernel_size=(3,3), activation='relu', padding='same', name='conv5_3')(net['conv5_2'])
    net['pool5'] = MaxPooling2D((3, 3), strides=(1, 1), padding='same', name='pool5')(net['conv5_3'])
    # FC6, implemented as a dilated convolution
    # 19,19,512 -> 19,19,1024
    net['fc6'] = Conv2D(1024, kernel_size=(3,3), dilation_rate=(6, 6), activation='relu', padding='same', name='fc6')(net['pool5'])
    # x = Dropout(0.5, name='drop6')(x)
    # FC7
    # 19,19,1024 -> 19,19,1024
    net['fc7'] = Conv2D(1024, kernel_size=(1,1), activation='relu', padding='same', name='fc7')(net['fc6'])
    # x = Dropout(0.5, name='drop7')(x)
    # Block 6
    # 19,19,1024 -> 10,10,512
    net['conv6_1'] = Conv2D(256, kernel_size=(1,1), activation='relu', padding='same', name='conv6_1')(net['fc7'])
    net['conv6_2'] = ZeroPadding2D(padding=((1, 1), (1, 1)), name='conv6_padding')(net['conv6_1'])
    net['conv6_2'] = Conv2D(512, kernel_size=(3,3), strides=(2, 2), activation='relu', name='conv6_2')(net['conv6_2'])
    # Block 7
    # 10,10,512 -> 5,5,256
    net['conv7_1'] = Conv2D(128, kernel_size=(1,1), activation='relu', padding='same', name='conv7_1')(net['conv6_2'])
    net['conv7_2'] = ZeroPadding2D(padding=((1, 1), (1, 1)), name='conv7_padding')(net['conv7_1'])
    net['conv7_2'] = Conv2D(256, kernel_size=(3,3), strides=(2, 2), activation='relu', padding='valid', name='conv7_2')(net['conv7_2'])
    # Block 8
    # 5,5,256 -> 3,3,256
    net['conv8_1'] = Conv2D(128, kernel_size=(1,1), activation='relu', padding='same', name='conv8_1')(net['conv7_2'])
    net['conv8_2'] = Conv2D(256, kernel_size=(3,3), strides=(1, 1), activation='relu', padding='valid', name='conv8_2')(net['conv8_1'])
    # Block 9
    # 3,3,256 -> 1,1,256
    net['conv9_1'] = Conv2D(128, kernel_size=(1,1), activation='relu', padding='same', name='conv9_1')(net['conv8_2'])
    net['conv9_2'] = Conv2D(256, kernel_size=(3,3), strides=(1, 1), activation='relu', padding='valid', name='conv9_2')(net['conv9_1'])
    #---------------------------- backbone feature extractor: end ---------------------------#
    return net
Obtaining the predictions
From the structure above we take: the features of conv4's third convolution (conv4_3), the fc7 features, conv6's second convolution (conv6_2), conv7's second convolution (conv7_2), conv8's second convolution (conv8_2), and conv9's second convolution (conv9_2). For each of these effective feature layers we apply one num_priors x 4 convolution and one num_priors x num_classes convolution, and we also compute the prior boxes belonging to that layer. Here num_priors is the number of prior boxes owned by the feature layer.
The num_priors x 4 values encode, for every grid point on the layer, the adjustment of each prior box.
The num_priors x num_classes convolution predicts, for every grid point on the layer, the class of each predicted box.
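As a sanity check on these per-layer prior counts, the total number of boxes comes out to the 8732 that appears in the Reshape further below:

feat_shapes = [38, 19, 10, 5, 3, 1]
num_priors  = [4, 6, 6, 6, 4, 4]
print(sum(s * s * n for s, n in zip(feat_shapes, num_priors)))  # 8732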
Code:
import keras.backend as K
from keras.layers import Activation
#from keras.layers import AtrousConvolution2D
from keras.layers import Conv2D
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import GlobalAveragePooling2D
from keras.layers import Input
from keras.layers import MaxPooling2D
from keras.layers import merge, concatenate
from keras.layers import Reshape
from keras.layers import ZeroPadding2D
from keras.models import Model
from nets.VGG16 import VGG16
from nets.ssd_layers import Normalize
from nets.ssd_layers import PriorBox
def SSD300(input_shape, num_classes=21):
    # 300,300,3
    input_tensor = Input(shape=input_shape)
    img_size = (input_shape[1], input_shape[0])

    # net dictionary built by the VGG16 backbone
    net = VGG16(input_tensor)
    #----------------------- heads on top of the backbone features ---------------------------#
    # conv4_3: 38,38,512
    net['conv4_3_norm'] = Normalize(20, name='conv4_3_norm')(net['conv4_3'])
    num_priors = 4
    # localization head: num_priors prior boxes per grid point, 4 values (x, y, h, w adjustments)
    net['conv4_3_norm_mbox_loc'] = Conv2D(num_priors * 4, kernel_size=(3,3), padding='same', name='conv4_3_norm_mbox_loc')(net['conv4_3_norm'])
    net['conv4_3_norm_mbox_loc_flat'] = Flatten(name='conv4_3_norm_mbox_loc_flat')(net['conv4_3_norm_mbox_loc'])
    # classification head: num_priors prior boxes per grid point, num_classes scores
    net['conv4_3_norm_mbox_conf'] = Conv2D(num_priors * num_classes, kernel_size=(3,3), padding='same', name='conv4_3_norm_mbox_conf')(net['conv4_3_norm'])
    net['conv4_3_norm_mbox_conf_flat'] = Flatten(name='conv4_3_norm_mbox_conf_flat')(net['conv4_3_norm_mbox_conf'])
    priorbox = PriorBox(img_size, 30.0, max_size=60.0, aspect_ratios=[2],
                        variances=[0.1, 0.1, 0.2, 0.2],
                        name='conv4_3_norm_mbox_priorbox')
    net['conv4_3_norm_mbox_priorbox'] = priorbox(net['conv4_3_norm'])

    # fc7
    num_priors = 6
    net['fc7_mbox_loc'] = Conv2D(num_priors * 4, kernel_size=(3,3), padding='same', name='fc7_mbox_loc')(net['fc7'])
    net['fc7_mbox_loc_flat'] = Flatten(name='fc7_mbox_loc_flat')(net['fc7_mbox_loc'])
    net['fc7_mbox_conf'] = Conv2D(num_priors * num_classes, kernel_size=(3,3), padding='same', name='fc7_mbox_conf')(net['fc7'])
    net['fc7_mbox_conf_flat'] = Flatten(name='fc7_mbox_conf_flat')(net['fc7_mbox_conf'])
    priorbox = PriorBox(img_size, 60.0, max_size=111.0, aspect_ratios=[2, 3],
                        variances=[0.1, 0.1, 0.2, 0.2],
                        name='fc7_mbox_priorbox')
    net['fc7_mbox_priorbox'] = priorbox(net['fc7'])

    # conv6_2
    num_priors = 6
    x = Conv2D(num_priors * 4, kernel_size=(3,3), padding='same', name='conv6_2_mbox_loc')(net['conv6_2'])
    net['conv6_2_mbox_loc'] = x
    net['conv6_2_mbox_loc_flat'] = Flatten(name='conv6_2_mbox_loc_flat')(net['conv6_2_mbox_loc'])
    x = Conv2D(num_priors * num_classes, kernel_size=(3,3), padding='same', name='conv6_2_mbox_conf')(net['conv6_2'])
    net['conv6_2_mbox_conf'] = x
    net['conv6_2_mbox_conf_flat'] = Flatten(name='conv6_2_mbox_conf_flat')(net['conv6_2_mbox_conf'])
    priorbox = PriorBox(img_size, 111.0, max_size=162.0, aspect_ratios=[2, 3],
                        variances=[0.1, 0.1, 0.2, 0.2],
                        name='conv6_2_mbox_priorbox')
    net['conv6_2_mbox_priorbox'] = priorbox(net['conv6_2'])

    # conv7_2
    num_priors = 6
    x = Conv2D(num_priors * 4, kernel_size=(3,3), padding='same', name='conv7_2_mbox_loc')(net['conv7_2'])
    net['conv7_2_mbox_loc'] = x
    net['conv7_2_mbox_loc_flat'] = Flatten(name='conv7_2_mbox_loc_flat')(net['conv7_2_mbox_loc'])
    x = Conv2D(num_priors * num_classes, kernel_size=(3,3), padding='same', name='conv7_2_mbox_conf')(net['conv7_2'])
    net['conv7_2_mbox_conf'] = x
    net['conv7_2_mbox_conf_flat'] = Flatten(name='conv7_2_mbox_conf_flat')(net['conv7_2_mbox_conf'])
    priorbox = PriorBox(img_size, 162.0, max_size=213.0, aspect_ratios=[2, 3],
                        variances=[0.1, 0.1, 0.2, 0.2],
                        name='conv7_2_mbox_priorbox')
    net['conv7_2_mbox_priorbox'] = priorbox(net['conv7_2'])

    # conv8_2
    num_priors = 4
    x = Conv2D(num_priors * 4, kernel_size=(3,3), padding='same', name='conv8_2_mbox_loc')(net['conv8_2'])
    net['conv8_2_mbox_loc'] = x
    net['conv8_2_mbox_loc_flat'] = Flatten(name='conv8_2_mbox_loc_flat')(net['conv8_2_mbox_loc'])
    x = Conv2D(num_priors * num_classes, kernel_size=(3,3), padding='same', name='conv8_2_mbox_conf')(net['conv8_2'])
    net['conv8_2_mbox_conf'] = x
    net['conv8_2_mbox_conf_flat'] = Flatten(name='conv8_2_mbox_conf_flat')(net['conv8_2_mbox_conf'])
    priorbox = PriorBox(img_size, 213.0, max_size=264.0, aspect_ratios=[2],
                        variances=[0.1, 0.1, 0.2, 0.2],
                        name='conv8_2_mbox_priorbox')
    net['conv8_2_mbox_priorbox'] = priorbox(net['conv8_2'])

    # conv9_2
    num_priors = 4
    x = Conv2D(num_priors * 4, kernel_size=(3,3), padding='same', name='conv9_2_mbox_loc')(net['conv9_2'])
    net['conv9_2_mbox_loc'] = x
    net['conv9_2_mbox_loc_flat'] = Flatten(name='conv9_2_mbox_loc_flat')(net['conv9_2_mbox_loc'])
    x = Conv2D(num_priors * num_classes, kernel_size=(3,3), padding='same', name='conv9_2_mbox_conf')(net['conv9_2'])
    net['conv9_2_mbox_conf'] = x
    net['conv9_2_mbox_conf_flat'] = Flatten(name='conv9_2_mbox_conf_flat')(net['conv9_2_mbox_conf'])
    priorbox = PriorBox(img_size, 264.0, max_size=315.0, aspect_ratios=[2],
                        variances=[0.1, 0.1, 0.2, 0.2],
                        name='conv9_2_mbox_priorbox')
    net['conv9_2_mbox_priorbox'] = priorbox(net['conv9_2'])

    # stack the results from all layers
    net['mbox_loc'] = concatenate([net['conv4_3_norm_mbox_loc_flat'],
                                   net['fc7_mbox_loc_flat'],
                                   net['conv6_2_mbox_loc_flat'],
                                   net['conv7_2_mbox_loc_flat'],
                                   net['conv8_2_mbox_loc_flat'],
                                   net['conv9_2_mbox_loc_flat']],
                                  axis=1, name='mbox_loc')
    net['mbox_conf'] = concatenate([net['conv4_3_norm_mbox_conf_flat'],
                                    net['fc7_mbox_conf_flat'],
                                    net['conv6_2_mbox_conf_flat'],
                                    net['conv7_2_mbox_conf_flat'],
                                    net['conv8_2_mbox_conf_flat'],
                                    net['conv9_2_mbox_conf_flat']],
                                   axis=1, name='mbox_conf')
    net['mbox_priorbox'] = concatenate([net['conv4_3_norm_mbox_priorbox'],
                                        net['fc7_mbox_priorbox'],
                                        net['conv6_2_mbox_priorbox'],
                                        net['conv7_2_mbox_priorbox'],
                                        net['conv8_2_mbox_priorbox'],
                                        net['conv9_2_mbox_priorbox']],
                                       axis=1, name='mbox_priorbox')

    if hasattr(net['mbox_loc'], '_keras_shape'):
        num_boxes = net['mbox_loc']._keras_shape[-1] // 4
    elif hasattr(net['mbox_loc'], 'int_shape'):
        num_boxes = K.int_shape(net['mbox_loc'])[-1] // 4
    # 8732,4
    net['mbox_loc'] = Reshape((num_boxes, 4), name='mbox_loc_final')(net['mbox_loc'])
    # 8732,21
    net['mbox_conf'] = Reshape((num_boxes, num_classes), name='mbox_conf_logits')(net['mbox_conf'])
    net['mbox_conf'] = Activation('softmax', name='mbox_conf_final')(net['mbox_conf'])
    net['predictions'] = concatenate([net['mbox_loc'],
                                      net['mbox_conf'],
                                      net['mbox_priorbox']],
                                     axis=2, name='predictions')
    print(net['predictions'])
    model = Model(net['input'], net['predictions'])
    return model
Code for generating the prior boxes (see the anchor preliminaries above for the theory):
class PriorBox(Layer):
    def __init__(self, img_size, min_size, max_size=None, aspect_ratios=None,
                 flip=True, variances=[0.1], clip=True, **kwargs):
        if K.image_dim_ordering() == 'tf':
            self.waxis = 2
            self.haxis = 1
        else:
            self.waxis = 3
            self.haxis = 2
        self.img_size = img_size
        if min_size <= 0:
            raise Exception('min_size must be positive.')
        self.min_size = min_size
        self.max_size = max_size
        self.aspect_ratios = [1.0]
        if max_size:
            if max_size < min_size:
                raise Exception('max_size must be greater than min_size.')
            self.aspect_ratios.append(1.0)
        if aspect_ratios:
            for ar in aspect_ratios:
                if ar in self.aspect_ratios:
                    continue
                self.aspect_ratios.append(ar)
                if flip:
                    self.aspect_ratios.append(1.0 / ar)
        self.variances = np.array(variances)
        self.clip = True
        super(PriorBox, self).__init__(**kwargs)

    def compute_output_shape(self, input_shape):
        num_priors_ = len(self.aspect_ratios)
        layer_width = input_shape[self.waxis]
        layer_height = input_shape[self.haxis]
        num_boxes = num_priors_ * layer_width * layer_height
        return (input_shape[0], num_boxes, 8)

    def call(self, x, mask=None):
        if hasattr(x, '_keras_shape'):
            input_shape = x._keras_shape
        elif hasattr(K, 'int_shape'):
            input_shape = K.int_shape(x)
        # ------------------ #
        #  get width and height
        # ------------------ #
        layer_width = input_shape[self.waxis]
        layer_height = input_shape[self.haxis]
        img_width = self.img_size[0]
        img_height = self.img_size[1]
        box_widths = []
        box_heights = []
        for ar in self.aspect_ratios:
            if ar == 1 and len(box_widths) == 0:
                box_widths.append(self.min_size)
                box_heights.append(self.min_size)
            elif ar == 1 and len(box_widths) > 0:
                box_widths.append(np.sqrt(self.min_size * self.max_size))
                box_heights.append(np.sqrt(self.min_size * self.max_size))
            elif ar != 1:
                box_widths.append(self.min_size * np.sqrt(ar))
                box_heights.append(self.min_size / np.sqrt(ar))
        box_widths = 0.5 * np.array(box_widths)
        box_heights = 0.5 * np.array(box_heights)
        step_x = img_width / layer_width
        step_y = img_height / layer_height
        linx = np.linspace(0.5 * step_x, img_width - 0.5 * step_x, layer_width)
        liny = np.linspace(0.5 * step_y, img_height - 0.5 * step_y, layer_height)
        centers_x, centers_y = np.meshgrid(linx, liny)
        centers_x = centers_x.reshape(-1, 1)
        centers_y = centers_y.reshape(-1, 1)
        num_priors_ = len(self.aspect_ratios)
        # each prior box needs two (centers_x, centers_y) pairs:
        # the first yields the upper-left corner, the second the lower-right corner
        prior_boxes = np.concatenate((centers_x, centers_y), axis=1)
        prior_boxes = np.tile(prior_boxes, (1, 2 * num_priors_))
        # obtain the upper-left and lower-right corners of the prior boxes
        prior_boxes[:, ::4] -= box_widths
        prior_boxes[:, 1::4] -= box_heights
        prior_boxes[:, 2::4] += box_widths
        prior_boxes[:, 3::4] += box_heights
        # normalize to fractions of the image size
        prior_boxes[:, ::2] /= img_width
        prior_boxes[:, 1::2] /= img_height
        prior_boxes = prior_boxes.reshape(-1, 4)
        prior_boxes = np.minimum(np.maximum(prior_boxes, 0.0), 1.0)
        num_boxes = len(prior_boxes)
        if len(self.variances) == 1:
            variances = np.ones((num_boxes, 4)) * self.variances[0]
        elif len(self.variances) == 4:
            variances = np.tile(self.variances, (num_boxes, 1))
        else:
            raise Exception('Must provide one or four variances.')
        prior_boxes = np.concatenate((prior_boxes, variances), axis=1)
        prior_boxes_tensor = K.expand_dims(K.variable(prior_boxes), 0)
        pattern = [tf.shape(x)[0], 1, 1]
        prior_boxes_tensor = tf.tile(prior_boxes_tensor, pattern)
        return prior_boxes_tensor
Decoding the predicted boxes
From the network we obtain:
1. num_priors x 4: the adjustment of every prior box at every grid point of the feature layer.
2. num_priors x num_classes: the class of every predicted box at every grid point of the feature layer.
3. The prior boxes of every effective feature layer.
Prior boxes carry rough position and size information, but they are finite and cannot represent arbitrary boxes, so they still need adjusting; SSD adjusts them with the output of the num_priors x 4 convolution.
In num_priors x 4, num_priors is the number of prior boxes at each grid point, and the 4 values are the adjustments of x_offset, y_offset, h, and w.
x_offset and y_offset are the offsets of the predicted box from the prior-box center along the x and y axes.
h and w describe how the predicted box's height and width change relative to the prior box.
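In formula form — with prior center $(p_{cx}, p_{cy})$, prior size $(p_w, p_h)$, predicted offsets $(\Delta x, \Delta y, \Delta w, \Delta h)$, and variances $v = (0.1, 0.1, 0.2, 0.2)$ — the decoding is:

$$b_{cx} = p_{cx} + \Delta x \cdot v_x \cdot p_w, \qquad b_{cy} = p_{cy} + \Delta y \cdot v_y \cdot p_h$$
$$b_w = p_w \, e^{\Delta w \cdot v_w}, \qquad b_h = p_h \, e^{\Delta h \cdot v_h}$$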
The SSD decoding procedure:
1. Add each grid point's x_offset and y_offset to the grid-point center; the result is the center of the predicted box. Then combine the prior box with h and w to compute the predicted box's width and height. This yields the full position of the predicted box.
2. For each class, take the boxes whose scores exceed the threshold, together with their scores.
3. Apply non-maximum suppression to filter out heavily overlapping boxes.
4. Sort by confidence and keep only the top 200 results.
def detection_out(self, predictions, background_label_id=0, keep_top_k=200,
                  confidence_threshold=0.5):
    # network outputs
    mbox_loc = predictions[:, :, :4]
    # variances 0.1,0.1,0.2,0.2
    variances = predictions[:, :, -4:]
    # prior boxes
    mbox_priorbox = predictions[:, :, -8:-4]
    # confidences
    mbox_conf = predictions[:, :, 4:-8]
    results = []
    # loop over every image in the batch
    for i in range(len(mbox_loc)):
        results.append([])
        decode_bbox = self.decode_boxes(mbox_loc[i], mbox_priorbox[i], variances[i])
        for c in range(self.num_classes):
            if c == background_label_id:
                continue
            c_confs = mbox_conf[i, :, c]
            c_confs_m = c_confs > confidence_threshold
            if len(c_confs[c_confs_m]) > 0:
                # take the boxes with scores above confidence_threshold
                boxes_to_process = decode_bbox[c_confs_m]
                confs_to_process = c_confs[c_confs_m]
                # IoU-based non-maximum suppression
                feed_dict = {self.boxes: boxes_to_process,
                             self.scores: confs_to_process}
                idx = self.sess.run(self.nms, feed_dict=feed_dict)
                # keep the boxes that survive NMS
                good_boxes = boxes_to_process[idx]
                confs = confs_to_process[idx][:, None]
                # stack label, confidence, and box position
                labels = c * np.ones((len(idx), 1))
                c_pred = np.concatenate((labels, confs, good_boxes), axis=1)
                # append to the results
                results[-1].extend(c_pred)
        if len(results[-1]) > 0:
            # sort by confidence
            results[-1] = np.array(results[-1])
            # [::-1] reverses, so the highest confidence comes first
            argsort = np.argsort(results[-1][:, 1])[::-1]
            results[-1] = results[-1][argsort]
            # keep the keep_top_k most confident detections
            results[-1] = results[-1][:keep_top_k]
    return results
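detection_out above relies on self.decode_boxes, which this post does not list. The following NumPy sketch shows what that decoding plausibly looks like under the offset scheme just described; treat the exact function body as an assumption:

import numpy as np

def decode_boxes(mbox_loc, mbox_priorbox, variances):
    # prior boxes come as normalized corners; convert to center/size
    prior_wh = mbox_priorbox[:, 2:4] - mbox_priorbox[:, 0:2]
    prior_center = 0.5 * (mbox_priorbox[:, 0:2] + mbox_priorbox[:, 2:4])
    # apply the predicted offsets, scaled by the variances (0.1, 0.1, 0.2, 0.2)
    decode_center = mbox_loc[:, 0:2] * variances[:, 0:2] * prior_wh + prior_center
    decode_wh = prior_wh * np.exp(mbox_loc[:, 2:4] * variances[:, 2:4])
    # back to corners, clipped to the image
    corners = np.concatenate([decode_center - 0.5 * decode_wh,
                              decode_center + 0.5 * decode_wh], axis=-1)
    return np.clip(corners, 0.0, 1.0)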
2. Training
Encoding the ground-truth boxes
From the prediction part we know that each feature layer's num_priors x 4 convolution predicts the adjustment of every prior box at every grid point. The raw SSD output is therefore not the real position of a predicted box on the image; it must be decoded first. Since the loss is computed against the SSD output format, we also need the opposite direction: the ground-truth boxes must be encoded, converting their position information into the same format the SSD predictions use.
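The encoding is simply the inverse of the decoding formulas given earlier:

$$\Delta x = \frac{b_{cx} - p_{cx}}{p_w \, v_x}, \qquad \Delta y = \frac{b_{cy} - p_{cy}}{p_h \, v_y}, \qquad \Delta w = \frac{\ln(b_w / p_w)}{v_w}, \qquad \Delta h = \frac{\ln(b_h / p_h)}{v_h}$$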
Code:
def encode_box(self, box, return_iou=True):
    iou = self.iou(box)
    encoded_box = np.zeros((self.num_priors, 4 + return_iou))
    # find the prior boxes that overlap this ground-truth box sufficiently
    assign_mask = iou > self.overlap_threshold
    if not assign_mask.any():
        assign_mask[iou.argmax()] = True
    if return_iou:
        encoded_box[:, -1][assign_mask] = iou[assign_mask]
    # pick out those prior boxes
    assigned_priors = self.priors[assign_mask]
    # inverse encoding: convert the ground-truth box into the SSD output format
    # first compute the ground-truth center and size
    box_center = 0.5 * (box[:2] + box[2:])
    box_wh = box[2:] - box[:2]
    # then the centers and sizes of the high-overlap prior boxes
    assigned_priors_center = 0.5 * (assigned_priors[:, :2] + assigned_priors[:, 2:4])
    assigned_priors_wh = (assigned_priors[:, 2:4] - assigned_priors[:, :2])
    # work backwards to the prediction SSD should produce
    encoded_box[:, :2][assign_mask] = box_center - assigned_priors_center
    encoded_box[:, :2][assign_mask] /= assigned_priors_wh
    # divide by the variance 0.1
    encoded_box[:, :2][assign_mask] /= assigned_priors[:, -4:-2]
    encoded_box[:, 2:4][assign_mask] = np.log(box_wh / assigned_priors_wh)
    # divide by the variance 0.2
    encoded_box[:, 2:4][assign_mask] /= assigned_priors[:, -2:]
    return encoded_box.ravel()
We only need the prior box with the largest IoU: that prior box is the one used to predict the ground-truth box. So one more filtering pass is required: among all the high-IoU priors that the code above produced for each ground-truth box, keep only the one with the maximum IoU.
def assign_boxes(self, boxes):
    assignment = np.zeros((self.num_priors, 4 + self.num_classes + 8))
    assignment[:, 4] = 1.0
    if len(boxes) == 0:
        return assignment
    # compute the IoU-based encoding for every ground-truth box
    encoded_boxes = np.apply_along_axis(self.encode_box, 1, boxes[:, :4])
    # each ground-truth box's encoded values plus its IoU
    encoded_boxes = encoded_boxes.reshape(-1, self.num_priors, 5)
    # take the prior with the highest overlap and record its index
    best_iou = encoded_boxes[:, :, -1].max(axis=0)
    best_iou_idx = encoded_boxes[:, :, -1].argmax(axis=0)
    best_iou_mask = best_iou > 0
    best_iou_idx = best_iou_idx[best_iou_mask]
    assign_num = len(best_iou_idx)
    # keep the expected predictions of the best-overlap priors
    encoded_boxes = encoded_boxes[:, best_iou_mask, :]
    assignment[:, :4][best_iou_mask] = encoded_boxes[best_iou_idx, np.arange(assign_num), :4]
    # column 4 is the background probability, set to 0
    assignment[:, 4][best_iou_mask] = 0
    assignment[:, 5:-8][best_iou_mask] = boxes[best_iou_idx, 4:]
    assignment[:, -8][best_iou_mask] = 1
    # assign_boxes thus yields what the prediction for this input image should look like
    return assignment
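encode_box also calls self.iou, which is not listed in this post. Here is a minimal NumPy sketch of that ground-truth-vs-priors IoU in corner format; the signature is an assumption consistent with the code above:

import numpy as np

def iou(box, priors):
    # box: (4,) as [xmin, ymin, xmax, ymax]; priors: (num_priors, 4) in the same format
    inter_ul = np.maximum(priors[:, :2], box[:2])
    inter_br = np.minimum(priors[:, 2:4], box[2:])
    inter_wh = np.maximum(inter_br - inter_ul, 0.0)
    inter = inter_wh[:, 0] * inter_wh[:, 1]
    area_box = (box[2] - box[0]) * (box[3] - box[1])
    area_priors = (priors[:, 2] - priors[:, 0]) * (priors[:, 3] - priors[:, 1])
    return inter / (area_box + area_priors - inter)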
Computing the loss
After the matching step, most default boxes are negatives, especially when the number of possible default boxes is large. This creates a significant imbalance between positive and negative training examples and makes training hard to converge. So instead of using all negatives, they are sorted by the highest confidence loss per default box and only the top ones are kept, so that the ratio between negatives and positives is at most 3:1.
class MultiboxLoss(object):
    def __init__(self, num_classes, alpha=1.0, neg_pos_ratio=3.0,
                 background_label_id=0, negatives_for_hard=100.0):
        self.num_classes = num_classes
        self.alpha = alpha
        self.neg_pos_ratio = neg_pos_ratio
        if background_label_id != 0:
            raise Exception('Only 0 as background label id is supported')
        self.background_label_id = background_label_id
        self.negatives_for_hard = negatives_for_hard

    def _l1_smooth_loss(self, y_true, y_pred):
        abs_loss = tf.abs(y_true - y_pred)
        sq_loss = 0.5 * (y_true - y_pred)**2
        l1_loss = tf.where(tf.less(abs_loss, 1.0), sq_loss, abs_loss - 0.5)
        return tf.reduce_sum(l1_loss, -1)

    def _softmax_loss(self, y_true, y_pred):
        y_pred = tf.maximum(y_pred, 1e-7)
        softmax_loss = -tf.reduce_sum(y_true * tf.log(y_pred), axis=-1)
        return softmax_loss

    def compute_loss(self, y_true, y_pred):
        batch_size = tf.shape(y_true)[0]
        num_boxes = tf.to_float(tf.shape(y_true)[1])
        # compute all the losses
        # classification loss
        # batch_size,8732,21 -> batch_size,8732
        conf_loss = self._softmax_loss(y_true[:, :, 4:-8], y_pred[:, :, 4:-8])
        # box localization loss
        # batch_size,8732,4 -> batch_size,8732
        loc_loss = self._l1_smooth_loss(y_true[:, :, :4], y_pred[:, :, :4])
        # losses over the positive labels
        # number of positives per image
        num_pos = tf.reduce_sum(y_true[:, :, -8], axis=-1)
        # positive localization loss per image
        pos_loc_loss = tf.reduce_sum(loc_loss * y_true[:, :, -8], axis=1)
        # positive classification loss per image
        pos_conf_loss = tf.reduce_sum(conf_loss * y_true[:, :, -8], axis=1)
        # pick a limited number of negatives
        num_neg = tf.minimum(self.neg_pos_ratio * num_pos, num_boxes - num_pos)
        # if there are no positives, there are no negatives either
        # find which entries are greater than 0
        pos_num_neg_mask = tf.greater(num_neg, 0)
        # has_min is 1.0 if any image has negatives; if positives and negatives
        # are both 0, fall back to a manually set number of negatives
        has_min = tf.to_float(tf.reduce_any(pos_num_neg_mask))
        num_neg = tf.concat(axis=0,
                            values=[num_neg, [(1 - has_min) * self.negatives_for_hard]])
        # average number of negatives to sample per image
        num_neg_batch = tf.reduce_mean(tf.boolean_mask(num_neg, tf.greater(num_neg, 0)))
        num_neg_batch = tf.to_int32(num_neg_batch)
        # start of the confidences
        confs_start = 4 + self.background_label_id + 1
        # end of the confidences
        confs_end = confs_start + self.num_classes - 1
        # for boxes that should not contain an object, take their highest class confidence
        max_confs = tf.reduce_max(y_pred[:, :, confs_start:confs_end], axis=2)
        # take the top_k confidences as negatives
        _, indices = tf.nn.top_k(max_confs * (1 - y_true[:, :, -8]), k=num_neg_batch)
        # build indices of shape (batch, num_neg_batch); multiplying by num_boxes
        # gives each sample's start offset, and adding the top_k indices
        # yields a flat full_indices
        batch_idx = tf.expand_dims(tf.range(0, batch_size), 1)
        batch_idx = tf.tile(batch_idx, (1, num_neg_batch))
        full_indices = (tf.reshape(batch_idx, [-1]) * tf.to_int32(num_boxes) +
                        tf.reshape(indices, [-1]))
        neg_conf_loss = tf.gather(tf.reshape(conf_loss, [-1]), full_indices)
        neg_conf_loss = tf.reshape(neg_conf_loss, [batch_size, num_neg_batch])
        neg_conf_loss = tf.reduce_sum(neg_conf_loss, axis=1)
        # loss is the sum over positives and negatives
        num_pos = tf.where(tf.not_equal(num_pos, 0), num_pos, tf.ones_like(num_pos))
        total_loss = tf.reduce_sum(pos_conf_loss) + tf.reduce_sum(neg_conf_loss)
        total_loss /= tf.reduce_sum(num_pos)
        total_loss += tf.reduce_sum(self.alpha * pos_loc_loss) / tf.reduce_sum(num_pos)
        return total_loss
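To wire the loss into training, compilation looks roughly like this. This is only a sketch: SSD300 and MultiboxLoss are the ones defined in this post, while the optimizer and learning rate are illustrative choices, not the repository's settings:

from keras.optimizers import Adam

model = SSD300((300, 300, 3), num_classes=21)    # from the prediction section
multibox = MultiboxLoss(num_classes=21, neg_pos_ratio=3.0)
model.compile(optimizer=Adam(lr=5e-4), loss=multibox.compute_loss)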
Summary
What you learn on paper always feels shallow; to truly understand, you have to practice it yourself.
Source: blog.csdn.net. Author: 快了的程序猿小可哥. Copyright belongs to the original author; please contact the author before reprinting.
Original link: blog.csdn.net/qq_35914625/article/details/108381093