
Machine Learning - The Loss Function of Faster RCNN

2019-07-22

The loss function of Faster RCNN takes the following form:

L(\{p_i\}, \{t_i\}) = \frac{1}{N_{cls}} \sum_i L_{cls}(p_i, p_i^*) + \lambda \frac{1}{N_{reg}} \sum_i p_i^* L_{reg}(t_i, t_i^*)    (1)

p(i): the predicted classification probability of Anchor[i];

p(i)*: equal to 1 when Anchor[i] is a positive sample, and 0 when Anchor[i] is a negative sample;

An anchor is a positive sample if it has the highest IoU (Intersection-over-Union) overlap with a ground-truth box, or if its IoU overlap with some ground-truth box exceeds 0.7; an anchor is a negative sample if its IoU overlap with every ground-truth box is below 0.3; anchors that are neither positive nor negative do not take part in training.
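To make this labeling rule concrete, here is a minimal NumPy sketch (a hypothetical helper, not the repository's actual anchor-target code) that assigns each anchor the label 1 (positive), 0 (negative) or -1 (ignored), the same convention used by rpn_labels in the loss code further below:

import numpy as np

def label_anchors(overlaps, pos_thresh=0.7, neg_thresh=0.3):
    # overlaps: (num_anchors, num_gt) IoU matrix; thresholds follow the Faster R-CNN paper
    labels = np.full(overlaps.shape[0], -1, dtype=np.int32)   # -1: not used in training
    max_iou = overlaps.max(axis=1)                            # best IoU of each anchor over all GT boxes
    labels[max_iou < neg_thresh] = 0                          # negatives: IoU < 0.3 with every GT box
    labels[overlaps.argmax(axis=0)] = 1                       # the anchor with the highest IoU for each GT box
    labels[max_iou >= pos_thresh] = 1                         # anchors with IoU >= 0.7 with some GT box
    return labels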

t(i): the parameterized coordinates of the bounding box predicted for Anchor[i];

t(i)*: the parameterized coordinates of the ground-truth bounding box associated with Anchor[i];

t_x = (x - x_a)/w_a,\quad t_y = (y - y_a)/h_a,\quad t_w = \log(w/w_a),\quad t_h = \log(h/h_a)

t_x^* = (x^* - x_a)/w_a,\quad t_y^* = (y^* - y_a)/h_a,\quad t_w^* = \log(w^*/w_a),\quad t_h^* = \log(h^*/h_a)

where x, y, w, h are the center coordinates, width and height of the predicted box, x_a, y_a, w_a, h_a those of the anchor, and x^*, y^*, w^*, h^* those of the ground-truth box.
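As an illustration, the following sketch (a hypothetical helper, not code from the repository) computes these parameterized coordinates for boxes given as (center_x, center_y, width, height); t encodes the predicted box and t* the ground-truth box, both relative to the same anchor:

import numpy as np

def encode_box(box, anchor):
    # Parameterize a (center_x, center_y, width, height) box relative to an anchor.
    x, y, w, h = box
    xa, ya, wa, ha = anchor
    return np.array([(x - xa) / wa,    # t_x
                     (y - ya) / ha,    # t_y
                     np.log(w / wa),   # t_w
                     np.log(h / ha)])  # t_h

anchor = (50.0, 50.0, 32.0, 32.0)
t      = encode_box((54.0, 48.0, 40.0, 30.0), anchor)  # t(i), from the predicted box
t_star = encode_box((56.0, 47.0, 42.0, 28.0), anchor)  # t(i)*, from the ground-truth box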

N(cls): the mini-batch size;

N(reg): the number of anchor locations;

L_{reg}(t_i, t_i^*) = R(t_i - t_i^*)

where R is the smooth L1 function;

p_i^*\, L_{reg}(t_i, t_i^*)

means that the bounding-box regression loss is computed only when the sample is positive (p_i^* = 1) and is switched off otherwise.

\text{smooth}_{L1}(x) = \begin{cases} 0.5\,x^2 & |x| < 1 \\ |x| - 0.5 & \text{otherwise} \end{cases}

The smooth L1 loss avoids the drawbacks of both the L1 and L2 losses: when x is small, the gradient with respect to x also becomes small (rather than staying at a constant magnitude of 1, as with L1), and when x is large, the absolute value of the gradient is capped at 1, so training is not destabilized by the large gradients that L2 would produce for outlying predictions.
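A short NumPy sketch (not from the repository) makes both properties easy to check numerically: the gradient shrinks towards 0 for small x, and its magnitude never exceeds 1 for large x:

import numpy as np

def smooth_l1(x):
    # 0.5 * x^2 for |x| < 1, |x| - 0.5 otherwise
    return np.where(np.abs(x) < 1, 0.5 * x * x, np.abs(x) - 0.5)

def smooth_l1_grad(x):
    # gradient: x inside (-1, 1), sign(x) outside, so |gradient| <= 1 everywhere
    return np.where(np.abs(x) < 1, x, np.sign(x))

for v in [0.05, 0.5, 2.0, 100.0]:
    print(v, smooth_l1(v), smooth_l1_grad(v))
# small x -> small gradient (unlike L1, whose gradient stays at +-1 near 0)
# large x -> gradient capped at 1 (unlike L2, whose gradient grows with x)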

L(cls): the log loss over two classes (object vs. not object);

L_{cls}(p_i, p_i^*) = -\log\big[ p_i^* p_i + (1 - p_i^*)(1 - p_i) \big]

λ: a weight-balancing parameter. The authors set λ = 10 in the paper, but experiments show that the result is not sensitive to λ: as the table below indicates, varying λ from 1 to 100 changes the final result by no more than about 1%.

[Table: detection mAP for different values of λ, from the Faster R-CNN paper]

Smooth L1 Loss

def _smooth_l1_loss(self, bbox_pred, bbox_targets, bbox_inside_weights, bbox_outside_weights, sigma=1.0, dim=[1]):
    sigma_2 = sigma ** 2
    box_diff = bbox_pred - bbox_targets
    # bbox_inside_weights zeroes out the difference for non-positive samples (the p* term)
    in_box_diff = bbox_inside_weights * box_diff
    abs_in_box_diff = tf.abs(in_box_diff)
    # 1 in the quadratic region (|x| < 1/sigma^2), 0 in the linear region
    smoothL1_sign = tf.stop_gradient(tf.to_float(tf.less(abs_in_box_diff, 1. / sigma_2)))
    in_loss_box = tf.pow(in_box_diff, 2) * (sigma_2 / 2.) * smoothL1_sign \
                  + (abs_in_box_diff - (0.5 / sigma_2)) * (1. - smoothL1_sign)
    # bbox_outside_weights carries the normalization / balancing weights
    out_loss_box = bbox_outside_weights * in_loss_box
    loss_box = tf.reduce_mean(tf.reduce_sum(
        out_loss_box,
        axis=dim
    ))
    return loss_box

The smooth L1 loss in the code is a more general, σ-parameterized form:

\text{smooth}_{L1}(x) = \begin{cases} 0.5\,(\sigma x)^2 & |x| < 1/\sigma^2 \\ |x| - 0.5/\sigma^2 & \text{otherwise} \end{cases}
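As a quick sanity check (a standalone NumPy re-implementation, not the TensorFlow code above), the σ-generalized form reduces to the standard smooth L1 at σ = 1, while σ = 3 (the sigma_rpn=3.0 used for the RPN below) moves the quadratic-to-linear transition point from 1 down to 1/9:

import numpy as np

def smooth_l1_sigma(x, sigma=1.0):
    # sigma-generalized smooth L1, mirroring _smooth_l1_loss above
    sigma_2 = sigma ** 2
    quad = np.abs(x) < 1.0 / sigma_2            # quadratic region
    return np.where(quad,
                    0.5 * sigma_2 * x * x,      # 0.5 * (sigma * x)^2
                    np.abs(x) - 0.5 / sigma_2)

x = np.linspace(-2, 2, 9)
standard = np.where(np.abs(x) < 1, 0.5 * x * x, np.abs(x) - 0.5)
print(np.allclose(smooth_l1_sigma(x, sigma=1.0), standard))      # True
print(smooth_l1_sigma(np.array([0.05, 0.2, 1.0]), sigma=3.0))    # transition now at |x| = 1/9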

bbox_inside_weights corresponds to p* in Equation (1) (the Faster RCNN loss function): its value is 1 when the anchor is a positive sample and 0 when it is a negative sample. bbox_outside_weights corresponds to the N(reg), λ and N(cls) settings in Equation (1). In the paper, N(reg) ≈ 2400, λ = 10 and N(cls) = 256, so the classification and regression losses carry roughly the same weight.

In the code, N(reg) = N(cls) and λ = 1, so the classification and regression losses again carry roughly the same weight.

Loss

def _add_losses(self, sigma_rpn=3.0):
    with tf.variable_scope('LOSS_' + self._tag) as scope:
        # RPN, class loss
        rpn_cls_score = tf.reshape(self._predictions['rpn_cls_score_reshape'], [-1, 2])
        rpn_label = tf.reshape(self._anchor_targets['rpn_labels'], [-1])
        # keep only anchors labeled positive (1) or negative (0); -1 means ignored
        rpn_select = tf.where(tf.not_equal(rpn_label, -1))
        rpn_cls_score = tf.reshape(tf.gather(rpn_cls_score, rpn_select), [-1, 2])
        rpn_label = tf.reshape(tf.gather(rpn_label, rpn_select), [-1])
        rpn_cross_entropy = tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(logits=rpn_cls_score, labels=rpn_label))

        # RPN, bbox loss
        rpn_bbox_pred = self._predictions['rpn_bbox_pred']
        rpn_bbox_targets = self._anchor_targets['rpn_bbox_targets']
        rpn_bbox_inside_weights = self._anchor_targets['rpn_bbox_inside_weights']
        rpn_bbox_outside_weights = self._anchor_targets['rpn_bbox_outside_weights']
        rpn_loss_box = self._smooth_l1_loss(rpn_bbox_pred, rpn_bbox_targets, rpn_bbox_inside_weights,
                                            rpn_bbox_outside_weights, sigma=sigma_rpn, dim=[1, 2, 3])

        # RCNN, class loss
        cls_score = self._predictions['cls_score']
        label = tf.reshape(self._proposal_targets['labels'], [-1])
        cross_entropy = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=cls_score, labels=label))

        # RCNN, bbox loss
        bbox_pred = self._predictions['bbox_pred']
        bbox_targets = self._proposal_targets['bbox_targets']
        bbox_inside_weights = self._proposal_targets['bbox_inside_weights']
        bbox_outside_weights = self._proposal_targets['bbox_outside_weights']
        loss_box = self._smooth_l1_loss(bbox_pred, bbox_targets, bbox_inside_weights, bbox_outside_weights)

        self._losses['cross_entropy'] = cross_entropy
        self._losses['loss_box'] = loss_box
        self._losses['rpn_cross_entropy'] = rpn_cross_entropy
        self._losses['rpn_loss_box'] = rpn_loss_box

        loss = cross_entropy + loss_box + rpn_cross_entropy + rpn_loss_box
        regularization_loss = tf.add_n(tf.losses.get_regularization_losses(), 'regu')
        self._losses['total_loss'] = loss + regularization_loss

        self._event_summaries.update(self._losses)

    return loss

The total loss consists of the RPN cross-entropy, the RPN box regression loss, the RCNN cross-entropy, the RCNN box regression loss, and the parameter regularization loss.

IoU Computation

def bbox_overlaps(
        np.ndarray[DTYPE_t, ndim=2] boxes,
        np.ndarray[DTYPE_t, ndim=2] query_boxes):
    '''
    Parameters
    ----------
    boxes: (N, 4) ndarray of float
    query_boxes: (K, 4) ndarray of float
    Returns
    -------
    overlaps: (N, K) ndarray of overlap between boxes and query_boxes
    '''
    cdef unsigned int N = boxes.shape[0]
    cdef unsigned int K = query_boxes.shape[0]
    cdef np.ndarray[DTYPE_t, ndim=2] overlaps = np.zeros((N, K), dtype=DTYPE)
    cdef DTYPE_t iw, ih, box_area
    cdef DTYPE_t ua
    cdef unsigned int k, n
    for k in range(K):
        box_area = (
            (query_boxes[k, 2] - query_boxes[k, 0] + 1) *
            (query_boxes[k, 3] - query_boxes[k, 1] + 1)
        )
        for n in range(N):
            # width of the intersection
            iw = (
                min(boxes[n, 2], query_boxes[k, 2]) -
                max(boxes[n, 0], query_boxes[k, 0]) + 1
            )
            if iw > 0:
                # height of the intersection
                ih = (
                    min(boxes[n, 3], query_boxes[k, 3]) -
                    max(boxes[n, 1], query_boxes[k, 1]) + 1
                )
                if ih > 0:
                    # union area = area(box) + area(query_box) - intersection
                    ua = float(
                        (boxes[n, 2] - boxes[n, 0] + 1) *
                        (boxes[n, 3] - boxes[n, 1] + 1) +
                        box_area - iw * ih
                    )
                    overlaps[n, k] = iw * ih / ua
    return overlaps

The IoU is computed as IoU = C / (A + B - C), where A and B are the areas of the two boxes and C is the area of their intersection.
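For example, a small standalone computation (using the same +1 pixel convention as bbox_overlaps above):

def iou(box_a, box_b):
    # boxes given as (x1, y1, x2, y2) in inclusive pixel coordinates
    iw = min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]) + 1
    ih = min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]) + 1
    if iw <= 0 or ih <= 0:
        return 0.0
    inter = iw * ih                                                   # C
    area_a = (box_a[2] - box_a[0] + 1) * (box_a[3] - box_a[1] + 1)    # A
    area_b = (box_b[2] - box_b[0] + 1) * (box_b[3] - box_b[1] + 1)    # B
    return inter / float(area_a + area_b - inter)                     # C / (A + B - C)

print(iou((0, 0, 9, 9), (5, 5, 14, 14)))   # 25 / (100 + 100 - 25) = 0.142857...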

[Figure: IoU computation]

