CN104915946B - A saliency-based object segmentation method suitable for severely degraded images - Google Patents

A saliency-based object segmentation method suitable for severely degraded images

Info

Publication number
CN104915946B
Authority
CN
China
Prior art keywords
point
Prior art date
Legal status
Active
Application number
CN201510069617.3A
Other languages
Chinese (zh)
Other versions
CN104915946A (en)
Inventor
刘盛
王建峰
张少波
陈胜勇
Current Assignee
Hangzhou Shishang Technology Co ltd
Original Assignee
Zhejiang University of Technology ZJUT
Priority date
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201510069617.3A priority Critical patent/CN104915946B/en
Publication of CN104915946A publication Critical patent/CN104915946A/en
Application granted granted Critical
Publication of CN104915946B publication Critical patent/CN104915946B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

A saliency-based object segmentation method suitable for severely degraded images comprises the following steps: (1) generate initial salient object seeds from a saliency map; (2) generate a skeleton based on local autocorrelation; (3) generate starting points for expansion on the edge of the salient object; (4) initialize an expansion direction for each starting point; (5) set the termination conditions for expansion and perform the expansion; (6) mark the starting points for which no termination point is found as degraded-region marker points; (7) patch and smooth the expanded result; (8) patch the object segmentation result with superpixels according to the degraded-region marker points. By combining local autocorrelation and superpixels, the invention effectively improves the accuracy and completeness of the segmentation result, avoids the loss of partial regions in the segmentation result caused by severe image degradation, and improves the robustness of salient object segmentation against image degradation.

Description

A Saliency-Based Object Segmentation Method for Severely Degraded Images

Technical Field

The invention relates to the technical fields of computer vision and image processing, and in particular to a saliency-based object segmentation method.

Background

Saliency-based object segmentation is an active research topic whose goal is to segment the object of interest out of an image. Detecting salient regions in natural images imitates the human visual system: it automatically finds the regions of interest on which humans would focus their gaze. When people observe a natural image or a real scene, they pay attention to the whole salient object rather than just one salient region. Saliency-based object segmentation is therefore a necessary line of research, and it is widely used in many high-level applications such as object recognition, behavior analysis, and segmentation of objects of interest in images. Nevertheless, salient object segmentation methods based on saliency information often fail when the image is severely degraded, especially when the foreground has localized motion blur while the background has uniform motion blur. Image degradation seriously affects most computer vision applications: it usually reduces the accuracy of algorithm results and can even cause some algorithms to fail outright. This problem is common in saliency-based object segmentation of natural images. The salient object in a degraded image may contain many parts that are not salient enough, and these parts cause ambiguity during object segmentation. As a result, saliency-based object segmentation results on degraded images are usually incomplete.

Summary of the Invention

To overcome the problem that object segmentation results lose partial regions due to severe image degradation, the present invention proposes a saliency-based object segmentation method suitable for severely degraded images. It effectively improves the accuracy and completeness of the segmentation result, avoids the loss of partial regions caused by severe image degradation, and improves the robustness of salient object segmentation against image degradation.

The technical solution adopted by the present invention to solve this technical problem is as follows:

A saliency-based object segmentation method suitable for severely degraded images, comprising the following steps:

(1) Generate initial salient object seeds from the saliency map

Find a histogram peak within the high-saliency range of the histogram of the saliency map obtained with the soft image abstraction method; this high-saliency range is chosen as (127, 255]. Threshold the saliency map with a threshold T to obtain a binarized segmentation result; label the connected components one by one, applying a morphological dilation before labeling to preserve more salient details of the target object, and extract the dominant region among the labeled disconnected regions as the initial salient object seed;

(2) Generate a skeleton based on local autocorrelation

The local autocorrelation equation of a point (x, y) within a local window w centered on that point is:

f(x,y) = Σ_(xk,yk)∈w [I(xk,yk) − I(xk+Δx, yk+Δy)]²,  (2.1)

where I(xk,yk) denotes the gradient of the point (xk,yk) within the 3×3 window w, and Δx and Δy denote the displacements in the x and y directions, respectively;

Equation (2.1) is approximated as:

f(x,y) ≈ Σ_(xk,yk)∈w (Ix(xk,yk)Δx + Iy(xk,yk)Δy)² = [Δx, Δy] M [Δx, Δy]ᵀ,  (2.2)

where

M = Σ_(xk,yk)∈w [ Ix²(xk,yk)           Ix(xk,yk)Iy(xk,yk) ]
                [ Ix(xk,yk)Iy(xk,yk)   Iy²(xk,yk)         ];  (2.3)

Compute the two eigenvalues of the matrix M obtained at each point; the eigenvector corresponding to the smaller eigenvalue gives the major-axis direction of the ellipse, and this direction can be expressed as the extension direction of the point. Convert the value computed at each point into the direction space [0, 180), so that each value represents both directions along one line; the value at each pixel of the resulting motion direction map corresponds to one such direction;

Normalize the motion direction map to four directions {0, 45, 90, 135} by evenly partitioning the direction space; the direction that occurs most frequently in the normalized motion direction map is taken as the background direction. The background direction is then removed, and the largest remaining connected region is taken as a supplementary object seed. To repair the lost parts, search the two directions adjacent to the background direction along the background direction, and connect two neighboring related regions whenever they are spatially close to each other;

If the initial salient object seed obtained in step (1) consists of several disconnected regions, the result of step (1) cannot represent the whole skeleton of the target object. In that case, the skeleton based on local autocorrelation is used to refine the initial salient object seed into a better salient object seed; considering that the skeleton obtained from local autocorrelation is somewhat inflated compared with the target object, a morphological erosion is applied to correct this. When the initial object seed consists of several disconnected regions, fuse the skeleton based on local autocorrelation with the initial object seed to form the final salient object seed; otherwise, take the result of step (1) alone as the final salient object seed;

Holes in the salient object seed need to be filled, except for those holes whose area exceeds a percentage threshold of the whole target object;

(3) Generate starting points for expansion on the edge of the salient object seed

Every point on the boundary of the salient object seed is taken as a starting point of our expansion method. A 3×3 convolution kernel Ns is constructed and convolved with the binary salient object seed map Mb; when the value of the center point p in the operation window is 1, Ns is used to compute a determination factor d:

d = { 1, n ≠ 0 and n ≠ 8; 0, n = 0 or n = 8 },  (3.1)

n = Σ_(pij∈w) pij · Nsij,  (3.2)

where pij is the binary value of the pixel at row i, column j of the salient object seed within the window w, and Nsij is the value at row i, column j of Ns. If n is not equal to 0 or 8, the determination factor d is 1, indicating that the values of the points are not all the same, and the center point is taken as a starting point on the edge of the salient object seed;

(4) Initialize an expansion direction for each starting point

Based on the starting point set {Ps}, the direction of each starting point is computed. Each starting point is assumed to expand along its normal direction, so this normal direction must be computed. The expansion directions are divided into 8 types, labeled with the eight digits 0 to 7, and a convolution kernel Nd is constructed to determine the normal direction of each starting point; the label value ld of the direction is computed from:

Nd = [ 0 1 2 ]
     [ 7 0 3 ]
     [ 6 5 4 ],  (4.2)

where pij is the value of the binary map Mb at the pixel in row i, column j of the convolution window w, p̄ij denotes the inverted pixel value (1 becomes 0 and 0 becomes 1), and p11 is the value of the pixel in row 1, column 1 of the window w; n1 is the number of points other than the center point satisfying {pij = 1} when p11 = 1, and n2 is the number of points other than the center point satisfying {pij = 1} when p11 = 0;

(5) Set the termination conditions for expansion and perform the expansion

An adaptive-threshold Canny operator is used to detect the boundaries of the original image, and an expandable restricted region is defined; this restricted region is obtained by dilating the salient object seed three times;

Within the restricted region, every boundary point can become a termination point of the expansion; according to the expansion direction, the termination conditions fall into two classes, {0, 2, 4, 6} and {1, 3, 5, 7}, each of which covers several cases;

When a starting point finds its termination point along its normal direction, the starting point is extended toward the corresponding termination point, and the points on the extension line between the starting point (x, y) and the termination point (xe, ye) are reassigned. The set of points on the extension line is expressed as:

{p(x+dixΔx, y+diyΔy) = 1 | 0 ≤ Δx ≤ |x−xe|, 0 ≤ Δy ≤ |y−ye|},  (5.1)

where p(x+dixΔx, y+diyΔy) is the binary value of a point on the extension line, indicating that the reassigned points on the extension line within the restricted region become points of the final target object; the value of di is taken from the set {(−1,−1), (0,−1), (1,−1), (1,0), (1,1), (0,1), (−1,1), (−1,0)}, corresponding to the 8 direction label values {0, 1, 2, 3, 4, 5, 6, 7} in Figure 3;

(6) Mark the starting points for which no termination point is found as degraded-region marker points

If a starting point has no termination point within the expansion-restricted region, mark it as a point in a degraded region, obtaining a marker point set Pl that represents the degraded regions;

(7) Patch and smooth the expanded result

Sort the disconnected holes by area and reassign the points inside them as points of the target object, except for those holes whose area exceeds a percentage threshold of the whole object; finally, apply a Gaussian filter to smooth the rough boundaries in the result caused by errors.

(8) Patch the object segmentation result with superpixels according to the degraded-region marker points

Superpixels obtained with the simple linear iterative clustering (SLIC) superpixel segmentation method are used to patch the lost degraded regions; when the skeleton based on local autocorrelation has been fused into the salient object seed, the superpixel-based object fusion process is not performed;

Objf = { Obje + Σ_(Si∈S, nli>5) Si, a = 0; Obje, a = 1 },  (8.1)

where Objf denotes the final target object and Obje denotes the result obtained after expansion; the parameter a = 0 indicates that the skeleton based on local autocorrelation was not applied to the salient object seed, and nli denotes the number of marker points in superpixel Si; these marker points are obtained in step (6) and belong to the set Pl.

The technical idea of the present invention is to construct a saliency-based object segmentation method with an automatic expansion mechanism, which can be roughly divided into two stages: generation of salient object seeds and segmentation based on those seeds. In the seed generation stage, we generate initial salient object seeds from the result of the soft image abstraction method and refine them with local autocorrelation consistency, so that the seed serves as the whole skeleton of the target object. In the seed-based segmentation stage, based on the observation that boundaries in degraded regions are not salient, we propose a novel method called "normal extension". Starting from points on the salient object edge, we compute the propagation direction of each starting point and enumerate the termination conditions. In this way, the salient object seed is extended to the true boundary of the target object, and the points for which no corresponding boundary can be found are marked. To patch the lost degraded regions, superpixels are fused into the final object segmentation result according to the density of the marked points.

The beneficial effects of the present invention are mainly as follows: the normal extension method extends the salient object seed to the true boundary of the target object; combining local autocorrelation and superpixels effectively improves the accuracy and completeness of the segmentation result, avoids the loss of partial regions caused by severe image degradation, and improves the robustness of salient object segmentation against image degradation.

Brief Description of the Drawings

Figure 1: The two eigenvectors of the matrix M obtained from the local autocorrelation equation serve as the major and minor axes of an ellipse; the angle between the major axis of the ellipse (i.e., the eigenvector corresponding to the smaller eigenvalue) and the horizontal line is taken as the extension direction of the pixel (interpreted as the motion direction).

Figure 2: Generating expansion starting points on the salient object seed. On the left is the convolution kernel used; on the right is the process of applying the kernel to one pixel of the salient object seed, together with its result.

Figure 3: On the left are the 8 types of expansion directions; on the right, the same directions labeled with the eight digits 0 to 7.

Detailed Description

The present invention is further described below with reference to the accompanying drawings.

Referring to Figures 1 to 3, the specific steps of a saliency-based object segmentation method suitable for severely degraded images are as follows:

(1) Generate initial salient object seeds from the saliency map. To obtain the whole object by expansion, we need a good skeleton of the target object. In general, a salient region is more likely to be a foreground region. We therefore look for a histogram peak within the high-saliency range of the histogram of the saliency map obtained with the soft image abstraction method. In our method this range is chosen as (127, 255], which corresponds to highly salient regions. The saliency map is then thresholded with a threshold T to obtain a binarized segmentation result. We label the connected components one by one, applying a morphological dilation before labeling to preserve more salient details of the target object, and extract the dominant region among the labeled disconnected regions as our rough initial salient object seed.
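A minimal sketch of this step, assuming OpenCV and NumPy and an 8-bit saliency map; picking T as the histogram peak inside (127, 255] is an assumption about how the threshold is derived from the peak described above:

```python
import cv2
import numpy as np

def initial_seed(saliency_map: np.ndarray) -> np.ndarray:
    """Rough initial salient object seed from an 8-bit saliency map."""
    # Histogram peak restricted to the high-saliency range (127, 255].
    hist = cv2.calcHist([saliency_map], [0], None, [256], [0, 256]).ravel()
    T = 128 + int(np.argmax(hist[128:]))              # assumed choice of T
    _, binary = cv2.threshold(saliency_map, T, 255, cv2.THRESH_BINARY)
    # Dilate before labeling to preserve salient details of the target object.
    binary = cv2.dilate(binary, np.ones((3, 3), np.uint8))
    # Keep the dominant (largest) connected component as the seed.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    if n <= 1:
        return np.zeros_like(binary)
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    return (labels == largest).astype(np.uint8)
```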

(2) Generate a skeleton based on local autocorrelation. The following is the local autocorrelation equation of a point (x, y) within a local window w centered on that point:

f(x,y) = Σ_(xk,yk)∈w [I(xk,yk) − I(xk+Δx, yk+Δy)]²,  (2.1)

where I(xk,yk) denotes the gradient of the point (xk,yk) within the 3×3 window w, and Δx and Δy denote the displacements in the x and y directions, respectively.

This formula can be approximated as:

f(x,y) ≈ Σ_(xk,yk)∈w (Ix(xk,yk)Δx + Iy(xk,yk)Δy)² = [Δx, Δy] M [Δx, Δy]ᵀ,  (2.2)

where

M = Σ_(xk,yk)∈w [ Ix²(xk,yk)           Ix(xk,yk)Iy(xk,yk) ]
                [ Ix(xk,yk)Iy(xk,yk)   Iy²(xk,yk)         ].  (2.3)

We compute the two eigenvalues of the 2×2 matrix M obtained at each point. The eigenvector corresponding to the smaller eigenvalue gives the major-axis direction of the ellipse (as shown in Figure 1), and this direction represents the extension direction of the point (the direction of motion). We convert the value computed at each point into the direction space [0, 180), so that each value represents both directions along one line. The value at each pixel of the resulting motion direction map corresponds to one such direction.
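A sketch of this orientation computation, under stated assumptions: the entries of M are accumulated over 3×3 windows with an unnormalized box filter, and the angle of the smaller eigenvalue's eigenvector is obtained with the standard closed form for 2×2 symmetric matrices (textbook material, not spelled out here):

```python
import cv2
import numpy as np

def orientation_map(gray: np.ndarray) -> np.ndarray:
    """Per-pixel extension direction in [0, 180) from local autocorrelation."""
    Ix = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    Iy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    # Entries of M summed over the 3x3 window w.
    Jxx = cv2.boxFilter(Ix * Ix, -1, (3, 3), normalize=False)
    Jyy = cv2.boxFilter(Iy * Iy, -1, (3, 3), normalize=False)
    Jxy = cv2.boxFilter(Ix * Iy, -1, (3, 3), normalize=False)
    # The larger eigenvalue's eigenvector has angle 0.5*atan2(2Jxy, Jxx-Jyy);
    # the smaller eigenvalue's eigenvector (the major axis here) is orthogonal.
    theta = 0.5 * np.degrees(np.arctan2(2.0 * Jxy, Jxx - Jyy)) + 90.0
    return np.mod(theta, 180.0)
```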

We normalize the motion direction map to four directions {0, 45, 90, 135} by evenly partitioning the direction space. The direction that occurs most frequently in the normalized map is taken as the background direction. The background direction is then removed, and the largest remaining connected region is taken as a supplementary object seed. Of course, the parts of the object seed that share the background direction are also lost; we search the two directions adjacent to the background direction along the background direction to repair the missing parts, and connect two neighboring related regions whenever they are spatially close to each other.

We make the following assumption: if the initial salient object seed obtained in step (1) consists of several disconnected regions, the result of step (1) cannot represent the whole skeleton of the target object. In that case, the skeleton based on local autocorrelation is used to refine the initial salient object seed into a better one. Because the skeleton obtained from local autocorrelation is somewhat inflated compared with the target object, a morphological erosion is applied to correct this. When the initial object seed consists of several disconnected regions, we fuse the skeleton based on local autocorrelation with the initial object seed as our final salient object seed; otherwise, the result of step (1) alone is taken as our final salient object seed. Before the next step, the holes in the salient object seed need to be filled, except for those holes that account for more than 20% of the whole target object (20% is a threshold we obtained through extensive experiments).

(3) Generate starting points for expansion on the edge of the salient object. Our initialization method produces a binary map Mb of the salient object seed (containing only the values 0 and 1). We propose a novel method to extend the salient object seed to the true boundary of the target object: every point on the boundary of the salient object seed is taken as a starting point of the expansion. The construction of the starting point set {Ps} is described below.

We construct a 3×3 convolution kernel Ns (as shown in Figure 2) and convolve the salient object seed map Mb with it. When the value of the center point p in the operation window is 1, Ns is used to compute a determination factor d:

d = { 1, n ≠ 0 and n ≠ 8; 0, n = 0 or n = 8 },  (3.1)

n = Σ_(pij∈w) pij · Nsij,  (3.2)

where pij is the binary value of the pixel at row i, column j of the salient object seed within the window w, and Nsij is the value at row i, column j of Ns. If n is not equal to 0 or 8 (the determination factor d is 1), the values of the points are not all the same, so we take this center point as a starting point on the edge of the salient object seed.
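A sketch of this starting-point test, assuming Ns is the all-ones 3×3 kernel with a zero center, so that n counts the eight neighbors with value 1; Figure 2 defines Ns in the original, so this concrete kernel is an assumption consistent with the n = 0 and n = 8 cases:

```python
import cv2
import numpy as np

def starting_points(Mb: np.ndarray) -> np.ndarray:
    """Starting points on the boundary of the binary seed map Mb (0/1)."""
    Ns = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]], dtype=np.float64)     # assumed form of Ns
    n = cv2.filter2D(Mb.astype(np.float64), -1, Ns,
                     borderType=cv2.BORDER_CONSTANT)
    # d = 1 when the center is foreground and its neighborhood is mixed,
    # i.e. n is neither 0 nor 8 (equation (3.1)).
    d = (Mb == 1) & (n != 0) & (n != 8)
    return d.astype(np.uint8)
```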

(4) Initialize an expansion direction for each starting point. Based on the starting point set {Ps}, the direction of each starting point is computed. We assume each starting point expands along its normal direction, so we need to compute this normal direction. We roughly divide the expansion directions into 8 types and label them with the eight digits 0 to 7, as shown in Figure 3. A convolution kernel Nd is constructed to determine the normal direction of each starting point; the label value ld of the direction is computed from:

Nd = [ 0 1 2 ]
     [ 7 0 3 ]
     [ 6 5 4 ],  (4.2)

where pij is the value of the binary map Mb at the pixel in row i, column j of the 3×3 convolution window w, p̄ij denotes the inverted pixel value (1 becomes 0 and 0 becomes 1), and p11 is the value of the pixel in row 1, column 1 of the window w. n1 is the number of points other than the center point satisfying {pij = 1} when p11 = 1, and n2 is the number of points other than the center point satisfying {pij = 1} when p11 = 0.
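The exact formula (4.1) for the label value ld did not survive extraction, so the sketch below is only one plausible reading, stated as an assumption: the outward normal at a starting point is estimated as the mean offset of its background neighbors and quantized to the 8 labels of Figure 3:

```python
import numpy as np

# The 8 direction labels of Figure 3, with (dx, dy) offsets ordered as the
# di set of equation (5.1): labels 0..7.
OFFSETS = [(-1, -1), (0, -1), (1, -1), (1, 0),
           (1, 1), (0, 1), (-1, 1), (-1, 0)]

def normal_label(Mb: np.ndarray, x: int, y: int) -> int:
    """Assumed reading of the direction rule: mean background-neighbor
    offset, quantized to the nearest of the 8 direction labels."""
    h, w = Mb.shape
    vx = vy = 0.0
    for dx, dy in OFFSETS:
        nx, ny = x + dx, y + dy
        if 0 <= nx < w and 0 <= ny < h and Mb[ny, nx] == 0:
            vx += dx                               # background neighbor
            vy += dy
    if vx == 0.0 and vy == 0.0:
        return -1                                  # interior point, no normal
    ang = np.degrees(np.arctan2(vy, vx))           # image coords, y downward
    return int(np.round((ang + 135.0) / 45.0)) % 8
```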

(5) Set the termination conditions for expansion and perform the expansion. Our method extends the salient object seed to the true boundary of the target object, so a boundary must be created to terminate the expansion. We use an adaptive-threshold Canny operator to detect the boundaries of the original image and define an expandable restricted region, obtained by dilating the salient object seed three times.

Within the restricted region, every boundary point can become a termination point of the expansion. According to the expansion directions shown in Figure 3, the termination conditions fall into two classes: {0, 2, 4, 6} and {1, 3, 5, 7}. Each class covers several cases.

When a starting point finds its termination point along its normal direction, the starting point is extended toward the corresponding termination point: we reassign the points on the extension line between the starting point (x, y) and the termination point (xe, ye). The set of points on the extension line can be expressed as:

{p(x+dixΔx, y+diyΔy) = 1 | 0 ≤ Δx ≤ |x−xe|, 0 ≤ Δy ≤ |y−ye|},  (5.1)

where p(x+dixΔx, y+diyΔy) is the binary value of a point on the extension line, indicating that the reassigned points on the extension line within the restricted region become points of our final target object. The value of di is taken from the set {(−1,−1), (0,−1), (1,−1), (1,0), (1,1), (0,1), (−1,1), (−1,0)}, corresponding to the 8 direction label values {0, 1, 2, 3, 4, 5, 6, 7} in Figure 3.
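A sketch of the expansion loop together with the marking of step (6), under stated assumptions: the "adaptive threshold" Canny is approximated by median-based thresholds (a common recipe, not necessarily the one used here), and "dilating three times" is read as three iterations of a 3×3 dilation:

```python
import cv2
import numpy as np

OFFSETS = [(-1, -1), (0, -1), (1, -1), (1, 0),
           (1, 1), (0, 1), (-1, 1), (-1, 0)]       # labels 0..7

def expand(seed, gray, starts, labels):
    """March each starting point along its normal inside the restricted
    region until a Canny edge is met, filling the extension line (5.1);
    points with no termination point are collected as degraded markers."""
    med = float(np.median(gray))
    edges = cv2.Canny(gray, int(0.66 * med), int(1.33 * med))  # assumed recipe
    limit = cv2.dilate(seed, np.ones((3, 3), np.uint8), iterations=3)
    out = seed.copy()
    h, w = seed.shape
    degraded = []                                  # marker point set Pl
    for (x, y), ld in zip(starts, labels):
        dx, dy = OFFSETS[ld]
        cx, cy, line, found = x, y, [], False
        while (0 <= cx + dx < w and 0 <= cy + dy < h
               and limit[cy + dy, cx + dx]):
            cx, cy = cx + dx, cy + dy
            line.append((cx, cy))
            if edges[cy, cx]:                      # termination point reached
                found = True
                break
        if found:
            for px, py in line:                    # reassign, equation (5.1)
                out[py, px] = 1
        else:
            degraded.append((x, y))                # step (6) marker point
    return out, degraded
```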

(6) Mark the starting points for which no termination point is found as degraded-region marker points. Under the assumption that boundaries in degraded regions must be non-salient, if a starting point has no termination point within the expansion-restricted region, we mark it as a point in a degraded region. This finally yields a marker point set Pl representing the degraded regions.

(7) Patch and smooth the expanded result. Similar to the last step of the seed generation process described in step (2), an operation is needed here to patch the holes in the expanded result. We sort the disconnected holes by area and reassign the points inside them as points of the target object, except for those holes whose area exceeds 20% of the whole object (this 20% threshold was obtained through extensive experiments). Finally, we use a Gaussian filter to smooth the rough boundaries in the result caused by errors.
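A sketch of this patch-and-smooth step, assuming holes are the background components that do not touch the image border and that a 5×5 Gaussian kernel stands in for the unspecified filter size:

```python
import cv2
import numpy as np

def patch_and_smooth(mask: np.ndarray) -> np.ndarray:
    """Fill holes smaller than 20% of the object, then smooth the boundary."""
    obj_area = int(mask.sum())
    inv = (mask == 0).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(inv)
    h, w = mask.shape
    out = mask.copy()
    for i in range(1, n):
        x, y, bw, bh, area = stats[i]
        touches_border = x == 0 or y == 0 or x + bw == w or y + bh == h
        if not touches_border and area <= 0.2 * obj_area:
            out[labels == i] = 1                   # reassign hole points
    # Gaussian smoothing of the rough boundary, then re-binarize.
    blurred = cv2.GaussianBlur(out.astype(np.float32), (5, 5), 0)
    return (blurred > 0.5).astype(np.uint8)
```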

It must be noted that the direction initialization method in our proposed "normal extension" is not suitable for obtaining the normal direction of a straight line; it only applies to computing normal directions on the boundaries of large regions. Moreover, occasional miscalculations in generating starting points, initializing directions, or evaluating termination conditions have little effect on our system, which is highly robust.

(8) Patch the object segmentation result with superpixels according to the degraded-region marker points. Based on the assumption that boundaries in the degraded regions of the image must be non-salient, we use superpixels obtained with the simple linear iterative clustering (SLIC) superpixel segmentation method to patch the lost degraded regions. Considering that a salient object seed fused with the local autocorrelation skeleton can be larger than the true target object, we do not perform this superpixel-based object fusion procedure when the skeleton based on local autocorrelation has been fused into the salient object seed;

Objf = { Obje + Σ_(Si∈S, nli>5) Si, a = 0; Obje, a = 1 },  (8.1)

where Objf denotes the final target object and Obje denotes the result obtained after expansion. The parameter a = 0 indicates that the skeleton based on local autocorrelation was not applied to the salient object seed, and nli denotes the number of marker points in superpixel Si; these marker points are obtained in step (6) and belong to the set Pl.
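A sketch of rule (8.1), using scikit-image's slic as a stand-in for the simple linear iterative clustering method; the library choice and the n_segments and compactness values are assumptions:

```python
import numpy as np
from skimage.segmentation import slic

def patch_with_superpixels(obj_e, image, markers, a):
    """Equation (8.1): when a == 0, add every superpixel containing more
    than 5 marker points from the set Pl; when a == 1, keep Obj_e as is."""
    if a == 1:                                     # skeleton already fused
        return obj_e
    segments = slic(image, n_segments=300, compactness=10)  # assumed params
    out = obj_e.copy()
    counts = {}                                    # marker points per superpixel
    for x, y in markers:                           # markers = set Pl
        counts[segments[y, x]] = counts.get(segments[y, x], 0) + 1
    for label, nli in counts.items():
        if nli > 5:                                # threshold from (8.1)
            out[segments == label] = 1
    return out
```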

In the saliency-based object segmentation method for severely degraded images of this embodiment, a saliency map is obtained from the input image with the soft image abstraction method and thresholded, and the main connected component is extracted as a rough salient object seed; the motion direction of each pixel is computed with local autocorrelation, as shown in Figure 1, and the skeleton based on local autocorrelation is obtained after filtering out the background; whether to fuse the local autocorrelation skeleton into the final salient object seed is decided according to the quality of the rough seed; starting points for expansion are found from the salient object seed, as shown in Figure 2; the normal direction of each starting point is computed as its expansion direction, with the 8 possible directions shown in Figure 3; a termination point is sought for each starting point, starting points that find one are connected to it with a line, and starting points that do not are marked as points in degraded regions; the expanded result is patched and smoothed; finally, the object segmentation result is patched with superpixels according to the degraded-region marker points.

Claims (1)

1. A saliency-based object segmentation method suitable for severely degraded images, characterized in that the saliency-based object segmentation method comprises the following steps:
(1) Generate initial salient object seeds from the saliency map
Find a histogram peak within the high-saliency range of the histogram of the saliency map obtained with the soft image abstraction method, the high-saliency range being chosen as (127, 255]; threshold the saliency map with a threshold T to obtain a binarized segmentation result; label the connected components one by one, applying a morphological dilation before labeling to preserve more salient details of the target object, and extract the dominant region among the labeled disconnected regions as the initial salient object seed;
(2) Generate a skeleton based on local autocorrelation
The local autocorrelation equation of a point (x, y) within a local window w centered on that point is:

f(x,y) = Σ_(xk,yk)∈w [I(xk,yk) − I(xk+Δx, yk+Δy)]²,  (2.1)

where I(xk,yk) denotes the gradient of the point (xk,yk) within the window w, and Δx and Δy denote the displacements in the x and y directions, respectively;

Equation (2.1) is approximated as:

f(x,y) ≈ Σ_(xk,yk)∈w (Ix(xk,yk)Δx + Iy(xk,yk)Δy)² = [Δx, Δy] M [Δx, Δy]ᵀ,  (2.2)

where

M = Σ_(xk,yk)∈w [ Ix²(xk,yk)           Ix(xk,yk)Iy(xk,yk) ]
                [ Ix(xk,yk)Iy(xk,yk)   Iy²(xk,yk)         ];  (2.3)
Compute the two eigenvalues of the matrix M obtained at each point; the eigenvector corresponding to the smaller eigenvalue gives the major-axis direction of the ellipse, which can be expressed as the extension direction of the point; convert the value computed at each point into the direction space [0, 180), so that each value represents both directions along one line; the value at each pixel of the resulting motion direction map corresponds to one such direction;
Normalize the motion direction map to four directions {0, 45, 90, 135} by evenly partitioning the direction space; the direction that occurs most frequently in the normalized map is taken as the background direction; the background direction is then removed, and the largest remaining connected region is taken as the supplementary object seed; search the two directions adjacent to the background direction along the background direction to repair the lost parts, and connect two neighboring related regions whenever they are spatially close to each other;
If the initial salient object seed obtained in step (1) consists of several disconnected regions, the result of step (1) cannot represent the whole skeleton of the target object; in that case, the skeleton based on local autocorrelation is used to refine the initial salient object seed into a better salient object seed; considering that the skeleton obtained from local autocorrelation is somewhat inflated compared with the target object, a morphological erosion is applied to correct this; when the initial salient object seed consists of several disconnected regions, fuse the skeleton based on local autocorrelation with the initial salient object seed as the final salient object seed; otherwise, take the result of step (1) alone as the final salient object seed;
Holes in the final salient object seed need to be filled, except for those holes whose area exceeds the threshold of one fifth of the whole target object;
(3) Generate starting points for expansion on the edge of the final salient object seed
Every point on the boundary of the final salient object seed is taken as a starting point of the expansion method; a 3×3 convolution kernel Ns is constructed and convolved with the binary map Mb of the final salient object seed; when the value of the center point p in the operation window is 1, Ns is used to compute a determination factor d:

d = { 1, n ≠ 0 and n ≠ 8; 0, n = 0 or n = 8 },  (3.1)

n = Σ_(pij∈w) pij · Nsij,  (3.2)

where pij is the binary value of the pixel at row i, column j of the final salient object seed within the window w, and Nsij is the value at row i, column j of Ns; if n is not equal to 0 or 8, the determination factor d is 1, indicating that the values of the points are not all the same, and the center point of the convolution kernel is taken as one of the starting points on the edge of the final salient object seed;
(4) Initialize an expansion direction for each starting point
Based on the starting point set {Ps}, the direction of each starting point is computed; each starting point is assumed to expand along its normal direction, so this normal direction must be computed; the expansion directions are divided into 8 types and labeled with the eight digits 0 to 7; a convolution kernel Nd is constructed to determine the normal direction of each starting point, and the label value ld of the direction is computed from:

Nd = [ 0 1 2 ]
     [ 7 0 3 ]
     [ 6 5 4 ],  (4.2)

where pij is the value of the binary map Mb at the pixel in row i, column j of the convolution window w, p̄ij denotes the inverted pixel value (1 becomes 0 and 0 becomes 1), p11 is the value of the pixel in row 1, column 1 of the window w, n1 is the number of points other than the center point satisfying {pij = 1} when p11 = 1, and n2 is the number of points other than the center point satisfying {pij = 1} when p11 = 0;
(5) Set the termination conditions for expansion and perform the expansion
An adaptive-threshold Canny operator is used to detect the boundaries of the original image, and an expandable restricted region is defined; this restricted region is obtained by dilating the final salient object seed three times;
Within the restricted region, every boundary point can become a termination point of the expansion; according to the expansion direction, the termination conditions fall into two classes, {0, 2, 4, 6} and {1, 3, 5, 7}, each of which covers several cases;
When a starting point finds its termination point along its normal direction, the starting point is extended toward the corresponding termination point, and the points on the extension line between the starting point (x, y) and the termination point (xe, ye) are reassigned; the set of points on the extension line is expressed as:

{p(x+dixΔx, y+diyΔy) = 1 | 0 ≤ Δx ≤ |x−xe|, 0 ≤ Δy ≤ |y−ye|},  (5.1)

where p(x+dixΔx, y+diyΔy) is the binary value of a point on the extension line, indicating that the reassigned points on the extension line within the restricted region become points of the final target object; the value of di is taken from the set {(−1,−1), (0,−1), (1,−1), (1,0), (1,1), (0,1), (−1,1), (−1,0)}, corresponding to the 8 direction label values {0, 1, 2, 3, 4, 5, 6, 7};
(6) Mark the starting points for which no termination point is found as degraded-region marker points
If a starting point has no termination point within the expansion-restricted region, mark it as a point in a degraded region, obtaining a marker point set Pl that represents the degraded regions;
(7) Patch and smooth the expanded result
Sort the disconnected holes by area and reassign the points inside them as points of the target object, except for those holes whose area exceeds 20% of the whole object; finally, apply a Gaussian filter to smooth the rough boundaries in the result caused by errors;
(8) Patch the object segmentation result with superpixels according to the degraded-region marker points
Superpixels obtained with the simple linear iterative clustering superpixel segmentation method are used to patch the lost degraded regions; when the skeleton based on local autocorrelation has been fused into the final salient object seed, the superpixel-based object fusion process is not performed;

Objf = { Obje + Σ_(Si∈S, nli>5) Si, a = 0; Obje, a = 1 },  (8.1)

where Objf denotes the final target object and Obje denotes the result obtained after expansion; the parameter a = 0 indicates that the skeleton based on local autocorrelation was not applied to the final salient object seed, and nli denotes the number of marker points in superpixel Si; these marker points are obtained in step (6) and belong to the set Pl.
CN201510069617.3A 2015-02-10 2015-02-10 A saliency-based object segmentation method suitable for severely degraded images Active CN104915946B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510069617.3A CN104915946B (en) 2015-02-10 2015-02-10 A saliency-based object segmentation method suitable for severely degraded images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510069617.3A CN104915946B (en) 2015-02-10 2015-02-10 A saliency-based object segmentation method suitable for severely degraded images

Publications (2)

Publication Number Publication Date
CN104915946A CN104915946A (en) 2015-09-16
CN104915946B true CN104915946B (en) 2017-10-13

Family

ID=54084986

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510069617.3A Active CN104915946B (en) 2015-02-10 2015-02-10 A saliency-based object segmentation method suitable for severely degraded images

Country Status (1)

Country Link
CN (1) CN104915946B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106353032B (en) * 2015-10-10 2019-03-29 北京控制与电子技术研究所 A kind of celestial body centroid rapid detection method under deficient illumination condition
CN106447681B (en) * 2016-07-26 2019-01-29 浙江工业大学 A kind of object segmentation methods of non-uniform severe motion degraded image
CN106295509B (en) * 2016-07-27 2019-11-08 浙江工业大学 A Structured Tracking Method for Objects in Non-Uniformly Degraded Videos
CN106658004B (en) * 2016-11-24 2019-05-17 浙江大学 A kind of compression method and device based on image flat site feature
CN109242877B (en) * 2018-09-21 2021-09-21 新疆大学 Image segmentation method and device
CN109544554B (en) * 2018-10-18 2020-01-31 中国科学院空间应用工程与技术中心 A method and system for plant image segmentation and leaf skeleton extraction
CN115250939B (en) * 2022-06-14 2024-01-05 新瑞鹏宠物医疗集团有限公司 Pet hamper anti-misfeeding method and device, electronic equipment and storage medium
CN115375685B (en) * 2022-10-25 2023-03-24 临沂天元混凝土工程有限公司 Method for detecting sand and stone particle size abnormity in concrete raw material

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020965A (en) * 2012-11-29 2013-04-03 奇瑞汽车股份有限公司 Foreground segmentation method based on significance detection
CN103955934A (en) * 2014-05-06 2014-07-30 北京大学 Image blurring detecting algorithm combined with image obviousness region segmentation
CN104034732A (en) * 2014-06-17 2014-09-10 西安工程大学 Fabric defect detection method based on vision task drive

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8989437B2 (en) * 2011-05-16 2015-03-24 Microsoft Corporation Salient object detection by composition

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020965A (en) * 2012-11-29 2013-04-03 奇瑞汽车股份有限公司 Foreground segmentation method based on significance detection
CN103955934A (en) * 2014-05-06 2014-07-30 北京大学 Image blurring detecting algorithm combined with image obviousness region segmentation
CN104034732A (en) * 2014-06-17 2014-09-10 西安工程大学 Fabric defect detection method based on vision task drive

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Image Partial Blur Detection and Classification; Renting Liu et al.; Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on; 2008-06-28; pp. 1-8 *
MRI brain tissue image segmentation method fusing fuzzy connectedness graphs and region growing; Wu Jian; Science Technology and Engineering; 2013-02-28; Vol. 13, No. 5; pp. 1135-1140 *

Also Published As

Publication number Publication date
CN104915946A (en) 2015-09-16

Similar Documents

Publication Publication Date Title
CN104915946B (en) A saliency-based object segmentation method suitable for severely degraded images
CN114782691B (en) Robot target recognition and motion detection method, storage medium and device based on deep learning
Bleyer et al. Surface stereo with soft segmentation
Revaud et al. Epicflow: Edge-preserving interpolation of correspondences for optical flow
WO2022121031A1 (en) Finger vein image restoration method based on partial convolution and mask updating
CN109448015B (en) A collaborative image segmentation method based on saliency map fusion
CN104616247B (en) A kind of method for map splicing of being taken photo by plane based on super-pixel SIFT
Haines et al. Recognising planes in a single image
CN105741265A (en) Depth image processing method and depth image processing device
CN106895794B (en) A kind of method and device obtaining laser beam scan path
CN107545223A (en) Image-recognizing method and electronic equipment
Shi et al. Self-supervised shape alignment for sports field registration
CN109325434A (en) A Multi-feature Probabilistic Topic Model for Image Scene Classification
CN108710883A (en) A kind of complete conspicuousness object detecting method using contour detecting
Ladický et al. Learning the matching function
Yuan et al. Structure flow-guided network for real depth super-resolution
Song et al. Building extraction from high resolution color imagery based on edge flow driven active contour and JSEG
CN113724143A (en) Method and device for image restoration
Fangfang et al. Real-time lane detection for intelligent vehicles based on monocular vision
CN114943823B (en) Unmanned aerial vehicle image splicing method and system based on deep learning semantic perception
Selinger et al. Improving appearance-based object recognition in cluttered backgrounds
JP2005241886A (en) Extraction method of changed area between geographical images, program for extracting changed area between geographical images, closed area extraction method and program for extracting closed area
CN110599517A (en) Target feature description method based on local feature and global HSV feature combination
Zhang et al. Color-constrained dehazing model
Forsyth et al. Recognizing objects using color-annotated adjacency graphs

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20200611

Address after: Room 1504-2, Dikai International Center, Jianggan District, Hangzhou, Zhejiang Province

Patentee after: HANGZHOU SHISHANG TECHNOLOGY Co.,Ltd.

Address before: The city Zhaohui six districts Chao Wang Road Hangzhou city Zhejiang province Zhejiang University of Technology No. 18 310014

Patentee before: ZHEJIANG University OF TECHNOLOGY