CN105678797A - Image segmentation method based on visual saliency model - Google Patents


Info

Publication number
CN105678797A
Authority
CN
China
Prior art keywords
pixel
super
image
boundary
segmentation
Prior art date
Legal status
Pending
Application number
CN201610123858.6A
Other languages
Chinese (zh)
Inventor
胡海峰
曹向前
潘瑜
张伟
肖翔
Current Assignee
Sun Yat Sen University
SYSU CMU Shunde International Joint Research Institute
Original Assignee
SYSU CMU Shunde International Joint Research Institute
Priority date
Filing date
Publication date
Application filed by SYSU CMU Shunde International Joint Research Institute filed Critical SYSU CMU Shunde International Joint Research Institute
Priority to CN201610123858.6A priority Critical patent/CN105678797A/en
Publication of CN105678797A publication Critical patent/CN105678797A/en


Landscapes

  • Image Analysis (AREA)

Abstract

The present invention provides an image segmentation method based on a visual saliency model. The method first performs background detection on the image to obtain its boundary connectivity values; it then obtains a saliency map of the image using the Superpixel Contrast (SC) method based on Hexagonal Simple Linear Iterative Clustering (HSLIC); finally, it uses the boundary connectivity values and the saliency values of the saliency map as the input to the region term of a graph cut method, segments the image automatically, and outputs the salient-region segmentation result of the image.

Description

Image Segmentation Method Based on Visual Saliency Model

Technical Field

The present invention relates to the field of image processing, and more specifically, to an image segmentation method based on a visual saliency model.

Background Art

Salient-region detection is a popular research direction in image processing. The salient region of an image, that is, the region that most attracts visual attention, usually contains most of the information in the image, so it has a wide range of applications. It can be used in object recognition, image segmentation, adaptive compression, image retrieval, and other fields, and an effective salient-region detection method is of great help to the development of these fields. Existing salient-region detection methods are numerous and fall roughly into two directions: methods based on local contrast and methods based on global contrast. In local-contrast methods, the saliency value of each pixel is determined by its contrast with some of its surrounding pixels; in global-contrast methods, it is determined by its contrast with all pixels in the image. One effective saliency detection approach is saliency optimization based on robust background detection, which defines a boundary connectivity value on the image that effectively distinguishes background regions from foreground regions; image saliency optimization built on this value yields good saliency maps. In addition, a commonly used image segmentation method is the graph cut method. It adopts the max-flow/min-cut idea from graph theory: with a source node S and a sink node T, the region term is converted into the weight from S or T to each pixel, and the edge term into the weights between pixels. Solving the max-flow/min-cut problem divides the image into foreground and background regions.
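The max-flow/min-cut construction described above can be illustrated with a small self-contained sketch (this is not the patent's implementation; the superpixel names, toy capacities, and the Edmonds-Karp solver are illustrative assumptions):

```python
from collections import deque

def add_edge(cap, u, v, w, undirected=False):
    """Register an edge u->v with capacity w (and v->u if undirected)."""
    cap.setdefault(u, {}).setdefault(v, 0)
    cap.setdefault(v, {}).setdefault(u, 0)  # reverse entry for residuals
    cap[u][v] += w
    if undirected:
        cap[v][u] += w

def min_cut(cap, s, t):
    """Edmonds-Karp max-flow; returns (cut value, source side of the cut)."""
    flow = {u: {v: 0 for v in nbrs} for u, nbrs in cap.items()}
    total = 0
    while True:
        parent = {s: None}                       # BFS for an augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in cap[u]:
                if v not in parent and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        path, v = [], t                          # walk back to the source
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] - flow[u][v] for u, v in path)
        for u, v in path:
            flow[u][v] += aug
            flow[v][u] -= aug
        total += aug
    reach, q = {s}, deque([s])                   # residual reachability = cut
    while q:
        u = q.popleft()
        for v in cap[u]:
            if v not in reach and cap[u][v] - flow[u][v] > 0:
                reach.add(v)
                q.append(v)
    return total, reach

# Toy graph: S-links carry foreground likelihood (region term), T-links
# background likelihood, undirected neighbor edges carry the edge term.
cap = {}
for p, (fg, bg) in {'p1': (9, 1), 'p2': (8, 2),
                    'p3': (1, 9), 'p4': (2, 8)}.items():
    add_edge(cap, 'S', p, fg)
    add_edge(cap, p, 'T', bg)
for u, v, w in [('p1', 'p2', 3), ('p2', 'p3', 1),
                ('p3', 'p4', 3), ('p1', 'p3', 1)]:
    add_edge(cap, u, v, w, undirected=True)
cut, fg_side = min_cut(cap, 'S', 'T')
```

In this toy graph the minimum cut separates p1 and p2 (high foreground weight, source side) from p3 and p4, exactly the foreground/background partition the prose describes.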

However, the saliency optimization method based on robust background detection does not perform well in preserving the completeness and boundaries of the salient regions of an image, and most graph cut methods require manual user input: foreground and background are determined preliminarily through the subjective judgment and prior knowledge of the human eye. Such methods are therefore not flexible enough and are easily affected by the user's subjective judgment.

Summary of the Invention

The present invention provides an image segmentation method based on a visual saliency model, which uses the boundary connectivity values of the image and the saliency values of the saliency map as the input to the region term of the graph cut method, performs image segmentation, and finally outputs the salient-region segmentation result of the image.

To achieve the above technical effect, the technical solution of the present invention is as follows:

An image segmentation method based on a visual saliency model comprises the following steps:

S1: Perform superpixel segmentation on image A; obtain the geodesic distances and generation areas of the superpixels of image A, and obtain their boundary lengths and boundary connectivity values;

S2: Perform superpixel segmentation on image A using Hexagonal Simple Linear Iterative Clustering (HSLIC), and perform global saliency detection on the segmented image using the Superpixel Contrast (SC) method to obtain the saliency values of the saliency map of image A;

S3: Use the boundary connectivity values obtained in S1 and the saliency values obtained in S2 as the region term for image segmentation, and output the salient-region segmentation result of the image.

Further, the specific process of step S1 is as follows:

S11: After image A is segmented into superpixels, compute the geodesic distance of each superpixel;

S12: Compute the generation area of each superpixel from the obtained geodesic distances;

S13: Compute the boundary length of each superpixel from the obtained generation area;

S14: Compute the boundary connectivity value of each superpixel from the obtained generation area and boundary length.

Further, the specific process of performing superpixel segmentation on image A in step S11 is:

Perform Simple Linear Iterative Clustering (SLIC) segmentation on image A, and record the label of each superpixel, the superpixel to which each pixel belongs, the superpixel adjacency matrix, and the superpixels on the image boundary for later use.
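The bookkeeping of this step can be sketched from a toy label map (the `superpixel_records` helper below is a hypothetical stand-in for the output of a real SLIC run):

```python
import numpy as np

def superpixel_records(labels):
    """From a superpixel label map (H x W, labels 0..N-1), build the records
    the method keeps after segmentation: the superpixel adjacency matrix and
    the set of superpixels touching the image boundary."""
    n = labels.max() + 1
    adj = np.zeros((n, n), dtype=bool)
    # 4-connected pixel pairs with different labels mark adjacent superpixels
    for a, b in [(labels[:, :-1], labels[:, 1:]),
                 (labels[:-1, :], labels[1:, :])]:
        mask = a != b
        adj[a[mask], b[mask]] = True
        adj[b[mask], a[mask]] = True
    # superpixels whose pixels lie on the first/last row or column
    boundary = np.unique(np.concatenate([
        labels[0, :], labels[-1, :], labels[:, 0], labels[:, -1]]))
    return adj, boundary
```

On a 4x4 map split into four 2x2 superpixels, the helper reports the four side-sharing adjacencies (no diagonal ones) and marks all four superpixels as boundary superpixels.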

Further, the specific process of step S11 is as follows:

S111: Convert the color space of the segmented image from RGB to Lab;

S112: According to the superpixel adjacency matrix, compute the Euclidean distance in Lab space between all adjacent superpixels (p_i, p_{i+1}):

$$d_{app}(p_i, p_{i+1}) = \sqrt{(l_i - l_{i+1})^2 + (a_i - a_{i+1})^2 + (b_i - b_{i+1})^2}$$

where i ranges from 1 to N-1, N is the number of superpixels in the image, p_i denotes the i-th superpixel and p_{i+1} the (i+1)-th superpixel, l_i, a_i, b_i are the three Lab color-space components of the i-th superpixel, and l_{i+1}, a_{i+1}, b_{i+1} are those of the (i+1)-th superpixel;

S113: The geodesic distance d_geo(p_i, p_j) between any two superpixels is the length of a shortest path from superpixel p_i to superpixel p_j:

$$d_{geo}(p_i, p_j) = \min_{p_1 = p_i,\, p_2, \ldots,\, p_n = p_j} \sum_{k=1}^{n-1} d_{app}(p_k, p_{k+1})$$

where p_1, p_2, …, p_n are superpixels of the segmented image, i and j range from 1 to N, k ranges from 1 to n-1, n is the number of superpixels traversed on the path from p_i to p_j, and min denotes the minimum over all such paths. When i = j, d_geo(p_i, p_j) = 0: the geodesic distance from a superpixel to itself is 0.
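Since d_geo is a single-source shortest-path distance over the superpixel adjacency graph with d_app as the edge weight, it can be computed with Dijkstra's algorithm. A minimal sketch (the edge dictionary and weights below are toy values, not real Lab distances):

```python
import heapq
import math

def geodesic_distances(src, n, edges):
    """Dijkstra over the superpixel adjacency graph: edges[(i, j)] holds
    d_app between adjacent superpixels; returns d_geo from src to all."""
    adj = {i: [] for i in range(n)}
    for (i, j), w in edges.items():
        adj[i].append((j, w))
        adj[j].append((i, w))          # adjacency is symmetric
    dist = [math.inf] * n
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                    # stale heap entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist
```

Running all N sources gives the full pairwise d_geo matrix used by the later steps; dist[src] is 0, matching the i = j case in the text.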

Further, the specific process of step S12 is as follows:

The generation area of superpixel p_i represents a soft region of the region to which p_i belongs; it describes the contribution of every other superpixel p_j to the region containing p_i. The generation area Area(p_i) of superpixel p_i is:

$$Area(p_i) = \sum_{j=1}^{N} \exp\left(-\frac{d_{geo}^2(p_i, p_j)}{2\sigma_{clr}^2}\right) = \sum_{j=1}^{N} S(p_i, p_j)$$

where exp denotes the exponential function, i and j range from 1 to N, N is the number of superpixels in the image, σ_clr is a parameter adjusting how strongly superpixel p_j influences the area of p_i, σ_clr = 10, and S(p_i, p_j) denotes the influence of p_j on the area of p_i: the smaller the geodesic distance between p_i and p_j, the larger the contribution of p_j to the area of p_i.

Further, the specific process of step S13 is as follows:

The boundary length Len_bnd(p_i) of superpixel p_i describes the contribution of the superpixels on the image boundary to the area of p_i, and is defined as:

$$Len_{bnd}(p_i) = \sum_{j=1}^{N} S(p_i, p_j) \cdot \delta(p_j \in Bnd)$$

where Bnd is the set of superpixels on the image boundary; δ(p_j ∈ Bnd) is 1 for superpixels on the image boundary and 0 otherwise.

Further, the specific process of step S14 is as follows:

The boundary connectivity value of superpixel p_i describes how likely p_i is to belong to the boundary of the image; it is a function of the superpixel's boundary length and generation area:

$$BndCon(p_i) = \frac{Len_{bnd}(p_i)}{Area(p_i)}.$$
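The three quantities of steps S12-S14 follow mechanically once the pairwise geodesic-distance matrix is available; a sketch (the helper name is hypothetical, and the inputs are assumed to be the full N x N matrix d_geo plus the boundary index set):

```python
import numpy as np

def boundary_connectivity(dgeo, boundary, sigma_clr=10.0):
    """Compute Area, Len_bnd and BndCon for every superpixel from the
    pairwise geodesic-distance matrix dgeo (N x N) and the index set of
    boundary superpixels; sigma_clr = 10 as in the text."""
    S = np.exp(-dgeo ** 2 / (2 * sigma_clr ** 2))   # S(p_i, p_j)
    area = S.sum(axis=1)                            # Area(p_i)
    is_bnd = np.zeros(dgeo.shape[0])
    is_bnd[boundary] = 1.0                          # delta(p_j in Bnd)
    len_bnd = S @ is_bnd                            # Len_bnd(p_i)
    return area, len_bnd, len_bnd / area            # BndCon(p_i)
```

As a sanity check, with an all-zero distance matrix every superpixel contributes 1 to every area, so Area(p_i) = N and BndCon(p_i) = |Bnd| / N.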

Further, the specific process of step S3 is as follows:

Following the ideas of graph theory, each superpixel is regarded as a node of a graph, with source node S and sink node T; the region term is converted into the weight from S or T to each superpixel, and the edge term into the weights between superpixels. Solving the max-flow/min-cut problem divides the image into foreground and background regions. With the edge term unchanged, the image boundary connectivity values obtained in step S1 and the saliency values of the saliency map obtained in step S2 are used as the weight input of the region term, the image is segmented automatically, and the salient-region segmentation result is obtained,

where the weight of the region term is:

$$weight(p_i) = w \cdot BndCon(p_i) + (1 - w) \cdot \exp\left(-\frac{S^2(p_i)}{2\sigma^2}\right)$$

where w and σ are two tuning parameters, w, σ ∈ [0.3, 0.6], S(p_i) is the saliency value of superpixel p_i obtained in step S2, and both BndCon(p_i) and S(p_i) are normalized to [0, 1].
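The region-term weight can be computed directly from the normalized BndCon and saliency vectors; a sketch (the specific w = σ = 0.45 are illustrative picks from the stated [0.3, 0.6] range):

```python
import numpy as np

def region_weights(bndcon, sal, w=0.45, sigma=0.45):
    """Region-term weights combining boundary connectivity (background cue)
    and saliency (foreground cue); w and sigma lie in [0.3, 0.6] per the
    text, and both inputs are normalized to [0, 1] before mixing."""
    bndcon = bndcon / bndcon.max()
    sal = sal / sal.max()
    return w * bndcon + (1 - w) * np.exp(-sal ** 2 / (2 * sigma ** 2))
```

A superpixel with low boundary connectivity and high saliency gets a small weight (cheap to keep in the foreground), while a high-connectivity, low-saliency superpixel gets a large one, which is exactly the bias the region term needs.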

Compared with the prior art, the beneficial effects of the technical solution of the present invention are:

The present invention first performs background detection on the image to obtain its boundary connectivity values, then obtains the saliency map of the image using the Superpixel Contrast (SC) method based on Hexagonal Simple Linear Iterative Clustering (HSLIC), and finally uses the boundary connectivity values and the saliency values of the saliency map as the input to the region term of the graph cut method, segments the image automatically, and outputs the salient-region segmentation result of the image.

Brief Description of the Drawings

Fig. 1 is the flow chart of the present invention.

Detailed Description

The accompanying drawings are for illustration only and shall not be construed as limiting this patent;

To better illustrate this embodiment, some parts in the drawings may be omitted, enlarged, or reduced; they do not represent the dimensions of the actual product;

Those skilled in the art will understand that certain well-known structures and their descriptions may be omitted from the drawings.

The technical solution of the present invention is further described below with reference to the accompanying drawings and embodiments.

Embodiment 1

An image segmentation method based on a visual saliency model comprises the following steps:

S1: Perform superpixel segmentation on image A; obtain the geodesic distances and generation areas of the superpixels of image A, and obtain their boundary lengths and boundary connectivity values;

S2: Perform superpixel segmentation on image A using Hexagonal Simple Linear Iterative Clustering (HSLIC), and perform global saliency detection on the segmented image using the Superpixel Contrast (SC) method to obtain the saliency values of the saliency map of image A;

S3: Use the boundary connectivity values obtained in S1 and the saliency values obtained in S2 as the region term for image segmentation, and output the salient-region segmentation result of the image.

Further, the specific process of step S1 is as follows:

S11: After image A is segmented into superpixels, compute the geodesic distance of each superpixel;

S12: Compute the generation area of each superpixel from the obtained geodesic distances;

S13: Compute the boundary length of each superpixel from the obtained generation area;

S14: Compute the boundary connectivity value of each superpixel from the obtained generation area and boundary length.

The specific process of performing superpixel segmentation on image A in step S11 is:

Perform Simple Linear Iterative Clustering (SLIC) segmentation on image A, and record the label of each superpixel, the superpixel to which each pixel belongs, the superpixel adjacency matrix, and the superpixels on the image boundary for later use.

The specific process of step S11 is as follows:

S111: Convert the color space of the segmented image from RGB to Lab;

S112: According to the superpixel adjacency matrix, compute the Euclidean distance in Lab space between all adjacent superpixels (p_i, p_{i+1}):

$$d_{app}(p_i, p_{i+1}) = \sqrt{(l_i - l_{i+1})^2 + (a_i - a_{i+1})^2 + (b_i - b_{i+1})^2}$$

where i ranges from 1 to N-1, N is the number of superpixels in the image, p_i denotes the i-th superpixel and p_{i+1} the (i+1)-th superpixel, l_i, a_i, b_i are the three Lab color-space components of the i-th superpixel, and l_{i+1}, a_{i+1}, b_{i+1} are those of the (i+1)-th superpixel;

S113: The geodesic distance d_geo(p_i, p_j) between any two superpixels is the length of a shortest path from superpixel p_i to superpixel p_j:

$$d_{geo}(p_i, p_j) = \min_{p_1 = p_i,\, p_2, \ldots,\, p_n = p_j} \sum_{k=1}^{n-1} d_{app}(p_k, p_{k+1})$$

where p_1, p_2, …, p_n are superpixels of the segmented image, i and j range from 1 to N, k ranges from 1 to n-1, n is the number of superpixels traversed on the path from p_i to p_j, and min denotes the minimum over all such paths. When i = j, d_geo(p_i, p_j) = 0: the geodesic distance from a superpixel to itself is 0.

The specific process of step S12 is as follows:

The generation area of superpixel p_i represents a soft region of the region to which p_i belongs; it describes the contribution of every other superpixel p_j to the region containing p_i. The generation area Area(p_i) of superpixel p_i is:

$$Area(p_i) = \sum_{j=1}^{N} \exp\left(-\frac{d_{geo}^2(p_i, p_j)}{2\sigma_{clr}^2}\right) = \sum_{j=1}^{N} S(p_i, p_j)$$

where exp denotes the exponential function, i and j range from 1 to N, N is the number of superpixels in the image, σ_clr is a parameter adjusting how strongly superpixel p_j influences the area of p_i, σ_clr = 10, and S(p_i, p_j) denotes the influence of p_j on the area of p_i: the smaller the geodesic distance between p_i and p_j, the larger the contribution of p_j to the area of p_i.

The specific process of step S13 is as follows:

The boundary length Len_bnd(p_i) of superpixel p_i describes the contribution of the superpixels on the image boundary to the area of p_i, and is defined as:

$$Len_{bnd}(p_i) = \sum_{j=1}^{N} S(p_i, p_j) \cdot \delta(p_j \in Bnd)$$

where Bnd is the set of superpixels on the image boundary; δ(p_j ∈ Bnd) is 1 for superpixels on the image boundary and 0 otherwise.

The specific process of step S14 is as follows:

The boundary connectivity value of superpixel p_i describes how likely p_i is to belong to the boundary of the image; it is a function of the superpixel's boundary length and generation area:

$$BndCon(p_i) = \frac{Len_{bnd}(p_i)}{Area(p_i)}.$$

The specific process of step S3 is as follows:

Following the ideas of graph theory, each superpixel is regarded as a node of a graph, with source node S and sink node T; the region term is converted into the weight from S or T to each superpixel, and the edge term into the weights between superpixels. Solving the max-flow/min-cut problem divides the image into foreground and background regions. With the edge term unchanged, the image boundary connectivity values obtained in step S1 and the saliency values of the saliency map obtained in step S2 are used as the weight input of the region term, the image is segmented automatically, and the salient-region segmentation result is obtained,

where the weight of the region term is:

$$weight(p_i) = w \cdot BndCon(p_i) + (1 - w) \cdot \exp\left(-\frac{S^2(p_i)}{2\sigma^2}\right)$$

where w and σ are two tuning parameters, w, σ ∈ [0.3, 0.6], S(p_i) is the saliency value of superpixel p_i obtained in step S2, and both BndCon(p_i) and S(p_i) are normalized to [0, 1].

The same or similar reference numerals correspond to the same or similar components;

The positional relationships described in the drawings are for illustration only and shall not be construed as limiting this patent;

Apparently, the above embodiments of the present invention are merely examples given for clearly illustrating the present invention, and are not intended to limit its implementation. Those of ordinary skill in the art may make changes or variations in other forms on the basis of the above description. It is neither necessary nor possible to enumerate all implementations here. Any modification, equivalent replacement, and improvement made within the spirit and principles of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (8)

1. An image segmentation method based on a visual saliency model, characterized by comprising the following steps:
S1: performing superpixel segmentation on image A, obtaining the geodesic distances and generation areas of the superpixels of image A, and obtaining their boundary lengths and boundary connectivity values;
S2: performing superpixel segmentation on image A using Hexagonal Simple Linear Iterative Clustering (HSLIC), and performing global saliency detection on the segmented image using the Superpixel Contrast (SC) method to obtain the saliency values of the saliency map of image A;
S3: using the boundary connectivity values obtained in S1 and the saliency values obtained in S2 as the region term for image segmentation, and outputting the salient-region segmentation result of the image.
2. The image segmentation method based on a visual saliency model according to claim 1, characterized in that the specific process of step S1 is as follows:
S11: after image A is segmented into superpixels, computing the geodesic distance of each superpixel;
S12: computing the generation area of each superpixel from the obtained geodesic distances;
S13: computing the boundary length of each superpixel from the obtained generation area;
S14: computing the boundary connectivity value of each superpixel from the obtained generation area and boundary length.
3. The image segmentation method based on a visual saliency model according to claim 2, characterized in that the specific process of performing superpixel segmentation on image A in step S11 is:
performing Simple Linear Iterative Clustering (SLIC) segmentation on image A, and recording the label of each superpixel, the superpixel to which each pixel belongs, the superpixel adjacency matrix, and the superpixels on the image boundary for later use.
4. The image segmentation method based on a visual saliency model according to claim 3, characterized in that the specific process of step S11 is as follows:
S111: converting the color space of the segmented image from RGB to Lab;
S112: according to the superpixel adjacency matrix, computing the Euclidean distance in Lab space between all adjacent superpixels (p_i, p_{i+1}):

$$d_{app}(p_i, p_{i+1}) = \sqrt{(l_i - l_{i+1})^2 + (a_i - a_{i+1})^2 + (b_i - b_{i+1})^2}$$

where i ranges from 1 to N-1, N is the number of superpixels in the image, p_i denotes the i-th superpixel and p_{i+1} the (i+1)-th superpixel, l_i, a_i, b_i are the three Lab color-space components of the i-th superpixel, and l_{i+1}, a_{i+1}, b_{i+1} are those of the (i+1)-th superpixel;
S113: the geodesic distance d_geo(p_i, p_j) between any two superpixels is the length of a shortest path from superpixel p_i to superpixel p_j:

$$d_{geo}(p_i, p_j) = \min_{p_1 = p_i,\, p_2, \ldots,\, p_n = p_j} \sum_{k=1}^{n-1} d_{app}(p_k, p_{k+1})$$

where p_1, p_2, …, p_n are superpixels of the segmented image, i and j range from 1 to N, k ranges from 1 to n-1, n is the number of superpixels traversed on the path from p_i to p_j, and min denotes the minimum over all such paths; when i = j, d_geo(p_i, p_j) = 0, i.e., the geodesic distance from a superpixel to itself is 0.
5. The image segmentation method based on a visual saliency model according to claim 4, characterized in that the specific process of step S12 is as follows:
the generation area of superpixel p_i represents a soft region of the region to which p_i belongs; it describes the contribution of every other superpixel p_j to the region containing p_i; the generation area Area(p_i) of superpixel p_i is:

$$Area(p_i) = \sum_{j=1}^{N} \exp\left(-\frac{d_{geo}^2(p_i, p_j)}{2\sigma_{clr}^2}\right) = \sum_{j=1}^{N} S(p_i, p_j)$$

where exp denotes the exponential function, i and j range from 1 to N, N is the number of superpixels in the image, σ_clr is a parameter adjusting how strongly superpixel p_j influences the area of p_i, σ_clr = 10, and S(p_i, p_j) denotes the influence of p_j on the area of p_i; the smaller the geodesic distance between p_i and p_j, the larger the contribution of p_j to the area of p_i.
6. The image segmentation method based on a visual saliency model according to claim 5, characterized in that the specific process of step S13 is as follows:
the boundary length Len_bnd(p_i) of superpixel p_i describes the contribution of the superpixels on the image boundary to the area of p_i, and is defined as:

$$Len_{bnd}(p_i) = \sum_{j=1}^{N} S(p_i, p_j) \cdot \delta(p_j \in Bnd)$$

where Bnd is the set of superpixels on the image boundary; δ(p_j ∈ Bnd) is 1 for superpixels on the image boundary and 0 otherwise.
7. The image segmentation method based on a visual saliency model according to claim 6, characterized in that the specific process of step S14 is as follows:
the boundary connectivity value of superpixel p_i describes how likely p_i is to belong to the boundary of the image; it is a function of the superpixel's boundary length and generation area:

$$BndCon(p_i) = \frac{Len_{bnd}(p_i)}{Area(p_i)}.$$
8. The image segmentation method based on a visual saliency model according to claim 7, characterized in that the specific process of step S3 is as follows:
following the ideas of graph theory, each superpixel is regarded as a node of a graph, with source node S and sink node T; the region term is converted into the weight from S or T to each superpixel, and the edge term into the weights between superpixels; by solving the max-flow/min-cut problem, the image is divided into foreground and background regions; with the edge term unchanged, the image boundary connectivity values obtained in step S1 and the saliency values of the saliency map obtained in step S2 are used as the weight input of the region term, the image is segmented automatically, and the salient-region segmentation result is obtained,
where the weight of the region term is:

$$weight(p_i) = w \cdot BndCon(p_i) + (1 - w) \cdot \exp\left(-\frac{S^2(p_i)}{2\sigma^2}\right)$$

where w and σ are two tuning parameters, w, σ ∈ [0.3, 0.6], S(p_i) is the saliency value of superpixel p_i obtained in step S2, and both BndCon(p_i) and S(p_i) are normalized to [0, 1].
CN201610123858.6A 2016-03-04 2016-03-04 Image segmentation method based on visual saliency model Pending CN105678797A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610123858.6A CN105678797A (en) 2016-03-04 2016-03-04 Image segmentation method based on visual saliency model


Publications (1)

Publication Number Publication Date
CN105678797A 2016-06-15

Family

ID=56307824


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106408529A (en) * 2016-08-31 2017-02-15 浙江宇视科技有限公司 Shadow removal method and apparatus
CN106919950A (en) * 2017-01-22 2017-07-04 山东大学 Probability density weights the brain MR image segmentation of geodesic distance
CN106919950B (en) * 2017-01-22 2019-10-25 山东大学 Brain MR Image Segmentation Method Based on Probability Density Weighted Geodesic Distance
CN108364300A (en) * 2018-03-15 2018-08-03 山东财经大学 Method, system and computer-readable storage medium for image segmentation of vegetable leaf diseases
CN108717539A (en) * 2018-06-11 2018-10-30 北京航空航天大学 A kind of small size Ship Detection
CN109389601A (en) * 2018-10-19 2019-02-26 山东大学 Color image superpixel segmentation method based on similitude between pixel
CN109389601B (en) * 2018-10-19 2019-07-16 山东大学 A color image superpixel segmentation method based on similarity between pixels

Similar Documents

Publication Publication Date Title
CN109685067B (en) A Semantic Image Segmentation Method Based on Region and Deep Residual Networks
CN107123123B (en) Image segmentation quality evaluating method based on convolutional neural networks
CN106022353B (en) A kind of linguistic indexing of pictures method based on super-pixel segmentation
CN101840577B (en) Image automatic segmentation method based on graph cut
CN104809729A (en) Robust automatic image salient region segmenting method
CN104392228B (en) Target class detection method in UAV images based on conditional random field model
CN105678797A (en) Image segmentation method based on visual saliency model
CN107862261A (en) Image people counting method based on multiple dimensioned convolutional neural networks
CN104732545B (en) The texture image segmenting method with quick spectral clustering is propagated with reference to sparse neighbour
CN103208115B (en) Based on the saliency method for detecting area of geodesic line distance
CN101286199A (en) An Image Segmentation Method Based on Region Growing and Ant Colony Clustering
CN104881681A (en) Image Sequence Classification Method Based on Mixed Graph Model
CN104123417B (en) A Method of Image Segmentation Based on Cluster Fusion
CN102254326A (en) Image segmentation method by using nucleus transmission
Xian et al. Neutro-connectedness cut
CN104537355A (en) Remarkable object detecting method utilizing image boundary information and area connectivity
CN108038857A (en) A kind of foreground target detection method based on semantic information and edge constraint
CN107369158A (en) The estimation of indoor scene layout and target area extracting method based on RGB D images
CN107358176A (en) Sorting technique based on high score remote sensing image area information and convolutional neural networks
CN103198479A (en) SAR image segmentation method based on semantic information classification
CN108022244A (en) A kind of hypergraph optimization method for being used for well-marked target detection based on foreground and background seed
CN104715251A (en) Salient object detection method based on histogram linear fitting
CN107577983A (en) A method for recursively finding regions of interest for identifying multi-label images
CN106056165A (en) Saliency detection method based on super-pixel relevance enhancing Adaboost classification learning
Ren et al. Research on infrared small target segmentation algorithm based on improved mask R-CNN

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20170316

Address after: Research Institute, No. 9 Daliang South Road, Shunde District, Foshan City, Guangdong Province, 528300

Applicant after: SYSU CMU Shunde International Joint Research Institute

Applicant after: Sun Yat-sen University

Address before: SYSU CMU Shunde International Joint Research Institute, Daliang Street, Shunde District, Guangdong, 528300

Applicant before: SYSU CMU Shunde International Joint Research Institute

TA01 Transfer of patent application right
RJ01 Rejection of invention patent application after publication

Application publication date: 20160615

RJ01 Rejection of invention patent application after publication