CN104715476B - A salient object detection method based on histogram power function fitting - Google Patents

A salient object detection method based on histogram power function fitting

Info

Publication number
CN104715476B
CN104715476B CN201510078176.3A CN201510078176A
Authority
CN
China
Prior art keywords
superpixel
superpixels
salient
power function
gray level
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201510078176.3A
Other languages
Chinese (zh)
Other versions
CN104715476A (en)
Inventor
杨春蕾
普杰信
刘中华
王晓红
董永生
梁灵飞
刘刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou Xinma Technology Co Ltd
Original Assignee
Henan University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan University of Science and Technology filed Critical Henan University of Science and Technology
Priority to CN201510078176.3A priority Critical patent/CN104715476B/en
Publication of CN104715476A publication Critical patent/CN104715476A/en
Application granted granted Critical
Publication of CN104715476B publication Critical patent/CN104715476B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Facsimile Image Signal Circuits (AREA)

Abstract

A salient object detection method based on histogram power function fitting, comprising four steps: histogram power function fitting, superpixel classification, salient region localization, and salient object detection. Beneficial effects of the invention: by combining the FT saliency map, graph manifold ranking, SLIC superpixel classification, and a gray threshold obtained by histogram power function fitting, the method achieves high detection efficiency, good performance, and high accuracy for images with multiple objects and complex scenes, addressing a major difficulty in the field of salient object detection; moreover, the method runs fast and has low algorithmic complexity while maintaining high detection accuracy.

Description

A Salient Object Detection Method Based on Histogram Power Function Fitting

Technical Field

The present invention relates to the field of salient object detection in images, and in particular to a salient object detection method based on histogram power function fitting.

Background Art

As is well known, the rapid development of computer performance and functionality has provided reliable and feasible conditions for machine intelligence. With the deepening of disciplines such as machine learning and pattern recognition, people increasingly expect computers to complete tasks more autonomously and intelligently. To achieve this goal, a computer must be able to understand its surroundings. Humans perceive external information primarily through vision, so the key to a computer's understanding of its environment is the ability to process visual perception.

A salient object is the object in an image that attracts the most attention and generally contains more of the interesting and useful information. Salient object detection is therefore widely used in object recognition, image segmentation, image retrieval, and other fields. Commonly used salient object detection techniques include salient region detection based on local contrast, such as local contrast with fuzzy growing, and multi-scale center-surround histograms with color spatial-distribution contrast; and salient region detection based on global contrast, such as the frequency-tuned salient region detection method (FT method) proposed by Achanta from a frequency-domain perspective, which takes the Euclidean distance between each pixel value of the Gaussian low-pass filtered image and the mean pixel value of the whole image as the saliency value of that pixel. The FT method fails, however, in the following two cases:

(1) The color of the salient region occupies most of the image; after computation by this method, the background receives higher saliency values;

(2) The background contains a small amount of a prominent color, so that this part of the background also receives very high saliency values.

Related literature: ACHANTA R, HEMAMI S, ESTRADA F, et al. Frequency-tuned salient region detection[C] // IEEE Conference on Computer Vision and Pattern Recognition, 2009: 1597–1604.

In addition, although the performance of many current salient object detection models on single-object, simple-background scenes is already close to the benchmark of the test sets, they do not perform well on multi-object and complex-background images, especially when the object blends into the background.

Graph Based Manifold Ranking is a recently proposed clustering method that obtains a regularized or unregularized Laplacian matrix from the adjacency matrix and degree matrix of a graph; different variants can be applied in different settings. Chuan Yang et al. applied graph manifold ranking to salient object detection: the image is segmented with SLIC, the resulting superpixels serve as graph nodes, the image border nodes are used as query seeds for relevance ranking to detect the background, and the complement then gives the salient region. This approach works well for single objects against simple backgrounds, but the results are unsatisfactory when the salient object lies at the image border, in multi-object scenes, with complex backgrounds, or when the foreground blends into the background.

Related literature: Chuan Yang, Lihe Zhang, Huchuan Lu, Ming-Hsuan Yang. Saliency Detection via Graph-Based Manifold Ranking, CVPR 2013, pp. 3166–3173.

Summary of the Invention

The technical problem to be solved by the present invention is to provide a salient object detection method based on histogram power function fitting. A gray threshold is found from the histogram data of the saliency map produced by the FT algorithm; this threshold extracts the superpixels belonging to the salient object region, which are used as query seeds for graph manifold ranking. All superpixels that may contain salient pixels are then extracted by an adaptive binarization method to supplement the query seeds, producing a saliency map close to the test-set ground truth and thereby detecting salient objects that are otherwise hard to detect, such as objects at the image border, multi-object scenes, complex backgrounds, or foregrounds that blend into the background.

The technical solution adopted by the present invention to solve the above technical problem is a salient object detection method based on histogram power function fitting, characterized in that the detection method comprises the following steps:

Step 1: Histogram power function fitting: generate the FT saliency map of the original image with the FT algorithm and compute its gray-level histogram data; fit a power function curve to the gray-level histogram data by the least squares method and obtain the gray threshold x0 used for superpixel classification in the FT saliency map;

Step 2: Superpixel classification: segment the original image into n superpixels with the SLIC algorithm and, using the gray threshold obtained in Step 1, divide the superpixels into salient superpixels and background superpixels;

Step 3: Salient region localization: find the superpixels that contain salient pixels;

Step 4: Salient object detection: compute the superpixel correlation matrix and the relevance ranking value of each superpixel by graph manifold ranking, obtain the saliency of each superpixel by normalizing its ranking value, and assign the saliency of each superpixel to all pixels it contains to generate the final saliency map.

In Step 1 of the present invention, the gray threshold is obtained by least-squares power function fitting as follows: fit a power function curve to the gray-level histogram data of the FT saliency map by the least squares method; differentiate the fitted power function curve, and take the point (x0, y0) whose derivative equals -1 as the inflection point between background gray levels and salient gray levels; x0 is then used as the gray threshold separating the background from the salient object in the FT saliency map.
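A minimal sketch of this threshold computation in Python/NumPy, assuming the histogram counts hist(i) are already available as an array; the exclusion of zero counts before the log transform and the handling of the scale factor are practical choices by the editor, not taken verbatim from the patent:

```python
import numpy as np

def power_fit_threshold(hist, scale=100.0):
    """Fit y = A * x**B to (i, hist(i)/scale) by least squares in log space and
    return the gray level x0 where dy/dx = -1 (assumes A > 0, B < 0, i.e. a
    decreasing histogram, the typical shape for an FT saliency map)."""
    x = np.arange(len(hist), dtype=float)
    y = np.asarray(hist, dtype=float) / scale
    mask = (x > 0) & (y > 0)                 # the log transform needs positive values
    B, a = np.polyfit(np.log(x[mask]), np.log(y[mask]), 1)   # ln y = a + B*ln x
    A = np.exp(a)
    # dy/dx = A*B*x**(B-1) = -1  =>  x0 = (-1/(A*B)) ** (1/(B-1))
    x0 = (-1.0 / (A * B)) ** (1.0 / (B - 1.0))
    return x0, A, B
```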

In Step 2 of the present invention, the superpixels are divided into salient superpixels and background superpixels as follows:

1. Compute the mean gray level mean_gray(i) of all pixels belonging to the same superpixel i in the FT saliency map;

2. Generate an indicator vector Y1 = [y1, y2, …, yn]^T over the superpixel indices; superpixels whose mean gray level mean_gray(i) exceeds the gray threshold are classified as salient superpixels and their entry yi (i = 1, 2, …, n) is set to 1, otherwise they are classified as background superpixels and yi is set to 0 (see the sketch below).
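A minimal sketch of this classification, assuming ft_map is the FT saliency map as a 2-D gray-level array and labels is the SLIC label matrix with 0-based superpixel indices (the patent numbers superpixels from 1; 0-based indexing here is only for convenience):

```python
import numpy as np

def classify_superpixels(ft_map, labels, x0):
    """Return Y1, where Y1[i] = 1 if the mean gray level of superpixel i in the
    FT saliency map exceeds the threshold x0, and 0 otherwise."""
    n = labels.max() + 1
    flat_labels = labels.ravel()
    sums = np.bincount(flat_labels, weights=ft_map.ravel().astype(float), minlength=n)
    counts = np.bincount(flat_labels, minlength=n)
    mean_gray = sums / np.maximum(counts, 1)   # per-superpixel mean gray level
    return (mean_gray > x0).astype(np.uint8)
```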

In Step 3 of the present invention, the salient region is localized as follows:

1. Binarize the FT saliency map: using an adaptive binarization method, set the gray value of pixels whose gray level is higher than twice the mean gray level of the FT saliency map to 255, and set pixels below that level to 0;

2. Generate an indicator vector Y2 = [y1, y2, …, yn]^T over the superpixel indices; record the superpixel indices of all pixels whose gray value is 255 in the binarized FT saliency map, set the indicator value yi of every superpixel i that contains such a pixel to 1, and set the rest to 0 (a sketch follows).
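A minimal sketch of this localization step under the same assumptions (ft_map a 2-D gray-level array, labels the SLIC label matrix):

```python
import numpy as np

def locate_salient_superpixels(ft_map, labels):
    """Binarize the FT saliency map at twice its mean gray level and return Y2,
    where Y2[i] = 1 if superpixel i contains at least one pixel set to 255."""
    binary = ft_map > 2.0 * ft_map.mean()      # True where the pixel would be 255
    n = labels.max() + 1
    y2 = np.zeros(n, dtype=np.uint8)
    y2[np.unique(labels[binary])] = 1          # superpixels hit by salient pixels
    return y2
```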

In Step 4 of the present invention, the superpixel correlation matrix is computed as follows: the superpixels of the segmented image form a graph G = (V, E), where V is the set of all superpixels of G and E is the set of fully connected edges between the nodes; the superpixel correlation matrix is computed by graph manifold ranking as C = (D - αW)^-1, where D is the degree matrix of G, W is the adjacency matrix of the superpixels, and α is the correlation coefficient.

In Step 4 of the present invention, the relevance ranking value of each superpixel is computed as follows:

1. Generate an indicator vector Y = [y1, y2, …, yn]^T over the superpixel indices and set Y = Y1 | Y2, i.e. Y is the bitwise OR of Y1 and Y2;

2. Obtain the relevance ranking value of each superpixel according to the formula f* = (D - αW)^-1 Y (see the sketch below).
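A minimal sketch of the ranking computation, assuming the adjacency matrix W has already been built (see the graph-construction sketch at Step 7 of the detailed description); the min–max normalization at the end is one common choice, since the patent's own normalization formula is not reproduced above:

```python
import numpy as np

def manifold_ranking(W, Y1, Y2, alpha=0.99):
    """Relevance ranking f* = (D - alpha*W)^(-1) Y with Y = Y1 OR Y2, rescaled
    to [0, 1] as the per-superpixel saliency."""
    D = np.diag(W.sum(axis=1))                         # degree matrix
    Y = (Y1.astype(bool) | Y2.astype(bool)).astype(float)
    f = np.linalg.solve(D - alpha * W, Y)              # same as (D - alpha*W)^(-1) @ Y
    f = (f - f.min()) / (f.max() - f.min() + 1e-12)    # assumed min-max normalization
    return f

# Per-pixel saliency map: assign each superpixel's value to all its pixels, e.g.
#   saliency_map = (255 * f[labels]).astype(np.uint8)
```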

In Step 2 of the present invention, the number n of superpixels into which the original image is segmented by the SLIC algorithm is 180 to 230.

The beneficial effects of the present invention are: (1) salient superpixels are obtained by computing an object/background segmentation threshold from the FT saliency map through power function fitting of its histogram data, and the binarized FT saliency map is used to locate all regions where the salient object may exist, which improves the accuracy of salient object detection; (2) salient object detection by graph manifold ranking quickly handles both simple scenes (single object, plain background) and multi-object, complex scenes, and yields a binary segmentation of the salient region, which further guarantees detection accuracy and compensates for the possible failure cases of the FT algorithm; (3) combining the FT saliency map with graph manifold ranking runs faster and with lower algorithmic complexity than most salient object detection methods while maintaining high detection accuracy; (4) superpixel classification in the present invention uses SLIC, a pixel clustering technique with good state-of-the-art performance; the clustered superpixels preserve salient object edges effectively, so that the final saliency map shows the object contour clearly; (5) salient region localization in the present invention combines the binary map obtained from the FT saliency map by adaptive binarization with the superpixel information, which ensures to the greatest extent that regions containing salient pixels are located and effectively improves detection accuracy; (6) salient object detection in the present invention uses a graph manifold ranking method similar to spectral clustering; starting from the query seeds, the unnormalized Laplacian matrix effectively searches the graph nodes for related nodes and sorts them by relevance, so that the saliency map is generated quickly and accurately; (7) the histogram power function fitting used to obtain the gray threshold is an error theory based on the least squares method and belongs to numerical analysis, not to simple digital image processing; (8) for images with multiple objects and complex scenes, the present invention has high detection efficiency, good performance, and high accuracy, solving a major difficulty in the field of salient object detection.

Brief Description of the Drawings

Fig. 1 is the overall flowchart of the present invention;

Fig. 2 is the flowchart of obtaining the gray threshold by power function fitting in the present invention;

Fig. 3 is the flowchart of superpixel classification in the present invention;

Fig. 4 is the flowchart of salient region localization in the present invention;

Fig. 5 is the flowchart of salient object detection based on graph manifold ranking in the present invention;

Fig. 6 is the flowchart of saliency map binarization in the present invention;

Fig. 7 is an example of the basic workflow of the present invention.

Detailed Description

The salient object detection method based on histogram power function fitting according to the present invention comprises the steps of histogram power function fitting, superpixel classification, salient region localization, and salient object detection.

To illustrate a specific implementation of the salient object detection method based on histogram power function fitting according to the present invention, an embodiment is described below with reference to the accompanying drawings:

Fig. 1 is the overall flowchart of the present invention; a complete run of the salient object detection method based on histogram power function fitting is carried out by the following steps:

Step 1: Generate the FT saliency map of the original image with the FT algorithm, normalize the FT saliency map data FTimname to gray levels 0–255, and compute its histogram data hist(i), where i = 0, 1, 2, …, 255 and hist(i) denotes the number of pixels with gray level i;
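A minimal sketch of this step, where the min–max scaling to gray levels 0–255 is an assumption standing in for the normalization formula, which is not reproduced above:

```python
import numpy as np

def ft_histogram(ft_raw):
    """Rescale the FT saliency data to gray levels 0-255 (min-max scaling is an
    assumption) and return (ft_map, hist), with hist[i] = number of pixels at
    gray level i."""
    ft_map = np.round(255.0 * (ft_raw - ft_raw.min()) /
                      (ft_raw.max() - ft_raw.min() + 1e-12)).astype(np.uint8)
    hist = np.bincount(ft_map.ravel(), minlength=256)
    return ft_map, hist
```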

Step 2: To eliminate the adverse effect of the very frequent low gray levels on finding the inflection point, divide all histogram counts hist(i) (0 ≤ i ≤ 255) of the FT saliency map by 100 before fitting; then fit the power function curve y = Ax^B to the data set {(i, hist(i)) | 0 ≤ i ≤ 255} by the least squares method to obtain the coefficients A and B;

Step 3: Differentiate the power function y = Ax^B to get dy/dx = A·B·x^(B-1); set dy/dx = -1 and solve for x0, which is used as the gray threshold for superpixel classification;
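Written out, with A > 0 and B < 0 for the decreasing histogram fit, the threshold has the closed form:

\[
\frac{dy}{dx} = A\,B\,x^{B-1} = -1
\quad\Longrightarrow\quad
x_0 = \left(\frac{-1}{A\,B}\right)^{\frac{1}{B-1}} .
\]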

Step 4: Segment the original image into n superpixels (n = 180–230) with the SLIC algorithm, obtaining the label matrix superpixels that records which superpixel each pixel of the original image belongs to; superpixels(i, j) is the index of the superpixel containing the pixel at coordinates (i, j);
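A minimal sketch of this segmentation step using the SLIC implementation from scikit-image; the library choice, the parameters other than the number of superpixels, and the input path are assumptions:

```python
import numpy as np
from skimage import io
from skimage.segmentation import slic

image = io.imread("input.jpg")     # hypothetical input path
# n_segments requests roughly that many superpixels; the patent uses n in 180-230.
labels = slic(image, n_segments=200, compactness=10, start_label=0)
n = labels.max() + 1               # number of superpixels actually produced
```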

Step 5: Define a one-dimensional vector Y1 = [y1, y2, …, yn]^T of length n (the number of superpixels); compute the mean gray level mean_gray of all pixels of the FT saliency map belonging to the same superpixel and compare it with x0; a superpixel i whose mean exceeds x0 is treated as a salient superpixel and yi is set to 1, otherwise to 0;

Step 6: Binarize the FT saliency map: pixels whose gray level exceeds twice the mean gray level of the FT saliency map are set to 255 and the rest to 0. Define a one-dimensional vector Y2 = [y1, y2, …, yn]^T of length n; for every pixel with gray value 255 in the binary map, set the indicator value yi of the superpixel i it belongs to to 1, and set the others to 0;

Step 7: Treat the superpixels as nodes of a graph G = (V, E), where V is the node set of G, i.e. the set of all superpixels, and E is the set of fully connected edges between all nodes. First compute the adjacency matrix W of the superpixels: each entry w of W is computed from ci and cj, the mean colors of superpixels i and j in the CIELAB color space, and σ, a weight coefficient that may be any constant between 0 and 1 (0.1 in the present invention); the adjacency weight of each pair of nodes is computed according to the formula to obtain the adjacency matrix of G. The degree matrix D = diag{d11, …, dnn} of G is then obtained according to the formula (dii = Σj wij); with α the correlation coefficient, a constant between 0 and 1 (α = 0.99 in the present invention), the correlation matrix of G is obtained according to the formula (D - αW)^-1;
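A minimal sketch of this graph construction. The exponential weight on the CIELAB color distance follows the graph manifold ranking literature (Yang et al., CVPR 2013); since the patent's own weight formula is not reproduced above, the exact weight form and the rescaling of the CIELAB values are assumptions:

```python
import numpy as np
from skimage.color import rgb2lab

def correlation_matrix(image, labels, sigma=0.1, alpha=0.99):
    """Build the fully connected adjacency matrix W from mean CIELAB colors,
    the degree matrix D, and the correlation matrix C = (D - alpha*W)^(-1)."""
    lab = rgb2lab(image)
    lab = (lab - lab.min()) / (lab.max() - lab.min() + 1e-12)  # rescale so sigma=0.1 is meaningful (assumption)
    n = labels.max() + 1
    flat = labels.ravel()
    counts = np.maximum(np.bincount(flat, minlength=n), 1).astype(float)
    means = np.stack([np.bincount(flat, weights=lab[..., k].ravel(), minlength=n)
                      for k in range(3)], axis=1) / counts[:, None]
    dist = np.linalg.norm(means[:, None, :] - means[None, :, :], axis=2)
    W = np.exp(-dist / sigma ** 2)          # assumed weight form, as in Yang et al. 2013
    np.fill_diagonal(W, 0.0)                # no self-loops
    D = np.diag(W.sum(axis=1))              # degree matrix, d_ii = sum_j w_ij
    C = np.linalg.inv(D - alpha * W)        # correlation matrix
    return W, D, C
```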

Step 8: Combine the salient indicator vector from Step 5 and the localization indicator vector from Step 6 by bitwise OR, Y = Y1 | Y2, and use the resulting indicator vector Y as the query; obtain the saliency of each superpixel according to the formula f* = (D - αW)^-1 Y, normalize the saliency according to the formula, and assign the normalized saliency to all pixels contained in each superpixel to generate the final saliency map;

Step 9: Binarize the saliency map according to the formula to obtain the binary segmentation map of the salient region.

The least-squares power function fitting in Step 2 is carried out as follows:

Given the histogram data points (i, hist(i)), i = 1, 2, …, m, whose distribution roughly follows a power function curve, fit the two-parameter power function curve y = Ax^B; for convenience of solution, take the logarithm of both sides, ln y = ln A + B·ln x, and let Y = ln y, X = ln x, and a = ln A, so that Y = a + BX;

The fitted curve does not pass through all the data points; instead it minimizes the sum of squared deviations Σ δi², where the deviation of each data point from the fitted curve is δi = Yi - (a + BXi);

According to the least squares principle, a and B are chosen so that Σ δi² attains its minimum, so a and B must satisfy the conditions ∂(Σ δi²)/∂a = 0 and ∂(Σ δi²)/∂B = 0,

which yields the normal equations m·a + B·ΣXi = ΣYi and a·ΣXi + B·ΣXi² = ΣXiYi.

Solving this system gives a and B; substituting A = e^a then yields the fitted power function curve y = Ax^B.
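With Xi = ln i, Yi = ln hist(i), a = ln A and m data points, the normal equations and their solution take the familiar linear least-squares form:

\[
\begin{aligned}
m\,a + B\sum_{i} X_i &= \sum_{i} Y_i,\\
a\sum_{i} X_i + B\sum_{i} X_i^{2} &= \sum_{i} X_i Y_i,
\end{aligned}
\qquad
B = \frac{m\sum_i X_i Y_i - \sum_i X_i \sum_i Y_i}{m\sum_i X_i^{2} - \bigl(\sum_i X_i\bigr)^{2}},
\quad
a = \frac{\sum_i Y_i - B\sum_i X_i}{m},
\quad
A = e^{a}.
\]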

Claims (4)

1. A salient object detection method based on histogram power function fitting, characterized in that the detection method comprises the following steps:

Step 1: Histogram power function fitting: generate the FT saliency map of the original image with the FT algorithm and compute its gray-level histogram data; fit a power function curve to the gray-level histogram data by the least squares method and obtain the gray threshold x0 used for superpixel classification in the FT saliency map;

wherein the gray threshold is obtained by least-squares power function fitting as follows: fit a power function curve to the gray-level histogram data by the least squares method, differentiate the fitted power function curve, take the point (x0, y0) whose derivative equals -1 as the inflection point between background gray levels and salient gray levels, and take x0 as the gray threshold separating the background from the salient object in the FT saliency map;

Step 2: Superpixel classification: segment the original image into n superpixels with the SLIC algorithm and, using the gray threshold obtained in Step 1, divide the superpixels into salient superpixels and background superpixels, specifically:

1) compute the mean gray level mean_gray(i) of all pixels belonging to the same superpixel i in the FT saliency map;

2) generate an indicator vector Y1 = [y1, y2, …, yn]^T over the superpixel indices; superpixels whose mean gray level mean_gray(i) exceeds the gray threshold are classified as salient superpixels and their entry yi (i = 1, 2, …, n) is set to 1, otherwise they are classified as background superpixels and yi is set to 0;

Step 3: Salient region localization: find the superpixels that contain salient pixels, specifically:

1) binarize the FT saliency map: using an adaptive binarization method, set the gray value of pixels whose gray level exceeds twice the mean gray level of the FT saliency map to 255, and set the others to 0;

2) generate an indicator vector Y2 = [y1, y2, …, yn]^T over the superpixel indices; record the superpixel indices of all pixels with gray value 255 in the binarized FT saliency map, set the indicator value yi of every superpixel i that contains such a pixel to 1, and set the others to 0;

Step 4: Salient object detection: compute the superpixel correlation matrix and the relevance ranking value of each superpixel by graph manifold ranking, obtain the saliency of each superpixel by normalizing its ranking value, and assign the saliency of each superpixel to all pixels it contains to generate the final saliency map.

2. The salient object detection method based on histogram power function fitting according to claim 1, characterized in that the superpixel correlation matrix in Step 4 is computed as follows: the superpixels of the segmented image form a graph G = (V, E), where V is the set of all superpixels of G and E is the set of fully connected edges between all nodes; the superpixel correlation matrix is computed by graph manifold ranking as C = (D - αW)^-1, where D is the degree matrix of G, W is the adjacency matrix of the superpixels, and α is the correlation coefficient.

3. The salient object detection method based on histogram power function fitting according to claim 1, characterized in that the relevance ranking value of each superpixel in Step 4 is computed as follows:

1) generate an indicator vector Y = [y1, y2, …, yn]^T over the superpixel indices and set Y = Y1 | Y2, i.e. Y is the bitwise OR of Y1 and Y2;

2) obtain the relevance ranking value of each superpixel according to the formula f* = (D - αW)^-1 Y.

4. The salient object detection method based on histogram power function fitting according to claim 1, characterized in that the number n of superpixels into which the original image is segmented by the SLIC algorithm in Step 2 is 180 to 230.
CN201510078176.3A 2015-02-13 2015-02-13 A salient object detection method based on histogram power function fitting Expired - Fee Related CN104715476B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510078176.3A CN104715476B (en) 2015-02-13 2015-02-13 A salient object detection method based on histogram power function fitting

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510078176.3A CN104715476B (en) 2015-02-13 2015-02-13 A salient object detection method based on histogram power function fitting

Publications (2)

Publication Number Publication Date
CN104715476A CN104715476A (en) 2015-06-17
CN104715476B true CN104715476B (en) 2017-06-16

Family

ID=53414770

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510078176.3A Expired - Fee Related CN104715476B (en) 2015-02-13 2015-02-13 A salient object detection method based on histogram power function fitting

Country Status (1)

Country Link
CN (1) CN104715476B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105761238B (en) * 2015-12-30 2018-11-06 河南科技大学 A method of passing through gray-scale statistical data depth information extraction well-marked target
CN105913064B (en) * 2016-04-12 2017-03-08 福州大学 A fitting optimization method for image visual saliency detection
CN106056590B (en) * 2016-05-26 2019-02-22 重庆大学 A saliency detection method based on Manifold Ranking and combining foreground and background features
CN106296695B (en) * 2016-08-12 2019-05-24 西安理工大学 Adaptive threshold natural target image segmentation extraction algorithm based on conspicuousness
CN106447699B (en) * 2016-10-14 2019-07-19 中国科学院自动化研究所 Target detection and tracking method of high-speed rail catenary based on Kalman filter

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831402A (en) * 2012-08-09 2012-12-19 西北工业大学 Sparse coding and visual saliency-based method for detecting airport through infrared remote sensing image
CN104240244A (en) * 2014-09-10 2014-12-24 上海交通大学 Significant object detection method based on propagation modes and manifold ranking

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8649606B2 (en) * 2010-02-10 2014-02-11 California Institute Of Technology Methods and systems for generating saliency models through linear and/or nonlinear integration

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831402A (en) * 2012-08-09 2012-12-19 西北工业大学 Sparse coding and visual saliency-based method for detecting airport through infrared remote sensing image
CN104240244A (en) * 2014-09-10 2014-12-24 上海交通大学 Significant object detection method based on propagation modes and manifold ranking

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"基于先验融合和流形排序的显著目标检测";杨川等;《中国优秀硕士学位论文全文数据库信息科技辑》;20130815(第8期);摘要,第12页表2.1,第21页第1-3段,22第5段,第27页3.3.2节第1段,第31页表3.1 *

Also Published As

Publication number Publication date
CN104715476A (en) 2015-06-17

Similar Documents

Publication Publication Date Title
CN104715251B (en) A kind of well-marked target detection method based on histogram linear fit
CN107229917B (en) A common salient target detection method for multiple remote sensing images based on iterative clustering
Shigematsu et al. Learning RGB-D salient object detection using background enclosure, depth contrast, and top-down features
Lee et al. Multiple random walkers and their application to image cosegmentation
CN107730515B (en) Saliency detection method for panoramic images based on region growing and eye movement model
WO2017190656A1 (en) Pedestrian re-recognition method and device
CN107633226B (en) Human body motion tracking feature processing method
CN105761238B (en) A method of passing through gray-scale statistical data depth information extraction well-marked target
CN107564022A (en) Saliency detection method based on Bayesian Fusion
Herbon et al. Detection and segmentation of clustered objects by using iterative classification, segmentation, and Gaussian mixture models and application to wood log detection
CN104715476B (en) A kind of well-marked target detection method based on histogram power function fitting
CN105678278A (en) Scene recognition method based on single-hidden-layer neural network
WO2019007253A1 (en) Image recognition method, apparatus and device, and readable medium
CN112818905B (en) Finite pixel vehicle target detection method based on attention and spatio-temporal information
CN107977660A (en) Region of interest area detecting method based on background priori and foreground node
CN108734200A (en) Human body target visible detection method and device based on BING features
Bai et al. Principal pixel analysis and SVM for automatic image segmentation
Kumar et al. Automatic image segmentation using wavelets
CN108345835A (en) A kind of target identification method based on the perception of imitative compound eye
Zhou et al. Dynamic background subtraction using spatial-color binary patterns
Afzali et al. A supervised feature weighting method for salient object detection using particle swarm optimization
Elashry et al. Feature matching enhancement using the graph neural network (gnn-ransac)
CN114511925B (en) A pig drinking behavior identification method based on the mouth area and multi-feature fusion
Zhu et al. [Retracted] Basketball Object Extraction Method Based on Image Segmentation Algorithm
Lei et al. Hierarchical saliency detection via probabilistic object boundaries

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20191106

Address after: Room E601, innovation building, No. 68, Wenhua Road, Jinshui District, Zhengzhou City, Henan Province

Patentee after: Zhengzhou Xinma Technology Co., Ltd

Address before: 471000 Xiyuan Road, Jianxi District, Henan, No. 48, No.

Patentee before: Henan University of Science and Technology

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170616

Termination date: 20200213