CN107274416A - High spectrum image conspicuousness object detection method based on spectrum gradient and hierarchical structure - Google Patents

High spectrum image conspicuousness object detection method based on spectrum gradient and hierarchical structure

Info

Publication number
CN107274416A
Authority
CN
China
Prior art keywords
image
spectral
gradient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710442878.4A
Other languages
Chinese (zh)
Other versions
CN107274416B (en)
Inventor
魏巍
张艳宁
张磊
严杭琦
高凡
高一凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201710442878.4A priority Critical patent/CN107274416B/en
Publication of CN107274416A publication Critical patent/CN107274416A/en
Application granted granted Critical
Publication of CN107274416B publication Critical patent/CN107274416B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/259Fusion by voting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/194Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a hyperspectral image salient object detection method based on spectral gradient and hierarchical structure, which addresses the heavy computational load of existing hyperspectral salient object detection methods. The technical solution first generates a spectral gradient image; then generates image segmentation regions; establishes a saliency detection model based on the image hierarchy; further establishes a saliency computation method based on a background prior and edge features; and finally computes the saliency map. Because the spectral gradient is computed along the spectral dimension of the original hyperspectral image, the extracted spectral gradient features weaken the adverse effects of uneven illumination. Superpixels are generated with the Simple Linear Iterative Clustering (SLIC) algorithm, which segments the hyperspectral image and accelerates computation, and saliency is measured by the spectral feature contrast between the segmented regions, so the computational cost is small.

Description

Hyperspectral Image Salient Object Detection Method Based on Spectral Gradient and Hierarchical Structure

Technical Field

The invention relates to a hyperspectral image salient object detection method, and in particular to a hyperspectral image salient object detection method based on spectral gradient and hierarchical structure.

Background Art

A hyperspectral image is image data obtained by recording, with an imaging spectrometer, the spectral information of the various ground objects observed in the field of view. As hyperspectral imaging technology matures, imaging devices have improved greatly in spectral resolution and spatial resolution, so that tasks such as object detection, recognition and tracking, which were originally carried out mainly on conventional images, are gradually being extended to hyperspectral data. Research on salient object detection in hyperspectral images is still at an early stage.

Existing hyperspectral salient object detection methods mainly adopt the Itti model and replace its color features with the spectral features of the hyperspectral image so that the model applies to hyperspectral data. The document "S. L. Moan, A. Mansouri, et al., Saliency for Spectral Image Analysis [J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2013, 6(6): 2472-2479" discloses a hyperspectral salient object detection method that projects the spectra into the CIELAB color space and exploits the spectral information by means such as principal component analysis (PCA). Existing methods take the pixel as the basic unit of saliency estimation and evaluate the differences between pixel spectra with principal component analysis, Euclidean distance, spectral angle and similar measures, thereby gauging the saliency of each pixel. Reflecting whole-image saliency through pixel-level saliency inevitably yields non-uniform saliency maps in which object edges respond strongly while object interiors respond weakly. In addition, the main difficulties in saliency detection on hyperspectral images are the influence of brightness variation on the spectral data, the huge computational load caused by the data scale, and the reliance of existing methods on a single model. Therefore, it is urgent to break with the inherent ideas of existing hyperspectral detection methods and propose a new hyperspectral image salient object detection method.

Summary of the Invention

In order to overcome the heavy computational load of existing hyperspectral image salient object detection methods, the present invention provides a hyperspectral image salient object detection method based on spectral gradient and hierarchical structure. The method first generates a spectral gradient image; then generates image segmentation regions; next establishes a saliency detection model based on the image hierarchy; then establishes a saliency computation method based on a background prior and edge features; and finally computes the saliency map. Because the spectral gradient is computed along the spectral dimension of the original hyperspectral image, the extracted spectral gradient features weaken the adverse effects of uneven illumination. At the same time, superpixels are generated with the Simple Linear Iterative Clustering (SLIC) algorithm, which segments the hyperspectral image and accelerates computation, and saliency is measured by the spectral feature contrast between the segmented regions, so the computational cost is small.

The technical solution adopted by the present invention to solve its technical problem is a hyperspectral image salient object detection method based on spectral gradient and hierarchical structure, characterized by comprising the following steps:

Step 1: Generate the spectral gradient image.

The spectral gradient is computed for each pixel to generate the spectral gradient image, so that the extracted spectral gradient feature vectors maintain the spatial relationships of the original image:

g_i^{(j)} = (1/Δλ) · (y_i^{(j)} − y_i^{(j−1)})    (1)

where g_i^{(j)} is the j-th component of the spectral gradient vector, y_i^{(j)} is the j-th component of the original spectral vector, and Δλ is the wavelength difference between adjacent bands.

Applying formula (1) to the spectral vector of every pixel of a hyperspectral data block D yields a new spectral gradient data block X.

Step 2: Generate the image segmentation regions.

The simple linear iterative clustering (SLIC) algorithm is applied to the spectral gradient data block X; its specific steps are as follows:

Input: spectral gradient image X, expected superpixel side length s, weight coefficient m;

Output: segmentation image with each superpixel labeled;

1. Initialization:

1) Initialize a set of initial cluster centers C on the gradient image X at intervals of s;

2) Move each center to the position of the gradient minimum within its 3×3 neighborhood;

3) For every pixel, set the label l_i = −1 and the distance to its current center d_i = +∞;

2. Iteratively update pixel labels and cluster centers:

1) For the current cluster center C_k, compute the distance D(x_i, C_k) from each pixel x_i in a square neighborhood of side 2s to C_k according to formula (2);

2) If D(x_i, C_k) < d_i, set the label of x_i to l_i = k and update d_i = D(x_i, C_k);

3) Repeat steps 1) and 2) until the change of every center between two consecutive iterations is smaller than a threshold;

D(x_i, C_k) = sqrt( d_g²(x_i, C_k) + [d_s(x_i, C_k) / s]² · m² )    (2)

where d_g(x_i, C_k) is the Euclidean distance between the spectral gradient parts of x_i and C_k, d_s(x_i, C_k) is the Euclidean distance between their spatial positions, and m is the weight coefficient between the two distances.

A dual-kernel function replaces the density function of the mean-shift algorithm to carry out the computations of the mean-shift process; its specific form is

K_{T_s,T_g}(x) = (δ / (T_s² · T_g²)) · k(‖x_s / T_s‖²) · k(‖x_g / T_g‖²)    (3)

where x_g is the spectral gradient vector of pixel x, x_s is the spatial coordinate of pixel x, T_g is the kernel bandwidth for the spectral feature, T_s is the kernel bandwidth for the spatial coordinate, and δ is a normalization coefficient.

The specific steps of the mean-shift algorithm are as follows:

Input: superpixel center vectors C = {C_1, C_2, …, C_k, …, C_n}, spectral threshold T_g, spatial threshold T_s;

Output: label vector l_sp for the input superpixel centers;

1) Run the mean-shift process with the superpixel vector C_k as the initial center, and record the resulting candidate center C′_j;

2) For every sample appearing on the path that forms C′_j, add 1 to its vote count for C′_j;

3) Traverse the current candidate center set C′ and look for a preferred center C′_i whose spectral gradient distance to C′_j is smaller than T_g/2 and whose spatial distance is smaller than T_s/2;

4) If such a C′_i exists, merge the vote counts of C′_i and C′_j, add the mean of C′_i and C′_j to C′, and delete C′_i; otherwise go to step 5);

5) Repeat steps 1) to 4) for every superpixel center to obtain the final cluster centers;

6) For each superpixel center, assign it to the cluster center C′_m that received the most votes, which yields l_sp.

Step 3: Establish the salient object detection model based on the image hierarchy.

The spectral threshold T_g, which controls the spectral similarity of samples in step 2, and the spatial threshold T_s, which controls the neighborhood range, are set to 0.1, 0.2, 0.3, 0.4 times max{r, c} and to 10, 20, 25, 30 respectively. After the bottom-layer superpixels are clustered at these different granularities, clustering results on four levels are produced in total, forming a four-layer image hierarchy. The set of superpixel blocks serving as bottom-layer nodes is denoted as a whole, with its number of superpixels recorded; in addition, the j-th segmentation region of the i-th layer is abstracted as a node N̂_{i,j}. Examining the final saliency of a bottom-layer superpixel B under the hierarchy of h layers, the corresponding saliency detection model is expressed as

s(B) = Σ_{i=1}^{h} ω_i · s(N̂_{i, f_i(B)})    (4)

where f_i(·) returns the indices of the nodes of the i-th layer that contain B, ω_i is the weight of the layer containing node N̂_{i,j}, and s(N̂_{i,j}) is the saliency value of node N̂_{i,j}.

Step 4: Computation of the position prior, the background prior, and the edge-feature saliency.

1. Position prior and background prior.

The mathematical expression of the position prior is as follows:

p(R_i) = (1 / ω(R_i)) · Σ_{x_k ∈ R_i} exp(−2 · d_{x_k}²)    (5)

where d_{x_k} is the Euclidean distance from pixel x_k in region R_i to the image center.

The square ring formed by all pixels within 10 pixels of the image border is selected as the boundary region R_b of the image.

For a node N̂_{i,j} of the image hierarchy, the following three rules are followed when computing its background prior:

1) If N̂_{i,j} intersects the boundary region R_b, a penalty function κ(N̂_{i,j}) is applied to it;

2) otherwise, if N̂_{i,j} does not intersect R_b, the penalty function κ(N̂_{i,j}) is zero;

3) the larger the intersection |N̂_{i,j} ∩ R_b|, the heavier the penalty should be, i.e., the larger the absolute value of κ(N̂_{i,j});

The three rules above define the boundary conditions and influencing factors when computing the background prior with the contact penalty κ(N̂_{i,j}). Within these rules, different forms of the penalty function yield different concrete computation methods. The penalty function is defined as

κ(N̂_{i,j}) = ξ · |N̂_{i,j} ∩ R_b|    (6)

where ξ is the penalty factor carried by each pixel in the boundary region.

2. Edge-feature saliency.

The mean of all bands of the hyperspectral image is taken as its corresponding grayscale image I_hsi, and the Canny detector is used to obtain the edge features. The region saliency is computed from the edge features in the spatial dimension of the hyperspectral image as follows:

Input: average grayscale image I_hsi of the hyperspectral image, hierarchy node N̂_{i,j} and the segmentation result map I_seg it belongs to;

Output: the edge-feature saliency of N̂_{i,j};

1) Extract edges from I_hsi with the Canny detector and record the resulting edge map;

2) Filter I_seg with a 3×3 Gaussian filter of variance 1.5 so that the region boundaries are widened;

3) Compute the gradient magnitude image of the filtered I_seg and binarize it to obtain the boundary extraction result;

4) Accumulate the image edges that lie on the boundary of the segmentation region N̂_{i,j} to obtain its edge-feature saliency, where the boundary of N̂_{i,j} is taken from the boundary extraction result and the accumulation sums the Canny edge features located on that boundary.

Step 5: Compute the saliency map.

When computing the saliency map, the saliency computation for each node of the hierarchy, i.e., for the segmentation regions N̂_{i,j} at each level, must be determined. It consists of four parts: spectral gradient region contrast, edge-feature saliency, the position prior, and the background prior. The position prior is computed as

p(N̂_{i,j}) = (1 / |N̂_{i,j}|) · Σ_{x_k ∈ N̂_{i,j}} exp(−2 · d_{x_k}²)    (8)

When applying the priors, only the part based on spectral-feature region contrast is enhanced; the part based on edge features is left untouched. The background prior suppresses the background well because the selected image border is narrow, so it is applied to the computations based on both kinds of features. The final saliency formula is then obtained by combining these terms, with a weight coefficient balancing the contributions of the two feature-based terms.

The beneficial effects of the present invention are as follows: the method first generates a spectral gradient image; then generates image segmentation regions; next establishes a saliency detection model based on the image hierarchy; then establishes a saliency computation method based on a background prior and edge features; and finally computes the saliency map. Because the spectral gradient is computed along the spectral dimension of the original hyperspectral image, the extracted spectral gradient features weaken the adverse effects of uneven illumination. At the same time, superpixels are generated with the Simple Linear Iterative Clustering (SLIC) algorithm, which segments the hyperspectral image and accelerates computation, and saliency is measured by the spectral feature contrast between the segmented regions, so the computational cost is small.

The present invention is described in detail below in combination with specific embodiments.

Detailed Description of the Embodiments

The specific steps of the hyperspectral image salient object detection method based on spectral gradient and hierarchical structure of the present invention are as follows:

A hyperspectral remote sensing image is a cube-like structure: the spatial dimensions reflect the reflectance, in a given solar wavelength band, of the pixels at different ground positions, and the spectral dimension reflects the relationship between incident and reflected light across bands for the pixel at a given position. A hyperspectral image can be expressed as a p×n data set X_n = {x_1, x_2, ..., x_n}, where p is the number of bands and n is the total number of pixels in the image; a pixel of the image can be expressed as x_i = (x_{1i}, x_{2i}, ..., x_{pi})^T, where x_{pi} is the reflectance in the p-th band.

Step 1: Generate the spectral gradient map.

The spectral gradient is the ratio of the difference between every two adjacent components of the original spectral vector to the difference of the corresponding wavelengths. The vector composed of a series of such spectral gradients is called the spectral gradient vector. The result obtained by computing the spectral gradient for each pixel is called the spectral gradient image, so that the extracted spectral gradient feature vectors maintain the spatial relationships of the original image:

g_i^{(j)} = (1/Δλ) · (y_i^{(j)} − y_i^{(j−1)})    (1)

where g_i^{(j)} is the j-th component of the spectral gradient vector, y_i^{(j)} is the j-th component of the original spectral vector, and Δλ is the wavelength difference between adjacent bands.

Applying the above formula to the spectral vector of every pixel of a hyperspectral data block D yields a new spectral gradient data block X. To a certain extent the spectral gradient reduces the brightness differences caused by uneven illumination and thus weakens the influence of such differences on the subsequent steps of the algorithm.
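A minimal sketch of this step in Python, assuming the hyperspectral cube is stored as a NumPy array of shape (rows, cols, bands) with the band center wavelengths in a separate vector; the array names and shapes are illustrative and not taken from the patent:

```python
import numpy as np

def spectral_gradient_image(D, wavelengths):
    """Compute the spectral gradient cube of formula (1).

    D           : hyperspectral data block, shape (rows, cols, bands)
    wavelengths : band center wavelengths, shape (bands,)
    Returns X with shape (rows, cols, bands - 1), one gradient per adjacent band pair.
    """
    delta_lambda = np.diff(wavelengths)          # Δλ between adjacent bands
    delta_y = np.diff(D, axis=2)                 # y^(j) - y^(j-1) for every pixel
    # Formula (1) writes a single Δλ; dividing per band pair reduces to the
    # same thing when the bands are evenly spaced.
    return delta_y / delta_lambda[None, None, :]
```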

Step 2: Generate the image segmentation regions.

Because the saliency of an individual pixel depends on the distinctiveness of its features within a small neighborhood (typically a 3×3 pixel range), such pixel-level saliency can hardly reflect the saliency of the corresponding macroscopic object. Generating superpixels effectively reduces the redundant information in local image regions, simplifies the image representation, and lowers the complexity of subsequent processing tasks; moreover, superpixels help to extract mid-level visual information and better match the way humans perceive images. The simple linear iterative clustering (SLIC) algorithm is applied to the spectral gradient data X; its specific steps are as follows (a sketch of the distance computation is given after the listing):

Input: spectral gradient image X, expected superpixel side length s, weight coefficient m;

Output: segmentation image with each superpixel labeled;

1. Initialization:

1) Initialize a set of initial cluster centers C on the gradient image X at intervals of s;

2) Move each center to the position of the gradient minimum within its 3×3 neighborhood;

3) For every pixel, set the label l_i = −1 and the distance to its current center d_i = +∞;

2. Iteratively update pixel labels and cluster centers:

1) For the current cluster center C_k, compute the distance D(x_i, C_k) from each pixel x_i in a square neighborhood of side 2s to C_k according to formula (2);

2) If D(x_i, C_k) < d_i, set the label of x_i to l_i = k and update d_i = D(x_i, C_k);

3) Repeat steps 1) and 2) until the change of every center between two consecutive iterations is smaller than a threshold;

D(x_i, C_k) = sqrt( d_g²(x_i, C_k) + [d_s(x_i, C_k) / s]² · m² )    (2)

where d_g(x_i, C_k) is the Euclidean distance between the spectral gradient parts of x_i and C_k, d_s(x_i, C_k) is the Euclidean distance between their spatial positions, and m is the weight coefficient between the two distances.
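A hedged sketch of the distance of formula (2); the function and argument names are illustrative, and the full SLIC loop (center initialization, label updates, convergence test) described above would call it inside the 2s×2s neighborhood search:

```python
import numpy as np

def slic_distance(x_grad, x_pos, c_grad, c_pos, s, m):
    """Distance D(x_i, C_k) of formula (2).

    x_grad, c_grad : spectral gradient vectors of the pixel and the center
    x_pos,  c_pos  : (row, col) coordinates of the pixel and the center
    s              : expected superpixel side length
    m              : weight between spectral and spatial distances
    """
    d_g = np.linalg.norm(np.asarray(x_grad, float) - np.asarray(c_grad, float))
    d_s = np.linalg.norm(np.asarray(x_pos, float) - np.asarray(c_pos, float))
    return np.sqrt(d_g ** 2 + (d_s / s) ** 2 * m ** 2)
```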

The SLIC superpixel generation algorithm above divides the spectral gradient image into many segments of relatively small spatial size. Superpixel segmentation is an over-segmentation: objects in real scenes are often broken into a great many small fragments, which makes it hard for a salient object detection algorithm to produce uniform results inside an object. The superpixels are therefore clustered to obtain higher-level visual features and improve the internal uniformity of the detection results. However, since the samples to be clustered in the present invention are the superpixel centers, which contain both spectral features and spatial coordinates, the present invention replaces the density function of the mean-shift algorithm (Mean-Shift) with a dual-kernel function to carry out the computations of the mean-shift process; its specific form is

K_{T_s,T_g}(x) = (δ / (T_s² · T_g²)) · k(‖x_s / T_s‖²) · k(‖x_g / T_g‖²)    (3)

where x_g is the spectral gradient vector of pixel x, x_s is the spatial coordinate of pixel x, T_g is the kernel bandwidth for the spectral feature, T_s is the kernel bandwidth for the spatial coordinate, and δ is a normalization coefficient.
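A minimal sketch of the dual-kernel of formula (3), evaluated for a pixel relative to a candidate center as it would be used inside one mean-shift iteration; the Gaussian profile k(u) = exp(−u/2) is an assumption, since the patent does not fix the kernel profile k:

```python
import numpy as np

def dual_kernel(x_pos, x_grad, center_pos, center_grad, T_s, T_g, delta=1.0):
    """Dual-kernel weight of formula (3), applied to the offset x - center."""
    def k(u):
        # Kernel profile; a Gaussian profile is assumed here.
        return np.exp(-0.5 * u)

    u_s = np.sum(((np.asarray(x_pos, float) - np.asarray(center_pos, float)) / T_s) ** 2)
    u_g = np.sum(((np.asarray(x_grad, float) - np.asarray(center_grad, float)) / T_g) ** 2)
    return delta / (T_s ** 2 * T_g ** 2) * k(u_s) * k(u_g)
```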

The specific steps of the Mean-Shift algorithm are as follows (a simplified sketch of the whole clustering step is given after the listing):

Input: superpixel center vectors C = {C_1, C_2, …, C_k, …, C_n}, spectral threshold T_g, spatial threshold T_s;

Output: label vector l_sp for the input superpixel centers;

1) Run the mean-shift process with the superpixel vector C_k as the initial center, and record the resulting candidate center C′_j;

2) For every sample appearing on the path that forms C′_j, add 1 to its vote count for C′_j;

3) Traverse the current candidate center set C′ and look for a preferred center C′_i whose spectral gradient distance to C′_j is smaller than T_g/2 and whose spatial distance is smaller than T_s/2;

4) If such a C′_i exists, merge the vote counts of C′_i and C′_j, add the mean of C′_i and C′_j to C′, and delete C′_i; otherwise go to step 5);

5) Repeat steps 1) to 4) for every superpixel center to obtain the final cluster centers;

6) For each superpixel center, assign it to the cluster center C′_m that received the most votes, which yields l_sp.
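A simplified, self-contained sketch of this clustering step (Gaussian kernel profile assumed; the per-sample vote bookkeeping of steps 2) and 6) is replaced by assigning each superpixel to the mode its center converges to, so this is an approximation of the procedure rather than a literal transcription):

```python
import numpy as np

def cluster_superpixel_centers(centers_grad, centers_pos, T_g, T_s,
                               n_iter=50, tol=1e-3):
    """Mean-shift style clustering of superpixel centers.

    centers_grad : array (n, d) of the spectral gradient parts of the centers
    centers_pos  : array (n, 2) of their spatial coordinates
    Returns one integer cluster label per superpixel center.
    """
    modes_g = centers_grad.astype(float).copy()
    modes_p = centers_pos.astype(float).copy()
    n = len(modes_g)

    for _ in range(n_iter):                              # mode seeking
        new_g, new_p = np.empty_like(modes_g), np.empty_like(modes_p)
        for i in range(n):
            dg = np.sum(((centers_grad - modes_g[i]) / T_g) ** 2, axis=1)
            ds = np.sum(((centers_pos - modes_p[i]) / T_s) ** 2, axis=1)
            w = np.exp(-0.5 * (dg + ds))                 # dual-kernel weights (Gaussian profile)
            w /= w.sum()
            new_g[i], new_p[i] = w @ centers_grad, w @ centers_pos
        shift = max(np.abs(new_g - modes_g).max(), np.abs(new_p - modes_p).max())
        modes_g, modes_p = new_g, new_p
        if shift < tol:
            break

    # merge modes closer than T_g/2 spectrally and T_s/2 spatially
    labels = -np.ones(n, dtype=int)
    reps = []                                            # one representative mode per cluster
    for i in range(n):
        for k, (rg, rp) in enumerate(reps):
            if (np.linalg.norm(modes_g[i] - rg) < T_g / 2 and
                    np.linalg.norm(modes_p[i] - rp) < T_s / 2):
                labels[i] = k
                break
        if labels[i] == -1:
            reps.append((modes_g[i], modes_p[i]))
            labels[i] = len(reps) - 1
    return labels
```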

Step 3: Establish the salient object detection model based on the image hierarchy.

After the above steps are completed, considering the spatial relationships among segmentation regions at a single segmentation level alone cannot adequately describe their connection with the semantic regions to which they belong, so processing tasks related to semantic image understanding, such as saliency detection, can hardly obtain good results. In fact, the multiple segmentation levels of an image together constitute its hierarchical structure, and exploiting this structure effectively strengthens a processing method's use of the image's semantic information.

The spectral threshold T_g, which controls the spectral similarity of samples in the previous step, and the spatial threshold T_s, which controls the neighborhood range, are set to 0.1, 0.2, 0.3, 0.4 times max{r, c} and to 10, 20, 25, 30 respectively. After the bottom-layer superpixels are clustered at these different granularities, clustering results on four levels are produced in total, forming a four-layer image hierarchy. The set of superpixel blocks serving as bottom-layer nodes is denoted as a whole, with its number of superpixels recorded; in addition, the j-th segmentation region of the i-th layer is abstracted as a node N̂_{i,j}. Examining the final saliency of a bottom-layer superpixel B under the hierarchy of h layers, the corresponding saliency detection model can be expressed as

s(B) = Σ_{i=1}^{h} ω_i · s(N̂_{i, f_i(B)})    (4)

where f_i(·) returns the indices of the nodes of the i-th layer that contain B, ω_i is the weight of the layer containing node N̂_{i,j}, and s(N̂_{i,j}) is the saliency value of node N̂_{i,j}.
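A minimal sketch of how the per-layer node saliencies could be aggregated into superpixel saliency under this model; the data layout (a label array per layer mapping each bottom-layer superpixel to its node, per-layer node saliency arrays, and the layer weights ω_i) is an assumption made for illustration:

```python
import numpy as np

def aggregate_hierarchy_saliency(layer_labels, layer_node_saliency, layer_weights):
    """Weighted sum over the hierarchy, as in the detection model of formula (4).

    layer_labels        : list of int arrays; layer_labels[i][k] is the node index
                          of superpixel k in layer i
    layer_node_saliency : list of float arrays; saliency of every node per layer
    layer_weights       : iterable of layer weights ω_i
    Returns one saliency value per bottom-layer superpixel.
    """
    n_superpixels = len(layer_labels[0])
    s = np.zeros(n_superpixels)
    for labels, node_sal, w in zip(layer_labels, layer_node_saliency, layer_weights):
        s += w * node_sal[labels]        # ω_i · s(N̂_{i, f_i(B)}) for every superpixel
    return s
```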

Step 4: Computation of the position prior, the background prior, and the edge-feature saliency.

Considering that the human eye tends to pay more attention to the center of a scene than to the surrounding areas, the present invention introduces a prior based on the position of the segmentation regions together with a background prior. The background prior likewise assumes that salient regions tend to lie closer to the image center, so regions distributed near the image border are usually more likely to be background, and a penalty is imposed on these border regions to suppress their response to the detection method. The present invention uses the SLIC and Mean-Shift algorithms to construct an image hierarchy on a spectral gradient image and gives a salient object detection model under that hierarchy; it further introduces an edge-feature-based saliency computation and a background prior to improve the model's ability to describe region saliency. The position prior, background prior, and edge-feature saliency computations are described in detail below.

1. Position prior and background prior.

Considering that the human eye tends to pay more attention to the center of a scene than to the surrounding areas, the mathematical expression of the position prior is as follows:

p(R_i) = (1 / ω(R_i)) · Σ_{x_k ∈ R_i} exp(−2 · d_{x_k}²)    (5)

where d_{x_k} is the Euclidean distance from pixel x_k in region R_i to the image center.
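A sketch of the position prior of formula (5), assuming ω(R_i) is the pixel count of the region and that the distances d_{x_k} are normalized by half the image diagonal before the exponential; both assumptions are illustrative, as the patent does not spell them out:

```python
import numpy as np

def position_prior(region_coords, image_shape):
    """Position prior p(R_i) of formula (5) for one region.

    region_coords : array of (row, col) pixel coordinates of the region, shape (N, 2)
    image_shape   : (rows, cols) of the image
    """
    rows, cols = image_shape
    center = np.array([(rows - 1) / 2.0, (cols - 1) / 2.0])
    half_diag = np.linalg.norm(center)                       # normalization (assumption)
    d = np.linalg.norm(np.asarray(region_coords, float) - center, axis=1) / half_diag
    return np.mean(np.exp(-2.0 * d ** 2))                    # 1/ω(R_i) taken as 1/|R_i|
```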

In the present invention, the square ring formed by all pixels within 10 pixels of the image border is selected as the boundary region R_b of the image. In order to suppress segmentation regions that intersect R_b, a "contact penalty" is used to achieve this goal. For a node N̂_{i,j} of the image hierarchy, the following three rules should be followed when computing its background prior:

1) If N̂_{i,j} intersects the boundary region R_b, a penalty function κ(N̂_{i,j}) is applied to it;

2) otherwise, if N̂_{i,j} does not intersect R_b, the penalty function κ(N̂_{i,j}) is zero;

3) the larger the intersection |N̂_{i,j} ∩ R_b|, the heavier the penalty should be, i.e., the larger the absolute value of κ(N̂_{i,j});

The three rules above define the boundary conditions and influencing factors when computing the background prior with the contact penalty κ(N̂_{i,j}). Within these rules, different forms of the penalty function yield different concrete computation methods. The penalty function is defined as

κ(N̂_{i,j}) = ξ · |N̂_{i,j} ∩ R_b|    (6)

where ξ is the penalty factor carried by each pixel in the boundary region.
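A minimal sketch of the contact penalty of formula (6), assuming the node and the boundary region R_b are given as boolean masks and that ξ is negative so that a larger overlap suppresses the region more strongly (the sign of ξ is an assumption):

```python
import numpy as np

def border_region_mask(image_shape, width=10):
    """Square ring R_b of all pixels within `width` pixels of the image border."""
    rows, cols = image_shape
    mask = np.zeros((rows, cols), dtype=bool)
    mask[:width, :] = mask[-width:, :] = True
    mask[:, :width] = mask[:, -width:] = True
    return mask

def contact_penalty(node_mask, border_mask, xi=-0.01):
    """Background-prior penalty κ(N̂_{i,j}) = ξ · |N̂_{i,j} ∩ R_b| (formula (6)).

    node_mask, border_mask : boolean arrays of the image size
    xi                     : per-pixel penalty factor (assumed negative here)
    """
    overlap = np.logical_and(node_mask, border_mask).sum()
    return xi * overlap
```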

2. Edge-feature saliency.

The human visual system is sensitive to image edges, and visual attention is easily drawn to image regions with pronounced edge features, mainly because image edges usually lie where the pixel gray levels change sharply. The present invention therefore not only uses the region contrast method based on spectral gradient features but also introduces edge features in the spatial dimension of the hyperspectral image to further improve the results. The mean of all bands of the hyperspectral image is taken as its corresponding grayscale image I_hsi, and the Canny detector is used to obtain the edge features. The region saliency is computed from the edge features in the spatial dimension of the hyperspectral image as follows (a sketch of this procedure is given after the listing):

Input: average grayscale image I_hsi of the hyperspectral image, hierarchy node N̂_{i,j} and the segmentation result map I_seg it belongs to;

Output: the edge-feature saliency of N̂_{i,j};

1) Extract edges from I_hsi with the Canny detector and record the resulting edge map;

2) Filter I_seg with a 3×3 Gaussian filter of variance 1.5 so that the region boundaries are widened;

3) Compute the gradient magnitude image of the filtered I_seg and binarize it to obtain the boundary extraction result;

4) Accumulate the image edges that lie on the boundary of the segmentation region N̂_{i,j} to obtain its edge-feature saliency, where the boundary of N̂_{i,j} is taken from the boundary extraction result and the accumulation sums the Canny edge features located on that boundary.
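A hedged sketch of this procedure using OpenCV; the Canny thresholds, the binarization threshold for the boundary map, and the function and argument names are assumptions not fixed by the patent:

```python
import cv2
import numpy as np

def edge_feature_saliency(I_hsi, I_seg, node_label):
    """Edge-feature saliency of one hierarchy node.

    I_hsi      : average grayscale image of the hyperspectral cube (uint8)
    I_seg      : integer label map of the segmentation level the node belongs to
    node_label : label value of the node N̂_{i,j} in I_seg
    """
    edges = cv2.Canny(I_hsi, 50, 150)                        # step 1 (thresholds assumed)
    sigma = 1.5 ** 0.5                                       # variance 1.5 -> sigma = sqrt(1.5)
    seg_f = cv2.GaussianBlur(I_seg.astype(np.float32), (3, 3), sigma)   # step 2: widen borders
    gx = cv2.Sobel(seg_f, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(seg_f, cv2.CV_32F, 0, 1)
    boundary = cv2.magnitude(gx, gy) > 1e-3                  # step 3 (binarization threshold assumed)

    node_mask = (I_seg == node_label).astype(np.uint8)
    near_node = cv2.dilate(node_mask, np.ones((3, 3), np.uint8)).astype(bool)
    node_boundary = boundary & near_node                     # boundary pixels touching this node
    return float((edges > 0)[node_boundary].sum())           # step 4: accumulate edges on the boundary
```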

Step 5: Compute the saliency map.

When computing the saliency map, the main task is to determine the saliency computation for each node of the hierarchy, i.e., for the segmentation regions N̂_{i,j} at each level. As stated above, in the method of the present invention it consists of four parts: spectral gradient region contrast, edge-feature saliency, the position prior, and the background prior. The position prior is computed as

p(N̂_{i,j}) = (1 / |N̂_{i,j}|) · Σ_{x_k ∈ N̂_{i,j}} exp(−2 · d_{x_k}²)    (8)

When applying the priors, the position prior weights distances with an exponential function and therefore enhances the central area rather strongly, which is not entirely beneficial to detection performance; it is consequently applied only to the part based on spectral-feature region contrast, not to the part based on edge features. The background prior, on the other hand, suppresses the background well because the selected image border is narrow, so it is applied to the computations based on both kinds of features. The final saliency formula of the present invention is then obtained by combining these terms, with a weight coefficient balancing the contributions of the two feature-based terms.
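One plausible way to combine the four parts, consistent with the description above (the position prior multiplies only the contrast term, the edge term carries the weight coefficient, and the background penalty is added to the result); the exact published formula is not reproduced in this text, so this combination is an assumption rather than the patented formula:

```python
def node_saliency(contrast, edge_sal, pos_prior, penalty, lam=0.5):
    """Illustrative combination of the four terms for one node N̂_{i,j}.

    contrast  : spectral-gradient region contrast of the node
    edge_sal  : edge-feature saliency of the node
    pos_prior : position prior p(N̂_{i,j}), applied only to the contrast term
    penalty   : background-prior penalty κ(N̂_{i,j}), added to the combined result
    lam       : weight coefficient between the two feature-based terms (assumed value)
    """
    return pos_prior * contrast + lam * edge_sal + penalty
```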

Claims (1)

1. A hyperspectral image salient object detection method based on spectral gradient and hierarchical structure, characterized by comprising the following steps:

Step 1: Generate the spectral gradient image;

compute the spectral gradient for each pixel to generate the spectral gradient image, so that the extracted spectral gradient feature vectors maintain the spatial relationships of the original image:

g_i^{(j)} = (1/Δλ) · (y_i^{(j)} − y_i^{(j−1)})    (1)

where g_i^{(j)} is the j-th component of the spectral gradient vector, y_i^{(j)} is the j-th component of the original spectral vector, and Δλ is the wavelength difference between adjacent bands;

applying formula (1) to the spectral vector of every pixel of a hyperspectral data block D yields a new spectral gradient data block X;

Step 2: Generate the image segmentation regions;

apply the simple linear iterative clustering algorithm to the spectral gradient data block X, with the following specific steps:

Input: spectral gradient image X, expected superpixel side length s, weight coefficient m;

Output: segmentation image with each superpixel labeled;

1. Initialization:

1) initialize a set of initial cluster centers C on the gradient image X at intervals of s;

2) move each center to the position of the gradient minimum within its 3×3 neighborhood;

3) for every pixel, set the label l_i = −1 and the distance to its current center d_i = +∞;

2. Iteratively update pixel labels and cluster centers:

1) for the current cluster center C_k, compute the distance D(x_i, C_k) from each pixel x_i in a square neighborhood of side 2s to C_k according to formula (2);

2) if D(x_i, C_k) < d_i, set the label of x_i to l_i = k and update d_i = D(x_i, C_k);

3) repeat steps 1) and 2) until the change of every center between two consecutive iterations is smaller than a threshold;

D(x_i, C_k) = sqrt( d_g²(x_i, C_k) + [d_s(x_i, C_k) / s]² · m² )    (2)

where d_g(x_i, C_k) is the Euclidean distance between the spectral gradient parts of x_i and C_k, d_s(x_i, C_k) is the Euclidean distance between their spatial positions, and m is the weight coefficient between the two distances;

a dual-kernel function replaces the density function of the mean-shift algorithm to carry out the computations of the mean-shift process; its specific form is

K_{T_s,T_g}(x) = (δ / (T_s² · T_g²)) · k(‖x_s / T_s‖²) · k(‖x_g / T_g‖²)    (3)

where x_g is the spectral gradient vector of pixel x, x_s is the spatial coordinate of pixel x, T_g is the kernel bandwidth for the spectral feature, T_s is the kernel bandwidth for the spatial coordinate, and δ is a normalization coefficient;

the specific steps of the mean-shift algorithm are as follows:

Input: superpixel center vectors C = {C_1, C_2, …, C_k, …, C_n}, spectral threshold T_g, spatial threshold T_s;

Output: label vector l_sp for the input superpixel centers;

1) run the mean-shift process with the superpixel vector C_k as the initial center, and record the resulting candidate center C′_j;

2) for every sample appearing on the path that forms C′_j, add 1 to its vote count for C′_j;

3) traverse the current candidate center set C′ and look for a preferred center C′_i whose spectral gradient distance to C′_j is smaller than T_g/2 and whose spatial distance is smaller than T_s/2;

4) if such a C′_i exists, merge the vote counts of C′_i and C′_j, add the mean of C′_i and C′_j to C′, and delete C′_i; otherwise go to step 5);

5) repeat steps 1) to 4) for every superpixel center to obtain the final cluster centers;

6) for each superpixel center, assign it to the cluster center C′_m that received the most votes, which yields l_sp;

Step 3: Establish the salient object detection model based on the image hierarchy;

the spectral threshold T_g, which controls the spectral similarity of samples in step 2, and the spatial threshold T_s, which controls the neighborhood range, are set to 0.1, 0.2, 0.3, 0.4 times max{r, c} and to 10, 20, 25, 30 respectively; after the bottom-layer superpixels are clustered at these different granularities, clustering results on four levels are produced in total, forming a four-layer image hierarchy; the set of superpixel blocks serving as bottom-layer nodes is denoted as a whole, with its number of superpixels recorded; in addition, the j-th segmentation region of the i-th layer is abstracted as a node N̂_{i,j}; examining the final saliency of a bottom-layer superpixel B under the hierarchy of h layers, the corresponding saliency detection model is expressed as

s(B) = Σ_{i=1}^{h} ω_i · s(N̂_{i, f_i(B)})    (4)

where f_i(·) returns the indices of the nodes of the i-th layer that contain B, ω_i is the weight of the layer containing node N̂_{i,j}, and s(N̂_{i,j}) is the saliency value of node N̂_{i,j};

Step 4: Computation of the position prior, the background prior, and the edge-feature saliency;

1. Position prior and background prior;

the mathematical expression of the position prior is as follows:

p(R_i) = (1 / ω(R_i)) · Σ_{x_k ∈ R_i} exp(−2 · d_{x_k}²)    (5)

where d_{x_k} is the Euclidean distance from pixel x_k in region R_i to the image center;

the square ring formed by all pixels within 10 pixels of the image border is selected as the boundary region R_b of the image; for a node N̂_{i,j} of the image hierarchy, the following three rules are followed when computing its background prior:

1) if N̂_{i,j} intersects the boundary region R_b, a penalty function κ(N̂_{i,j}) is applied to it;

2) otherwise, if N̂_{i,j} does not intersect R_b, the penalty function κ(N̂_{i,j}) is zero;

3) the larger the intersection |N̂_{i,j} ∩ R_b|, the heavier the penalty should be, i.e., the larger the absolute value of κ(N̂_{i,j});

the three rules above define the boundary conditions and influencing factors when computing the background prior with the contact penalty κ(N̂_{i,j}); within these rules, different forms of the penalty function yield different concrete computation methods; the penalty function is defined as

κ(N̂_{i,j}) = ξ · |N̂_{i,j} ∩ R_b|    (6)

where ξ is the penalty factor carried by each pixel in the boundary region;

2. Edge-feature saliency;

the mean of all bands of the hyperspectral image is taken as its corresponding grayscale image I_hsi, and the Canny detector is used to obtain the edge features; the region saliency is computed from the edge features in the spatial dimension of the hyperspectral image as follows:

Input: average grayscale image I_hsi of the hyperspectral image, hierarchy node N̂_{i,j} and the segmentation result map I_seg it belongs to;

Output: the edge-feature saliency of N̂_{i,j};

1) extract edges from I_hsi with the Canny detector and record the resulting edge map;

2) filter I_seg with a 3×3 Gaussian filter of variance 1.5 so that the region boundaries are widened;

3) compute the gradient magnitude image of the filtered I_seg and binarize it to obtain the boundary extraction result;

4) accumulate the image edges that lie on the boundary of the segmentation region N̂_{i,j} to obtain its edge-feature saliency, where the boundary of N̂_{i,j} is taken from the boundary extraction result and the accumulation sums the Canny edge features located on that boundary;

Step 5: Compute the saliency map;

when computing the saliency map, the saliency computation for each node of the hierarchy, i.e., for the segmentation regions N̂_{i,j} at each level, must be determined; it consists of four parts: spectral gradient region contrast, edge-feature saliency, the position prior, and the background prior; the position prior is computed as

p(N̂_{i,j}) = (1 / |N̂_{i,j}|) · Σ_{x_k ∈ N̂_{i,j}} exp(−2 · d_{x_k}²)    (8)

when applying the priors, only the part based on spectral-feature region contrast is enhanced and the part based on edge features is not operated on; the background prior suppresses the background well because the selected image border is narrow, so it is applied to the computations based on both kinds of features; the final saliency formula is then obtained by combining these terms, with a weight coefficient balancing the contributions of the two feature-based terms.
CN201710442878.4A 2017-06-13 2017-06-13 High spectrum image conspicuousness object detection method based on spectrum gradient and hierarchical structure Active CN107274416B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710442878.4A CN107274416B (en) 2017-06-13 2017-06-13 High spectrum image conspicuousness object detection method based on spectrum gradient and hierarchical structure

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710442878.4A CN107274416B (en) 2017-06-13 2017-06-13 High spectrum image conspicuousness object detection method based on spectrum gradient and hierarchical structure

Publications (2)

Publication Number Publication Date
CN107274416A true CN107274416A (en) 2017-10-20
CN107274416B CN107274416B (en) 2019-11-01

Family

ID=60066918

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710442878.4A Active CN107274416B (en) 2017-06-13 2017-06-13 High spectrum image conspicuousness object detection method based on spectrum gradient and hierarchical structure

Country Status (1)

Country Link
CN (1) CN107274416B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105469392A (en) * 2015-11-18 2016-04-06 西北工业大学 High spectral image significance detection method based on regional spectrum gradient characteristic comparison
CN105913023A (en) * 2016-04-12 2016-08-31 西北工业大学 Cooperated detecting method for ice of The Yellow River based on multispectral image and SAR image
CN106503739A (en) * 2016-10-31 2017-03-15 中国地质大学(武汉) The target in hyperspectral remotely sensed image svm classifier method and system of combined spectral and textural characteristics

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HANGQI YAN ET AL.: "SALIENT OBJECT DETECTION IN HYPERSPECTRAL IMAGERY USING SPECTRAL GRADIENT CONTRAST", 《IGARSS 2016》 *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108241854A (en) * 2018-01-02 2018-07-03 天津大学 A Deep Video Saliency Detection Method Based on Motion and Memory Information
CN108241854B (en) * 2018-01-02 2021-11-09 天津大学 Depth video saliency detection method based on motion and memory information
CN108416746A (en) * 2018-02-07 2018-08-17 西北大学 Pattern enhancement method of painted cultural relics based on hyperspectral image dimensionality reduction and fusion
CN108416746B (en) * 2018-02-07 2023-04-18 西北大学 Colored drawing cultural relic pattern enhancement method based on dimension reduction and fusion of hyperspectral images
CN108427931A (en) * 2018-03-21 2018-08-21 合肥工业大学 The detection method of barrier before a kind of mine locomotive based on machine vision
CN108427931B (en) * 2018-03-21 2019-09-10 合肥工业大学 The detection method of barrier before a kind of mine locomotive based on machine vision
CN109063537B (en) * 2018-06-06 2021-08-17 北京理工大学 A hyperspectral image preprocessing method for unmixing of abnormal small objects
CN109063537A (en) * 2018-06-06 2018-12-21 北京理工大学 The high spectrum image preprocess method mixed for abnormal Small object solution
CN109191482A (en) * 2018-10-18 2019-01-11 北京理工大学 A kind of image combination and segmentation method based on region adaptivity spectral modeling threshold value
CN109191482B (en) * 2018-10-18 2021-09-21 北京理工大学 Image merging and segmenting method based on regional adaptive spectral angle threshold
CN109559364A (en) * 2018-11-27 2019-04-02 东南大学 A kind of figure building method based on smoothness constraint
CN109559364B (en) * 2018-11-27 2023-05-30 东南大学 A Graph Construction Method Based on Smoothness Constraint
CN109829480A (en) * 2019-01-04 2019-05-31 广西大学 The method and system of the detection of body surface bloom feature and material classification
CN109975794A (en) * 2019-03-29 2019-07-05 江西理工大学 A method of intelligent manufacturing system detection and control are carried out using high light spectrum image-forming ranging model
CN109975794B (en) * 2019-03-29 2022-12-09 江西理工大学 A method for detection and control of intelligent manufacturing system using hyperspectral imaging ranging model
WO2021027193A1 (en) * 2019-08-12 2021-02-18 佳都新太科技股份有限公司 Face clustering method and apparatus, device and storage medium
CN111160300A (en) * 2019-12-31 2020-05-15 北京理工大学重庆创新中心 Deep learning hyperspectral image saliency detection algorithm combined with global prior
CN111160300B (en) * 2019-12-31 2022-06-28 北京理工大学重庆创新中心 A deep learning hyperspectral image saliency detection algorithm combined with global priors
CN111832630A (en) * 2020-06-23 2020-10-27 成都恒创新星科技有限公司 Target detection method based on first-order gradient neural network
CN112070098A (en) * 2020-08-20 2020-12-11 西安理工大学 Hyperspectral image salient target detection method based on frequency adjustment model
CN112070098B (en) * 2020-08-20 2024-02-09 西安理工大学 Hyperspectral image salient target detection method based on frequency adjustment model

Also Published As

Publication number Publication date
CN107274416B (en) 2019-11-01

Similar Documents

Publication Publication Date Title
CN107274416B (en) High spectrum image conspicuousness object detection method based on spectrum gradient and hierarchical structure
Henry et al. Road segmentation in SAR satellite images with deep fully convolutional neural networks
CN108549891B (en) Multi-scale diffusion well-marked target detection method based on background Yu target priori
Lu et al. Multi-scale strip pooling feature aggregation network for cloud and cloud shadow segmentation
Wang et al. Meta-learning based hyperspectral target detection using Siamese network
Cui et al. Superpixel-based extended random walker for hyperspectral image classification
CN113239830B (en) A remote sensing image cloud detection method based on full-scale feature fusion
CN112101271A (en) Hyperspectral remote sensing image classification method and device
Asokan et al. Machine learning based image processing techniques for satellite image analysis-a survey
Zhang et al. Salient object detection in hyperspectral imagery using multi-scale spectral-spatial gradient
Imani Anomaly detection using morphology-based collaborative representation in hyperspectral imagery
CN109829426B (en) Railway construction temporary building monitoring method and system based on high-resolution remote sensing image
Li et al. Unsupervised road extraction via a Gaussian mixture model with object-based features
Yuan et al. Efficient cloud detection in remote sensing images using edge-aware segmentation network and easy-to-hard training strategy
Guo et al. Dual graph U-Nets for hyperspectral image classification
Shang et al. Region-level SAR image segmentation based on edge feature and label assistance
Venugopal Sample selection based change detection with dilated network learning in remote sensing images
Bagwari et al. A comprehensive review on segmentation techniques for satellite images
CN109360191B (en) An image saliency detection method based on variational autoencoder
Jia et al. Shearlet-based structure-aware filtering for hyperspectral and LiDAR data classification
Manaf et al. Hybridization of SLIC and Extra Tree for Object Based Image Analysis in Extracting Shoreline from Medium Resolution Satellite Images.
Fengping et al. Road extraction using modified dark channel prior and neighborhood FCM in foggy aerial images
Fan et al. ResAt-UNet: a U-shaped network using ResNet and attention module for image segmentation of urban buildings
Ju et al. A novel fully convolutional network based on marker-controlled watershed segmentation algorithm for industrial soot robot target segmentation
CN110910497B (en) Method and system for realizing augmented reality map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant