CN104966085A - Remote sensing image region-of-interest detection method based on multi-significant-feature fusion - Google Patents

Info

Publication number: CN104966085A (application CN201510331174.0A; granted as CN104966085B)
Authority: CN (China)
Prior art keywords: color, saliency, image, remote sensing, color channel
Legal status: Granted; Expired - Fee Related
Inventors: 张立保, 吕欣然, 王士一
Assignee (original and current): Beijing Normal University
Application filed by Beijing Normal University

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]


Abstract

The invention discloses a method for detecting regions of interest (ROIs) in remote sensing images based on the fusion of multiple saliency features, belonging to the technical fields of remote sensing image processing and image recognition. The implementation process is: 1) obtain the color channels of a group of input remote sensing images and compute the color histogram of each channel; 2) compute the normalized saliency weight of each color channel from these histograms; 3) compute the information-content saliency feature map; 4) convert the group of input remote sensing images from the RGB color space to the CIE Lab color space; 5) obtain clusters with a clustering algorithm; 6) compute the saliency value of each cluster to obtain the shared saliency feature maps; 7) fuse the information-content saliency feature map with the shared saliency feature map to obtain the final saliency map; 8) extract the ROI by threshold segmentation using the maximum between-class variance (Otsu) method. Compared with traditional methods, the invention achieves accurate detection of ROIs in remote sensing images without a prior knowledge base, and can be widely applied in fields such as environmental monitoring, land use, and agricultural surveys.

Description

A Method for Detecting Regions of Interest in Remote Sensing Images Based on the Fusion of Multiple Salient Features

Technical Field

The invention belongs to the technical fields of remote sensing image processing and image recognition, and in particular relates to a method for detecting regions of interest in remote sensing images based on the fusion of multiple salient features.

Background

With the rapid development of remote sensing technology, the data volume of remote sensing imagery is expanding rapidly. Extracting regions of interest (ROIs) can reduce the complexity of remote sensing image analysis and processing, so ROI extraction in remote sensing images has recently attracted considerable attention, and how to detect ROIs accurately and quickly has become one of the problems to be solved urgently. An effective solution would help alleviate the contradiction between the high speed at which remote sensing images are acquired and the low speed at which they are interpreted, and would have significant practical value in related fields such as land use, disaster assessment, urban planning, and environmental monitoring.

Most traditional ROI detection methods for remote sensing images are global and require prior knowledge. However, building a prior knowledge base is itself a complicated problem that must jointly consider expert knowledge, the characteristics of target regions, and the characteristics of background regions. Some methods require training on psychophysical data of color appearance and eye movements; others rely on digital maps of the same area to detect and classify ROIs in remote sensing imagery. All of these algorithms require a prior knowledge base and have high computational complexity.

The visual attention model offers a new perspective on ROI detection in remote sensing images. Unlike traditional detection methods, it is entirely data-driven, does not depend on external factors such as knowledge bases, and offers fast recognition with accurate results. Visual attention models have therefore received increasing attention, and introducing them into ROI detection for remote sensing images is of great significance.

Among visual attention models based on low-level visual features, Itti et al. proposed the Itti visual attention method in the article "A Model of Saliency-Based Visual Attention for Rapid Scene Analysis"; this model approximates the human visual system and produces a saliency map from various visual features. Among models based on mathematical methods, Harel et al. proposed graph-based visual saliency (GBVS) in the article "Graph-Based Visual Saliency"; it uses the traditional Itti model to simulate the visual attention mechanism for feature extraction, represents pixel associations between images with a graph structure, and finally applies Markov chains to compute the saliency map. Among attention models based on frequency-domain analysis, Achanta et al. proposed the frequency-tuned (FT) method for salient region detection in the article "Frequency-tuned Salient Region Detection"; it converts the input RGB image to the CIE Lab color space, applies Gaussian smoothing, subtracts the arithmetic mean of the image feature vector, and computes the magnitude at each point to obtain a uniform saliency map with clear boundaries.

Visual attention models based on low-level visual features simulate human visual attention well, but they do not fully consider the frequency-domain characteristics of an image, and their slow computation and low efficiency make real-time application difficult. Models based on frequency-domain analysis are simple in form and easy to explain and implement, but when the salient region occupies too large a proportion of the image, or the background is too complex, the resulting saliency map mislabels parts of the background as salient, and the biological plausibility of these models is not very clear. In recent years, scholars at home and abroad have also proposed new algorithms applying visual saliency to ROI detection in remote sensing images. For example, Zhang et al., in the article "Fast Detection of Visual Saliency Regions in Remote Sensing Image based on Region Growing", reduce the image resolution with a wavelet transform and introduce a two-dimensional discrete moment transform into the visual features to generate a saliency map. These algorithms share a common shortcoming: they can only extract salient regions and cannot distinguish among them. For a group of remote sensing images with similar ROIs, exploiting their similarity makes it possible to exclude other regions that interfere with ROI detection.

For computing the ROI mask, traditional methods often describe the ROI with a circle of fixed radius, which introduces a large amount of redundant information when identifying arbitrary regions; using a single fixed threshold is very fast, but it fragments the ROI into many small pieces and describes the region inaccurately. The maximum between-class variance method (Otsu's method) is an automatic, non-parametric, unsupervised threshold selection technique; it adaptively computes a single threshold in a simple and efficient way, and has the advantages of simple computation and strong adaptivity.

Summary of the Invention

The object of the present invention is to provide a method for detecting regions of interest in remote sensing images based on the fusion of multiple salient features, used to detect ROIs in remote sensing images accurately. Existing ROI detection methods are mainly global and require prior knowledge, but building a prior knowledge base is itself a complicated problem that must jointly consider expert knowledge, target region characteristics, and background region characteristics. The method of the present invention therefore focuses on two aspects:

1) no global search or prior knowledge base is needed;

2) the detection accuracy of ROIs in remote sensing images is improved, yielding more accurate ROI information.

The technical solution of the present invention comprises five main processes: generating the information-content saliency feature map of each remote sensing image, generating the shared saliency feature maps, generating the final saliency maps, generating the ROI template, and generating the ROI. It specifically includes the following steps:

Step 1: Compute the color histograms. Input a group of remote sensing images of size M×N and extract each color channel of every image. Let f_c(x, y) denote the color intensity at position (x, y) in color channel c, and construct the intensity histogram H_c(i) of each remote sensing image for each color channel, where M is the length of the image, N is its width, x and y are the image coordinates with x = 1, 2, ..., M and y = 1, 2, ..., N, c = 1, 2, 3 indexes the color channels, and i = 0, 1, ..., 255 is the pixel intensity value;

Step 2: Compute the normalized saliency weight of color channel c. From the color histogram H_c(i) of channel c, compute the information content In_c(i) of each pixel intensity value i and assign it to every pixel whose intensity equals that value. After all computations and assignments, this yields the information map LOG_c(x, y) of channel c. From the information map, compute the saliency h_c of channel c, and from the saliencies of all channels compute the normalized saliency weight w_c of each color channel of each image;

Step 3: Compute the information-content saliency feature map. Using the normalized saliency weights w_c of the color channels, compute a weighted preliminary information-content saliency feature map for each image, then apply Gaussian smoothing to the preliminary map to filter out noise and obtain each image's final information-content saliency feature map;

Step 4: Convert the group of remote sensing images from the RGB color space to the CIE Lab color space. Extract the R, G, and B channel values of every pixel of each image and convert them to the CIE Lab color space, obtaining the three components L, a, and b. In the RGB color space, R denotes red, G green, and B blue. In the CIE Lab color space, L denotes lightness (L = 0 is black, L = 100 is white), a denotes the position between red and green (negative a is green, positive a is red), and b denotes the position between blue and yellow (negative b is blue, positive b is yellow);

Step 5: Cluster the pixels in the CIE Lab color space with the k-means algorithm. Using k-means, cluster the values of all pixels of the group of original remote sensing images mapped into the CIE Lab color space, obtaining k clusters;

Step 6: Compute the shared saliency feature maps. Divide the number of pixels in the j-th cluster by the total number of pixels in the images, and define the result as the weight of the j-th cluster, where j = 1, 2, ..., k. After obtaining the weights of all k clusters, compute each cluster's saliency value from the cluster weights and the distances between clusters, and assign each cluster's saliency value to every pixel belonging to that cluster, thereby obtaining a group of shared saliency feature maps;

Step 7: Compute the final saliency maps. Multiply the information-content saliency feature map obtained from the color channel histograms by the shared saliency feature map obtained by k-means clustering in the CIE Lab color space, yielding the final saliency map after fusing the multiple saliency features;

Step 8: Extract the region of interest. Obtain the segmentation threshold of the final saliency map by the maximum between-class variance (Otsu) method, and use this threshold to segment the final saliency map into a binary image template, where "1" denotes the ROI and "0" the non-ROI. Finally, multiply the binary image template by the original image to obtain the final ROI extraction result.

Brief Description of the Drawings

Fig. 1 is a flowchart of the present invention.

Fig. 2 shows a group of four example remote sensing images used in the present invention.

Fig. 3 shows the feature maps and final saliency maps of the present invention: (a) the information-content saliency feature maps of the example images, (b) the shared saliency feature maps, and (c) the final saliency maps.

Fig. 4 compares the saliency maps generated for the example images by the present method and by other methods: (a) the Itti method, (b) the GBVS method, (c) the FT method, and (d) the method of the present invention.

Fig. 5 compares the regions of interest detected in the example images by the present method and by other methods: (a) the Itti method, (b) the GBVS method, (c) the FT method, and (d) the method of the present invention.

Detailed Description

The present invention is described in further detail below with reference to the accompanying drawings. The overall framework of the invention is shown in Fig. 1; the implementation details of each step are now introduced.

Step 1: Compute the color histograms.

Input a group of remote sensing images of size M×N, as shown in Fig. 2, and obtain each color channel of every image I_p. Let f_c(x, y) denote the color intensity of image I_p at position (x, y) in color channel c, and construct the intensity histogram H_c(i) of the remote sensing image for each color channel, where M is the length of the image and N is its width. The total number of remote sensing images in the group is Q; the group is denoted {I_p}, where I_p is the p-th image, p = 1, 2, ..., Q. Here x and y are the image coordinates, x = 1, 2, ..., M, y = 1, 2, ..., N; c = 1, 2, 3 indexes the color channels; and i = 0, 1, ..., 255 is the pixel intensity value;

The histogram of each color channel of each image in the group is obtained by the following formula:

$$H_c(i) = \frac{1}{M \times N} \sum_{x=1}^{M} \sum_{y=1}^{N} \delta_c(x, y)$$

where δ_c(x, y) is the binarized image of color channel c, computed as:

$$\delta_c(x, y) = \begin{cases} 1, & f_c(x, y) = i \\ 0, & \text{otherwise} \end{cases}$$
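As an illustrative sketch (not part of the patent text), the per-channel histogram H_c(i) of Step 1 can be computed directly with NumPy; the array layout and function name are assumptions:

```python
import numpy as np

def color_histograms(img):
    """Per-channel normalized intensity histograms H_c(i).

    img: uint8 array of shape (M, N, 3). Returns an array of shape (3, 256)
    where H[c, i] is the fraction of pixels in channel c with intensity i,
    i.e. the sum of the binarized image delta_c divided by M*N.
    """
    M, N, _ = img.shape
    H = np.zeros((3, 256))
    for c in range(3):
        counts = np.bincount(img[:, :, c].ravel(), minlength=256)
        H[c] = counts / (M * N)
    return H
```

Each row of the returned array sums to 1, matching the division by M×N in the formula above.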

Step 2: Compute the normalized saliency weight of color channel c.

From the color histogram H_c(i) of color channel c of image I_p, compute the information content In_c(i) of each pixel intensity value i, then use it for computation and assignment to finally obtain the normalized saliency weights w_c of the color channels of image I_p. This is realized in the following four steps:

(1) From the color histogram H_c(i) of color channel c of image I_p, compute the information content In_c(i) of each pixel intensity value in that channel by the following formula:

$$In_c(i) = -\ln(H_c(i))$$

(2) Assign this information content to every pixel in color channel c whose intensity value equals i, giving the information map LOG_c(x, y) of channel c:

$$LOG_c(x, y) = In_c(i), \quad i = f_c(x, y)$$

(3) From the information map LOG_c(x, y) of channel c, compute the saliency h_c as follows:

$$h_c = \frac{\sum_{x=1}^{M} \sum_{y=1}^{N} LOG_c(x, y)}{\sum_{c=1}^{3} \sum_{x=1}^{M} \sum_{y=1}^{N} LOG_c(x, y)}$$

其中有三个颜色通道,则h1表示颜色通道1的显著度,h2表示颜色通道2的显著度,h3表示颜色通道3的显著度;There are three color channels, h 1 represents the salience of color channel 1, h 2 represents the salience of color channel 2, and h 3 represents the salience of color channel 3;

(4) Divide the saliency of channel c by the sum of the saliencies of the three channels and take the negative logarithm of the quotient to obtain the normalized saliency weight w_c of the channel:

$$w_1 = -\log\left(\frac{h_1}{h_1 + h_2 + h_3}\right), \quad w_2 = -\log\left(\frac{h_2}{h_1 + h_2 + h_3}\right), \quad w_3 = -\log\left(\frac{h_3}{h_1 + h_2 + h_3}\right)$$

其中有三个颜色通道,则w1表示颜色通道1的标准化显著权重,w2表示颜色通道2的标准化显著权重,w3表示颜色通道3的标准化显著权重;There are three color channels, w 1 represents the normalized saliency weight of color channel 1, w 2 represents the normalized saliency weight of color channel 2, and w 3 represents the normalized saliency weight of color channel 3;

Step 3: Compute the information-content saliency feature map.

Using the normalized saliency weights w_c of the color channels of image I_p, compute the image's preliminary information-content saliency feature map Smap(x, y) as a weighted sum, then apply Gaussian smoothing to the preliminary map to filter out noise and obtain the final information-content saliency feature map SS(x, y):

$$Smap(x, y) = \sum_{c=1}^{3} w_c \, f_c(x, y)$$

where SS(x, y) is the result of applying a Gaussian smoothing filter to Smap(x, y);

经过以上步骤,获得了遥感图像组中每一幅遥感图像的信息量显著特征图。After the above steps, the remote sensing image group is obtained Informative salient feature map of each remote sensing image.

Step 4: Convert the remote sensing images from the RGB color space to the CIE Lab color space.

Because the color channels of CIE Lab remove lightness information to a certain extent, their content is closer to the essence of color perception and better reflects color smoothness. Given the clear advantage of the CIE Lab space in color uniformity, clustering is performed in the CIE Lab color space. The color space conversion is carried out first:

分别提取遥感图像组中每幅图像每个像素的R、G、B三个颜色通道值,将它们转换至CIE Lab颜色空间,获取每个像素的L、a、b三个分量,在CIE Lab颜色空间的遥感图像组记为RGB颜色空间中R表示red红色,G表示green绿色,B表示blue蓝色,CIE Lab颜色空间的三个通道分别代表亮度L,L=0代表黑色,L=100代表白色,颜色在红/绿之间的位置a,a为负值代表绿色,a为正值代表红色,颜色在蓝/黄之间的位置b,b为负值代表蓝色,b为正值代表黄色;Separately extract remote sensing image groups The R, G, B three color channel values of each pixel in each image are converted to the CIE Lab color space, and the L, a, and b three components of each pixel are obtained. The remote sensing image in the CIE Lab color space group as In the RGB color space, R means red, G means green, and B means blue. The three channels of CIE Lab color space represent brightness L, L=0 represents black, L=100 represents white, and the color is red/green The position a between, a negative value represents green, a positive value represents red, the color is between blue/yellow position b, b negative value represents blue, b positive value represents yellow;

Step 5: Cluster the color features.

Use the k-means clustering algorithm to cluster the pixels in the CIE Lab color space; that is, cluster the values of all pixels of this group of images in the CIE Lab color space to obtain k clusters. The specific steps are as follows:

(1)提取遥感图像组中每幅图像在CIE Lab颜色空间的L、a、b三个通道,调整三个通道中的像素点值的范围,使调整后的三个通道的像素点值的范围相同;(1) Extract remote sensing image group In the three channels of L, a, and b of each image in the CIE Lab color space, adjust the range of the pixel point value in the three channels, so that the range of the pixel point value of the adjusted three channels is the same;

(2) Compute over the three-channel pixel values of all images in the group simultaneously, minimizing the sum of squared distances between each pixel value and its nearest cluster center. All pixels sharing the same nearest cluster center then form one cluster. The sum of squared distances W is computed by the following formula:

$$W = \min\left(\sum_{r=1}^{n} \left| pi_r - a_j \right|^2\right)$$

式中pir表示像素值,其中r=1、2……n,n为图像像素点数,aj表示聚类中心,其中j=1、2……k;In the formula, pi r represents the pixel value, where r=1, 2...n, n is the number of image pixel points, and a j represents the cluster center, where j=1, 2...k;

Step 6: Compute the shared saliency feature maps.

After the weights of all k clusters are computed, each cluster's saliency value is computed from the cluster weights and the distances between clusters, and each cluster's saliency value is assigned to every pixel belonging to that cluster, thereby obtaining a group of shared saliency feature maps. This requires the following three steps:

(1)将第j个簇lj中含有的像素数与图像组总像素数相除,相除的结果定义为第j个簇的权重ω(lj),其中j=1、2……k;(1) Divide the number of pixels contained in the j-th cluster l j by the total number of pixels in the image group, and the result of the division is defined as the weight ω(l j ) of the j-th cluster, where j=1, 2... k;

(2)定义D(lt,lj)为两个簇lt、lj的颜色距离,每一个簇的显著值CL(lj)可用如下公式计算:(2) Define D(l t , l j ) as the color distance between two clusters l t , l j , and the saliency value CL(l j ) of each cluster can be calculated by the following formula:

$$CL(l_j) = \sum_{t \neq j} \omega(l_t) \, D(l_t, l_j)$$

where

$$D(l_t, l_j) = -\ln\left(1 - \frac{1}{2} \sum_{s=1}^{m} \frac{(q_{ts} - q_{js})^2}{q_{ts} + q_{js}}\right)$$

where j and t both take values 1, 2, ..., k, and q_ts is the probability that the s-th color appears among the m colors of the t-th cluster, i.e., the t-th cluster contains m kinds of pixel values, s = 1, 2, ..., m;

(3) After clustering, set the saliency value of each pixel equal to the saliency value of the cluster that contains it, thereby obtaining the shared saliency feature map SM(x, y):

when ILab_p(x, y) ∈ l_j, where j = 1, 2, ..., k and p = 1, 2, ..., Q,

$$SM(x, y) = CL(l_j)$$

经过以上步骤,获得了遥感图像组中每一幅遥感图像的共有显著特征图。After the above steps, the remote sensing image group is obtained The common salient feature map of each remote sensing image in .

Step 7: Compute the final saliency map.

Multiply the information-content saliency feature map obtained from the color channels by the corresponding shared saliency feature map obtained by k-means clustering in the CIE Lab color space, thereby obtaining the final saliency map S(x, y) of each image in the group after fusing the multiple saliency features:

$$S(x, y) = SS(x, y) \times SM(x, y)$$

Step 8: Extract the region of interest.

Obtain the segmentation threshold of the final saliency map by the maximum between-class variance (Otsu) method, and use this threshold to segment the final saliency map into a binary image template, where "1" denotes the ROI and "0" the non-ROI. Finally, multiply the binary image template by the original image to obtain the final ROI extraction result.
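A sketch of Step 8: Otsu's single-threshold selection applied to the final saliency map, followed by binarization. Quantizing the saliency values to 256 levels first is an implementation assumption; the patent simply applies the maximum between-class variance method to the saliency map.

```python
import numpy as np

def otsu_roi_template(sal):
    """Binary ROI template from a saliency map via Otsu's method."""
    s = sal.astype(float)
    s = (255 * (s - s.min()) / max(s.max() - s.min(), 1e-12)).astype(np.uint8)
    hist = np.bincount(s.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # class-0 probability up to level t
    mu = np.cumsum(p * np.arange(256))    # cumulative mean up to level t
    mu_t = mu[-1]
    # Between-class variance sigma_b^2(t) for every candidate threshold t.
    denom = omega * (1.0 - omega)
    sigma_b = np.where(denom > 0,
                       (mu_t * omega - mu) ** 2 / np.maximum(denom, 1e-12),
                       0.0)
    t = int(np.argmax(sigma_b))
    return (s > t).astype(np.uint8)       # "1" = ROI, "0" = non-ROI
```

The ROI itself is then obtained by multiplying the template into each channel of the original image, e.g. `roi = img * mask[..., None]`.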

The effect of the present invention can be further illustrated by the following experimental results and analysis:

1. Experimental data

A group of visible-light remote sensing images of a suburb of Beijing was selected from SPOT5 satellite source images, and a group of 1024×1024 crops was generated from them as the experimental source images, as shown in Fig. 2.

2. Comparative experiments

To evaluate the performance of the proposed method, the following comparative experiment was designed: the representative existing visual attention methods Itti, GBVS, and FT were selected and compared against the method of the present invention. The saliency maps and ROI maps generated by the different methods were compared subjectively, as shown in Fig. 4 and Fig. 5. In Fig. 4, (a) is the saliency map generated by the Itti method, (b) by the GBVS method, (c) by the FT method, and (d) by the method of the present invention. In Fig. 5, (a) is the ROI map generated by the Itti method, (b) by the GBVS method, (c) by the FT method, and (d) by the method of the present invention.

The comparison shows that the saliency map obtained with the Itti model has very low resolution, only 1/256 of the original image size, so it must be enlarged before the region of interest is finally extracted. The GBVS model is based on the Itti model, differing only in that a Markov chain is used to obtain the saliency map. The regions of interest obtained by these two models are both larger than the regions that actually need to be extracted; that is, unneeded parts are also extracted. The FT model yields good extraction results when the background frequency varies little, but strong background frequency variation interferes with its extraction results, whereas the algorithm of this invention obtains better detection results.

Claims (2)

1. A remote sensing image region-of-interest detection method based on multi-salient-feature fusion. The method processes a group of remote sensing images as follows: first, the color information of the remote sensing images is used to construct color histograms of the different color channels, and a weighted computation yields an information-content salient feature map; second, the k-means clustering algorithm clusters the group of remote sensing images in the CIE Lab color space and computes saliency values, yielding a group of shared salient feature maps in the CIE Lab color space; the above two groups of maps are then fused to obtain the final saliency map; finally, threshold segmentation by the maximum between-class variance (Otsu) method extracts the region of interest. The method is characterized by comprising the following steps:

Step 1: Compute the color histograms. Input a group of remote sensing images of size M×N; extract each color channel of each image, let f_c(x, y) denote the color intensity at position (x, y) in color channel c, and construct the intensity histogram H_c(i) of each remote sensing image in each color channel, where M is the length of the image, N is the width, x and y are the horizontal and vertical coordinates (x = 1, 2, ..., M; y = 1, 2, ..., N), c denotes the color channel (c = 1, 2, 3), and i denotes the pixel intensity value (i = 0, 1, ..., 255);
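Step 1 can be sketched in a few lines; this is a minimal sketch assuming an M×N×3 8-bit image as a numpy array, with the function name and the normalization to probabilities chosen for illustration (the claim only specifies counting intensities):

```python
import numpy as np

def channel_histograms(img):
    """H_c(i): intensity histogram of each color channel c of an
    M x N x 3 8-bit image, normalized so each row sums to 1."""
    hists = np.zeros((3, 256))
    for c in range(3):
        counts = np.bincount(img[..., c].ravel(), minlength=256)
        hists[c] = counts / counts.sum()  # fraction of pixels with intensity i
    return hists
```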
Step 2: Compute the normalized saliency weight of color channel c. From the color histogram H_c(i) of channel c, compute the information content In_c(i) of each pixel intensity value i, and assign that information content to every pixel whose intensity equals i. After all computations and assignments are completed, the information-content map LOG_c(x, y) of channel c is obtained; from this map, the saliency h_c of channel c is computed, and from the saliencies of all color channels, the normalized saliency weight w_c of each color channel of each image is obtained;

Step 3: Compute the information-content salient feature map. Using the normalized saliency weights w_c of the color channels, a weighted computation yields a preliminary information-content salient feature map for each image; Gaussian smoothing is applied to this preliminary map to filter out noise, giving the final information-content salient feature map of each image;

Step 4: Convert the group of remote sensing images from the RGB color space to the CIE Lab color space, i.e., extract the R, G, and B color channel values of every pixel of every image and convert them to the CIE Lab color space, obtaining the three components L, a, and b. In the RGB color space, R denotes red, G denotes green, and B denotes blue. In the CIE Lab color space, L denotes lightness (L = 0 represents black, L = 100 represents white); a denotes the position of the color between red and green (negative a represents green, positive a represents red); and b denotes the position of the color between blue and yellow (negative b represents blue, positive b represents yellow);

Step 5: Use the k-means clustering algorithm to complete the pixel clustering in the CIE Lab color space, i.e., cluster the values of all pixels of the group of original remote sensing images mapped into the CIE Lab color space, obtaining k clusters;

Step 6: Compute the shared salient feature maps. Divide the number of pixels contained in the j-th cluster by the total number of pixels in the image (j = 1, 2, ..., k); the quotient is defined as the weight of the j-th cluster. After obtaining the weights of all k clusters, compute each cluster's saliency value from the cluster weights and the distances between clusters, and assign each cluster's saliency value to every pixel belonging to that cluster, thereby obtaining a group of shared salient feature maps;

Step 7: Compute the final saliency map, i.e., multiply the information-content salient feature map obtained from the color channel histogram information by the shared salient feature map obtained by k-means clustering in the CIE Lab color space, yielding the final saliency map after multi-salient-feature fusion;

Step 8: Extract the region of interest, i.e., obtain the segmentation threshold of the final saliency map by the maximum between-class variance method, use this threshold to segment the final saliency map into a binary image template in which "1" marks the region of interest and "0" marks the non-interest region, and finally multiply the binary template with the original image to obtain the final region-of-interest extraction result.

2. The remote sensing image region-of-interest extraction method based on salient feature clustering according to claim 1, characterized in that the specific process of step 2 is:

1) From the color histogram H_c(i) of color channel c, compute the information content In_c(i) of each pixel intensity value:

In_c(i) = -ln(H_c(i))

2) Assign this information content to every pixel whose intensity equals i, obtaining the information-content map LOG_c(x, y) of color channel c:

LOG_c(x, y) = In_c(i), where i = f_c(x, y)

3) From the information-content map LOG_c(x, y) of color channel c, compute the saliency h_c. Since the image contains three color channels, h_1 denotes the saliency of channel 1, h_2 that of channel 2, and h_3 that of channel 3:

h_c = [ Σ_{x=1..M} Σ_{y=1..N} LOG_c(x, y) ] / [ Σ_{c=1..3} Σ_{x=1..M} Σ_{y=1..N} LOG_c(x, y) ]

4) Divide the saliency of color channel c by the sum of the saliencies of the three color channels, and take the negative logarithm of the quotient to obtain the normalized saliency weight w_c of the channel:

w_1 = -log( h_1 / (h_1 + h_2 + h_3) )
w_2 = -log( h_2 / (h_1 + h_2 + h_3) )
w_3 = -log( h_3 / (h_1 + h_2 + h_3) )

Since the image contains three color channels, w_1 denotes the normalized saliency weight of channel 1, w_2 that of channel 2, and w_3 that of channel 3.
CN201510331174.0A 2015-06-16 2015-06-16 A kind of remote sensing images region of interest area detecting method based on the fusion of more notable features Expired - Fee Related CN104966085B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510331174.0A CN104966085B (en) 2015-06-16 2015-06-16 A kind of remote sensing images region of interest area detecting method based on the fusion of more notable features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510331174.0A CN104966085B (en) 2015-06-16 2015-06-16 A kind of remote sensing images region of interest area detecting method based on the fusion of more notable features

Publications (2)

Publication Number Publication Date
CN104966085A true CN104966085A (en) 2015-10-07
CN104966085B CN104966085B (en) 2018-04-03

Family

ID=54220120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510331174.0A Expired - Fee Related CN104966085B (en) 2015-06-16 2015-06-16 A kind of remote sensing images region of interest area detecting method based on the fusion of more notable features

Country Status (1)

Country Link
CN (1) CN104966085B (en)

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106407978A (en) * 2016-09-24 2017-02-15 上海大学 Unconstrained in-video salient object detection method combined with objectness degree
CN106780422A (en) * 2016-12-28 2017-05-31 深圳市美好幸福生活安全系统有限公司 A kind of notable figure fusion method based on Choquet integrations
CN106951841A (en) * 2017-03-09 2017-07-14 广东顺德中山大学卡内基梅隆大学国际联合研究院 A Multi-target Tracking Method Based on Color and Distance Clustering
CN107239760A (en) * 2017-06-05 2017-10-10 中国人民解放军军事医学科学院基础医学研究所 A kind of video data handling procedure and system
CN108335307A (en) * 2018-04-19 2018-07-27 云南佳叶现代农业发展有限公司 Adaptive tobacco leaf picture segmentation method and system based on dark primary
CN108364288A (en) * 2018-03-01 2018-08-03 北京航空航天大学 Dividing method and device for breast cancer pathological image
CN108596920A (en) * 2018-05-02 2018-09-28 北京环境特性研究所 A kind of Target Segmentation method and device based on coloured image
CN108764106A (en) * 2018-05-22 2018-11-06 中国计量大学 Multiple dimensioned colour image human face comparison method based on cascade structure
CN109035254A (en) * 2018-09-11 2018-12-18 中国水产科学研究院渔业机械仪器研究所 Based on the movement fish body shadow removal and image partition method for improving K-means cluster
CN109858394A (en) * 2019-01-11 2019-06-07 西安电子科技大学 A kind of remote sensing images water area extracting method based on conspicuousness detection
CN109949906A (en) * 2019-03-22 2019-06-28 上海鹰瞳医疗科技有限公司 Pathological section image procossing and model training method and equipment
CN110232378A (en) * 2019-05-30 2019-09-13 苏宁易购集团股份有限公司 A kind of image interest point detecting method, system and readable storage medium storing program for executing
CN110268442A (en) * 2019-05-09 2019-09-20 京东方科技集团股份有限公司 Computer-implemented method of detecting foreign objects on background objects in an image, apparatus for detecting foreign objects on background objects in an image, and computer program product
CN110612534A (en) * 2017-06-07 2019-12-24 赫尔实验室有限公司 System for detecting salient objects in images
CN111339953A (en) * 2020-02-27 2020-06-26 广西大学 A monitoring method of Mikania micrantha based on cluster analysis
CN111400557A (en) * 2020-03-06 2020-07-10 北京市环境保护监测中心 Method and device for automatically identifying atmospheric pollution key area
CN113139934A (en) * 2021-03-26 2021-07-20 上海师范大学 Rice grain counting method
CN113469976A (en) * 2021-07-06 2021-10-01 浙江大华技术股份有限公司 Object detection method and device and electronic equipment
CN115131327A (en) * 2022-07-14 2022-09-30 电子科技大学 A color feature fusion detection method for color line defects in display screens
CN115468570A (en) * 2022-08-31 2022-12-13 北京百度网讯科技有限公司 Method, device, equipment and storage medium for extracting high-precision map ground elements
CN118570201A (en) * 2024-08-01 2024-08-30 吴江市兰天织造有限公司 Ultra-high density fabric detection method
CN118982681A (en) * 2024-07-24 2024-11-19 烟台嘉睛智能科技有限公司 Object identification method, system, device and computer readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100239170A1 (en) * 2009-03-18 2010-09-23 Asnis Gary I System and method for target separation of closely spaced targets in automatic target recognition
US20120051606A1 (en) * 2010-08-24 2012-03-01 Siemens Information Systems Ltd. Automated System for Anatomical Vessel Characteristic Determination
CN103810710A (en) * 2014-02-26 2014-05-21 西安电子科技大学 Multispectral image change detection method based on semi-supervised dimensionality reduction and saliency map
CN104463224A (en) * 2014-12-24 2015-03-25 武汉大学 Hyperspectral image demixing method and system based on abundance significance analysis

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106407978A (en) * 2016-09-24 2017-02-15 上海大学 Unconstrained in-video salient object detection method combined with objectness degree
CN106780422A (en) * 2016-12-28 2017-05-31 深圳市美好幸福生活安全系统有限公司 A kind of notable figure fusion method based on Choquet integrations
CN106951841B (en) * 2017-03-09 2020-05-12 广东顺德中山大学卡内基梅隆大学国际联合研究院 Multi-target tracking method based on color and distance clustering
CN106951841A (en) * 2017-03-09 2017-07-14 广东顺德中山大学卡内基梅隆大学国际联合研究院 A Multi-target Tracking Method Based on Color and Distance Clustering
CN107239760A (en) * 2017-06-05 2017-10-10 中国人民解放军军事医学科学院基础医学研究所 A kind of video data handling procedure and system
CN107239760B (en) * 2017-06-05 2020-07-17 中国人民解放军军事医学科学院基础医学研究所 Video data processing method and system
CN110612534A (en) * 2017-06-07 2019-12-24 赫尔实验室有限公司 System for detecting salient objects in images
CN110612534B (en) * 2017-06-07 2023-02-21 赫尔实验室有限公司 System, computer-readable medium, and method for detecting salient objects in an image
CN108364288B (en) * 2018-03-01 2022-04-05 北京航空航天大学 Segmentation method and device for breast cancer pathological image
CN108364288A (en) * 2018-03-01 2018-08-03 北京航空航天大学 Dividing method and device for breast cancer pathological image
CN108335307A (en) * 2018-04-19 2018-07-27 云南佳叶现代农业发展有限公司 Adaptive tobacco leaf picture segmentation method and system based on dark primary
CN108596920A (en) * 2018-05-02 2018-09-28 北京环境特性研究所 A kind of Target Segmentation method and device based on coloured image
CN108764106B (en) * 2018-05-22 2021-12-21 中国计量大学 Multi-scale color image face comparison method based on cascade structure
CN108764106A (en) * 2018-05-22 2018-11-06 中国计量大学 Multiple dimensioned colour image human face comparison method based on cascade structure
CN109035254A (en) * 2018-09-11 2018-12-18 中国水产科学研究院渔业机械仪器研究所 Based on the movement fish body shadow removal and image partition method for improving K-means cluster
CN109858394A (en) * 2019-01-11 2019-06-07 西安电子科技大学 A kind of remote sensing images water area extracting method based on conspicuousness detection
CN109949906A (en) * 2019-03-22 2019-06-28 上海鹰瞳医疗科技有限公司 Pathological section image procossing and model training method and equipment
CN110268442A (en) * 2019-05-09 2019-09-20 京东方科技集团股份有限公司 Computer-implemented method of detecting foreign objects on background objects in an image, apparatus for detecting foreign objects on background objects in an image, and computer program product
CN110268442B (en) * 2019-05-09 2023-08-29 京东方科技集团股份有限公司 Computer-implemented method of detecting foreign objects on background objects in an image, apparatus for detecting foreign objects on background objects in an image, and computer program product
CN110232378B (en) * 2019-05-30 2023-01-20 苏宁易购集团股份有限公司 Image interest point detection method and system and readable storage medium
CN110232378A (en) * 2019-05-30 2019-09-13 苏宁易购集团股份有限公司 A kind of image interest point detecting method, system and readable storage medium storing program for executing
CN111339953A (en) * 2020-02-27 2020-06-26 广西大学 A monitoring method of Mikania micrantha based on cluster analysis
CN111400557A (en) * 2020-03-06 2020-07-10 北京市环境保护监测中心 Method and device for automatically identifying atmospheric pollution key area
CN111400557B (en) * 2020-03-06 2023-08-08 北京市环境保护监测中心 Method and device for automatically identifying important areas of atmospheric pollution
CN113139934B (en) * 2021-03-26 2024-04-30 上海师范大学 Rice grain counting method
CN113139934A (en) * 2021-03-26 2021-07-20 上海师范大学 Rice grain counting method
CN113469976A (en) * 2021-07-06 2021-10-01 浙江大华技术股份有限公司 Object detection method and device and electronic equipment
CN115131327A (en) * 2022-07-14 2022-09-30 电子科技大学 A color feature fusion detection method for color line defects in display screens
CN115131327B (en) * 2022-07-14 2024-04-30 电子科技大学 A color feature fusion method for color line defect detection in display screens
CN115468570A (en) * 2022-08-31 2022-12-13 北京百度网讯科技有限公司 Method, device, equipment and storage medium for extracting high-precision map ground elements
CN118982681A (en) * 2024-07-24 2024-11-19 烟台嘉睛智能科技有限公司 Object identification method, system, device and computer readable storage medium
CN118570201A (en) * 2024-08-01 2024-08-30 吴江市兰天织造有限公司 Ultra-high density fabric detection method
CN118570201B (en) * 2024-08-01 2024-10-11 吴江市兰天织造有限公司 Ultra-high density fabric detection method

Also Published As

Publication number Publication date
CN104966085B (en) 2018-04-03

Similar Documents

Publication Publication Date Title
CN104966085B (en) A kind of remote sensing images region of interest area detecting method based on the fusion of more notable features
CN106203430B (en) A kind of conspicuousness object detecting method based on foreground focused degree and background priori
CN103810503B (en) Depth study based method for detecting salient regions in natural image
CN103177458B (en) A kind of visible remote sensing image region of interest area detecting method based on frequency-domain analysis
CN103247059B (en) A kind of remote sensing images region of interest detection method based on integer wavelet and visual signature
CN107480620B (en) Remote sensing image automatic target identification method based on heterogeneous feature fusion
CN107392968B (en) Image saliency detection method fused with color contrast map and color space distribution map
CN104103082A (en) Image saliency detection method based on region description and priori knowledge
CN107291855A (en) A kind of image search method and system based on notable object
CN105261017A (en) Method for extracting regions of interest of pedestrian by using image segmentation method on the basis of road restriction
CN105528595A (en) Method for identifying and positioning power transmission line insulators in unmanned aerial vehicle aerial images
CN103218832B (en) Based on the vision significance algorithm of global color contrast and spatial distribution in image
CN107154048A (en) The remote sensing image segmentation method and device of a kind of Pulse-coupled Neural Network Model
CN101383008A (en) Image Classification Method Based on Visual Attention Model
CN104835175A (en) Visual attention mechanism-based method for detecting target in nuclear environment
CN107977660A (en) Region of interest area detecting method based on background priori and foreground node
CN104392459A (en) Infrared image segmentation method based on improved FCM (fuzzy C-means) and mean drift
CN104680545A (en) Method for detecting existence of salient objects in optical images
CN105426846A (en) Method for positioning text in scene image based on image segmentation model
Zhang et al. Salient region detection for complex background images using integrated features
CN104217440A (en) Method for extracting built-up area from remote sensing image
CN111881965B (en) Hyperspectral pattern classification and identification method, device and equipment for medicinal material production place grade
Yanagisawa et al. Face detection for comic images with deformable part model
Song et al. Depth-aware saliency detection using discriminative saliency fusion
WO2020119624A1 (en) Class-sensitive edge detection method based on deep learning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180403

CF01 Termination of patent right due to non-payment of annual fee