CN102945378A - Method for detecting potential target regions of remote sensing image on basis of monitoring method - Google Patents

Method for detecting potential target regions of remote sensing image on basis of monitoring method

Info

Publication number
CN102945378A
CN102945378A CN2012104088883A CN201210408888A
Authority
CN
China
Prior art keywords
algorithm
remote sensing
image
salient
extraction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012104088883A
Other languages
Chinese (zh)
Other versions
CN102945378B (en)
Inventor
韩军伟
张鼎文
郭雷
周培诚
程塨
姚西文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201210408888.3A priority Critical patent/CN102945378B/en
Publication of CN102945378A publication Critical patent/CN102945378A/en
Application granted granted Critical
Publication of CN102945378B publication Critical patent/CN102945378B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention relates to a method for detecting potential target regions of a remote sensing image based on a supervised method. The method can be applied to the detection and localization of various regions of interest in remote sensing images with complex backgrounds. It is characterized by the following steps: when detecting the various potential target regions of a remote sensing image, extract the corresponding saliency feature components; train on the saliency features of the training samples to obtain the parameters of an SVM (support vector machine) classifier; apply the trained classifier to a test image to obtain its saliency map; and segment the saliency map with an adaptive threshold segmentation method to obtain a binary map of the potential target regions. The method achieves high detection precision and a low false alarm rate, and has clear advantages over conventional methods.

Description

A method for detecting potential target regions in remote sensing images based on a supervised approach
Technical field
The invention belongs to the field of remote sensing image processing and relates to a method for detecting potential target regions in remote sensing images based on a supervised approach; it can be applied to the detection and accurate localization of regions of multiple classes of targets of interest in remote sensing images with complex backgrounds.
Background technology
Target detection in remote sensing images is a technology that has emerged with the development of remote sensing. It offers long operating range, wide coverage and high execution efficiency, and has both important military significance and civilian value. Target detection in remote sensing images of complex scenes means that, during remote sensing image analysis and interpretation, the key information useful for interpretation and reasoning is automatically extracted for one or several specific target classes, its associated attributes are analyzed and computed, and evidence is produced for detection and further interpretation. The scenes are called complex precisely because remote sensing images cover a wide area, contain many targets and complicated texture features, and are therefore difficult to interpret.
Current remote sensing target detection algorithms mainly follow two lines of thought: bottom-up, driven by low-level image features, and top-down, driven by the task. A single remote sensing image tends to cover a wide scene, carries a large amount of information, and has complex texture and rich color; if the useful parts of this information are combined in a reasonable way, satisfactory detection results can be obtained. If prior knowledge about the target of a specific task is available, the amount of computation can be reduced and the recognition accuracy increased. For example, for bridge and water-body detection, some researchers proposed, based on the features of bridges and water areas, a water segmentation method using the small-tree transform together with a knowledge-driven bridge detection method. They first build a bridge knowledge base for panchromatic high-resolution remote sensing images from bridge priors, use the small-tree transform to extract features and segment the water areas, then apply mathematical morphology to connect the water regions; the difference between the water areas before and after connection yields possible bridge fragments, from which bridge candidate regions are detected, and finally feature matching is performed to detect the bridges. This type of algorithm has several drawbacks. First, it needs manually chosen initial seed points to determine the water condition; the river is then automatically split into two parts according to the positions of the seed points, and each part is scanned downstream from its seed point until the whole river has been scanned. Such a semi-automatic method cannot satisfy the current demand for fully automatic target recognition. Second, the algorithm is only suited to the detection of water bodies and bridges; if the target changes, it cannot accomplish accurate target detection.
Summary of the invention
Technical problem to be solved
To overcome the deficiencies of the prior art, the present invention proposes a method for detecting potential multi-class target regions in remote sensing images based on a supervised approach; it can automatically detect and locate the potential regions of multiple classes of targets in remote sensing images with complex backgrounds and gives good detection results.
Technical scheme
A method for detecting potential target regions in remote sensing images based on a supervised approach, characterized by the following steps:
Step 1, extract the low-level saliency feature maps: down-sample the input image to 200 × 200 pixels, then extract the low-level saliency features for each pixel in the image. The invention selects several widely acknowledged features that are related to human visual attention and can trigger a biological response. Specifically:
1) Extract the 3 contrast feature components of the Itti model: the orientation contrast component, the intensity contrast component and the color contrast component;
2) Extract the 3 color feature components red, green and blue;
3) Extract the 5 color probability feature components of the Judd model; these components are the results computed by median filters at 5 different scales in the 3D color statistics space of the image (a minimal sketch of this kind of multi-scale color feature extraction is given after this list);
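The following is a minimal, hedged sketch of this kind of per-pixel color feature extraction in Python; the channel names, the use of scipy's median filter, and the concrete scales are illustrative assumptions, not the patented implementation.

    import numpy as np
    from scipy.ndimage import median_filter

    def color_feature_maps(img, scales=(3, 5, 9, 17, 33)):
        """img: H x W x 3 float array in [0, 1]; returns a dict of H x W feature maps."""
        # raw red, green and blue channels (R_map, G_map, B_map in the description)
        feats = {"R_map": img[..., 0], "G_map": img[..., 1], "B_map": img[..., 2]}
        # stand-in color statistic; the patent uses 3D color-space statistics instead
        stat = img.mean(axis=2)
        for idx, s in enumerate(scales, start=1):
            # one median-filtered map per scale, mirroring the 5 Chist_* components
            feats[f"Chist_{idx}"] = median_filter(stat, size=s)
        return feats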
Step 2, extract the mid-level saliency feature maps: down-sample the input image to 200 × 200 pixels, then select the models SR, SDS, FT, GBVS and WSCR as the mid-level saliency feature extraction methods; they compute the saliency of the input image from different perspectives such as the frequency domain, local contrast, center-surround contrast and sparse representation. Specifically:
1) SR extraction algorithm: set the scale parameter SR_scale=3 and use the SR extraction algorithm to obtain the saliency feature map SR_map; before the SR extraction, the original image is reduced to a fixed fraction of its original size (given in the original only as the embedded formula image Figure BDA00002293619000021), and the Gaussian smoothing window size in the algorithm is set to gaussian_size = SR_scale × s, where s is a constant in [0.01, 0.5] used to adjust the Gaussian smoothing window size (a minimal spectral-residual sketch is given after this list);
2) SDS extraction algorithm: use the SDS algorithm to generate the saliency feature map SDS_map;
3) FT extraction algorithm: use the FT extraction algorithm to generate the FT saliency feature map; the Gaussian smoothing window size in the FT extraction algorithm is set to gaussian_size = dims × s;
4) GBVS extraction algorithm: use the GBVS algorithm to extract the saliency feature map GBVS_map; set Params.LINE=1 to add the line-detection channel, and set Params.useIttiKochInsteadOfGBVS=0 so that the random field model is used in the computation;
5) SWCR extraction algorithm: use the SWCR algorithm to generate the saliency feature map SWCR_map; set patch_size=25 and surroundratio=5, where patch_size ∈ [5, 50] is the patch size used for contrast in the algorithm and surroundratio ∈ [3, 9] is the extent of the surrounding region used for contrast around the center patch;
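As referenced in item 1) above, here is a minimal sketch of the spectral-residual (SR) idea in Python, written from the published description of the Hou & Zhang method; the reduced working size, the smoothing of the log spectrum and the final Gaussian smoothing are assumptions rather than the patent's exact settings.

    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    def spectral_residual_saliency(gray, small=64, post_sigma=2.5):
        """gray: H x W float image; returns a roughly small x small saliency map in [0, 1]."""
        h, w = gray.shape
        img = zoom(gray, (small / h, small / w))            # work on a reduced image
        spectrum = np.fft.fft2(img)
        log_amp = np.log(np.abs(spectrum) + 1e-8)
        phase = np.angle(spectrum)
        # spectral residual: log amplitude minus its locally smoothed version
        residual = log_amp - gaussian_filter(log_amp, sigma=1.0)
        sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
        sal = gaussian_filter(sal, sigma=post_sigma)        # Gaussian post-smoothing
        return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)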
Step 3, train the classifier: randomly choose 130 images from an image library of 150 images as training samples; first down-sample the training samples to 200 × 200 pixels, then manually mark the targets in every image to generate a groundtruth map (a binary map in which the pixel values of the target regions are 255 and all other pixel values are 0). From each training image, randomly choose a number of pixels in the target regions and in the non-target regions; use the saliency values of these pixels in each saliency feature map, together with their values at the corresponding positions in the groundtruth map, as training data, feed them into an SVM classifier and train it to obtain the SVM classifier parameters (a minimal training sketch follows);
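A minimal training sketch in Python, assuming a linear SVM (scikit-learn's LinearSVC) and hypothetical helper names; the exact sampling counts and SVM settings of the patent are not reproduced.

    import numpy as np
    from sklearn.svm import LinearSVC

    def sample_training_data(feature_maps, gt, n_target=200, n_back=200, rng=None):
        """feature_maps: list of H x W saliency maps; gt: H x W ground truth (0 or 255)."""
        if rng is None:
            rng = np.random.default_rng(0)
        stack = np.stack(feature_maps, axis=-1)             # H x W x D, one vector per pixel
        tgt = np.argwhere(gt == 255)
        bkg = np.argwhere(gt == 0)
        tgt = tgt[rng.choice(len(tgt), min(n_target, len(tgt)), replace=False)]
        bkg = bkg[rng.choice(len(bkg), min(n_back, len(bkg)), replace=False)]
        X = np.vstack([stack[tgt[:, 0], tgt[:, 1]], stack[bkg[:, 0], bkg[:, 1]]])
        y = np.hstack([np.ones(len(tgt)), np.zeros(len(bkg))])
        return X, y

    # usage: collect (X, y) from every training image, then
    #   clf = LinearSVC().fit(np.vstack(all_X), np.hstack(all_y))
    #   w, b = clf.coef_.ravel(), clf.intercept_[0]          # the (w, b) used in step 4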
Step 4, use the classifier obtained in step 3 to perform saliency detection on the test images: take the 20 remaining images in the image library as test samples; first down-sample each image to 200 × 200 pixels, then for each pixel form a vector x from the saliency values at the corresponding positions in the saliency feature maps, feed it into the SVM classifier and use the formula ω^T x + b to obtain the saliency map Smap of every image, where ω and b are the classifier parameters obtained by the training in step 3 (a minimal per-pixel scoring sketch follows);
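A minimal per-pixel scoring sketch, assuming the same stacked feature maps as above; it simply evaluates w^T x + b at every pixel and rescales the result for later thresholding.

    import numpy as np

    def saliency_map_from_classifier(feature_maps, w, b):
        """feature_maps: list of H x W arrays; w: (D,) weight vector; b: scalar bias."""
        stack = np.stack(feature_maps, axis=-1)     # H x W x D feature vector per pixel
        smap = stack @ w + b                        # w^T x + b at every pixel
        smap -= smap.min()                          # rescale to [0, 1]
        return smap / (smap.max() + 1e-8)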
Step 5, salient region segmentation: use the meanshift algorithm to segment the original image into regions r_k, k = 1, 2, ..., K, where K is the total number of segmented regions; then use the saliency values of the saliency map obtained in step 4 to compute the average saliency value V_k of each segmented region:

V_k = (1 / |r_k|) Σ_{(i,j)∈r_k} m_{i,j}

Then use the average saliency value of each region to generate the segmented saliency map Smap_seg, and finally segment Smap_seg with the adaptive threshold T_a to obtain the binary map BinaryMap; the adaptive threshold T_a is set as:

T_a = (t / (W × H)) Σ_{x=0}^{W-1} Σ_{y=0}^{H-1} S(x, y)

BinaryMap(i, j) = 1 if S(x, y) ≥ T_a, and 0 if S(x, y) < T_a

where |r_k| is the size of the k-th region, m_{i,j} is the saliency value at coordinate (i, j) in the saliency map, W and H are the numbers of pixels of the segmented saliency map Smap_seg along the x and y axes respectively, S(x, y) is the saliency value at position (x, y) in Smap_seg, and t is a constant parameter set to a value in t ∈ [1, 2] (a minimal region-averaging and thresholding sketch follows).
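A minimal sketch of the region averaging and adaptive thresholding of step 5, under the assumption that the mean-shift segmentation is available as an integer label map (the mean-shift algorithm itself is not reproduced here).

    import numpy as np

    def segment_and_threshold(smap, labels, t=1.8):
        """smap: H x W saliency map; labels: H x W integer region labels from mean shift."""
        smap_seg = np.zeros_like(smap)
        for k in np.unique(labels):
            mask = labels == k
            smap_seg[mask] = smap[mask].mean()      # V_k: average saliency of region r_k
        T_a = t * smap_seg.mean()                   # equals (t / (W * H)) * sum of S(x, y)
        return (smap_seg >= T_a).astype(np.uint8)   # BinaryMap(i, j)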
The Itti model is the computational model of the article "A model of saliency-based visual attention for rapid scene analysis".
The Judd model is the computational model of the article "Learning to predict where humans look".
The SR saliency feature extraction uses the SR algorithm proposed in the article "Saliency Detection: A Spectral Residual Approach".
The SDS algorithm is the one proposed in the article "Salient region detection and segmentation".
The FT algorithm is the one proposed in the article "Frequency-tuned salient region detection".
The GBVS algorithm is the improved GBVS algorithm proposed in the paper "Airport Detection in Remote Sensing Images Based on Visual Attention".
The SWCR algorithm is the one proposed in the article "Emergence of simple-cell receptive field properties by learning a sparse code for natural images".
The meanshift algorithm is the one mentioned in the article "Frequency-tuned Salient Region Detection".
Beneficial effect
Based on the theory of visual attention, the present invention proposes a method for detecting potential target regions in remote sensing images using a supervised approach. When detecting the potential regions of multiple target classes in a remote sensing image, it first extracts the corresponding saliency feature components, then trains an SVM classifier on the training data to obtain classifier parameters suited to remote sensing target classification; next, the saliency feature components corresponding to each pixel of a test image are assembled into a vector and fed into the trained classifier to obtain the saliency map of the test image. Finally, meanshift segmentation and adaptive threshold segmentation are applied to the saliency map to obtain a binary map of the potential target regions. The method can be applied to the detection and localization of potential regions of multiple classes of targets of interest in remote sensing images with complex backgrounds; it achieves higher detection precision and a lower false alarm rate, and has clear advantages over existing methods.
Brief description of the drawings
Fig. 1: basic flow chart of the method of the invention
Fig. 2: examples of experimental results
Fig. 3: comparison of ROC curves
Fig. 4: comparison of Precision-Recall curves
Embodiment
The invention is further described below in conjunction with the embodiments and the accompanying drawings:
The hardware environment used for implementation is a computer with an Intel Pentium 2.93 GHz CPU and 2.0 GB of memory; the software environment is Matlab R2011b and Windows XP. 150 remote sensing images obtained from Google Earth were chosen for the multi-class target detection experiments; they mainly contain three classes of targets: aircraft, ships and oil depots.
Implementation of the present invention is as follows:
1. Extract the low-level saliency feature maps: set dims=[200, 200] and down-sample the image, then extract the low-level saliency features for each pixel in the image. Specifically:
● The 3 contrast feature components of the Itti model: the orientation contrast component O_map, the intensity contrast component I_map and the color contrast component C_map (a minimal center-surround contrast sketch is given after this step);
The Itti model: see the paper L. Itti, C. Koch, and E. Niebur. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on PAMI, 20(11), 1998;
● The 3 color feature components red, green and blue: R_map, G_map, B_map;
● The 5 color probability feature components computed with the Judd model with the scale parameter m=[0 2 4 8 16]: Chist_1, Chist_2, Chist_3, Chist_4, Chist_5;
The Judd model: see the paper T. Judd, K. Ehinger, F. Durand, and A. Torralba. Learning to predict where humans look, ICCV, 2009;
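As referenced in the first bullet above, the following is a minimal Python sketch of a center-surround intensity contrast map in the spirit of the Itti model's I_map channel; the reference model works on a Gaussian pyramid with fixed center and surround levels, whereas the blur scales used here are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def intensity_contrast_map(gray, center_sigmas=(1, 2), surround_sigmas=(4, 8)):
        """gray: H x W float image; returns an H x W center-surround contrast map."""
        cmap = np.zeros_like(gray)
        for c in center_sigmas:
            for s in surround_sigmas:
                center = gaussian_filter(gray, sigma=c)
                surround = gaussian_filter(gray, sigma=s)
                cmap += np.abs(center - surround)   # center-surround difference
        return cmap / cmap.max() if cmap.max() > 0 else cmap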
2. Extract the mid-level saliency feature maps: set dims=[200, 200] and down-sample the image, then select the models SR, SDS, FT, GBVS and WSCR as the mid-level saliency feature extraction methods, specifically:
● SR algorithm: set the scale parameter SR_scale=3 and use the SR extraction algorithm to obtain the saliency feature map SR_map; before the SR extraction, the original image is reduced to a fixed fraction of its original size (given in the original only as the embedded formula image Figure BDA00002293619000061), and the Gaussian smoothing window size in the algorithm is set to gaussian_size = SR_scale × s, where s is a constant in [0.01, 0.5] used to adjust the Gaussian smoothing window size.
The SR algorithm: see the paper X. Hou and L. Zhang. Saliency Detection: A Spectral Residual Approach [C], IEEE Conference on Computer Vision and Pattern Recognition, 2007
● SDS algorithm: use the SDS algorithm to generate the saliency feature map SDS_map;
The SDS algorithm: see the paper R. Achanta, F. Estrada, P. Wils, and S. Süsstrunk. Salient region detection and segmentation. International Conference on Computer Vision Systems, 2008
● FT channel: use the FT extraction algorithm to generate the FT saliency feature map; the Gaussian smoothing window size in the FT extraction algorithm is set to gaussian_size = dims × s;
The FT algorithm: see the paper R. Achanta, S. Hemami, F. Estrada, and S. Süsstrunk. Frequency-tuned salient region detection. In CVPR, 2009
● GBVS algorithm: use the GBVS algorithm to extract the saliency feature map GBVS_map; set Params.LINE=1 to add the line-detection channel, and set Params.useIttiKochInsteadOfGBVS=0 so that the random field model is used in the computation;
The improved GBVS algorithm: see the paper Xin Wang, Bin Wang, and Liming Zhang. Airport Detection in Remote Sensing Images Based on Visual Attention. ICONIP (3), volume 7064 of Lecture Notes in Computer Science, pages 475-484, Springer, 2011
● SWCR algorithm: use the SWCR algorithm to generate the saliency feature map SWCR_map; set patch_size=25 and surroundratio=5, where patch_size ∈ [5, 50] is the patch size used for contrast in the algorithm and surroundratio ∈ [3, 9] is the extent of the surrounding region used for contrast around the center patch;
The SWCR algorithm: see the paper Biao Han, Hao Zhu, Youdong Ding. Bottom-up saliency based on weighted sparse coding residual. ACM Multimedia 2011: 1117-1120
3. Train the classifier: randomly choose 130 images from the image library of 150 images as training samples; first set dims=[200, 200] and down-sample the images, then manually mark the targets in every image to generate a groundtruth map (a binary map in which the pixel values of the target regions are 255 and all other pixel values are 0). From each training image, randomly choose num_target ∈ [50, 500] pixels in the target regions and num_back ∈ [50, 500] pixels in the non-target regions; use the saliency values of these pixels in each saliency feature map, together with their values at the corresponding positions in the groundtruth map, as training data, feed them into the SVM classifier and train it to obtain the SVM classifier parameters.
4. Saliency detection: use the classifier obtained in step 3 to perform saliency detection on the test images; take the 20 remaining images in the image library as test samples, first set dims=[200, 200] and down-sample the images, then for each pixel form a vector x from the saliency values at the corresponding positions in the saliency feature maps, feed it into the SVM classifier and use the formula ω^T x + b to obtain the saliency map Smap of every image, where ω and b are the classifier parameters obtained by the training in step 3.
5. Salient region segmentation: the invention first uses the meanshift algorithm to segment the original image into regions r_k, k = 1, 2, ..., K, then uses the saliency values of the saliency map obtained in step 4 to compute the average saliency value V_k of each segmented region:

V_k = (1 / |r_k|) Σ_{(i,j)∈r_k} m_{i,j}

The average saliency values of the segmented regions are then used to generate the segmented saliency map Smap_seg, and finally the adaptive threshold method is used to binarize Smap_seg into the binary map BinaryMap. The adaptive threshold T_a is set as:

T_a = (t / (W × H)) Σ_{x=0}^{W-1} Σ_{y=0}^{H-1} S(x, y)

BinaryMap(i, j) = 1 if S(x, y) ≥ T_a, and 0 if S(x, y) < T_a

where |r_k| is the size of the k-th region, m_{i,j} is the saliency value at coordinate (i, j) in the saliency map, W and H are the numbers of pixels of the segmented saliency map Smap_seg along the x and y axes respectively, and S(x, y) is the saliency value at position (x, y) in Smap_seg. t is a constant parameter, set here to t = 1.8. For the salient region segmentation algorithm see the paper R. Achanta, S. Hemami, F. Estrada, and S. Süsstrunk. Frequency-tuned salient region detection. In CVPR, 2009
The effectiveness of the saliency maps obtained by the invention is assessed with the ROC curve and the Precision-Recall curve. The ROC curve is defined as the relationship between the false positive rate (FPR) and the true positive rate (TPR) of the image as the segmentation threshold varies; the Precision-Recall curve is defined as the relationship between recall (TPR) and precision (Preci) as the segmentation threshold varies. The formulas are:

FPR = FP / N

TPR = TP / P

Preci = TP / (TP + FP)

where FP is the number of detected false positives, N is the non-target area in the ground truth, TP is the number of detected true positives, and P is the target area in the ground truth (a minimal sketch of these computations follows).
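A minimal sketch of these computations, sweeping a threshold over a saliency map against a binary ground-truth mask; the number of thresholds and the exact counting convention are assumptions.

    import numpy as np

    def roc_pr_points(smap, gt, n_thresholds=50):
        """smap: H x W saliency map; gt: H x W boolean ground truth; returns (fpr, tpr, preci)."""
        thresholds = np.linspace(smap.min(), smap.max(), n_thresholds)
        P, N = gt.sum(), (~gt).sum()
        fpr, tpr, preci = [], [], []
        for T in thresholds:
            det = smap >= T
            TP = (det & gt).sum()                   # detected true positives
            FP = (det & ~gt).sum()                  # detected false positives
            fpr.append(FP / N)
            tpr.append(TP / P)
            preci.append(TP / max(TP + FP, 1))      # guard against an empty detection
        return np.array(fpr), np.array(tpr), np.array(preci)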
Fig. 2 shows some experimental results of the proposed method, from which it can be seen that the invention is an effective method for detecting potential target regions in remote sensing images. Comparing the ROC curves and Precision-Recall curves of the method of the invention with the results of other existing methods gives a more intuitive comparison (see Figs. 3 and 4). To compare the effects of the various saliency detection algorithms quantitatively, we use the AUC value of the ROC curve as the evaluation index (Table 1); this index clearly shows the superiority of the proposed method (a minimal AUC computation sketch follows Table 1).
Table 1: Comparison of AUC values
Method FT SR SDS GBVS WSCR AC The present invention
AUC 0.88735 0.94567 0.92484 0.96054 0.93521 0.91753 0.97519
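A minimal sketch of how an AUC value such as those in Table 1 can be obtained from ROC points by trapezoidal integration; roc_pr_points is the hypothetical helper sketched earlier, not part of the patent.

    import numpy as np

    def auc_from_roc(fpr, tpr):
        """fpr, tpr: 1-D arrays of ROC points; returns the area under the ROC curve."""
        order = np.argsort(fpr)                     # integrate along increasing FPR
        return float(np.trapz(np.asarray(tpr)[order], np.asarray(fpr)[order]))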

Claims (9)

1. A method for detecting potential target regions in remote sensing images based on a supervised approach, characterized in that the steps are as follows:
Step 1, extract the low-level saliency feature maps: down-sample the input image to 200 × 200 pixels, then extract the low-level saliency features for each pixel in the image, specifically:
1) Extract the 3 contrast feature components of the Itti model: the orientation contrast component, the intensity contrast component and the color contrast component;
2) Extract the 3 color feature components red, green and blue;
3) Extract the 5 color probability feature components of the Judd model; these components are the results computed by median filters at 5 different scales in the 3D color statistics space of the image;
Step 2, extract the mid-level saliency feature maps: down-sample the input image to 200 × 200 pixels, i.e. dims=[200, 200], then select the models SR, SDS, FT, GBVS and WSCR as the mid-level saliency feature extraction methods, specifically:
1) SR extraction algorithm: set the scale parameter SR_scale=3 and use the SR extraction algorithm to obtain the saliency feature map SR_map; before the SR extraction, the original image is reduced to a fixed fraction of its original size (given in the original only as the embedded formula image Figure FDA00002293618900011), and the Gaussian smoothing window size in the algorithm is set to gaussian_size = SR_scale × s, where s is a constant in [0.01, 0.5] used to adjust the Gaussian smoothing window size;
2) SDS extraction algorithm: use the SDS algorithm to generate the saliency feature map SDS_map;
3) FT extraction algorithm: use the FT extraction algorithm to generate the FT saliency feature map; the Gaussian smoothing window size in the FT extraction algorithm is set to gaussian_size = dims × s;
4) GBVS extraction algorithm: use the GBVS algorithm to extract the saliency feature map GBVS_map; set Params.LINE=1 to add the line-detection channel, and set Params.useIttiKochInsteadOfGBVS=0 so that the random field model is used in the computation;
5) SWCR extraction algorithm: use the SWCR algorithm to generate the saliency feature map SWCR_map; set patch_size=25 and surroundratio=5, where patch_size ∈ [5, 50] is the patch size used for contrast in the algorithm and surroundratio ∈ [3, 9] is the extent of the surrounding region used for contrast around the center patch;
Step 3, train the classifier: randomly choose 130 images from an image library of 150 images as training samples; first down-sample the training samples to 200 × 200 pixels, then generate a groundtruth map from the targets in every image; randomly select pixels in the target regions and in the non-target regions of each training image; use the saliency values of the selected pixels in each saliency feature map, together with their values at the corresponding positions in the groundtruth map, as training data, feed them into an SVM classifier and train it to obtain the SVM classifier parameters; the groundtruth map is a binary map in which the pixel values of the target regions are 255 and all other pixel values are 0;
Step 4, use the classifier obtained in step 3 to perform saliency detection on the test images: take the 20 remaining images in the image library as test samples; first down-sample each image to 200 × 200 pixels, then for each pixel form a vector x from the saliency values at the corresponding positions in the saliency feature maps, feed it into the SVM classifier and use the formula ω^T x + b to obtain the saliency map Smap of every image, where ω and b are the classifier parameters obtained by the training in step 3;
Step 5, salient region segmentation: use the meanshift algorithm to segment the original image into regions r_k, k = 1, 2, ..., K, where K is the total number of segmented regions; then use the saliency values of the saliency map obtained in step 4 to compute the average saliency value V_k of each segmented region:

V_k = (1 / |r_k|) Σ_{(i,j)∈r_k} m_{i,j}

Then use the average saliency value of each region to generate the segmented saliency map Smap_seg, and finally segment Smap_seg with the adaptive threshold T_a to obtain the binary map BinaryMap; the adaptive threshold T_a is set as:

T_a = (t / (W × H)) Σ_{x=0}^{W-1} Σ_{y=0}^{H-1} S(x, y)

BinaryMap(i, j) = 1 if S(x, y) ≥ T_a, and 0 if S(x, y) < T_a

where |r_k| is the size of the k-th region, m_{i,j} is the saliency value at coordinate (i, j) in the saliency map, W and H are the numbers of pixels of the segmented saliency map Smap_seg along the x and y axes respectively, S(x, y) is the saliency value at position (x, y) in Smap_seg, and t is a constant parameter set to a value in t ∈ [1, 2].
2. The method for detecting potential target regions in remote sensing images based on a supervised approach according to claim 1, characterized in that the Itti model is the computational model in the article "A model of saliency-based visual attention for rapid scene analysis".
3. The method for detecting potential target regions in remote sensing images based on a supervised approach according to claim 1, characterized in that the Judd model is the computational model in the article "Learning to predict where humans look".
4. The method for detecting potential target regions in remote sensing images based on a supervised approach according to claim 1, characterized in that the SR saliency feature extraction algorithm uses the SR algorithm proposed in the article "Saliency Detection: A Spectral Residual Approach" to extract the saliency features.
5. The method for detecting potential target regions in remote sensing images based on a supervised approach according to claim 1, characterized in that the SDS algorithm is the one proposed in the article "Salient region detection and segmentation".
6. The method for detecting potential target regions in remote sensing images based on a supervised approach according to claim 1, characterized in that the FT algorithm is the one proposed in the article "Frequency-tuned salient region detection".
7. The method for detecting potential target regions in remote sensing images based on a supervised approach according to claim 1, characterized in that the GBVS algorithm is the improved GBVS algorithm proposed in the paper "Airport Detection in Remote Sensing Images Based on Visual Attention".
8. The method for detecting potential target regions in remote sensing images based on a supervised approach according to claim 1, characterized in that the SWCR algorithm is the one proposed in the article "Emergence of simple-cell receptive field properties by learning a sparse code for natural images".
9. The method for detecting potential target regions in remote sensing images based on a supervised approach according to claim 1, characterized in that the meanshift algorithm is the one mentioned in the article "Frequency-tuned Salient Region Detection".
CN201210408888.3A 2012-10-23 2012-10-23 Method for detecting potential target regions of remote sensing image on basis of monitoring method Expired - Fee Related CN102945378B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210408888.3A CN102945378B (en) 2012-10-23 2012-10-23 Method for detecting potential target regions of remote sensing image on basis of monitoring method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210408888.3A CN102945378B (en) 2012-10-23 2012-10-23 Method for detecting potential target regions of remote sensing image on basis of monitoring method

Publications (2)

Publication Number Publication Date
CN102945378A true CN102945378A (en) 2013-02-27
CN102945378B CN102945378B (en) 2015-06-10

Family

ID=47728317

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210408888.3A Expired - Fee Related CN102945378B (en) 2012-10-23 2012-10-23 Method for detecting potential target regions of remote sensing image on basis of monitoring method

Country Status (1)

Country Link
CN (1) CN102945378B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103310195A (en) * 2013-06-09 2013-09-18 西北工业大学 LLC-feature-based weak-supervision recognition method for vehicle high-resolution remote sensing images
CN104089925A (en) * 2014-06-30 2014-10-08 华南理工大学 Hyperspectral imaging-based Target area extraction method for detecting shrimp quality
CN104217440A (en) * 2014-09-28 2014-12-17 民政部国家减灾中心 Method for extracting built-up area from remote sensing image
CN104252624A (en) * 2014-08-29 2014-12-31 西安空间无线电技术研究所 Method for positioning and extracting images of point target of satellite-borne area
CN104408712A (en) * 2014-10-30 2015-03-11 西北工业大学 Information fusion-based hidden Markov salient region detection method
CN104933435A (en) * 2015-06-25 2015-09-23 中国计量学院 Machine vision construction method based on human vision simulation
CN104992183A (en) * 2015-06-25 2015-10-21 中国计量学院 Method for automatic detection of substantial object in natural scene
CN106056084A (en) * 2016-06-01 2016-10-26 北方工业大学 Remote sensing image port ship detection method based on multi-resolution hierarchical screening
CN107766810A (en) * 2017-10-10 2018-03-06 湖南省测绘科技研究所 A kind of cloud, shadow detection method
CN108369414A (en) * 2015-10-15 2018-08-03 施耐德电气美国股份有限公司 The visual monitoring system of load centre
CN108596893A (en) * 2018-04-24 2018-09-28 东北大学 a kind of image processing method and system
CN109977892A (en) * 2019-03-31 2019-07-05 西安电子科技大学 Ship Detection based on local significant characteristics and CNN-SVM

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100226564A1 (en) * 2009-03-09 2010-09-09 Xerox Corporation Framework for image thumbnailing based on visual similarity
CN102289657A (en) * 2011-05-12 2011-12-21 西安电子科技大学 Breast X ray image lump detecting system based on visual attention mechanism

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100226564A1 (en) * 2009-03-09 2010-09-09 Xerox Corporation Framework for image thumbnailing based on visual similarity
CN102289657A (en) * 2011-05-12 2011-12-21 西安电子科技大学 Breast X ray image lump detecting system based on visual attention mechanism

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
ACHANTA, R. et al.: "Frequency-tuned salient region detection", Computer Vision and Pattern Recognition 2009 (CVPR 2009), IEEE Conference, 31 December 2009 (2009-12-31), pages 1597-1604 *
ITTI, L. et al.: "A model of saliency-based visual attention for rapid scene analysis", Pattern Analysis and Machine Intelligence, 31 December 1998 (1998-12-31), pages 1254-1259 *
JUDD, T. et al.: "Learning to predict where humans look", Computer Vision, 31 December 2009 (2009-12-31), pages 2106-2113 *
RADHAKRISHNA ACHANTA et al.: "Salient region detection and segmentation", in: Computer Vision Systems, Springer Berlin Heidelberg, 31 December 2008, pages 66-75 *
XIAODI HOU et al.: "Saliency Detection: A Spectral Residual Approach", Computer Vision and Pattern Recognition, 31 December 2007 (2007-12-31), pages 1-8 *
XIN WANG et al.: "Airport detection in remote sensing image based on visual attention", in: Neural Information Processing, Springer Berlin Heidelberg, 31 December 2011, pages 475-484 *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103310195B (en) * 2013-06-09 2016-12-28 西北工业大学 Based on LLC feature the Weakly supervised recognition methods of vehicle high score remote sensing images
CN103310195A (en) * 2013-06-09 2013-09-18 西北工业大学 LLC-feature-based weak-supervision recognition method for vehicle high-resolution remote sensing images
CN104089925A (en) * 2014-06-30 2014-10-08 华南理工大学 Hyperspectral imaging-based Target area extraction method for detecting shrimp quality
CN104089925B (en) * 2014-06-30 2016-04-13 华南理工大学 A kind of target area extracting method detecting peeled shrimp quality based on high light spectrum image-forming
CN104252624A (en) * 2014-08-29 2014-12-31 西安空间无线电技术研究所 Method for positioning and extracting images of point target of satellite-borne area
CN104252624B (en) * 2014-08-29 2017-07-07 西安空间无线电技术研究所 A kind of positioning and extracting method of spaceborne region point target image
CN104217440A (en) * 2014-09-28 2014-12-17 民政部国家减灾中心 Method for extracting built-up area from remote sensing image
CN104408712A (en) * 2014-10-30 2015-03-11 西北工业大学 Information fusion-based hidden Markov salient region detection method
CN104408712B (en) * 2014-10-30 2017-05-24 西北工业大学 Information fusion-based hidden Markov salient region detection method
CN104933435A (en) * 2015-06-25 2015-09-23 中国计量学院 Machine vision construction method based on human vision simulation
CN104992183A (en) * 2015-06-25 2015-10-21 中国计量学院 Method for automatic detection of substantial object in natural scene
CN104933435B (en) * 2015-06-25 2018-08-28 中国计量学院 Machine vision construction method based on simulation human vision
CN108369414A (en) * 2015-10-15 2018-08-03 施耐德电气美国股份有限公司 The visual monitoring system of load centre
CN108369414B (en) * 2015-10-15 2021-05-04 施耐德电气美国股份有限公司 Visual monitoring system of load center
CN106056084A (en) * 2016-06-01 2016-10-26 北方工业大学 Remote sensing image port ship detection method based on multi-resolution hierarchical screening
CN106056084B (en) * 2016-06-01 2019-05-24 北方工业大学 Remote sensing image port ship detection method based on multi-resolution hierarchical screening
CN107766810A (en) * 2017-10-10 2018-03-06 湖南省测绘科技研究所 A kind of cloud, shadow detection method
CN107766810B (en) * 2017-10-10 2021-05-14 湖南省测绘科技研究所 Cloud and shadow detection method
CN108596893A (en) * 2018-04-24 2018-09-28 东北大学 a kind of image processing method and system
CN108596893B (en) * 2018-04-24 2022-04-08 东北大学 Image processing method and system
CN109977892A (en) * 2019-03-31 2019-07-05 西安电子科技大学 Ship Detection based on local significant characteristics and CNN-SVM
CN109977892B (en) * 2019-03-31 2020-11-10 西安电子科技大学 Ship detection method based on local saliency features and CNN-SVM

Also Published As

Publication number Publication date
CN102945378B (en) 2015-06-10

Similar Documents

Publication Publication Date Title
CN102945378B (en) Method for detecting potential target regions of remote sensing image on basis of monitoring method
Yao et al. A coarse-to-fine model for airport detection from remote sensing images using target-oriented visual saliency and CRF
CN103049763B (en) Context-constraint-based target identification method
Gao et al. A novel target detection method for SAR images based on shadow proposal and saliency analysis
CN106408030B (en) SAR image classification method based on middle layer semantic attribute and convolutional neural networks
Zhang et al. Study on traffic sign recognition by optimized Lenet-5 algorithm
CN111428631B (en) Visual identification and sorting method for unmanned aerial vehicle flight control signals
CN103310195A (en) LLC-feature-based weak-supervision recognition method for vehicle high-resolution remote sensing images
CN102722712A (en) Multiple-scale high-resolution image object detection method based on continuity
CN102629380B (en) Remote sensing image change detection method based on multi-group filtering and dimension reduction
CN104408482A (en) Detecting method for high-resolution SAR (Synthetic Aperture Radar) image object
CN103177458A (en) Frequency-domain-analysis-based method for detecting region-of-interest of visible light remote sensing image
CN104182985A (en) Remote sensing image change detection method
CN102542295A (en) Method for detecting landslip from remotely sensed image by adopting image classification technology
CN111079596A (en) System and method for identifying typical marine artificial target of high-resolution remote sensing image
CN108171119B (en) SAR image change detection method based on residual error network
CN104463248A (en) High-resolution remote sensing image airplane detecting method based on high-level feature extraction of depth boltzmann machine
CN104240257A (en) SAR (synthetic aperture radar) image naval ship target identification method based on change detection technology
CN105005761A (en) Panchromatic high-resolution remote sensing image road detection method in combination with significance analysis
CN102968786B (en) A kind of non-supervisory remote sensing images potential target method for detecting area
CN108734200A (en) Human body target visible detection method and device based on BING features
CN103413154A (en) Human motion identification method based on normalized class Google measurement matrix
CN104268557B (en) Polarization SAR sorting technique based on coorinated training and depth S VM
CN103366373B (en) Multi-time-phase remote-sensing image change detection method based on fuzzy compatible chart
CN108664969A (en) Landmark identification method based on condition random field

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150610

Termination date: 20191023