CN102945378B - Method for detecting potential target regions of remote sensing image on basis of monitoring method - Google Patents


Info

Publication number
CN102945378B
Authority
CN
China
Prior art date
Legal status
Expired - Fee Related
Application number
CN201210408888.3A
Other languages
Chinese (zh)
Other versions
CN102945378A (en)
Inventor
韩军伟
张鼎文
郭雷
周培诚
程塨
姚西文
Current Assignee
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201210408888.3A
Publication of CN102945378A
Application granted
Publication of CN102945378B


Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a method for detecting potential target regions in remote sensing images based on a supervised method. The method can be applied to detecting and locating various regions containing targets of interest in remote sensing images with complex backgrounds. It comprises the following steps: when detecting potential target regions in a remote sensing image, extracting the corresponding saliency feature components; training on the saliency features of a set of training samples to obtain the parameters of an SVM (support vector machine) classifier; applying the trained classifier to a test image to obtain its saliency map; and segmenting the saliency map with an adaptive threshold segmentation method to obtain a binary map of the potential target regions. The method achieves high detection precision and a low false-alarm rate, and shows clear advantages over conventional methods.

Description

A method for detecting potential target regions in remote sensing images based on a supervised method
Technical field
The invention belongs to the field of remote sensing image processing and relates to a method for detecting potential target regions in remote sensing images based on a supervised method. It can be applied to detecting and accurately locating multiple classes of target regions of interest in remote sensing images with complex backgrounds.
Background technology
Target detection in remote sensing images is a technology that has emerged with the development of remote sensing. It offers advantages such as long operating distance, wide coverage, and high execution efficiency, and has both military significance and civilian value. Target detection in complex-scene remote sensing images means that, during the analysis and interpretation of a remote sensing image, for one or several specific target classes, the key information useful for interpretation and reasoning is extracted automatically and its relevant attributes are analyzed and computed, producing evidence for further interpretation. The scenes are called complex precisely because remote sensing images cover large areas, contain many targets, and have complicated texture features, which makes recognition difficult.
Current remote sensing target detection algorithms mainly follow two lines of thought: bottom-up, driven by low-level image features, and top-down, driven by the task. A single remote sensing image often covers a wide scene, carries a large amount of information, and has complex texture and varied colors; if the useful parts of this information can be combined sensibly, satisfactory detection results can be obtained. If prior knowledge of the task-specific target is available, the amount of computation can be reduced and recognition accuracy increased. For example, for bridge and water-body detection, some researchers proposed, based on the characteristics of bridges and water areas, a watershed segmentation method built on the wavelet transform together with a knowledge-driven bridge detection method. They first build a bridge knowledge base for panchromatic high-resolution remote sensing images from bridge priors, use the wavelet transform for feature extraction and to segment the water areas, then apply mathematical morphology operations to connect the water areas, take the difference between the water areas before and after connection to obtain possible bridge fragments, detect bridge candidate regions from these fragments, and finally perform feature matching to detect the bridges. However, this type of algorithm has several defects. First, it must determine the water-area conditions from manually chosen initial seed points, then automatically split the river into two parts according to the seed-point positions and scan each part downstream from the seed points until the whole river has been scanned; such a semi-automatic method cannot meet the current demand for fully automatic target recognition. Second, the algorithm is only suited to detecting water bodies and bridges; if the target class changes, it cannot accurately complete the detection.
Summary of the invention
Technical problem to be solved
To overcome the deficiencies of the prior art, the present invention proposes a method for detecting potential multi-class target regions in remote sensing images based on a supervised method, which can automatically detect and locate potential multi-class target regions in remote sensing images with complex backgrounds and gives good detection results.
Technical scheme
A method for detecting potential target regions in remote sensing images based on a supervised method, characterized by the following steps:
Step 1, extract low-level saliency feature component maps: down-sample the input image to 200 × 200 pixels, then extract low-level saliency features for each pixel. The invention uses features that are generally acknowledged to be related to human visual attention and able to trigger biological stimulation, as follows:
1) Extract the 3 contrast feature components of the Itti model: the orientation contrast component, the intensity contrast component, and the color contrast component;
2) Extract the 3 color feature components: red, green, and blue;
3) Extract the 5 color probability feature components of the Judd model; these components are computed from the 3D color statistics of the image with median filters at 5 different scales;
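The per-pixel low-level feature extraction of Step 1 can be sketched as follows. This is a minimal illustration, not the patented implementation: the R/G/B channel maps follow the text directly, while the intensity-contrast map here is a crude global-mean deviation standing in for the Itti-model center-surround channels; the function name and the map dictionary are this sketch's own conventions.

```python
import numpy as np

def low_level_feature_maps(img):
    """Illustrative Step-1 feature maps for an H x W x 3 RGB image
    with float values in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    intensity = (r + g + b) / 3.0
    # Simple contrast map: deviation of each pixel from the global mean
    # intensity (a crude stand-in for the Itti center-surround channels).
    i_contrast = np.abs(intensity - intensity.mean())
    return {"R_map": r, "G_map": g, "B_map": b, "I_map": i_contrast}

maps = low_level_feature_maps(np.random.rand(200, 200, 3))
```

Each returned map has the same 200 × 200 shape as the down-sampled input, so the maps can later be stacked into per-pixel feature vectors for the classifier.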
Step 2, extract mid-level saliency feature component maps: down-sample the input image to 200 × 200 pixels, then select the models SR, SDS, FT, GBVS, and SWCR as the mid-level feature extraction methods, computing the saliency of the input image from different viewpoints such as the frequency domain, local contrast, center-surround contrast, and sparse representation, as follows:
1) SR extraction algorithm: set the scale parameter SR_scale = 3 and use the SR algorithm to obtain the saliency feature component map SR_map; the image is restored to its original size before the SR algorithm runs, and the Gaussian smoothing window size in the algorithm is set to gaussian_size = SR_scale × s, where s is a constant in [0.01, 0.5] used to adjust the window size;
2) SDS extraction algorithm: use the SDS algorithm to generate the saliency feature component map SDS_map;
3) FT extraction algorithm: use the FT algorithm to generate the FT saliency feature component map, with the Gaussian smoothing window size set to gaussian_size = dims × s;
4) GBVS extraction algorithm: use the GBVS algorithm to extract the saliency feature component map GBVS_map, setting params.LINE = 1 to add the straight-line detection channel and params.useIttiKochInsteadOfGBVS = 0 to compute with the random-field model;
5) SWCR extraction algorithm: use the SWCR algorithm to generate the saliency feature component map SWCR_map, setting patch_size = 25 and surroundratio = 5, where patch_size ∈ [5, 50] is the size of the image patches being compared and surroundratio ∈ [3, 9] is the extent of the surrounding region compared against the center patch;
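Of the mid-level channels above, the SR (spectral residual) map is simple enough to sketch directly from its published description. The following is a minimal, dependency-free illustration of the SR idea (Hou & Zhang, CVPR 2007) and not the patented code: the Gaussian smoothing window controlled by gaussian_size is replaced here by a 3 × 3 box blur, and the normalization step is this sketch's own addition.

```python
import numpy as np

def spectral_residual_saliency(gray):
    """Minimal sketch of an SR-style saliency map for a 2-D float array."""
    f = np.fft.fft2(gray)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)

    def box3(a):
        # 3x3 mean filter with edge replication (stand-in for Gaussian smoothing)
        p = np.pad(a, 1, mode="edge")
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

    residual = log_amp - box3(log_amp)                 # the spectral residual
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = box3(sal)                                    # smooth the saliency map
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
```

The residual is what remains of the log-amplitude spectrum after its locally smoothed version is subtracted; reconstructing with the original phase and squaring gives the saliency map.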
Step 3, train the classifier: randomly select 130 images from an image library of 150 images as training samples. First down-sample each training sample to 200 × 200 pixels, then manually mark the targets in each image to generate a ground-truth map (a binary map in which target pixels have value 255 and all other pixels have value 0). Randomly select a number of pixels from the target region and from the non-target region of each training image; for each selected pixel, use its saliency values in the corresponding feature component maps together with its value at the corresponding position in the ground-truth map as training data, feed them into an SVM classifier, and train to obtain the SVM classifier parameters;
Step 4, use the classifier obtained in Step 3 to perform saliency detection on the test images: use the remaining 20 images in the library as test samples. First down-sample each image to 200 × 200 pixels, then for each pixel form a vector x from its saliency values in the corresponding feature component maps, input it into the SVM classifier, and use the formula ω^T x + b to obtain the saliency map Smap of each image, where ω and b are the classifier parameters trained in Step 3;
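The train-and-score scheme of Steps 3 and 4 can be sketched as below. This is an illustrative stand-in, not the patent's SVM: regularized least squares is used in place of SVM training (both yield a linear scoring rule ω^T x + b), and all function names are this sketch's own.

```python
import numpy as np

def train_linear_scorer(X, y):
    """Learn weights (w, b) from per-pixel feature vectors X
    (n_samples x n_features) and labels y (1 = target, 0 = background).
    Ridge-regularized least squares stands in for the SVM here."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append bias column
    wb = np.linalg.solve(Xb.T @ Xb + 1e-3 * np.eye(Xb.shape[1]), Xb.T @ y)
    return wb[:-1], wb[-1]                          # w, b

def saliency_map(feature_maps, w, b):
    """Score every pixel: Smap(i, j) = w . x(i, j) + b, where x(i, j)
    stacks the pixel's values across the feature component maps."""
    stack = np.stack(feature_maps, axis=-1)         # H x W x n_features
    return stack @ w + b
```

Training data would be built exactly as Step 3 describes: each row of X is one sampled pixel's values across all feature component maps, and y is its ground-truth label; Step 4 then applies the learned (w, b) to every pixel of a test image.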
Step 5, salient-region segmentation: use the mean-shift algorithm to segment the original image into regions r_k, k = 1, 2, ..., K, where K is the total number of regions; then use the saliency map from Step 4 to compute the average saliency value V_k of each region:

V_k = (1 / |r_k|) Σ_{(i,j)∈r_k} m_{i,j}

Then use the average saliency value of each region to generate the segmented saliency map Smap_seg, and finally apply the adaptive threshold T_a to Smap_seg to obtain the binary map BinaryMap. The adaptive threshold T_a is set as:

T_a = (t / (W × H)) Σ_{x=0}^{W-1} Σ_{y=0}^{H-1} S(x, y)

BinaryMap(i, j) = 1 if S(x, y) ≥ T_a, and 0 if S(x, y) < T_a

Here |r_k| is the number of pixels in the k-th region, m_{i,j} is the saliency value at coordinate (i, j) of the saliency map; W and H are the pixel counts of the segmented saliency map Smap_seg along the x- and y-axes, and S(x, y) is the saliency value of Smap_seg at position (x, y); t is a constant parameter set to a value in [1, 2].
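The adaptive thresholding just defined can be sketched in a few lines. A minimal illustration, with the function name this sketch's own; note that t/(W × H) times the sum of all saliency values is simply t times the mean.

```python
import numpy as np

def adaptive_threshold_binarize(smap_seg, t=1.8):
    """T_a = t/(W*H) * sum of S(x, y); BinaryMap = 1 where S >= T_a, else 0.
    t = 1.8 is the value used in the embodiment; any t in [1, 2] is allowed."""
    T_a = t * smap_seg.mean()            # equals t/(W*H) * sum over all pixels
    return (smap_seg >= T_a).astype(np.uint8)

S = np.array([[0.0, 1.0], [0.0, 1.0]])
B = adaptive_threshold_binarize(S, t=1.8)   # mean 0.5 -> T_a = 0.9
```

With the toy map above only the pixels of value 1.0 exceed the threshold, so they alone are marked as potential target pixels.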
The Itti model is the computational model of the article "A model of saliency-based visual attention for rapid scene analysis".
The Judd model is the computational model of the article "Learning to predict where humans look".
The SR saliency feature extraction uses the SR algorithm proposed in the article "Saliency Detection: A Spectral Residual Approach".
The SDS algorithm is the one proposed in the article "Salient region detection and segmentation".
The FT algorithm is the one proposed in the article "Frequency-tuned salient region detection".
The GBVS algorithm is the improved GBVS algorithm proposed in the paper "Airport Detection in Remote Sensing Images Based on Visual Attention".
The SWCR algorithm is the one proposed in the article "Emergence of simple-cell receptive field properties by learning a sparse code for natural images".
The mean-shift algorithm is the one mentioned in the article "Frequency-tuned salient region detection".
Beneficial effects
The invention proposes a method, grounded in visual attention theory, for detecting potential target regions in remote sensing images based on a supervised method. When detecting the potential regions of multiple target classes in a remote sensing image, the corresponding saliency feature components are extracted first; an SVM classifier is then trained on the training data to obtain classifier parameters suited to remote sensing target classification; next, for each pixel of a test image, the corresponding saliency feature components are assembled into a vector and fed into the trained classifier, yielding the saliency map of the test image; finally, the saliency map is segmented with mean-shift and adaptive threshold segmentation, giving a binary map of the potential target regions. The method can be applied to detecting and locating potential multi-class target regions of interest in remote sensing images with complex backgrounds, and achieves higher detection precision and a lower false-alarm rate than conventional methods.
Brief description of the drawings
Fig. 1: overall flowchart of the method of the invention
Fig. 2: examples of experimental results
Fig. 3: comparison of ROC curves
Fig. 4: comparison of Precision-Recall curves
Embodiment
The invention is further described below with reference to the embodiments and the drawings:
The hardware environment for implementation is a computer with an Intel Pentium 2.93 GHz CPU and 2.0 GB of memory; the software environment is Matlab R2011b on Windows XP. 150 remote sensing images obtained from Google Earth were chosen for the multi-class target detection experiments, mainly containing three target classes: aircraft, ships, and oil depots.
The present invention is specifically implemented as follows:
1. Extract low-level saliency feature component maps: set dims = [200, 200] to down-sample the image, then extract low-level saliency features for each pixel, as follows:
● The 3 contrast feature components of the Itti model: the orientation contrast component O_map, the intensity contrast component I_map, and the color contrast component C_map;
For the Itti model see the paper: L. Itti, C. Koch, and E. Niebur, "A model of saliency-based visual attention for rapid scene analysis", IEEE Transactions on PAMI, 20(11), 1998;
● The red, green, and blue color feature components: R_map, G_map, B_map;
● The 5 color probability feature components computed with the Judd model, with the scale parameter set to m = [0 2 4 8 16]:
Chist_1, Chist_2, Chist_3, Chist_4, Chist_5;
For the Judd model see the paper: T. Judd, K. Ehinger, F. Durand, and A. Torralba, "Learning to predict where humans look", ICCV, 2009;
2. Extract mid-level saliency feature component maps: set dims = [200, 200] to down-sample the image, then select the models SR, SDS, FT, GBVS, and SWCR as the mid-level feature extraction methods, as follows:
● SR algorithm: set the scale parameter SR_scale = 3 and use the SR algorithm to obtain the saliency feature component map SR_map; the image is restored to its original size before the SR algorithm runs, and the Gaussian smoothing window size in the algorithm is set to gaussian_size = SR_scale × s, where s is a constant in [0.01, 0.5] used to adjust the window size.
For the SR algorithm see the paper: X. Hou and L. Zhang, "Saliency Detection: A Spectral Residual Approach", IEEE Conference on Computer Vision and Pattern Recognition, 2007.
● SDS algorithm: use the SDS algorithm to generate the saliency feature component map SDS_map;
For the SDS algorithm see the paper: R. Achanta, F. Estrada, P. Wils, and S. Süsstrunk, "Salient region detection and segmentation", International Conference on Computer Vision Systems, 2008.
● FT algorithm: use the FT algorithm to generate the FT saliency feature component map, with the Gaussian smoothing window size set to gaussian_size = dims × s;
For the FT algorithm see the paper: R. Achanta, S. Hemami, F. Estrada, and S. Süsstrunk, "Frequency-tuned salient region detection", CVPR, 2009.
● GBVS algorithm: use the GBVS algorithm to extract the saliency feature component map GBVS_map, setting params.LINE = 1 to add the straight-line detection channel and params.useIttiKochInsteadOfGBVS = 0 to compute with the random-field model;
For the improved GBVS algorithm see the paper: Xin Wang, Bin Wang, and Liming Zhang, ICONIP 3, volume 7064 of Lecture Notes in Computer Science, pages 475-484, Springer, 2008.
● SWCR algorithm: use the SWCR algorithm to generate the saliency feature component map SWCR_map, setting patch_size = 25 and surroundratio = 5, where patch_size ∈ [5, 50] is the size of the image patches being compared and surroundratio ∈ [3, 9] is the extent of the surrounding region compared against the center patch;
For the SWCR algorithm see the paper: Biao Han, Hao Zhu, and Youdong Ding, "Bottom-up saliency based on weighted sparse coding residual", ACM Multimedia 2011, 1117-1120.
3. Train the classifier: randomly select 130 images from the library of 150 images as training samples. First set dims = [200, 200] to down-sample the images, then manually mark the targets in each image to generate a ground-truth map (a binary map in which target pixels have value 255 and all other pixels have value 0). From the target region of each training image randomly select num_target ∈ [50, 500] pixels, and from the non-target region num_back ∈ [50, 500] pixels; for each selected pixel use its saliency values in the corresponding feature component maps together with its value at the corresponding position in the ground-truth map as training data, feed them into an SVM classifier, and train to obtain the SVM classifier parameters.
4. Saliency detection: use the classifier obtained in step 3 to perform saliency detection on the test images, taking the remaining 20 images in the library as test samples. First set dims = [200, 200] to down-sample the images, then for each pixel form a vector x from its saliency values in the corresponding feature component maps, input it into the SVM classifier, and use the formula ω^T x + b to obtain the saliency map Smap of each image, where ω and b are the classifier parameters trained in step 3.
5. Salient-region segmentation: the original image is first segmented with the mean-shift algorithm into regions r_k, k = 1, 2, ..., K; the saliency map from step 4 is then used to compute the average saliency value V_k of each region:

V_k = (1 / |r_k|) Σ_{(i,j)∈r_k} m_{i,j}

The regions and their average saliency values are used to generate the segmented saliency map Smap_seg, which is finally binarized with the adaptive threshold T_a to obtain the binary map BinaryMap. The adaptive threshold T_a is set as:

T_a = (t / (W × H)) Σ_{x=0}^{W-1} Σ_{y=0}^{H-1} S(x, y)

BinaryMap(i, j) = 1 if S(x, y) ≥ T_a, and 0 if S(x, y) < T_a

Here |r_k| is the number of pixels in the k-th region and m_{i,j} is the saliency value at coordinate (i, j) of the saliency map; W and H are the pixel counts of the segmented saliency map Smap_seg along the x- and y-axes, and S(x, y) is the saliency value of Smap_seg at position (x, y); t is a constant parameter, set here to t = 1.8. For the salient-region segmentation algorithm see the paper: R. Achanta, S. Hemami, F. Estrada, and S. Süsstrunk, "Frequency-tuned salient region detection", CVPR, 2009.
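The region-pooling part of step 5, where each pixel's saliency is replaced by the average saliency V_k of its mean-shift region, can be sketched as follows. A minimal illustration assuming the mean-shift segmentation is already available as an integer label image; the function name is this sketch's own.

```python
import numpy as np

def region_average_saliency(smap, labels):
    """Given a pixel-wise saliency map `smap` and a region label image
    `labels` (same shape, integer region ids 0..K-1), replace each pixel's
    saliency by its region's average V_k, yielding Smap_seg."""
    K = labels.max() + 1
    sums = np.bincount(labels.ravel(), weights=smap.ravel(), minlength=K)
    counts = np.bincount(labels.ravel(), minlength=K)
    v = sums / np.maximum(counts, 1)     # V_k = (1/|r_k|) * sum of m_{i,j}
    return v[labels]                     # broadcast region means back to pixels
```

Pooling saliency over homogeneous regions before thresholding is what lets the method output whole candidate regions rather than scattered salient pixels.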
The ROC curve and the Precision-Recall curve are used to assess the validity of the saliency maps obtained by the invention. The ROC curve plots the relationship between the false-positive rate (FPR) and the true-positive rate (TPR) as the segmentation threshold varies; the Precision-Recall curve plots the relationship between recall (TPR) and precision (Preci) as the threshold varies. The quantities are computed as:

FPR = FP / N
TPR = TP / P
Preci = TP / (TP + FP)

where FP is the number of false positives detected and N the non-target area in the ground truth; TP is the number of true positives detected and P the target area in the ground truth.
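The three evaluation formulas above can be computed pixel-wise from a binary detection map and a binary ground-truth map; a minimal sketch (function name this sketch's own):

```python
import numpy as np

def detection_metrics(binary_map, ground_truth):
    """FPR = FP/N, TPR = TP/P, Preci = TP/(TP + FP), computed pixel-wise
    from two 0/1 arrays of the same shape."""
    tp = np.sum((binary_map == 1) & (ground_truth == 1))   # true positives
    fp = np.sum((binary_map == 1) & (ground_truth == 0))   # false positives
    p = np.sum(ground_truth == 1)      # target pixels in the ground truth
    n = np.sum(ground_truth == 0)      # non-target pixels
    return fp / n, tp / p, tp / max(tp + fp, 1)            # FPR, TPR, Preci
```

Sweeping the segmentation threshold and recording (FPR, TPR) traces the ROC curve, while recording (TPR, Preci) traces the Precision-Recall curve.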
Fig. 2 shows some experimental results of the method. They indicate that the invention provides an effective method for detecting potential target regions in remote sensing images; comparing its ROC and Precision-Recall curves with those of other existing methods gives a more intuitive comparison of the results (see Figs. 3 and 4). To compare the various saliency detection algorithms quantitatively, the AUC value of the ROC curve is used as an evaluation index (Table 1); this index clearly demonstrates the superiority of the method.
Table 1: comparison of AUC values

Claims (1)

1. A method for detecting potential target regions in remote sensing images based on a supervised method, characterized by the following steps:
Step 1, extract low-level saliency feature component maps: down-sample the input image to 200 × 200 pixels, then extract low-level saliency features for each pixel, as follows:
1) Extract the 3 contrast feature components of the Itti model: the orientation contrast component, the intensity contrast component, and the color contrast component;
2) Extract the 3 color feature components: red, green, and blue;
3) Extract the 5 color probability feature components of the Judd model; these components are computed from the 3D color statistics of the image with median filters at 5 different scales;
Step 2, extract mid-level saliency feature component maps: down-sample the input image to 200 × 200 pixels, i.e. dims = [200, 200], then select the models SR, SDS, FT, GBVS, and SWCR as the mid-level feature extraction methods, as follows:
1) SR extraction algorithm: set the scale parameter SR_scale = 3 and use the SR algorithm to obtain the saliency feature component map SR_map; the image is restored to its original size before the SR algorithm runs, and the Gaussian smoothing window size in the algorithm is set to gaussian_size = SR_scale × s, where s is a constant in [0.01, 0.5] used to adjust the window size;
2) SDS extraction algorithm: use the SDS algorithm to generate the saliency feature component map SDS_map;
3) FT extraction algorithm: use the FT algorithm to generate the FT saliency feature component map, with the Gaussian smoothing window size set to gaussian_size = dims × s;
4) GBVS extraction algorithm: use the GBVS algorithm to extract the saliency feature component map GBVS_map, setting params.LINE = 1 to add the straight-line detection channel and params.useIttiKochInsteadOfGBVS = 0 to compute with the random-field model;
5) SWCR extraction algorithm: use the SWCR algorithm to generate the saliency feature component map SWCR_map, setting patch_size = 25 and surroundratio = 5, where patch_size is the size of the image patches being compared and surroundratio is the extent of the surrounding region compared against the center patch;
Step 3, train the classifier: randomly select 130 images from an image library of 150 images as training samples; first down-sample each training sample to 200 × 200 pixels, then generate a ground-truth map from the targets in each image; randomly select pixels from the target region and from the non-target region of each training image; for each selected pixel, use its saliency values in the corresponding feature component maps together with its value at the corresponding position in the ground-truth map as training data, feed them into an SVM classifier, and train to obtain the SVM classifier parameters; the ground-truth map is a binary map in which target pixels have value 255 and all other pixels have value 0;
Step 4, use the classifier obtained in Step 3 to perform saliency detection on the test images: use the remaining 20 images in the library as test samples; first down-sample each image to 200 × 200 pixels, then for each pixel form a vector x from its saliency values in the corresponding feature component maps, input it into the SVM classifier, and use the formula ω^T x + b to obtain the saliency map Smap of each image, where ω and b are the classifier parameters trained in Step 3;
Step 5, salient-region segmentation: use the mean-shift algorithm to segment the original image into regions r_k, k = 1, 2, ..., K, where K is the total number of regions; then use the saliency map from Step 4 to compute the average saliency value V_k of each region:

V_k = (1 / |r_k|) Σ_{(i,j)∈r_k} m_{i,j}

Then use the average saliency value of each region to generate the segmented saliency map Smap_seg, and finally apply the adaptive threshold T_a to Smap_seg to obtain the binary map BinaryMap; the adaptive threshold T_a is set as:

T_a = (t / (W × H)) Σ_{x=0}^{W-1} Σ_{y=0}^{H-1} S(x, y)

BinaryMap(i, j) = 1 if S(x, y) ≥ T_a, and 0 if S(x, y) < T_a

where |r_k| is the number of pixels in the k-th region and m_{i,j} is the saliency value at coordinate (i, j) of the saliency map; W and H are the pixel counts of the segmented saliency map Smap_seg along the x- and y-axes, and S(x, y) is the saliency value of Smap_seg at position (x, y); t is a constant parameter set to a value in [1, 2];
The Itti model is the computational model of the article L. Itti, C. Koch, and E. Niebur, "A model of saliency-based visual attention for rapid scene analysis", IEEE Transactions on PAMI, 20(11), 1998; the Judd model is the computational model of the article T. Judd, K. Ehinger, F. Durand, and A. Torralba, "Learning to predict where humans look", ICCV, 2009; the SR saliency feature extraction uses the SR algorithm proposed in X. Hou and L. Zhang, "Saliency Detection: A Spectral Residual Approach", IEEE Conference on Computer Vision and Pattern Recognition, 2007; the SDS algorithm is the one proposed in R. Achanta, F. Estrada, P. Wils, and S. Süsstrunk, "Salient region detection and segmentation", International Conference on Computer Vision Systems, 2008; the FT algorithm is the one proposed in R. Achanta, S. Hemami, F. Estrada, and S. Süsstrunk, "Frequency-tuned salient region detection", CVPR, 2009; the GBVS algorithm is the improved GBVS algorithm proposed in Xin Wang, Bin Wang, and Liming Zhang, ICONIP 3, volume 7064 of Lecture Notes in Computer Science, pages 475-484, Springer, 2008; the SWCR algorithm is the one proposed in Biao Han, Hao Zhu, and Youdong Ding, "Bottom-up saliency based on weighted sparse coding residual", ACM Multimedia 2011, 1117-1120; the mean-shift algorithm is the one mentioned in R. Achanta, S. Hemami, F. Estrada, and S. Süsstrunk, "Frequency-tuned salient region detection", CVPR, 2009.
CN201210408888.3A 2012-10-23 2012-10-23 Method for detecting potential target regions of remote sensing image on basis of monitoring method Expired - Fee Related CN102945378B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210408888.3A CN102945378B (en) 2012-10-23 2012-10-23 Method for detecting potential target regions of remote sensing image on basis of monitoring method


Publications (2)

Publication Number Publication Date
CN102945378A CN102945378A (en) 2013-02-27
CN102945378B true CN102945378B (en) 2015-06-10

Family

ID=47728317

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210408888.3A Expired - Fee Related CN102945378B (en) 2012-10-23 2012-10-23 Method for detecting potential target regions of remote sensing image on basis of monitoring method

Country Status (1)

Country Link
CN (1) CN102945378B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103310195B (en) * 2013-06-09 2016-12-28 Northwestern Polytechnical University Weakly-supervised vehicle recognition method for high-resolution remote sensing images based on LLC features
CN104089925B (en) * 2014-06-30 2016-04-13 South China University of Technology Target region extraction method for detecting peeled-shrimp quality based on hyperspectral imaging
CN104252624B (en) * 2014-08-29 2017-07-07 Xi'an Institute of Space Radio Technology Positioning and extraction method for spaceborne regional point-target images
CN104217440B (en) * 2014-09-28 2017-03-01 National Disaster Reduction Center of the Ministry of Civil Affairs Method for extracting built-up areas from remote sensing images
CN104408712B (en) * 2014-10-30 2017-05-24 Northwestern Polytechnical University Information fusion-based hidden Markov salient region detection method
CN104933435B (en) * 2015-06-25 2018-08-28 China Jiliang University Machine vision construction method based on simulated human vision
CN104992183B (en) * 2015-06-25 2018-08-28 China Jiliang University Automatic detection method for salient targets in natural scenes
US10210393B2 (en) * 2015-10-15 2019-02-19 Schneider Electric USA, Inc. Visual monitoring system for a load center
CN106056084B (en) * 2016-06-01 2019-05-24 North China University of Technology Remote sensing image port ship detection method based on multi-resolution hierarchical screening
CN107766810B (en) * 2017-10-10 2021-05-14 Hunan Institute of Surveying and Mapping Science and Technology Cloud and shadow detection method
CN108596893B (en) * 2018-04-24 2022-04-08 Northeastern University Image processing method and system
CN109977892B (en) * 2019-03-31 2020-11-10 Xidian University Ship detection method based on local saliency features and CNN-SVM

Citations (1)

Publication number Priority date Publication date Assignee Title
CN102289657A (en) * 2011-05-12 2011-12-21 Xidian University Breast X-ray image mass detection system based on visual attention mechanism

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US8175376B2 (en) * 2009-03-09 2012-05-08 Xerox Corporation Framework for image thumbnailing based on visual similarity


Non-Patent Citations (6)

Title
Itti, L., et al. "A model of saliency-based visual attention for rapid scene analysis." IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, pp. 1254-1259. *
Achanta, R., et al. "Frequency-tuned salient region detection." IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2009), 2009, pp. 1597-1604. *
Judd, T., et al. "Learning to predict where humans look." IEEE International Conference on Computer Vision, 2009, pp. 2106-2113. *
Achanta, Radhakrishna, et al. "Salient region detection and segmentation." Computer Vision Systems, Springer Berlin Heidelberg, 2008, pp. 66-75. *
Hou, Xiaodi, et al. "Saliency Detection: A Spectral Residual Approach." IEEE Conference on Computer Vision and Pattern Recognition, 2007, pp. 1-8. *
Wang, Xin, et al. "Airport detection in remote sensing image based on visual attention." Neural Information Processing, Springer Berlin Heidelberg, 2011, pp. 475-484. *

Also Published As

Publication number Publication date
CN102945378A (en) 2013-02-27

Similar Documents

Publication Publication Date Title
CN102945378B (en) Method for detecting potential target regions of remote sensing image on basis of monitoring method
US11455735B2 (en) Target tracking method, device, system and non-transitory computer readable storage medium
Yao et al. A coarse-to-fine model for airport detection from remote sensing images using target-oriented visual saliency and CRF
CN106909902B (en) Remote sensing target detection method based on an improved hierarchical saliency model
Gao et al. A novel target detection method for SAR images based on shadow proposal and saliency analysis
CN103049763B (en) Context-constraint-based target identification method
Zhang et al. Study on traffic sign recognition by optimized LeNet-5 algorithm
Romdhane et al. An improved traffic signs recognition and tracking method for driver assistance system
Asokan et al. Machine learning based image processing techniques for satellite image analysis: a survey
CN103366373B (en) Multi-temporal remote sensing image change detection method based on fuzzy compatibility graph
CN103310195A (en) Weakly-supervised vehicle recognition method for high-resolution remote sensing images based on LLC features
CN111428631B (en) Visual identification and sorting method for unmanned aerial vehicle flight control signals
Deng et al. Cloud detection in satellite images based on natural scene statistics and gabor features
CN111079596A (en) System and method for identifying typical man-made marine targets in high-resolution remote sensing images
CN104182985A (en) Remote sensing image change detection method
Wang et al. Airport detection in remote sensing images based on visual attention
Cheng et al. Efficient sea–land segmentation using seeds learning and edge directed graph cut
Zhao et al. Multiresolution airport detection via hierarchical reinforcement learning saliency model
CN104463248A (en) High-resolution remote sensing image airplane detection method based on high-level features extracted by a deep Boltzmann machine
Li et al. SDBD: A hierarchical region-of-interest detection approach in large-scale remote sensing image
CN102968786B (en) Unsupervised method for detecting potential target regions in remote sensing images
CN103617413A (en) Method for identifying object in image
CN105512622A (en) Visible remote-sensing image sea-land segmentation method based on image segmentation and supervised learning
Zhang et al. A runway detection method based on classification using optimized polarimetric features and HOG features for PolSAR images
Mannan et al. Classification of degraded traffic signs using flexible mixture model and transfer learning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 2015-06-10
Termination date: 2019-10-23