CN102567735B - Method for automatically picking up control point sections of remote sensing images - Google Patents

Method for automatically picking up control point sections of remote sensing images Download PDF

Info

Publication number
CN102567735B
CN102567735B (application CN201010615036A)
Authority
CN
China
Prior art keywords
image
symbol
coordinate
value
sigma
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 201010615036
Other languages
Chinese (zh)
Other versions
CN102567735A (en)
Inventor
张翰墨
尤红建
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Electronics of CAS
Original Assignee
Institute of Electronics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Electronics of CAS filed Critical Institute of Electronics of CAS
Priority to CN 201010615036 priority Critical patent/CN102567735B/en
Publication of CN102567735A publication Critical patent/CN102567735A/en
Application granted granted Critical
Publication of CN102567735B publication Critical patent/CN102567735B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method for automatically extracting control-point slices from remote sensing images, and relates to digital remote sensing image processing. For a large remote sensing image, texture features of candidate image chips (the autocorrelation coefficients and the sum-probability distribution variance of the gray-level co-occurrence matrix) are computed, and chips with strong texture that meet the requirements of a control-point (ground control point, GCP) slice are screened according to these regional characteristics, so that control-point slices are extracted automatically. By applying the image information expressed by texture features, the method achieves effective, automatic extraction of control-point slices from remote sensing images.

Description

Method for automatically extracting control-point slices from remote sensing images
Technical field
The present invention relates to the technical field of remote sensing image processing, and provides a method for automatically extracting control-point slices from remote sensing images based on the image autocorrelation coefficient and the sum-probability distribution variance of the gray-level co-occurrence matrix.
Background technology
With the continuous increase in the number of remote sensing satellites in orbit, the volume of acquired remote sensing imagery grows daily, and using control-point slices of remote sensing images for image matching and image information storage has become an important foundation of remote sensing image processing. Because a control-point slice must contain distinctive ground-object information, it is generally extracted manually or semi-automatically.
When extracting control-point slices, the role of the slice must be considered: it mainly provides a reference image for processing such as image matching, so its content must contain distinctive ground objects, for example roads, road junctions, parking lots, lakes, and airports, which usually requires the image to have strong texture. The extraction methods in general use at present are mostly manual or semi-automatic; their efficiency is low and the workload is huge. Moreover, when the number of images grows quickly, slices cannot be extracted in time, so real-time performance is poor.
Summary of the invention
The purpose of the invention is to provide a method for automatically extracting control-point slices from remote sensing images, to solve the problems that manual extraction of control-point slices is time-consuming and labor-intensive and that its real-time performance is poor.
To achieve the above purpose, the technical solution of the invention is:
A method for automatically extracting control-point slices from remote sensing images, comprising:
Step 1: automatically extract candidate image slices and normalize them;
Step 2: compute the texture feature values of each candidate slice, comprising the autocorrelation coefficients and the sum-probability variance based on the gray-level co-occurrence matrix of the image;
Step 3: judge the feature values obtained: the criterion is that any one of the five autocorrelation coefficients is greater than 0.25 and, on that basis, that the sum-probability variance of the gray-level co-occurrence matrix is greater than 1; a slice satisfying both is judged to have strong texture, to meet the characteristics of a control-point slice, and to be a qualified image slice.
In the method, step 1 extracts image slices by traversing the original remote sensing image and takes them as candidates. For an image slice f, the normalization formula is:
f(i,j) = (f(i,j) − μ) / σ
where (i,j) is the image pixel coordinate with range [0, …, N−1], f(i,j) is the gray value at coordinate (i,j), the symbol / denotes division, and μ is the mean gray value, computed as:
μ = [ Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} f(i,j) ] / [ (N−1) × (N−1) ]
where the symbol Σ denotes summation and × denotes multiplication;
σ is the standard deviation of the gray values, computed as:
σ = √{ [ Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} (f(i,j) − μ)² ] / [ (N−1) × (N−1) ] }
where the superscript 2 denotes squaring.
In the method, step 2 computes the feature values of each candidate slice as follows:
A) Compute five autocorrelation coefficients of the candidate slice. For an image slice f, the autocorrelation coefficient is:
ρ(Δx, Δy) = [ Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} f(i,j) × f(i+Δx, j+Δy) ] / [ Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} f²(i,j) ]
where (i,j) is the image pixel coordinate with range [0, …, N−1], f(i,j) is the gray value at (i,j), and Δx, Δy are the image displacements in the x and y directions. The displacement l used is 10, and four directions θ_1, θ_2, θ_3, θ_4 are chosen with values 0°, 45°, 90°, 135°; the displacements Δx, Δy in the four directions are obtained as:
Δx = l × cos θ_m
Δy = l × sin θ_m
where m = 1, 2, 3, 4. Four autocorrelation coefficients ρ_1, ρ_2, ρ_3, ρ_4 are computed and then averaged to obtain the fifth coefficient ρ_5.
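A minimal Python sketch of the five autocorrelation coefficients follows. Treating shifted pixels that fall outside the slice as zero, and rounding l·cosθ and l·sinθ to integer pixel offsets, are assumptions the text does not spell out.

```python
import math

def autocorrelation(f, dx, dy):
    """rho(dx, dy) for an N x N slice; out-of-range pixels count as 0."""
    n = len(f)
    num = 0.0
    den = 0.0
    for i in range(n):
        for j in range(n):
            den += f[i][j] ** 2
            ii, jj = i + dx, j + dy
            if 0 <= ii < n and 0 <= jj < n:
                num += f[i][j] * f[ii][jj]
    return num / den

def five_rhos(f, l=10):
    """rho1..rho4 at 0/45/90/135 degrees, plus rho5 = their average."""
    rhos = []
    for theta in (0, 45, 90, 135):
        dx = round(l * math.cos(math.radians(theta)))
        dy = round(l * math.sin(math.radians(theta)))
        rhos.append(autocorrelation(f, dx, dy))
    rhos.append(sum(rhos) / 4.0)
    return rhos
```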
B) Compute the sum-probability variance of the candidate slice based on the gray-level co-occurrence matrix. First the gray-level co-occurrence matrix C of the image f is computed; in the standard formulation (the formula is reproduced only as an image in the source) it counts pixel pairs:
C(p, q) = Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} [ f(i,j) = p and f(i+Δx, j+Δy) = q ]
where (i,j) are the coordinate values of image f in the x and y directions with range [0, …, N−1], f(i,j) is the gray value at (i,j), Δx, Δy are the displacements in the x and y directions, Σ denotes summation, and p, q are the gray values at coordinates (i,j) and (i+Δx, j+Δy) of image f respectively, which are also the coordinates of the co-occurrence matrix C. Before computing C the image is gray-level compressed; the number of gray levels gl used is 16, and the compression is:
f(i,j) = f(i,j) / 256 × (gl − 1)
where / denotes division and × denotes multiplication;
The displacement l used here is 6, and four directions θ_1, θ_2, θ_3, θ_4 are chosen with values 0°, 45°, 90°, 135°; the displacements Δx, Δy in the four directions are obtained as:
Δx = l × cos θ_m
Δy = l × sin θ_m
where m = 1, 2, 3, 4. The gray-level co-occurrence matrix is computed in each direction, and then its sum distribution p_{i+j}(t) is computed by summing the entries of C whose coordinates add up to t:
p_{i+j}(t) = Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} C(i,j),  with i + j = t
where (i,j) are the coordinates of the co-occurrence matrix with range [0, …, N−1], and t = i + j is the sum index with range [0, …, 2N−2];
From the sum distribution p_{i+j}(t) in each of the four directions, its entropy is computed:
entropy = − Σ_{t=0}^{2N−2} p_{i+j}(t) × log p_{i+j}(t)
where log denotes the natural logarithm. From the entropy thus obtained, the sum-probability distribution variance is then computed:
t_sv = Σ_{t=0}^{2N−2} (t − entropy)² × p_{i+j}(t)
Finally, the values computed for the four directions are averaged to obtain the feature value T_sv.
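The feature-B computation can be sketched as follows for one direction. Normalizing C by the number of pixel pairs (so that p_{i+j} is a probability distribution) and skipping zero entries inside the entropy sum are assumptions the formulas leave implicit; gl = 16 follows the text.

```python
import math

GL = 16  # gray levels after compression, as in the text

def compress(f):
    """Gray-level compression f/256 * (gl-1), truncated to integers."""
    return [[int(v / 256 * (GL - 1)) for v in row] for row in f]

def glcm(f, dx, dy):
    """Co-occurrence matrix of a compressed slice (values < GL),
    normalized to probabilities over the counted pixel pairs."""
    n = len(f)
    c = [[0.0] * GL for _ in range(GL)]
    pairs = 0
    for i in range(n):
        for j in range(n):
            ii, jj = i + dx, j + dy
            if 0 <= ii < n and 0 <= jj < n:
                c[f[i][j]][f[ii][jj]] += 1
                pairs += 1
    if pairs:
        for p in range(GL):
            for q in range(GL):
                c[p][q] /= pairs  # assumed normalization
    return c

def sum_prob_variance(c):
    """Sum distribution p_{i+j}(t), its entropy, and t_sv."""
    psum = [0.0] * (2 * GL - 1)
    for p in range(GL):
        for q in range(GL):
            psum[p + q] += c[p][q]
    entropy = -sum(x * math.log(x) for x in psum if x > 0)
    return sum((t - entropy) ** 2 * psum[t] for t in range(2 * GL - 1))
```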
In the method, step 3 applies the criterion to the feature values obtained in step 2: first, among the five values ρ_1, ρ_2, ρ_3, ρ_4, ρ_5, as long as any one is greater than 0.25, the slice enters the next round of screening; next the feature T_sv of the slice is tested and required to be greater than 1; a slice with T_sv greater than 1 is taken as a qualified control-point (GCP) image slice.
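The two-stage test above reduces to a one-line predicate. The thresholds are the patent's 0.25 and 1; exposing them as parameters is merely for illustration.

```python
def is_gcp_slice(rhos, t_sv, rho_thresh=0.25, var_thresh=1.0):
    """Accept a slice when any of rho1..rho5 exceeds 0.25 AND the
    sum-probability distribution variance T_sv exceeds 1."""
    return any(r > rho_thresh for r in rhos) and t_sv > var_thresh
```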
The method takes the essential texture characteristics of the image into account, using the autocorrelation coefficient and statistical features based on the gray-level co-occurrence matrix among image texture features; qualified control-point (GCP) slices are obtained by judging the magnitudes of the computed image autocorrelation coefficients and of the sum-probability distribution variance based on the gray-level co-occurrence matrix.
The automatic slice extraction of the method improves on the existing manual and semi-automatic extraction techniques, raising work efficiency and ensuring real-time processing.
Description of drawings
Fig. 1 is a flow diagram of the method of the invention for automatically extracting control-point slices from remote sensing images;
Fig. 2 shows images obtained in the first and second steps of the method, where:
Fig. 2A1 is the candidate image slice of slice A;
Fig. 2A2 is the normalization result of slice A;
Fig. 2B1 is the candidate image slice of slice B;
Fig. 2B2 is the normalization result of slice B.
Embodiment
With reference to Fig. 1, the method of the invention for automatically extracting control-point slices from remote sensing images proceeds as follows:
Step 1: automatically extract candidate image slices and normalize them. First the image is traversed and slices are extracted automatically. For each image slice, the mean μ is computed first:
μ = [ Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} f(i,j) ] / [ (N−1) × (N−1) ]
where (i,j) is the image pixel coordinate with range [0, …, N−1] and f(i,j) is the gray value at (i,j).
The standard deviation σ of the gray values is computed as:
σ = √{ [ Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} (f(i,j) − μ)² ] / [ (N−1) × (N−1) ] }
Normalization is then carried out:
f(i,j) = (f(i,j) − μ) / σ
Step 2: compute the texture feature values of each candidate slice, comprising the autocorrelation coefficients and the sum-probability variance based on the gray-level co-occurrence matrix.
In the computation of the autocorrelation coefficients, the displacement l used is 10, and four directions θ_1, θ_2, θ_3, θ_4 are chosen with values 0°, 45°, 90°, 135°; the displacements Δx, Δy in the four directions are obtained as:
Δx = l × cos θ_m
Δy = l × sin θ_m
where m = 1, 2, 3, 4. Four autocorrelation coefficients ρ_1, ρ_2, ρ_3, ρ_4 are computed and then averaged to obtain the fifth coefficient ρ_5.
In the computation of the sum-probability variance of the gray-level co-occurrence matrix, the image slice is first gray-level compressed; the number of gray levels gl used in this method is 16, and the compression is:
f(i,j) = f(i,j) / 256 × (gl − 1)
The displacement l used here is 6, with the same four directions 0°, 45°, 90°, 135° and displacements Δx = l × cos θ_m, Δy = l × sin θ_m for m = 1, 2, 3, 4. The gray-level co-occurrence matrix is computed in each direction, and then its sum distribution p_{i+j}(t) is computed:
p_{i+j}(t) = Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} C(i,j),  with i + j = t
where (i,j) are the coordinates of the co-occurrence matrix C with range [0, …, N−1], and t = i + j has range [0, …, 2N−2].
From the sum distribution p_{i+j}(t) in each of the four directions, its entropy is computed:
entropy = − Σ_{t=0}^{2N−2} p_{i+j}(t) × log p_{i+j}(t)
where log denotes the natural logarithm. From the entropy thus obtained, the sum-probability distribution variance is then computed:
t_sv = Σ_{t=0}^{2N−2} (t − entropy)² × p_{i+j}(t)
Finally, the values computed for the four directions are averaged to obtain the sum-probability variance feature value T_sv of the gray-level co-occurrence matrix.
Step 3: judge the feature values obtained: the criterion is that any one of the five autocorrelation coefficients is greater than 0.25 and, on that basis, that the sum-probability variance of the gray-level co-occurrence matrix is greater than 1; a slice satisfying both is judged to have strong texture, to meet the characteristics of a control-point slice, and to be a qualified image slice. Referring to Fig. 2: Fig. 2A1 is the candidate image slice of slice A and Fig. 2A2 its normalization result; Fig. 2B1 is the candidate image slice of slice B and Fig. 2B2 its normalization result. Fig. 2A2 and Fig. 2B2 have been normalized so that their gray values are suitable for the subsequent computation and screening.
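The traversal of step 1 can be sketched as a sliding window over the full image. The slice size, the stride, and the `keep` callback (where the normalization, feature computation, and step-3 thresholds would be applied) are illustrative assumptions; the patent fixes neither the slice size nor the step of the traversal.

```python
def traverse_slices(image, size=32, stride=32, keep=lambda sl: True):
    """Return (row, col, slice) for every size x size window of the image
    accepted by the keep() test (e.g. the step-3 threshold check)."""
    h, w = len(image), len(image[0])
    out = []
    for r in range(0, h - size + 1, stride):
        for c in range(0, w - size + 1, stride):
            sl = [row[c:c + size] for row in image[r:r + size]]
            if keep(sl):  # normalize + texture features + thresholds go here
                out.append((r, c, sl))
    return out
```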
An example is given in Table 1 below:
Table 1
(Table 1 appears only as an image in the source.)
The automatic slice extraction technique of the invention screens image chips effectively according to the image content expressed by the texture information of the slices, finally obtaining suitable control-point slice images. This both guarantees the validity of the extracted control-point slices, i.e. that they contain distinctive ground-object information, and greatly improves the efficiency of the extraction work; see Table 2 below.
Table 2
(Table 2 appears only as an image in the source.)

Claims (3)

1. A method for automatically extracting control-point slices from remote sensing images, characterized in that it comprises:
Step 1: automatically extracting candidate image slices and normalizing them;
Step 2: computing the texture feature values of each candidate slice, comprising five autocorrelation coefficients of the image and the sum-probability variance based on the gray-level co-occurrence matrix;
Step 3: according to the feature values obtained, judging whether the slice has strong texture and meets the characteristics of a control-point slice, and thereby determining the qualified image slices; wherein, if any one of the five autocorrelation coefficients of a candidate slice is greater than 0.25 and the sum-probability variance of its gray-level co-occurrence matrix is greater than 1, the slice is judged to have strong texture, to meet the characteristics of a control-point slice, and to be a qualified image slice;
wherein the texture feature values of the candidate slice in step 2 are computed as follows:
A) computing five autocorrelation coefficients of the candidate slice; for an image slice f, the autocorrelation coefficient is:
ρ(Δx, Δy) = [ Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} f(i,j) × f(i+Δx, j+Δy) ] / [ Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} f²(i,j) ]
where (i,j) is the image pixel coordinate with range [0, …, N−1], f(i,j) is the gray value at (i,j), and Δx, Δy are the image displacements in the x and y directions; the displacement l used is 10, four directions θ_1, θ_2, θ_3, θ_4 are chosen with values 0°, 45°, 90°, 135°, and the displacements Δx, Δy in the four directions are obtained as:
Δx = l × cos θ_m
Δy = l × sin θ_m
where m = 1, 2, 3, 4; four autocorrelation coefficients ρ_1, ρ_2, ρ_3, ρ_4 are computed and then averaged to obtain the fifth coefficient ρ_5;
B) computing the sum-probability variance of the candidate slice based on the gray-level co-occurrence matrix; first the gray-level co-occurrence matrix C of the image f is computed, in the standard formulation (the formula is reproduced only as an image in the source) by counting pixel pairs:
C(p, q) = Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} [ f(i,j) = p and f(i+Δx, j+Δy) = q ]
where (i,j) are the coordinate values of image f in the x and y directions with range [0, …, N−1], f(i,j) is the gray value at (i,j), Δx, Δy are the displacements in the x and y directions, and p, q are the gray values at coordinates (i,j) and (i+Δx, j+Δy) of image f respectively, which are also the coordinates of the co-occurrence matrix C; before computing C the image is gray-level compressed, the number of gray levels gl used being 16, with the compression:
f(i,j) = f(i,j) / 256 × (gl − 1)
the displacement l used here is 6, the same four directions 0°, 45°, 90°, 135° are chosen, and the displacements are obtained as Δx = l × cos θ_m, Δy = l × sin θ_m for m = 1, 2, 3, 4; the gray-level co-occurrence matrix is computed in each direction, and then its sum distribution p_{i+j}(t) is computed:
p_{i+j}(t) = Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} C(i,j),  with i + j = t
where (i,j) are the coordinates of the co-occurrence matrix with range [0, …, N−1], and t = i + j has range [0, …, 2N−2];
from the sum distribution p_{i+j}(t) in each of the four directions, its entropy is computed:
entropy = − Σ_{t=0}^{2N−2} p_{i+j}(t) × log p_{i+j}(t)
where log denotes the natural logarithm; from the entropy thus obtained, the sum-probability distribution variance is then computed:
t_sv = Σ_{t=0}^{2N−2} (t − entropy)² × p_{i+j}(t)
finally, the values computed for the four directions are averaged to obtain the feature value T_sv.
2. the method for automatic extraction remote sensing images as claimed in claim 1 reference mark section is characterized in that, described step 1 is that traversal is extracted image slices from original remote sensing images, cuts into slices as the candidate; For piece image section f, the normalized formula is:
f(i,j)=(f(i,j)-μ)/σ
Wherein (i, j) presentation video pixel coordinate, coordinate range be [0 ..., N-1], f (i, j) in the presentation video coordinate be (symbol/expression removes for i, gradation of image value j), and μ is a gradation of image value average, and computing formula is:
μ = Σ i = 0 N - 1 Σ j = 0 N - 1 f ( i , j ) ( N - 1 ) × ( N - 1 )
Wherein (i, j) presentation video pixel coordinate, coordinate range be [0 ..., N-1], f (i, j) in the presentation video coordinate be (i, gradation of image value j), the symbol ∑ represent the summation, symbol * expression product;
σ is a gradation of image value standard deviation, and computing formula is:
σ = Σ i = 0 N - 1 Σ j = 0 N - 1 ( f ( i , j ) - μ ) 2 ( N - 1 ) × ( N - 1 )
Wherein (i, j) presentation video pixel coordinate, coordinate range be [0 ..., N-1], (i, j) coordinate is that (the symbol ∑ is represented summation for i, gradation of image value j), and symbol * expression product, symbol subscript 2 are represented to count square to f in the presentation video.
3. the method for automatic extraction remote sensing images as claimed in claim 1 reference mark section is characterized in that, described step 3 is that the image feature value to step 2 gained carries out criterion: at first, and for 5 ρ 1, ρ 2, ρ 3, ρ 4, ρ 5Value as long as one of them value greater than 0.25, can enter next round and differentiate, is next judged the feature T of image slices Sv, require greater than 1, greater than 1 promptly as the image slices at qualified reference mark.
CN 201010615036 2010-12-30 2010-12-30 Method for automatically picking up control point sections of remote sensing images Expired - Fee Related CN102567735B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201010615036 CN102567735B (en) 2010-12-30 2010-12-30 Method for automatically picking up control point sections of remote sensing images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201010615036 CN102567735B (en) 2010-12-30 2010-12-30 Method for automatically picking up control point sections of remote sensing images

Publications (2)

Publication Number Publication Date
CN102567735A CN102567735A (en) 2012-07-11
CN102567735B true CN102567735B (en) 2013-07-24

Family

ID=46413108

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201010615036 Expired - Fee Related CN102567735B (en) 2010-12-30 2010-12-30 Method for automatically picking up control point sections of remote sensing images

Country Status (1)

Country Link
CN (1) CN102567735B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10628919B2 (en) * 2017-08-31 2020-04-21 Htc Corporation Image segmentation method and apparatus
CN107742124A (en) * 2017-09-22 2018-02-27 北京航天控制仪器研究所 A kind of extracting method of weighted gradient direction co-occurrence matrix textural characteristics
CN110398738B (en) * 2019-06-09 2021-08-10 自然资源部第二海洋研究所 Method for inverting sea surface wind speed by using remote sensing image
CN110348314B (en) * 2019-06-14 2021-07-30 中国资源卫星应用中心 Method and system for monitoring vegetation growth by using multi-source remote sensing data
CN110598539A (en) * 2019-08-02 2019-12-20 焦作大学 Sports goods classification device and method based on computer vision recognition

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1790052A (en) * 2005-12-19 2006-06-21 武汉大学 Area feature variation detection method based on remote sensing image and GIS data
CN101520896A (en) * 2009-03-30 2009-09-02 中国电子科技集团公司第十研究所 Method for automatically detecting cloud interfering naval vessel target by optical remote sensing image
CN101750606A (en) * 2009-11-24 2010-06-23 中国科学院对地观测与数字地球科学中心 Automatic and moderate orthographic projection correction method of satellite remote sensing image

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100488685B1 (en) * 2002-08-22 2005-05-11 한국과학기술원 Image Processing Method for Automatic Image Registration and Correction
CN1890693A (en) * 2003-12-08 2007-01-03 皇家飞利浦电子股份有限公司 Adaptive point-based elastic image registration

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1790052A (en) * 2005-12-19 2006-06-21 武汉大学 Area feature variation detection method based on remote sensing image and GIS data
CN101520896A (en) * 2009-03-30 2009-09-02 中国电子科技集团公司第十研究所 Method for automatically detecting cloud interfering naval vessel target by optical remote sensing image
CN101750606A (en) * 2009-11-24 2010-06-23 中国科学院对地观测与数字地球科学中心 Automatic and moderate orthographic projection correction method of satellite remote sensing image

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Wei Yanfeng et al., "A feature-based automatic registration algorithm for remote sensing images", Acta Electronica Sinica, vol. 33, no. 1, Jan. 2005, pp. 161-165 *
Li Guosheng et al., "Automatic extraction of control points in remote sensing image registration", Journal of Liaoning Technical University, vol. 24, no. 1, 2005, pp. 41-44 *
Tan Qingquan et al., "A distributed remote sensing image slicing algorithm for the Internet", Journal of Jilin University (Engineering and Technology Edition), vol. 40, no. 1, 2010, pp. 224-228 *

Also Published As

Publication number Publication date
CN102567735A (en) 2012-07-11

Similar Documents

Publication Publication Date Title
CN102567735B (en) Method for automatically picking up control point sections of remote sensing images
US10325152B1 (en) Method of extracting warehouse in port from hierarchically screened remote sensing image
CN102034239B (en) Local gray abrupt change-based infrared small target detection method
CN107038416B (en) Pedestrian detection method based on binary image improved HOG characteristics
CN104951799A (en) SAR remote-sensing image oil spilling detection and identification method
CN104239420A (en) Video fingerprinting-based video similarity matching method
CN101587189B (en) Texture elementary feature extraction method for synthetizing aperture radar images
CN105184804A (en) Sea surface small target detection method based on airborne infrared camera aerially-photographed image
CN107909083A (en) A kind of hough transform extracting method based on outline optimization
CN101674389B (en) Method for testing compression history of BMP image based on loss amount of image information
CN116029941B (en) Visual image enhancement processing method for construction waste
CN116630802A (en) SwinT and size self-adaptive convolution-based power equipment rust defect image detection method
CN112633242A (en) Port ore heap segmentation and reserve calculation method based on improved UNet network
CN103530635A (en) Coastline extracting method based on satellite microwave remote sensing image
CN115331102A (en) Remote sensing image river and lake shoreline intelligent monitoring method based on deep learning
CN114882010A (en) Surface defect detection method based on picture recognition
CN101894373B (en) Method for extracting frontal line of weather facsimile image adopting external rectangles
CN106570506B (en) Solar activity recognition method based on scale transformation model
CN103065296B (en) High-resolution remote sensing image residential area extraction method based on edge feature
CN108319927B (en) Method for automatically identifying diseases
CN115078263B (en) Seaweed remote sensing information extraction method considering tidal influence
CN109389053B (en) Method and system for detecting position information of vehicle to be detected around target vehicle
CN104766279B (en) ScanSAR sea ice image incident angle effect is by class bearing calibration
CN114758139A (en) Foundation pit accumulated water detection method
CN103679170A (en) Method for detecting salient regions based on local features

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130724

Termination date: 20151230

EXPY Termination of patent right or utility model