CN114937145B - Remote sensing image feature point matching method based on geological information - Google Patents

Remote sensing image feature point matching method based on geological information

Info

Publication number: CN114937145B
Application number: CN202210886795.5A (priority)
Authority: CN (China)
Prior art keywords: matching, image, point, matching point, points
Legal status: Active (the listed legal status is an assumption, not a legal conclusion)
Other versions: CN114937145A (Chinese, zh)
Inventors: 廖戬, 高小花, 段红伟, 董铱斐, 李洁, 邹圣兵
Assignee (original and current): Beijing Shuhui Spatiotemporal Information Technology Co., Ltd.

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/20 — Image preprocessing
    • G06V 10/757 — Matching configurations of points or features
    • G06V 10/762 — Pattern recognition or machine learning using clustering
    • G06V 20/10 — Terrestrial scenes


Abstract

The invention discloses a remote sensing image feature point matching method based on geological information. The method uses the geological information of the remote sensing image to delimit geoscience grids, so that edge points on grid boundaries are assigned to the geoscience grid with the same geological information. This prevents edge points from causing correct matching points to be identified as mismatches, and thus effectively eliminates mismatched points.

Description

Remote sensing image feature point matching method based on geological information
Technical Field
The invention relates to the technical field of remote sensing image processing, in particular to a remote sensing image feature point matching method based on geological information.
Background
In recent years, feature point matching has been widely applied in remote sensing image processing, where the speed, accuracy and robustness of image feature point matching are particularly important. Many algorithms can effectively extract stable features from an image: the SIFT, ASIFT, PCA-SIFT and ORB algorithms, among others. However, limited by factors such as feature point detection precision and illumination changes, some mismatched points are produced, and the RANSAC algorithm is currently the main method for eliminating them. RANSAC has gradually become the mainstream mismatch-elimination method thanks to its simple structure, easy implementation and strong robustness, but the classical RANSAC algorithm is inefficient on scenes with large data sets and a high proportion of outliers.
Disclosure of Invention
In view of the defects of the prior art, the remote sensing image feature point matching method based on geological information provided here improves the speed and accuracy of matching point selection through geological information.
In order to achieve the above object, the present invention provides a method for matching feature points of remote sensing images based on geological information, comprising:
S1. Acquire the reference image R and the image S to be matched, and preprocess the image S to be matched;
S2. Segment the reference image R and the image S to be matched according to the segmentation scale H, obtaining the sub-image set {S_i} of the image S to be matched and the sub-image set {R_i} of the reference image R, where i is the sub-image index;
s3 will { S i And { R }and { R } i In (S) } i R i ) As a pair of sub-image pairs, respectively pair S i ,R i Extracting feature points, and obtaining a first matching point pair set through bidirectional coarse matching, wherein the first matching point pair set is composed of first matching point pairs, and the first matching point pairs are represented as: (
Figure DEST_PATH_IMAGE001
Figure 740041DEST_PATH_IMAGE002
) In which
Figure 358104DEST_PATH_IMAGE001
Is attributed to S i Is detected by the first matching point of (a),
Figure 769494DEST_PATH_IMAGE002
is attributed to R i Is/are as follows
Figure 535325DEST_PATH_IMAGE001
The corresponding first matching point is set to the first matching point,ja first number of matched point pairs for the first set of matched point pairs;
S4. Construct the geoscience grid W_iS of S_i and the geoscience grid W_iR of R_i based on the geological information;
S5. Based on W_iS and W_iR, screen the first matching point pairs (p_j^S, p_j^R) of the first matching point pair set to obtain the second matching point pair set M_i;
S6. Repeat steps S3–S5 to traverse {S_i} and {R_i}, obtaining the second matching point pair sets {M_i} of all sub-image pairs;
S7. Establish an affine model based on the matching point pairs in {M_i}, and match the image S to be matched using the affine model.
In an embodiment of the present invention, the step S4 includes:
S41. Spatially cluster S_i according to the geoscience information to obtain the first spatial clustering result T_iS of S_i, and meanwhile spatially cluster R_i to obtain the first spatial clustering result T_iR of R_i;
S42. Take all first matching points in the first matching point pair set belonging to S_i as the matching point set to be matched {p^S}, and meanwhile take all first matching points belonging to R_i as the reference matching point set {p^R};
S43. Set the boundary of S_i as the initial delimited space Z1_iS of S_i, and select from {p^S} the first matching points contained in Z1_iS to obtain the first matching point set {p^S}′ of Z1_iS; meanwhile, set the boundary of R_i as the initial delimited space Z1_iR of R_i, and select from {p^R} the first matching points contained in Z1_iR to obtain the first matching point set {p^R}′ of Z1_iR;
S44. From the coordinate values of the first matching points in {p^S}′, calculate the distribution variance along each coordinate axis; compare the variances and take the axis direction with the largest variance as the dividing direction; sort the coordinate values along the dividing direction and take the first matching point whose coordinate value is the median as the dividing point; combine the dividing point and the dividing direction to obtain the virtual dividing line of Z1_iS. In the same way, obtain from {p^R}′ the virtual dividing line of Z1_iR;
S45. Extract the first matching points near the virtual dividing line of Z1_iS as the edge points of Z1_iS; correct the virtual dividing line of Z1_iS according to the distribution of these edge points in the first spatial clustering result T_iS of S_i, obtaining the second delimited space Z2_iS of S_i. Meanwhile, extract the first matching points near the virtual dividing line of Z1_iR as the edge points of Z1_iR; correct the virtual dividing line of Z1_iR according to the distribution of these edge points in the first spatial clustering result T_iR of R_i, obtaining the second delimited space Z2_iR of R_i;
S46. Further iteratively divide Z2_iS into new delimited spaces, stopping the iterative division when the number of first matching points in each new delimited space reaches a preset threshold, which yields 2^k S_i geoscience grid cells, where k is the number of divisions; all S_i geoscience grid cells constitute the geoscience grid W_iS of S_i. Meanwhile, further iteratively divide Z2_iR in the same way, obtaining 2^l R_i geoscience grid cells, where l is the number of divisions; all R_i geoscience grid cells constitute the geoscience grid W_iR of R_i.
In an embodiment of the present invention, the step S5 includes:
S51. In W_iS, select the geoscience grid cell containing p_j^S together with its N surrounding geoscience grid cells as the support grid set of p_j^S; meanwhile, in W_iR, select the geoscience grid cell containing p_j^R together with its N surrounding geoscience grid cells as the support grid set of p_j^R;
S52. Count the first matching point pairs that fall jointly within the support grid sets of p_j^S and p_j^R; when this number is greater than the threshold τ1, add (p_j^S, p_j^R) to M_i as a second matching point pair.
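The support-grid screening of steps S51–S52 can be sketched as follows. This is a minimal illustration under the assumption that the "N surrounding geoscience grid cells" form a square neighborhood of radius n around a point's cell; the cell-lookup functions, names and data layout are assumptions, not taken from the patent.

```python
def support_count(pair, pairs, cell_s, cell_r, n=1):
    """Count the other first matching point pairs whose grid cells fall
    inside the square support neighborhoods of `pair` on both images.
    cell_s/cell_r map a point to its (row, col) geoscience grid cell."""
    cs, cr = cell_s(pair[0]), cell_r(pair[1])
    near = lambda a, b: abs(a[0] - b[0]) <= n and abs(a[1] - b[1]) <= n
    return sum(1 for q in pairs
               if q is not pair
               and near(cell_s(q[0]), cs) and near(cell_r(q[1]), cr))

def screen_pairs(pairs, cell_s, cell_r, tau1, n=1):
    """Keep the pairs supported by more than tau1 neighboring pairs
    (the second matching point pair set M_i of step S52)."""
    return [p for p in pairs
            if support_count(p, pairs, cell_s, cell_r, n) > tau1]
```

A pair that is geometrically consistent with its neighbors on both images accumulates support; an isolated mismatch (consistent on one image only) does not, and is screened out.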
In an embodiment of the present invention, the method further includes:
obtaining a second matching point pair set { M i After that { M } is calculated i The number of all matched point pairs in the pixel array is larger than or equal to a threshold valueτ 1 Then step S7 is executed if the number is less than the threshold valueτ 2 Then, the following steps are performed:
rescutting the reference image R according to an image rescutting strategyObtaining a new sub-picture set { R' i Will { S } i And { R }and { R } i In (S) } i R′ i ) As a new pair of sub-images and proceeds to step S3.
In an embodiment of the present invention, the image re-segmentation strategy includes:
calculating { S } i The number of second matching points of each sub-image in the image is selected, and the sub-image S with the largest number of second matching points is selected imax;
In { R i Get S according to the screening model imax M sub-images with the similarity reaching the similarity threshold value, wherein m is more than or equal to 1;
image splicing is carried out on the m sub-images to obtain a spliced image R' m
Will S imax And R' m Obtaining the characteristic points and selecting the matching points to obtain S imax And R' m Is paired with the matching point, if S imax And R' m Is greater than a thresholdτ 3 Then according to S imax And R' m Calculating offset of the matching point pair, and re-segmenting the reference image R based on the offset and the segmentation scale H to obtain a new sub-image set { R' i }。
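The patent does not specify the exact form of the offset computation above; a minimal stand-in, assuming a pure-translation model averaged over the S_imax/R′_m matching point pairs, might look like:

```python
def mean_offset(pairs):
    """Average (dx, dy) translation over matched point pairs of the
    form ((xs, ys), (xr, yr)). A hypothetical stand-in for the patent's
    offset computation, whose exact formula is not given in the text."""
    n = len(pairs)
    dx = sum(pr[0] - ps[0] for ps, pr in pairs) / n
    dy = sum(pr[1] - ps[1] for ps, pr in pairs) / n
    return dx, dy
```

The resulting (dx, dy) would then be added to the reference image's cutting origin before re-segmenting at scale H.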
In an embodiment of the invention, the geological information includes at least one of DEM elevation information, slope information, and slope variability information.
In an embodiment of the invention, the feature point extraction method includes at least one of a deep learning algorithm, a SIFT algorithm, a SURF algorithm, and an ORB algorithm.
In an embodiment of the invention, the screening model includes at least one of a deep learning model and a bag-of-words model.
Compared with the prior art, the method uses the geological information of the remote sensing image to delimit geoscience grids and eliminates mismatched points by checking the correctness of neighboring matching points within the geoscience grid. Because the geoscience grid is divided with elastic boundaries, each edge point is assigned to the geoscience grid with the same geoscience information; edge points can therefore effectively support correct matching points, and the situation in which edge points cause correct matching points to be identified as mismatches is effectively avoided.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without creative efforts.
FIG. 1 is a flow chart of a method for matching feature points of remote sensing images based on geological information according to the present invention;
FIG. 2 is a schematic diagram of the construction of a geoscience grid based on geoscience information provided by the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. It should be noted that the embodiments and the features of the embodiments may be combined with each other as long as they do not conflict, and the technical solutions so formed all fall within the scope of the present invention.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
Referring to fig. 1, the method for matching feature points of remote sensing images based on geological information includes the following steps:
S1. Acquire the reference image R and the image S to be matched, and preprocess the image S to be matched.
In this example, the reference image R and the image S to be matched are both optical remote sensing images. The image S to be matched is preprocessed, e.g. by radiometric correction and atmospheric correction, to obtain a remote sensing image that more truly reflects the earth surface information.
S2. Segment the reference image R and the image S to be matched according to the segmentation scale H, obtaining the sub-image set {S_i} of the image S to be matched and the sub-image set {R_i} of the reference image R, where i is the sub-image index.
Specifically, the implementation process of this step is as follows:
(1) Set the reference image R offset x0 = 0, y0 = 0.
(2) Set the image segmentation scale H.
Since remote sensing images are usually very large, in this example the segmentation scale is set to H = 5000 pixels × 5000 pixels, and the reference image R and the image S to be matched are segmented to obtain the sub-image set {S_i} of the image S to be matched and the sub-image set {R_i} of the reference image R.
(3) Calculate the geographic extent of each sub-image S_i of the image S to be matched.
(4) According to the geographic extent of each sub-image S_i plus the offset (x0, y0), segment the reference image R at the segmentation scale H to obtain the corresponding sub-image R_i of the reference image R.
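The tiling of steps (1)–(4) can be sketched as follows; the pixel-window model, function name and clipping behavior at the image edge are illustrative assumptions, not taken from the patent.

```python
def tile_windows(width, height, h, x0=0, y0=0):
    """Return (x, y, w, h) windows covering a width x height image,
    cut at scale h and shifted by the offset (x0, y0)."""
    windows = []
    y = y0
    while y < height:
        x = x0
        while x < width:
            w = min(h, width - x)    # clip the last column to the image edge
            ht = min(h, height - y)  # clip the last row to the image edge
            windows.append((x, y, w, ht))
            x += h
        y += h
    return windows
```

With H = 5000, a 10000 × 10000 image yields four 5000 × 5000 sub-images; the reference image is cut with the same scale but shifted by (x0, y0).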
S3. Take (S_i, R_i) from {S_i} and {R_i} as a sub-image pair, extract feature points from S_i and R_i respectively, and obtain a first matching point pair set through bidirectional coarse matching. The first matching point pair set consists of first matching point pairs, each represented as (p_j^S, p_j^R), where p_j^S is a first matching point belonging to S_i, p_j^R is the first matching point of R_i corresponding to p_j^S, and j is the index of the pair within the first matching point pair set.
The feature point extraction method includes at least one of a deep learning algorithm, the SIFT algorithm, the SURF algorithm and the ORB algorithm. In this example, the SIFT method is adopted to extract feature points for each pair (S_i, R_i).
After the feature points of S_i and R_i are obtained, a bidirectional matching method is used for the coarse matching, yielding the first matching point pair set:

For each feature point p^S belonging to S_i, compute by Euclidean distance its nearest neighbor feature point q_1 and second nearest neighbor feature point q_2 in the feature point set of R_i, and their distance ratio r_1; if r_1 is less than a certain threshold (set to 0.5 in this example), p^S and q_1 form a group of same-name points and are added to the initial matching point pair set.

Then, for each feature point p^R belonging to R_i in the initial matching point pair set, compute by Euclidean distance its nearest neighbor feature point q′_1 and second nearest neighbor feature point q′_2 among the feature points of S_i, and their distance ratio r_2; if r_2 is less than a certain threshold (set to 0.6 in this example), p^R and its nearest neighbor q′_1 are taken as a group of same-name points and added to the first matching point pair set.
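The bidirectional coarse matching above can be sketched as follows. This is a simplified illustration that uses raw 2-D points in place of SIFT descriptors and a mutual-consistency reverse check; the names, data layout and the requirement that the reverse nearest neighbor be the original point are assumptions, not the patent's exact procedure.

```python
import math

def _two_nearest(p, points):
    """Index of the nearest point plus the nearest and
    second-nearest Euclidean distances."""
    order = sorted(range(len(points)),
                   key=lambda k: math.dist(p, points[k]))
    return order[0], math.dist(p, points[order[0]]), math.dist(p, points[order[1]])

def bidirectional_match(feats_s, feats_r, r1=0.5, r2=0.6):
    """Ratio test S -> R, then a reverse ratio test R -> S;
    keep only mutually consistent pairs of indices."""
    pairs = []
    for i, p in enumerate(feats_s):
        j, d1, d2 = _two_nearest(p, feats_r)
        if d2 > 0 and d1 / d2 < r1:                    # forward ratio test
            k, e1, e2 = _two_nearest(feats_r[j], feats_s)
            if k == i and e2 > 0 and e1 / e2 < r2:     # reverse check
                pairs.append((i, j))
    return pairs
```

The two thresholds (0.5 forward, 0.6 reverse) mirror the example values given in the text.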
S4. Construct the geoscience grid W_iS of S_i and the geoscience grid W_iR of R_i based on the geological information.
The geoscience information is geographic information reflecting the spatial distribution characteristics of surface feature entities, and includes elevation information, slope variability information, terrain relief information and the like; the corresponding quantities are elevation, slope variability and terrain relief, respectively.
The method uses the geological information of the remote sensing image to delimit the geoscience grid, and eliminates mismatched points by checking the correctness of neighboring matching points within the geoscience grid.
In dividing the geoscience grid, the invention uses elastic boundaries to determine the grid boundaries, so that each edge point is assigned to the geoscience grid with the same geoscience information; edge points can then effectively support correct matching points, and the situation in which edge points cause correct matching points to be identified as mismatches is effectively avoided.
The geoscience grid W_iS of S_i and the geoscience grid W_iR of R_i are respectively constructed based on the geoscience information, as follows:
S41. Spatially cluster S_i according to the geoscience information to obtain the first spatial clustering result T_iS of S_i, and meanwhile spatially cluster R_i to obtain the first spatial clustering result T_iR of R_i.

The geoscience information used to obtain T_iS and T_iR may be the same, so the following description takes the spatial clustering of S_i into T_iS as the example:
in some embodiments of the present invention, the topographical information may be selected and used at S based on the topographical information i Establishing a plurality of elevation clusters to obtain a spatial clustering result T iS
In other embodiments of the present invention, a combined geoscience information value may be obtained by weighting several kinds of geoscience information, and the spatial clustering result T_iS obtained by spatially clustering this combined value. The specific steps are:

Obtain the magnitude values of the elevation information, slope information and slope variability information at each pixel;

Normalize the magnitude at each pixel, using:

v′ = (v − v_min) / (v_max − v_min)

where v is the magnitude at a pixel, v_min is the minimum of the magnitude over all pixels, and v_max is the maximum of the magnitude over all pixels.

Add the normalized magnitudes with weights to obtain the combined geoscience information value, and spatially cluster the combined geoscience information value to obtain the spatial clustering result T_iS.
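The min-max normalization and weighted combination above can be sketched as follows; the per-pixel list representation of each geoscience layer is an illustrative assumption.

```python
def normalize(values):
    """Min-max normalize a list of magnitudes to [0, 1]."""
    v_min, v_max = min(values), max(values)
    span = (v_max - v_min) or 1.0  # guard against a constant layer
    return [(v - v_min) / span for v in values]

def combined_geoscience_value(layers, weights):
    """Weighted sum of per-pixel normalized layers
    (e.g. elevation, slope, slope variability)."""
    norm = [normalize(layer) for layer in layers]
    n = len(layers[0])
    return [sum(w * norm[k][i] for k, w in enumerate(weights))
            for i in range(n)]
```

The resulting per-pixel combined value is what the spatial clustering then operates on.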
S42. Take all first matching points in the first matching point pair set belonging to S_i as the matching point set to be matched {p^S}, and meanwhile take all first matching points belonging to R_i as the reference matching point set {p^R}.
S43. Set the boundary of S_i as the initial delimited space Z1_iS of S_i, and select from {p^S} the first matching points contained in Z1_iS to obtain the first matching point set {p^S}′ of Z1_iS; meanwhile, set the boundary of R_i as the initial delimited space Z1_iR of R_i, and select from {p^R} the first matching points contained in Z1_iR to obtain the first matching point set {p^R}′ of Z1_iR.
S44. From the coordinate values of the first matching points in {p^S}′, calculate the distribution variance along each coordinate axis; compare the variances and take the axis direction with the largest variance as the dividing direction; sort the coordinate values along the dividing direction and take the first matching point whose coordinate value is the median as the dividing point; combine the dividing point and the dividing direction to obtain the virtual dividing line of Z1_iS. In the same way, obtain from {p^R}′ the virtual dividing line of Z1_iR.
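The variance-and-median selection of the dividing direction and dividing point in step S44 can be sketched as follows; the 2-D point-tuple representation and function name are assumptions.

```python
from statistics import pvariance

def dividing_line(points):
    """Pick the axis with the largest coordinate variance as the
    dividing direction, and the median point along that axis as the
    dividing point; return (axis index, line coordinate)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    axis = 0 if pvariance(xs) >= pvariance(ys) else 1
    ordered = sorted(points, key=lambda p: p[axis])
    pivot = ordered[len(ordered) // 2]  # median element
    return axis, pivot[axis]
```

The returned axis and coordinate together define the virtual dividing line perpendicular to that axis.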
S45. Extract the first matching points near the virtual dividing line of Z1_iS as the edge points of Z1_iS; correct the virtual dividing line of Z1_iS according to the distribution of these edge points in the first spatial clustering result T_iS of S_i, obtaining the second delimited space Z2_iS of S_i. Meanwhile, extract the first matching points near the virtual dividing line of Z1_iR as the edge points of Z1_iR; correct the virtual dividing line of Z1_iR according to the distribution of these edge points in the first spatial clustering result T_iR of R_i, obtaining the second delimited space Z2_iR of R_i.
Specifically, in an embodiment of the present invention, obtaining the second delimited space Z2_iS of S_i is taken as the example; the implementation process includes:

A. According to a selection strategy, extract from the first matching point pairs in the space Z1_iS the matching points near the virtual dividing line, obtaining the edge points.
In this example, the selection strategy is a threshold method, i.e. the edge points are obtained by thresholding, as follows:

Calculate the distance from every matching point in the first matching point pair set to the virtual dividing line:

d_j = |x_j − x_L|

where d_j is the distance from the matching point to the virtual dividing line, x_j is the matching point's coordinate value on the axis perpendicular to the dividing line, and x_L is the virtual dividing line's coordinate value on that axis. In this example the dividing axis is set as the ordinate axis, so x_j is the matching point's coordinate on the abscissa axis and x_L is the virtual dividing line's coordinate on the abscissa axis.

If d_j is less than a certain threshold (3 pixels in this example), the first matching point is determined to be an edge point.
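The threshold-based edge-point selection above can be sketched as follows; the point representation and function name are assumptions.

```python
def edge_points(points, axis, line_coord, tol=3.0):
    """Matching points within `tol` pixels of the virtual dividing
    line, measured along the axis perpendicular to the line
    (axis 0 = abscissa, axis 1 = ordinate)."""
    return [p for p in points if abs(p[axis] - line_coord) < tol]
```

With the 3-pixel threshold of the example, only points hugging the dividing line are flagged as edge points and trigger the boundary correction of step B.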
B. Correct the virtual dividing line according to the spatial clustering result T_iS to obtain the actual dividing line, and split Z1_iS along the actual dividing line to obtain the second delimited space Z2_iS of S_i, where Z2_iS comprises two geoscience grid cells.
Further, in an embodiment of the present invention, obtaining the actual dividing line of Z1_iS is taken as the example; specifically, the virtual dividing line is corrected according to the clustering result of the elevation geoscience information, as follows:

a. According to the boundary of T_iS and the boundary of Z1_iS, divide T_iS into several geoscience regions.

b. Screen out the geoscience regions that contain edge points and intersect the virtual dividing line, obtaining the screened geoscience region set.

c. Intersect the virtual dividing line with the screened geoscience region set, dividing each screened geoscience region into 2 sub-regions.

d. Successively replace the segment of the virtual dividing line inside each screened geoscience region with the boundary of the sub-region with the smaller area, obtaining the actual dividing line. The specific process is shown in fig. 2, and the correction of the virtual dividing line is illustrated with a concrete example:
As shown in fig. 2(a), based on the boundary of T_iS and the boundary of Z1_iS, the clustering result is divided into 6 geoscience regions: the 1st geoscience region (10), the 2nd geoscience region (20), the 3rd geoscience region (30), the 4th geoscience region (40), the 5th geoscience region (50) and the 6th geoscience region (60).

The broken line in fig. 2(a) is the virtual dividing line; the virtual dividing line segment in the 1st geoscience region (10) is the 1st virtual dividing line segment (100), the segment in the 2nd geoscience region (20) is the 2nd virtual dividing line segment (110), the segment in the 3rd geoscience region (30) is the 3rd virtual dividing line segment (120), the segment in the 4th geoscience region (40) is the 4th virtual dividing line segment (130), and the segment in the 5th geoscience region (50) is the 5th virtual dividing line segment (140).
The closed geoscience regions that contain edge points and intersect the virtual dividing line are the 1st geoscience region (10), the 5th geoscience region (50) and the 6th geoscience region (60); these are taken as the screened geoscience regions.

The virtual dividing line is intersected with the 1st geoscience region (10), the 5th geoscience region (50) and the 6th geoscience region (60) in turn. The 1st geoscience region (10) is divided into the 7th geoscience region (11) and the 8th geoscience region (12), where the area of the 7th geoscience region (11) is larger than that of the 8th geoscience region (12); the 5th geoscience region (50) is divided into the 9th geoscience region (51) and the 10th geoscience region (52), where the area of the 9th geoscience region (51) is larger than that of the 10th geoscience region (52); the 6th geoscience region (60) is divided into the 11th geoscience region (61) and the 12th geoscience region (62), where the area of the 12th geoscience region (62) is larger than that of the 11th geoscience region (61).
In the 1st geological region (10), the 8th geological region (12) with the smaller area is selected; on the boundary of the 8th geological region (12), the first boundary (90), which is neither a boundary of Z1_iS nor part of the virtual dividing line, is selected, and the first boundary (90) replaces the 5th virtual dividing line segment (140) of the virtual dividing line segments.
In the 5th geological region (50), the 10th geological region (52) with the smaller area is selected; on the boundary of the 10th geological region (52), the second boundary (80), which is neither a boundary of Z1_iS nor part of the virtual dividing line, is selected, and the second boundary (80) replaces the 3rd virtual dividing line segment (120) of the virtual dividing line segments.
In the 6th geological region (60), the 11th geological region (61) with the smaller area is selected; on the boundary of the 11th geological region (61), the third boundary (70), which is neither a boundary of Z1_iS nor part of the virtual dividing line, is selected, and the third boundary (70) replaces the 1st virtual dividing line segment (100) of the virtual dividing line segments.
Following steps (c)-(d), the virtual dividing line is replaced by the composition of the third boundary (70), the 2nd virtual dividing line segment (110), the second boundary (80), the 4th virtual dividing line segment (130) and the first boundary (90), giving the actual dividing line; the result is shown in FIG. 2(b).
(e) Z1_iS is split according to the actual dividing line to obtain the second delimited space Z2_iS of S_i.
S46: Z2_iS is further iteratively divided to obtain new delimited spaces; the iterative division stops when the number of first matching points in each new delimited space reaches a preset threshold, yielding 2^k S_i geological grid cells, where k is the number of divisions, and all S_i geological grid cells compose the S_i geological grid W_iS. At the same time, Z2_iR is further iteratively divided to obtain new delimited spaces; the iterative division stops when the number of first matching points in each new delimited space reaches the preset threshold, yielding 2^l R_i geological grid cells, where l is the number of divisions, and all R_i geological grid cells compose the R_i geological grid W_iR.
In this embodiment, each new delimited space must contain at least 4 first matching points during the iterative division, i.e. the preset threshold is 4.
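The iterative division of S44-S46 is essentially a kd-tree-style recursive split: pick the axis of larger distribution variance, cut at the median point, and recurse. A minimal Python sketch under stated assumptions (the function name is illustrative, the stopping rule is taken to be that no cell may fall below the preset threshold of 4 points, and the elastic-boundary correction of S45 is omitted):

```python
def split_points(points, min_count=4):
    """Recursively split 2-D matching points along the coordinate axis of
    larger distribution variance, at the point whose coordinate is the
    median, until no cell can be split without dropping below min_count."""
    def variance(vals):
        mean = sum(vals) / len(vals)
        return sum((v - mean) ** 2 for v in vals) / len(vals)

    # Stop when a further split would leave a cell below min_count points.
    if len(points) < 2 * min_count:
        return [points]
    # Dividing direction: the coordinate axis with the largest variance.
    var_x = variance([p[0] for p in points])
    var_y = variance([p[1] for p in points])
    axis = 0 if var_x >= var_y else 1
    # Dividing point: the point whose coordinate value is the median.
    ordered = sorted(points, key=lambda p: p[axis])
    mid = len(ordered) // 2
    return (split_points(ordered[:mid], min_count)
            + split_points(ordered[mid:], min_count))
```

Each returned list of points corresponds to one geological grid cell; after k levels of splitting there are 2^k cells, matching the count in S46.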
S5 screens the first matching point pairs (p_j, q_j) of the first matching point pair set based on W_iS and W_iR to obtain a second matching point pair set M_i, where p_j is the first matching point attributed to S_i and q_j is the corresponding first matching point attributed to R_i.
Specifically, the steps include:
S51: in W_iS, the geological grid cell containing p_j and the N geological grid cells around it are selected as the support grid set of p_j; at the same time, in W_iR, the geological grid cell containing q_j and the N geological grid cells around it are selected as the support grid set of q_j.
S52: the number of first matching point pairs falling in both the support grid set of p_j and the support grid set of q_j is calculated; when the number is greater than the threshold τ1, (p_j, q_j) is added to M_i as a second matching point pair.
In this embodiment, τ1 takes the value 3α, where α is a control parameter and is an integer greater than or equal to 1.
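The screening of S51-S52 can be sketched as follows, assuming each first matching point has already been mapped to a grid-cell index and that a neighbours mapping gives a cell together with its N surrounding cells; all names are illustrative, and the grid construction itself is not reproduced here:

```python
def screen_pairs(pairs, cell_s, cell_r, neighbors_s, neighbors_r, tau1=3):
    """Keep a first matching pair (p, q) when the number of other first
    matching pairs falling in both of its support grid sets exceeds tau1
    (a sketch of S51-S52; the cell-index mappings are assumed inputs)."""
    kept = []
    for j, (p, q) in enumerate(pairs):
        support_s = neighbors_s[cell_s[p]]   # cell of p plus its N neighbours
        support_r = neighbors_r[cell_r[q]]
        support_count = sum(
            1 for k, (p2, q2) in enumerate(pairs)
            if k != j and cell_s[p2] in support_s and cell_r[q2] in support_r
        )
        if support_count > tau1:
            kept.append((p, q))              # second matching point pair
    return kept
```

An isolated pair whose neighbourhood contains no other consistent pairs fails the support count and is discarded, which is how the method rejects mismatches.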
S6 repeats steps S3-S5 to traverse {S_i} and {R_i}, obtaining the second matching point pair sets {M_i} of all sub-image pairs.
S7 constructs an affine model based on the matching point pairs in {M_i}, and matches the image S to be matched using the affine model.
The affine model used for image matching in this step is a conventional technique and is not described again here.
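Since the affine step is described only as conventional, a minimal least-squares sketch with NumPy is given here; the 6-parameter form x' = a·x + b·y + c, y' = d·x + e·y + f and the function names are illustrative, not the patent's specific implementation:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares fit of the 6 affine parameters (a, b, c, d, e, f)
    from matched point pairs src -> dst."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    n = len(src)
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src      # rows for x' = a*x + b*y + c
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src      # rows for y' = d*x + e*y + f
    A[1::2, 5] = 1.0
    b = dst.reshape(-1)
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params

def apply_affine(params, pts):
    """Apply the fitted affine transform to an array of points."""
    a, b, c, d, e, f = params
    pts = np.asarray(pts, dtype=float)
    x = a * pts[:, 0] + b * pts[:, 1] + c
    y = d * pts[:, 0] + e * pts[:, 1] + f
    return np.stack([x, y], axis=1)
```

With at least 3 non-collinear matching pairs the system is determined; the screened set {M_i} typically supplies many more, and least squares averages out residual noise.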
In a preferred embodiment of the present invention, after the second matching point pair set {M_i} is obtained in step S6, it is optimized before the image S to be matched is matched, specifically as follows:
The number of all matching point pairs in {M_i} is calculated; if the number is greater than or equal to a threshold τ2, step S7 is executed; if the number is less than the threshold τ2, the following steps are executed:
The reference image R is re-segmented according to the image re-segmentation strategy to obtain a new sub-image set {R′_i} of the reference image R; (S_i, R′_i) from {S_i} and {R′_i} is taken as a new sub-image pair, and the process returns to step S3.
Here, τ2 takes the value 4.
In this embodiment, the image re-segmentation strategy includes the following steps:
The number of second matching points of each sub-image in {S_i} is calculated, and the sub-image S_imax with the largest number of second matching points is selected.
In {R_i}, m sub-images whose similarity to S_imax reaches the similarity threshold are obtained according to the screening model, with m ≥ 1.
The m sub-images are stitched to obtain a stitched image R′_m.
Feature points are extracted from S_imax and R′_m and matching points are selected to obtain the matching point pairs of S_imax and R′_m.
If the number of matching point pairs of S_imax and R′_m is greater than a threshold τ3 (τ3 takes the value 3 in this embodiment), the offsets x0′ and y0′ are calculated from the matching point pairs of S_imax and R′_m.
Based on the offsets x0′, y0′ and the segmentation scale H = 5000 pixels, the reference image R is re-segmented to obtain a new sub-image set {R′_i}.
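A sketch of how the offsets and the segmentation scale could drive the re-cut, assuming the offset is taken as the median displacement of the matched pairs and that the new tiling is the regular H-pixel grid shifted by that offset; both are assumptions, since the patent does not fix these details:

```python
def estimate_offset(pairs):
    """Median displacement (x0', y0') over matched point pairs (p, q)."""
    dxs = sorted(q[0] - p[0] for p, q in pairs)
    dys = sorted(q[1] - p[1] for p, q in pairs)
    return dxs[len(dxs) // 2], dys[len(dys) // 2]

def shifted_tile_origins(width, height, dx, dy, H=5000):
    """Top-left corners of the re-cut tiles of the reference image:
    the regular H-pixel grid shifted by the offset (dx, dy); tiles
    lying entirely outside the image extent are dropped."""
    x0, y0 = int(dx) % H, int(dy) % H
    xs = [x for x in range(x0 - H, width, H) if x + H > 0]
    ys = [y for y in range(y0 - H, height, H) if y + H > 0]
    return [(x, y) for y in ys for x in xs]
```

Shifting the tile grid this way lines the reference tiles up with the sub-images of the image to be matched, which is the point of re-segmenting after a failed round.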
Wherein the screening model comprises a deep learning model and a bag of words model. In the present embodiment, a deep learning model is used as the screening model. The specific implementation process is as follows:
1) A first grid retrieval deep convolutional network model is constructed. The network comprises 3 convolutional layers, 3 pooling layers and 3 fully connected layers, with the following structure:
The 1st convolutional layer performs convolution with 32 kernels of size 3×3, and the result is passed through the nonlinear activation function ReLU and sent to the 1st pooling layer. The 1st pooling layer uses a 2×2 pooling kernel with average pooling and a stride of 2, and its result is sent to the 2nd convolutional layer. The 2nd convolutional layer performs convolution with 64 kernels of size 3×3, and the result is passed through the ReLU function and sent to the 2nd pooling layer. The parameters of the 2nd pooling layer are the same as those of the 1st pooling layer, and its result is sent to the 1st fully connected layer. Before entering the 1st fully connected layer, the data output by the 2nd pooling layer is flattened into a one-dimensional vector; the 1st fully connected layer has 500 output nodes and feeds the 2nd fully connected layer; the 2nd fully connected layer has 10 output nodes and feeds the 3rd fully connected layer; the 3rd fully connected layer outputs a one-dimensional vector with 2 nodes. The loss function of the network is a contrastive loss function consisting of a positive-sample part and a negative-sample part.
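The contrastive loss at the end of the network description can be sketched as follows, using the document's label convention (0 = matching grid blocks, 1 = mismatched); the margin value here is an assumption, as the patent does not specify one:

```python
import math

def contrastive_loss(v1, v2, label, margin=1.0):
    """Contrastive loss over a pair of network output vectors: the
    positive part pulls matching blocks (label 0) together, the
    negative part pushes mismatched blocks (label 1) at least
    `margin` apart in Euclidean distance."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))
    if label == 0:                       # positive sample: matching blocks
        return d ** 2
    return max(margin - d, 0.0) ** 2     # negative sample: mismatched blocks
```

Under this loss, a matching pair with identical embeddings costs nothing, while a mismatched pair only stops incurring loss once its embeddings are more than the margin apart.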
2) Matching grid blocks are acquired, labeled 0, and recorded as positive samples; mismatched grid blocks are acquired, labeled 1, and recorded as negative samples; all positive and negative samples together form the training set.
3) The first grid retrieval deep convolutional network model is trained on the training set.
4) A similarity threshold T =0.9 is set.
5) S_imax and each sub-image R_i of the sub-image set {R_i} of the reference image R are sequentially input into the final network model obtained in step 3); if the network output loss value is less than the similarity threshold T, R_i is an image block similar to S_imax.
6) The m image blocks most similar to S_imax are selected as the m sub-images whose similarity to S_imax reaches the similarity threshold; m = 2 in this embodiment.
In summary, the technical scheme of the present invention realizes a remote sensing image feature point matching method based on geological information. The method makes full use of the geological information of the remote sensing image to delimit the geological grid, and eliminates erroneous matching points by checking the correctness of adjacent matching points within the geological grid. The geological grid is delimited with elastic boundaries, so that edge points are attributed to the geological grid cell carrying the same geological information; edge points can then effectively support correct matching points, which avoids correct matching points being misidentified as erroneous matching points because of edge points.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (6)

1. A remote sensing image feature point matching method based on geological information is characterized by comprising the following steps:
step S1, acquiring a reference image R and an image S to be matched, and preprocessing the image S to be matched, wherein the preprocessing comprises radiation correction and atmospheric correction;
step S2, the reference image R and the image S to be matched are respectively segmented according to the segmentation scale H to obtain a sub-image set {S_i} of the image S to be matched and a sub-image set {R_i} of the reference image R, where i is the index of the sub-image;
step S3, a sub-image S_i of {S_i} and the sub-image R_i corresponding to S_i in {R_i} are taken as a sub-image pair (S_i, R_i); feature points are extracted from S_i and R_i respectively, and a first matching point pair set is obtained through bidirectional coarse matching, the first matching point pair set being composed of first matching point pairs; a first matching point pair is expressed as (p_j, q_j), where p_j is a first matching point attributed to S_i, q_j is the first matching point attributed to R_i that corresponds to p_j, and j indexes the first matching point pairs of the first matching point pair set;
step S4, an S_i geological grid W_iS and an R_i geological grid W_iR are respectively constructed based on geological information, the geological information comprising elevation information, slope information and topographic relief information; the construction of the geological grid W_iS and the geological grid W_iR specifically comprises:
s41 pairing S according to geography information i Performing spatial clustering to obtain S i First spatial clustering result T of iS While to R i Performing spatial clustering to obtain R i First spatial clustering result T of iR
S42, all first matching points attributed to S_i in the first matching point pair set are taken as the matching point set to be matched {p_j}; at the same time, all first matching points attributed to R_i in the first matching point pair set are taken as the reference matching point set {q_j};
S43, the boundary of S_i is set as the first delimited space Z1_iS of S_i, and the first matching points contained in Z1_iS are selected from {p_j} to obtain the first matching point set {p_j}′ of Z1_iS; at the same time, the boundary of R_i is set as the first delimited space Z1_iR of R_i, and the first matching points contained in Z1_iR are selected from {q_j} to obtain the first matching point set {q_j}′ of Z1_iR;
S44, the distribution variance in each coordinate axis direction is calculated from the coordinate values of the first matching points in {p_j}′; the distribution variances are compared, the coordinate axis direction corresponding to the largest distribution variance is taken as the dividing direction, the coordinate values are sorted by size in the dividing direction, the first matching point whose coordinate value is the median is taken as the dividing point, and the virtual dividing line of Z1_iS is obtained by combining the dividing point and the dividing direction; at the same time, the distribution variance in each coordinate axis direction is calculated from the coordinate values of the first matching points in {q_j}′, the distribution variances are compared, the coordinate axis direction corresponding to the largest distribution variance is taken as the dividing direction, the coordinate values are sorted by size in the dividing direction, the first matching point whose coordinate value is the median is taken as the dividing point, and the virtual dividing line of Z1_iR is obtained by combining the dividing point and the dividing direction;
S45, the first matching points surrounding the virtual dividing line of Z1_iS are extracted as the edge points of Z1_iS; according to the distribution of the edge points of Z1_iS in the first spatial clustering result T_iS of S_i, the virtual dividing line of Z1_iS is corrected to obtain the actual dividing line of Z1_iS, and the second delimited space Z2_iS of S_i is obtained; at the same time, the first matching points surrounding the virtual dividing line of Z1_iR are extracted as the edge points of Z1_iR; according to the distribution of the edge points of Z1_iR in the first spatial clustering result T_iR of R_i, the virtual dividing line of Z1_iR is corrected to obtain the actual dividing line of Z1_iR, and the second delimited space Z2_iR of R_i is obtained;
S46, Z2_iS is further iteratively divided to obtain new delimited spaces; the iterative division stops when the number of first matching points in each new delimited space reaches a preset threshold, yielding 2^k S_i geological grid cells, where k is the number of divisions, and all S_i geological grid cells compose the S_i geological grid W_iS; at the same time, Z2_iR is further iteratively divided to obtain new delimited spaces; the iterative division stops when the number of first matching points in each new delimited space reaches the preset threshold, yielding 2^l R_i geological grid cells, where l is the number of divisions, and all R_i geological grid cells compose the R_i geological grid W_iR;
step S5, the first matching point pairs (p_j, q_j) of the first matching point pair set are screened based on W_iS and W_iR to obtain a second matching point pair set M_i;
Step S6 repeats steps S3-S5, traversing S i And { R }and { R } i Obtaining a second matching point pair set { M } of all sub-image pairs in the set i };
Step S7 is based on { M } i And (4) establishing an affine model for the matching point pairs in the image S, and matching the image S to be matched by using the affine model.
2. The method for matching feature points of remote sensing images based on geological information as claimed in claim 1, wherein step S5 comprises:
S51, in W_iS, the geological grid cell containing p_j and the N geological grid cells around it are selected as the support grid set of p_j; at the same time, in W_iR, the geological grid cell containing q_j and the N geological grid cells around it are selected as the support grid set of q_j;
S52, the number of first matching point pairs falling in both the support grid set of p_j and the support grid set of q_j is calculated; when the number is greater than the threshold τ1, (p_j, q_j) is added to M_i as a second matching point pair.
3. The method for matching feature points of remote sensing images based on geological information as claimed in claim 1, wherein the method further comprises:
after the second matching point pair set {M_i} is obtained, the number of all matching point pairs in {M_i} is calculated; if the number is greater than or equal to a threshold τ2, step S7 is executed; if the number is less than the threshold τ2, the following steps are executed:
the reference image R is re-segmented according to an image re-segmentation strategy to obtain a new sub-image set {R′_i} of the reference image R; (S_i, R′_i) from {S_i} and {R′_i} is taken as a new sub-image pair, and the process returns to step S3.
4. The method for matching feature points of remote sensing images based on geological information as claimed in claim 3, wherein the image re-segmentation strategy comprises:
the number of second matching points of each sub-image in {S_i} is calculated, and the sub-image S_imax with the largest number of second matching points is selected;
in {R_i}, m sub-images whose similarity to S_imax reaches the similarity threshold are obtained according to the screening model, with m ≥ 1;
the m sub-images are stitched to obtain a stitched image R′_m;
feature points are extracted from S_imax and R′_m and matching points are selected to obtain the matching point pairs of S_imax and R′_m; if the number of matching point pairs of S_imax and R′_m is greater than a threshold τ3, the offset is calculated from the matching point pairs of S_imax and R′_m, and the reference image R is re-segmented based on the offset and the segmentation scale H to obtain a new sub-image set {R′_i}.
5. The method for matching feature points of remote sensing images based on geological information as claimed in claim 1, wherein the feature point extraction method comprises at least one of a deep learning algorithm, the SIFT algorithm, the SURF algorithm and the ORB algorithm.
6. The method as claimed in claim 4, wherein the screening model comprises a deep learning model and a bag-of-words model.
CN202210886795.5A 2022-07-26 2022-07-26 Remote sensing image feature point matching method based on geological information Active CN114937145B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210886795.5A CN114937145B (en) 2022-07-26 2022-07-26 Remote sensing image feature point matching method based on geological information


Publications (2)

Publication Number Publication Date
CN114937145A CN114937145A (en) 2022-08-23
CN114937145B true CN114937145B (en) 2022-09-20

Family

ID=82868774

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210886795.5A Active CN114937145B (en) 2022-07-26 2022-07-26 Remote sensing image feature point matching method based on geological information

Country Status (1)

Country Link
CN (1) CN114937145B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103544711A (en) * 2013-11-08 2014-01-29 国家测绘地理信息局卫星测绘应用中心 Automatic registering method of remote-sensing image
CN104217209A (en) * 2013-06-03 2014-12-17 核工业北京地质研究院 Method for automatically eliminating wrongly-matched registration points in remote sensing image
CN108898096A (en) * 2018-06-27 2018-11-27 重庆交通大学 A kind of quick accurate extracting method of the information towards high score image
CN109801315A (en) * 2018-12-13 2019-05-24 天津津航技术物理研究所 A kind of infrared multispectral image method for registering based on edge extracting and cross-correlation
CN114092343A (en) * 2021-10-14 2022-02-25 北京数慧时空信息技术有限公司 Cloud removing method for remote sensing image
CN114612773A (en) * 2022-02-25 2022-06-10 武汉大学 Efficient sea ice motion extraction method and system suitable for SAR and optical images

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2711893B1 (en) * 2011-05-13 2020-03-18 Beijing Electric Power Economic Research Institute Method and device for processing geological information


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Multi-Source Remote Sensing Image Registration Based on Local Deep Learning Feature; Yongxian Zhang et al.; 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS; 2021-10-12; pp. 3412-3415 *
Optical remote sensing image registration based on SG-SIFT; Yu Xianchuan et al.; Journal of Beijing University of Posts and Telecommunications; 2014-12-15 (No. 06); pp. 17-22 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant