CN106778510B - Method for matching high-rise building characteristic points in ultrahigh-resolution remote sensing image


Info

Publication number
CN106778510B
CN106778510B (application CN201611047223.9A)
Authority
CN
China
Prior art keywords
image
feature
characteristic point
characteristic
point pairs
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611047223.9A
Other languages
Chinese (zh)
Other versions
CN106778510A (en)
Inventor
胡蕾
Current Assignee
Jiangxi Normal University
Original Assignee
Jiangxi Normal University
Priority date
Filing date
Publication date
Application filed by Jiangxi Normal University filed Critical Jiangxi Normal University
Priority to CN201611047223.9A priority Critical patent/CN106778510B/en
Publication of CN106778510A publication Critical patent/CN106778510A/en
Application granted granted Critical
Publication of CN106778510B publication Critical patent/CN106778510B/en
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/10: Terrestrial scenes
    • G06V 20/176: Urban or other man-made structures
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462: Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for matching feature points of high-rise buildings in ultrahigh-resolution remote sensing images. It addresses the problem that, in ultrahigh-resolution remote sensing images taken from different viewing angles, the apparent spatial positions of urban high-rise buildings shift, so few matched feature point pairs fall on those buildings. Multiple feature point extraction methods are applied to both the reference image and the image to be registered to obtain a large set of candidate feature points. After initial matching, strict screening conditions first yield a small number of feature point pairs with high matching accuracy, from which the viewing-angle difference between the images is estimated. The screening conditions are then relaxed using this viewing-angle difference together with the building displacement difference, producing a larger set of feature point pairs that contains both correct and incorrect matches. Finally, each pair is verified using the similarity of the image blocks around the feature points: wrongly matched pairs are removed and more correctly matched pairs are retained.

Description

Method for matching high-rise building characteristic points in ultrahigh-resolution remote sensing image
Technical Field
The invention belongs to the technical field of digital image processing and relates to a registration method for multi-temporal remote sensing image feature points, in particular to a registration method for high-rise building feature points in multi-temporal ultrahigh-resolution remote sensing images. On the basis of analysing the viewing-angle difference between the multi-temporal remote sensing images, the method judges whether a feature point pair matches by using the similarity of the image blocks surrounding the pair.
Background
Registration of multi-temporal remote sensing images is the basis of remote sensing image change detection and information fusion. In urban remote sensing image analysis, two trends interact: on one hand, with the development of remote sensing technology, image resolution keeps increasing; on the other hand, high-rise buildings are increasingly common as cities raise land-use efficiency. In ultrahigh-resolution imagery, the form and position of a high-rise building change markedly with the shooting angle; the building also occludes surrounding ground objects, so the image around it changes to some extent. Both effects interfere with feature point matching and reduce the number of correctly matched feature point pairs on the building.
The feature-based multi-temporal remote sensing image registration process typically comprises the following steps: (1) Feature extraction: features are mainly point features, line features and region features. Point features are the most studied; they are detected mainly from the grey-level variation around a point, and detectors such as Forstner, Gabor, SUSAN, Harris, SIFT and SURF have appeared in succession. Among these, SIFT and SURF feature points have good local invariance and are widely used in image registration. Line and region features are mainly the principal lines and regions obtained with line extraction and region segmentation techniques. (2) Feature matching: features are matched according to how they are described, forming feature pairs; for example, point features such as SIFT and SURF use the similarity of grey-level variation around the feature point, line features use the similarity of line direction and the spatial relations between lines, and region features use the similarity of region area, shape and so on. (3) Feature pair screening: part of the mismatched feature pairs are removed using, for example, the Mahalanobis distance or RANSAC. (4) Spatial transformation: a spatial transformation model is computed from the correctly matched feature pairs, and the image to be registered is transformed with it so that it becomes consistent with the reference image; typical models include affine, perspective and polynomial transformations.
In ultrahigh-resolution urban remote sensing images taken from different viewing angles, mid- and high-rise buildings present different forms. The building sides appear with different shapes in each image, so features extracted on the sides, together with features disturbed where the building occludes surrounding ground objects, yield few matchable features. Although the roof of a mid- or high-rise building appears at different positions in each image, its basic structural and colour features are well preserved, making the roof a key region for feature extraction and matching. Because building heights differ, the roof features cannot all satisfy a single spatial transformation, so some correctly matched feature pairs are discarded at the screening stage. The invention analyses exactly these pairs in order to retain them, increasing the number of feature pairs and providing more and better information for subsequent information fusion, change detection and the like.
The feature point matching method provided by the invention comprises three parts: the first part extracts feature points and performs initial matching; the second part strictly screens out a small number of correctly matched feature point pairs and uses them to analyse the viewing-angle change information; the third part re-screens the initially matched feature point pairs using this viewing-angle information and verifies each pair by analysing the similarity of the image blocks around it.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a method for matching high-rise building feature points in an ultrahigh-resolution remote sensing image. Because the apparent position of a high-rise building changes with the viewing angle, few matched feature point pairs fall on such buildings. The method exploits the consistency of the viewing-angle difference between the reference image and the image to be registered to screen out feature point pairs whose angles agree with that difference, and verifies them with the similarity of the surrounding image blocks, so that more correctly matched feature point pairs on high-rise buildings are obtained.
In order to achieve the purpose, the matching of the high-rise building feature points in the ultrahigh-resolution remote sensing image comprises the following steps:
step (1), feature point extraction: take two ultrahigh-resolution remote sensing images of different time phases, one as the reference image and the other as the image to be registered; both images are geometrically corrected and have the same resolution. Extract feature points with local invariance from each image;
step (2), initial matching of feature points: match the feature points of the reference image against those of the image to be registered using the descriptive characteristics of the feature points, obtaining an initial matched feature point pair set; this set still contains many wrong matches;
step (3), strict screening of feature point pairs (primary screening): strictly screen the pairs in the initial matched set to obtain a more accurate primary screening set, whose pairs are essentially all correctly matched;
step (4), analysis of image viewing-angle difference and image position difference: using the primary screening set, estimate the viewing-angle difference between the two images statistically from the angles formed by the correct feature point pairs; at the same time, derive the maximum image position difference from the height of the tallest building in the image and the image resolution;
step (5), secondary screening of feature pairs using the viewing-angle difference and position difference: from the initial matched set, screen out the feature point pairs that satisfy both the image viewing-angle difference and the image position difference, obtaining a secondary screening feature point pair set;
step (6), similarity analysis of the image blocks around the feature points: for each pair in the secondary screening set, take the image blocks around the feature points and analyse the similarity of corresponding blocks by mutual information;
step (7), feature point pair verification: if the similarity of any image block pair around a feature point pair meets the threshold, the pair is considered matched; otherwise it is considered unmatched and removed from the secondary screening set, yielding a new set of correctly matched feature pairs.
In step (4), when analysing the image viewing-angle difference information, the interval in which the feature pair angles are concentrated is selected as the image viewing-angle difference; the suggested interval width is 10 to 15 degrees.
In step (4), the image position difference refers to the maximum displacement that a viewing-angle-affected feature, such as a high-rise building, can show between the two images. This maximum is within 2 × (tallest building height / image resolution), and the value can be reduced further according to the viewing-angle difference between the two images.
In step (6), nine image blocks are taken around each feature point: 1) one block with the feature point at its centre; 2) four blocks with the feature point at the centre of the left, right, top and bottom edges respectively; 3) four blocks with the feature point at each of the four corners.
The principle of the invention is as follows. The remote sensing image pair to be processed is geometrically corrected and has the same resolution; the steps are explained in detail below:
step (1), feature point extraction: in remote sensing image registration, feature points with local invariance are more stable. Several extraction methods, such as SIFT and SURF, are applied to both images so that more candidate feature points are obtained;
step (2), initial matching of feature points: the feature points extracted by each method are matched separately using their descriptive characteristics, giving an initial matched feature point pair set that still contains many wrong matches;
step (3), strict screening of feature point pairs (primary screening): wrongly matched pairs in the initial matched set are removed with methods such as RANSAC or the Mahalanobis distance; some correct pairs are inevitably deleted at the same time. After this strict screening, the pairs in the resulting primary screening set are essentially all correctly matched;
step (4), analysis of image viewing-angle difference and image position difference: because the viewing angles of the two images are fixed, the angle formed by a correct feature point pair on a high-rise building is also fixed, and since the primary screening pairs are essentially correct, the interval in which their angles concentrate can be taken, statistically, as the viewing-angle difference between the two images. Meanwhile, the displacement of a high-rise building caused by the shooting angle is jointly determined by building height, viewing-angle difference and image resolution, and is at most 2 × (building height / image resolution);
step (5), secondary screening using the viewing-angle difference and position difference: the pairs in the initial matched set are examined one by one; if the angle formed by a pair is essentially consistent with the image viewing-angle difference and its displacement lies within the image position difference range, the pair is put into the secondary screening set;
step (6), similarity analysis of the image blocks around the feature points: the roof of a high-rise building is relatively stable across viewing angles and appears consistently in the images; although the shooting angles differ and the ground objects around the roof change, the local image blocks around feature point pairs that fall on the roof remain highly similar;
step (7), feature point pair verification: if the similarity of any image block pair around a feature point pair meets the threshold, the pair is considered matched; otherwise it is considered unmatched and removed from the secondary screening set, yielding a new set of correctly matched feature pairs.
The beneficial effects of the invention are:
1. the viewing-angle difference between images shot from different angles can be estimated from the images themselves;
2. more correctly matched feature pairs are obtained, in particular feature pairs on high-rise buildings;
3. building-level information fusion and change detection can be carried out on the basis of the matched building feature points.
Drawings
FIG. 1 is a flow chart of a method for matching characteristic points of high-rise buildings in an ultrahigh-resolution remote sensing image;
FIG. 2 is an example of a reference image;
FIG. 3 is an example of an image to be registered;
FIG. 4 is an image block with feature points as the center (black dots in the figure are feature points);
FIG. 5 is a diagram of an image block with feature points as edge center points (black points in the diagram are feature points);
fig. 6 is an image block with feature points as corner points (black points in the figure are feature points);
fig. 7 is an example of the image feature point registration result.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The method comprises the following steps: feature point extraction, initial matching of feature points, strict screening of feature point pairs (primary screening), analysis of image viewing-angle difference and image position difference, secondary screening of feature pairs using the viewing-angle difference and position difference, similarity analysis of the image blocks around the feature point pairs, and feature point pair verification. Consider an urban area dense with high-rise buildings: fig. 2 and fig. 3 show ultrahigh-resolution multispectral remote sensing images of the same area and resolution shot at two different times (converted to black-and-white for display). Because the shooting angles differ, building positions are shifted between the images, and the shifts depend on both building height and viewing-angle difference, which makes feature point pairs on high-rise buildings hard to screen. The invention exploits the fact that the viewing-angle difference between the two time phases is constant to screen out and verify these feature point pairs. The specific implementation steps are described in detail below, taking the images in fig. 2 and 3 as examples.
As shown in fig. 1, the implementation of the present invention is divided into 7 main steps, which are:
(1) Extract point features from the reference image I1 and the image I2 to be registered
One of the two time-phase remote sensing images is selected as the reference image I1 and the other as the image I2 to be registered.
The method selects point features with local invariance, namely SIFT, SURF and Harris-Laplace, obtaining the feature point sets P1_ST, P1_SF, P1_HL, P2_ST, P2_SF and P2_HL, where P1_ST, P1_SF and P1_HL denote the SIFT, SURF and Harris-Laplace feature points extracted from I1, and P2_ST, P2_SF and P2_HL those extracted from I2.
(2) Initial matching of feature points
Feature points are matched using their own descriptive characteristics: set P1_ST is matched against P2_ST, P1_SF against P2_SF, and P1_HL against P2_HL, giving the initial matched feature point pair sets C_ST, C_SF and C_HL corresponding to the SIFT, SURF and Harris-Laplace feature points respectively. These sets contain both correctly and wrongly matched pairs.
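As a minimal illustration (not the patent's exact procedure), descriptor-based initial matching can be sketched as nearest-neighbour search with Lowe's ratio test. The function name, the tuple layout and the toy descriptors are assumptions for the sketch:

```python
import math

def match_descriptors(desc1, desc2, ratio=0.75):
    """Nearest-neighbour matching with Lowe's ratio test.

    desc1, desc2: lists of (point_id, descriptor_vector) tuples.
    Returns a list of (id1, id2) candidate pairs; some may still be
    wrong, which is why the patent screens them afterwards.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    pairs = []
    for id1, d1 in desc1:
        ranked = sorted(desc2, key=lambda item: dist(d1, item[1]))
        # accept only if the best match is clearly better than the runner-up
        if len(ranked) >= 2 and dist(d1, ranked[0][1]) < ratio * dist(d1, ranked[1][1]):
            pairs.append((id1, ranked[0][0]))
    return pairs
```

In practice a library matcher would replace the brute-force loop; the ratio test is what keeps the initial pair sets from being dominated by ambiguous matches.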
(3) Characteristic point pair strict screening (one-time screening)
The RANSAC method, which tests whether the feature point pairs are consistent with a common transformation, is used to screen the initial matched sets C_ST, C_SF and C_HL once. Affine transformation, perspective transformation and so on can be chosen according to the image distortion; the method selects perspective transformation. After screening, a primary screening feature point pair set F is obtained in which correctly matched pairs dominate and only a few, or no, mismatched pairs remain. F is described as follows:
F = {<P1_1, P2_1>, <P1_2, P2_2>, <P1_3, P2_3>, ..., <P1_n, P2_n>}
That is, F contains n feature point pairs, where <P1_i, P2_i> (i = 1, 2, ..., n) denotes one pair: P1_i comes from the reference image I1 and P2_i from the image I2 to be registered. P1_i and P2_i carry the position information of the feature points, written <x1_i, y1_i> and <x2_i, y2_i> respectively.
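The RANSAC screening idea can be sketched in a few lines. The patent fits a perspective transform; to keep the sketch short this toy version uses a pure-translation model (a deliberate simplification, plainly not the patent's transform), but the hypothesise-and-count-inliers structure is the same:

```python
import random

def ransac_translation(pairs, tol=2.0, iters=200, seed=0):
    """Toy RANSAC screening with a translation model (dx, dy).

    pairs: list of ((x1, y1), (x2, y2)) matches.
    Returns the largest inlier subset found (the 'primary screening'
    set F in the patent's terminology).
    """
    rng = random.Random(seed)
    best = []
    for _ in range(iters):
        # hypothesise a model from one randomly chosen pair
        (x1, y1), (x2, y2) = rng.choice(pairs)
        dx, dy = x2 - x1, y2 - y1
        # count pairs consistent with that translation
        inliers = [p for p in pairs
                   if abs(p[1][0] - p[0][0] - dx) <= tol
                   and abs(p[1][1] - p[0][1] - dy) <= tol]
        if len(inliers) > len(best):
            best = inliers
    return best
```

With a perspective model, each hypothesis would be fit from four pairs instead of one, but the screening logic is unchanged.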
(4) Image perspective difference information and image position difference information analysis
The position information of each pair is taken from the primary screening set F and the angle of each pair is computed, giving the angle set {β_1, β_2, ..., β_n}, where β_i (i = 1, 2, ..., n) corresponds to the i-th pair in F. Taking the i-th pair as an example, with positions <x1_i, y1_i> and <x2_i, y2_i>, the angle β_i of the pair is computed as
β_i = arctan((y1_i - y2_i) / (x1_i - x2_i)) × 180 / π
where arctan is the arctangent function. β_i is then converted to the 0-360 degree range by the following rules:
if x1_i - x2_i < 0, then β_i = β_i + 180;
if x1_i - x2_i > 0 and y1_i - y2_i < 0, then β_i = β_i + 360.
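The arctangent-plus-quadrant-correction scheme above can be sketched directly (the handling of a vertical displacement, where x1_i = x2_i, is not covered by the formula and is an assumption here):

```python
import math

def pair_angle(p1, p2):
    """Angle beta (degrees, 0-360) formed by a feature point pair,
    following the arctan-plus-quadrant-correction rules.
    p1 = (x1, y1) from the reference image, p2 = (x2, y2) from the
    image to be registered."""
    dx, dy = p1[0] - p2[0], p1[1] - p2[1]
    if dx == 0:  # vertical displacement: not covered by the formula (assumed handling)
        return 90.0 if dy > 0 else 270.0
    beta = math.degrees(math.atan(dy / dx))
    if dx < 0:
        beta += 180.0          # second/third quadrant
    elif dy < 0:
        beta += 360.0          # fourth quadrant
    return beta
```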
Using an interval of width B, search for the interval [A, A+B] in which the pair angles are most concentrated, i.e. the interval containing the largest number of the angles {β_1, β_2, ..., β_n}; if the interval spans 0 or 360 degrees, segmented statistics can be used. The middle of this interval can be regarded as the viewing-angle difference between the two images.
[A, A+B] = arg max_A num{β_i ∈ [A, A+B]} (i = 1, 2, ..., n)
Since the viewing-angle difference comes from the fixed difference in shooting angles and both images are corrected, the angles between correct feature point pairs are relatively uniform; it is recommended to set the interval width B to 10 to 15 degrees.
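A simple way to realise this search, including the 0/360-degree wrap-around that the patent handles with segmented statistics, is to treat the angles circularly and test each observed angle as a candidate interval start (a sketch; candidate-start selection is an assumption):

```python
def densest_interval(angles, width=12.0):
    """Find [A, A+width] containing the most angles, treating the
    0/360 boundary as circular. Returns (A, count)."""
    best_a, best_count = 0.0, -1
    for a in angles:                       # each angle is a candidate start A
        count = sum(1 for b in angles
                    if (b - a) % 360.0 <= width)   # circular membership test
        if count > best_count:
            best_a, best_count = a, count
    return best_a, best_count
```

The modulo test makes an interval such as [355, 7] behave correctly without any special-case segmentation.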
The maximum distance Dis_max between the feature points of a pair in the two images is computed as follows. Given that the tallest building in the image is H metres high, the maximum position deviation of that building between images shot from different viewing angles is
Dis_max = 2H / R
where R is the resolution of the image. Dis_max can be reduced further according to the viewing-angle difference information.
(5) Secondary screening feature pairs by using image visual angle difference and image position difference
Each pair is taken in turn from the initial matched sets C_ST, C_SF and C_HL. Suppose the pair's positions in image I1 and image I2 are <x1_i, y1_i> and <x2_i, y2_i>. Using the image viewing-angle difference and the maximum distance from step (4) as constraints, the secondary screening feature point pair set is obtained as follows:
1) the angle α formed by the pair is computed with the method of step (4);
2) the distance Dis_pr between the two points is computed as
Dis_pr = sqrt((x1_i - x2_i)^2 + (y1_i - y2_i)^2)
3) if the angle α lies in the interval [A, A+B] and Dis_pr is less than Dis_max, the pair is put into the secondary screening feature point pair set.
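The three checks above combine into one filter. This sketch uses `atan2`, which is mathematically equivalent to the patent's arctan-plus-quadrant-correction formula on the 0-360 range; the function name and argument layout are assumptions:

```python
import math

def secondary_screen(pairs, a, b, dis_max):
    """Keep a pair when its angle lies in the concentrated interval
    [a, a+b] (circularly) and its displacement is below dis_max,
    where dis_max = 2*H/R (H: tallest building height in metres,
    R: image resolution in metres per pixel).
    pairs: list of ((x1, y1), (x2, y2))."""
    kept = []
    for (x1, y1), (x2, y2) in pairs:
        dx, dy = x1 - x2, y1 - y2
        # atan2(dy, dx) % 360 reproduces the 0-360 quadrant-corrected angle
        angle = math.degrees(math.atan2(dy, dx)) % 360.0
        dist = math.hypot(dx, dy)                 # Dis_pr
        if (angle - a) % 360.0 <= b and dist < dis_max:
            kept.append(((x1, y1), (x2, y2)))
    return kept
```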
(6) Similarity analysis of characteristic points to surrounding image blocks
For each pair in the secondary screening set, similarity is verified by computing the mutual information of the image blocks around the feature points: if any pair of corresponding blocks is similar, i.e. its mutual information meets the condition, the feature point pair is considered similar. Suppose a pair in the secondary screening set has positions <x1_i, y1_i> in image I1 and <x2_i, y2_i> in image I2. Nine image blocks are taken around each feature point, as follows:
1) the feature point is the centre of the block: 1 block in total, as shown in fig. 4, denoted B1_1 and B2_1 in images I1 and I2 respectively;
2) the feature point is the centre of a block edge: 4 blocks in total, as shown in fig. 5, denoted B1_2 and B2_2, B1_3 and B2_3, B1_4 and B2_4, B1_5 and B2_5;
3) the feature point is a corner of the block: 4 blocks in total, as shown in fig. 6, denoted B1_6 and B2_6, B1_7 and B2_7, B1_8 and B2_8, B1_9 and B2_9.
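The nine block positions can be enumerated by their top-left corners. The exact coordinate convention (which offset corresponds to which edge or corner in figs. 4-6) is an assumption of this sketch; only the counts 1 + 4 + 4 come from the patent:

```python
def nine_block_origins(x, y, size):
    """Top-left corners of the 9 candidate blocks around feature
    point (x, y), each of side `size`: the point sits at the block
    centre (1 block), at the centre of each edge (4 blocks), and at
    each corner (4 blocks). Edge/corner ordering is illustrative."""
    h = size // 2
    centre = [(x - h, y - h)]
    edges = [(x - h, y), (x - h, y - size),    # point on top / bottom edge
             (x, y - h), (x - size, y - h)]    # point on left / right edge
    corners = [(x, y), (x - size, y),
               (x, y - size), (x - size, y - size)]
    return centre + edges + corners
```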
Here B_pq (p = 1, 2; q = 1, 2, ..., 9) denotes the q-th image block, of size SizeBlk, in image I_p. The mutual information of each pair of corresponding blocks in the two images is computed, giving the nine values {MI_1, MI_2, ..., MI_9}.
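Mutual information between two grey-level blocks can be estimated from a joint histogram. The bin count and grey-level range below are assumptions; the patent does not specify them:

```python
import math
from collections import Counter

def mutual_information(block1, block2, bins=8, levels=256):
    """Mutual information (in bits) between two equal-length sequences
    of grey values in [0, levels), estimated from a joint histogram
    with `bins` bins per image (bin width = levels // bins)."""
    n = len(block1)
    q = levels // bins
    joint = Counter((a // q, b // q) for a, b in zip(block1, block2))
    pa = Counter(a // q for a in block1)    # marginal histogram of block1
    pb = Counter(b // q for b in block2)    # marginal histogram of block2
    mi = 0.0
    for (i, j), c in joint.items():
        # p(i,j) * log2( p(i,j) / (p(i) p(j)) ), with counts c, pa[i], pb[j]
        mi += (c / n) * math.log2(c * n / (pa[i] * pb[j]))
    return mi
```

Identical blocks give the entropy of the binned signal; a constant block shares no information with anything, giving zero.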
(7) Feature point pair verification
A threshold T_MI is set. If the mutual information of any image block pair of the feature point pair is greater than T_MI, i.e.
there exists q in {1, 2, ..., 9} such that MI_q > T_MI,
the feature point pair is considered a match. The feature point matching result for fig. 2 and 3 is shown in fig. 7. The principle is that, under different viewing angles, the roof of a high-rise building (especially a flat roof) is much more stable in appearance than its sides, although its position in the image shifts and the surrounding ground objects change to some extent. By examining the nine image blocks around each feature point, the block that lies on the roof can be found, and the stability of the roof is exploited to match the feature points.
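The final verification rule reduces to an existence test over the nine scores. The threshold value and data layout here are placeholders, not values from the patent:

```python
def verify_pairs(pair_block_mis, t_mi):
    """Step (7): keep a feature point pair when any of its nine block
    mutual-information scores MI_1..MI_9 exceeds the threshold T_MI.
    pair_block_mis: list of (pair, [MI_1, ..., MI_9]) tuples."""
    return [pair for pair, mis in pair_block_mis
            if any(mi > t_mi for mi in mis)]
```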
The invention provides a multi-temporal ultrahigh-resolution remote sensing image registration method specifically designed to improve the matching of feature point pairs on high-rise buildings in urban areas. The method is applicable to various types of remote sensing image, such as panchromatic, multispectral, hyperspectral, SAR and infrared images.
Portions of the invention not described in detail are well known to those skilled in the art.
The method for matching feature points of urban high-rise buildings in ultrahigh-resolution remote sensing images of the same area under multiple viewing angles has been described in detail above, but its specific implementation is obviously not limited to this description. Various obvious modifications made by those skilled in the art without departing from the scope of the claims of the present invention fall within the protection scope of the invention.

Claims (4)

1. A method for matching high-rise building feature points in an ultrahigh-resolution remote sensing image, characterised by comprising the following steps:
step (1), feature point extraction: take two ultrahigh-resolution remote sensing images of different time phases, one as the reference image and the other as the image to be registered; both images are geometrically corrected and have the same resolution; extract feature points with local invariance from each image;
step (2), initial matching of feature points: match the feature points of the reference image against those of the image to be registered, obtaining an initial matched feature point pair set that still contains wrong matches;
step (3), strict screening of feature point pairs: strictly screen the pairs of the initial matched set to obtain a more accurate primary screening feature point pair set whose pairs are essentially all correctly matched;
step (4), analysis of image viewing-angle difference information and image position difference information: using the primary screening set, compute the angle of each pair and take, statistically, the interval in which the angles concentrate as the image viewing-angle difference between the two images; at the same time, obtain the maximum image position difference of the feature point pairs from the height of the tallest building in the image and the image resolution;
step (5), secondary screening of feature point pairs using the image viewing-angle difference and the maximum image position difference: compute the angle and displacement of each pair in the initial matched set, and screen out the pairs whose angle is consistent with the image viewing-angle difference and whose displacement is smaller than the maximum image position difference, obtaining a secondary screening feature point pair set;
step (6), similarity analysis of the image blocks around the feature points: for each pair in the secondary screening set, take the image blocks around the feature points and analyse the similarity of corresponding blocks by mutual information;
step (7), feature point pair verification: if the similarity of any image block pair around a feature point pair meets the threshold, the pair is considered matched; otherwise it is considered unmatched and is removed from the secondary screening set, yielding a new set of correctly matched feature pairs.
2. The method for matching high-rise building feature points in an ultrahigh-resolution remote sensing image according to claim 1, characterized in that: in step (4), when analyzing the image viewing-angle difference, the region where the angles of the feature point pairs are concentrated is selected as the image viewing-angle difference, the angle range being 10-15 degrees.
3. The method for matching high-rise building feature points in an ultrahigh-resolution remote sensing image according to claim 1, characterized in that: in step (4), when analyzing the image position difference, the image position difference refers to the positional offset of a high-rise building exhibited in the image pair under the influence of the shooting viewing angle, and its maximum value is 2 × (tallest building height / image resolution).
4. The method for matching high-rise building feature points in an ultrahigh-resolution remote sensing image according to claim 1, characterized in that: the image blocks surrounding a feature point in step (6) comprise 9 blocks: A) 1 block with the feature point at its center; B) 4 blocks with the feature point at the midpoints of their left, right, upper, and lower edges, respectively; C) 4 blocks with the feature point at their 4 vertices, respectively.
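Claim 1 leaves the feature detector and matcher of steps (1)-(2) open. As a purely illustrative sketch, initial matching can be done by nearest-neighbour search over descriptor vectors with a distance-ratio test; the function name `match_descriptors`, the ratio value 0.8, and the use of Euclidean distance are assumptions, not the patented method.

```python
import numpy as np

def match_descriptors(desc_ref, desc_tgt, ratio=0.8):
    """Sketch of step (2): nearest-neighbour matching of feature descriptors
    with a distance-ratio test to suppress ambiguous matches. Returns index
    pairs (i, j): descriptor i of the reference image matches descriptor j
    of the image to be matched."""
    # Pairwise Euclidean distances between every reference/target descriptor.
    d = np.linalg.norm(desc_ref[:, None, :] - desc_tgt[None, :, :], axis=2)
    pairs = []
    for i, row in enumerate(d):
        j, k = np.argsort(row)[:2]       # nearest and second-nearest neighbour
        if row[j] < ratio * row[k]:      # keep only unambiguous matches
            pairs.append((i, int(j)))
    return pairs
```

The resulting set still contains mismatches, which is why the claim follows it with the strict screening of step (3), e.g. a robust geometric-consistency pass.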
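Steps (4)-(5), together with the 10-15 degree window of claim 2 and the 2 × height / resolution bound of claim 3, can be sketched as follows. This is a minimal NumPy illustration under assumptions: the 1-degree histogram, the angle tolerance, and the function names are not specified by the patent.

```python
import numpy as np

def dominant_angle(ref_pts, tgt_pts, window_deg=10.0):
    """Step (4): the angle of each matched pair's displacement vector; the
    centre of the most populated window of `window_deg` degrees (claim 2
    allows a 10-15 degree range) is taken as the image viewing-angle difference."""
    d = tgt_pts - ref_pts
    angles = np.degrees(np.arctan2(d[:, 1], d[:, 0])) % 360.0
    hist, _ = np.histogram(angles, bins=360, range=(0.0, 360.0))
    w = int(window_deg)
    # Slide the window circularly over the 1-degree histogram.
    counts = np.convolve(np.concatenate([hist, hist[:w - 1]]), np.ones(w), "valid")[:360]
    start = int(np.argmax(counts))
    return (start + window_deg / 2.0) % 360.0

def max_position_difference(tallest_building_m, resolution_m_per_px):
    """Claim 3: the maximum image position difference of a feature point pair
    is 2 * tallest-building height / image resolution (in pixels)."""
    return 2.0 * tallest_building_m / resolution_m_per_px

def secondary_screen(ref_pts, tgt_pts, view_angle, max_disp, angle_tol=5.0):
    """Step (5): keep pairs whose displacement angle agrees with the image
    viewing-angle difference and whose displacement stays below the bound."""
    d = tgt_pts - ref_pts
    angles = np.degrees(np.arctan2(d[:, 1], d[:, 0])) % 360.0
    diff = np.minimum(np.abs(angles - view_angle), 360.0 - np.abs(angles - view_angle))
    dist = np.hypot(d[:, 0], d[:, 1])
    return (diff <= angle_tol) & (dist <= max_disp)
```

For a 100 m tallest building in a 0.5 m/pixel image, `max_position_difference(100.0, 0.5)` gives a 400-pixel bound on the building-top displacement between the two views.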
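Steps (6)-(7) compare image blocks around each candidate pair by mutual information, and claim 4 places the feature point at the centre, the four edge midpoints, and the four corners of nine blocks. A minimal histogram-based sketch follows; the block half-size, bin count, and acceptance threshold are assumptions not fixed by the claims.

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Histogram-based mutual information (in nats) of two equal-size blocks."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of block a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of block b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def nine_block_offsets(half):
    """Claim 4's nine blocks, expressed as the offset of each block centre
    relative to the feature point: point at the centre (A), at the four edge
    midpoints (B), and at the four corners (C)."""
    h = half
    return [(0, 0),                              # A) point at block centre
            (-h, 0), (h, 0), (0, -h), (0, h),    # B) point at edge midpoints
            (-h, -h), (-h, h), (h, -h), (h, h)]  # C) point at block corners

def verify_pair(img1, p1, img2, p2, half=8, threshold=0.5):
    """Step (7): the pair is accepted if ANY of the nine corresponding block
    pairs reaches the mutual-information threshold; otherwise it is rejected."""
    for dy, dx in nine_block_offsets(half):
        y1, x1 = p1[0] + dy, p1[1] + dx
        y2, x2 = p2[0] + dy, p2[1] + dx
        b1 = img1[y1 - half:y1 + half, x1 - half:x1 + half]
        b2 = img2[y2 - half:y2 + half, x2 - half:x2 + half]
        if b1.shape == b2.shape and b1.size and mutual_information(b1, b2) >= threshold:
            return True
    return False
```

Mutual information is chosen over simple correlation here because it tolerates the radiometric differences between two acquisition dates; the sliding nine-block layout gives the comparison a chance to succeed even when the building facade shifts the matching texture off-centre.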
CN201611047223.9A 2016-11-25 2016-11-25 Method for matching high-rise building characteristic points in ultrahigh-resolution remote sensing image Active CN106778510B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611047223.9A CN106778510B (en) 2016-11-25 2016-11-25 Method for matching high-rise building characteristic points in ultrahigh-resolution remote sensing image

Publications (2)

Publication Number Publication Date
CN106778510A CN106778510A (en) 2017-05-31
CN106778510B true CN106778510B (en) 2020-03-20

Family

ID=58974246

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611047223.9A Active CN106778510B (en) 2016-11-25 2016-11-25 Method for matching high-rise building characteristic points in ultrahigh-resolution remote sensing image

Country Status (1)

Country Link
CN (1) CN106778510B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107980140B (en) * 2017-10-16 2021-09-14 厦门熵基科技有限公司 Palm vein identification method and device
CN107862345B (en) * 2017-12-01 2020-05-08 北京智芯原动科技有限公司 License plate comparison method and device
CN110197504B (en) * 2019-06-05 2021-07-20 首都师范大学 Image registration method and device, electronic equipment and computer-readable storage medium
CN111291768B (en) * 2020-02-17 2023-05-30 Oppo广东移动通信有限公司 Image feature matching method and device, equipment and storage medium
CN111784697B (en) * 2020-06-30 2024-03-08 北京百度网讯科技有限公司 Image processing method, device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2172856A2 (en) * 2008-09-25 2010-04-07 Canon Kabushiki Kaisha Image processing apparatus, image processing method and program
CN103020945A (en) * 2011-09-21 2013-04-03 中国科学院电子学研究所 Remote sensing image registration method of multi-source sensor
CN104992438A (en) * 2015-06-26 2015-10-21 江西师范大学 Large-time-span remote sensing image registration method combining with historical image sequence
CN105551058A (en) * 2016-02-02 2016-05-04 天津大学 Cylindrical surface image matching method combining with SURF feature extraction and curve fitting
CN106023187A (en) * 2016-05-17 2016-10-12 西北工业大学 Image registration method based on SIFT feature and angle relative distance


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant