CN105160686A - Improved scale invariant feature transformation (SIFT) operator based low altitude multi-view remote-sensing image matching method - Google Patents


Info

Publication number
CN105160686A
Authority
CN
China
Prior art keywords
image
point
unique point
sampling
gray
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510688554.XA
Other languages
Chinese (zh)
Other versions
CN105160686B (en)
Inventor
邵振峰
李从敏
周维勋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201510688554.XA priority Critical patent/CN105160686B/en
Publication of CN105160686A publication Critical patent/CN105160686A/en
Application granted granted Critical
Publication of CN105160686B publication Critical patent/CN105160686B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10032: Satellite or aerial image; Remote sensing

Abstract

The invention discloses a low-altitude multi-view remote-sensing image matching method based on an improved scale-invariant feature transform (SIFT) operator. First, an optimized DoG operator detects feature points in a first image and a second image; next, a SIFT descriptor based on local-area sampling simulation describes each feature point to form a feature vector; finally, initial matches are obtained with the nearest-neighbor distance ratio (NNDR) strategy, and the matching points are refined with a random sample consensus (RANSAC) algorithm based on the epipolar geometric constraint. The method effectively addresses the multi-view and weak-texture problems of low-altitude remote-sensing image matching.

Description

A low-altitude multi-view remote-sensing image matching method based on an improved SIFT operator
Technical field
The invention belongs to the field of remote-sensing image processing technology and relates to a remote-sensing image matching method, in particular to a low-altitude multi-view remote-sensing image matching method based on an improved SIFT operator.
Background technology
Image matching has long been a research focus in photogrammetry and remote sensing. It is one of the key steps in image orientation, orthophoto production, automatic image registration and mosaicking, and three-dimensional reconstruction, and it directly affects the precision and quality of subsequent products. Compared with traditional remote-sensing images, low-altitude remote-sensing images are acquired under more complex conditions. Images taken from different viewing angles in particular suffer from severe geometric distortion, similar textures, texture fracture, shadows, and occlusion, which substantially reduce the similarity of the same object across overlapping images and make the matching of low-altitude imagery very difficult. Obtaining reliable and stable matching points from low-altitude multi-view images with severe geometric distortion is therefore of great significance.
Image matching algorithms fall into two main classes: matching based on image gray values and matching based on image features. Gray-based methods operate directly on the pixels within a window on the image and find correspondences through the gray-level correlation of those pixels. Although such methods can achieve high matching precision when the radiometric and geometric differences between images are small, they are sensitive to gray-level changes and are unsuitable for the geometric distortion, similar textures, texture fracture, shadows, and occlusion found in low-altitude multi-view images. Feature-based methods match geometric features such as points, lines, and regions extracted from the images and can compensate well for the shortcomings of gray-based methods.
Among feature-based methods, the most widely used is the SIFT (Scale Invariant Feature Transform) algorithm proposed by Lowe, which handles the rotation, scaling, and translation components of geometric distortion well but still fails to produce satisfactory results for image pairs with large viewing-angle differences. To improve matching across viewing angles, researchers have proposed several multi-view extensions of SIFT, including ASIFT (Affine-SIFT), ISIFT (Iterative-SIFT), and MM-SIFT (Multi-resolution MSER based SIFT). ASIFT samples three parameters of the camera affine model (longitude, latitude, and scale) to simulate view transformations, builds an affine space of simulated images, and then performs SIFT feature detection and matching on all of them. The method is thus fully affine-invariant and achieves good matching results even under large viewing-angle changes, but its time complexity grows accordingly, which limits its range of application. To balance matching quality and time efficiency, ISIFT iteratively estimates a geometric transformation model for one of the images and then performs SIFT matching between a single simulated image and the other image, greatly improving matching efficiency; however, when the geometric distortion between images is large, the algorithm fails because the geometric model cannot be solved. MM-SIFT assumes that corresponding local regions in the two images satisfy the same affine model: it normalizes local elliptical regions in both images to circles and then performs SIFT matching within the normalized elliptical regions. This greatly improves time efficiency, and some correct matches can still be obtained under large viewing-angle changes, but the method depends heavily on the extracted local regions: the MSER operator it uses has no optimal threshold, and existing region segmentation algorithms are still immature for multi-view images.
Summary of the invention
To address the shortcomings of existing image matching techniques, the object of the present invention is to provide a low-altitude multi-view image matching method based on an improved SIFT operator that is time-efficient and robust to geometric image distortion.
The technical solution adopted by the present invention is a low-altitude multi-view remote-sensing image matching method based on an improved SIFT operator, characterized by comprising the following steps:
Step 1: detect the feature points in the first image and the second image using a DoG operator with an adaptive threshold;
Step 2: describe the feature points using an improved SIFT descriptor;
For the feature points extracted from the first and second images, gray values sampled in a circular region and in an elliptic region of each point's neighborhood, respectively, are used to describe the feature point, finally forming a 128-dimensional feature vector;
Step 3: obtain the initial matching point set using the nearest-neighbor distance ratio (NNDR) strategy, i.e. the ratio of the nearest-neighbor distance to the second-nearest-neighbor distance between the feature vectors of the feature points;
Step 4: refine the initial matching point set with a RANSAC algorithm based on the epipolar geometric constraint to obtain the final matching points.
Preferably, the DoG operator with an adaptive threshold described in step 1 determines the threshold $T_{contrast}$ adaptively from the gray information within a window centered on the feature point:

$$T_{contrast} = \frac{1}{4r^2}\sum_{i=m-r}^{m+r}\sum_{j=n-r}^{n+r}\frac{1}{N^2}\sum_{k=i-\frac{N-1}{2}}^{i+\frac{N-1}{2}}\sum_{l=j-\frac{N-1}{2}}^{j+\frac{N-1}{2}}\left(x_{kl}-\bar{x}_N\right);$$

where $(m, n)$ is the coordinate of the feature point in the image, $N$ is the window size, $x_{kl}$ is the gray value of a pixel in the DoG scale space, $\bar{x}_N$ is the average gray value of the pixels in the window, and $r$ is determined by the image scale and can be computed as:

$$r = 4\sigma_0 \cdot 2^{2S_i/M_I};$$

where $S_i$ is the scale of the current feature point, $M_I$ is the number of images in the DoG scale space, and $\sigma_0$ represents the degree of image smoothing.
Preferably, for the description of feature points by circular-region sampling in step 2, for any feature point $(x_i, y_i)$ of the first image, the circular-region sample points $(x_{ijk}, y_{ijk})$ in its neighborhood are given by:

$$x_{ijk} = x_i + r_{ij}\sin(\theta_k);$$
$$y_{ijk} = y_i + r_{ij}\cos(\theta_k);$$

where $r_{ij}$ is the sampling radius and $\theta_k$ is the sampling angle.
Preferably, for the description of feature points by elliptic-region sampling in step 2, for any feature point $(x'_i, y'_i)$ of the second image, the elliptic-region sample points $(x'_{ijk}, y'_{ijk})$ in its neighborhood are given by:

$$x'_{ijk} = x'_i + a_{ij}\cos(\theta_k)\cos(\theta_0) + b_{ij}\cos(\theta_k)\sin(\theta_0);$$
$$y'_{ijk} = y'_i + a_{ij}\sin(\theta_k)\sin(\theta_0) - b_{ij}\cos(\theta_k)\cos(\theta_0);$$

where $a_{ij}$ is the semi-major axis of the sampling ellipse, $b_{ij}$ is the semi-minor axis, $b_{ij} = a_{ij}/\gamma$ with $\gamma \ge 1$, $\theta_0$ is the ellipse orientation, and $\theta_k$ is the sampling angle.
Preferably, the gradients at the sample points are computed as follows:

$$d_\theta(x_{ijk}, y_{ijk}) = I(x_{ijk}, y_{ijk}) - I(x_{ij(k-1)}, y_{ij(k-1)});$$
$$d_\gamma(x_{ijk}, y_{ijk}) = I(x_{ijk}, y_{ijk}) - I(x_{i(j-1)k}, y_{i(j-1)k});$$
$$m(x_{ijk}, y_{ijk}) = \sqrt{d_\theta^2(x_{ijk}, y_{ijk}) + d_\gamma^2(x_{ijk}, y_{ijk})}.$$

where $d_\theta(\cdot,\cdot)$ and $d_\gamma(\cdot,\cdot)$ are the gradient values in the $\theta$ and $\gamma$ directions, $m(\cdot,\cdot)$ is the gradient magnitude at the point, and the gray value $I$ of a sample point is obtained by bilinear interpolation.
Preferably, an iterative strategy is used to determine the ellipse parameters $\gamma$ and $\theta_0$, with the following concrete steps:
Step 2.1: randomly select $M$ feature points from the first image and record their feature vectors as $D$;
Step 2.2: when performing elliptic sampling in the second image, set $\gamma = \gamma_s$ and $\theta_0 = 0$, obtain the feature vectors of all feature points, recorded as $D'$, match the first and second images with the nearest-neighbor distance method, and record the number of correct matches as $n$;
Step 2.3: vary the values of $\gamma$ and $\theta_0$, with $\gamma \in [\gamma_s, \gamma_e]$ and $\theta_0 \in [0°, 90°]$, obtaining the vectors $D'_i$ for each simulated setting and the corresponding numbers of correct matches $n_i$;
Step 2.4: compare the matching results across all settings; the $\gamma^*$ and $\theta^*$ corresponding to $\max(n_i)$ are the optimal solution;
Step 2.5: take the feature points of the first image at $\max(n_i)$ as the initial points and repeat steps 2.1 to 2.5 until the values of $\gamma^*$ and $\theta^*$ stabilize.
Preferably, after the neighborhood sampling in step 2 is completed, the gray values of the sample points are obtained by bilinear interpolation.
Compared with the prior art, the present invention has the following features and beneficial effects:
1. An adaptive threshold is used in the feature-point extraction stage, so the DoG operator can adapt to images with different textures, safeguarding the subsequent matching results;
2. In the feature description stage, elliptic sampling is used to simulate viewing-angle changes, which increases the similarity of the same object across images of different viewing angles and substantially improves matching in multi-view and weakly textured regions;
3. Only local regions are simulated rather than the whole image, improving matching precision while keeping time efficiency in mind.
Accompanying drawing explanation
Fig. 1 is the flowchart of the embodiment of the present invention;
Fig. 2 is the schematic diagram of circular-region sampling in the embodiment of the present invention;
Fig. 3 is the schematic diagram of elliptic-region sampling in the embodiment of the present invention.
Embodiment
To help those of ordinary skill in the art understand and implement the present invention, it is described in further detail below with reference to the drawings and embodiments. It should be understood that the embodiments described here serve only to illustrate and explain the invention and are not intended to limit it.
In the proposed low-altitude multi-view remote-sensing image matching method based on an improved SIFT operator, an optimized DoG operator first extracts feature points from the reference image and the image to be matched; a SIFT descriptor based on local-area simulation then describes the features; the feature points are matched with the ratio strategy of nearest to second-nearest distance; and finally the initial matching results are refined with a RANSAC procedure based on epipolar geometry to obtain the final matching points.
In a concrete implementation, the invention can be realized as an automatic workflow in computer software. To describe the technical solution in detail, refer to Fig. 1; the provided low-altitude multi-view remote-sensing image matching method based on an improved SIFT operator comprises the following steps:
Step 1: the DoG operator of use adaptive threshold detects the unique point in the first image and the second image;
In order to adapt to the image of different details, the present invention does not adopt the fixed threshold in original DoG, but according to the half-tone information self-adaptation definite threshold T_contrast in the certain window ranges centered by unique point, its expression formula is:
T _ c o n t r a s t = 1 4 · r 2 Σ i = m - r i = m + r Σ j = n - r j = n + r 1 N 2 Σ k = i - N - 1 2 k = i + N - 1 2 Σ l = j - N - 1 2 l = j + N - 1 2 ( x k l - x ‾ N ) ;
Wherein, (m, n) is the coordinate of unique point in image, and N is window size, x klthe gray-scale value of pixel at DoG metric space, be the average gray of pixel in window ranges, r is decided by the yardstick of image, can be calculated as follows:
r = 4 · σ 0 · 2 2 S i M I ;
Wherein, S ifor the yardstick at current signature point place, M irepresent the quantity of image in DoG metric space, σ 0represent the degree that image is level and smooth.
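As an illustration only (not part of the patent text), the adaptive contrast threshold of step 1 can be sketched in Python as follows. All function and variable names are invented, and since the printed formula sums raw deviations from the window mean (which is identically zero), a variance-style reading with squared deviations is assumed here:

```python
import numpy as np

def adaptive_contrast_threshold(dog, m, n, r, N):
    """Sketch of the adaptive DoG contrast threshold T_contrast.

    dog is one DoG scale-space layer, (m, n) the candidate point,
    r the outer window half-size derived from the scale, and N the
    inner window size (odd).  Squared deviations from the local mean
    are assumed (the published formula prints raw deviations).
    """
    h = (N - 1) // 2
    total = 0.0
    for i in range(m - r, m + r + 1):
        for j in range(n - r, n + r + 1):
            # N x N patch centered on (i, j); mean-removed squared sum
            patch = dog[i - h:i + h + 1, j - h:j + h + 1]
            total += np.sum((patch - patch.mean()) ** 2) / (N * N)
    return total / (4.0 * r * r)
```

A candidate extremum would then be kept only when its DoG response exceeds this locally computed threshold, so busy textures raise the bar while flat regions lower it.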
Step 2: describe the feature points using an improved SIFT descriptor;
For the feature points extracted from the first and second images, gray values sampled in a circular region and in an elliptic region of each point's neighborhood, respectively, are used to describe the feature point, finally forming a 128-dimensional feature vector;
The research object of the present invention is low-altitude multi-view remote-sensing imagery. Because of the complexity of the imaging conditions, severe geometric distortion exists between images of different viewing angles, so the images as a whole do not satisfy a single affine transformation; corresponding local planar regions of overlapping multi-view images, however, can be represented by the same affine transformation. In image geometry, the relation between a circle and an ellipse can characterize an affine transformation. Therefore, in the feature description stage of the present invention, the description region of a feature point in one image is a circle, while the description region in the other image is a simulated ellipse. The aim is to simulate the view transformation between the images and increase the similarity of the gray information in the feature-point neighborhoods, thereby improving the matching of multi-view images.
When describing the circular region of a feature point in one of the images, the size of the local circular description region of each feature point $(x_i, y_i)$ is represented by its radius:

$$w_i = (3\sigma_i \times \sqrt{2} \times 5 + 1)/2;$$

where $\sigma_i$ is the scale of the feature point.
Unlike the original SIFT, which directly uses the gray values at neighboring integer coordinates, the present invention obtains the gray information in the neighborhood by sampling and interpolation, with the sampling pattern shown in Fig. 2. The sampling radius and angle within the circular description region are respectively:

$$r_{ij} = j \cdot w_i / s;$$
$$\theta_k = k\pi/180;$$

where $s$ is the total number of radial samples, $j = 1, \ldots, s$, and $k$ is the sampling angle in degrees, $k = 360°/n, 2 \cdot 360°/n, \ldots, 360°$.
From the sampling radius and sampling angle, the sample-point coordinates can be computed as:

$$x_{ijk} = x_i + r_{ij}\sin(\theta_k);$$
$$y_{ijk} = y_i + r_{ij}\cos(\theta_k);$$

where $r_{ij}$ is the sampling radius and $\theta_k$ is the sampling angle.
The gray value of each sample point is obtained by bilinear interpolation and denoted $I(x_{ijk}, y_{ijk})$; bilinear interpolation is a common method and is not elaborated here.
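For illustration (not from the patent; names and the sample counts are invented), the circular sampling with bilinear interpolation can be sketched as:

```python
import math
import numpy as np

def bilinear(img, x, y):
    """Bilinear interpolation of img at the real-valued point (x, y)."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0]
            + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0]
            + dx * dy * img[y0 + 1, x0 + 1])

def circular_samples(img, xi, yi, w, s=4, n_angles=36):
    """Sample gray values on s concentric circles of radius up to w
    around the keypoint (xi, yi), with n_angles points per circle."""
    grid = np.empty((s, n_angles))
    for j in range(1, s + 1):
        r = j * w / s                      # r_ij = j * w_i / s
        for k in range(n_angles):
            theta = 2 * math.pi * k / n_angles
            x = xi + r * math.sin(theta)   # patent convention: sin for x
            y = yi + r * math.cos(theta)   # and cos for y
            grid[j - 1, k] = bilinear(img, x, y)
    return grid
```

The resulting (radius, angle) grid of gray values is what the later gradient step differences along its two axes.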
When the feature points of the other image are described by elliptic regions, the content covered by the elliptic region of a feature point should match as closely as possible the content covered by the circular region of its corresponding point, to ensure the matching effect. For each feature point $(x'_i, y'_i)$, the sampling range of the elliptic region is determined by three ellipse parameters: the semi-major axis $a_{ij}$, the semi-minor axis $b_{ij}$, and the ellipse orientation $\theta_0$. $a_{ij}$ is computed in the same way as $r_{ij}$; with $b_{ij} = a_{ij}/\gamma$, the parameters controlling the shape of the elliptic region are $\gamma$ and $\theta_0$. The elliptic sampling pattern is shown in Fig. 3, and the coordinates of each sample point $(x'_{ijk}, y'_{ijk})$ can be computed as:

$$x'_{ijk} = x'_i + a_{ij}\cos(\theta_k)\cos(\theta_0) + b_{ij}\cos(\theta_k)\sin(\theta_0);$$
$$y'_{ijk} = y'_i + a_{ij}\sin(\theta_k)\sin(\theta_0) - b_{ij}\cos(\theta_k)\cos(\theta_0);$$

where $j = 1, \ldots, s$, $\theta_0 \in [0°, 180°]$, $\theta_k = k\pi/180$, and $k = 360°/n, 2 \cdot 360°/n, \ldots, 360°$.
After the neighborhood sampling is completed, the gray values of the elliptic-region sample points are likewise obtained by bilinear interpolation.
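For illustration only: the printed ellipse formulas degenerate to a line segment at θ0 = 0, so the sketch below assumes a standard rotated-ellipse parameterization instead, chosen to stay consistent with the circular case (sin for x, cos for y) and to reduce to the circle when γ = 1. This substitution is an assumption, not the patent's literal formula:

```python
import math

def ellipse_sample_point(xc, yc, a, gamma, theta0, theta_k):
    """One elliptic sample point with semi-axes a and b = a / gamma
    (gamma >= 1) and ellipse orientation theta0.  Assumed standard
    rotated-ellipse parameterization, not the literal printed formula."""
    b = a / gamma
    u = a * math.sin(theta_k)   # un-rotated ellipse; matches the circle
    v = b * math.cos(theta_k)   # convention (sin for x, cos for y)
    x = xc + u * math.cos(theta0) - v * math.sin(theta0)
    y = yc + u * math.sin(theta0) + v * math.cos(theta0)
    return x, y
```

With γ = 1 every sample lies at distance a from the center, i.e. the elliptic sampling falls back to the circular sampling of the first image.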
When computing the gradients, unlike the original SIFT, neighboring pixels such as $(x_i, y_i)$, $(x_i, y_{i-1})$, $(x_{i-1}, y_i)$ are not used; instead, the neighborhood sample points above are used. For a circular description region, the concrete gradient expressions are:

$$d_\theta(x_{ijk}, y_{ijk}) = I(x_{ijk}, y_{ijk}) - I(x_{ij(k-1)}, y_{ij(k-1)});$$
$$d_\gamma(x_{ijk}, y_{ijk}) = I(x_{ijk}, y_{ijk}) - I(x_{i(j-1)k}, y_{i(j-1)k});$$
$$m(x_{ijk}, y_{ijk}) = \sqrt{d_\theta^2(x_{ijk}, y_{ijk}) + d_\gamma^2(x_{ijk}, y_{ijk})}.$$

where $d_\theta(\cdot,\cdot)$ and $d_\gamma(\cdot,\cdot)$ are the gradient values in the $\theta$ and $\gamma$ directions, and $m(\cdot,\cdot)$ is the gradient magnitude at the point.
From the gray gradients obtained above, the gradient orientation histogram is computed in the same way as in the original SIFT, yielding the feature vector of each feature point.
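A sketch of these finite-difference gradients over the sampled (radius, angle) grid, purely illustrative: the wrap-around along the angle axis and the zeroed innermost ring (which has no inner neighbor) are assumptions not spelled out in the patent:

```python
import numpy as np

def polar_gradients(grid):
    """Finite-difference gradients of the sampled gray values along the
    angular (theta) and radial (gamma) directions, plus the gradient
    magnitude.  grid has shape (n_radii, n_angles); the angle axis
    wraps around, and the innermost ring's radial gradient is set to
    zero (an assumption)."""
    d_theta = grid - np.roll(grid, 1, axis=1)   # I(j, k) - I(j, k-1)
    d_r = np.zeros_like(grid)
    d_r[1:] = grid[1:] - grid[:-1]              # I(j, k) - I(j-1, k)
    mag = np.hypot(d_theta, d_r)
    return d_theta, d_r, mag
```

The magnitudes and orientations would then be binned into the usual SIFT orientation histograms to form the 128-dimensional vector.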
Whether the above SIFT descriptor based on local-area simulation sampling can effectively handle the geometric deformation caused by large viewing-angle differences depends on how well the elliptic region simulates the circular region, that is, on the parameters $\gamma$ and $\theta_0$ that control the elliptic region. The present invention solves for the values of $\gamma$ and $\theta_0$ with an iterative strategy to guarantee the matching effect; the concrete steps are as follows:
Step 2.1: randomly select $M$ feature points from the first image and record their feature vectors as $D$;
Step 2.2: when performing elliptic sampling in the second image, set $\gamma = \gamma_s$ and $\theta_0 = 0$, obtain the feature vectors of all feature points, recorded as $D'$, match the first and second images with the nearest-neighbor distance method, and record the number of correct matches as $n$;
Step 2.3: vary the values of $\gamma$ and $\theta_0$, with $\gamma \in [\gamma_s, \gamma_e]$ and $\theta_0 \in [0°, 90°]$, obtaining the vectors $D'_i$ for each simulated setting and the corresponding numbers of correct matches $n_i$;
Step 2.4: compare the matching results across all settings; the $\gamma^*$ and $\theta^*$ corresponding to $\max(n_i)$ are the optimal solution;
Step 2.5: take the feature points of the first image at $\max(n_i)$ as the initial points and repeat steps 2.1 to 2.5 until the values of $\gamma^*$ and $\theta^*$ stabilize.
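Steps 2.1 to 2.4 amount to a grid search over (γ, θ0) maximizing the correct-match count. A minimal sketch, assuming a caller-supplied `match_count` callback that runs the elliptic-descriptor matching for one parameter setting and returns the number of correct matches (the callback and all names are invented for illustration):

```python
import itertools

def search_ellipse_params(match_count, gammas, thetas):
    """Grid search over (gamma, theta0) maximizing the number of
    correct matches, as in steps 2.1-2.4.  match_count(gamma, theta0)
    is assumed to perform the elliptic sampling, describe the second
    image's features, match against the first image, and return the
    correct-match count for that setting."""
    return max(itertools.product(gammas, thetas),
               key=lambda p: match_count(*p))
```

Step 2.5 would then wrap this search in an outer loop, re-seeding the feature-point sample until the returned (γ*, θ*) stop changing.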
Step 3: match the feature points with the NNDR strategy
The nearest-neighbor distance ratio (NNDR) strategy is used for matching. In a concrete implementation, the Euclidean distances between all feature descriptors are computed; for each feature, if the ratio of the smallest distance to the second-smallest distance among its two closest features is below a threshold $\Delta$, the pair is accepted as a correct match.
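The NNDR test above can be sketched as follows (illustrative only; the 0.8 default ratio is a common choice, not a value fixed by the patent):

```python
import numpy as np

def nndr_match(desc1, desc2, ratio=0.8):
    """Nearest-neighbor distance ratio matching: keep a candidate when
    the distance to its nearest descriptor in desc2 is below `ratio`
    times the distance to the second-nearest.  desc1, desc2 are
    (n, d) arrays; returns (index1, index2) pairs."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]     # nearest and second-nearest
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, int(j1)))
    return matches
```

Ambiguous features, whose two best candidates are nearly equidistant, fail the ratio test and are dropped before the RANSAC refinement of step 4.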
Step 4: reject mismatched points with RANSAC based on the epipolar geometric constraint
After feature matching, some mismatches remain in the initial result. The classical approach uses RANSAC (Random Sample Consensus) to estimate the affine transformation matrix between the images and reject mismatches, but for images with large viewing-angle differences the whole image does not satisfy a single affine transformation. Therefore, in the concrete implementation, the epipolar geometric constraint is adopted: mismatches are rejected by estimating the fundamental matrix between the image pair.
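A self-contained sketch of this refinement (illustrative, not the patent's implementation; in practice a library routine such as OpenCV's fundamental-matrix estimation would typically be used): RANSAC over normalized 8-point fundamental-matrix hypotheses, keeping matches whose point-to-epipolar-line distance is below a pixel threshold:

```python
import numpy as np

def fundamental_8pt(p1, p2):
    """Normalized 8-point estimate of the fundamental matrix from
    corresponding points p1, p2 of shape (n, 2), n >= 8."""
    def normalize(p):
        c = p.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(p - c, axis=1))
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
        return np.column_stack([p, np.ones(len(p))]) @ T.T, T
    x1, T1 = normalize(p1)
    x2, T2 = normalize(p2)
    # x2^T F x1 = 0 stacked as A f = 0, f the 9 entries of F row-major
    A = np.column_stack([x2[:, 0:1] * x1, x2[:, 1:2] * x1, x1])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt   # enforce rank 2
    return T2.T @ F @ T1                       # undo the normalization

def ransac_epipolar(p1, p2, iters=300, thresh=1.0, seed=0):
    """Reject mismatches by the epipolar constraint: RANSAC over
    8-point hypotheses, flagging matches whose distance to the
    epipolar line is below `thresh` pixels."""
    rng = np.random.default_rng(seed)
    h1 = np.column_stack([p1, np.ones(len(p1))])
    h2 = np.column_stack([p2, np.ones(len(p2))])
    best = np.zeros(len(p1), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(p1), 8, replace=False)
        F = fundamental_8pt(p1[idx], p2[idx])
        lines = h1 @ F.T                       # epipolar lines in image 2
        d = np.abs(np.sum(h2 * lines, axis=1)) / (
            np.hypot(lines[:, 0], lines[:, 1]) + 1e-12)
        inliers = d < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return best
```

Unlike an affine or homography model, the fundamental matrix only constrains each match to a line, so it tolerates the depth variation of multi-view low-altitude scenes while still rejecting gross mismatches.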
It should be understood that the parts not elaborated in this specification belong to the prior art.
It should also be understood that the above description of the preferred embodiment is relatively detailed and should not therefore be regarded as limiting the scope of patent protection. Those of ordinary skill in the art, under the teaching of the present invention and without departing from the scope protected by the claims, may make replacements or variations, all of which fall within the protection scope of the present invention; the requested protection scope shall be subject to the appended claims.

Claims (7)

1. A low-altitude multi-view remote-sensing image matching method based on an improved SIFT operator, characterized by comprising the following steps:
Step 1: detect the feature points in the first image and the second image using a DoG operator with an adaptive threshold;
Step 2: describe the feature points using an improved SIFT descriptor;
For the feature points extracted from the first and second images, gray values sampled in a circular region and in an elliptic region of each point's neighborhood, respectively, are used to describe the feature point, finally forming a 128-dimensional feature vector;
Step 3: obtain the initial matching point set using the strategy of the ratio of the nearest-neighbor distance to the second-nearest-neighbor distance between the feature vectors of the feature points;
Step 4: refine the initial matching point set with a RANSAC algorithm based on the epipolar geometric constraint to obtain the final matching points.
2. The low-altitude multi-view remote-sensing image matching method based on an improved SIFT operator according to claim 1, characterized in that the DoG operator with an adaptive threshold described in step 1 determines the threshold $T_{contrast}$ adaptively from the gray information within a window centered on the feature point:

$$T_{contrast} = \frac{1}{4r^2}\sum_{i=m-r}^{m+r}\sum_{j=n-r}^{n+r}\frac{1}{N^2}\sum_{k=i-\frac{N-1}{2}}^{i+\frac{N-1}{2}}\sum_{l=j-\frac{N-1}{2}}^{j+\frac{N-1}{2}}\left(x_{kl}-\bar{x}_N\right);$$

where $(m, n)$ is the coordinate of the feature point in the image, $N$ is the window size, $x_{kl}$ is the gray value of a pixel in the DoG scale space, $\bar{x}_N$ is the average gray value of the pixels in the window, and $r$ is determined by the image scale and can be computed as:

$$r = 4\sigma_0 \cdot 2^{2S_i/M_I};$$

where $S_i$ is the scale of the current feature point, $M_I$ is the number of images in the DoG scale space, and $\sigma_0$ represents the degree of image smoothing.
3. The low-altitude multi-view remote-sensing image matching method based on an improved SIFT operator according to claim 1, characterized in that, for the description of feature points by circular-region sampling in step 2, for any feature point $(x_i, y_i)$ of the first image, the circular-region sample points $(x_{ijk}, y_{ijk})$ in its neighborhood are given by:

$$x_{ijk} = x_i + r_{ij}\sin(\theta_k);$$
$$y_{ijk} = y_i + r_{ij}\cos(\theta_k);$$

where $r_{ij}$ is the sampling radius and $\theta_k$ is the sampling angle.
4. The low-altitude multi-view remote-sensing image matching method based on an improved SIFT operator according to claim 1, characterized in that, for the description of feature points by elliptic-region sampling in step 2, for any feature point $(x'_i, y'_i)$ of the second image, the elliptic-region sample points $(x'_{ijk}, y'_{ijk})$ in its neighborhood are given by:

$$x'_{ijk} = x'_i + a_{ij}\cos(\theta_k)\cos(\theta_0) + b_{ij}\cos(\theta_k)\sin(\theta_0);$$
$$y'_{ijk} = y'_i + a_{ij}\sin(\theta_k)\sin(\theta_0) - b_{ij}\cos(\theta_k)\cos(\theta_0);$$

where $a_{ij}$ is the semi-major axis of the sampling ellipse, $b_{ij}$ is the semi-minor axis, $b_{ij} = a_{ij}/\gamma$ with $\gamma \ge 1$, $\theta_0$ is the ellipse orientation, and $\theta_k$ is the sampling angle.
5. The low-altitude multi-view remote-sensing image matching method based on an improved SIFT operator according to claim 1, characterized in that the gradients of the sample points obtained by region sampling in step 2 are computed as follows:

$$d_\theta(x_{ijk}, y_{ijk}) = I(x_{ijk}, y_{ijk}) - I(x_{ij(k-1)}, y_{ij(k-1)});$$
$$d_\gamma(x_{ijk}, y_{ijk}) = I(x_{ijk}, y_{ijk}) - I(x_{i(j-1)k}, y_{i(j-1)k});$$
$$m(x_{ijk}, y_{ijk}) = \sqrt{d_\theta^2(x_{ijk}, y_{ijk}) + d_\gamma^2(x_{ijk}, y_{ijk})};$$

where $d_\theta(\cdot,\cdot)$ and $d_\gamma(\cdot,\cdot)$ are the gradient values in the $\theta$ and $\gamma$ directions, $m(\cdot,\cdot)$ is the gradient magnitude at the point, and the gray value of a sample point is obtained by bilinear interpolation and denoted $I(x_*, y_*)$, where $*$ is a generic index.
6. The low-altitude multi-view remote-sensing image matching method based on an improved SIFT operator according to claim 4, characterized in that an iterative strategy is used to determine the ellipse parameters $\gamma$ and $\theta_0$, with the following concrete steps:
Step 2.1: randomly select $M$ feature points from the first image and record their feature vectors as $D$;
Step 2.2: when performing elliptic sampling in the second image, set $\gamma = \gamma_s$ and $\theta_0 = 0$, obtain the feature vectors of all feature points, recorded as $D'$, match the first and second images with the nearest-neighbor method, and record the number of correct matches as $n$;
Step 2.3: vary the values of $\gamma$ and $\theta_0$, with $\gamma \in [\gamma_s, \gamma_e]$ and $\theta_0 \in [0°, 90°]$, obtaining the vectors $D'_i$ for each simulated setting and the corresponding numbers of correct matches $n_i$;
Step 2.4: compare the matching results across all settings; the $\gamma^*$ and $\theta^*$ corresponding to $\max(n_i)$ are the optimal solution;
Step 2.5: take the feature points of the first image at $\max(n_i)$ as the initial points and repeat steps 2.1 to 2.5 until the values of $\gamma^*$ and $\theta^*$ stabilize.
7. The low-altitude multi-view remote-sensing image matching method based on an improved SIFT operator according to claim 1, characterized in that, after the neighborhood sampling in step 2 is completed, the gray values of the sample points are obtained by bilinear interpolation.
CN201510688554.XA 2015-10-21 2015-10-21 A low-altitude multi-view remote-sensing image matching method based on an improved SIFT operator Active CN105160686B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510688554.XA CN105160686B (en) 2015-10-21 2015-10-21 A low-altitude multi-view remote-sensing image matching method based on an improved SIFT operator

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510688554.XA CN105160686B (en) 2015-10-21 2015-10-21 A low-altitude multi-view remote-sensing image matching method based on an improved SIFT operator

Publications (2)

Publication Number Publication Date
CN105160686A true CN105160686A (en) 2015-12-16
CN105160686B CN105160686B (en) 2017-08-25

Family

ID=54801528

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510688554.XA Active CN105160686B (en) 2015-10-21 2015-10-21 A low-altitude multi-view remote-sensing image matching method based on an improved SIFT operator

Country Status (1)

Country Link
CN (1) CN105160686B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105469415A (en) * 2015-12-28 2016-04-06 云南师范大学 Multi-view remote sensing image fusion method
CN107909094A (en) * 2017-10-31 2018-04-13 天津大学 A kind of adaptive threshold multiple target SIFT matching algorithm implementation methods
CN108021886A (en) * 2017-12-04 2018-05-11 西南交通大学 A kind of unmanned plane repeats texture image part remarkable characteristic matching process
CN109579847A (en) * 2018-12-13 2019-04-05 歌尔股份有限公司 Extraction method of key frame, device and smart machine in synchronous superposition
CN109784223A (en) * 2018-12-28 2019-05-21 珠海大横琴科技发展有限公司 A kind of multi-temporal remote sensing image matching process and system based on convolutional neural networks
CN110856112A (en) * 2019-11-14 2020-02-28 深圳先进技术研究院 Crowd-sourcing perception multi-source information fusion indoor positioning method and system
CN111739079A (en) * 2020-06-18 2020-10-02 东华理工大学 Multi-source low-altitude stereo pair fast matching method based on semantic features
CN115019181A (en) * 2022-07-28 2022-09-06 北京卫星信息工程研究所 Remote sensing image rotating target detection method, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101140624A (en) * 2007-10-18 2008-03-12 清华大学 Image matching method
CN103400384A (en) * 2013-07-22 2013-11-20 西安电子科技大学 Large viewing angle image matching method capable of combining region matching and point matching

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101140624A (en) * 2007-10-18 2008-03-12 Tsinghua University Image matching method
CN103400384A (en) * 2013-07-22 2013-11-20 Xidian University Large viewing angle image matching method capable of combining region matching and point matching

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MIN CHEN et al.: "Invariant matching method for different viewpoint angle images", 《APPLIED OPTICS》 *
MIN CHEN et al.: "Robust Affine-Invariant Line Matching for High Resolution Remote Sensing Images", 《PHOTOGRAMMETRIC ENGINEERING & REMOTE SENSING》 *
陈月玲: "Large viewing angle image matching method combining region matching and point matching", 《China Master's Theses Full-text Database, Information Science and Technology》 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105469415A (en) * 2015-12-28 2016-04-06 Yunnan Normal University Multi-view remote sensing image fusion method
CN107909094A (en) * 2017-10-31 2018-04-13 Tianjin University Adaptive-threshold multi-target SIFT matching algorithm implementation method
CN108021886B (en) * 2017-12-04 2021-09-14 Southwest Jiaotong University Method for matching local significant feature points of repetitive texture image of unmanned aerial vehicle
CN108021886A (en) * 2017-12-04 2018-05-11 Southwest Jiaotong University Method for matching local significant feature points of repetitive texture images of unmanned aerial vehicles
CN109579847A (en) * 2018-12-13 2019-04-05 Goertek Inc. Method, device and smart device for extracting key frames in simultaneous localization and mapping
CN109579847B (en) * 2018-12-13 2022-08-16 Goertek Inc. Method and device for extracting key frames in simultaneous localization and mapping, and intelligent equipment
US11466988B2 (en) 2018-12-13 2022-10-11 Goertek Inc. Method and device for extracting key frames in simultaneous localization and mapping and smart device
CN109784223B (en) * 2018-12-28 2020-09-01 Zhuhai Dahengqin Technology Development Co., Ltd. Multi-temporal remote sensing image matching method and system based on convolutional neural network
CN109784223A (en) * 2018-12-28 2019-05-21 Zhuhai Dahengqin Technology Development Co., Ltd. Multi-temporal remote sensing image matching method and system based on convolutional neural networks
CN110856112A (en) * 2019-11-14 2020-02-28 Shenzhen Institute of Advanced Technology Crowd-sensing multi-source information fusion indoor positioning method and system
CN110856112B (en) * 2019-11-14 2021-06-18 Shenzhen Institute of Advanced Technology Crowd-sensing multi-source information fusion indoor positioning method and system
CN111739079A (en) * 2020-06-18 2020-10-02 East China University of Technology Multi-source low-altitude stereo pair fast matching method based on semantic features
CN111739079B (en) * 2020-06-18 2022-10-11 East China University of Technology Multi-source low-altitude stereo pair fast matching method based on semantic features
CN115019181A (en) * 2022-07-28 2022-09-06 Beijing Institute of Satellite Information Engineering Remote sensing image rotating target detection method, electronic device and storage medium

Also Published As

Publication number Publication date
CN105160686B (en) 2017-08-25

Similar Documents

Publication Publication Date Title
CN105160686A (en) Improved scale invariant feature transformation (SIFT) operator based low altitude multi-view remote-sensing image matching method
CN103727930B Laser rangefinder and camera relative pose calibration method based on edge matching
Chen et al. Feature detection and description for image matching: from hand-crafted design to deep learning
WO2019007004A1 (en) Image feature extraction method for person re-identification
CN104200461A Remote sensing image registration method based on mutual-information block selection and SIFT (scale-invariant feature transform) features
WO2015010451A1 (en) Method for road detection from one image
CN110866871A (en) Text image correction method and device, computer equipment and storage medium
CN103364410B Crack detection method for the underwater surface of hydraulic concrete structures based on template search
CN110097093A Accurate matching method for heterologous images
CN106845495B Method for closing broken curves in an image
CN104850850A (en) Binocular stereoscopic vision image feature extraction method combining shape and color
CN103400384A (en) Large viewing angle image matching method capable of combining region matching and point matching
CN103700101A (en) Non-rigid brain image registration method
CN103886589A Target-oriented automatic high-precision edge extraction method
CN104021559A (en) Image registration method based on mutual information and Harris corner point detection
CN104616308A Multiscale level-set image segmentation method based on kernel fuzzy clustering
CN108765475A Building three-dimensional point cloud registration method based on deep learning
CN104881029A Mobile robot navigation method based on one-point RANSAC and the FAST algorithm
CN112883850A (en) Multi-view aerospace remote sensing image matching method based on convolutional neural network
CN103955950A (en) Image tracking method utilizing key point feature matching
CN103593675A Vein matching method based on log-polar coordinate transformation
CN110909778B (en) Image semantic feature matching method based on geometric consistency
CN113658129B (en) Position extraction method combining visual saliency and line segment strength
CN109766850B (en) Fingerprint image matching method based on feature fusion
CN110135435B Saliency detection method and device based on broad learning system
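This bibliographic record does not reproduce the patent's improved SIFT operator, but the descriptor-matching step shared by the SIFT-based methods listed above is commonly implemented with Lowe's nearest-neighbor distance-ratio test. The sketch below is illustrative only: the function name `match_descriptors`, the toy 4-dimensional descriptors, and the 0.8 ratio threshold are assumptions, not details taken from the patent.

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.8):
    """Match SIFT-style descriptors with Lowe's ratio test.

    desc1: (N, D) array-like, desc2: (M, D) array-like with M >= 2.
    Returns a list of (i, j) pairs where desc1[i] matches desc2[j].
    """
    desc1 = np.asarray(desc1, dtype=float)
    desc2 = np.asarray(desc2, dtype=float)
    matches = []
    for i, d in enumerate(desc1):
        # Euclidean distance from descriptor d to every descriptor in desc2
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)
        nearest, second = order[0], order[1]
        # Accept only if the best match is clearly better than the runner-up
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, int(nearest)))
    return matches

# Toy example with 4-dimensional "descriptors":
img1_desc = [[1.0, 0.0, 0.0, 0.0],
             [0.0, 1.0, 0.0, 0.0]]
img2_desc = [[0.9, 0.1, 0.0, 0.0],   # close to img1_desc[0]
             [0.0, 0.0, 1.0, 0.0],
             [0.0, 0.0, 0.0, 1.0]]
print(match_descriptors(img1_desc, img2_desc))  # → [(0, 0)]
```

Rejecting matches whose nearest and second-nearest distances are similar is what suppresses ambiguous correspondences, which is why this test matters for the repetitive-texture UAV imagery addressed by CN108021886 above.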

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant