CN103886611B - An image matching method suitable for automatic checking of aerial photography flight quality - Google Patents

An image matching method suitable for automatic checking of aerial photography flight quality

Info

Publication number
CN103886611B
CN103886611B (application CN201410139272.XA / CN201410139272A)
Authority
CN
China
Prior art keywords
image
point
characteristic
matching
digitized video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201410139272.XA
Other languages
Chinese (zh)
Other versions
CN103886611A (en)
Inventor
彭桂辉
梁菲
赵铁梅
左涛
刘敏
郭永春
姚春雨
王慧芳
宋袁龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sian Coal and Aeronautics Information Industry Co Ltd
Original Assignee
Sian Coal and Aeronautics Information Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sian Coal and Aeronautics Information Industry Co Ltd filed Critical Sian Coal and Aeronautics Information Industry Co Ltd
Priority to CN201410139272.XA priority Critical patent/CN103886611B/en
Publication of CN103886611A publication Critical patent/CN103886611A/en
Application granted granted Critical
Publication of CN103886611B publication Critical patent/CN103886611B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses an image matching method suitable for automatic checking of aerial photography flight quality, comprising the following steps. First, image preprocessing: the two digital images to be matched are denoised and filtered separately; the two images are two-dimensional digital images of the surveyed area acquired by aerial photogrammetry. Second, SIFT feature extraction: a data processor extracts SIFT features from each of the two images, obtaining all feature points of both images and a SIFT descriptor for each feature point. Third, kd-trees are built and searched bidirectionally to determine the match points. Fourth, gross-error elimination: mismatches are removed with the RANSAC algorithm. The method has simple steps, a reasonable design, convenient implementation, good robustness and good practical effect, and effectively solves the matching failures and mismatches that existing matching methods tend to produce.

Description

An image matching method suitable for automatic checking of aerial photography flight quality
Technical field
The invention belongs to the technical field of aerial photogrammetry data processing, and in particular relates to an image matching method suitable for automatic checking of aerial photography flight quality.
Background technology
Image matching refers to identifying corresponding points between two or more images by means of an image matching algorithm. Image matching is used in many fields, such as photogrammetric processing, digital image processing, medical image processing and remote sensing image data processing, and different application fields place different requirements on it. For the image matching used in automatic checking of aerial photography flight quality, the correctness of the match points must be high and the accuracy must be within 2 pixels. In practical applications the algorithm is affected by image noise, image rotation and image deformation. In aerial photography in particular, terrain relief, the central-projection imaging geometry and the flight environment cause large rotations and deformations between aerial images. Automatic checking of aerial photography flight quality therefore has to be carried out by image matching, but existing matching methods tend to produce matching failures or mismatches during automatic processing.
Consequently, what is lacking at present is an image matching method for automatic checking of aerial photography flight quality whose steps are simple, whose design is reasonable, which is convenient to implement, robust and effective in use, which can check the flight quality of aerial photography with high correctness, and which effectively solves the matching failures and mismatches that existing matching methods tend to produce during automatic processing.
Summary of the invention
In view of the above-mentioned deficiencies of the prior art, the technical problem to be solved by the present invention is to provide an image matching method suitable for automatic checking of aerial photography flight quality whose steps are simple, whose design is reasonable, which is convenient to implement, robust and effective in use, and which solves the matching failures and mismatches that existing matching methods tend to produce.
In order to solve the above technical problem, the technical solution adopted by the present invention is an image matching method suitable for automatic checking of aerial photography flight quality, characterised in that the method comprises the following steps:
Step 1, image preprocessing: a data processor denoises and filters each of the two digital images to be matched; the two digital images are two-dimensional digital images of the surveyed area acquired by aerial photogrammetry, referred to as the first image and the second image;
Step 2, SIFT feature extraction: the data processor performs SIFT feature extraction on each of the two digital images, obtaining all feature points of both images and a SIFT descriptor for each feature point;
Step 3, kd-tree construction and bidirectional search to determine the match points: the data processor builds kd-trees and searches them in both directions, as follows:
Step 301, first matching pass, comprising the following steps:
Step 3011, kd-tree construction: the second image is taken as the reference image to be matched, and a kd-tree is built from the SIFT descriptors of all feature points of the second image;
Step 3012, feature matching: using the kd-tree built in step 3011, the SIFT descriptors of all feature points of the first image extracted in step 2 are matched with the k-nearest-neighbour method, finding all feature points of the second image that match the first image; the feature points of the second image that match the first image are the match points;
Step 302, second matching pass, comprising the following steps:
Step 3021, kd-tree construction: the first image is taken as the reference image to be matched, and a kd-tree is built from the SIFT descriptors of all feature points of the first image;
Step 3022, feature matching: using the kd-tree built in step 3021, all match points obtained in step 3012 are matched with the k-nearest-neighbour method, finding all feature points of the first image that match the second image; the feature points of the first image that match the second image are the reliable match points;
Step 4, gross-error elimination: the data processor applies the RANSAC algorithm to all reliable match points obtained in step 3022 to remove gross errors.
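For illustration only, the following sketch shows how the four steps could be strung together with the OpenCV library; the file names, the use of CLAHE in place of the Wallis filter, the FLANN kd-tree parameters and the 2-pixel RANSAC tolerance are assumptions made for the example and are not prescribed by the invention.

    # Minimal end-to-end sketch of steps 1-4, using OpenCV as a stand-in implementation.
    import cv2
    import numpy as np

    def preprocess(path):
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)           # step 1: load image
        img = cv2.GaussianBlur(img, (3, 3), 0)                  # Gaussian denoising
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        return clahe.apply(img)                                 # local contrast boost (Wallis stand-in)

    img1 = preprocess("image1.tif")                             # assumed file names
    img2 = preprocess("image2.tif")

    sift = cv2.SIFT_create()                                    # step 2: SIFT features
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # step 3: kd-tree matching in both directions, keeping mutually consistent pairs
    flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 4}, {"checks": 64})
    fwd = flann.match(des1, des2)                               # image 1 -> image 2
    bwd = flann.match(des2, des1)                               # image 2 -> image 1
    back = {m.queryIdx: m.trainIdx for m in bwd}
    mutual = [m for m in fwd if back.get(m.trainIdx) == m.queryIdx]

    # step 4: gross-error elimination with RANSAC (homography model assumed)
    src = np.float32([kp1[m.queryIdx].pt for m in mutual]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in mutual]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 2.0)     # 2-pixel tolerance
    good = [m for m, ok in zip(mutual, mask.ravel()) if ok]
    print(len(good), "reliable matches")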
In the above image matching method suitable for automatic checking of aerial photography flight quality, when the digital images are denoised and filtered in step 1, a Gaussian smoothing filter is applied to the digital image first, and Wallis filtering is then applied to the Gaussian-smoothed image.
In the above method, the SIFT feature extraction of a digital image in step 2 proceeds as follows:
Step 201, Gaussian kernel convolution: according to the formula L(x, y, σm) = G(x, y, σm) × I(x, y), the digital image to be processed is convolved with M Gaussian kernels, giving the multi-scale space of the image, which consists of M Gaussian pyramid levels L(x, y, σm); here G(x, y, σm) = (1/(2πσm²))·exp(−(x² + y²)/(2σm²)) is the Gaussian kernel, (x, y) are the pixel coordinates of the two-dimensional image, I(x, y) is the digital image to be processed, and σm is the variance of the Gaussian normal distribution; m is the index of the Gaussian convolution, m is a positive integer with m = 1, 2, …, M; M is a positive integer and M > 3;
Step 202, difference of Gaussians and extremum detection: first, according to the difference-of-Gaussian function
D(x, y, σi−1) = [G(x, y, kσi−1) − G(x, y, σi−1)] × I(x, y) = L(x, y, σi) − L(x, y, σi−1),
each pair of adjacent Gaussian pyramid levels L(x, y, σm) is differenced, giving M−1 difference-of-Gaussian (DoG) pyramid levels, and the extrema of every DoG level are detected; here i is a positive integer with i = 2, 3, …, M, and σi = k·σi−1, with k the constant scale ratio between adjacent levels;
Step 203, feature point screening: the extrema detected in each of the M−1 DoG pyramid levels in step 202 are screened separately; the screening method is the same for every level; for any extremum of the i-th DoG level D(x, y, σi−1) the screening proceeds as follows:
Step 2031, the DoG level D(x, y, σi−1) of step 202 is expanded in a Taylor series about the extremum currently being screened, and the first two terms of the expansion are kept, giving D(Xmax) = D0 + ½·(∂D/∂X)ᵀ·Xmax, where X = (x, y, σi−1), T denotes matrix transposition and D0 is the first term of the Taylor expansion;
Step 2032, when tr(Hhess)²/Det(Hhess) < (γ + 1)²/γ, the extremum currently being screened is retained, and the retained extrema are taken as feature points; here tr(Hhess) is the trace of the Hessian matrix Hhess, Det(Hhess) is its determinant, and γ = 10;
the matrix Hhess = [[Dxx, Dxy], [Dxy, Dyy]], tr(Hhess) = Dxx + Dyy, Det(Hhess) = Dxx·Dyy − Dxy²; Dxx and Dyy are the second-order partial derivatives of the Taylor expansion of step 2031 in the x and y directions, and Dxy is its mixed partial derivative in x and y;
Step 2033, following the method of steps 2031 to 2032, all extrema of the M−1 DoG pyramid levels are screened, giving all feature points of the digital image to be processed;
Step 204, determination of the principal direction of each feature point: the principal direction of every feature point screened out in step 203 is determined from local image features;
Step 205, generation of SIFT descriptors: a SIFT descriptor is generated for every feature point screened out in step 203; each SIFT descriptor is a 128-dimensional feature vector.
In the above image matching method suitable for automatic checking of aerial photography flight quality, the two digital images of step 1 have the same resolution.
In the above method, when the SIFT descriptors of all feature points of the first image extracted in step 2 are matched with the k-nearest-neighbour method in step 3012, every feature point of the first image is searched by approximate nearest neighbour in the kd-tree built in step 3011, finding all feature points of the second image that match the first image;
when all match points obtained in step 3012 are matched with the k-nearest-neighbour method in step 3022, every match point obtained in step 3012 is searched by approximate nearest neighbour in the kd-tree built in step 3021, finding all feature points of the first image that match the second image.
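The two-pass kd-tree search can also be sketched more literally; the snippet below assumes scipy's cKDTree as a stand-in kd-tree and (N, 128) SIFT descriptor arrays des1 and des2 from step 2.

    # Bidirectional search: pass 1 queries a tree on image 2, pass 2 re-checks the
    # pass-1 match points against a tree on image 1, keeping only mutual pairs.
    import numpy as np
    from scipy.spatial import cKDTree

    def mutual_matches(des1, des2):
        tree2 = cKDTree(des2)                    # first pass: kd-tree on image 2 (step 3011)
        _, fwd = tree2.query(des1, k=1)          # image-1 features -> image-2 match points
        tree1 = cKDTree(des1)                    # second pass: kd-tree on image 1 (step 3021)
        cand = np.unique(fwd)                    # only the pass-1 match points are re-checked
        _, bwd = tree1.query(des2[cand], k=1)
        back = dict(zip(cand, bwd))
        # keep (i, j) only if i -> j and j -> i agree (reliable match points)
        return [(i, j) for i, j in enumerate(fwd) if back[j] == i]

    # pairs = mutual_matches(des1, des2)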
In the above method, before the principal directions are determined in step 204, the position of every feature point screened out in step 203 is refined by fitting a three-dimensional quadratic function.
In the above method, the Gaussian smoothing filter of step 1 uses an N × N window, where N is an odd number and N ≥ 3.
In the above method, when the SIFT descriptor of any feature point screened out in step 203 is generated in step 205, a window of 16 × 16 pixels centred on the feature point is taken, the window is divided into 16 image blocks of 4 × 4 pixels, the accumulated gradient magnitude in each of 8 gradient directions is computed for every image block, a gradient orientation histogram is drawn over the 8 directions, and finally the 4 × 4 × 8 = 128-dimensional SIFT feature vector is generated; the SIFT feature vector is formed from the gradient magnitude m(x, y) and the gradient direction θ(x, y); each image block is one seed point, so the SIFT descriptor consists of 16 seed points.
In the above method, when the principal direction of any feature point is determined in step 204, a gradient orientation histogram is drawn with the gradient magnitude m(x, y) of the feature point as ordinate and its gradient direction θ(x, y) as abscissa, and the gradient direction corresponding to the highest gradient magnitude in the histogram is taken as the principal direction of the feature point; here the gradient magnitude is m(x, y) = √[(L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²] and the gradient direction is θ(x, y) = arctan[(L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y))].
In the above method, when the principal directions of the feature points are determined from local image features in step 204, the principal direction of a feature point is the gradient direction corresponding to the highest gradient magnitude among the 8 pixels surrounding the feature point.
Compared with the prior art, the present invention has the following advantages:
1. The method steps are simple, the design is reasonable and the implementation is convenient.
2. It is easy to use and easy to master. The procedure is as follows: first, the two digital images to be matched are read in and then denoised and filtered; next, feature points with descriptor vectors are extracted from both images with the SIFT algorithm; then a kd-tree is built from the feature points of each image, a bidirectional search is carried out, and reliable match points are obtained; finally, the random sample consensus (RANSAC) algorithm is applied to the match points to reject mismatches, ensuring that the final match points are correct and reliable.
3. The matching does not depend on GPS/IMU data. Existing methods for checking aerial photography flight quality all require accurate GPS/IMU data to assist image matching, and obtaining accurate GPS/IMU data requires precise ephemerides, which usually means waiting two to three days, so real-time operation is poor. The matching procedure of the present invention needs no GPS/IMU data at all and still obtains match points that satisfy the requirements of flight-quality inspection.
4. The matching is robust. Existing aerial image matching methods only filter the images without denoising them, so matching often fails where the terrain relief is large. The matching procedure of the present invention denoises first and then filters, which enhances image detail and allows correct match points to be extracted over all kinds of terrain.
5. The match points are highly reliable. Existing aerial image matching methods remove gross errors after a single one-way matching pass; when the images are rotated and deformed such methods produce many wrong match points, and the gross errors are hard to remove. The present invention performs a bidirectional search with kd-trees; the matching algorithm is robust, greatly improves the matching result when the images are rotated and deformed, and improves matching correctness while maintaining speed.
6. The method works well in practice and has high practical value. It mainly addresses the problem that, in aerial photography, terrain relief, the central-projection imaging geometry and the flight environment cause large rotations and deformations between aerial stereo images, which harms the correctness of the image matching used in automatic flight-quality checking. The invention greatly improves the robustness of that image matching, is suitable for different terrain, is stable and correct, and does not depend on GPS/IMU data. The matching procedure can be applied effectively to automatic checking of aerial photography flight quality and solves the matching failures and mismatches caused by image rotation and deformation in areas with large height differences: the images to be matched are denoised and filtered, SIFT features are extracted, and a bidirectional kd-tree search is then carried out, so that reliable match points are obtained even when the images are strongly rotated and deformed, and matching correctness is improved without sacrificing matching speed.
In conclusion, the method of the present invention has simple steps, a reasonable design, convenient implementation, good robustness and good practical effect, and effectively solves the matching failures and mismatches that existing matching methods tend to produce.
The technical solution of the present invention is described in further detail below with reference to the drawings and an embodiment.
Brief description of the drawings
Fig. 1 is a flow chart of the method of the present invention.
Embodiment
An image matching method suitable for automatic checking of aerial photography flight quality, as shown in Fig. 1, comprises the following steps:
Step 1, image preprocessing: a data processor denoises and filters each of the two digital images to be matched; the two digital images are two-dimensional digital images of the surveyed area acquired by aerial photogrammetry, referred to as the first image and the second image.
In this embodiment, the two digital images have the same resolution, but their sizes may differ.
In this embodiment, when the digital images are denoised and filtered in step 1, a Gaussian smoothing filter is applied to the digital image first, and Wallis filtering is then applied to the Gaussian-smoothed image.
When the Gaussian smoothing filter is applied in step 1, an N × N window is used, where N is an odd number and N ≥ 3; in practice the value of N can be adjusted to the image size. In this embodiment a 3 × 3 window is used for the Gaussian smoothing filter.
During denoising, the filter window size can be adjusted as needed to remove the noise contained in the digital image.
When filtering after denoising, the Wallis filter effectively enhances the original image while also suppressing noise; in particular it greatly enhances the image texture at different scales, so that when point features are extracted (the SIFT feature extraction of step 2) both the number and the accuracy of the point features are improved, which in turn improves the reliability and accuracy of the image matching result.
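As an illustration of the step-1 preprocessing, the sketch below applies a 3 × 3 Gaussian smoothing followed by one common photogrammetric formulation of the Wallis filter; the window size, the target mean and standard deviation, and the constants c and b are illustrative assumptions, since the invention does not fix these values.

    import cv2
    import numpy as np

    def wallis(img, win=31, target_mean=127.0, target_std=50.0, c=0.8, b=0.9):
        f = img.astype(np.float32)
        local_mean = cv2.boxFilter(f, -1, (win, win))            # local mean
        local_sq   = cv2.boxFilter(f * f, -1, (win, win))
        local_std  = np.sqrt(np.maximum(local_sq - local_mean ** 2, 1e-6))
        gain = (c * target_std) / (c * local_std + (1.0 - c) * target_std)
        out  = (f - local_mean) * gain + b * target_mean + (1.0 - b) * local_mean
        return np.clip(out, 0, 255).astype(np.uint8)

    img = cv2.imread("aerial.tif", cv2.IMREAD_GRAYSCALE)          # assumed file name
    img = cv2.GaussianBlur(img, (3, 3), 0)                        # 3x3 Gaussian smoothing
    img = wallis(img)                                             # Wallis contrast enhancement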
Step 2, SIFT feature extraction: the data processor performs SIFT feature extraction on each of the two digital images, obtaining all feature points of both images and a SIFT descriptor for each feature point.
Step 3, kd-tree construction and bidirectional search to determine the match points: the data processor builds kd-trees and searches them in both directions, as follows:
Step 301, first matching pass, comprising the following steps:
Step 3011, kd-tree construction: the second image is taken as the reference image to be matched, and a kd-tree is built from the SIFT descriptors of all feature points of the second image;
Step 3012, feature matching: using the kd-tree built in step 3011, the SIFT descriptors of all feature points of the first image extracted in step 2 are matched with the k-nearest-neighbour method, finding all feature points of the second image that match the first image; the feature points of the second image that match the first image are the match points;
Step 302, second matching pass, comprising the following steps:
Step 3021, kd-tree construction: the first image is taken as the reference image to be matched, and a kd-tree is built from the SIFT descriptors of all feature points of the first image;
Step 3022, feature matching: using the kd-tree built in step 3021, all match points obtained in step 3012 are matched with the k-nearest-neighbour method, finding all feature points of the first image that match the second image; the feature points of the first image that match the second image are the reliable match points.
In this embodiment, when the SIFT descriptors of all feature points of the first image extracted in step 2 are matched with the k-nearest-neighbour method in step 3012, every feature point of the first image is searched by approximate nearest neighbour in the kd-tree built in step 3011, finding all feature points of the second image that match the first image;
when all match points obtained in step 3012 are matched with the k-nearest-neighbour method in step 3022, every match point obtained in step 3012 is searched by approximate nearest neighbour in the kd-tree built in step 3021, finding all feature points of the first image that match the second image.
Step 4, gross-error elimination: the data processor applies the RANSAC algorithm to all reliable match points obtained in step 3022 to remove gross errors.
In this embodiment, after gross errors are removed with the random sample consensus (RANSAC) algorithm, match points of high reliability and high matching accuracy are obtained, which effectively guarantees the correctness of the final matching result.
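The gross-error elimination of step 4 can be sketched as follows; a homography is assumed here as the geometric model for RANSAC (the invention does not specify the model), and src_pts/dst_pts are assumed to be (N, 1, 2) float32 arrays built from the reliable match points.

    import cv2
    import numpy as np

    # RANSAC rejection of gross errors, followed by a residual check against the
    # 2-pixel accuracy requirement mentioned in the background section.
    H, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, ransacReprojThreshold=2.0)
    inliers = mask.ravel().astype(bool)

    proj = cv2.perspectiveTransform(src_pts, H)                  # re-project image-1 points
    residuals = np.linalg.norm(proj - dst_pts, axis=2).ravel()   # per-point error in pixels
    print("inliers:", inliers.sum(), "max residual:", residuals[inliers].max())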
In this embodiment, the feature extraction of step 2 uses the SIFT algorithm, also known as the scale-invariant feature transform.
In this embodiment, the SIFT feature extraction of a digital image in step 2 proceeds as follows:
Step 201, Gaussian kernel convolution: according to the formula L(x, y, σm) = G(x, y, σm) × I(x, y), the digital image to be processed is convolved with M Gaussian kernels, giving the multi-scale space of the image, which consists of M Gaussian pyramid levels L(x, y, σm); here G(x, y, σm) = (1/(2πσm²))·exp(−(x² + y²)/(2σm²)) is the Gaussian kernel, (x, y) are the pixel coordinates of the two-dimensional image, I(x, y) is the digital image to be processed, and σm is the variance of the Gaussian normal distribution; m is the index of the Gaussian convolution, m is a positive integer with m = 1, 2, …, M; M is a positive integer and M > 3.
σm is also called the scale-space factor.
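A minimal sketch of the step-201 convolution, assuming illustrative values for M, the base scale σ0 and the ratio k:

    import cv2
    import numpy as np

    def gaussian_scale_space(img, M=5, sigma0=1.6, k=2 ** 0.5):
        f = img.astype(np.float32)
        sigmas = [sigma0 * k ** m for m in range(M)]
        # ksize=(0, 0) lets OpenCV derive the kernel size from sigma
        levels = [cv2.GaussianBlur(f, (0, 0), s) for s in sigmas]
        return levels, sigmas

    # levels, sigmas = gaussian_scale_space(img)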
Step 202, difference of Gaussians and extremum detection: first, according to the difference-of-Gaussian function
D(x, y, σi−1) = [G(x, y, kσi−1) − G(x, y, σi−1)] × I(x, y) = L(x, y, σi) − L(x, y, σi−1),
each pair of adjacent Gaussian pyramid levels L(x, y, σm) is differenced, giving M−1 difference-of-Gaussian (DoG) pyramid levels, and the extrema of every DoG level are detected; here i is a positive integer with i = 2, 3, …, M, and σi = k·σi−1.
In this embodiment, after the Gaussian convolutions of step 201 and the differencing of step 202, the difference-of-Gaussian scale space (DoG scale space) is obtained by convolving the original digital image with difference-of-Gaussian functions of different scale factors; it consists of the M−1 DoG pyramid levels.
In this embodiment, extremum detection in step 202 is carried out by searching the scale space for extrema at the level of individual pixels.
In this embodiment, extremum detection uses the difference of Gaussians; the detected extrema are also called key points.
In practice, an extremum is found by comparing the pixel under examination with 26 neighbouring pixels: the 8 adjacent pixels in the same scale level and the corresponding 2 × 9 pixels in the two adjacent scale levels.
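A sketch of the step-202 differencing and 26-neighbour extremum test, using the Gaussian levels from the previous sketch; it is written for clarity rather than speed:

    import numpy as np

    def dog_extrema(levels):
        # difference-of-Gaussian layers from adjacent Gaussian pyramid levels
        dog = [levels[i] - levels[i - 1] for i in range(1, len(levels))]
        extrema = []
        for s in range(1, len(dog) - 1):                        # need a layer above and below
            for y in range(1, dog[s].shape[0] - 1):
                for x in range(1, dog[s].shape[1] - 1):
                    v = dog[s][y, x]
                    # 3x3x3 cube: 8 same-layer neighbours plus 9 in each adjacent layer
                    cube = np.stack([d[y - 1:y + 2, x - 1:x + 2] for d in dog[s - 1:s + 2]])
                    if v == cube.max() or v == cube.min():      # extremum among the 26 neighbours
                        extrema.append((x, y, s))
        return dog, extrema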
Step 203, feature point screening: the extrema detected in each of the M−1 DoG pyramid levels in step 202 are screened separately; the screening method is the same for every level. For any extremum of the i-th DoG level D(x, y, σi−1) the screening proceeds as follows:
Step 2031, the DoG level D(x, y, σi−1) of step 202 is expanded in a Taylor series about the extremum currently being screened, and the first two terms of the expansion are kept, giving D(Xmax) = D0 + ½·(∂D/∂X)ᵀ·Xmax, where X = (x, y, σi−1), T denotes matrix transposition and D0 is the first term of the Taylor expansion.
Step 2032, when tr(Hhess)²/Det(Hhess) < (γ + 1)²/γ, the extremum currently being screened is retained, and the retained extrema are taken as feature points; here tr(Hhess) is the trace of the Hessian matrix Hhess, Det(Hhess) is its determinant, and γ = 10.
The matrix Hhess = [[Dxx, Dxy], [Dxy, Dyy]], tr(Hhess) = Dxx + Dyy, Det(Hhess) = Dxx·Dyy − Dxy²; Dxx and Dyy are the second-order partial derivatives of the Taylor expansion of step 2031 in the x and y directions, and Dxy is its mixed partial derivative in x and y.
Step 2033, following the method of steps 2031 to 2032, all extrema of the M−1 DoG pyramid levels are screened, giving all feature points of the digital image to be processed; the edge-response test of step 2032 is illustrated in the sketch after this block.
In this embodiment, extremum detection uses the difference of Gaussians, so the computation speed is greatly increased.
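The step-2032 edge-response test can be sketched as follows, approximating Dxx, Dyy and Dxy by finite differences of the DoG level at the candidate point:

    def passes_edge_test(dog_layer, x, y, gamma=10.0):
        d = dog_layer
        dxx = d[y, x + 1] + d[y, x - 1] - 2.0 * d[y, x]
        dyy = d[y + 1, x] + d[y - 1, x] - 2.0 * d[y, x]
        dxy = (d[y + 1, x + 1] - d[y + 1, x - 1] - d[y - 1, x + 1] + d[y - 1, x - 1]) / 4.0
        tr = dxx + dyy
        det = dxx * dyy - dxy ** 2
        if det <= 0:                      # opposite curvature signs: reject outright
            return False
        # keep the point only if tr(H)^2 / det(H) < (gamma + 1)^2 / gamma
        return tr * tr / det < (gamma + 1.0) ** 2 / gamma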
Step 204, determination of the principal direction of each feature point: the principal direction of every feature point screened out in step 203 is determined from local image features. Once the principal direction of each feature point has been determined, rotation invariance of the feature points is effectively guaranteed.
In this embodiment, when the principal directions are determined from local image features, the principal direction of a feature point is the gradient direction corresponding to the highest gradient magnitude among the 8 pixels surrounding the feature point.
In this embodiment, when the principal direction of any feature point is determined, a gradient orientation histogram is drawn with the gradient magnitude m(x, y) of the feature point as ordinate and its gradient direction θ(x, y) as abscissa, and the gradient direction corresponding to the highest gradient magnitude in the histogram is taken as the principal direction of the feature point; here the gradient magnitude is m(x, y) = √[(L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²] and the gradient direction is θ(x, y) = arctan[(L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y))].
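A sketch of the principal-direction computation described above; an 8-bin histogram over the 8 surrounding pixels mirrors this embodiment, whereas classic SIFT uses 36 bins over a larger Gaussian-weighted window:

    import numpy as np

    def principal_direction(L, x, y, bins=8):
        hist = np.zeros(bins)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dx == 0 and dy == 0:
                    continue
                px, py = x + dx, y + dy
                gx = L[py, px + 1] - L[py, px - 1]
                gy = L[py + 1, px] - L[py - 1, px]
                m = np.hypot(gx, gy)                        # gradient magnitude m(x, y)
                theta = np.arctan2(gy, gx) % (2 * np.pi)    # gradient direction in [0, 2*pi)
                hist[int(theta / (2 * np.pi) * bins) % bins] += m
        return hist.argmax() * 2 * np.pi / bins             # peak bin -> principal direction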
In this embodiment, before the principal directions are determined in step 204, the position of every feature point screened out in step 203 is refined by fitting a three-dimensional quadratic function.
Step 205, generation of SIFT descriptors: a SIFT descriptor is generated for every feature point screened out in step 203; each SIFT descriptor is a 128-dimensional feature vector.
In this embodiment, when the SIFT descriptor of any feature point screened out in step 203 is generated, a window of 16 × 16 pixels centred on the feature point is taken, the window is divided into 16 image blocks of 4 × 4 pixels, the accumulated gradient magnitude in each of 8 gradient directions is computed for every image block, a gradient orientation histogram is drawn over the 8 directions, and finally the 4 × 4 × 8 = 128-dimensional SIFT feature vector is generated; the SIFT feature vector is formed from the gradient magnitude m(x, y) and the gradient direction θ(x, y); each image block is one seed point, so the SIFT descriptor consists of 16 seed points.
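A sketch of the step-205 descriptor assembly (4 × 4 blocks × 8 directions = 128 dimensions); rotation to the principal direction and Gaussian weighting, which standard SIFT applies, are omitted for brevity:

    import numpy as np

    def sift_descriptor_sketch(L, x, y):
        desc = []
        for by in range(-8, 8, 4):                 # 4 x 4 grid of seed points
            for bx in range(-8, 8, 4):
                hist = np.zeros(8)                 # 8-direction histogram per 4x4 block
                for dy in range(4):
                    for dx in range(4):
                        px, py = x + bx + dx, y + by + dy
                        gx = L[py, px + 1] - L[py, px - 1]
                        gy = L[py + 1, px] - L[py - 1, px]
                        theta = np.arctan2(gy, gx) % (2 * np.pi)
                        hist[int(theta / (2 * np.pi) * 8) % 8] += np.hypot(gx, gy)
                desc.append(hist)
        v = np.concatenate(desc)                   # 128-dimensional vector
        return v / max(np.linalg.norm(v), 1e-12)   # normalised, as in standard SIFT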
In recent years, as image processing techniques have advanced, image matching algorithms have also received great attention. The SIFT operator was proposed by D. G. Lowe in 1999, initially for object recognition; in 2004 Lowe summarised the operator comprehensively and formally proposed SIFT (Scale Invariant Feature Transform), a local image feature descriptor based on scale space that remains invariant to image scaling, rotation and even affine transformation. The SIFT operator extracts local features: it finds extrema in scale space, extracts position, scale and rotation invariants, and handles matching between two images under translation, rotation and affine transformation, so it has strong matching ability. In the traditional SIFT algorithm, however, once the features with descriptors (the SIFT descriptors) have been found, matching is carried out by exhaustively searching all points, so matching is slow and real-time matching of large images is impossible. The kd-tree is a main-memory data structure that generalises the binary search tree to multidimensional data; it is very versatile and is now used in many fields, such as data organisation and indexing and image processing, but an ordinary search is still limited by the processing algorithm and its speed is not always satisfactory. In the technical solution disclosed in this invention, the SIFT algorithm is combined with kd-trees: during matching, kd-trees are built from the feature points screened out of the two digital images to be matched, a bidirectional search is carried out, and the two images are matched with the k-nearest-neighbour method. This not only yields reliable match points but also accelerates the search in the multidimensional vector space, so the matching is fast, convenient to implement, and capable of real-time matching of large images.
The above is only a preferred embodiment of the present invention and does not limit the invention in any way. Any simple modification, variation or equivalent structural change made to the above embodiment in accordance with the technical essence of the invention still falls within the scope of protection of the technical solution of the invention.

Claims (6)

  1. An image matching method suitable for automatic checking of aerial photography flight quality, characterised in that the method comprises the following steps:
    Step 1, image preprocessing: a data processor denoises and filters each of the two digital images to be matched; the two digital images are two-dimensional digital images of the surveyed area acquired by aerial photogrammetry, referred to as the first image and the second image;
    the two digital images of step 1 have the same resolution;
    when the digital images are denoised and filtered in step 1, denoising is carried out first and filtering afterwards; during filtering, a Gaussian smoothing filter is applied to the digital image first, and Wallis filtering is then applied to the Gaussian-smoothed image;
    Step 2, SIFT feature extraction: the data processor performs SIFT feature extraction on each of the two digital images, obtaining all feature points of both images and a SIFT descriptor for each feature point;
    Step 3, kd-tree construction and bidirectional search to determine the match points: the data processor builds kd-trees and searches them in both directions, as follows:
    Step 301, first matching pass, comprising the following steps:
    Step 3011, kd-tree construction: the second image is taken as the reference image to be matched, and a kd-tree is built from the SIFT descriptors of all feature points of the second image;
    Step 3012, feature matching: using the kd-tree built in step 3011, the SIFT descriptors of all feature points of the first image extracted in step 2 are matched with the k-nearest-neighbour method, finding all feature points of the second image that match the first image; the feature points of the second image that match the first image are the match points;
    Step 302, second matching pass, comprising the following steps:
    Step 3021, kd-tree construction: the first image is taken as the reference image to be matched, and a kd-tree is built from the SIFT descriptors of all feature points of the first image;
    Step 3022, feature matching: using the kd-tree built in step 3021, all match points obtained in step 3012 are matched with the k-nearest-neighbour method, finding all feature points of the first image that match the second image; the feature points of the first image that match the second image are the reliable match points;
    when the SIFT descriptors of all feature points of the first image extracted in step 2 are matched with the k-nearest-neighbour method in step 3012, every feature point of the first image is searched by approximate nearest neighbour in the kd-tree built in step 3011, finding all feature points of the second image that match the first image;
    when all match points obtained in step 3012 are matched with the k-nearest-neighbour method in step 3022, every match point obtained in step 3012 is searched by approximate nearest neighbour in the kd-tree built in step 3021, finding all feature points of the first image that match the second image;
    Step 4, gross-error elimination: the data processor applies the RANSAC algorithm to all reliable match points obtained in step 3022 to remove gross errors.
  2. The image matching method suitable for automatic checking of aerial photography flight quality according to claim 1, characterised in that the SIFT feature extraction of a digital image in step 2 proceeds as follows:
    Step 201, Gaussian kernel convolution: according to the formula L(x, y, σm) = G(x, y, σm) × I(x, y), the digital image to be processed is convolved with M Gaussian kernels, giving the multi-scale space of the image, which consists of M Gaussian pyramid levels L(x, y, σm); here G(x, y, σm) = (1/(2πσm²))·exp(−(x² + y²)/(2σm²)) is the Gaussian kernel, (x, y) are the pixel coordinates of the two-dimensional image, I(x, y) is the digital image to be processed, and σm is the variance of the Gaussian normal distribution; m is the index of the Gaussian convolution, m is a positive integer with m = 1, 2, …, M; M is a positive integer and M > 3;
    Step 202, difference of Gaussians and extremum detection: first, according to the difference-of-Gaussian function
    D(x, y, σi−1) = [G(x, y, kσi−1) − G(x, y, σi−1)] × I(x, y) = L(x, y, σi) − L(x, y, σi−1),
    each pair of adjacent Gaussian pyramid levels L(x, y, σm) is differenced, giving M−1 difference-of-Gaussian (DoG) pyramid levels, and the extrema of every DoG level are detected; here i is a positive integer with i = 2, 3, …, M, and σi = k·σi−1;
    Step 203, feature point screening: the extrema detected in each of the M−1 DoG pyramid levels in step 202 are screened separately; the screening method is the same for every level; for any extremum of the i-th DoG level D(x, y, σi−1) the screening proceeds as follows:
    Step 2031, the DoG level D(x, y, σi−1) of step 202 is expanded in a Taylor series about the extremum currently being screened, and the first two terms of the expansion are kept, giving D(Xmax) = D0 + ½·(∂D/∂X)ᵀ·Xmax, where X = (x, y, σi−1), T denotes matrix transposition and D0 is the first term of the Taylor expansion;
    Step 2032, when tr(Hhess)²/Det(Hhess) < (γ + 1)²/γ, the extremum currently being screened is retained, and the retained extrema are taken as feature points; here tr(Hhess) is the trace of the Hessian matrix Hhess, Det(Hhess) is its determinant, and γ = 10;
    the matrix Hhess = [[Dxx, Dxy], [Dxy, Dyy]], tr(Hhess) = Dxx + Dyy, Det(Hhess) = Dxx·Dyy − Dxy²; Dxx and Dyy are the second-order partial derivatives of the Taylor expansion of step 2031 in the x and y directions, and Dxy is its mixed partial derivative in x and y;
    Step 2033, following the method of steps 2031 to 2032, all extrema of the M−1 DoG pyramid levels are screened, giving all feature points of the digital image to be processed;
    Step 204, determination of the principal direction of each feature point: the principal direction of every feature point screened out in step 203 is determined from local image features;
    Step 205, generation of SIFT descriptors: a SIFT descriptor is generated for every feature point screened out in step 203; each SIFT descriptor is a 128-dimensional feature vector.
  3. The image matching method suitable for automatic checking of aerial photography flight quality according to claim 2, characterised in that before the principal directions are determined in step 204, the position of every feature point screened out in step 203 is refined by fitting a three-dimensional quadratic function.
  4. The image matching method suitable for automatic checking of aerial photography flight quality according to claim 1, characterised in that the Gaussian smoothing filter of step 1 uses an N × N window, where N is an odd number and N ≥ 3.
  5. The image matching method suitable for automatic checking of aerial photography flight quality according to claim 2, characterised in that when the SIFT descriptor of any feature point screened out in step 203 is generated in step 205, a window of 16 × 16 pixels centred on the feature point is taken, the window is divided into 16 image blocks of 4 × 4 pixels, the accumulated gradient magnitude in each of 8 gradient directions is computed for every image block, a gradient orientation histogram is drawn over the 8 directions, and finally the 4 × 4 × 8 = 128-dimensional SIFT feature vector is generated; the SIFT feature vector is formed from the gradient magnitude m(x, y) and the gradient direction θ(x, y); each image block is one seed point, so the SIFT descriptor consists of 16 seed points.
  6. The image matching method suitable for automatic checking of aerial photography flight quality according to claim 2, characterised in that when the principal directions are determined from local image features in step 204, the principal direction of a feature point is the gradient direction corresponding to the highest gradient magnitude among the 8 pixels surrounding the feature point.
CN201410139272.XA 2014-04-08 2014-04-08 An image matching method suitable for automatic checking of aerial photography flight quality Expired - Fee Related CN103886611B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410139272.XA CN103886611B (en) 2014-04-08 2014-04-08 An image matching method suitable for automatic checking of aerial photography flight quality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410139272.XA CN103886611B (en) 2014-04-08 2014-04-08 An image matching method suitable for automatic checking of aerial photography flight quality

Publications (2)

Publication Number Publication Date
CN103886611A CN103886611A (en) 2014-06-25
CN103886611B true CN103886611B (en) 2018-04-27

Family

ID=50955484

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410139272.XA Expired - Fee Related CN103886611B (en) 2014-04-08 2014-04-08 An image matching method suitable for automatic checking of aerial photography flight quality

Country Status (1)

Country Link
CN (1) CN103886611B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104180794B (en) * 2014-09-02 2016-03-30 西安煤航信息产业有限公司 The disposal route in digital orthoimage garland region
CN106023109B (en) * 2016-05-19 2019-06-07 西华大学 A kind of sparse denoising method based on the similar sample study in region
US10580135B2 (en) 2016-07-14 2020-03-03 Shanghai United Imaging Healthcare Co., Ltd. System and method for splicing images
CN106504229B (en) * 2016-09-30 2019-11-12 上海联影医疗科技有限公司 The detection method of characteristic point in image
CN106851164A (en) * 2017-03-28 2017-06-13 戴金辰 Record image, video generation reservation method
CN107917699B (en) * 2017-11-13 2020-01-17 中国科学院遥感与数字地球研究所 Method for improving aerial three quality of mountain landform oblique photogrammetry
CN108470354B (en) * 2018-03-23 2021-04-27 云南大学 Video target tracking method and device and implementation device
CN109166143A (en) * 2018-07-06 2019-01-08 航天星图科技(北京)有限公司 A kind of big regional network stereo mapping satellite image matching process
CN109509262B (en) * 2018-11-13 2023-02-28 上海盎维信息技术有限公司 Intelligent enhanced modeling method and device based on artificial intelligence
CN109740591A (en) * 2018-11-14 2019-05-10 国网浙江宁波市鄞州区供电有限公司 A kind of Meter recognition algorithm
CN109598750B (en) * 2018-12-07 2023-05-23 中国地质大学(武汉) Large-scale difference image feature point matching method based on deformation space pyramid
CN110334622B (en) * 2019-06-24 2022-04-19 电子科技大学 Pedestrian retrieval method based on adaptive feature pyramid
CN110490268A (en) * 2019-08-26 2019-11-22 山东浪潮人工智能研究院有限公司 A kind of feature matching method of the improvement nearest neighbor distance ratio based on cosine similarity
CN111062927A (en) * 2019-12-18 2020-04-24 广东电网有限责任公司 Method, system and equipment for detecting image quality of unmanned aerial vehicle
CN111639662A (en) * 2019-12-23 2020-09-08 珠海大横琴科技发展有限公司 Remote sensing image bidirectional matching method and device, electronic equipment and storage medium
CN111209920B (en) * 2020-01-06 2022-09-23 桂林电子科技大学 Airplane detection method under complex dynamic background
CN113066130A (en) * 2021-04-21 2021-07-02 国家基础地理信息中心 Aerial photography image center position calculating method and device, electronic equipment and readable storage medium
CN113343920A (en) * 2021-07-01 2021-09-03 中诚信征信有限公司 Method and device for classifying face recognition photos, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102169581A (en) * 2011-04-18 2011-08-31 北京航空航天大学 Feature vector-based fast and high-precision robustness matching method
CN103593832A (en) * 2013-09-25 2014-02-19 重庆邮电大学 Method for image mosaic based on feature detection operator of second order difference of Gaussian

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102169581A (en) * 2011-04-18 2011-08-31 北京航空航天大学 Feature vector-based fast and high-precision robustness matching method
CN103593832A (en) * 2013-09-25 2014-02-19 重庆邮电大学 Method for image mosaic based on feature detection operator of second order difference of Gaussian

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
An improved fast matching algorithm based on SIFT features (一种改进的基于SIFT特征的快速匹配算法); Tang Hongmei et al.; Video Engineering (《电视技术》); No. 3 Research Institute of China Electronics Technology Group Corporation; 2013-08-02; Vol. 37, No. 15; full text *
Research on image matching methods based on the SIFT algorithm (基于SIFT算法的图像匹配方法研究); Wang Song; China Master's Theses Full-text Database, Information Science and Technology (《中国优秀硕士学位论文全文数据库 信息科技辑》); China Academic Journals (CD Edition) Electronic Publishing House; 2014-01-15; No. 1; pp. 19-49 *

Also Published As

Publication number Publication date
CN103886611A (en) 2014-06-25

Similar Documents

Publication Publication Date Title
CN103886611B (en) An image matching method suitable for automatic checking of aerial photography flight quality
CN111815757B (en) Large member three-dimensional reconstruction method based on image sequence
CN111028277B (en) SAR and optical remote sensing image registration method based on pseudo-twin convolution neural network
CN104867126B (en) Based on point to constraint and the diameter radar image method for registering for changing region of network of triangle
CN107067415B (en) A kind of object localization method based on images match
CN103218783B (en) Satellite remote sensing images fast geometric correcting method based on control point image database
CN109784223B (en) Multi-temporal remote sensing image matching method and system based on convolutional neural network
CN104599258B (en) A kind of image split-joint method based on anisotropic character descriptor
CN107424181A (en) A kind of improved image mosaic key frame rapid extracting method
CN109409292A (en) The heterologous image matching method extracted based on fining characteristic optimization
CN106485690A (en) Cloud data based on a feature and the autoregistration fusion method of optical image
CN106991695A (en) A kind of method for registering images and device
CN107063228A (en) Targeted attitude calculation method based on binocular vision
CN108921939A (en) A kind of method for reconstructing three-dimensional scene based on picture
CN107025449B (en) Oblique image straight line feature matching method constrained by local area with unchanged visual angle
CN106558072A (en) A kind of method based on SIFT feature registration on remote sensing images is improved
CN112254656B (en) Stereoscopic vision three-dimensional displacement measurement method based on structural surface point characteristics
CN106485740A (en) A kind of combination point of safes and the multidate SAR image registration method of characteristic point
CN107677274A (en) Unmanned plane independent landing navigation information real-time resolving method based on binocular vision
CN105488541A (en) Natural feature point identification method based on machine learning in augmented reality system
CN108550166A (en) A kind of spatial target images matching process
CN113538501A (en) Low-altitude image DSM generation building edge refinement method
CN109509216A (en) A kind of Landsat and SPOT satellite image automatic registration method
CN110222638B (en) Aircraft target detection method based on sketch candidate box strategy and Fast R-CNN
CN105678720A (en) Image matching judging method and image matching judging device for panoramic stitching

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180427