CN103886611A - Image matching method suitable for automatically detecting flight quality of aerial photography - Google Patents

Publication number: CN103886611A (granted as CN103886611B)
Application number: CN201410139272.XA
Authority: CN (China)
Legal status: Granted; Expired - Fee Related
Inventors: 彭桂辉, 梁菲, 赵铁梅, 左涛, 刘敏, 郭永春, 姚春雨, 王慧芳, 宋袁龙
Original and current assignee: Sian Coal and Aeronautics Information Industry Co Ltd
Priority: CN201410139272.XA, granted as CN103886611B
Abstract

The invention discloses an image matching method suitable for automatically detecting the flight quality of aerial photography. The method comprises the following steps: first, image preprocessing, in which the two digital images to be matched are denoised and filtered; the two images are two-dimensional digital images of the surveyed area obtained by aerial photogrammetry. Second, SIFT feature extraction, in which a data processor extracts SIFT features from each of the two digital images, obtaining all feature points of both images and the SIFT feature descriptor of each feature point. Third, a kd-tree is built and a bidirectional search is performed to determine matching points. Fourth, gross errors are eliminated with the RANSAC algorithm. The method has simple steps, a reasonable design, a convenient implementation, good robustness and good results, and can effectively solve the problem that existing methods are prone to matching failure or mismatching.

Description

An image matching method suitable for automatic checking of aerial photography flight quality
Technical field
The invention belongs to the technical field of photogrammetric processing, and in particular relates to an image matching method suitable for automatic checking of aerial photography flight quality.
Background technology
Image matching refers to identifying corresponding points between two or more images by means of an image matching algorithm. Image matching is used in many fields, such as photogrammetric processing, digital image processing, medical image processing and remote sensing image data processing, and different applications place different requirements on it. For the image matching used in automatic checking of aerial photography flight quality, the correctness of the matching points must be high and the precision must be within 2 pixels. In practice, matching algorithms are often affected by image noise, image rotation and image deformation; in aerial photography especially, terrain relief, central projection, camera attitude and the flight environment make the rotation and distortion of aerial images large. The flight quality of aerial photography therefore needs to be checked automatically by means of image matching, but existing matching methods are prone to matching failure or mismatching during automatic processing.
Thus, there is currently a lack of an image matching method suitable for automatic checking of aerial photography flight quality whose steps are simple, whose design is reasonable, whose implementation is convenient, and whose robustness and results are good — a method that can check aerial photography flight quality with high correctness and effectively solve the matching failures and mismatches that existing matching methods suffer during automatic processing.
Summary of the invention
The technical problem to be solved by the present invention is to address the above deficiencies of the prior art by providing an image matching method suitable for automatic checking of aerial photography flight quality whose steps are simple, whose design is reasonable, whose implementation is convenient, and whose robustness and results are good, and which can solve the problem that existing matching methods are prone to matching failure or mismatching.
To solve the above technical problem, the technical solution adopted by the present invention is an image matching method suitable for automatic checking of aerial photography flight quality, characterized in that the method comprises the following steps:
Step 1, image preprocessing: a data processor denoises and filters the two digital images to be matched; the two images are digital images of the surveyed area obtained by aerial photogrammetry; both are two-dimensional images, referred to respectively as the first image and the second image;
Step 2, SIFT feature extraction: the data processor performs SIFT feature extraction on each of the two digital images, obtaining all feature points of both images and the SIFT feature descriptor of each feature point;
Step 3, kd-tree construction and bidirectional search to determine matching points: the data processor builds kd-trees and performs a bidirectional search, as follows:
Step 301, first matching, comprising the following steps:
Step 3011, kd-tree construction: take the second image as the reference image to be matched, and build a kd-tree from the SIFT feature descriptors of all feature points of the second image;
Step 3012, feature matching: using the kd-tree constructed in step 3011, apply a nearest-neighbour algorithm to the SIFT feature descriptors of all feature points of the first image extracted in step 2, finding all feature points in the second image that match the first image; a feature point in the second image that matches the first image is a matching point;
Step 302, second matching, comprising the following steps:
Step 3021, kd-tree construction: take the first image as the reference image to be matched, and build a kd-tree from the SIFT feature descriptors of all feature points of the first image;
Step 3022, feature matching: using the kd-tree constructed in step 3021, apply the nearest-neighbour algorithm to all matching points obtained in step 3012, finding all feature points in the first image that match the second image; a feature point in the first image that matches the second image is a reliable matching point;
Step 4, gross-error elimination: the data processor applies the RANSAC algorithm to all reliable matching points obtained in step 3022 to eliminate gross errors.
In the above image matching method suitable for automatic checking of aerial photography flight quality, when the digital images are denoised and filtered in step 1, Gaussian smoothing filtering is applied to each digital image first, and Wallis filtering is then applied to the Gaussian-smoothed image.
In the above image matching method suitable for automatic checking of aerial photography flight quality, the SIFT feature extraction performed on the digital images in step 2 proceeds as follows:
Step 201, Gaussian kernel convolution: according to the formula L(x, y, σ_m) = G(x, y, σ_m) × I(x, y), perform Gaussian kernel convolution M times on the digital image to be processed, obtaining its multi-scale space, which comprises M Gaussian pyramid spaces L(x, y, σ_m); where
G(x, y, σ_m) = (1/(2πσ_m²)) · exp(−(x² + y²)/(2σ_m²)),
(x, y) are the pixel coordinates of the two-dimensional image, I(x, y) is the digital image to be processed, σ_m is the variance of the Gaussian normal distribution, m is the index of the Gaussian kernel convolution, m = 1, 2, …, M, and M is a positive integer with M > 3;
Step 202, difference-of-Gaussian processing and extreme point detection: first, according to the difference-of-Gaussian function
D(x, y, σ_{i−1}) = [G(x, y, kσ_{i−1}) − G(x, y, σ_{i−1})] × I(x, y) = L(x, y, σ_i) − L(x, y, σ_{i−1}),
apply difference-of-Gaussian processing to each pair of adjacent Gaussian pyramid spaces L(x, y, σ_m), obtaining M − 1 Gaussian difference pyramid spaces, and detect the extreme points of each Gaussian difference pyramid space; where i is a positive integer, i = 2, 3, …, M, σ_i = kσ_{i−1}, and k is the constant scale ratio between adjacent levels (in standard SIFT, k = 2^(1/S) for S intervals per octave);
Step 203, feature point screening: screen the extreme points detected in step 202 in each Gaussian difference pyramid space; the screening method is identical for all M − 1 Gaussian difference pyramid spaces. When screening any extreme point of the i-th Gaussian difference pyramid space D(x, y, σ_{i−1}), the process is as follows:
Step 2031: represent the Gaussian difference pyramid space D(x, y, σ_{i−1}) of step 202 by its Taylor expansion about the current extreme point to be screened, and keep the first two terms of the expansion to obtain
D(X_max) = D_0 + (1/2) · (∂D/∂X)ᵀ · X_max,
where X = (x, y, σ_{i−1})ᵀ, T denotes matrix transposition, and D_0 is the first term of the Taylor expansion;
Step 2032: when
tr(H_hess)² / Det(H_hess) < (γ + 1)² / γ,
the current extreme point to be screened is retained, and the retained extreme point is taken as a feature point; in the formula, tr(H_hess) is the trace of the Hessian matrix H_hess, Det(H_hess) is the determinant of H_hess, and γ = 10;
the matrix H_hess = [D_xx, D_xy; D_xy, D_yy], tr(H_hess) = D_xx + D_yy, and Det(H_hess) = D_xx·D_yy − D_xy²; D_xx and D_yy are respectively the second-order partial derivatives of the Taylor expansion of step 2031 in the x and y directions, and D_xy is its mixed partial derivative in the x and y directions;
Step 2033: following the method of steps 2031 to 2032, screen all extreme points in the M − 1 Gaussian difference pyramid spaces, obtaining all feature points of the digital image to be processed;
Step 204, determination of feature point principal directions: using local image features, determine the principal direction of each feature point screened out in step 203;
Step 205, SIFT feature descriptor generation: generate the SIFT feature descriptor of each feature point screened out in step 203; each SIFT feature descriptor is a 128-dimensional feature vector.
In the above image matching method suitable for automatic checking of aerial photography flight quality, the two digital images in step 1 have the same resolution.
In the above image matching method suitable for automatic checking of aerial photography flight quality, when the nearest-neighbour algorithm is applied in step 3012 to the SIFT feature descriptors of all feature points of the first image extracted in step 2, each feature point of the first image performs an approximate nearest-neighbour search in the kd-tree constructed in step 3011, finding all feature points in the second image that match the first image;
when the nearest-neighbour algorithm is applied in step 3022 to all matching points obtained in step 3012, each matching point obtained in step 3012 performs an approximate nearest-neighbour search in the kd-tree constructed in step 3021, finding all feature points in the first image that match the second image.
In the above image matching method suitable for automatic checking of aerial photography flight quality, before the principal directions are determined in step 204, the position of each feature point screened out in step 203 is refined by fitting a three-dimensional quadratic function.
In the above image matching method suitable for automatic checking of aerial photography flight quality, when Gaussian smoothing filtering is performed on the digital images in step 1, an N × N window is used for filtering, where N is an odd number and N ≥ 3.
In the above image matching method suitable for automatic checking of aerial photography flight quality, when the SIFT feature descriptor of any feature point screened out in step 203 is generated in step 205, a 16 × 16-pixel window centred on the feature point is taken first; the window is then divided into sixteen image sub-blocks of 4 × 4 pixels; the accumulated gradient magnitude in 8 gradient directions is computed on each sub-block, and a gradient orientation histogram is drawn over the 8 directions, finally generating the 4 × 4 × 8 = 128-dimensional SIFT feature vector; the SIFT feature vector is composed of gradient magnitudes m(x, y) and gradient directions θ(x, y); each image sub-block serves as a seed point, and the SIFT feature descriptor is composed of the 16 seed points.
In the above image matching method suitable for automatic checking of aerial photography flight quality, when the principal direction of any feature point is determined in step 204, a gradient orientation histogram is drawn with the gradient magnitude m(x, y) of the feature point as the ordinate and its gradient direction θ(x, y) as the abscissa; the gradient direction corresponding to the highest gradient magnitude in the histogram is the principal direction of the feature point. Here the gradient magnitude m(x, y) = sqrt[(L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²], and the gradient direction θ(x, y) = arctan[(L(x+1, y) − L(x−1, y)) / (L(x, y+1) − L(x, y−1))].
In the above image matching method suitable for automatic checking of aerial photography flight quality, when local image features are used in step 204 to determine the principal direction of each feature point, the principal direction of a feature point is the gradient direction corresponding to the highest gradient magnitude among the 8 pixels surrounding that feature point.
Compared with the prior art, the present invention has the following advantages:
1. The method steps are simple, the design is reasonable, and the implementation is convenient.
2. The method is easy to operate and easy to master. The detailed process is as follows: first, input the two digital images to be matched and perform denoising and filtering preprocessing; then, use the SIFT algorithm to extract feature points from the two digital images, obtaining feature points with feature-vector descriptors; next, build kd-trees from the feature points of the two digital images and perform a bidirectional search, obtaining reliable matching points; finally, apply the random sample consensus (RANSAC) algorithm to the obtained matching points to reject mismatched points, guaranteeing that the final matching points are correct and reliable.
3. The matching process does not depend on GPS/IMU data: existing aerial photography flight quality detection methods all require accurate GPS/IMU data to assist image matching; obtaining accurate GPS/IMU data requires a precise ephemeris and generally a wait of 2 to 3 days, so real-time performance is poor. The matching method adopted by the present invention needs no GPS/IMU data and still obtains matching points meeting the requirements of aerial photography flight quality checking.
4. The matching method is robust: existing aerial image matching methods only perform filtering preprocessing without denoising, so matching often fails when terrain relief is large. The matching method adopted by the present invention first removes noise and then filters, thereby strengthening image detail features, and can extract correct matching points over various kinds of terrain.
5. The matching points are highly reliable: existing aerial image matching methods reject gross error points after a unidirectional matching; when the images are rotated and distorted, such methods produce many wrong matching points, and the gross error points are difficult to reject. The present invention performs a bidirectional search over kd-trees; the matching algorithm is robust, it largely improves the matching result when the images are rotated and distorted, and it improves matching correctness while guaranteeing speed.
6. The method is effective and of high practical value. It mainly solves the problem that, in aerial photography, the rotation and distortion of aerial stereo images caused by terrain relief, central projection, camera attitude and the flight environment affect the correctness of image matching in automatic flight quality checking. The present invention greatly improves the robustness of image matching in automatic flight quality checking, is applicable to different terrain, and has the advantages of strong stability, high correctness and independence from GPS/IMU data. The matching method of the present invention is effectively applicable to automatic checking of aerial photography flight quality and solves the matching failures and mismatches caused by large image rotation and distortion in areas with large height differences: by denoising and filtering the images to be matched, then extracting SIFT feature points and performing a bidirectional search over kd-trees, reliable matching points can be obtained even when image rotation and distortion are large, and matching correctness is improved while matching speed is guaranteed.
In summary, the method of the present invention has simple steps, a reasonable design, a convenient implementation, good robustness and good results, and can effectively solve the problem that existing matching methods are prone to matching failure or mismatching.
The technical scheme of the present invention is described in further detail below with reference to the drawings and embodiments.
Brief description of the drawings
Fig. 1 is a flow block diagram of the method of the present invention.
Embodiment
An image matching method suitable for automatic checking of aerial photography flight quality, as shown in Fig. 1, comprises the following steps:
Step 1, image preprocessing: a data processor denoises and filters the two digital images to be matched; the two images are digital images of the surveyed area obtained by aerial photogrammetry. Both are two-dimensional images, referred to respectively as the first image and the second image.
In the present embodiment, the two digital images have the same resolution, although they need not be the same size.
In the present embodiment, when the digital images are denoised and filtered in step 1, Gaussian smoothing filtering is applied to each digital image first, and Wallis filtering is then applied to the Gaussian-smoothed image.
When Gaussian smoothing filtering is performed in step 1, an N × N window is used, where N is an odd number and N ≥ 3; in practice, N can be adjusted to the image size. In the present embodiment, a 3 × 3 window is used for Gaussian smoothing.
During actual denoising, the filter window size can be adjusted as needed to remove the noise contained in the digital images.
For the filtering performed after denoising, the Wallis filter adopted not only effectively enhances the original image but also effectively suppresses noise; in particular, it greatly enhances image texture patterns at different scales. Therefore, when point features are extracted from the image (specifically, during the SIFT feature extraction of step 2), both the number and the precision of the point features are improved, which effectively improves the reliability and precision of the image matching result.
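The patent does not spell out its Wallis parameters, so the following NumPy sketch uses one common formulation of the Wallis filter; `target_mean`, `target_std`, `b`, `c` and the window size are illustrative assumptions, not values from the source.

```python
import numpy as np

def wallis_filter(img, win=15, target_mean=127.0, target_std=50.0, b=0.6, c=0.8):
    """Wallis-style local contrast filter (one common formulation).

    Each pixel is remapped so that the mean/std of its win x win
    neighbourhood move toward target_mean/target_std; b and c weight
    how strongly the local statistics are forced to the targets."""
    img = img.astype(np.float64)
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    h, w = img.shape
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            block = padded[y:y + win, x:x + win]
            m_l, s_l = block.mean(), block.std()
            # Multiplicative and additive correction terms.
            r1 = c * target_std / (c * s_l + (1 - c) * target_std + 1e-9)
            r0 = b * target_mean + (1 - b - r1) * m_l
            out[y, x] = img[y, x] * r1 + r0
    return np.clip(out, 0, 255)
```

On a constant patch the filter simply pulls the grey value toward the target mean; on textured patches it stretches local contrast toward the target standard deviation, which is the texture-enhancing behaviour the paragraph above attributes to Wallis filtering.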
Step 2, SIFT feature extraction: the data processor performs SIFT feature extraction on each of the two digital images, obtaining all feature points of both images and the SIFT feature descriptor of each feature point.
Step 3, kd-tree construction and bidirectional search to determine matching points: the data processor builds kd-trees and performs a bidirectional search, as follows:
Step 301, first matching, comprising the following steps:
Step 3011, kd-tree construction: take the second image as the reference image to be matched, and build a kd-tree from the SIFT feature descriptors of all feature points of the second image;
Step 3012, feature matching: using the kd-tree constructed in step 3011, apply a nearest-neighbour algorithm to the SIFT feature descriptors of all feature points of the first image extracted in step 2, finding all feature points in the second image that match the first image; a feature point in the second image that matches the first image is a matching point;
Step 302, second matching, comprising the following steps:
Step 3021, kd-tree construction: take the first image as the reference image to be matched, and build a kd-tree from the SIFT feature descriptors of all feature points of the first image;
Step 3022, feature matching: using the kd-tree constructed in step 3021, apply the nearest-neighbour algorithm to all matching points obtained in step 3012, finding all feature points in the first image that match the second image; a feature point in the first image that matches the second image is a reliable matching point.
In the present embodiment, when the nearest-neighbour algorithm is applied in step 3012 to the SIFT feature descriptors of all feature points of the first image extracted in step 2, each feature point of the first image performs an approximate nearest-neighbour search in the kd-tree constructed in step 3011, finding all feature points in the second image that match the first image;
when the nearest-neighbour algorithm is applied in step 3022 to all matching points obtained in step 3012, each matching point obtained in step 3012 performs an approximate nearest-neighbour search in the kd-tree constructed in step 3021, finding all feature points in the first image that match the second image.
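The two-pass scheme of steps 3012 and 3022 amounts to a cross-check: a pair is kept only when the nearest-neighbour relation holds in both directions. In this sketch a linear scan stands in for the kd-tree — the kd-tree in the patent accelerates exactly these nearest-neighbour queries without changing the idea — and the toy 2-D descriptors are illustrative, not real 128-dimensional SIFT vectors.

```python
import numpy as np

def nearest(desc, db):
    # Index of the database descriptor with the smallest L2 distance.
    return int(np.linalg.norm(db - desc, axis=1).argmin())

def bidirectional_match(desc1, desc2):
    """Cross-checked matching: point i of image 1, matched to j in
    image 2, is kept only if j's nearest neighbour back in image 1
    is again i (a reliable matching point in the patent's terms)."""
    matches = []
    for i, d1 in enumerate(desc1):
        j = nearest(d1, desc2)             # first matching (step 3012)
        if nearest(desc2[j], desc1) == i:  # second matching (step 3022)
            matches.append((i, j))
    return matches
```

One-sided nearest-neighbour matching can map two different points of the first image onto the same point of the second; the backward pass rejects such conflicts, which is why the surviving pairs are called reliable.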
Step 4, gross-error elimination: the data processor applies the RANSAC algorithm to all reliable matching points obtained in step 3022 to eliminate gross errors.
In the present embodiment, after gross error points are rejected with the random sample consensus (RANSAC) algorithm, matching points of high reliability and high precision are obtained, which effectively guarantees the correctness of the final matching result.
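Step 4 can be illustrated with a toy RANSAC. The patent does not fix the geometric model, so this sketch assumes the simplest one, a pure image-to-image translation; a real flight-quality check would more plausibly fit a homography or a relative orientation. All parameter values (`iters`, `tol`) are assumptions.

```python
import numpy as np

def ransac_translation(p1, p2, iters=200, tol=1.0, seed=0):
    """Toy RANSAC gross-error elimination on matched point sets p1 -> p2.

    A pure translation needs only one correspondence per hypothesis:
    sample one pair, hypothesise t = p2[i] - p1[i], count inliers whose
    residual is below tol, keep the largest consensus set, then refit
    t on all inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(p1), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(p1))            # minimal sample
        t = p2[i] - p1[i]
        resid = np.linalg.norm(p1 + t - p2, axis=1)
        inliers = resid < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    t = (p2[best_inliers] - p1[best_inliers]).mean(axis=0)  # refit
    return t, best_inliers
```

Correspondences that disagree with the consensus motion — the gross errors produced by wrong matches — simply never enter the inlier set, so they are rejected without any threshold on descriptor distance.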
In the present embodiment, the feature extraction of step 2 uses the SIFT algorithm, also known as the scale-invariant feature transform.
In the present embodiment, the SIFT feature extraction performed on the digital images in step 2 proceeds as follows:
Step 201, Gaussian kernel convolution: according to the formula L(x, y, σ_m) = G(x, y, σ_m) × I(x, y), perform Gaussian kernel convolution M times on the digital image to be processed, obtaining its multi-scale space, which comprises M Gaussian pyramid spaces L(x, y, σ_m); where G(x, y, σ_m) = (1/(2πσ_m²)) · exp(−(x² + y²)/(2σ_m²)), (x, y) are the pixel coordinates of the two-dimensional image, I(x, y) is the digital image to be processed, σ_m is the variance of the Gaussian normal distribution, m is the index of the Gaussian kernel convolution, m = 1, 2, …, M, and M is a positive integer with M > 3.
Here σ_m is also referred to as the scale-space factor.
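A minimal NumPy sketch of step 201 (and the subtraction that step 202 builds on): M Gaussian-blurred versions of the image form the pyramid L(x, y, σ_m), and adjacent levels differ by a constant factor k. The particular values of M, σ_1 and k are assumptions for illustration.

```python
import numpy as np

def gaussian_kernel(sigma):
    # Odd-sized kernel covering roughly +/- 3 sigma, normalised to sum 1.
    r = max(1, int(round(3 * sigma)))
    ax = np.arange(-r, r + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def convolve2d(img, ker):
    # 'Same'-size convolution with reflected borders; the Gaussian kernel
    # is symmetric, so correlation and convolution coincide.
    r = ker.shape[0] // 2
    padded = np.pad(img, r, mode="reflect")
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in range(ker.shape[0]):
        for dx in range(ker.shape[1]):
            out += ker[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out

def scale_space(img, M=4, sigma0=1.6, k=2 ** 0.5):
    # L(x, y, sigma_m) = G(x, y, sigma_m) x I(x, y), m = 1..M,
    # with sigma_m = sigma0 * k**(m-1); adjacent levels subtract to DoG.
    sigmas = [sigma0 * k ** (m - 1) for m in range(1, M + 1)]
    L = [convolve2d(img.astype(float), gaussian_kernel(s)) for s in sigmas]
    dog = [L[i] - L[i - 1] for i in range(1, M)]  # the M-1 DoG spaces
    return L, dog
```

Subtracting adjacent pyramid levels is exactly the difference-of-Gaussian processing of step 202: D(x, y, σ_{i−1}) = L(x, y, σ_i) − L(x, y, σ_{i−1}).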
Step 202, difference-of-Gaussian processing and extreme point detection: first, according to the difference-of-Gaussian function
D(x, y, σ_{i−1}) = [G(x, y, kσ_{i−1}) − G(x, y, σ_{i−1})] × I(x, y) = L(x, y, σ_i) − L(x, y, σ_{i−1}),
apply difference-of-Gaussian processing to each pair of adjacent Gaussian pyramid spaces L(x, y, σ_m), obtaining M − 1 Gaussian difference pyramid spaces, and detect the extreme points of each Gaussian difference pyramid space; where i is a positive integer, i = 2, 3, …, M, σ_i = kσ_{i−1}, and k is the constant scale ratio between adjacent levels (in standard SIFT, k = 2^(1/S) for S intervals per octave).
In the present embodiment, after the Gaussian kernel convolutions of step 201 and the difference-of-Gaussian processing of step 202, the difference-of-Gaussian scale space (DoG scale-space) is generated by convolving difference-of-Gaussian functions of different scale-space factors with the original digital image; this DoG scale space is composed of the M − 1 Gaussian difference pyramid spaces.
In the present embodiment, extreme point detection in step 202 searches through the scale space to find extreme points at pixel granularity.
In the present embodiment, the difference of Gaussians is used for extreme point detection; the detected extreme points are also called key points.
In actual extreme point detection, each pixel is compared with the 26 pixels around it to find the extreme points: the 8 adjacent pixels in the same scale space as the pixel under consideration, plus the 2 × 9 corresponding pixels in the two scale spaces immediately above and below it.
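The 26-neighbour comparison just described can be sketched directly. The test below is strict (the candidate must be the unique maximum or minimum of its 3 × 3 × 3 cube; ties are discarded), which is one reasonable reading of the text.

```python
import numpy as np

def detect_extrema(dog):
    """Find pixels that are extrema over their 26 neighbours: the 8
    pixels of the same DoG layer plus the 9 pixels in each of the two
    adjacent layers.  `dog` is a list of same-sized 2-D arrays."""
    extrema = []
    for s in range(1, len(dog) - 1):       # need a layer above and below
        prev_l, cur, next_l = dog[s - 1], dog[s], dog[s + 1]
        h, w = cur.shape
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                v = cur[y, x]
                cube = np.stack([prev_l[y-1:y+2, x-1:x+2],
                                 cur[y-1:y+2, x-1:x+2],
                                 next_l[y-1:y+2, x-1:x+2]])
                if v == cube.max() and (cube == v).sum() == 1:
                    extrema.append((s, y, x, 'max'))
                elif v == cube.min() and (cube == v).sum() == 1:
                    extrema.append((s, y, x, 'min'))
    return extrema
```

Because each comparison is against both spatial and scale neighbours, a detected key point is extreme in position and scale simultaneously, which is what gives SIFT its scale selection.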
Step 203, feature point screening: screen the extreme points detected in step 202 in each Gaussian difference pyramid space; the screening method is identical for all M − 1 Gaussian difference pyramid spaces. When screening any extreme point of the i-th Gaussian difference pyramid space D(x, y, σ_{i−1}), the process is as follows:
Step 2031: represent the Gaussian difference pyramid space D(x, y, σ_{i−1}) of step 202 by its Taylor expansion about the current extreme point to be screened, and keep the first two terms of the expansion to obtain
D(X_max) = D_0 + (1/2) · (∂D/∂X)ᵀ · X_max,
where X = (x, y, σ_{i−1})ᵀ, T denotes matrix transposition, and D_0 is the first term of the Taylor expansion.
Step 2032: when
tr(H_hess)² / Det(H_hess) < (γ + 1)² / γ,
the current extreme point to be screened is retained, and the retained extreme point is taken as a feature point; in the formula, tr(H_hess) is the trace of the Hessian matrix H_hess, Det(H_hess) is the determinant of H_hess, and γ = 10.
The matrix H_hess = [D_xx, D_xy; D_xy, D_yy], tr(H_hess) = D_xx + D_yy, and Det(H_hess) = D_xx·D_yy − D_xy²; D_xx and D_yy are respectively the second-order partial derivatives of the Taylor expansion of step 2031 in the x and y directions, and D_xy is its mixed partial derivative in the x and y directions.
Step 2033: following the method of steps 2031 to 2032, screen all extreme points in the M − 1 Gaussian difference pyramid spaces, obtaining all feature points of the digital image to be processed.
In the present embodiment, the difference of Gaussians is used for extreme point detection, which significantly improves computation speed.
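The retention test of step 2032 — keep a point only when tr(H_hess)²/Det(H_hess) < (γ+1)²/γ — can be checked per pixel. The finite-difference scheme used here to estimate the Hessian entries is the usual one; the source does not spell it out.

```python
import numpy as np

def passes_edge_test(D, y, x, gamma=10.0):
    """Hessian-based edge screening on one DoG layer D.

    Edge-like points have one large and one small principal curvature,
    which inflates tr(H)^2 / det(H); blob-like points keep it small."""
    dxx = D[y, x + 1] + D[y, x - 1] - 2 * D[y, x]
    dyy = D[y + 1, x] + D[y - 1, x] - 2 * D[y, x]
    dxy = (D[y + 1, x + 1] - D[y + 1, x - 1]
           - D[y - 1, x + 1] + D[y - 1, x - 1]) / 4.0
    tr = dxx + dyy
    det = dxx * dyy - dxy * dxy
    if det <= 0:              # curvatures of opposite sign: reject
        return False
    return tr * tr / det < (gamma + 1.0) ** 2 / gamma
```

With γ = 10 the threshold (γ+1)²/γ = 12.1, so an isotropic peak (ratio 4) passes while a pure ridge (det ≤ 0, or ratio far above 12.1) is rejected — exactly the behaviour the screening step relies on.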
Step 204, determination of feature point principal directions: using local image features, determine the principal direction of each feature point screened out in step 203. After the principal direction of each feature point is determined, each feature point effectively acquires rotational invariance.
In the present embodiment, when local image features are used to determine the principal direction of each feature point, the principal direction of a feature point is the gradient direction corresponding to the highest gradient magnitude among the 8 pixels surrounding that feature point.
In the present embodiment, when the principal direction of any feature point is determined, a gradient orientation histogram is drawn with the gradient magnitude m(x, y) of the feature point as the ordinate and its gradient direction θ(x, y) as the abscissa; the gradient direction corresponding to the highest gradient magnitude in the histogram is the principal direction of the feature point. Here the gradient magnitude m(x, y) = sqrt[(L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²], and the gradient direction θ(x, y) = arctan[(L(x+1, y) − L(x−1, y)) / (L(x, y+1) − L(x, y−1))].
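A sketch of the gradient and principal-direction computation of step 204. For an unambiguous full-circle angle the code uses the standard atan2 form rather than the bare arctan ratio printed in the text, and the 8-bin magnitude-weighted histogram over the surrounding pixels is a simplification; both choices are assumptions.

```python
import numpy as np

def grad_mag_dir(L, y, x):
    # Central differences, as in the text's m(x, y) and theta(x, y).
    dx = L[y, x + 1] - L[y, x - 1]          # L(x+1, y) - L(x-1, y)
    dy = L[y + 1, x] - L[y - 1, x]          # L(x, y+1) - L(x, y-1)
    m = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx) % (2 * np.pi)   # angle in [0, 2*pi)
    return m, theta

def principal_direction(L, y, x, nbins=8):
    """Histogram the gradient directions of the pixels around (y, x),
    weighted by gradient magnitude; the peak bin gives the principal
    direction (returned as the bin centre, in radians)."""
    hist = np.zeros(nbins)
    for yy in range(y - 1, y + 2):
        for xx in range(x - 1, x + 2):
            m, theta = grad_mag_dir(L, yy, xx)
            hist[int(theta / (2 * np.pi) * nbins) % nbins] += m
    peak = int(hist.argmax())
    return (peak + 0.5) * 2 * np.pi / nbins
```

Assigning each feature point the direction of its dominant local gradient is what later lets the descriptor be expressed relative to that direction, giving the rotational invariance mentioned above.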
In the present embodiment, before the principal directions are determined in step 204, the position of each feature point screened out in step 203 is also refined by fitting a three-dimensional quadratic curve.
Step 205, SIFT feature descriptor generation: a SIFT feature descriptor is generated for each feature point screened out in step 203; each SIFT feature descriptor is a 128-dimensional feature vector.
In the present embodiment, to generate the SIFT feature descriptor of any feature point screened out in step 203, a 16 × 16 pixel window centred on the feature point is taken first; this window is then divided into sixteen 4 × 4 pixel image blocks, the accumulated gradient magnitude along 8 gradient directions is computed for each block, and a gradient orientation histogram over the 8 directions is drawn, finally generating a 4 × 4 × 8 = 128-dimensional SIFT feature vector. The SIFT feature vector is built from the gradient magnitudes m(x, y) and gradient directions θ(x, y); each image block serves as one seed point, so the SIFT feature descriptor is composed of 16 seed points.
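The 16 × 16 → 4 × 4 × 8 accumulation can be sketched as below. This is a simplified illustration under our own assumptions: gradient magnitudes and orientations are precomputed, and the Gaussian weighting, rotation to the principal direction, and trilinear interpolation of the full SIFT descriptor are omitted:

```python
import numpy as np

def sift_descriptor_128(mag, ori):
    """Accumulate a 4x4x8 = 128-dim SIFT-style descriptor (sketch).

    mag, ori: 16x16 arrays of gradient magnitudes and orientations
    (radians) over the window centred on the feature point.
    """
    assert mag.shape == (16, 16) and ori.shape == (16, 16)
    desc = np.zeros((4, 4, 8))
    # quantise each orientation into one of 8 direction bins
    bins = ((ori % (2 * np.pi)) / (2 * np.pi) * 8).astype(int) % 8
    for r in range(16):
        for c in range(16):
            # each 4x4 block ("seed point") collects magnitude per direction
            desc[r // 4, c // 4, bins[r, c]] += mag[r, c]
    v = desc.ravel()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v   # unit-normalised 128-vector
```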
In recent years, with the continuous deepening of image processing technology, image matching algorithms have attracted great attention. The SIFT operator was proposed by D. G. Lowe in 1999, mainly for object recognition at the time; in 2004 Lowe summarised the operator comprehensively and formally proposed SIFT (Scale Invariant Feature Transform), a scale-space-based local image feature descriptor that is invariant to image scaling and rotation and robust even to affine transformation. SIFT extracts local features by finding extreme points in scale space and extracting position, scale and rotation invariants; it handles matching between two images under translation, rotation and affine transformation, and has very strong matching capability. However, in the traditional SIFT algorithm, once the features (i.e. the SIFT feature descriptors) have been found, the search is an exhaustive traversal of all points, so matching is slow and real-time matching of large images cannot be achieved.
The kd tree is a main-memory data structure that generalises the binary search tree to multidimensional data. It is highly versatile and has penetrated many fields, such as data organisation and indexing, and image processing; however, ordinary search-based processing algorithms are limited, and their speed is not fully satisfactory. In the technical solution disclosed in this invention, the SIFT algorithm is combined with the kd tree: during actual matching, kd trees are built from the feature points screened out of the two digital images to be matched, a bidirectional search is carried out, and the nearest neighbour algorithm is used to match the two images. This not only yields reliable match points but also accelerates the search in the multidimensional vector space, speeding up computation; matching is fast, implementation is convenient, and real-time matching of large images can be achieved.
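The combined kd-tree bidirectional (cross-check) nearest-neighbour matching can be sketched as follows, using SciPy's cKDTree as a stand-in for the kd trees of steps 3011 and 3021; the function name and toy descriptors are our own illustration:

```python
import numpy as np
from scipy.spatial import cKDTree

def bidirectional_match(desc1, desc2):
    """Kd-tree bidirectional nearest-neighbour matching (sketch of steps 301-302).

    A pair (i, j) is kept only when j is the nearest neighbour of desc1[i]
    in desc2 AND i is the nearest neighbour of desc2[j] back in desc1.
    """
    tree2 = cKDTree(desc2)             # first pass: second image as reference
    _, fwd = tree2.query(desc1, k=1)   # fwd[i] = j
    tree1 = cKDTree(desc1)             # second pass: first image as reference
    _, back = tree1.query(desc2, k=1)  # back[j] = i
    return [(i, int(j)) for i, j in enumerate(fwd) if back[j] == i]
```

The cross-check is what makes the surviving pairs "reliable match points": a one-sided nearest-neighbour match from image 1 to image 2 may map several features onto the same target, while the backward pass discards any pair that is not mutually nearest.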
The above is only a preferred embodiment of the present invention and does not limit the present invention in any way; any simple modification, change or equivalent structural variation made to the above embodiment according to the technical essence of the present invention still falls within the protection scope of the technical solution of the present invention.

Claims (10)

1. An image matching method suitable for automatic checking of aerial photography flight quality, characterised in that the method comprises the following steps:
Step 1, image preprocessing: a data processor is used to denoise and filter the two digital images to be matched, which are two digital images of the measured region obtained by aerial photogrammetry; both digital images are two-dimensional images, referred to respectively as the first image and the second image;
Step 2, SIFT feature extraction: the data processor performs SIFT feature extraction on both digital images, obtaining all feature points of the two digital images and the SIFT feature descriptor of each feature point;
Step 3, kd-tree construction and bidirectional search to determine match points: the data processor builds kd trees and performs a bidirectional search, as follows:
Step 301, first matching, comprising the following steps:
Step 3011, kd-tree construction: the second image is taken as the reference image to be matched, and a kd tree is built from the SIFT feature descriptors of all feature points of the second image;
Step 3012, feature matching: using the kd tree constructed in step 3011, the nearest neighbour algorithm performs feature matching on the SIFT feature descriptors of all feature points of the first image extracted in step 2, finding all feature points in the second image that match the first image; the feature points in the second image that match the first image are the match points;
Step 302, second matching, comprising the following steps:
Step 3021, kd-tree construction: the first image is taken as the reference image to be matched, and a kd tree is built from the SIFT feature descriptors of all feature points of the first image;
Step 3022, feature matching: using the kd tree constructed in step 3021, the nearest neighbour algorithm performs feature matching on all match points obtained in step 3012, finding all feature points in the first image that match the second image; the feature points in the first image that match the second image are the reliable match points;
Step 4, gross error elimination: the data processor applies the RANSAC algorithm to eliminate gross errors from all reliable match points obtained in step 3022.
2. The image matching method suitable for automatic checking of aerial photography flight quality according to claim 1, characterised in that when the digital images are denoised and filtered in step 1, Gaussian smoothing filtering is first applied to each digital image, and Wallis filtering is then applied to the Gaussian-smoothed image.
3. The image matching method suitable for automatic checking of aerial photography flight quality according to claim 1 or 2, characterised in that SIFT feature extraction on a digital image in step 2 proceeds as follows:
Step 201, Gaussian kernel convolution: according to the formula L(x, y, σ_m) = G(x, y, σ_m) ∗ I(x, y), Gaussian kernel convolution is performed M times on the image to be processed, obtaining its multi-scale space, which comprises the M Gaussian pyramid spaces L(x, y, σ_m); here G(x, y, σ) = (1 / (2πσ^2)) · exp(−(x^2 + y^2) / (2σ^2)), (x, y) are the pixel coordinates of the two-dimensional image, I(x, y) is the image to be processed, σ_m is the variance of the Gaussian normal distribution, m is the convolution index with m = 1, 2, …, M, and M is a positive integer with M > 3;
Step 202, Gaussian differencing and extreme-point detection: first, according to the difference-of-Gaussians function D(x, y, σ_{i−1}) = [G(x, y, kσ_{i−1}) − G(x, y, σ_{i−1})] ∗ I(x, y) = L(x, y, σ_i) − L(x, y, σ_{i−1}), Gaussian differencing is applied to each pair of adjacent Gaussian pyramid spaces L(x, y, σ_m), obtaining M − 1 Gaussian difference pyramid spaces, whose extreme points are then detected; here i is a positive integer with i = 2, 3, …, M, σ_i = kσ_{i−1}, and k is the constant scale factor between adjacent levels;
Step 203, feature-point screening: the extreme points detected in each Gaussian difference pyramid space in step 202 are screened respectively, the screening method being identical for all M − 1 Gaussian difference pyramid spaces; any extreme point of the i-th Gaussian difference pyramid space D(x, y, σ_{i−1}) is screened as follows:
Step 2031: the Gaussian difference pyramid space D(x, y, σ_{i−1}) of step 202 is represented by its Taylor expansion at the current candidate extreme point, and the first two terms of the expansion are taken, obtaining D(X_max) = D_0 + (1/2)(∂D/∂X)^T · X_max, where X = (x, y, σ_{i−1}), T denotes matrix transposition, and D_0 is the first term of the Taylor expansion;
Step 2032: when tr(H_Hess)^2 / Det(H_Hess) < (γ+1)^2 / γ is satisfied, the current candidate extreme point is retained and taken as a feature point; in this formula, tr(H_Hess) denotes the trace of the Hessian matrix H_Hess, Det(H_Hess) its determinant, and γ = 10;
the Hessian matrix is H_Hess = [[D_xx, D_xy], [D_xy, D_yy]], so tr(H_Hess) = D_xx + D_yy and Det(H_Hess) = D_xx·D_yy − D_xy^2; D_xx and D_yy are respectively the second-order partial derivatives of the Taylor expansion of step 2031 in the x and y directions, and D_xy is its mixed partial derivative in x and y;
Step 2033: following the method of steps 2031 to 2032, all extreme points in the M − 1 Gaussian difference pyramid spaces are screened respectively, obtaining all feature points of the image to be processed;
Step 204, feature-point principal direction determination: using local image features, the principal direction of each feature point screened out in step 203 is determined;
Step 205, SIFT feature descriptor generation: a SIFT feature descriptor is generated for each feature point screened out in step 203; each SIFT feature descriptor is a 128-dimensional feature vector.
4. The image matching method suitable for automatic checking of aerial photography flight quality according to claim 1 or 2, characterised in that in step 1 the two digital images have the same resolution.
5. The image matching method suitable for automatic checking of aerial photography flight quality according to claim 1 or 2, characterised in that when the nearest neighbour algorithm performs feature matching in step 3012 on the SIFT feature descriptors of all feature points of the first image extracted in step 2, each feature point of the first image is submitted to an approximate nearest-neighbour search in the kd tree constructed in step 3011, finding all feature points in the second image that match the first image;
and when the nearest neighbour algorithm performs feature matching in step 3022 on all match points obtained in step 3012, each of those match points is submitted to an approximate nearest-neighbour search in the kd tree constructed in step 3021, finding all feature points in the first image that match the second image.
6. The image matching method suitable for automatic checking of aerial photography flight quality according to claim 1 or 2, characterised in that before the principal directions are determined in step 204, the position of each feature point screened out in step 203 is also refined by fitting a three-dimensional quadratic curve.
7. The image matching method suitable for automatic checking of aerial photography flight quality according to claim 2, characterised in that when Gaussian smoothing filtering is applied to the digital images in step 1, an N × N window is used for filtering, where N is odd and N ≥ 3.
8. The image matching method suitable for automatic checking of aerial photography flight quality according to claim 3, characterised in that when the SIFT feature descriptor of any feature point screened out in step 203 is generated in step 205, a 16 × 16 pixel window centred on the feature point is taken first; this window is then divided into sixteen 4 × 4 pixel image blocks, the accumulated gradient magnitude along 8 gradient directions is computed for each block, and a gradient orientation histogram over the 8 directions is drawn, finally generating a 4 × 4 × 8 = 128-dimensional SIFT feature vector; the SIFT feature vector is built from the gradient magnitudes m(x, y) and gradient directions θ(x, y); each image block serves as one seed point, and the SIFT feature descriptor is composed of 16 seed points.
9. The image matching method suitable for automatic checking of aerial photography flight quality according to claim 3, characterised in that when the principal direction of any feature point is determined in step 204, a gradient orientation histogram is drawn with the gradient magnitude m(x, y) of the feature point on the vertical axis and its gradient direction θ(x, y) on the horizontal axis; the gradient direction corresponding to the highest gradient magnitude in this histogram is the principal direction of the feature point; here m(x, y) = √[(L(x+1, y) − L(x−1, y))^2 + (L(x, y+1) − L(x, y−1))^2] and θ(x, y) = arctan[(L(x+1, y) − L(x−1, y)) / (L(x, y+1) − L(x, y−1))].
10. The image matching method suitable for automatic checking of aerial photography flight quality according to claim 3, characterised in that when local image features are used in step 204 to determine the principal directions, the principal direction of a feature point refers to the gradient direction corresponding to the highest gradient magnitude among the 8 pixels surrounding that feature point.
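The gross-error elimination of step 4 can be sketched as follows. The claims only name the RANSAC algorithm, so the translation-only motion model, tolerance values, and function name below are illustrative assumptions rather than the patent's specification:

```python
import numpy as np

def ransac_translation(pts1, pts2, n_iter=200, tol=2.0, seed=0):
    """RANSAC gross-error elimination (sketch of step 4).

    pts1, pts2: (n, 2) arrays of matched point coordinates. A simple
    translation model stands in for the full transform; returns the
    indices of the inlier (non-gross-error) matches.
    """
    rng = np.random.default_rng(seed)
    best = np.array([], dtype=int)
    for _ in range(n_iter):
        k = rng.integers(len(pts1))            # minimal sample: one pair
        t = pts2[k] - pts1[k]                  # hypothesised translation
        resid = np.linalg.norm(pts1 + t - pts2, axis=1)
        inliers = np.flatnonzero(resid < tol)
        if len(inliers) > len(best):           # keep largest consensus set
            best = inliers
    return best
```

Match pairs whose residual under the best-supported hypothesis exceeds the tolerance are the gross errors and are discarded.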
CN201410139272.XA 2014-04-08 2014-04-08 An image matching method suitable for automatic checking of aerial photography flight quality Expired - Fee Related CN103886611B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410139272.XA CN103886611B (en) 2014-04-08 2014-04-08 A kind of image matching method for being suitable for aeroplane photography flight reappearance and checking automatically


Publications (2)

Publication Number Publication Date
CN103886611A true CN103886611A (en) 2014-06-25
CN103886611B CN103886611B (en) 2018-04-27

Family

ID=50955484

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410139272.XA Expired - Fee Related CN103886611B (en) 2014-04-08 2014-04-08 A kind of image matching method for being suitable for aeroplane photography flight reappearance and checking automatically

Country Status (1)

Country Link
CN (1) CN103886611B (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104180794A (en) * 2014-09-02 2014-12-03 西安煤航信息产业有限公司 Method for treating texture distortion area of digital orthoimage
CN106023109A (en) * 2016-05-19 2016-10-12 西华大学 Region similar example learning-based sparse denoising method
CN106504237A (en) * 2016-09-30 2017-03-15 上海联影医疗科技有限公司 Determine method and the image acquiring method of matching double points
CN106851164A (en) * 2017-03-28 2017-06-13 戴金辰 Record image, video generation reservation method
CN107917699A (en) * 2017-11-13 2018-04-17 中国科学院遥感与数字地球研究所 A kind of method for being used to improve empty three mass of mountain area landforms oblique photograph measurement
CN108470354A (en) * 2018-03-23 2018-08-31 云南大学 Video target tracking method, device and realization device
CN109166143A (en) * 2018-07-06 2019-01-08 航天星图科技(北京)有限公司 A kind of big regional network stereo mapping satellite image matching process
CN109509262A (en) * 2018-11-13 2019-03-22 盎锐(上海)信息科技有限公司 Intelligence enhancing modeling algorithm and device based on artificial intelligence
CN109598750A (en) * 2018-12-07 2019-04-09 中国地质大学(武汉) One kind being based on the pyramidal large scale differential image characteristic point matching method of deformation space
CN109740591A (en) * 2018-11-14 2019-05-10 国网浙江宁波市鄞州区供电有限公司 A kind of Meter recognition algorithm
CN110334622A (en) * 2019-06-24 2019-10-15 电子科技大学 Based on the pyramidal pedestrian retrieval method of self-adaptive features
CN110490268A (en) * 2019-08-26 2019-11-22 山东浪潮人工智能研究院有限公司 A kind of feature matching method of the improvement nearest neighbor distance ratio based on cosine similarity
US10580135B2 (en) 2016-07-14 2020-03-03 Shanghai United Imaging Healthcare Co., Ltd. System and method for splicing images
CN111062927A (en) * 2019-12-18 2020-04-24 广东电网有限责任公司 Method, system and equipment for detecting image quality of unmanned aerial vehicle
CN111209920A (en) * 2020-01-06 2020-05-29 桂林电子科技大学 Airplane detection method under complex dynamic background
CN111639662A (en) * 2019-12-23 2020-09-08 珠海大横琴科技发展有限公司 Remote sensing image bidirectional matching method and device, electronic equipment and storage medium
CN113066130A (en) * 2021-04-21 2021-07-02 国家基础地理信息中心 Aerial photography image center position calculating method and device, electronic equipment and readable storage medium
CN113343920A (en) * 2021-07-01 2021-09-03 中诚信征信有限公司 Method and device for classifying face recognition photos, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102169581A (en) * 2011-04-18 2011-08-31 北京航空航天大学 Feature vector-based fast and high-precision robustness matching method
CN103593832A (en) * 2013-09-25 2014-02-19 重庆邮电大学 Method for image mosaic based on feature detection operator of second order difference of Gaussian


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Tang Hongmei et al., "An Improved Fast Matching Algorithm Based on SIFT Features", Video Engineering (《电视技术》), vol. 37, no. 15, 2 August 2013 (2013-08-02) *
Wang Song, "Research on Image Matching Methods Based on the SIFT Algorithm", China Master's Theses Full-text Database, Information Science and Technology (《中国优秀硕士学位论文全文数据库 信息科技辑》), no. 1, 15 January 2014 (2014-01-15), pages 19-49 *

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104180794B (en) * 2014-09-02 2016-03-30 西安煤航信息产业有限公司 The disposal route in digital orthoimage garland region
CN104180794A (en) * 2014-09-02 2014-12-03 西安煤航信息产业有限公司 Method for treating texture distortion area of digital orthoimage
CN106023109B (en) * 2016-05-19 2019-06-07 西华大学 A kind of sparse denoising method based on the similar sample study in region
CN106023109A (en) * 2016-05-19 2016-10-12 西华大学 Region similar example learning-based sparse denoising method
US11416993B2 (en) 2016-07-14 2022-08-16 Shanghai United Imaging Healthcare Co., Ltd. System and method for splicing images
US10580135B2 (en) 2016-07-14 2020-03-03 Shanghai United Imaging Healthcare Co., Ltd. System and method for splicing images
US11893738B2 (en) 2016-07-14 2024-02-06 Shanghai United Imaging Healthcare Co., Ltd. System and method for splicing images
CN106504237A (en) * 2016-09-30 2017-03-15 上海联影医疗科技有限公司 Determine method and the image acquiring method of matching double points
CN106851164A (en) * 2017-03-28 2017-06-13 戴金辰 Record image, video generation reservation method
CN107917699B (en) * 2017-11-13 2020-01-17 中国科学院遥感与数字地球研究所 Method for improving aerial three quality of mountain landform oblique photogrammetry
CN107917699A (en) * 2017-11-13 2018-04-17 中国科学院遥感与数字地球研究所 A kind of method for being used to improve empty three mass of mountain area landforms oblique photograph measurement
CN108470354A (en) * 2018-03-23 2018-08-31 云南大学 Video target tracking method, device and realization device
CN108470354B (en) * 2018-03-23 2021-04-27 云南大学 Video target tracking method and device and implementation device
CN109166143A (en) * 2018-07-06 2019-01-08 航天星图科技(北京)有限公司 A kind of big regional network stereo mapping satellite image matching process
CN109509262A (en) * 2018-11-13 2019-03-22 盎锐(上海)信息科技有限公司 Intelligence enhancing modeling algorithm and device based on artificial intelligence
CN109509262B (en) * 2018-11-13 2023-02-28 上海盎维信息技术有限公司 Intelligent enhanced modeling method and device based on artificial intelligence
CN109740591A (en) * 2018-11-14 2019-05-10 国网浙江宁波市鄞州区供电有限公司 A kind of Meter recognition algorithm
CN109598750A (en) * 2018-12-07 2019-04-09 中国地质大学(武汉) One kind being based on the pyramidal large scale differential image characteristic point matching method of deformation space
CN110334622A (en) * 2019-06-24 2019-10-15 电子科技大学 Based on the pyramidal pedestrian retrieval method of self-adaptive features
CN110334622B (en) * 2019-06-24 2022-04-19 电子科技大学 Pedestrian retrieval method based on adaptive feature pyramid
CN110490268A (en) * 2019-08-26 2019-11-22 山东浪潮人工智能研究院有限公司 A kind of feature matching method of the improvement nearest neighbor distance ratio based on cosine similarity
CN111062927A (en) * 2019-12-18 2020-04-24 广东电网有限责任公司 Method, system and equipment for detecting image quality of unmanned aerial vehicle
CN111639662A (en) * 2019-12-23 2020-09-08 珠海大横琴科技发展有限公司 Remote sensing image bidirectional matching method and device, electronic equipment and storage medium
CN111209920A (en) * 2020-01-06 2020-05-29 桂林电子科技大学 Airplane detection method under complex dynamic background
CN111209920B (en) * 2020-01-06 2022-09-23 桂林电子科技大学 Airplane detection method under complex dynamic background
CN113066130A (en) * 2021-04-21 2021-07-02 国家基础地理信息中心 Aerial photography image center position calculating method and device, electronic equipment and readable storage medium
CN113343920A (en) * 2021-07-01 2021-09-03 中诚信征信有限公司 Method and device for classifying face recognition photos, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN103886611B (en) 2018-04-27

Similar Documents

Publication Publication Date Title
CN103886611A (en) Image matching method suitable for automatically detecting flight quality of aerial photography
CN111815757B (en) Large member three-dimensional reconstruction method based on image sequence
EP3382644B1 (en) Method for 3d modelling based on structure from motion processing of sparse 2d images
CN111028277B (en) SAR and optical remote sensing image registration method based on pseudo-twin convolution neural network
DE112012005350B4 (en) Method of estimating the pose of an object
CN103426165A (en) Precise registration method of ground laser-point clouds and unmanned aerial vehicle image reconstruction point clouds
CN103971378A (en) Three-dimensional reconstruction method of panoramic image in mixed vision system
Chen et al. Robust affine-invariant line matching for high resolution remote sensing images
CN104077760A (en) Rapid splicing system for aerial photogrammetry and implementing method thereof
CN103065135A (en) License number matching algorithm based on digital image processing
CN102005047A (en) Image registration system and method thereof
CN103593832A (en) Method for image mosaic based on feature detection operator of second order difference of Gaussian
CN101882308A (en) Method for improving accuracy and stability of image mosaic
Urban et al. Finding a good feature detector-descriptor combination for the 2D keypoint-based registration of TLS point clouds
CN103177444A (en) Automatic SAR (synthetic-aperture radar) image rectification method
CN107677274A (en) Unmanned plane independent landing navigation information real-time resolving method based on binocular vision
Bisht et al. Image registration concept and techniques: a review
CN105654479A (en) Multispectral image registering method and multispectral image registering device
Huang et al. SAR and optical images registration using shape context
CN104700359A (en) Super-resolution reconstruction method of image sequence in different polar axis directions of image plane
CN105352482A (en) Bionic compound eye microlens technology-based 3-3-2 dimension object detection method and system
Barazzetti et al. Automatic registration of multi-source medium resolution satellite data
Petrou Image registration: An overview
Kai et al. Multi-source remote sensing image registration based on normalized SURF algorithm
Wang et al. Fast and accurate satellite multi-view stereo using edge-aware interpolation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180427