CN101839722A - Method for automatically recognizing target at medium and low altitudes and positioning carrier with high accuracy - Google Patents

Method for automatically recognizing target at medium and low altitudes and positioning carrier with high accuracy

Info

Publication number
CN101839722A
CN101839722A (application CN201010164359A)
Authority
CN
China
Prior art keywords
image
matching
prime
point
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN 201010164359
Other languages
Chinese (zh)
Inventor
曾庆化
王先敏
刘建业
熊智
赖际舟
李荣冰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN 201010164359 priority Critical patent/CN101839722A/en
Publication of CN101839722A publication Critical patent/CN101839722A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method for automatically recognizing a target at medium and low altitudes and positioning the carrier with high accuracy, belonging to the field of scene-matching guidance. The method matches and recognizes targets and provides a high-accuracy navigation information source by applying a scale-invariant feature extraction algorithm and building an H (homography) model; it adaptively coordinates and plans the relevant parameters according to the fed-back matching result so as to further control matching performance; it replaces Gaussian convolution filtering with integral-image-based mean filtering when generating the image multi-scale space, which increases matching speed; according to the image matching result and the idea of random sample consensus, it builds an image-space transformation model suited to the strong matching capability of the algorithm; on the basis of this model it rejects wrongly matched points with pixel-accurate positioning, recognizes the target, and provides the high-accuracy navigation information source through model mapping; at the same time, image pairs with known transformation parameters can be selected to verify the performance and result of the image matching algorithm against the transformation model.

Description

Method for automatic target recognition and high-accuracy carrier positioning at medium and low altitudes
Technical field
The present invention relates to using image matching and the transformation model constructed between images to recognize targets accurately and to provide a high-accuracy navigation information source for a carrier at medium and low altitudes. It belongs to the field of image-matching guidance.
Background technology
At medium and low altitudes, using image matching to achieve accurate target detection and accurate carrier positioning is widely applied in military and civil fields such as image-matching guidance, military reconnaissance, disaster relief, air support, and terrain surveying. Abroad, research in this area started early on the military side: the United States began research on scene-matching aided navigation in the 1970s, and it has since been applied successfully, with good results, on the "Tomahawk" missile and the F-16. In autonomous UAV flight, research on scene-matching aided navigation is likewise widespread; because an image-guidance system is light in mass and high in accuracy, it is favored for using image matching to recognize a runway or another target automatically, obtain navigation information, and achieve autonomous flight and landing. On the civil side, reports of automatic target recognition and positioning based on image matching also appear frequently. Domestic work started late; research on scene-matching aided navigation and on image-matching-based target recognition is currently being actively launched.
The basic principle of recognizing targets and providing a high-accuracy carrier navigation information source at medium and low altitudes is to match the real-time target image acquired by the onboard image sensor against a previously prepared high-accuracy reference image or an earlier target-detection image and then, through comprehensive analysis of the environment and according to the matching result, to recognize the target precisely and provide a high-accuracy navigation information source. Owing to factors such as environment, equipment, and imaging conditions, differences of translation, rotation, scale, viewpoint, shear, illumination, and distortion may exist between the real-time image and the digital reference image, so an efficient and accurate image matching algorithm is the key to recognizing the target accurately and guaranteeing the high-accuracy navigation information source. The Hausdorff image matching algorithm and its improvements, currently rather widely used in scene-matching aided navigation research, suffer from low efficiency, limited accuracy, and a high probability of failed matching. The scale-invariant feature transform (SIFT) algorithm is widely adopted in general image matching; although it offers the strongest robustness and good real-time performance among operators of its class, in the image-matching guidance field it has serious limitations: the matching result is not controllable at all, wrongly matched points are numerous, a transformation model is lacking, there is no effective means to verify the matching result, and there is no acceleration method. These problems have seriously limited its application in image-matching guidance, and no domestic research has yet addressed them. It is therefore necessary, building on the strong matching capability of the scale-invariant feature extraction algorithm and the demands of target recognition and navigation in the medium-and-low-altitude field of view, to improve the algorithm and supply the functions it lacks but that image-matching guidance requires.
References:
Reference 1: Liu Jianye, Zeng Qinghua, Zhao Wei, Xiong Zhi, et al. Theory and Application of Navigation Systems [M]. Xi'an: Northwestern Polytechnical University Press, 2010.
Reference 2: Lowe D G. Distinctive image features from scale-invariant keypoints [J]. International Journal of Computer Vision, 2004, 60(2): 91-110.
Summary of the invention
The technical problem to be solved by this invention lies in the image-matching guidance field: to make full use of the scale-invariant feature extraction algorithm, which has strong matching capability in image matching, while overcoming the bottleneck problems that seriously limit its application in scene matching (a matching result that is not controllable at all, too many feature points, a high probability of wrongly matched points, the lack of an inter-image transformation model, no effective means to verify the matching result, and no acceleration method), and to propose a solution in both theory and practice, so as to provide accurate target information and real-time, accurate navigation and positioning information.
The present invention adopts following technical scheme for achieving the above object:
The method of the present invention for automatically recognizing a target and positioning the carrier with high accuracy at medium and low altitudes comprises the following steps:
Step 1: extract the features of the real-time image and of the reference image with the scale-invariant feature extraction algorithm;
It is characterized in that:
Step 2: perform the initial image matching with the features of the real-time image and the reference image from Step 1, using the nearest/second-nearest-neighbor rule, to obtain the initial matching result; according to the initial matching result, adaptively coordinate and plan the coarse-adjustment and fine-adjustment feature-extraction parameters that control the feature-point extraction of Step 1;
Step 3: build, with the RANSAC method and according to the matching result of Step 2, an image-space transformation model H adapted to the strong capability of the scale-invariant feature extraction algorithm;
Step 4: transform the real-time-image match-point coordinates through the model H of Step 3 to obtain their mapped coordinates in the reference image, and compute the distance error r between each mapped coordinate and the match-point coordinate obtained by the initial matching of Step 2; when the distance error r exceeds the set error threshold, reject that real-time-image match point; then recognize the target;
Step 5: compute the target center point as a weighted combination of the real-time-image point set remaining after the rejection of wrongly matched points in Step 4, and map this center point through the model H of Step 3 into the digital map library to obtain the geographic position of the target in the reference image;
Step 6: using the height and attitude information output by the aircraft navigation equipment, the geographic position of the target point from Step 5, and the geometric relationship between the target and the airframe, solve geometrically for the relative position of airframe and target, thereby providing a high-accuracy position information source for the airframe, and improve navigation accuracy by fusing this position information source.
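As a minimal sketch of the rejection rule in Step 4 (Python with NumPy; the function name, the 3-pixel threshold, and the toy coordinates are assumptions for illustration, not values from the patent):

```python
import numpy as np

def reject_mismatches(H, pts_rt, pts_ref, err_threshold=3.0):
    """Map real-time-image match points through H and drop pairs whose
    reprojected distance to the reference-image match point exceeds the
    threshold (the distance error r of Step 4, in pixels)."""
    n = pts_rt.shape[0]
    homog = np.hstack([pts_rt, np.ones((n, 1))])   # homogeneous coordinates
    mapped = (H @ homog.T).T
    mapped = mapped[:, :2] / mapped[:, 2:3]        # back to 2-D pixel coords
    r = np.linalg.norm(mapped - pts_ref, axis=1)   # distance error r
    keep = r <= err_threshold
    return pts_rt[keep], pts_ref[keep], r

# identity mapping: one deliberately wrong correspondence is rejected
H = np.eye(3)
rt = np.array([[10.0, 10.0], [20.0, 30.0], [50.0, 50.0]])
ref = np.array([[10.5, 10.0], [20.0, 30.5], [90.0, 90.0]])  # last pair wrong
kept_rt, kept_ref, r = reject_mismatches(H, rt, ref)
```

In a real run H would come from the RANSAC fit of Step 3 rather than being the identity.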
Preferably, the method for verifying the image-space transformation model H built in Step 3 and the validity of the image matching algorithm is as follows:
1) transform the real-time image and the reference image by known transformation parameters, and at the same time compute the transformation matrix R between the images from those parameters;
2) when the error between each element of the model H of Step 3 and the corresponding element of the transformation matrix R lies within the set permissible error range, the image-space transformation model H is valid;
3) map every point of the real-time image through the H model into a new image of the reference image's size and compare it with the reference image; if the error lies within the set permissible range, the mapped new image is judged consistent with the reference image, which in turn verifies the correctness of the image matching algorithm.
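A sketch of check 2), assuming NumPy and assuming that the element-wise comparison is done after normalizing the scale ambiguity of a homography (the function name and tolerance are hypothetical):

```python
import numpy as np

def h_matches_known_transform(H, R, tol=1e-2):
    """Compare a recovered model H against the matrix R built from known
    transform parameters.  Homographies are defined only up to scale, so
    both matrices are normalized before the element-wise comparison."""
    Hn = H / H[2, 2]
    Rn = R / R[2, 2]
    return np.all(np.abs(Hn - Rn) <= tol)

theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), -np.sin(theta), 5.0],
              [np.sin(theta),  np.cos(theta), 8.0],
              [0.0, 0.0, 1.0]])
H = 2.5 * R   # a scale-ambiguous estimate of the same transform
print(h_matches_known_transform(H, R))  # True
```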
Preferably, the Gaussian convolution filtering in the scale-invariant feature extraction algorithm of Step 1 is replaced by an integral-image-based mean filtering method to generate the image multi-scale space.
Preferably, the method for design of graphics image space transformation model H is as follows:
p i=(x i, y i, 1) T, p i'=(x i', y i', 1) TBe respectively in target measured drawing and the reference diagram match point and concentrate i corresponding homogeneous coordinates point, have image space transformation model H, feasible:
p i′=Hp i (1)
Wherein:
H = h 11 h 12 h 13 h 21 h 22 h 23 h 31 h 32 h 33 = h 1 T h 2 T h 3 T , h j ( j = 1,2,3 ) = h j 1 h j 2 h j 3 - - - ( 2 )
Homogeneous coordinates substitution (1) is got formula:
x i ′ = h 11 x i + h 12 y i + h 13 h 31 x i + h 32 y i + h 33 = h 1 T p i h 3 T p i → h 1 T p i - x i ′ h 3 T p i = 0 - - - ( 3 )
y i ′ = h 21 x i + h 22 y i + h 23 h 31 x i + h 32 y i + h 33 = h 2 T p i h 3 T p i → h 2 T p i - y i ′ h 3 T p i = 0 - - - ( 4 )
Make h=(h 11, h 12, h 13, h 21, h 22, h 23, h 31, h 32, h 33), that is:
x i y i 1 0 0 0 - x i ′ x i - x i ′ y i - x i ′ 0 0 0 x i y i 1 - y i ′ x i - y i ′ y i - y i ′ h T = 0 - - - ( 5 )
, then exist corresponding point for n:
Lh T=0 (6)
Wherein L is:
L = p 1 T 0 - x 1 ′ p 1 T 0 p 1 T - y 1 ′ p 1 T . . . . . . . . . p n T 0 - x n ′ p n T 0 p n T - y n ′ p n T - - - ( 7 )
Thus, made up image space transformation model H based on the coupling point set; Wherein T is a transposition, (x i, y i) concentrate the two-dimensional coordinate of i pixel, (x for the measured drawing match point i', y i') concentrate the two-dimensional coordinate of i pixel, h for the reference diagram match point 11, h 12, h 13, h 21, h 22, h 23, h 31, h 32, h 33Be the element of H, geometric transformation parameter between token image when there are how much variations in image, is obtained by relevant geometric transformation cascade between image.
Preferably, after the rejection of wrongly matched points described in Step 4, the optimal image-space transformation model H is solved in the least-squares sense by applying singular value decomposition to the matrix L built from the real-time-image and reference-image match-point sets.
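The least-squares solution via SVD can be sketched in Python with NumPy (a minimal illustration of equations (5) and (7); the function name and the synthetic test data are hypothetical, and no RANSAC outlier handling is included):

```python
import numpy as np

def estimate_homography(pts_src, pts_dst):
    """Build the 2n x 9 matrix L row-pair by row-pair, as in equations (5)
    and (7), and take the right singular vector of the smallest singular
    value as the least-squares solution h of L h = 0."""
    rows = []
    for (x, y), (xp, yp) in zip(pts_src, pts_dst):
        rows.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y, -xp])
        rows.append([0, 0, 0, x, y, 1, -yp * x, -yp * y, -yp])
    L = np.asarray(rows)
    _, _, Vt = np.linalg.svd(L)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]            # fix the scale ambiguity

# synthetic check: project five points through a known H and recover it
H_true = np.array([[1.1, 0.05, 4.0],
                   [-0.03, 0.95, -2.0],
                   [1e-4, 2e-4, 1.0]])
src = np.array([[0., 0.], [100., 0.], [0., 80.], [100., 80.], [50., 40.]])
ones = np.ones((len(src), 1))
dst = (H_true @ np.hstack([src, ones]).T).T
dst = dst[:, :2] / dst[:, 2:3]
H_est = estimate_homography(src, dst)
```

With exact correspondences and at least four point pairs the model is recovered up to scale; with noisy pairs the same SVD step yields the least-squares fit.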
The present invention starts from an in-depth analysis of the scale-invariant feature extraction algorithm and of the nearest/second-nearest-neighbor matching algorithm based on Euclidean distance over a BBF (Best-Bin-First) search and, combined with the application scenario and demands of image-matching guidance, thoroughly analyzes the realization mechanism of the scale-invariant feature extraction algorithm. Analysis of the principles and a large number of experiments show that the influence of the image-preprocessing parameters and the SIFT parameters on the matching result varies in a regular way; by coordinating and planning the parameters, the matching result can be controlled to a large extent, and the required number of feature points can be controlled adaptively on demand. To suit certain regions of prominent features, reduce computation, and accelerate matching, mean filtering based on the integral image is chosen to replace Gaussian convolution filtering: only one image-integration pass is needed, after which the convolutions can be computed from the integral image, greatly improving matching speed, while the reduced number of feature points is just suited to the demands of matching guidance over certain specific regions. Addressing the lack of a transformation model between images in the scale-invariant feature extraction algorithm and the situations of high mismatching rate, an in-depth analysis of geometric image transformations shows that, on the basis of homogeneous coordinates, the transformation relation between images can be described by a 3x3 H (homography matrix) model; an H model built with the idea of random sample consensus can cope with a large proportion of outlier feature points. The real-time image is mapped into the reference image through the model, the distance error between each mapped point and the point obtained by matching is computed, and a threshold is set to reject redundant and wrong match points as required. Through the optimized model, the real-time-image information can be mapped into the digital reference map, obtaining the geographic position of the target and simultaneously providing a high-accuracy navigation information source for the carrier. An image pair with known transformation order and parameters is matched; the transformation model is built with the above method while its transformation matrix is computed from the known parameters; comparing the two matrices, and simultaneously subtracting the reference image from the real-time image mapped into it by the H matrix, verifies the correctness of the image matching algorithm from a theoretical standpoint, breaking away from the crude practice of judging the matching result by visual inspection alone. The present invention has important theoretical significance and strong engineering application value.
Description of drawings
Fig. 1 is the overall flow chart of the present invention.
Fig. 2 is the program flow chart of the target initial matching.
Fig. 3 shows typical trends of the influence of relevant parameters on the matching result.
Fig. 4 illustrates box-region mean-filter convolution based on the integral image.
Fig. 5 illustrates the principle of rejecting wrongly matched points.
Embodiment
The technical scheme of the invention is described in detail below with reference to the accompanying drawings:
The present invention achieves accurate, fast target recognition and high-accuracy carrier positioning in image-matching guidance through both theoretical research and extensive experimental analysis. As shown in Fig. 1, to recognize the target accurately and provide precise carrier positioning information, the image-matching-based method must complete the following work:
1 Target initial matching based on scale-invariant features
The present invention performs feature extraction with the high-performing scale-invariant feature extraction algorithm, builds a Kd-Tree over the feature descriptors, quickly finds the nearest and second-nearest neighbor distances between feature points with the BBF fast search method, and determines the initial match points according to the nearest/second-nearest-neighbor distance ratio and a threshold. The flow chart of the target initial matching and recognition algorithm of the present invention is shown in Fig. 2.
The whole flow can be divided into the following four parts:
(1) Scale-space extremum detection. First the multi-scale image space is built on the Gaussian kernel function: a group of successive Gaussian convolution kernels G(x, y, σ) is convolved with the original image I(x, y) to generate a series of scale-space images, and the difference of adjacent scale images yields the DoG (Difference of Gaussian) images D_i(x, y, σ). Each point of the non-outermost layers of each octave of the DoG scale space is compared with its 26 neighbors in the same layer and the adjacent layers to find extrema and extract feature points.
(2) Feature-point orientation assignment. The gradient magnitude and orientation of a feature point are given by formulas (1) and (2); the neighborhood of the feature point is sampled and a gradient histogram is created. The histogram represents one direction per 10 degrees, 36 directions in all; the main peak of the histogram is chosen as the principal orientation of the feature point, and any value reaching 80% of the main peak is taken as an auxiliary orientation, to enhance matching robustness.
$$m(x,y)=\sqrt{(L(x+1,y)-L(x-1,y))^2+(L(x,y+1)-L(x,y-1))^2} \quad (1)$$

$$\theta(x,y)=\tan^{-1}\big((L(x,y+1)-L(x,y-1))/(L(x+1,y)-L(x-1,y))\big) \quad (2)$$
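Formulas (1) and (2) can be evaluated directly from pixel differences; a minimal NumPy sketch on a synthetic ramp image (the function name and test image are hypothetical, and `arctan2` is used so the quadrant is resolved):

```python
import numpy as np

def gradient_mag_ori(L_img, x, y):
    """Gradient magnitude m(x, y) and orientation theta(x, y) at one pixel
    of a scale-space image, per formulas (1) and (2)."""
    dx = L_img[y, x + 1] - L_img[y, x - 1]
    dy = L_img[y + 1, x] - L_img[y - 1, x]
    m = np.hypot(dx, dy)            # sqrt(dx^2 + dy^2)
    theta = np.arctan2(dy, dx)      # arctan of dy/dx with correct quadrant
    return m, theta

# a ramp image: intensity rises by 2 per column and 1 per row,
# so central differences give dx = 4, dy = 2 and m = sqrt(20)
yy, xx = np.mgrid[0:8, 0:8]
L_img = (2.0 * xx + 1.0 * yy).astype(float)
m, theta = gradient_mag_ori(L_img, 4, 4)
```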
(3) Feature-descriptor generation. The 16x16 neighborhood centered on the feature point is taken as the sample window, and formulas (1) and (2) give the gradient magnitude and orientation of each pixel. After Gaussian weighting, a histogram over 8 gradient directions is computed for each 4x4 sub-block (sampled points use directions relative to the feature point to preserve rotation invariance), and the accumulated value of each gradient direction forms one seed point; each feature point thus generates a 128-dimensional feature descriptor, which is normalized to reduce illumination effects. This idea of combining neighborhood directional information strengthens the algorithm's resistance to noise and also provides good fault tolerance for feature matching in the presence of positioning errors.
(4) Feature matching. Let P_a = {p(a)_1, p(a)_2, ..., p(a)_m} and P_b = {p(b)_1, p(b)_2, ..., p(b)_n} be the feature point sets extracted from the real-time image and the reference image, respectively. A Kd-Tree is built over P_b, indexed successively by the dimension of the 128-dimensional descriptors whose data deviate most from their mean; the BBF search algorithm then finds the approximate k (here k = 2) nearest neighbors in the Kd-Tree for each feature point of P_a, which is an order of magnitude more efficient than exhaustive traversal. With Euclidean distance as the descriptor distance function, the matching feature points are then determined according to the distance-ratio criterion.
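The distance-ratio criterion can be sketched as follows (a brute-force NumPy illustration; the patent's Kd-Tree/BBF approximate search is replaced here by exhaustive search, and the 0.8 ratio threshold is an assumed value):

```python
import numpy as np

def ratio_match(desc_a, desc_b, ratio=0.8):
    """Nearest/second-nearest matching with the distance-ratio test:
    accept a match only when the nearest descriptor distance is clearly
    smaller than the second-nearest one."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # Euclidean distances
        order = np.argsort(dists)
        nearest, second = dists[order[0]], dists[order[1]]
        if nearest < ratio * second:                 # distinctive enough
            matches.append((i, int(order[0])))
    return matches

rng = np.random.default_rng(0)
desc_b = rng.normal(size=(20, 128))                  # reference descriptors
desc_a = desc_b[[3, 7]] + 0.01 * rng.normal(size=(2, 128))  # noisy copies
print(ratio_match(desc_a, desc_b))  # [(0, 3), (1, 7)]
```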
2 Improvements to the image matching algorithm
The present invention fully inherits the scale-invariant feature extraction algorithm with its strong matching capability and, given the characteristics of image-matching guidance, takes scene matching over different kinds of regions as the application background. For the problems present in scene matching (too many feature points, redundant match information and low efficiency, or too few feature points so that the matching task cannot be completed; a matching result that is not controllable at all; slow matching in some feature-rich regions, with no accelerated matching procedure), it proposes improvements both at the theoretical level and by means of experimental analysis.
2.1 Adaptive control of the matching effect
In image-matching guidance, the matching result is usually expected to be in a desired state, that is, the matched feature points are dynamically kept within a certain range according to the character of the matching region. With the original scale-invariant feature extraction algorithm, however, the feature points are generally very numerous and the matching result is not controllable at all. On the one hand, extracting and matching a large number of feature points inevitably increases computation and lowers matching efficiency, which cannot satisfy the real-time requirement of navigation guidance; on the other hand, an increased number of feature points necessarily raises the probability of wrongly matched points, directly affecting target-recognition accuracy and navigation accuracy. In addition, the character of certain regions, or the carrier attitude, can reduce the number of match points, so auxiliary means are equally needed to increase them, to satisfy the requirements that image-matching guidance places on image matching.
As can be seen from the target initial recognition flow chart of Fig. 2, going from the real-time image and the reference image through feature extraction to feature matching and initial target recognition involves many steps, chiefly generating the multi-scale images, extracting the local invariant image features, and determining the matching feature points by the nearest/second-nearest-neighbor ratio; the handling and details of each stage directly influence the matching result. In image preprocessing, changing the linear gray-scale transformation factor directly changes the image contrast and, through the rejection of low-contrast feature points, changes the feature-matching outcome. In generating the image multi-scale space, the number of octaves of the scale space and the number of layers per octave directly increase or decrease, in physical terms, the probability of feature generation. In Gaussian convolution, changing the scale factor directly changes the filtering result: as the Gaussian radius changes, feature-point generation is directly controlled. The low-contrast rejection threshold, the nearest/second-nearest-neighbor ratio threshold, and similar thresholds directly determine whether the number of feature points grows or shrinks; the number of gradient-histogram levels and the use of neighborhood information in orientation assignment directly determine the matching result; and so on. By further varying each relevant parameter over a large number of images and verifying the matching results, it is found that each parameter change influences feature extraction and matching in a regular way: as each individual parameter increases, the matching result on the whole first rises and then declines; although fluctuations can occur in between, the general trend is evident, and the influence of a single parameter on the matching result is bounded.
From the above theoretical analysis and further experiments it follows that parameter changes at each stage of the image matching process influence the result in a regular way, and that this influence is bounded and stable. Against the application background of image-matching guidance, and in order to control the matching result effectively, a large number of experiments analyzed the influence on the matching result of each individual parameter: the linear gray-scale preprocessing factor, the noise-preprocessing Gaussian radius, the number of pyramid octaves, the number of layers per octave, the scale factor, the number of gradient-histogram levels, and the various thresholds. The rule of each parameter's influence on feature extraction and matching is described mathematically; Fig. 3 shows, for a 500x300-pixel image, the typical influence trends of two parameters taken separately on the matching result. The influence rules of the other parameters can be described by the same approach, and the trends are the same across different scenes. According to practical application requirements, the influence amplitude of each parameter on the feature-extraction result is studied, and the parameters are divided into a coarse-adjustment level and a fine-adjustment level. Knowing these influence rules, during matching each parameter is continually and dynamically coordinated and planned, according to the character of the matching region, the image quality, and the feedback from the previous matching result, on the principle of coarse adjustment first and fine adjustment afterwards. By repeated adjustment, the optimum of each parameter is searched out automatically so that the matching result attains the required quality, and image matching thus adapts automatically to the demands of scene-matching navigation guidance.
2.2 Matching acceleration based on the integral image
In some feature-rich regions, generating the multi-scale image pyramid by Gaussian convolution filtering requires a large amount of computation, and the large number of feature points generated exceeds what image-matching guidance needs, wasting resources and lowering efficiency. To reduce feature-point generation and speed up image matching so that the process better suits the requirements of image-matching guidance, the present invention judges the current scene-matching region from the previous matching result and accordingly decides to replace Gaussian convolution filtering with integral-image-based mean filtering when generating the multi-scale image pyramid; this reduces feature-point generation to a certain extent and quickly raises matching speed and efficiency for images with small viewpoint change.
Mean filtering can be built on a single traversal that integrates the image; since no repeated computation under a moving Gaussian filter template is needed, the amount of computation drops greatly and the pyramid is formed much faster. The integral image is computed as follows: let (x_0, y_0) be a point in the image, I(x_0, y_0) its gray value, and I'(x_0, y_0) its integral-image value; the integral image is given by formula (3):

$$I'(x_0,y_0)=\sum_{0\le x\le x_0}\;\sum_{0\le y\le y_0} I(x,y) \quad (3)$$

With the integral image established, the mean-filter convolution value of any box region can be computed from it quickly, as shown in Fig. 4. The accumulated gray value V of all pixels of box region ABCD is computed as:

$$V=I'(D)-I'(C)-I'(B)+I'(A) \quad (4)$$
To generate the multi-scale image pyramid, the size of the box region can be varied to realize mean filtering at different scales. Building the multi-scale image pyramid with the integral-image-based box-region mean-filtering method reduces feature-point generation in feature-rich regions, speeds up the image matching process, improves matching efficiency, and optimizes the matching result.
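Formulas (3) and (4) can be sketched as follows (NumPy; the zero padding row/column and the corner naming A = top-left, B = top-right, C = bottom-left, D = bottom-right are implementation assumptions, since Fig. 4 is not reproduced here):

```python
import numpy as np

def integral_image(img):
    """I'(x0, y0) = sum of all pixels with x <= x0 and y <= y0, per formula
    (3), padded with a zero row/column so box sums need no boundary checks."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, top, left, bottom, right):
    """Sum over the inclusive box via four lookups, per formula (4):
    V = I'(D) - I'(C) - I'(B) + I'(A)."""
    return (ii[bottom + 1, right + 1] - ii[bottom + 1, left]
            - ii[top, right + 1] + ii[top, left])

img = np.arange(36, dtype=float).reshape(6, 6)
ii = integral_image(img)
v = box_sum(ii, 1, 2, 3, 4)         # rows 1..3, cols 2..4
assert v == img[1:4, 2:5].sum()     # matches direct summation
```

Dividing the box sum by the box area gives the mean-filter value; changing the box size changes the filtering scale, which is how the multi-scale pyramid is formed here.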
3 Fine matching based on the image transformation model
In the initial matching process above, the abundance, randomness, and complexity of the feature points extracted by the scale-invariant feature algorithm, together with the approximate nearest-neighbor BBF search, make occasional mismatches inevitable; the recognition result may then deviate from the valid target or misidentify it, and in turn provide inaccurate positioning information. At the same time, the image-matching guidance process requires accurate knowledge of the transformation relation between the matched image pair, so that the target position in the real-time image can be mapped accurately into the digital map reference library to obtain the geographic position of the matched target. Exploiting the strong capability of the scale-invariant feature extraction algorithm, the present invention builds an H model on the basis of the homography matrix to express the transformation relation between images; experiments prove that this model has very strong adaptability. With the optimal accurate model built, wrongly matched points are rejected and fine image matching is carried out; the matching accuracy can reach the sub-pixel level.
3.1 Theoretical analysis of the adaptability of the H transformation model
Geometric transformation between image relation is analyzed, can be known that yardstick, rotation, translation, mistake between image transformation relation such as cut and can be obtained by the transformation relation of corresponding point between image.Introduce homogeneous coordinates, bring convenience for the solution of problem with n+1 dimension coordinate research n (n=2) dimension coordinate.Make p i=(x i, y i, 1) T, p i'=(x i', y i', 1) TBe respectively image pairing i coordinate points before and after geometric transformation.
When an image undergoes scale changes $S_x$ and $S_y$ in the x and y directions respectively, the relation between corresponding points (relative to the image coordinate origin, likewise below) can be described as:

$$\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = \begin{pmatrix} S_x & 0 & 0 \\ 0 & S_y & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \qquad (5)$$
When a rotation by $\theta$ exists between the images, the relation between corresponding points can be described as:

$$\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \qquad (6)$$
When translations $d_x$ and $d_y$ exist between the images in the x and y directions, the relation between corresponding points can be described as:

$$\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & d_x \\ 0 & 1 & d_y \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \qquad (7)$$
When shears $C_x$ and $C_y$ exist between the images in the x and y directions, the relation between corresponding points can be described as:

$$\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & C_x & 0 \\ C_y & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \qquad (8)$$
Any other geometric transformation between images can be realized by cascading the above transformations in a certain order; that is, there always exists a 3×3 matrix describing the transformation relation between the images:

$$\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = \begin{pmatrix} A & B & C \\ D & E & F \\ G & H & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \qquad (9)$$
The above analysis shows that the various geometric transformations between images can all be described by a 3×3 matrix, and a composite image change can be regarded as a cascade of a series of elementary geometric transformations applied in a given order.
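To illustrate the cascade principle, the elementary matrices of equations (5)–(8) can be composed numerically. The following numpy sketch is illustrative only (the function names are not from the patent):

```python
import numpy as np

def scale(sx, sy):       # scale change, eq. (5)
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1.0]])

def rotate(theta):       # rotation, eq. (6)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])

def translate(dx, dy):   # translation, eq. (7)
    return np.array([[1, 0, dx], [0, 1, dy], [0, 0, 1.0]])

def shear(cx, cy):       # shear, eq. (8)
    return np.array([[1, cx, 0], [cy, 1, 0], [0, 0, 1.0]])

# Cascading elementary transformations always yields a single 3x3
# matrix of the form of eq. (9); the multiplication order matters.
M = rotate(np.pi / 6) @ scale(2.0, 2.0) @ translate(5.0, -3.0)
p_prime = M @ np.array([10.0, 20.0, 1.0])   # one homogeneous point
```

Because matrix multiplication is non-commutative, `translate(...) @ rotate(...)` generally differs from `rotate(...) @ translate(...)`, which is exactly the order-dependence discussed again in Section 4.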
3.2 Construction of the H transformation model
Since any variation between images can be realized as a cascade of independent elementary geometric transformations, and each elementary transformation can be described by a 3×3 matrix under homogeneous coordinates, a 3×3 matrix is introduced to construct the transformation model between the images; it describes the variation between images with good accuracy. The transformation model between images can be constructed from the matched point set.
Suppose $p_i = (x_i, y_i, 1)^T$ and $p_i' = (x_i', y_i', 1)^T$ are the i-th pair of corresponding homogeneous coordinate points in the matched point sets of the target measured image and the reference image. If there exists an H such that:

$$p_i' = H p_i \qquad (10)$$
where:

$$H = \begin{pmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{pmatrix} = \begin{pmatrix} h_1^T \\ h_2^T \\ h_3^T \end{pmatrix}, \qquad h_j = (h_{j1}, h_{j2}, h_{j3})^T,\ j = 1, 2, 3 \qquad (11)$$
then H is the 3×3 homography matrix describing the correspondence between the two views. It encompasses transformation factors such as rotation, scaling, translation, shear and perspective projection of the target image. Substituting the homogeneous coordinates into (10) gives:
$$x_i' = \frac{h_{11}x_i + h_{12}y_i + h_{13}}{h_{31}x_i + h_{32}y_i + h_{33}} = \frac{h_1^T p_i}{h_3^T p_i} \;\Rightarrow\; h_1^T p_i - x_i' h_3^T p_i = 0 \qquad (12)$$

$$y_i' = \frac{h_{21}x_i + h_{22}y_i + h_{23}}{h_{31}x_i + h_{32}y_i + h_{33}} = \frac{h_2^T p_i}{h_3^T p_i} \;\Rightarrow\; h_2^T p_i - y_i' h_3^T p_i = 0 \qquad (13)$$
Let $h = (h_{11}, h_{12}, h_{13}, h_{21}, h_{22}, h_{23}, h_{31}, h_{32}, h_{33})$; then:

$$\begin{pmatrix} x_i & y_i & 1 & 0 & 0 & 0 & -x_i' x_i & -x_i' y_i & -x_i' \\ 0 & 0 & 0 & x_i & y_i & 1 & -y_i' x_i & -y_i' y_i & -y_i' \end{pmatrix} h^T = 0 \qquad (14)$$
For n pairs of corresponding points there then holds:

$$L h^T = 0 \qquad (15)$$
where L is:

$$L = \begin{pmatrix} p_1^T & 0 & -x_1' p_1^T \\ 0 & p_1^T & -y_1' p_1^T \\ \vdots & \vdots & \vdots \\ p_n^T & 0 & -x_n' p_n^T \\ 0 & p_n^T & -y_n' p_n^T \end{pmatrix} \qquad (16)$$
Thus the transformation model H between the images has been constructed from the matched point set. This model accurately describes the coordinate transformation relation of corresponding points between the images, i.e. it establishes the link between the matched image pair.
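The construction of L and the extraction of h can be sketched as follows; the null-space solution via SVD anticipates Section 3.3, and the function names are illustrative, not from the patent:

```python
import numpy as np

def build_L(pts, pts_p):
    """Stack the two rows of eq. (14) for every correspondence,
    giving the 2n x 9 matrix L of eq. (16)."""
    rows = []
    for (x, y), (xp, yp) in zip(pts, pts_p):
        rows.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y, -xp])
        rows.append([0, 0, 0, x, y, 1, -yp * x, -yp * y, -yp])
    return np.array(rows, float)

def homography_from_matches(pts, pts_p):
    """Solve L h^T = 0 (eq. (15)): h is the right singular vector of L
    for the smallest singular value, i.e. the eigenvector of L^T L with
    minimal eigenvalue; the result is normalized by h33."""
    _, _, Vt = np.linalg.svd(build_L(pts, pts_p))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

With 4 exact correspondences the system has an exact nontrivial solution; with n > 4 noisy pairs the SVD yields the minimizer of $\|Lh^T\|$ under $\|h\| = 1$.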
3.3 Solving the H model based on the RANSAC idea
When the transformation model between the matched image pair is constructed, random noise and other nonlinear transformation errors inevitably arise in the image transformation process, so not every point satisfies the homography model exactly. How to obtain the optimal solution of the equations from the matched point set is therefore significant for accurately rejecting mismatched points and accurately mapping the target position information.
Examining the vector h: the use of homogeneous coordinates allows H to be normalized by $h_{33}$ (i.e. all elements divided by $h_{33}$), so H has only 8 independent unknowns, and solving the homography matrix H therefore requires at least 4 pairs of corresponding matched points. According to numerical algebra, performing a singular value decomposition of L yields h as the eigenvector of $L^T L$ corresponding to its smallest eigenvalue. The least-squares idea is adopted in the solution to guarantee that the residual $\|Lh^T\|$ of the feature point set after the model mapping is minimal. Since the present invention, following the RANSAC idea, randomly draws n (n ≥ 4) points from the matched point set to solve the H model, ill-conditioned (inconsistent) or overdetermined linear equations may arise, and the optimal solution in the least-squares sense must be sought to guarantee a transformation model of minimal error. Tests with several programming approaches show that the cvSolve(A, B, h', CV_SVD) function of the open-source OpenCV library, which uses singular value decomposition, can accurately solve ill-conditioned linear equations and rapidly solve overdetermined linear equations; using this function in the computation to construct the H model of each random sample of point pairs gives strong robustness.
Owing to the use of homogeneous coordinates and the normalization of h, one may write:

$$A h' = B \qquad (17)$$
where A, B and h' are as in (18), (19), (20):

$$A = \begin{pmatrix} x_1 & y_1 & 1 & 0 & 0 & 0 & -x_1' x_1 & -x_1' y_1 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ x_n & y_n & 1 & 0 & 0 & 0 & -x_n' x_n & -x_n' y_n \\ 0 & 0 & 0 & x_1 & y_1 & 1 & -y_1' x_1 & -y_1' y_1 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & x_n & y_n & 1 & -y_n' x_n & -y_n' y_n \end{pmatrix} \qquad (18)$$

$$B = (x_1', \ldots, x_n', y_1', \ldots, y_n')^T \qquad (19)$$

$$h' = (h_{11}, h_{12}, h_{13}, h_{21}, h_{22}, h_{23}, h_{31}, h_{32})^T \qquad (20)$$
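The inhomogeneous system (17)–(20) can be solved in the least-squares sense by any SVD-backed solver; the patent uses OpenCV's cvSolve(A, B, h', CV_SVD), and the numpy sketch below (illustrative; its rows are interleaved per point rather than block-ordered as in (18), which yields the same solution) does the equivalent:

```python
import numpy as np

def solve_h_prime(pts, pts_p):
    """Build A and B of eqs. (18)-(19) and solve A h' = B (eq. (17))
    for the 8 unknowns h11..h32, with h33 normalized to 1."""
    A, B = [], []
    for (x, y), (xp, yp) in zip(pts, pts_p):
        # two rows per correspondence (order differs from eq. (18),
        # but the least-squares solution is identical)
        A.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y])
        A.append([0, 0, 0, x, y, 1, -yp * x, -yp * y])
        B += [xp, yp]
    h, *_ = np.linalg.lstsq(np.array(A, float), np.array(B, float),
                            rcond=None)
    return np.append(h, 1.0).reshape(3, 3)   # H with h33 = 1
```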
The number of matched points between the target measured image and the reference image is generally far greater than 4, and how to efficiently find the optimal, accurate transformation parameter model from this large point set is another key to improving algorithm performance and matching precision. Considering matching efficiency, and in contrast to exhaustive traversal whose computational load is excessive, the present invention builds on the basic RANSAC idea, determines the relevant thresholds by probabilistic-statistical methods, and solves for the optimal transformation model H; this method copes well with a large proportion of outliers. A subset (n = 4) is randomly drawn from the matched point set to initialize the model; the matched point set of the measured image is fed through this initial model's spatial transformation, and its valid point set (the inliers, or consensus set) is determined according to the distance-error threshold r. If the number of inliers exceeds the set threshold, the valid point set is substituted back into (17) to re-solve the model parameters and obtain the optimal transformation model; otherwise sampling is repeated. After several iterations the optimal model is obtained.
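The procedure above can be sketched as a minimal RANSAC loop. The thresholds and iteration count below are illustrative choices, not the patent's probabilistically determined values, and the helper `_dlt` stands for any solver of (15) or (17):

```python
import numpy as np

def _dlt(pts, pts_p):
    """Direct linear transform: solve L h^T = 0 (eq. (15)) via SVD
    and normalize the resulting 3x3 model by h33."""
    rows = []
    for (x, y), (xp, yp) in zip(pts, pts_p):
        rows.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y, -xp])
        rows.append([0, 0, 0, x, y, 1, -yp * x, -yp * y, -yp])
    _, _, Vt = np.linalg.svd(np.array(rows, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def ransac_homography(pts, pts_p, r_thresh=3.0, iters=500, seed=0):
    """Draw minimal 4-point subsets, keep the largest consensus set
    (inliers under the distance-error threshold r), then refit the
    model on that set -- the re-substitution into eq. (17)."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(pts, float)
    pts_p = np.asarray(pts_p, float)
    hom = np.hstack([pts, np.ones((len(pts), 1))])
    best = np.zeros(len(pts), bool)
    for _ in range(iters):
        idx = rng.choice(len(pts), 4, replace=False)
        H = _dlt(pts[idx], pts_p[idx])
        proj = (H @ hom.T).T
        with np.errstate(divide="ignore", invalid="ignore"):
            proj = proj[:, :2] / proj[:, 2:3]
            r = np.linalg.norm(proj - pts_p, axis=1)   # eq. (21)
            inliers = r < r_thresh
        if inliers.sum() > best.sum():
            best = inliers
    return _dlt(pts[best], pts_p[best]), best
```

With a fixed seed the sampling is reproducible; in practice the iteration count and r_thresh would be set from the probabilistic analysis the section describes.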
3.4 Rejection of mismatches
From the initial matching results and the foregoing analysis of the scale-invariant feature extraction algorithm, it is clear that the massive number of SIFT feature points, the threshold settings involved, and the BBF-based fast approximate search make the existence of mismatched points unavoidable. Under an aerial viewing field, a matching error of even one or a few pixels may cause the target recognition to err by tens or even hundreds of meters. Rejecting mismatches and locating pixels accurately is therefore of great importance for accurate target identification and for providing a high-precision carrier navigation information source.
A good solution builds on the optimal transformation model H constructed above; Fig. 5 is a schematic of the idea of rejecting mismatched points. $P_a'$ and $P_b'$ are the matched point sets of the measured image and the reference image. A feature point in $P_a'$ is transformed by the spatial transformation model and compared with the corresponding point in $P_b'$: point A in the target measured image corresponds to A' in the reference image after automatic matching, and A'' is the point corresponding to A after the image transformation model H. The error distance r is then computed, where r is given by (21):
$$r = \sqrt{(x' - x'')^2 + (y' - y'')^2} \qquad (21)$$
The matched points of the measured image are mapped into the reference image through the transformation model; the transformed distance error r of each feature point is computed as in Fig. 5 and compared with a threshold determined by probability statistics. If r exceeds the set value, the point is a mismatch and is rejected; otherwise it is a correct matching result. After the mismatched points are rejected, the accurate matching result between the measured image and the reference image is obtained.
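The rejection of Fig. 5 reduces to mapping each matched point A through H and thresholding the error distance r of equation (21); a minimal sketch follows (threshold value and function names are illustrative):

```python
import numpy as np

def reject_mismatches(H, pts_a, pts_b, r_thresh=1.0):
    """Map measured-image points A through H to A'' and keep only the
    pairs whose distance r to the matched reference point A' is below
    the threshold (eq. (21))."""
    pts_a = np.asarray(pts_a, float)
    pts_b = np.asarray(pts_b, float)
    hom = np.hstack([pts_a, np.ones((len(pts_a), 1))])
    mapped = (H @ hom.T).T                      # A'' = H A
    mapped = mapped[:, :2] / mapped[:, 2:3]
    r = np.linalg.norm(mapped - pts_b, axis=1)  # error distance r
    keep = r < r_thresh
    return pts_a[keep], pts_b[keep], r
```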
After the mismatched points are rejected and the pixels accurately located, a sub-pixel-accurate matching result for the target image is obtained. The rejection effect can also be evaluated against the requirements, deciding whether to substitute the point set remaining after rejection back into (17) to obtain a still more accurate model satisfying the application requirements. Based on this further refined H model, the coordinates of the target image center point are mapped into the map database to obtain more accurate target geographic position information. The geographic position of the identified target, together with the attitude and other navigation parameters provided by the carrier's navigation equipment, can after information fusion serve as a high-precision navigation information source for the carrier.
4 Image matching performance verification based on the transformation model
Through the above steps, accurate target identification and a high-precision navigation information source can be achieved under scene-matching guidance. But what of the correctness and precision of the image matching result? Traditional means of verifying image matching results mainly assess the result by visual inspection, roughly judging whether the matching result is valid and whether the target is correctly identified; matching results are rarely judged from a theoretical standpoint, verifying image matching performance by theoretical analysis.
Based on the constructed transformation model between the matched image pair, the present invention proposes a means of verifying the image matching result theoretically. The idea of its realization is as follows:
From the analysis of image geometric transformations in Section 3.1, geometric changes between images can be realized by a single matrix or a cascade of matrices. From this viewpoint, the matched image pair is transformed with known transformation parameters, while these parameters are simultaneously substituted into the matrices to compute the transformation matrix R between the images. The transformed image pair is then put through the matching algorithm to obtain matched points, from which the H model between the pair is constructed; the elements of this model are compared with the corresponding elements of the transformation matrix R. If the matching algorithm performs well, the H model should be consistent with the transformation matrix; otherwise the image matching result is judged not to meet the requirements, or to be a mismatch. To further obtain an intuitive comparison, all points of the measured image can be mapped into the reference image through the transformation model, and subtracting the transformed measured image from the reference image gives an intuitive comparison of the matching result. Recording the matched point set coordinates, the transformation model, and the computed distance errors allows the precision of the image matching result to be analyzed. The procedure comprises the following steps:
With the described image matching algorithm and image spatial transformation model, combining the model with known transformation parameters, the performance of the matching algorithm is verified as follows:
a) For images with a known single geometric transformation parameter, or a matched image pair with several geometric transformation parameters applied in a known order, carry out image matching with the matching algorithm under test;
b) Construct the H transformation model between the matched image pair from the matched point set;
c) Compute the transformation matrix between the images from the known geometric transformation parameters and their order;
d) Compare the matrices of b) and c) and analyze the matching effect;
e) Map the measured image into the reference image through the H model, compare the two images to analyze the matching result intuitively, and analyze the matching precision from the computed distance errors.
The implementation is as follows:
1) When only a single transformation exists between the images — taking rotation as the example; other transformations follow the same idea, as set out in Section 3.1 — rotate the measured image by $\theta$ about the point $(x_0, y_0)$ of the reference image; its transformation matrix R is then:

$$\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta & -x_0\cos\theta + y_0\sin\theta + x_0 \\ \sin\theta & \cos\theta & -x_0\sin\theta - y_0\cos\theta + y_0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \qquad (22)$$
where $(x_i, y_i)$ is the two-dimensional coordinate of the i-th pixel in the measured image, $(x_i', y_i')$ is the two-dimensional coordinate of the i-th pixel in the reference image, and $\theta$ is the rotation angle.
The image pair with these known transformation parameters is matched with the above image matching algorithm, the H model is constructed from the matched point set, and the matching result is verified by comparing this model with each element of the transformation matrix in (22). Analyzing the point coordinates before and after the transformation and the error distance r yields the precision of the image matching.
2) When multiple geometric changes exist between the images, note that, because of the non-commutativity of matrix multiplication, the image transformation must be applied in a fixed, predetermined order, and the cascade transformation matrix must be computed according to the individual parameters and their order; otherwise, even with identical individual parameters, a different order of application yields a completely different transformation matrix. This matrix is compared with the H model constructed through matching to verify the image matching result. For example, when the matched image pair is successively translated, scaled and rotated relative to the image coordinate origin, its transformation matrix R is:

$$\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = \begin{pmatrix} S_x\cos\theta & -S_y\sin\theta & d_x S_x\cos\theta - d_y S_y\sin\theta \\ S_x\sin\theta & S_y\cos\theta & d_x S_x\sin\theta + d_y S_y\cos\theta \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \qquad (23)$$
where $S_x$, $S_y$ are the scale changes of the image in the x and y directions respectively; $d_x$, $d_y$ are the translations of the image in the x and y directions respectively; $\theta$ is the rotation angle.
Since errors inevitably arise in the cascaded changes and in the experimental process, the H model and the transformation matrix R may not agree exactly. The error is analyzed and checked against the permitted range (for example, the pixel error of the translation), and the validity of the matching is judged accordingly.
By this method, the capability of various image matching algorithms can be verified, providing a theoretical basis for verifying the performance (correctness and precision) of image matching.
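Steps a)–e) can be sketched as follows. The matched point set would come from the matcher under test, so only the known matrix R of equation (22) and the element-wise comparison of step d) are shown; the function names and the tolerance are illustrative:

```python
import numpy as np

def rotation_about(theta, x0, y0):
    """Known transformation matrix R of eq. (22): rotation by theta
    about the reference-image point (x0, y0)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, -x0 * c + y0 * s + x0],
                     [s,  c, -x0 * s - y0 * c + y0],
                     [0.0, 0.0, 1.0]])

def models_consistent(H, R, tol=0.5):
    """Step d): compare the H model built from the matched point set
    with the known matrix R element by element, after normalizing
    both by their (3,3) element."""
    return bool(np.max(np.abs(H / H[2, 2] - R / R[2, 2])) < tol)
```

If the matcher is sound, the H model recovered from the matched points of the rotated pair agrees with R within the permitted error; a large discrepancy flags an invalid matching result.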

Claims (5)

1. A method for automatically recognizing a target at medium and low altitudes and positioning a carrier with high accuracy, comprising the following steps:
First step: extracting features of the measured image and of the reference image based on a scale-invariant feature extraction algorithm;
It is characterized in that:
Second step: performing initial image matching according to the features of the measured image and the reference image of the first step using the nearest/second-nearest neighbor rule to obtain an initial matching result; adaptively coordinating and planning, according to the initial matching result, the coarse and fine two-stage feature extraction parameters that control the extraction of feature points of the measured image and the reference image by the scale-invariant feature extraction algorithm of the first step;
Third step: constructing, based on the RANSAC method and according to the matching result of the second step, an image spatial transformation model H adapted to the strong capability of the scale-invariant feature extraction algorithm;
Fourth step: transforming the matched point coordinates of the measured image according to the image spatial transformation model H of the third step to obtain the transformed matched point coordinates in the reference image, and computing the distance error r between the transformed coordinates and the corresponding matched point coordinates obtained by the initial matching of the second step; when the distance error r is greater than the set error threshold, rejecting the matched point of the measured image, thereby recognizing the target;
Fifth step: computing the target center point by weighted combination of the measured-image point set remaining after the mismatched points are rejected in the fourth step, and mapping the center point through the image spatial transformation model H of the third step into the digital map library to obtain the geographic position of the target in the reference image;
Sixth step: using the altitude and attitude information output by the aircraft navigation equipment, together with the target geographic position of the fifth step and the geometric position relation with the aircraft body, computing the relative position of the body and the target by geometric resolution, thereby providing a high-precision position information source for the body, the position information source improving navigation accuracy through information fusion.
2. The method for automatically recognizing a target at medium and low altitudes and positioning a carrier with high accuracy according to claim 1, characterized in that the method of verifying the image spatial transformation model H constructed in the third step and the validity of the image matching algorithm is as follows:
1) transforming the measured image and the reference image with known transformation parameters, while computing the transformation matrix R between the images;
2) when the error between the elements of the image spatial transformation model H of the third step and the corresponding elements of the transformation matrix R lies within the set permitted error range, judging the image spatial transformation model H to be valid;
3) mapping all points of the measured image through the H model into a new image of the reference image's size and comparing it with the reference image; if the error lies within the set permitted range, judging the mapped new image consistent with the reference image, thereby verifying the correctness of the image matching algorithm.
3. The method for automatically recognizing a target at medium and low altitudes and positioning a carrier with high accuracy according to claim 1, characterized in that the Gaussian convolution filtering in the scale-invariant feature extraction algorithm of the first step is replaced by an integral-image-based mean filtering method to generate the image multi-scale space.
4. The method for automatically recognizing a target at medium and low altitudes and positioning a carrier with high accuracy according to claim 1, characterized in that the image spatial transformation model H is constructed as follows:
Let $p_i = (x_i, y_i, 1)^T$ and $p_i' = (x_i', y_i', 1)^T$ be respectively the i-th pair of corresponding homogeneous coordinate points in the matched point sets of the target measured image and the reference image; there exists an image spatial transformation model H such that:

$$p_i' = H p_i \qquad (1)$$
where:

$$H = \begin{pmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{pmatrix} = \begin{pmatrix} h_1^T \\ h_2^T \\ h_3^T \end{pmatrix}, \qquad h_j = (h_{j1}, h_{j2}, h_{j3})^T,\ j = 1, 2, 3 \qquad (2)$$
Substituting the homogeneous coordinates into (1) gives:
$$x_i' = \frac{h_{11}x_i + h_{12}y_i + h_{13}}{h_{31}x_i + h_{32}y_i + h_{33}} = \frac{h_1^T p_i}{h_3^T p_i} \;\Rightarrow\; h_1^T p_i - x_i' h_3^T p_i = 0 \qquad (3)$$

$$y_i' = \frac{h_{21}x_i + h_{22}y_i + h_{23}}{h_{31}x_i + h_{32}y_i + h_{33}} = \frac{h_2^T p_i}{h_3^T p_i} \;\Rightarrow\; h_2^T p_i - y_i' h_3^T p_i = 0 \qquad (4)$$
Let $h = (h_{11}, h_{12}, h_{13}, h_{21}, h_{22}, h_{23}, h_{31}, h_{32}, h_{33})$; then:

$$\begin{pmatrix} x_i & y_i & 1 & 0 & 0 & 0 & -x_i' x_i & -x_i' y_i & -x_i' \\ 0 & 0 & 0 & x_i & y_i & 1 & -y_i' x_i & -y_i' y_i & -y_i' \end{pmatrix} h^T = 0 \qquad (5)$$
For n pairs of corresponding points there then holds:

$$L h^T = 0 \qquad (6)$$
where L is:

$$L = \begin{pmatrix} p_1^T & 0 & -x_1' p_1^T \\ 0 & p_1^T & -y_1' p_1^T \\ \vdots & \vdots & \vdots \\ p_n^T & 0 & -x_n' p_n^T \\ 0 & p_n^T & -y_n' p_n^T \end{pmatrix} \qquad (7)$$
Thus the image spatial transformation model H is constructed from the matched point set, where T denotes transposition, $(x_i, y_i)$ is the two-dimensional coordinate of the i-th point in the matched point set of the measured image, $(x_i', y_i')$ is the two-dimensional coordinate of the i-th point in the matched point set of the reference image, and $h_{11}, \ldots, h_{33}$ are the elements of H, characterizing the geometric transformation parameters between the images; when geometric changes exist between the images, H is obtained by cascading the relevant geometric transformations.
5. The method for automatically recognizing a target at medium and low altitudes and positioning a carrier with high accuracy according to claim 4, characterized in that, based on the least-squares idea and using singular value decomposition, after the mismatched points of the fourth step are rejected, the optimal image spatial transformation model H is solved from the matrix L constructed from the matched point sets of the measured image and the reference image.
CN 201010164359 2010-05-06 2010-05-06 Method for automatically recognizing target at medium and low altitudes and positioning carrier with high accuracy Pending CN101839722A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201010164359 CN101839722A (en) 2010-05-06 2010-05-06 Method for automatically recognizing target at medium and low altitudes and positioning carrier with high accuracy

Publications (1)

Publication Number Publication Date
CN101839722A true CN101839722A (en) 2010-09-22

Family

ID=42743243


Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102169581A (en) * 2011-04-18 2011-08-31 北京航空航天大学 Feature vector-based fast and high-precision robustness matching method
CN102322864A (en) * 2011-07-29 2012-01-18 北京航空航天大学 Airborne optic robust scene matching navigation and positioning method
CN102853835A (en) * 2012-08-15 2013-01-02 西北工业大学 Scale invariant feature transform-based unmanned aerial vehicle scene matching positioning method
CN104573681A (en) * 2015-02-11 2015-04-29 成都果豆数字娱乐有限公司 Face recognition method
CN106599129A (en) * 2016-12-02 2017-04-26 山东科技大学 Multi-beam point cloud data denoising method considering terrain characteristics
CN106774386A (en) * 2016-12-06 2017-05-31 杭州灵目科技有限公司 Unmanned plane vision guided navigation landing system based on multiple dimensioned marker
CN106780312A (en) * 2016-12-28 2017-05-31 南京师范大学 Image space and geographic scenes automatic mapping method based on SIFT matchings
CN107003671A (en) * 2014-09-17 2017-08-01 法雷奥开关和传感器有限责任公司 Positioning and mapping method and system
CN107066459A (en) * 2016-08-30 2017-08-18 广东百华科技股份有限公司 A kind of efficient image search method
CN108387897A (en) * 2018-02-12 2018-08-10 西安电子科技大学 Based on the body localization method for improving Gauss-Newton-Genetic Hybrid Algorithm
CN108761475A (en) * 2017-04-20 2018-11-06 通用汽车环球科技运作有限责任公司 Calibration verification for autonomous vehicle operation
CN108846390A (en) * 2013-09-16 2018-11-20 眼验股份有限公司 Feature extraction and matching and template renewal for biological identification
CN109612447A (en) * 2018-12-29 2019-04-12 湖南璇玑信息科技有限公司 Construction method, enhancing localization method and the enhancing location-server of the enhancing positioning transformation model of Remote sensing photomap data
CN109649654A (en) * 2018-12-28 2019-04-19 东南大学 A kind of low altitude coverage localization method
CN110134816A (en) * 2019-05-20 2019-08-16 清华大学深圳研究生院 A kind of the single picture geographic positioning and system smooth based on ballot
US11107297B2 (en) * 2018-12-12 2021-08-31 Simmonds Precision Products, Inc. Merging discrete time signals

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101009021A (en) * 2007-01-25 2007-08-01 复旦大学 Video stabilizing method based on matching and tracking of characteristic
CN101442619A (en) * 2008-12-25 2009-05-27 武汉大学 Method for splicing non-control point image
US20090141984A1 (en) * 2007-11-01 2009-06-04 Akira Nakamura Information Processing Apparatus, Information Processing Method, Image Identifying Apparatus, Image Identifying Method, and Program


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
David G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints", International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004. Relevant to claims 1-5. *
Deng Yi et al., "Application of affine-invariant feature extraction algorithms in remote sensing image registration", Journal of Image and Graphics, vol. 14, no. 4, pp. 615-621, April 2009. Relevant to claims 1-5. *
Chen Tao et al., "A new method for image scale-invariant feature extraction", Signal Processing, vol. 23, no. 4, pp. 506-511, August 2007. Relevant to claims 1-5. *



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20100922