CN104318566A - Novel multi-image plumb line track matching method capable of returning multiple elevation values - Google Patents


Info

Publication number
CN104318566A
Authority
CN
China
Prior art keywords
image
matched
object space
point
space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410578456.6A
Other languages
Chinese (zh)
Other versions
CN104318566B (en)
Inventor
张卡 (Zhang Ka)
盛业华 (Sheng Yehua)
闾国年 (Lü Guonian)
刘学军 (Liu Xuejun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Panzhi Geographic Information Industry Research Institute Co., Ltd.
Original Assignee
Nanjing Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Normal University filed Critical Nanjing Normal University
Priority to CN201410578456.6A priority Critical patent/CN104318566B/en
Publication of CN104318566A publication Critical patent/CN104318566A/en
Application granted granted Critical
Publication of CN104318566B publication Critical patent/CN104318566B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a novel multi-image plumb line trajectory matching method capable of returning multiple elevation values. The method comprises the following steps: determining, using the imaging model of the images, the m points to be matched on the reference image that correspond to a ground object-space element (X0, Y0); performing object-space-information-constrained multi-image matching on the m points to be matched; computing, by multi-image bundle adjustment, the object-space three-dimensional coordinates (Xi, Yi, Zi) corresponding to each point to be matched and the absolute differences (ΔXi, ΔYi) between each (Xi, Yi) and the input element (X0, Y0); and deciding which elevation values to return according to whether the absolute differences are below a set threshold. The method fuses multi-image matching with an accuracy check of the matching results, effectively eliminating mismatches, and overcomes the limitation of traditional ground-element image matching methods, which can return only a single elevation value at each planimetric position.

Description

Novel multi-image plumb line trajectory matching method capable of returning multiple elevation values
Technical field
The invention belongs to the fields of digital photogrammetry, geographic information systems and computer vision, and relates to object-space-based determination of the search range, matching constraint strategies in multi-image matching, and the elimination of mismatched results.
Background technology
Digital photogrammetry is an important means of extracting and updating the geometric, radiometric and semantic information of three-dimensional targets from two-dimensional digital images. It plays an important role in national economic and social development and is widely used in basic surveying and mapping, land-resource survey and exploitation, digital city modeling and other fields. Image matching, a key technology of digital photogrammetry, runs through the whole photogrammetric data-processing workflow and has always been a research hotspot in photogrammetry, remote sensing and computer vision. In recent years, thanks to the wide use of digital sensors, large-overlap multi-view digital images have become easy to acquire, and a single object-space point may be imaged on fifteen or even more images simultaneously. The ever-growing volume of remote sensing image data and the continuously increasing demand for three-dimensional spatial information make full automation, high efficiency and high reliability of digital photogrammetry increasingly urgent. How to match corresponding image points across multiple images reliably and accurately has therefore become a pressing problem in contemporary digital photogrammetry; the degree to which the image matching problem is solved ultimately determines the level of automation of digital photogrammetry.
The key issue of image matching is to automatically establish the correspondence between image points on different images. By the number of images participating in the matching, image matching algorithms can be divided into stereo (two-image) matching and multi-image matching; by matching primitive, they can be divided into matching based on image-space primitives and matching based on object-space primitives. Matching based on image-space primitives can in turn be divided into three major classes: gray-level-based matching, feature-based matching, and matching based on image understanding and interpretation. Gray-level-based and feature-based matching are the widely used methods in the current photogrammetric field, and they achieve high reliability in open areas where the terrain is smooth and the texture is rich. Matching based on image understanding and interpretation describes image objects semantically in order to match them; however, given the current state of research, it still faces many unsolved practical problems and is seldom used in photogrammetry.
The purpose of image matching, however, is to extract the geometric information of objects and determine their spatial positions. An image matching method based on image-space primitives must therefore, after obtaining the parallax between the left and right images, still use space intersection to compute the three-dimensional object coordinates (X, Y, Z) of the corresponding object point before a digital surface model can be built; moreover, interpolation may be used when building the digital surface model, which more or less reduces the accuracy of the object-space information. For this reason, image matching methods based on object-space primitives, which determine the three-dimensional coordinates of object surface points directly, have attracted research attention; these methods are also called "ground-element image matching". In object-space-based matching, the planimetric coordinates (X, Y) of the ground point to be processed are known, and only its elevation Z needs to be determined. Existing object-space-based image matching methods mainly adopt the following strategy when determining the corresponding image points: according to the minimum and maximum ground elevations, with the object-space elevation Z as the search variable, starting from the minimum elevation and incrementing by ΔZ at each step, the elevation of the i-th object-space search point is Z_i = Z_min + i × ΔZ, i = 1, 2, 3, …, n, where n is the number of search steps; this yields the three-dimensional coordinates (X, Y, Z_i) of the object-space search points. Each object-space search point is then projected onto every search image to obtain image-space search points, the similarity between the point to be matched and each search point is computed, and the Z_i corresponding to the maximum similarity is selected as the elevation of the ground point.
However, the magnitude of the step ΔZ in the search procedure of existing ground-element image matching methods is difficult to determine accurately, and there is no guarantee that any Z_i coincides with the true ground point: if ΔZ is too large, correct candidate points may be missed; if ΔZ is too small, computation and search time become very expensive. In addition, existing methods can return only one elevation value at each ground position; for vertical objects that have multiple elevation values at the same planimetric position (for example building facades or utility poles), they obviously cannot produce a result that reasonably reflects the actual elevation distribution. Finally, existing methods lack an adequate check of whether the matching result is correct; the image-space matching similarity alone cannot guarantee the accuracy of the result.
Summary of the invention
The object of the invention is to address the deficiencies of existing ground-element image matching methods, which can return only one elevation value and lack validity verification of the matching result, by proposing a novel multi-image plumb line trajectory matching method that returns multiple elevation values.
The novel multi-image plumb line trajectory matching method capable of returning multiple elevation values comprises the following steps:
Step 1: using the imaging model of the images, determine, from the exterior orientation parameters of the multi-view images, the planimetric coordinates (X_0, Y_0) of the input ground object-space element, and the maximum and minimum object-space elevations Z_max and Z_min of the area to be matched, the image row and column numbers of the m points to be matched that correspond to the object-space element on the reference image;
Step 2: for the m points to be matched on the reference image, perform object-space-information-constrained multi-image matching to obtain the row and column numbers of the corresponding image points of each point to be matched on the other (search) images;
Step 3: from the corresponding-point results on the multi-view images, compute by multi-image bundle adjustment the object-space three-dimensional coordinates (X_i, Y_i, Z_i) corresponding to the m points to be matched, i = 1, 2, …, m; then compute the absolute differences ΔX_i, ΔY_i between the coordinates (X_i, Y_i) of each point to be matched and the coordinates (X_0, Y_0) of the input object-space element, and decide which elevation values of the object-space element to return according to whether ΔX_i and ΔY_i are below a set threshold.
The detailed process of step 1 is:
(1) input n multi-view images (aerial, satellite or close-range images) with known exterior orientation elements, the planimetric coordinates (X_0, Y_0) of the ground object-space element, and the maximum and minimum object-space elevations Z_max and Z_min of the area to be matched;
(2) using the imaging model of the images, compute, from the exterior orientation elements of the reference image, the planimetric coordinates of the ground object-space element, and the input minimum and maximum object-space elevations, the image row and column numbers of the m candidate points to be matched of the object-space element on the reference image.
The detailed process of step 2 is:
(1) for each point to be matched q_i (i = 1, 2, …, m) on the reference image, using its planimetric coordinates and the input minimum and maximum object-space elevations, compute from the imaging model the object-space three-dimensional coordinates of the highest and lowest points of the object-space search interval corresponding to q_i;
(2) according to the imaging model, project the highest and lowest points of the object-space search interval onto the n − 1 search images S_1, …, S_j, …, S_{n−1} to obtain, on each search image, the image row and column numbers of the two endpoints of the corresponding epipolar search segment of the point to be matched;
(3) from the image row and column numbers of the two endpoints of the epipolar segment on each search image, determine the line equation of the epipolar segment in the image plane, h'_j = k'_j × l'_j + b'_j, and the interval [S_l_j, E_l_j] of the column numbers of the candidate corresponding points, j = 1, 2, …, n − 1; take the search image with the largest interval length E_l_j − S_l_j as the main search image, the remaining n − 2 images being secondary search images;
(4) in the image-space search interval of the main search image, take each pixel in turn as the current candidate corresponding point on that image; first use two-image forward intersection to compute the object-space three-dimensional coordinates of the object point intersected by q_i and this candidate point, then project these coordinates onto the remaining n − 2 secondary search images to obtain n − 2 candidate corresponding points, which together with the current candidate on the main search image form a group of n − 1 candidate corresponding points; then use the combined matching-similarity measure based on RGB color features and SIFT features to compute, for the point q_i on the reference image, the multi-image combined similarity of each group of n − 1 candidate corresponding points, and take the group with the maximum similarity as the n − 1 corresponding image points q'_1, q'_2, …, q'_{n−1} obtained by the object-space-information-constrained multi-image matching.
The detailed process of step 3 is:
(1) for each point to be matched q_i on the reference image and its n − 1 corresponding image points q'_1, q'_2, …, q'_{n−1} on the search images, compute, using the image-plane coordinates of the n image points on their respective images and the exterior orientation elements of the n images, the object-space three-dimensional coordinates (X_i, Y_i, Z_i) of the object point corresponding to q_i by multi-image bundle adjustment;
(2) compute the absolute differences ΔX_i, ΔY_i between the object-space planimetric coordinates (X_i, Y_i) corresponding to each point to be matched q_i and the coordinates (X_0, Y_0) of the input object-space element. If ΔX_i and ΔY_i are both less than or equal to the set threshold, the multi-image matching result of q_i is considered to satisfy the object-coordinate consistency requirement: q_i and its corresponding image points are returned as one group of image-space matching results of the ground element, and Z_i is returned as one object-space elevation value of the ground element; if ΔX_i or ΔY_i is greater than the threshold, q_i and its multi-image matching result are considered invalid and this group of results is discarded.
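As a minimal sketch of the consistency check in sub-step (2), the following Python function (the name `filter_by_object_consistency` and the data layout are illustrative, not from the patent) keeps only the adjusted candidates whose planimetric coordinates agree with the input element within the threshold, and returns their elevations:

```python
def filter_by_object_consistency(candidates, x0, y0, tol):
    """Object-coordinate consistency check (sketch).

    candidates: list of (Xi, Yi, Zi) triples from bundle adjustment.
    (x0, y0):   planimetric coordinates of the input ground element.
    tol:        the set threshold for |Xi - x0| and |Yi - y0|.
    Returns the elevation values Zi of the candidates that pass."""
    elevations = []
    for xi, yi, zi in candidates:
        # keep the group only if BOTH planimetric differences are within tol
        if abs(xi - x0) <= tol and abs(yi - y0) <= tol:
            elevations.append(zi)
    return elevations
```

Note that several candidates may pass the check at the same (x0, y0), which is exactly how the method returns multiple elevation values for a vertical object.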
The multi-image plumb line trajectory matching method of the invention is a parallel matching method: the matching of each ground point is independent of the others, which is very favorable for the efficient and rapid matching of ground points in bulk. The method fuses the object-space information of the ground points with the image-space information of the multi-view images: the match search range is constrained by the object-space information of the ground point, while the matching itself is completed in image space, so that the image-space candidate search points are guaranteed to pass through the image points corresponding to the ground point, ensuring both the time efficiency of the matching and the validity of the search points. In addition, an object-space consistency check is applied to the matching results, which effectively eliminates mismatches, improves the reliability of object-space-based multi-image matching, and solves the problems of traditional object-space-based image matching methods, which obtain only one elevation value at vertical objects and cannot verify the validity of the result.
Brief description of the drawings
Fig. 1 is the framework diagram of the method of the embodiment of the invention;
Fig. 2 is a schematic diagram of determining the candidate points to be matched of the ground object-space element on the reference image in the embodiment;
Fig. 3 is a schematic diagram of the object-space-information-constrained multi-image matching of the embodiment;
Fig. 4 shows the corresponding image-point positions of a measured ground object-space element on three aerial images of the embodiment, where (a) is the image-point position of the ground element on the reference image, (b) its position on the first search image, and (c) its position on the second search image;
Fig. 5 (a), (b), (c) are the image-space results returned for Fig. 4 (a), (b), (c) respectively by the traditional multi-image plumb line trajectory matching method;
Fig. 6 (a), (b), (c) are the image-space results returned for Fig. 4 (a), (b), (c) respectively by the novel multi-image plumb line trajectory matching method of the invention.
Embodiment
The invention is described in further detail below with reference to a specific embodiment and the accompanying drawings.
According to the imaging model of the images and the input exterior orientation parameters and ground object-space information, the invention first determines the multiple points to be matched of the ground object-space element on the reference image. It then performs object-space-information-constrained multi-image matching on these points, making full use of the redundant information of the multiple images, improving the reliability of the match-measure computation and ensuring that the candidate matching results pass through the image points corresponding to the ground element. Finally, multi-image bundle adjustment is applied to the multi-image matching results to compute the object-space three-dimensional coordinates corresponding to each candidate matching result, and the planimetric coordinates of these results are checked for object-space information consistency against the planimetric coordinates of the input ground element, effectively eliminating mismatches among the candidate results and improving the accuracy of the multi-image matching results.
As shown in Fig. 1, the novel multi-image plumb line trajectory matching method capable of returning multiple elevation values comprises three parts: (1) determining the multiple candidate points to be matched of the ground object-space element on the reference image; (2) performing object-space-information-constrained multi-image matching on the points to be matched on the reference image; (3) checking the object-coordinate consistency of the multi-image matching results. The concrete implementation steps are:
Step 1: determine the multiple candidate points to be matched of the ground object-space element on the reference image.
The schematic diagram for determining the multiple candidate points to be matched of the ground object-space element on the reference image is shown in Fig. 2; the concrete procedure is as follows:
(1) input n multi-view images with known exterior orientation elements (aerial, satellite or close-range images, of which one is the reference image I_0 and the remaining n − 1 are the search images S_1, …, S_j, …, S_{n−1}), the planimetric coordinates (X_0, Y_0) of the ground object-space element, and the maximum and minimum object-space elevations Z_max and Z_min of the area to be matched;
(2) using the imaging model of the images, compute, from the exterior orientation elements of the reference image, the planimetric coordinates of the ground object-space element, and the input minimum and maximum object-space elevations, the image row and column numbers of the m candidate points to be matched of the object-space element on the reference image.
Taking the collinearity imaging model of aerial images as an example, let the planimetric coordinates of the input ground object-space element P be (X_0, Y_0), let the exterior orientation elements of the reference image I_0 be (X_{I_0}, Y_{I_0}, Z_{I_0}, φ, ω, κ), and let the object-space maximum and minimum elevations be Z_max and Z_min. The three-dimensional coordinates of the highest point P_max and the lowest point P_min of the ground point's object-space search interval are then (X_0, Y_0, Z_max) and (X_0, Y_0, Z_min) respectively. Back-projecting P_max and P_min onto the reference image gives the two corresponding image points p_max and p_min with two-dimensional image-plane coordinates (x_1, y_1) and (x_2, y_2). The formula for computing the image-plane coordinates (x, y) from the object-space three-dimensional coordinates (X, Y, Z) is:
$$x = -f\,\frac{a_1^{I_0}(X - X_{I_0}) + b_1^{I_0}(Y - Y_{I_0}) + c_1^{I_0}(Z - Z_{I_0})}{a_3^{I_0}(X - X_{I_0}) + b_3^{I_0}(Y - Y_{I_0}) + c_3^{I_0}(Z - Z_{I_0})}, \qquad y = -f\,\frac{a_2^{I_0}(X - X_{I_0}) + b_2^{I_0}(Y - Y_{I_0}) + c_2^{I_0}(Z - Z_{I_0})}{a_3^{I_0}(X - X_{I_0}) + b_3^{I_0}(Y - Y_{I_0}) + c_3^{I_0}(Z - Z_{I_0})} \quad (1)$$
In the formula, a_1^{I_0}, b_1^{I_0}, …, c_3^{I_0} are the nine direction cosines of the rotation matrix determined by the exterior orientation angular elements of image I_0, and f is the focal length of the imaging camera.
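The collinearity back-projection of formula (1) can be sketched in Python as follows; the function name and the row layout of the direction-cosine matrix (row r holds (a_r, b_r, c_r)) are assumptions for illustration:

```python
def project_to_image(P, cam_center, R, f):
    """Back-project object point P = (X, Y, Z) into the image plane of a
    frame camera via the collinearity equations (1).

    cam_center: perspective centre (Xs, Ys, Zs) from exterior orientation.
    R:          3x3 direction-cosine matrix, row r = (a_r, b_r, c_r).
    f:          camera focal length.
    Returns the image-plane coordinates (x, y)."""
    dX = P[0] - cam_center[0]
    dY = P[1] - cam_center[1]
    dZ = P[2] - cam_center[2]
    a1, b1, c1 = R[0]
    a2, b2, c2 = R[1]
    a3, b3, c3 = R[2]
    denom = a3 * dX + b3 * dY + c3 * dZ
    x = -f * (a1 * dX + b1 * dY + c1 * dZ) / denom
    y = -f * (a2 * dX + b2 * dY + c2 * dZ) / denom
    return x, y
```

For a vertical camera (identity rotation) at height 10 with f = 1, the ground point (1, 2, 0) projects to (0.1, 0.2), which matches the inverse mapping of formula (5) further below.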
Then, using the principal-point row and column numbers (h_0, l_0) of image I_0 and the pixel size μ, the image-plane coordinates (x_1, y_1), (x_2, y_2) of the points p_max and p_min can be converted to image row/column coordinates (h_1, l_1), (h_2, l_2) by the following formula:
$$h = h_0 - \frac{y}{\mu}, \qquad l = l_0 + \frac{x}{\mu} \quad (2)$$
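Formula (2) amounts to a one-line conversion between image-plane coordinates and pixel row/column numbers; a minimal sketch (variable names are assumptions):

```python
def plane_to_pixel(x, y, h0, l0, mu):
    """Convert image-plane coordinates (x, y) to row/column numbers (h, l)
    per formula (2): (h0, l0) is the principal point in pixels and mu the
    pixel size. Rows grow downward, hence the minus sign on y."""
    h = h0 - y / mu
    l = l0 + x / mu
    return h, l
```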
Therefore, the m candidate points to be matched of the ground element on the reference image lie on the straight line p_max p_min in the image plane; the row and column numbers (h_i, l_i) of the i-th (i = 1, 2, …, m) candidate point q_i are computed in one of two ways:
(1) if abs(l_1 − l_2) ≥ abs(h_1 − h_2), the line p_max p_min is closer to the row direction; then l_max = max(l_1, l_2) and l_min = min(l_1, l_2), where the functions max() and min() take the maximum and minimum of their arguments, m = l_max − l_min + 1, l_i takes each value from l_min to l_max in turn, and h_i is computed as follows:
$$h_i = k \times l_i + b, \qquad k = \frac{h_2 - h_1}{l_2 - l_1}, \qquad b = h_1 - k \times l_1 \quad (3)$$
(2) if abs(l_1 − l_2) < abs(h_1 − h_2), the line p_max p_min is closer to the column direction; then h_max = max(h_1, h_2), h_min = min(h_1, h_2), m = h_max − h_min + 1, h_i takes each value from h_min to h_max in turn, and l_i is computed as follows:
$$l_i = k \times h_i + b, \qquad k = \frac{l_2 - l_1}{h_2 - h_1}, \qquad b = l_1 - k \times h_1 \quad (4)$$
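The two-branch enumeration of formulas (3)/(4) can be sketched as below, assuming integer pixel endpoints and nearest-pixel rounding (the patent does not specify the rounding rule, so `round` here is an assumption):

```python
def candidates_on_segment(h1, l1, h2, l2):
    """Enumerate the m candidate pixels (h_i, l_i) on the segment from
    (h1, l1) to (h2, l2), stepping along the axis with the larger span
    and interpolating the other coordinate, per formulas (3)/(4)."""
    points = []
    if abs(l1 - l2) >= abs(h1 - h2):   # line closer to the row direction
        k = (h2 - h1) / (l2 - l1)
        b = h1 - k * l1
        for li in range(min(l1, l2), max(l1, l2) + 1):
            points.append((round(k * li + b), li))
    else:                              # line closer to the column direction
        k = (l2 - l1) / (h2 - h1)
        b = l1 - k * h1
        for hi in range(min(h1, h2), max(h1, h2) + 1):
            points.append((hi, round(k * hi + b)))
    return points
```

The number of points returned equals m = max-span + 1, as stated in the text.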
Step 2: perform object-space-information-constrained multi-image matching on the points to be matched on the reference image.
The schematic diagram of the object-space-information-constrained multi-image matching of a point to be matched q_i on the reference image is shown in Fig. 3; the process is:
(1) for each point to be matched q_i (i = 1, 2, …, m) on the reference image, using its planimetric coordinates and the input minimum and maximum object-space elevations, compute from the imaging model the object-space three-dimensional coordinates of the highest and lowest points of the object-space search interval corresponding to q_i;
Suppose the image row and column numbers of q_i on the reference image I_0 are (h_i, l_i). First convert them to image-plane coordinates (x_i, y_i) according to formula (2); then, from the exterior orientation elements of the reference image and the input maximum elevation Z_max and minimum elevation Z_min, compute by formula (5) the object-space planimetric coordinates (X_max, Y_max) and (X_min, Y_min) of the highest point Q_max and lowest point Q_min of the search interval on the photographic ray of q_i (the computation of Q_max is taken as the example):
$$X_{\max} = X_{I_0} + (Z_{\max} - Z_{I_0})\,\frac{a_1^{I_0} x_i + a_2^{I_0} y_i - a_3^{I_0} f}{c_1^{I_0} x_i + c_2^{I_0} y_i - c_3^{I_0} f}, \qquad Y_{\max} = Y_{I_0} + (Z_{\max} - Z_{I_0})\,\frac{b_1^{I_0} x_i + b_2^{I_0} y_i - b_3^{I_0} f}{c_1^{I_0} x_i + c_2^{I_0} y_i - c_3^{I_0} f} \quad (5)$$
In the formula, the symbols a_1^{I_0}, b_1^{I_0}, …, c_3^{I_0} and f have the same meaning as in formula (1).
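Formula (5) intersects the photographic ray through an image point with a horizontal plane of given elevation; a sketch under the same assumed matrix layout as the back-projection sketch earlier (function and argument names are illustrative):

```python
def ray_point_at_elevation(x_i, y_i, Z, cam_center, R, f):
    """Intersect the ray through image point (x_i, y_i) with the plane of
    elevation Z, per formula (5). R holds the direction cosines with
    row r = (a_r, b_r, c_r); returns the planimetric coordinates (X, Y)."""
    a1, b1, c1 = R[0]
    a2, b2, c2 = R[1]
    a3, b3, c3 = R[2]
    denom = c1 * x_i + c2 * y_i - c3 * f
    scale = (Z - cam_center[2]) / denom
    X = cam_center[0] + scale * (a1 * x_i + a2 * y_i - a3 * f)
    Y = cam_center[1] + scale * (b1 * x_i + b2 * y_i - b3 * f)
    return X, Y
```

With the same vertical-camera setup as before (centre (0, 0, 10), f = 1), the image point (0.1, 0.2) taken down to elevation 0 recovers the ground point (1, 2), i.e. formula (5) inverts formula (1) on a fixed elevation plane.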
(2) according to the imaging model, project the highest and lowest points of the object-space search interval onto the n − 1 search images S_1, …, S_j, …, S_{n−1} to obtain, on each search image, the image row and column numbers of the two endpoints of the corresponding epipolar search segment of the point to be matched;
Taking search image S_j as an example, the image row and column numbers of the two endpoints of the corresponding epipolar search segment of q_i on this image are computed as follows:
First, according to the imaging model of formula (1), project the highest point Q_max and lowest point Q_min of the object-space search interval onto this image (the relevant parameters of the imaging model are now computed from the exterior orientation elements of search image S_j), obtaining the image-plane coordinates of the two corresponding image points q^j_max and q^j_min; then convert the image-plane coordinates to image row and column numbers according to formula (2).
(3) from the image row and column numbers of the two endpoints of the epipolar segment on each search image, determine the line equation of the epipolar segment in the image plane, h'_j = k'_j × l'_j + b'_j, and the interval [S_l_j, E_l_j] of the column numbers of the candidate corresponding points, j = 1, 2, …, n − 1; take the search image with the largest interval length E_l_j − S_l_j as the main search image, the remaining n − 2 images being secondary search images;
From the image row and column numbers of the two endpoints q^j_max and q^j_min of the epipolar search segment of q_i on search image S_j, the interval [S_l_j, E_l_j] of the column numbers of the candidate corresponding points on the epipolar segment is obtained (where t = 1, 2, …, E_l_j − S_l_j + 1 indexes the candidates), and the line equation of the epipolar segment (i.e. the relation between the row number and the column number of a candidate corresponding point) is computed by formula (6):
$$h'^{\,j}_t = k'_j \times l'^{\,j}_t + b'_j, \qquad k'_j = \frac{h'^{\,j}_2 - h'^{\,j}_1}{l'^{\,j}_2 - l'^{\,j}_1}, \qquad b'_j = h'^{\,j}_1 - k'_j\, l'^{\,j}_1 \quad (6)$$
(4) in the image-space search interval of the main search image, take each pixel in turn as the current candidate corresponding point on that image; first use two-image forward intersection to compute the object-space three-dimensional coordinates of the object point intersected by q_i and this candidate point, then project these coordinates onto the remaining n − 2 secondary search images to obtain n − 2 candidate corresponding points, which together with the current candidate on the main search image form a group of n − 1 candidate corresponding points; then use the combined matching-similarity measure based on RGB color features and SIFT features to compute, for the point q_i on the reference image, the multi-image combined similarity of each group of n − 1 candidate corresponding points, and take the group with the maximum similarity as the n − 1 corresponding image points q'_1, q'_2, …, q'_{n−1} obtained by the object-space-information-constrained multi-image matching.
Suppose S_1 is the main search image and the remaining n − 2 images are secondary search images; the procedure for determining the multi-image corresponding points of q_i is:
First, take any candidate corresponding point q'_{1,t} on the epipolar search segment of the main search image S_1 (its column number ranges over [S_l_1, E_l_1] and its row number is computed by formula (6), t = 1, 2, …, E_l_1 − S_l_1 + 1), convert the row and column numbers of q'_{1,t} to image-plane coordinates according to formula (2), and then use the two-image forward intersection of formula (7) to compute the three-dimensional coordinates (X_i, Y_i, Z_i) of the candidate object point Q_i intersected by q_i and q'_{1,t}:
$$X_i = X_{I_0} + N_1 X_i^{I_0}, \qquad Y_i = \tfrac{1}{2}\left(Y_{I_0} + N_1 Y_i^{I_0} + Y_{S_1} + N_2 Y_t^{S_1}\right), \qquad Z_i = Z_{I_0} + N_1 Z_i^{I_0} \quad (7)$$
In the formula,
$$N_1 = \frac{B_X Z_t^{S_1} - B_Z X_t^{S_1}}{X_i^{I_0} Z_t^{S_1} - X_t^{S_1} Z_i^{I_0}}, \qquad N_2 = \frac{B_X Z_i^{I_0} - B_Z X_i^{I_0}}{X_i^{I_0} Z_t^{S_1} - X_t^{S_1} Z_i^{I_0}}, \qquad B_X = X_{S_1} - X_{I_0}, \qquad B_Z = Z_{S_1} - Z_{I_0},$$
$$[X_i^{I_0}, Y_i^{I_0}, Z_i^{I_0}]^T = R_{I_0}\,[x_i, y_i, -f]^T, \qquad [X_t^{S_1}, Y_t^{S_1}, Z_t^{S_1}]^T = R_{S_1}\,[x'_t, y'_t, -f]^T,$$
where R_{I_0} and R_{S_1} are the rotation matrices computed from the exterior orientation angular elements of images I_0 and S_1.
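The two-image forward intersection of formula (7) can be sketched as follows; the image-space auxiliary coordinates are taken as R·[x, y, −f]^T (a common sign convention consistent with the collinearity equations above, here an assumption), and the function and argument names are illustrative:

```python
def forward_intersection(xy0, xy1, C0, C1, R0, R1, f):
    """Intersect the rays through image point xy0 on the reference image
    (perspective centre C0, rotation R0) and xy1 on the main search image
    (C1, R1), per formula (7). Returns the object point (X, Y, Z)."""
    def aux(R, xy):
        # image-space auxiliary coordinates R @ [x, y, -f]
        v = (xy[0], xy[1], -f)
        return tuple(sum(R[r][c] * v[c] for c in range(3)) for r in range(3))
    U0 = aux(R0, xy0)                       # (X^I0, Y^I0, Z^I0)
    U1 = aux(R1, xy1)                       # (X^S1, Y^S1, Z^S1)
    BX, BZ = C1[0] - C0[0], C1[2] - C0[2]   # baseline components
    denom = U0[0] * U1[2] - U1[0] * U0[2]
    N1 = (BX * U1[2] - BZ * U1[0]) / denom
    N2 = (BX * U0[2] - BZ * U0[0]) / denom
    X = C0[0] + N1 * U0[0]
    Y = (C0[1] + N1 * U0[1] + C1[1] + N2 * U1[1]) / 2.0
    Z = C0[2] + N1 * U0[2]
    return X, Y, Z
```

With two vertical cameras at (0, 0, 10) and (5, 0, 10), f = 1, the ground point (1, 2, 0) images at (0.1, 0.2) and (−0.4, 0.2); intersecting those rays recovers the point.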
Next, using formula (1), project the point Q_i onto the remaining n − 2 secondary search images to obtain the image-plane coordinates of the candidate corresponding point q'_{j,t} on each search image S_j (j = 2, …, n − 1); convert them to image column numbers with formula (2) and compute the corresponding image row numbers from the epipolar line equation (6) of image S_j. This yields the n − 2 candidate corresponding points q'_{j,t} on the secondary search images that correspond to the candidate q'_{1,t} on the main search image, and thus the t-th group of n − 1 candidate corresponding points q'_{1,t}, q'_{2,t}, …, q'_{j,t}, …, q'_{n−1,t} of the point to be matched q_i.
Then, using the two-image combined similarity measure based on color features and SIFT features, compute the two-image match measures ρ_{1,t}, ρ_{2,t}, …, ρ_{n−1,t} between q_i and each of the t-th-group candidates on the n − 1 search images, and take their mean as the multi-image combined match measure of the t-th group of candidates and the point to be matched: ρ_m^t = (ρ_{1,t} + ρ_{2,t} + … + ρ_{n−1,t}) / (n − 1).
Finally, the group of candidate points achieving the maximum multi-image combined measure, max{ρ_m^t | t = 1, 2, …, E_l_1 − S_l_1 + 1}, is taken as the n − 1 corresponding image points q'_1, q'_2, …, q'_{n−1} of the point to be matched q_i on the n − 1 search images obtained by the object-space-information-constrained multi-image matching.
For a point to be matched q_i on the reference image I_0 and a candidate corresponding point q'_{j,t} on search image S_j, the two-image combined matching similarity based on color features and SIFT features between the two points is computed as follows:
First, centered on q_i and q'_{j,t} respectively, take two matching windows W and W' of size N × N (N is generally odd) on images I_0 and S_j, compute the gray-level correlation coefficients ρ_R, ρ_G, ρ_B between the two windows in the red, green and blue channels (the formula for the red channel is given as formula (8)), and take the mean of the three coefficients as the color-feature similarity measure ρ_C = (ρ_R + ρ_G + ρ_B)/3.
$$\rho_R = \frac{\sum_{i=1}^{N}\sum_{j=1}^{N} f_R(i,j)\, f'_R(i,j) - \frac{1}{N^2}\Big(\sum_{i=1}^{N}\sum_{j=1}^{N} f_R(i,j)\Big)\Big(\sum_{i=1}^{N}\sum_{j=1}^{N} f'_R(i,j)\Big)}{\sqrt{\Big(\sum_{i=1}^{N}\sum_{j=1}^{N} f_R(i,j)^2 - \frac{1}{N^2}\big(\sum_{i=1}^{N}\sum_{j=1}^{N} f_R(i,j)\big)^2\Big)\Big(\sum_{i=1}^{N}\sum_{j=1}^{N} f'_R(i,j)^2 - \frac{1}{N^2}\big(\sum_{i=1}^{N}\sum_{j=1}^{N} f'_R(i,j)\big)^2\Big)}} \quad (8)$$
In the formula, f_R(i, j) and f'_R(i, j) are the red-channel gray values of the pixel in row i, column j of the matching windows W and W' respectively.
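Formula (8) is a standard normalized cross-correlation over one color channel; a compact sketch for windows given as nested lists (the function name is an assumption, and the patent averages this value over the R, G, B channels to get ρ_C):

```python
def channel_correlation(W, Wp):
    """Gray-level correlation coefficient of formula (8) between two
    equal-sized windows W and W' of one color channel."""
    n2 = len(W) * len(W[0])                 # N^2, the number of pixels
    a = [v for row in W for v in row]       # flatten both windows
    b = [v for row in Wp for v in row]
    sa, sb = sum(a), sum(b)
    num = sum(x * y for x, y in zip(a, b)) - sa * sb / n2
    da = sum(x * x for x in a) - sa * sa / n2
    db = sum(y * y for y in b) - sb * sb / n2
    return num / (da * db) ** 0.5
```

Because the measure is invariant to linear gray-level changes, two windows related by a gain/offset transform correlate to 1.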
Secondly, centered on q_i and q'_{j,t} respectively, take two imaging windows W, W' of size 16 × 16 on image I_0 and image S_j; each window element g(i, j), g'(i, j) takes the mean of the gray values of the corresponding pixel in the red, green and blue channels, g(i, j) = (f_R(i, j) + f_G(i, j) + f_B(i, j))/3 and g'(i, j) = (f'_R(i, j) + f'_G(i, j) + f'_B(i, j))/3. The SIFT feature description method is then used to compute the 128-dimensional SIFT feature vectors V, V' of the point to be matched and the search point (the multi-scale image space construction, extreme point detection and feature point localization steps of the SIFT method are not involved), and the SIFT feature similarity \rho_S between the two imaging windows is computed by formula (9):
\rho_S = \frac{\sum_{k=1}^{128} V_k V'_k}{\sqrt{\sum_{k=1}^{128} V_k^2} \, \sqrt{\sum_{k=1}^{128} V'^2_k}} \qquad (9)
Finally, the mean of the color-feature similarity \rho_C and the SIFT-feature similarity \rho_S is taken as the two-image comprehensive matching measure between the point to be matched and the candidate conjugate point: \rho = (\rho_C + \rho_S)/2.
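Formula (9) is the cosine similarity of the two 128-dimensional descriptors, and the final two-image measure is a plain average; a sketch (hypothetical helper names):

```python
import numpy as np

def sift_similarity(v, v2):
    """Cosine similarity of two 128-D SIFT descriptors, formula (9)."""
    v = np.asarray(v, dtype=float)
    v2 = np.asarray(v2, dtype=float)
    denom = np.sqrt((v ** 2).sum() * (v2 ** 2).sum())
    return float(v @ v2 / denom) if denom > 0 else 0.0

def two_image_measure(rho_c, rho_s):
    """Two-image comprehensive measure: mean of color and SIFT similarities."""
    return 0.5 * (rho_c + rho_s)
```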
Step 3: object-space coordinate consistency verification of the multi-image matching results.
(1) For each point to be matched q_i in the reference image and its n-1 conjugate image points q'_1, q'_2, ..., q'_{n-1} on the search images, use multi-image bundle adjustment to compute the object-space three-dimensional coordinates (X_i, Y_i, Z_i) of the ground point corresponding to q_i, according to the image plane coordinates of the n image points on their respective images and the exterior orientation elements of the n images;
Multi-image bundle adjustment takes as its elementary unit the light ray formed by an image point on each image, the corresponding object point and the projection center, and takes the imaging model of the image (e.g. the collinearity condition equations for aerial images, or the rational function model for satellite images) as the basic adjustment equations. The image plane coordinates of the image points whose object coordinates are to be solved are treated as observations; error equations are listed for the image point coordinates on all images participating in the matching, and the six exterior orientation elements of every image together with the three-dimensional object-space coordinates of the corresponding object points are solved by least squares. When the exterior orientation elements of the images are known, the bundle adjustment is used only to solve the object-space three-dimensional coordinates of the points to be located. For a point to be matched q_i on the reference image I_0, regarding its image plane coordinates (x_i, y_i) as observations and its exterior orientation elements as known, the following error equations between the image plane coordinates (x_i, y_i) and the corresponding object-space three-dimensional coordinates (X_i, Y_i, Z_i) can be listed (using the collinearity condition equations for aerial images):
v_x^{I_0} = -a_{11}^{I_0}\, dX_i - a_{12}^{I_0}\, dY_i - a_{13}^{I_0}\, dZ_i - (x_i - x_i^0)
v_y^{I_0} = -a_{21}^{I_0}\, dX_i - a_{22}^{I_0}\, dY_i - a_{23}^{I_0}\, dZ_i - (y_i - y_i^0) \qquad (10)
In formula,
a_{11}^{I_0} = (a_1^{I_0} f + a_3^{I_0} x_i)/\bar{Z}, \quad a_{12}^{I_0} = (b_1^{I_0} f + b_3^{I_0} x_i)/\bar{Z}, \quad a_{13}^{I_0} = (c_1^{I_0} f + c_3^{I_0} x_i)/\bar{Z}
a_{21}^{I_0} = (a_2^{I_0} f + a_3^{I_0} y_i)/\bar{Z}, \quad a_{22}^{I_0} = (b_2^{I_0} f + b_3^{I_0} y_i)/\bar{Z}, \quad a_{23}^{I_0} = (c_2^{I_0} f + c_3^{I_0} y_i)/\bar{Z}
\bar{Z} = a_3^{I_0}(X_i^0 - X^{I_0}) + b_3^{I_0}(Y_i^0 - Y^{I_0}) + c_3^{I_0}(Z_i^0 - Z^{I_0})
The meanings of symbols such as a_1^{I_0}, b_1^{I_0}, c_1^{I_0} and f are the same as in formula (1); (X_i^0, Y_i^0, Z_i^0) is an approximate value of the object point coordinates (X_i, Y_i, Z_i) to be solved (it can be computed from q_i and its conjugate point q'_j on any search image S_j using the two-image forward intersection method shown in formula (7)); (dX_i, dY_i, dZ_i) are the adjustment corrections to the approximate coordinates; and (x_i^0, y_i^0) are the approximate image plane coordinates of point q_i on I_0, obtained by substituting the approximate three-dimensional coordinates into formula (1).
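A sketch of how the coefficient block of formula (10) and the expressions above might be assembled for one image (hypothetical function name; assumes the rotation matrix R carries the rows (a_1, b_1, c_1), (a_2, b_2, c_2), (a_3, b_3, c_3) of formula (1), a positive focal length f, image coordinates measured from the principal point, and the plain collinearity model x = -f(a_1ΔX + b_1ΔY + c_1ΔZ)/\bar{Z} — sign conventions vary between formulations):

```python
import numpy as np

def error_equation_row(R, f, xy, XYZ0, XYZs):
    """Coefficients a_jk of formula (10), linearized collinearity equations.

    R: 3x3 rotation matrix, f: focal length, xy: observed image
    coordinates (x_i, y_i), XYZ0: approximate object coordinates,
    XYZs: projection center.  Returns (A, (x0, y0)): the 2x3 coefficient
    block and the approximate image coordinates computed from XYZ0.
    """
    (a1, b1, c1), (a2, b2, c2), (a3, b3, c3) = R
    d = np.asarray(XYZ0, dtype=float) - np.asarray(XYZs, dtype=float)
    Zbar = a3 * d[0] + b3 * d[1] + c3 * d[2]
    x, y = xy
    A = np.array([[a1 * f + a3 * x, b1 * f + b3 * x, c1 * f + c3 * x],
                  [a2 * f + a3 * y, b2 * f + b3 * y, c2 * f + c3 * y]]) / Zbar
    # Approximate image coordinates from the collinearity equations.
    x0 = -f * (a1 * d[0] + b1 * d[1] + c1 * d[2]) / Zbar
    y0 = -f * (a2 * d[0] + b2 * d[1] + c2 * d[2]) / Zbar
    return A, (x0, y0)
```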
Therefore, for the point to be matched q_i and its set of n-1 conjugate image points {q'_1, q'_2, ..., q'_{n-1}} on the n-1 search images, the error equations of the 2(n-1) conjugate points can be listed by formula (10); adding the 2 error equations of q_i on the reference image yields 2n error equations for the n image points in total. The error equations for solving the three-dimensional coordinates by multi-image bundle adjustment are then:
V = B\, dX - L \qquad (11)
In the formula,

V = \begin{bmatrix} v_x^{I_0} & v_y^{I_0} & v_{x'}^{S_1} & v_{y'}^{S_1} & \cdots & v_{x'}^{S_{n-1}} & v_{y'}^{S_{n-1}} \end{bmatrix}^T, \quad dX = \begin{bmatrix} dX_i & dY_i & dZ_i \end{bmatrix}^T

B = -\begin{bmatrix} a_{11}^{I_0} & a_{12}^{I_0} & a_{13}^{I_0} \\ a_{21}^{I_0} & a_{22}^{I_0} & a_{23}^{I_0} \\ a_{11}^{S_1} & a_{12}^{S_1} & a_{13}^{S_1} \\ a_{21}^{S_1} & a_{22}^{S_1} & a_{23}^{S_1} \\ \vdots & \vdots & \vdots \\ a_{11}^{S_{n-1}} & a_{12}^{S_{n-1}} & a_{13}^{S_{n-1}} \\ a_{21}^{S_{n-1}} & a_{22}^{S_{n-1}} & a_{23}^{S_{n-1}} \end{bmatrix}, \quad L = \begin{bmatrix} x_i - x_i^0 \\ y_i - y_i^0 \\ x'_{S_1} - x'^{\,0}_{S_1} \\ y'_{S_1} - y'^{\,0}_{S_1} \\ \vdots \\ x'_{S_{n-1}} - x'^{\,0}_{S_{n-1}} \\ y'_{S_{n-1}} - y'^{\,0}_{S_{n-1}} \end{bmatrix}
Using the least-squares principle, the corrections to the approximate coordinates are solved from the error equations as dX = (B^T B)^{-1} B^T L, and the three-dimensional coordinates of the object point corresponding to the point to be matched q_i are then obtained as X_i = X_i^0 + dX_i, Y_i = Y_i^0 + dY_i, Z_i = Z_i^0 + dZ_i.
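The least-squares step dX = (B^T B)^{-1} B^T L is a plain normal-equation solve; a minimal sketch (illustrative names, assuming B and L have been stacked as in formula (11)):

```python
import numpy as np

def solve_correction(B, L):
    """Least-squares corrections dX = (B^T B)^{-1} B^T L of formula (11)."""
    B = np.asarray(B, dtype=float)
    L = np.asarray(L, dtype=float)
    return np.linalg.solve(B.T @ B, B.T @ L)

def adjusted_coordinates(XYZ0, B, L):
    """Apply the corrections to the approximate object coordinates."""
    return np.asarray(XYZ0, dtype=float) + solve_correction(B, L)
```

In practice the linearization may be iterated, re-evaluating B and L at the updated coordinates until the corrections become negligible.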
(2) Compute the absolute differences ΔX_i, ΔY_i between the object-space plane coordinates (X_i, Y_i) corresponding to each point to be matched q_i and the input object-space primitive coordinates (X_0, Y_0). If ΔX_i and ΔY_i are both less than or equal to the set threshold, the multi-image matching result of the point to be matched q_i is considered to satisfy the object coordinate consistency requirement: q_i and its conjugate image points are returned as one group of image-space matching results of the ground primitive, and Z_i is returned as one object-space elevation value of the ground primitive. If ΔX_i or ΔY_i is greater than the set threshold, the point to be matched q_i and its multi-image matching result are considered invalid, and this group of results is discarded.
Through the object-space-constrained multi-view image matching and the multi-image bundle adjustment, the three-dimensional coordinates (X_i, Y_i, Z_i) of the m candidate points to be matched q_i corresponding to the ground object-space primitive (X_0, Y_0) in the reference image can be obtained. However, these m points to be matched are not necessarily all image points of the ground primitive. If the ground primitive has only one elevation value (e.g. a ground point on flat terrain), it has only one corresponding image point in the reference image; if the ground primitive has multiple elevation values (e.g. vertical features such as building facades or utility poles), it has multiple corresponding image points in the reference image. Some of the m candidate points to be matched are therefore certainly image points of other ground points. Whether the object-space three-dimensional coordinates of a candidate point and the object coordinates of the ground primitive satisfy the consistency requirement can be used to confirm which points to be matched are correct. The specific object coordinate consistency check method is as follows:
First, compute the absolute differences ΔX_i = |X_i - X_0| and ΔY_i = |Y_i - Y_0| between the object coordinates of the i-th candidate point to be matched q_i and the object coordinates of the ground primitive. Secondly, compare ΔX_i and ΔY_i with a given difference threshold T (which can be set to the ground spatial resolution of one image pixel). If ΔX_i ≤ T and ΔY_i ≤ T, then Z_i is considered correct: it is put into the set of returned object-space elevation values, and the point to be matched q_i together with its n-1 conjugate image points q'_1, q'_2, ..., q'_{n-1} on the search images is put into the returned image-space result set as one group of image-space matching results. If ΔX_i > T or ΔY_i > T, then Z_i and the corresponding image-space matching result (q_i and the n-1 conjugate image points q'_1, q'_2, ..., q'_{n-1}) are all considered wrong, and this group of results is discarded.
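The consistency check reduces to a thresholded filter over the m candidate results; a sketch (hypothetical function name and data layout):

```python
def consistency_check(results, X0, Y0, T):
    """Keep only matches whose object plane coordinates agree with the
    input ground primitive (X0, Y0) to within threshold T.

    results: iterable of (Xi, Yi, Zi, image_points) tuples, one per
    candidate point to be matched.  Returns the accepted elevations and
    image-space matches -- possibly several, possibly none.
    """
    elevations, image_matches = [], []
    for Xi, Yi, Zi, pts in results:
        if abs(Xi - X0) <= T and abs(Yi - Y0) <= T:
            elevations.append(Zi)
            image_matches.append(pts)
    return elevations, image_matches
```

Note that the filter deliberately allows zero, one, or several elevations to survive, which is exactly how a single planimetric position on a vertical facade can return multiple heights.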
Therefore, under the constraint of object coordinate consistency verification, the novel multi-view image plumb line track matching method of the present invention, which can return multiple elevation values, follows the principle of "quality before quantity": it may return one or more object-space elevation values, or possibly no elevation value at all. Figure 4 shows the corresponding positions of an actually measured ground object-space primitive on three aerial images; this ground primitive is a point on a building facade, with object coordinates (397211.3781, 3555317.2179). The leftmost image is the reference image and the middle and right images are search images; the corresponding image point positions are shown with crosshairs, and the horizontal straight lines on the middle and right images represent the corresponding epipolar lines. Figure 5 shows the image-space results returned by the traditional multi-view image plumb line track matching method on the three aerial images of Figure 4; Figure 6 shows the image-space results returned by the novel multi-view image plumb line track matching method of the present invention on the same images. In Figures 5 and 6, the returned image-space matching results are marked with a crosshair inside a circle. The traditional method took 19.213 seconds; the novel method of the present invention took 1.809 seconds.
As can be seen from the results, the traditional multi-view plumb line track matching method is not only more time-consuming, but can also return only one image-space matching result and one object-space elevation value, and even that return value is not necessarily correct: as Figure 5 shows, the image-space matching result returned by the traditional method obviously deviates from the true position of the ground primitive on the images. The novel plumb line track matching method of the present invention takes only about one tenth of the time of the traditional method, can return multiple elevation values, and, because object coordinate consistency verification is performed on the matching results, eliminates many incorrect matching results.

Claims (4)

1. A novel multi-view image plumb line track matching method capable of returning multiple elevation values, characterized by comprising the following steps:
Step one: using the imaging model of the images, determine the image-space row and column numbers of the m points to be matched corresponding to the object-space primitive in the reference image, according to the exterior orientation parameters of the multi-view images, the input plane coordinates (X_0, Y_0) of the ground object-space primitive, and the maximum object-space elevation Z_max and minimum elevation Z_min of the region to be matched;
Step two: for the m points to be matched in the reference image, perform object-space-information-constrained multi-view image matching to obtain the row and column numbers of the conjugate image points of each point to be matched on the other search images;
Step three: according to the conjugate point results on the multi-view images, use multi-image bundle adjustment to compute the object-space three-dimensional coordinates (X_i, Y_i, Z_i), i = 1, 2, ..., m, corresponding to the m points to be matched; then compute the absolute differences ΔX_i, ΔY_i between the coordinates (X_i, Y_i) of each point to be matched and the input object-space primitive coordinates (X_0, Y_0), and decide the elevation values of the object-space primitive to be returned according to whether ΔX_i and ΔY_i are less than the set threshold.
2. The novel multi-view image plumb line track matching method capable of returning multiple elevation values according to claim 1, characterized in that the detailed process of said step one is:
(1) input n multi-view images with exterior orientation elements, including aerial, satellite or close-range images, the plane coordinates (X_0, Y_0) of the ground object-space primitive, and the maximum object-space elevation Z_max and minimum elevation Z_min of the region to be matched;
(2) using the imaging model of the images, compute the image-space row and column numbers of the m candidate points to be matched of the object-space primitive in the reference image, according to the exterior orientation elements of the reference image, the plane coordinates of the ground object-space primitive, and the input minimum and maximum object-space elevations.
3. The novel multi-view image plumb line track matching method capable of returning multiple elevation values according to claim 2, characterized in that the detailed process of said step two is:
(1) for each point to be matched q_i in the reference image, i = 1, 2, ..., m, using its image plane coordinates and the input minimum and maximum object-space elevations, compute, according to the imaging model of the images, the object-space three-dimensional coordinates of the highest and lowest points of the object-space search region of the object point corresponding to q_i;
(2) according to the imaging model, project the highest and lowest points of the object-space search region onto the n-1 search images S_1, ..., S_j, ..., S_{n-1}, obtaining the image-space row and column numbers of the two end points of the conjugate search epipolar line on which the conjugate point of the point to be matched lies on each search image;
(3) according to the image-space row and column numbers of the two end points of the corresponding epipolar line on each search image, determine the straight-line equation h'_j = k'_j × l'_j + b'_j of the epipolar line in the image plane and the interval range [S_l_j, E_l_j], j = 1, 2, ..., n-1, of the column number l' of the candidate conjugate points; take the search image with the largest interval length E_l_j - S_l_j as the main search image, and the remaining n-2 images as secondary search images;
(4) in the image space region of search main search image, take out each pixel one by one as the current candidate corresponding image points on this image, first utilize double image forward intersection method, calculate to be matched some q iwith the object space three-dimensional coordinate of this candidate point institute intersection culture point, and this three-dimensional coordinate is projected on remaining n-2 width pair search image, obtain n-2 candidate's corresponding image points on secondary search image, again together with the current candidate corresponding image points on main search image, thus obtain often group n-1 candidate's corresponding image points on each pair search image; Then, the comprehensive matching Likelihood Computation method based on RGB color characteristic and SIFT feature is recycled, to be matched some q on Calculation Basis image ioften organize many pictures comprehensive matching similarity of n-1 candidate's same place, and with that group candidate point corresponding to maximum similarity, as many n-1 corresponding image points q' obtained as matching process that object space is information constrained 1, q' 2..., q' n-1.
4. The novel multi-view image plumb line track matching method capable of returning multiple elevation values according to any one of claims 1 to 3, characterized in that the detailed process of said step three is:
(1) for each point to be matched q_i in the reference image and its n-1 conjugate image points q'_1, q'_2, ..., q'_{n-1} on the search images, use multi-image bundle adjustment to compute the object-space three-dimensional coordinates (X_i, Y_i, Z_i) of the ground point corresponding to q_i, according to the image plane coordinates of the n image points on their respective images and the exterior orientation elements of the n images;
(2) compute the absolute differences ΔX_i, ΔY_i between the object-space plane coordinates (X_i, Y_i) corresponding to each point to be matched q_i and the input object-space primitive coordinates (X_0, Y_0); if ΔX_i and ΔY_i are both less than or equal to the set threshold, the multi-image matching result of the point to be matched q_i is considered to satisfy the object coordinate consistency requirement, q_i and its conjugate image points are returned as one group of image-space matching results of the ground primitive, and Z_i is returned as one object-space elevation value of the ground primitive; if ΔX_i or ΔY_i is greater than the set threshold, the point to be matched q_i and its multi-image matching result are considered invalid, and this group of results is discarded.
CN201410578456.6A 2014-10-24 2014-10-24 Novel multi-image plumb line track matching method capable of returning multiple elevation values Active CN104318566B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410578456.6A CN104318566B (en) 2014-10-24 2014-10-24 Novel multi-image plumb line track matching method capable of returning multiple elevation values


Publications (2)

Publication Number Publication Date
CN104318566A true CN104318566A (en) 2015-01-28
CN104318566B CN104318566B (en) 2017-04-05

Family

ID=52373792

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410578456.6A Active CN104318566B (en) 2014-10-24 2014-10-24 Novel multi-image plumb line track matching method capable of returning multiple elevation values

Country Status (1)

Country Link
CN (1) CN104318566B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104794490A (en) * 2015-04-28 2015-07-22 中测新图(北京)遥感技术有限责任公司 Slanted image homonymy point acquisition method and slanted image homonymy point acquisition device for aerial multi-view images
CN107271974A (en) * 2017-06-08 2017-10-20 中国人民解放军海军航空工程学院 It is a kind of based on the space-time error acquiring method for stablizing angle point
CN107504959A (en) * 2017-08-22 2017-12-22 北京中测智绘科技有限公司 Utilize the method for oblique aerial radiographic measurement house wall base profile
CN108107462A (en) * 2017-12-12 2018-06-01 中国矿业大学 The traffic sign bar gesture monitoring device and method that RTK is combined with high speed camera
CN109829939A (en) * 2019-01-18 2019-05-31 南京泛在地理信息产业研究院有限公司 A method of it reducing multi-view images and matches corresponding image points search range

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7248794B2 (en) * 2003-06-12 2007-07-24 Imagesat International N.V. Remote platform multiple capture image formation method and apparatus
US7356201B2 (en) * 2002-11-25 2008-04-08 Deutsches Zentrum für Luft- und Raumfahrt e.V. Process and device for the automatic rectification of single-channel or multi-channel images
CN103604417A (en) * 2013-11-15 2014-02-26 南京师范大学 Multi-view image bidirectional matching strategy with constrained object information
CN103606151A (en) * 2013-11-15 2014-02-26 南京师范大学 A wide-range virtual geographical scene automatic construction method based on image point clouds


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张卡 (ZHANG Ka): "Automatic generation of three-dimensional color point cloud based on multi-view image matching", Optics and Precision Engineering (《光学精密工程》) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104794490A (en) * 2015-04-28 2015-07-22 中测新图(北京)遥感技术有限责任公司 Slanted image homonymy point acquisition method and slanted image homonymy point acquisition device for aerial multi-view images
CN104794490B (en) * 2015-04-28 2018-10-02 中测新图(北京)遥感技术有限责任公司 The inclination image same place acquisition methods and device of aviation multi-view images
CN107271974A (en) * 2017-06-08 2017-10-20 中国人民解放军海军航空工程学院 It is a kind of based on the space-time error acquiring method for stablizing angle point
CN107504959A (en) * 2017-08-22 2017-12-22 北京中测智绘科技有限公司 Utilize the method for oblique aerial radiographic measurement house wall base profile
CN108107462A (en) * 2017-12-12 2018-06-01 中国矿业大学 The traffic sign bar gesture monitoring device and method that RTK is combined with high speed camera
CN108107462B (en) * 2017-12-12 2022-02-25 中国矿业大学 RTK and high-speed camera combined traffic sign post attitude monitoring device and method
CN109829939A (en) * 2019-01-18 2019-05-31 南京泛在地理信息产业研究院有限公司 A method of it reducing multi-view images and matches corresponding image points search range

Also Published As

Publication number Publication date
CN104318566B (en) 2017-04-05

Similar Documents

Publication Publication Date Title
CN103604417B (en) The multi-view images bi-directional matching strategy that object space is information constrained
CN111126148B (en) DSM (digital communication system) generation method based on video satellite images
CN102506824B (en) Method for generating digital orthophoto map (DOM) by urban low altitude unmanned aerial vehicle
CN101226057B (en) Digital close range photogrammetry method
CN105160702A (en) Stereoscopic image dense matching method and system based on LiDAR point cloud assistance
Rumpler et al. Automated end-to-end workflow for precise and geo-accurate reconstructions using fiducial markers
CN104268935A (en) Feature-based airborne laser point cloud and image data fusion system and method
CN104318566A (en) Novel multi-image plumb line track matching method capable of returning multiple elevation values
CN102003938A (en) Thermal state on-site detection method for large high-temperature forging
CN102072725A (en) Spatial three-dimension (3D) measurement method based on laser point cloud and digital measurable images
CN109523595A (en) A kind of architectural engineering straight line corner angle spacing vision measuring method
CN107014399A (en) A kind of spaceborne optical camera laser range finder combined system joint calibration method
CN103292733B (en) A kind of corresponding point lookup method based on phase shift and trifocal tensor
CN108399631B (en) Scale invariance oblique image multi-view dense matching method
WO2021109138A1 (en) Three-dimensional image sensing system and related electronic device, and time-of-flight ranging method
Bethmann et al. Semi-global matching in object space
Rumpler et al. Multi-view stereo: Redundancy benefits for 3D reconstruction
CN110889899A (en) Method and device for generating digital earth surface model
Altuntas Integration of point clouds originated from laser scaner and photogrammetric images for visualization of complex details of historical buildings
CN107492107A (en) The object identification merged based on plane with spatial information and method for reconstructing
CN112270698A (en) Non-rigid geometric registration method based on nearest curved surface
CN103411587A (en) Positioning and attitude-determining method and system
Crispel et al. All-sky photogrammetry techniques to georeference a cloud field
CN105787464A (en) A viewpoint calibration method of a large number of pictures in a three-dimensional scene
CN107504959B (en) Method for measuring house wall base outline by utilizing inclined aerial image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20190312

Address after: 210023 No. 18 Lingshan North Road, Qixia District, Nanjing City, Jiangsu Province, 4 Blocks 102

Patentee after: Nanjing Panzhi Geographic Information Industry Research Institute Co., Ltd.

Address before: 210097 Ninghai Road, Drum Tower District, Nanjing, Jiangsu Province, No. 122

Patentee before: Nanjing Normal University