CN104318566B - Novel multi-view image vertical line locus matching method capable of returning multiple elevation values - Google Patents

Novel multi-view image vertical line locus matching method capable of returning multiple elevation values

Info

Publication number
CN104318566B
CN104318566B CN201410578456.6A CN201410578456A
Authority
CN
China
Prior art keywords
image
point
object space
matched
primitive
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410578456.6A
Other languages
Chinese (zh)
Other versions
CN104318566A (en)
Inventor
张卡
盛业华
闾国年
刘学军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Panzhi Geographic Information Industry Research Institute Co., Ltd.
Original Assignee
Nanjing Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Normal University filed Critical Nanjing Normal University
Priority to CN201410578456.6A priority Critical patent/CN104318566B/en
Publication of CN104318566A publication Critical patent/CN104318566A/en
Application granted granted Critical
Publication of CN104318566B publication Critical patent/CN104318566B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a novel multi-view image vertical line locus matching method capable of returning multiple elevation values. The method comprises the following processes: first, using the imaging model of the images, the m points to be matched in the reference image that correspond to a ground object-space primitive (X0, Y0) are determined; object-space-information-constrained multi-view image matching is then carried out on these m points to be matched; finally, the object-space three-dimensional coordinates (Xi, Yi, Zi) corresponding to the points to be matched are computed by multi-image bundle adjustment, the absolute differences ΔXi, ΔYi between each pair of coordinates (Xi, Yi) and the object-space primitive coordinates (X0, Y0) are computed, and the elevation values to be returned are determined according to whether the differences are smaller than a set threshold. The method of the invention combines multi-view image matching with accuracy checking of the matching results, effectively eliminates mismatches, and overcomes the defect of traditional ground-element image matching methods, which can return only one elevation value at vertical ground features.

Description

Novel multi-view image vertical line locus matching method capable of returning multiple elevation values
Technical field
The invention belongs to the fields of digital photogrammetry, geographic information systems and computer vision, and relates to the matching constraint strategy, the determination of the search range and the elimination of mismatches in object-space-based multi-view image matching.
Background art
Digital photogrammetry extracts and updates the geometric, radiometric and semantic information of three-dimensional targets from two-dimensional digital images. It is an important tool in national economic and social development and is widely used in basic surveying and mapping, land-resource survey and development, digital city modeling and other fields. Image matching, a key technology of digital photogrammetry, runs through the whole photogrammetric data-processing workflow and has long been a research focus in photogrammetry, remote sensing and computer vision. In recent years, with the wide use of digital sensors, acquiring multi-view digital images with large overlap has become increasingly easy; a single object point can be imaged on fifteen or even more images simultaneously. The ever-growing volume of remote-sensing image data and the continuously increasing demand for three-dimensional spatial information make full automation, high efficiency and high reliability of digital photogrammetry ever more urgent. How to match corresponding image points on multi-view images reliably and accurately has become a problem that modern digital photogrammetry must solve, and the degree to which the image-matching problem is solved ultimately determines the degree of automation of digital photogrammetry.
The key issue of image matching is to automatically establish the correspondence between image points on different images. By the number of images involved, image-matching algorithms can be divided into stereo matching and multi-view matching; by the matching primitive, they can be divided into matching based on image-space primitives and matching based on object-space primitives. Algorithms based on image-space primitives can be further divided into three classes: gray-level-based matching, feature-based matching, and matching based on image understanding and interpretation. Gray-level-based and feature-based matching are the widely used methods in photogrammetry today and are highly reliable in open areas where the terrain is smooth and the texture is rich. Matching based on image understanding and interpretation tries to reach the goal of matching by semantically describing the imaged objects, but given the current state of research it still leaves many practical problems unsolved and is seldom used in photogrammetry.
However, the purpose of image matching is to extract the geometric information of objects and determine their spatial positions. A matching method based on image-space primitives therefore still has to use space forward intersection to compute the three-dimensional coordinates (X, Y, Z) of the corresponding object points after obtaining the parallax between the left and right images, and then build a digital surface model; some interpolation is usually needed when building the digital surface model, which more or less degrades the accuracy of the object-space information. For this reason, object-space-based image matching methods, which determine the three-dimensional coordinates of object surface points directly, have been studied; these methods are also called "ground-element image matching". In object-space-based matching, the planimetric coordinates (X, Y) of the ground point to be matched are known and only its elevation Z needs to be determined. Existing object-space-based image matching methods mainly use the following strategy to determine the corresponding image points: according to the maximum and minimum elevation range of the ground point, with the object-space elevation Z as the search variable, starting from the minimum elevation and increasing by ΔZ each time, the object-space elevation of the i-th object-space search point is Zi = Zmin + i × ΔZ, i = 1, 2, 3, ..., n, where n is the number of search steps, giving the three-dimensional coordinates (X, Y, Zi) of the object-space search points; each object-space search point is then projected onto every search image to obtain the image-space search points, the similarity between the point to be matched and each search point is computed, and the Zi corresponding to the maximum similarity is taken as the elevation of the ground point.
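For illustration only, the conventional search strategy just described can be sketched in Python as follows; the callable score_at is a hypothetical stand-in for projecting the object-space search point (X, Y, Zi) onto the search images and computing the image-space similarity.

    import numpy as np

    def traditional_vll_search(z_min, z_max, dz, score_at):
        """Conventional ground-element (vertical line locus) search: step the
        object-space elevation from z_min to z_max by dz and keep the single
        elevation with the highest image-space similarity. score_at(z) is a
        hypothetical callable that projects (X, Y, z) onto the search images
        and returns the match measure."""
        zs = np.arange(z_min, z_max + 0.5 * dz, dz)
        scores = np.array([score_at(z) for z in zs])
        return zs[int(np.argmax(scores))]   # only one elevation can ever be returned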
However, in existing ground-element matching methods the step ΔZ is difficult to determine accurately during the search: there is no guarantee that any search elevation Zi passes exactly through the ground point; if ΔZ is too large, correct candidate points are missed, and if it is too small, too much computation and search time are spent. In addition, existing methods can return only one elevation value at each ground point, which obviously cannot reasonably reflect the actual elevation distribution of vertical ground features that have multiple elevation values at the same planimetric position (such as building façades and utility poles). Moreover, existing methods lack an adequate check on whether the matching result is correct; the image-space matching similarity alone cannot guarantee the correctness of the result.
Summary of the invention
The purpose of the present invention is to overcome the shortcomings of existing ground-element image matching methods, which can return only one elevation value and lack a validity check on the result, by proposing a novel multi-view image vertical line locus matching method capable of returning multiple elevation values.
The novel multi-view image vertical line locus matching method capable of returning multiple elevation values comprises the following steps:
Step 1: using the imaging model of the images, determine the image-space row and column numbers of the m points to be matched in the reference image that correspond to the object-space primitive, according to the exterior orientation parameters of the multi-view images, the input planimetric coordinates (X0, Y0) of the ground object-space primitive, and the maximum object-space elevation Zmax and minimum elevation Zmin of the area to be matched;
Step 2: perform object-space-information-constrained multi-view image matching on the m points to be matched in the reference image, to obtain the row and column numbers of the corresponding image points of each point to be matched on the other search images;
Step 3: from the corresponding-point results on the multi-view images, compute by multi-image bundle adjustment the object-space three-dimensional coordinates (Xi, Yi, Zi), i = 1, 2, ..., m, corresponding to the m points to be matched; then compute the absolute differences ΔXi, ΔYi between the coordinates (Xi, Yi) of each point to be matched and the input object-space primitive coordinates (X0, Y0), and determine the elevation values of the object-space primitive to be returned according to whether ΔXi and ΔYi are smaller than a set threshold.
The detailed process of step 1 is:
(1) inputting n multi-view images (aerial, satellite or close-range images) with exterior orientation elements, the planimetric coordinates (X0, Y0) of the ground object-space primitive, and the maximum object-space elevation Zmax and minimum elevation Zmin of the area to be matched;
(2) using the imaging model of the images, computing the image-space row and column numbers of the m candidate points to be matched of the object-space primitive in the reference image, from the exterior orientation elements of the reference image, the planimetric coordinates of the ground object-space primitive, and the input maximum and minimum object-space elevations.
The detailed process of step 2 is:
(1) for each point qi (i = 1, 2, ..., m) to be matched in the reference image, computing, from its image-plane coordinates, the input maximum and minimum object-space elevations and the imaging model of the images, the object-space three-dimensional coordinates of the highest and lowest points of the object-space search interval of the ground point corresponding to qi;
(2) projecting the highest and lowest points of the object-space search interval onto the n-1 search images S1, ..., Sj, ..., Sn-1 according to the imaging model, to obtain on each search image the image-space row and column numbers of the two end points of the epipolar search segment on which the corresponding point of the point to be matched must lie;
(3) from the image-space row and column numbers of the two end points of the epipolar segment on each search image, determining the linear equation of the epipolar line in the image plane, h'j = k'j × l'j + b'j, and the interval [S_lj, E_lj] of the column number l'j of the candidate corresponding points, j = 1, 2, ..., n-1; taking the search image with the largest interval length E_lj - S_lj as the primary search image and the remaining n-2 images as secondary search images;
(4) taking out, one by one, each pixel within the image-space search region of the primary search image as the current candidate corresponding point; first computing, by two-image forward intersection, the object-space three-dimensional coordinates of the object point intersected by the point qi to be matched and the candidate point, and projecting these coordinates onto the remaining n-2 secondary search images to obtain the n-2 candidate corresponding points on the secondary search images, which together with the current candidate on the primary search image form a group of n-1 candidate corresponding points; then computing, with the combined match-measure method based on RGB color features and SIFT features, the multi-image combined matching similarity between the point qi on the reference image and each group of n-1 candidate corresponding points, and taking the group of candidate points with the maximum similarity as the n-1 corresponding image points q'1, q'2, ..., q'n-1 obtained by the object-space-information-constrained multi-image matching.
The detailed process of step 3 is:
(1) for each point qi to be matched in the reference image and its n-1 corresponding image points q'1, q'2, ..., q'n-1 on the search images, computing the object-space three-dimensional coordinates (Xi, Yi, Zi) of the object point corresponding to qi by multi-image bundle adjustment, from the image-plane coordinates of the n image points on their respective images and the exterior orientation elements of the n images;
(2) computing the absolute differences ΔXi, ΔYi between the object-space planimetric coordinates (Xi, Yi) corresponding to each point qi and the input object-space primitive coordinates (X0, Y0); if both ΔXi and ΔYi are smaller than the set threshold, the multi-image matching result of the point qi is considered to satisfy the object-space coordinate consistency requirement, qi and its corresponding image points are returned as one group of image-space matching results of the ground primitive, and Zi is returned as one object-space elevation value of the ground primitive; if ΔXi or ΔYi exceeds the set threshold, the point qi and its multi-image matching result are considered invalid and this group of results is discarded.
The multi-view image vertical line locus matching method of the present invention is a parallel matching method: the matching processes of different ground points are independent of each other, which is very favorable for efficient, fast matching of large numbers of points. The multi-view matching process of the invention fuses the object-space information of the ground point with the image-space information of the multi-view images: the matching search range is constrained by the object-space information of the ground point, while the matching itself is still carried out in image space, so the image-space candidate search points are guaranteed to pass through the image points corresponding to the ground point, ensuring both the time efficiency of the matching and the validity of the search points. In addition, an object-space information consistency check is applied to the matching results, which effectively eliminates mismatches and improves the reliability of object-space-based multi-view matching, solving the problems of traditional object-space-based image matching methods, which can obtain only one elevation value at vertical ground features and cannot verify the validity of the result.
Description of the drawings
Fig. 1 is the framework of the method of the embodiment of the present invention;
Fig. 2 is a schematic diagram of determining the candidate points to be matched of the ground object-space primitive in the reference image in the embodiment of the present invention;
Fig. 3 is a schematic diagram of the object-space-information-constrained multi-view image matching in the embodiment of the present invention;
Fig. 4 shows the surveyed image positions of a ground object-space primitive on three aerial images in the embodiment of the present invention, where (a) is the image position of the ground primitive in the reference image, (b) is its image position on the first search image, and (c) is its image position on the second search image;
Fig. 5 (a), (b), (c) show, for Fig. 4 (a), (b), (c) respectively, the image-space results returned by the traditional multi-view image vertical line locus matching method;
Fig. 6 (a), (b), (c) show, for Fig. 4 (a), (b), (c) respectively, the image-space results returned by the novel multi-view image vertical line locus matching method of the present invention.
Specific embodiment
The present invention is described in further detail below with reference to a specific embodiment and the accompanying drawings.
According to the imaging model of the images, the input image exterior orientation parameters and the ground object-space information, the present invention first determines the multiple points to be matched in the reference image that correspond to the ground object-space primitive; it then performs object-space-information-constrained multi-view image matching on these points to be matched, so as to make full use of the redundant information of the multiple images, improve the reliability of the match-measure computation and guarantee that the candidate matching results pass through the image points corresponding to the ground primitive; finally, multi-image bundle adjustment is applied to the multi-view matching results to compute the object-space three-dimensional coordinates corresponding to each candidate matching result, and the planimetric coordinates in these results are checked for object-space information consistency against the planimetric coordinates of the input ground primitive, so as to effectively eliminate mismatches among the candidate matching results and improve the correctness of the multi-view matching results.
As shown in Fig. 1, the novel multi-view image vertical line locus matching method capable of returning multiple elevation values consists of three parts: (1) determining the multiple candidate points to be matched of the ground object-space primitive in the reference image; (2) object-space-information-constrained multi-view image matching of the points to be matched in the reference image; (3) object-space coordinate consistency checking of the multi-image matching results. The specific implementation steps are:
Step 1: determine the multiple candidate points to be matched of the ground object-space primitive in the reference image.
The schematic diagram for determining the multiple candidate points to be matched of the ground object-space primitive in the reference image is shown in Fig. 2; the detailed procedure is as follows:
(1) Input n multi-view images with exterior orientation elements (aerial, satellite or close-range images, of which one is the reference image I0 and the remaining n-1 are the search images S1, ..., Sj, ..., Sn-1), the planimetric coordinates (X0, Y0) of the ground object-space primitive, and the maximum object-space elevation Zmax and minimum elevation Zmin of the area to be matched.
(2) Using the imaging model of the images, compute the image-space row and column numbers of the m candidate points to be matched of the object-space primitive in the reference image, from the exterior orientation elements of the reference image, the planimetric coordinates of the ground object-space primitive, and the input maximum and minimum object-space elevations.
Taking the collinearity model of aerial images as an example, let the planimetric coordinates of the input ground object-space primitive P be (X0, Y0), the exterior orientation elements of the reference image I0 be (XS0, YS0, ZS0, φ0, ω0, κ0), and the maximum and minimum object-space elevations be Zmax and Zmin. The three-dimensional coordinates of the highest point Pmax and the lowest point Pmin of the object-space search interval of the ground point are then (X0, Y0, Zmax) and (X0, Y0, Zmin) respectively. Back-projecting Pmax and Pmin onto the reference image gives the two corresponding image points pmax and pmin with two-dimensional image-plane coordinates (x1, y1) and (x2, y2). The image-plane coordinates (x, y) are computed from the object-space three-dimensional coordinates (X, Y, Z) by the collinearity equations:
x = -f × [a1(X - XS0) + b1(Y - YS0) + c1(Z - ZS0)] / [a3(X - XS0) + b3(Y - YS0) + c3(Z - ZS0)]
y = -f × [a2(X - XS0) + b2(Y - YS0) + c2(Z - ZS0)] / [a3(X - XS0) + b3(Y - YS0) + c3(Z - ZS0)]    (1)
where a1, a2, a3, b1, b2, b3, c1, c2, c3 are the nine direction cosines of the rotation matrix determined by the exterior orientation angle elements (φ0, ω0, κ0) of image I0, and f is the focal length of the camera that took the image.
Then, according to the principal point row and column numbers (h0, l0) of image I0 and the pixel size μ, the two-dimensional image-plane coordinates (x1, y1), (x2, y2) of pmax and pmin are converted into image row and column coordinates (h1, l1), (h2, l2) with formula (2).
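For illustration, the back projection of formula (1) and the image-plane to row/column conversion of formula (2) can be sketched in Python as follows; the phi-omega-kappa rotation order and the sign convention of the row/column conversion are assumptions, since the formula images are not reproduced in this text.

    import numpy as np

    def rotation_matrix(phi, omega, kappa):
        """Rotation matrix from the exterior orientation angle elements; the
        phi-omega-kappa rotation order is an assumption."""
        Rphi = np.array([[np.cos(phi), 0.0, -np.sin(phi)],
                         [0.0,         1.0,  0.0        ],
                         [np.sin(phi), 0.0,  np.cos(phi)]])
        Romega = np.array([[1.0, 0.0,            0.0           ],
                           [0.0, np.cos(omega), -np.sin(omega)],
                           [0.0, np.sin(omega),  np.cos(omega)]])
        Rkappa = np.array([[np.cos(kappa), -np.sin(kappa), 0.0],
                           [np.sin(kappa),  np.cos(kappa), 0.0],
                           [0.0,            0.0,           1.0]])
        return Rphi @ Romega @ Rkappa

    def object_to_image(X, Y, Z, Xs, Ys, Zs, R, f):
        """Collinearity equations (formula (1)): object point to image plane (x, y)."""
        d = R.T @ np.array([X - Xs, Y - Ys, Z - Zs])
        return -f * d[0] / d[2], -f * d[1] / d[2]

    def plane_to_rowcol(x, y, h0, l0, mu):
        """Formula (2): image-plane coordinates to row and column numbers; rows
        growing downward and columns growing to the right is assumed."""
        return h0 - y / mu, l0 + x / mu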
Therefore, the m candidate points to be matched of the ground primitive in the reference image lie on the image-plane line segment pmax pmin, and the row and column numbers (hi, li) of the i-th (i = 1, 2, ..., m) candidate point qi are computed in one of two cases:
1. If abs(l1 - l2) >= abs(h1 - h2), the segment pmax pmin is closer to the row direction. Let lmax = max(l1, l2) and lmin = min(l1, l2), where the functions max() and min() take the maximum and minimum of their arguments; then m = lmax - lmin + 1, li takes every value from lmin to lmax, and hi is computed by linear interpolation along the segment: hi = h1 + (li - l1) × (h2 - h1) / (l2 - l1).
2. If abs(l1 - l2) < abs(h1 - h2), the segment pmax pmin is closer to the column direction. Let hmax = max(h1, h2) and hmin = min(h1, h2); then m = hmax - hmin + 1, hi takes every value from hmin to hmax, and li is computed by linear interpolation along the segment: li = l1 + (hi - h1) × (l2 - l1) / (h2 - h1).
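A minimal Python sketch of the candidate-point enumeration just described; rounding the interpolated coordinate to the nearest pixel, and the assumption that pmax and pmin do not coincide, are not stated in the text.

    def candidate_points(h1, l1, h2, l2):
        """Enumerate the m candidate points to be matched along the image-plane
        segment pmax-pmin following the two cases above (integer row/column
        numbers expected)."""
        cands = []
        if abs(l1 - l2) >= abs(h1 - h2):        # segment closer to the row direction
            l_lo, l_hi = int(min(l1, l2)), int(max(l1, l2))
            for li in range(l_lo, l_hi + 1):
                hi = h1 + (li - l1) * (h2 - h1) / (l2 - l1)
                cands.append((int(round(hi)), li))
        else:                                   # segment closer to the column direction
            h_lo, h_hi = int(min(h1, h2)), int(max(h1, h2))
            for hi in range(h_lo, h_hi + 1):
                li = l1 + (hi - h1) * (l2 - l1) / (h2 - h1)
                cands.append((hi, int(round(li))))
        return cands                            # m = len(cands) candidate points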
Step 2: perform object-space-information-constrained multi-view image matching on the points to be matched in the reference image.
The schematic diagram of the object-space-information-constrained multi-view image matching of a point qi to be matched in the reference image is shown in Fig. 3; the procedure is:
(1) For each point qi (i = 1, 2, ..., m) to be matched in the reference image, compute, from its image-plane coordinates, the input maximum and minimum object-space elevations and the imaging model of the images, the object-space three-dimensional coordinates of the highest and lowest points of the object-space search interval of the corresponding ground point.
Assume that the image-space row and column numbers of the point qi to be matched in the reference image I0 are (hi, li). First convert them into image-plane coordinates (xi, yi) with formula (2); then, from the exterior orientation elements of the reference image and the input maximum elevation Zmax and minimum elevation Zmin, compute with formula (5) the object-space planimetric coordinates (Xmax, Ymax), (Xmin, Ymin) of the highest point Qmax and the lowest point Qmin of the search interval on the photographic ray through qi (taking Qmax as an example), where the symbols in formula (5) have the same meanings as in formula (1).
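For illustration, the computation of the planimetric coordinates on the photographic ray at a given elevation can be sketched as follows; since formula (5) is not reproduced in this text, the usual inversion of the collinearity equations is assumed (numpy and the rotation matrix R are as in the earlier sketch).

    import numpy as np

    def ray_point_at_elevation(x, y, Z, Xs, Ys, Zs, R, f):
        """Planimetric coordinates (X, Y) of the point on the photographic ray
        through image-plane point (x, y) at object-space elevation Z."""
        u = R @ np.array([x, y, -f])    # ray direction in object space
        s = (Z - Zs) / u[2]
        return Xs + s * u[0], Ys + s * u[1]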
(2) Project the highest and lowest points of the object-space search interval onto the n-1 search images S1, ..., Sj, ..., Sn-1 according to the imaging model, to obtain on each search image the image-space row and column numbers of the two end points of the epipolar search segment on which the corresponding point of the point to be matched must lie.
Taking search image Sj as an example, the image-space row and column numbers of the two end points of the epipolar search segment on which the corresponding point of qi must lie are computed as follows:
First, with the imaging model of formula (1), project the highest point Qmax and the lowest point Qmin of the object-space search interval of the point to be matched onto the image (this time the parameters of the imaging model are taken from the exterior orientation elements of the search image Sj) to obtain the image-plane coordinates of the two corresponding image points qj,max and qj,min; then convert the image-plane coordinates into image-space row and column numbers with formula (2).
(3) From the image-space row and column numbers of the two end points of the epipolar segment on each search image, determine the linear equation of the epipolar line in the image plane, h'j = k'j × l'j + b'j, and the interval [S_lj, E_lj] of the column number l'j of the candidate corresponding points, j = 1, 2, ..., n-1; take the search image with the largest interval length E_lj - S_lj as the primary search image and the remaining n-2 images as secondary search images.
From the image-space row and column numbers of the two end points qj,max and qj,min of the epipolar search segment of qi on search image Sj, the interval [S_lj, E_lj] of the column numbers of the candidate corresponding points on the epipolar line is obtained (with t = 1, 2, ..., E_lj - S_lj + 1), and the linear equation of the epipolar line, i.e. the relation h'j,t = k'j × l'j,t + b'j between the row number and the column number of a candidate corresponding point, is computed with formula (6), k'j and b'j being the slope and intercept of the line through qj,max and qj,min.
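A minimal sketch of the epipolar segment and line coefficients, assuming that formula (6) is simply the line through the two projected end points; the column-number parameterization follows the text and does not handle a vertical segment with equal column numbers.

    def epipolar_segment(hj_max, lj_max, hj_min, lj_min):
        """Column interval [S_l, E_l] of the candidate corresponding points on
        search image Sj and the coefficients k, b of the epipolar line
        h' = k * l' + b through the projections of Qmax and Qmin."""
        S_l, E_l = min(lj_max, lj_min), max(lj_max, lj_min)
        k = (hj_max - hj_min) / (lj_max - lj_min)
        b = hj_max - k * lj_max
        return S_l, E_l, k, b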
(4) Take out, one by one, each pixel within the image-space search region of the primary search image as the current candidate corresponding point. First, using two-image forward intersection, compute the object-space three-dimensional coordinates of the object point intersected by the point qi to be matched and the candidate point, and project these coordinates onto the remaining n-2 secondary search images to obtain the n-2 candidate corresponding points on the secondary search images, which together with the current candidate on the primary search image form a group of n-1 candidate corresponding points. Then, with the combined match-measure method based on RGB color features and SIFT features, compute the multi-image combined matching similarity between the point qi on the reference image and each group of n-1 candidate corresponding points, and take the group of candidate points with the maximum similarity as the n-1 corresponding image points q'1, q'2, ..., q'n-1 obtained by the object-space-information-constrained multi-image matching.
Assume that S1 is the primary search image and the remaining n-2 images are the secondary search images. The corresponding points of the point qi to be matched on the multiple images are determined as follows:
First, take any candidate corresponding point q'1,t on the epipolar search segment of the primary search image S1 (its column number ranges over [S_l1, E_l1], its row number is computed with formula (6), and t = 1, 2, ..., E_l1 - S_l1 + 1), convert its row and column numbers into image-plane coordinates with formula (2), and then compute, by the two-image forward intersection method (formula (7)), the three-dimensional coordinates (Xi, Yi, Zi) of the candidate object point Qi intersected by the point qi to be matched and the candidate point q'1,t.
The rotation matrices in formula (7) are computed from the exterior orientation angle elements of image I0 and image S1 respectively.
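For illustration, the two-image forward intersection can be sketched with the classical point-projection-coefficient formulation; the exact form of the patent's formula (7) is not reproduced in this text, so this formulation is an assumption. The camera tuple (Xs, Ys, Zs, R, f) is as in the earlier sketches.

    import numpy as np

    def forward_intersection(x1, y1, cam1, x2, y2, cam2):
        """Two-image space forward intersection of the rays through image-plane
        points (x1, y1) and (x2, y2)."""
        Xs1, Ys1, Zs1, R1, f1 = cam1
        Xs2, Ys2, Zs2, R2, f2 = cam2
        u1 = R1 @ np.array([x1, y1, -f1])      # image-space auxiliary coordinates
        u2 = R2 @ np.array([x2, y2, -f2])
        Bx, By, Bz = Xs2 - Xs1, Ys2 - Ys1, Zs2 - Zs1
        den = u1[0] * u2[2] - u2[0] * u1[2]
        N1 = (Bx * u2[2] - Bz * u2[0]) / den   # point projection coefficients
        N2 = (Bx * u1[2] - Bz * u1[0]) / den
        X = Xs1 + N1 * u1[0]
        Y = 0.5 * ((Ys1 + N1 * u1[1]) + (Ys2 + N2 * u2[1]))
        Z = Zs1 + N1 * u1[2]
        return X, Y, Z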
Next, project Qi onto the remaining n-2 secondary search images with formula (1) to obtain the image-plane coordinates of the candidate corresponding point q'j,t on each search image Sj (j = 2, ..., n-1), convert them into image-space column numbers with formula (2), and compute the corresponding image-space row numbers from the epipolar line equation (6) of image Sj. This yields the n-2 candidate corresponding points q'j,t on the other secondary search images that correspond to the candidate point q'1,t on the primary search image, and thus the t-th group of n-1 candidate corresponding points q'1,t, q'2,t, ..., q'j,t, ..., q'n-1,t of the point qi to be matched.
Then, using the two-image combined match-measure computation method based on color features and SIFT features, compute the two-image match measures between the point qi to be matched and each of the t-th group of n-1 candidate points on the search images, and take the mean of the n-1 two-image match measures as the multi-image combined match measure ρm,t between the t-th group of n-1 candidate corresponding points and the point to be matched.
Finally, take the group of candidate points with the maximum multi-image combined measure, max{ρm,t | t = 1, 2, ..., E_l1 - S_l1 + 1}, as the n-1 corresponding image points q'1, q'2, ..., q'n-1 of the point qi to be matched on the n-1 search images obtained by the object-space-information-constrained multi-image matching.
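The candidate search loop described above can be tied together as the following sketch, reusing forward_intersection and object_to_image from the earlier sketches. The records primary and secondaries and the callable score_pair are hypothetical containers for each search image's camera, epipolar segment and pixel conversions, and for the two-image match measure against qi; for brevity the row number on a secondary image is taken directly from the projection rather than re-derived from that image's epipolar equation (6).

    def match_point(qi_xy, ref_cam, primary, secondaries, score_pair):
        """Object-space-constrained multi-image matching of one point qi:
        walk the epipolar segment of the primary search image, intersect,
        transfer to the secondary images, score each group and keep the best."""
        best_group, best_score = None, float("-inf")
        S_l, E_l, k, b = primary.segment
        for l in range(S_l, E_l + 1):                  # candidate column on the primary image
            h = k * l + b                              # row from the epipolar line equation
            x1, y1 = primary.rowcol_to_plane(h, l)
            Xi, Yi, Zi = forward_intersection(qi_xy[0], qi_xy[1], ref_cam, x1, y1, primary.cam)
            group = [(h, l)]
            scores = [score_pair(primary, h, l)]
            for s in secondaries:                      # transfer the intersected point Qi
                xs, ys = object_to_image(Xi, Yi, Zi, *s.cam)
                hs, ls = s.plane_to_rowcol(xs, ys)
                group.append((hs, ls))
                scores.append(score_pair(s, hs, ls))
            rho_m = sum(scores) / len(scores)          # multi-image combined measure
            if rho_m > best_score:
                best_group, best_score = group, rho_m
        return best_group                              # winning group of n-1 candidate points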
For the point qi to be matched on the reference image I0 and a candidate corresponding point q'j,t on the search image Sj, the two-image combined match measure based on color features and SIFT features is computed as follows:
First, take two image windows W, W' of size N × N (N is usually odd) centered on qi in image I0 and on q'j,t in image Sj respectively, compute the gray-level correlation coefficients ρR, ρG, ρB between the two windows in the red, green and blue channels (formula (8), taking the red channel as an example), and take the mean of the three correlation coefficients as the color-feature similarity measure ρC = (ρR + ρG + ρB)/3,
where fR(i, j) and f'R(i, j) in formula (8) denote the gray values in the red channel of the pixel in row i, column j of windows W and W' respectively.
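For illustration, the color-feature similarity can be sketched as follows, assuming that formula (8) is the usual normalized correlation coefficient (the formula image is not reproduced in this text).

    import numpy as np

    def channel_correlation(W, Wp):
        """Correlation coefficient between two N x N windows in one color
        channel; W and Wp are 2-D arrays of gray values."""
        a, b = W - W.mean(), Wp - Wp.mean()
        return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

    def color_similarity(win_rgb, win_rgb_p):
        """rho_C: mean of the red, green and blue channel correlation
        coefficients; windows have shape (N, N, 3)."""
        return sum(channel_correlation(win_rgb[..., c], win_rgb_p[..., c])
                   for c in range(3)) / 3.0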
Next, take two image windows W, W' of size 16 × 16 centered on qi in image I0 and on q'j,t in image Sj respectively; for each element of the windows, take as g(i, j) and g'(i, j) the mean of the gray values of the corresponding pixel in the red, green and blue channels, g(i, j) = (fR(i, j) + fG(i, j) + fB(i, j))/3 and g'(i, j) = (f'R(i, j) + f'G(i, j) + f'B(i, j))/3; then use the SIFT feature description method to compute the 128-dimensional SIFT feature vectors V, V' of the point to be matched and the search point (the multi-scale space construction, extreme point detection and key-point localization steps of the SIFT method are not involved), and compute the SIFT feature similarity ρS between the two image windows with formula (9).
Finally, take the mean of the color-feature similarity measure ρC and the SIFT feature similarity ρS as the two-image combined match measure between the point to be matched and the candidate corresponding point: ρs = (ρC + ρS)/2.
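A sketch of the SIFT similarity and the combined measure follows; extraction of the 128-dimensional descriptors themselves is omitted, and cosine similarity is assumed for formula (9), which is not reproduced in this text.

    import numpy as np

    def sift_similarity(V, Vp):
        """rho_S between two 128-dimensional SIFT descriptor vectors
        (cosine similarity assumed)."""
        V, Vp = np.asarray(V, float), np.asarray(Vp, float)
        return float(V @ Vp / (np.linalg.norm(V) * np.linalg.norm(Vp)))

    def combined_measure(rho_C, rho_S):
        """Two-image combined match measure rho_s = (rho_C + rho_S) / 2."""
        return 0.5 * (rho_C + rho_S)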
Step 3: object-space coordinate consistency checking of the multi-image matching results.
(1) For each point qi to be matched in the reference image and its n-1 corresponding image points q'1, q'2, ..., q'n-1 on the search images, compute the object-space three-dimensional coordinates (Xi, Yi, Zi) of the object point corresponding to qi by multi-image bundle adjustment, from the image-plane coordinates of the n image points on their respective images and the exterior orientation elements of the n images.
Multi-image bundle adjustment takes as its elementary adjustment unit the bundle of rays formed on each image by an image point, the corresponding object point and the perspective center, and takes the imaging model of the images (e.g. the collinearity condition equations for aerial images, or the rational function model for satellite images) as the basic adjustment equations. The image-plane coordinates of the image points whose object coordinates are to be solved are treated as observations, the error equations of the image-point coordinates on all images participating in the matching are listed, and the six exterior orientation elements of each image and the three-dimensional object-space coordinates corresponding to the image points are solved by least squares. When the exterior orientation elements of the images are known, the bundle adjustment is used only to solve the object-space three-dimensional coordinates of the points. For a point qi to be matched on the reference image I0, its image-plane coordinates (xi, yi) are the observations; with the exterior orientation elements of I0 known, the error equations relating the image-plane coordinates (xi, yi) to the corresponding object-space three-dimensional coordinates (Xi, Yi, Zi) can be written as formula (10) (taking the collinearity equations of aerial images as an example),
where the coefficients of formula (10) are the partial derivatives of the collinearity equations with respect to the object-space coordinates and the other symbols have the same meanings as in formula (1); (Xi0, Yi0, Zi0) is an approximation of the object-point coordinates (Xi, Yi, Zi) to be solved, which can be computed from qi and its corresponding point q'j on any one search image Sj by the two-image forward intersection method of formula (7); (dXi, dYi, dZi) are the adjustment corrections to the approximate coordinates; and (xi0, yi0) are the approximate image-plane coordinates of qi on I0 obtained by substituting the approximate object coordinates into formula (1).
Therefore, for the point qi to be matched and its set of n-1 corresponding image points {q'1, q'2, ..., q'n-1} on the n-1 search images, 2(n-1) error equations of the corresponding points can be listed from formula (10); together with the 2 error equations of qi on the reference image, this gives the 2n error equations of the n image points, and the error equations solved by the multi-image bundle adjustment for the three-dimensional coordinates are:
V = B·dX - L    (11)
where V is the residual vector, B is the coefficient matrix of the 2n error equations, L is the constant vector of the observed-minus-computed image-plane coordinates, and dX = [dXi dYi dZi]T.
By the least-squares principle, the corrections to the approximate coordinates are solved from the error equations as dX = (BTB)-1(BTL), and the three-dimensional coordinates of the object point corresponding to qi are then Xi = Xi0 + dXi, Yi = Yi0 + dYi, Zi = Zi0 + dZi.
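For illustration, solving the object point from the error equations (11) with known exterior orientations can be sketched as follows, reusing object_to_image and forward_intersection from the earlier sketches; the numerical Jacobian is an implementation shortcut, not part of the patent.

    import numpy as np

    def solve_object_point(obs, cams, approx, iters=5, step=1e-3):
        """Multi-image point determination: iterate the linearized collinearity
        error equations V = B*dX - L and apply dX = (B^T B)^-1 B^T L.
        obs is a list of (x, y) image-plane observations, cams the matching list
        of (Xs, Ys, Zs, R, f) tuples, and approx an approximate object point,
        e.g. from forward_intersection()."""
        P = np.array(approx, dtype=float)
        for _ in range(iters):
            B, L = [], []
            for (x, y), cam in zip(obs, cams):
                x0, y0 = object_to_image(*P, *cam)
                L.extend([x - x0, y - y0])             # observed minus computed
                row_x, row_y = [], []
                for kk in range(3):                    # numerical partial derivatives
                    dP = np.zeros(3)
                    dP[kk] = step
                    xk, yk = object_to_image(*(P + dP), *cam)
                    row_x.append((xk - x0) / step)
                    row_y.append((yk - y0) / step)
                B.extend([row_x, row_y])
            B, L = np.array(B), np.array(L)
            dX = np.linalg.solve(B.T @ B, B.T @ L)     # least-squares corrections
            P += dX
        return tuple(P)                                # (Xi, Yi, Zi)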
(2) Compute the absolute differences ΔXi, ΔYi between the object-space planimetric coordinates (Xi, Yi) corresponding to each point qi and the input object-space primitive coordinates (X0, Y0). If both ΔXi and ΔYi are smaller than the set threshold, the multi-image matching result of the point qi is considered to satisfy the object-space coordinate consistency requirement: qi and its corresponding image points are returned as one group of image-space matching results of the ground primitive, and Zi is returned as one object-space elevation value of the ground primitive. If ΔXi or ΔYi exceeds the set threshold, the point qi and its multi-image matching result are considered invalid and this group of results is discarded.
After the object-space-constrained multi-view image matching and the multi-image bundle adjustment, the three-dimensional coordinates (Xi, Yi, Zi) of the m candidate points qi to be matched in the reference image corresponding to the ground object-space primitive (X0, Y0) are obtained. However, these m points are not necessarily all image points of the ground primitive. If there is only one elevation value at the ground primitive (e.g. an object point on flat ground), there is also only one corresponding image point in the reference image; if there are multiple elevation values at the ground primitive (vertical ground features such as building façades and utility poles), there are multiple corresponding image points in the reference image. Some of the m candidate points of the ground primitive are therefore bound to be image points of other ground points, and whether the object-space three-dimensional coordinates of a candidate point are consistent with the object coordinates of the ground primitive is used to decide which candidate points are correct. The object-coordinate consistency check is carried out as follows:
First, compute the absolute differences ΔXi = abs(Xi - X0), ΔYi = abs(Yi - Y0) between the object coordinates of the i-th candidate point qi to be matched and the object coordinates of the ground primitive. Then compare ΔXi, ΔYi with the given difference threshold T (which can be set to the ground resolution of one image pixel). If |ΔXi| <= T and |ΔYi| <= T, Zi is considered correct and is put into the set of returned object-space elevation values, and the point qi together with its n-1 corresponding image points q'1, q'2, ..., q'n-1 on the search images is put into the set of returned image-space results as one group of image-space matching results. If |ΔXi| > T or |ΔYi| > T, Zi and its corresponding image-space matching result (qi and its n-1 corresponding image points q'1, q'2, ..., q'n-1) are considered wrong and this group of results is discarded.
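A minimal sketch of the object-coordinate consistency check described above:

    def consistency_filter(candidates, X0, Y0, T):
        """Keep a candidate only if its adjusted planimetric coordinates agree
        with the input primitive (X0, Y0) within the threshold T (for example
        one pixel's ground resolution). candidates holds tuples
        (Xi, Yi, Zi, image_points) produced by the preceding steps."""
        elevations, image_results = [], []
        for Xi, Yi, Zi, image_points in candidates:
            if abs(Xi - X0) <= T and abs(Yi - Y0) <= T:
                elevations.append(Zi)               # one returned elevation value
                image_results.append(image_points)  # one returned group of image-space matches
        return elevations, image_results            # may hold zero, one or several elevations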
Therefore, under the constraint of the object-space coordinate consistency check, the novel multi-view image vertical line locus matching method of the present invention that can return multiple elevation values follows the principle of preferring quality over quantity: it may return one or several object-space elevation values, or none at all. Fig. 4 shows the surveyed corresponding positions of a ground object-space primitive on three aerial images. The ground primitive is a point on a building façade with object coordinates (397211.3781, 3555317.2179); the leftmost image is the reference image, the middle and right images are the search images, the corresponding image point positions are marked with cross-hairs, and the horizontal lines on the middle and right images are the epipolar lines. Fig. 5 shows the image-space results returned by the traditional multi-view vertical line locus matching method for the three aerial images of Fig. 4, and Fig. 6 shows the image-space results returned by the novel multi-view vertical line locus matching method of the present invention. In Figs. 5 and 6 the returned image-space matching results are marked with circled cross-hairs. The traditional method took 19.213 seconds, while the novel method of the present invention took 1.809 seconds.
The results show that the traditional multi-view vertical line locus matching method is not only more time-consuming but can return only one image-space matching result and one object-space elevation value, and even this returned value is not necessarily correct: Fig. 5 shows that the image-space matching result returned by the traditional method deviates considerably from the actual position of the ground primitive on the images. The novel vertical line locus matching method of the present invention takes only about one tenth of the time of the traditional method, can return multiple elevation values, and, because an object-coordinate consistency check is applied to the matching results, eliminates many incorrect matching results.

Claims (3)

1. A novel multi-view image vertical line locus matching method capable of returning multiple elevation values, characterized by comprising the following steps:
step 1: using the imaging model of the images, determining the image-space row and column numbers of the m points to be matched in the reference image that correspond to the object-space primitive, according to the exterior orientation parameters of the multi-view images, the input planimetric coordinates (X0, Y0) of the ground object-space primitive, and the maximum object-space elevation Zmax and minimum elevation Zmin of the area to be matched;
step 2: performing object-space-information-constrained multi-view image matching on the m points to be matched in the reference image, to obtain the row and column numbers of the corresponding image points of each point to be matched on the other search images;
step 3: from the corresponding-point results on the multi-view images, computing by multi-image bundle adjustment the object-space three-dimensional coordinates (Xi, Yi, Zi), i = 1, 2, ..., m, corresponding to the m points to be matched; then computing the absolute differences ΔXi, ΔYi between the coordinates (Xi, Yi) of each point to be matched and the input object-space primitive coordinates (X0, Y0), and determining the elevation values of the object-space primitive to be returned according to whether ΔXi and ΔYi are smaller than a set threshold.
2. The novel multi-view image vertical line locus matching method capable of returning multiple elevation values according to claim 1, characterized in that the detailed process of step 1 is:
(1) inputting n multi-view images with exterior orientation elements, including aerial, satellite or close-range images, the planimetric coordinates (X0, Y0) of the ground object-space primitive, and the maximum object-space elevation Zmax and minimum elevation Zmin of the area to be matched;
(2) using the imaging model of the images, computing the image-space row and column numbers of the m candidate points to be matched of the object-space primitive in the reference image, from the exterior orientation elements of the reference image, the planimetric coordinates of the ground object-space primitive, and the input maximum and minimum object-space elevations.
3. The novel multi-view image vertical line locus matching method capable of returning multiple elevation values according to claim 1 or 2, characterized in that the detailed process of step 3 is:
(1) for each point qi to be matched in the reference image and its n-1 corresponding image points q'1, q'2, ..., q'n-1 on the search images, computing the object-space three-dimensional coordinates (Xi, Yi, Zi) of the object point corresponding to qi by multi-image bundle adjustment, from the image-plane coordinates of the n image points on their respective images and the exterior orientation elements of the n images;
(2) computing the absolute differences ΔXi, ΔYi between the object-space planimetric coordinates (Xi, Yi) corresponding to each point qi and the input object-space primitive coordinates (X0, Y0); if both ΔXi and ΔYi are smaller than the set threshold, the multi-image matching result of the point qi is considered to satisfy the object-space coordinate consistency requirement, qi and its corresponding image points are returned as one group of image-space matching results of the ground primitive, and Zi is returned as one object-space elevation value of the ground primitive; if ΔXi or ΔYi exceeds the set threshold, the point qi and its multi-image matching result are considered invalid and this group of results is discarded.
CN201410578456.6A 2014-10-24 2014-10-24 Novel multi-view image vertical line locus matching method capable of returning multiple elevation values Active CN104318566B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410578456.6A CN104318566B (en) 2014-10-24 2014-10-24 Novel multi-view image vertical line locus matching method capable of returning multiple elevation values

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410578456.6A CN104318566B (en) 2014-10-24 2014-10-24 Novel multi-view image vertical line locus matching method capable of returning multiple elevation values

Publications (2)

Publication Number Publication Date
CN104318566A CN104318566A (en) 2015-01-28
CN104318566B true CN104318566B (en) 2017-04-05

Family

ID=52373792

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410578456.6A Active CN104318566B (en) 2014-10-24 2014-10-24 Novel multi-view image vertical line locus matching method capable of returning multiple elevation values

Country Status (1)

Country Link
CN (1) CN104318566B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104794490B (en) * 2015-04-28 2018-10-02 中测新图(北京)遥感技术有限责任公司 The inclination image same place acquisition methods and device of aviation multi-view images
CN107271974B (en) * 2017-06-08 2020-10-20 中国人民解放军海军航空大学 Space-time error solving method based on stable angular points
CN107504959B (en) * 2017-08-22 2020-04-03 北京中测智绘科技有限公司 Method for measuring house wall base outline by utilizing inclined aerial image
CN108107462B (en) * 2017-12-12 2022-02-25 中国矿业大学 RTK and high-speed camera combined traffic sign post attitude monitoring device and method
CN109829939B (en) * 2019-01-18 2023-03-24 南京泛在地理信息产业研究院有限公司 Method for narrowing search range of multi-view image matching same-name image points


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7356201B2 (en) * 2002-11-25 2008-04-08 Deutsches Zentrum für Luft- und Raumfahrt e.V. Process and device for the automatic rectification of single-channel or multi-channel images
US7248794B2 (en) * 2003-06-12 2007-07-24 Imagesat International N.V. Remote platform multiple capture image formation method and apparatus
CN103604417A (en) * 2013-11-15 2014-02-26 南京师范大学 Multi-view image bidirectional matching strategy with constrained object information
CN103606151A (en) * 2013-11-15 2014-02-26 南京师范大学 A wide-range virtual geographical scene automatic construction method based on image point clouds

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"基于多视影像匹配的三维彩色点云自动生成";张卡;《光学精密工程》;20130731;第21卷(第7期);第1940-1849页 *

Also Published As

Publication number Publication date
CN104318566A (en) 2015-01-28

Similar Documents

Publication Publication Date Title
CN103604417B Multi-view image bidirectional matching strategy with object-space information constraints
CN106097348B (en) A kind of fusion method of three-dimensional laser point cloud and two dimensional image
CN101226057B (en) Digital close range photogrammetry method
CN104318566B Novel multi-view image vertical line locus matching method capable of returning multiple elevation values
CN104574393B (en) A kind of three-dimensional pavement crack pattern picture generates system and method
CN112927360A (en) Three-dimensional modeling method and system based on fusion of tilt model and laser point cloud data
CN103424112B (en) A kind of motion carrier vision navigation method auxiliary based on laser plane
Xie et al. Study on construction of 3D building based on UAV images
CN102509348B (en) Method for showing actual object in shared enhanced actual scene in multi-azimuth way
CN105069843A (en) Rapid extraction method for dense point cloud oriented toward city three-dimensional modeling
CN103093459B (en) Utilize the method that airborne LiDAR point cloud data assisted image mates
CN105160702A (en) Stereoscopic image dense matching method and system based on LiDAR point cloud assistance
CN106127771A (en) Tunnel orthography system and method is obtained based on laser radar LIDAR cloud data
CN106485690A (en) Cloud data based on a feature and the autoregistration fusion method of optical image
CN107014399A (en) A kind of spaceborne optical camera laser range finder combined system joint calibration method
CN106295512A (en) Many correction line indoor vision data base construction method based on mark and indoor orientation method
CN109341668A (en) Polyphaser measurement method based on refraction projection model and beam ray tracing method
Wang et al. Pictometry’s proprietary airborne digital imaging system and its application in 3D city modelling
CN101419709A (en) Plane target drone characteristic point automatic matching method for demarcating video camera
CN110889899A (en) Method and device for generating digital earth surface model
CN113096183A (en) Obstacle detection and measurement method based on laser radar and monocular camera
CN108010125A (en) True scale three-dimensional reconstruction system and method based on line-structured light and image information
Altuntas Integration of point clouds originated from laser scaner and photogrammetric images for visualization of complex details of historical buildings
CN114140539A (en) Method and device for acquiring position of indoor object
CN109035343A (en) A kind of floor relative displacement measurement method based on monitoring camera

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20190312

Address after: 210023 No. 18 Lingshan North Road, Qixia District, Nanjing City, Jiangsu Province, 4 Blocks 102

Patentee after: Nanjing Panzhi Geographic Information Industry Research Institute Co., Ltd.

Address before: 210097 Ninghai Road, Drum Tower District, Nanjing, Jiangsu Province, No. 122

Patentee before: Nanjing Normal University