CN104915988A - Photogrammetric dense point cloud generation method - Google Patents


Info

Publication number
CN104915988A
CN104915988A
Authority
CN
China
Prior art keywords
image
right image
points
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510369691.7A
Other languages
Chinese (zh)
Inventor
赵西安
王昭娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Civil Engineering and Architecture
Original Assignee
Beijing University of Civil Engineering and Architecture
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Civil Engineering and Architecture filed Critical Beijing University of Civil Engineering and Architecture
Priority to CN201510369691.7A priority Critical patent/CN104915988A/en
Publication of CN104915988A publication Critical patent/CN104915988A/en
Pending legal-status Critical Current

Abstract

The invention discloses a photogrammetric dense point cloud generation method comprising the steps of: obtaining the exterior orientation elements of the left and right images by photogrammetric resection; performing global matching on the left and right images to establish the correspondence between left and right image points, so that every image point has a match point, obtaining disparity maps describing the left and right images, and computing the coordinates of the corresponding image points of the left and right images in the image space coordinate system from the disparity maps; and recovering the three-dimensional positions of all corresponding ground points by forward intersection from the exterior orientation elements of the left and right images and the coordinates of their corresponding image points in the image space coordinate system, thereby generating a dense three-dimensional point cloud. The disclosed method solves the matching problem for image points in textureless image areas and can compensate holes in LiDAR point cloud data, thereby laying a technical foundation for the construction of digital cities and smart cities.

Description

A photogrammetric dense point cloud generation method
Technical field
The present invention relates to the field of surveying and mapping, and in particular to a photogrammetric dense point cloud generation method.
Background technology
The construction of digital cities and smart cities places higher demands on techniques for rapidly and efficiently acquiring three-dimensional spatial information of the urban surface. Airborne LiDAR (laser radar) is an active Earth observation system integrating laser ranging, inertial measurement (IMU) and GNSS differential positioning. The laser pulses emitted by a LiDAR sensor can partly penetrate forest cover and directly acquire high-precision three-dimensional point cloud data of the terrain. The technique offers high automation, low sensitivity to weather, short data production cycles and high precision. After processing, airborne LiDAR data can generate high-precision digital terrain models and contour maps. Airborne LiDAR point clouds are also used for classifying and extracting targets such as urban buildings and roads, although automatic recognition and extraction of urban buildings from airborne LiDAR data is considerably more difficult than building extraction from remote sensing imagery. Most LiDAR point cloud processing and analysis concentrates on geometric characteristics of the point cloud (such as surface and principal curvature analysis, TIN analysis, etc.). Modern LiDAR sensor research increasingly focuses on multi-echo data and on fusing multi-sensor information such as texture and intensity; for example, classification tree algorithms based on multi-echo data have been applied to the classification and extraction of urban buildings, roads and trees. However, measured LiDAR point clouds contain data holes, which must be compensated using other techniques.
Digital photogrammetry, including satellite three-line-array photogrammetry, aerial three-line-array photogrammetry and unmanned aerial vehicle photogrammetry, mainly applies multidisciplinary techniques such as computer science, digital image processing, image matching and pattern recognition. Traditional photogrammetric image matching is mostly feature-based: automatic corresponding-point matching and aerial triangulation are used to solve for the three-dimensional positions of objects. When an image contains weak-texture areas, traditional algorithms cannot extract image feature points effectively or match them reliably, and therefore cannot recover the three-dimensional object positions; that is, feature-based matching cannot solve the point cloud hole problem.
Summary of the invention
The object of the present invention is to provide a photogrammetric dense point cloud generation method that solves the matching problem for image points in textureless image areas and can compensate holes in LiDAR point cloud data, laying a foundation for the construction of digital cities and smart cities.
The object of the invention is achieved through the following technical solutions:
A photogrammetric dense point cloud generation method, comprising:
obtaining the exterior orientation elements of the left and right images by photogrammetric resection;
establishing the correspondence between left and right image points by global matching of the left and right images, so that every image point has a match point, thereby obtaining disparity maps describing the left and right images, and computing the coordinates of the corresponding image points in the image space coordinate system from the disparity maps; and
computing, by forward intersection from the exterior orientation elements of the left and right images and the coordinates of their corresponding image points in the image space coordinate system, the three-dimensional coordinates of all ground points in the ground coordinate system, thereby generating a dense three-dimensional point cloud.
Further, obtaining the exterior orientation elements of the left and right images by photogrammetric resection comprises:
establishing the error equations based on the collinearity equations:
V_x = a11·ΔX_S + a12·ΔY_S + a13·ΔZ_S + a14·Δφ + a15·Δω + a16·Δκ − l_x
V_y = a21·ΔX_S + a22·ΔY_S + a23·ΔZ_S + a24·Δφ + a25·Δω + a26·Δκ − l_y
where ΔX_S, ΔY_S, ΔZ_S, Δφ, Δω and Δκ are the unknowns, representing the corrections to the exterior orientation elements (X_S, Y_S, Z_S, φ, ω, κ) of the image; a11, a12, …, a16, a21, a22, …, a26 are the error equation coefficients; l_x and l_y are the constant terms; and V_x, V_y are the residuals.
The matrix form of the error equations is:
V = AX − l;
where V = [V_x V_y]^T, l = [l_x l_y]^T, A = [a11 a12 a13 a14 a15 a16; a21 a22 a23 a24 a25 a26], X = [ΔX_S ΔY_S ΔZ_S Δφ Δω Δκ]^T.
The corresponding normal equation is:
A^T·P·A·X = A^T·P·l;
where P is the weight matrix of the observations; the solution for the unknowns is then:
X = (A^T·P·A)^(−1)·A^T·P·l;
The solution is computed iteratively; when the difference between the unknowns obtained in two successive iterations is smaller than a predetermined value, the computation stops.
Further, establishing the correspondence between left and right image points by global matching of the left and right images, so that every image point has a match point, thereby obtaining disparity maps describing the left and right images, and computing the coordinates of the corresponding image points in the image space coordinate system from the disparity maps comprises:
First, establishing the energy function of the left and right image disparities:
E(d) = Σ_{(p,q)∈N} V(d_p, d_q) + Σ_{p∈P} D_p(d_p);
where d is the disparity assignment over the whole image; N is the set of four-neighbourhood point pairs of all image points; d_p is the disparity assigned to image point p; and P is the set of image points in the image.
The data (non-similarity) term D_p(d_p) measures the dissimilarity when image point p is assigned disparity d_p and is computed as: D_p(d_p) = λ·min(|I_l(x, y) − I_r(x + d_p, y)|, T), where λ is a regulating factor, T is a truncation threshold, and I_l(x, y) and I_r(x + d_p, y) are the image intensities of the corresponding left and right image points.
The smoothness term V(d_p, d_q) is the discontinuity penalty when adjacent image points p and q are assigned disparities d_p and d_q, determined by:
V(d_p, d_q) = 0 if d_p = d_q; a1 if 0 < |d_p − d_q| ≤ 1; a2 if |d_p − d_q| > 1 and |I_p − I_q| > δ; a3 if |d_p − d_q| > 1 and |I_p − I_q| ≤ δ;
where a1, a2 and a3 are preset values satisfying a3 > a2 > a1 > 0, δ is an intensity threshold, and I_p and I_q are the intensities of image points p and q.
Secondly, minimizing this energy function by belief propagation optimization, and obtaining the final disparities of the left and right images from the disparity assignment that minimizes the energy function.
Finally, computing the coordinates of the corresponding image points in the image space coordinate system from the disparity maps of the left and right images.
Further, minimizing this energy function by the belief propagation algorithm comprises:
Message update, with the expression:
m^t_{p→q}(d_q) = min_{d_p∈Ω} [ V(d_p, d_q) + D_p(d_p) + Σ_{r∈N(p)\q} m^{t−1}_{r→p}(d_p) ];
where Ω is the disparity search range; m^t_{p→q} is the message passed from image point p to image point q at the t-th iteration; N(p)\q denotes the three neighbourhood nodes of p other than q, namely x_1, x_2, x_3; and m^{t−1}_{r→p} is the message passed from point r to point p at the (t−1)-th iteration;
Σ_{r∈N(p)\q} m^{t−1}_{r→p}(d_p) = m^{t−1}_{x1→p} + m^{t−1}_{x2→p} + m^{t−1}_{x3→p};
Belief calculation, with the expression:
b_p(d_p) = D_p(d_p) + Σ_{r∈N(p)} m^T_{r→p}(d_p);
where b_p(d_p) is the belief that image point p is assigned disparity d_p, and N(p) denotes the four neighbourhood nodes of p, namely q, x_1, x_2, x_3, where:
Σ_{r∈N(p)} m^T_{r→p}(d_p) = m^T_{x1→p} + m^T_{x2→p} + m^T_{x3→p} + m^T_{q→p};
Minimizing the belief minimizes the energy function; the minimizing disparity is:
d*_p = argmin_{d_p∈Ω} b_p(d_p).
Further, computing, by forward intersection from the exterior orientation elements of the left and right images and the coordinates of their corresponding image points in the image space coordinate system, the three-dimensional coordinates of all ground points in the ground coordinate system, thereby generating a dense three-dimensional point cloud, comprises:
From the collinearity equations:
(X_A − X_S1)/X_1 = (Y_A − Y_S1)/Y_1 = (Z_A − Z_S1)/Z_1 = N_1;
(X_A − X_S2)/X_2 = (Y_A − Y_S2)/Y_2 = (Z_A − Z_S2)/Z_2 = N_2;
where N_1 and N_2 are the projection coefficients of the left and right image points; (X_A, Y_A, Z_A) are the three-dimensional coordinates of the ground point in the ground coordinate system; the exterior orientation elements of the left and right images are denoted (X_S1, Y_S1, Z_S1, φ1, ω1, κ1) and (X_S2, Y_S2, Z_S2, φ2, ω2, κ2), with (X_S1, Y_S1, Z_S1) and (X_S2, Y_S2, Z_S2) the left and right camera stations and (φ1, ω1, κ1), (φ2, ω2, κ2) the attitude angles of the left and right images; and (X_1, Y_1, Z_1) and (X_2, Y_2, Z_2) are the coordinates of the corresponding image points in the image space auxiliary coordinate system, expressed as:
[X_1 Y_1 Z_1]^T = R_1·[x_1′ y_1 −f]^T; [X_2 Y_2 Z_2]^T = R_2·[x_2′ y_2 −f]^T;
where the coordinates of the corresponding image points in the image space coordinate system are denoted (x_1′, y_1, −f) and (x_2′, y_2, −f), and R_1 and R_2 are the rotation matrices determined by the attitude angles (φ1, ω1, κ1) of the left image and (φ2, ω2, κ2) of the right image.
The three-dimensional coordinates (X_A, Y_A, Z_A) of the ground point in the ground coordinate system are computed as:
[X_A Y_A Z_A]^T = [X_S1 Y_S1 Z_S1]^T + N_1·[X_1 Y_1 Z_1]^T = [X_S2 Y_S2 Z_S2]^T + N_2·[X_2 Y_2 Z_2]^T;
where:
N_1 = (B_X·Z_2 − B_Z·X_2)/(X_1·Z_2 − X_2·Z_1), N_2 = (B_X·Z_1 − B_Z·X_1)/(X_1·Z_2 − X_2·Z_1),
B_X = X_S2 − X_S1, B_Y = Y_S2 − Y_S1, B_Z = Z_S2 − Z_S1.
The above computation yields the three-dimensional coordinates of all ground points in the ground coordinate system, from which the dense three-dimensional point cloud is generated.
It can be seen from the above technical solution that the exterior orientation elements of the left and right images are obtained by photogrammetric resection, the correspondence is established by per-pixel matching to obtain disparity maps describing the left and right images, and forward intersection then recovers the three-dimensional position information to generate a dense three-dimensional point cloud. This not only solves the per-pixel matching problem in textureless image areas, but can also be used to compensate holes in LiDAR point cloud data, laying a foundation for the construction of digital cities and smart cities.
Brief description of the drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings used in describing the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention; a person of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flow chart of the photogrammetric dense point cloud generation method provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of message passing provided by an embodiment of the present invention;
Fig. 3a is a schematic diagram of belief propagation provided by an embodiment of the present invention;
Fig. 3b is a schematic diagram of belief calculation provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the present invention.
Embodiment
Fig. 1 is a flow chart of the photogrammetric dense point cloud generation method provided by an embodiment of the present invention. As shown in Fig. 1, the method mainly comprises the following steps:
Step 11: obtain the exterior orientation elements of the left and right images by photogrammetric resection.
In this step, the collinearity equations are expressed as:
x = −f · [a1·(X_A − X_S) + b1·(Y_A − Y_S) + c1·(Z_A − Z_S)] / [a3·(X_A − X_S) + b3·(Y_A − Y_S) + c3·(Z_A − Z_S)]
y = −f · [a2·(X_A − X_S) + b2·(Y_A − Y_S) + c2·(Z_A − Z_S)] / [a3·(X_A − X_S) + b3·(Y_A − Y_S) + c3·(Z_A − Z_S)]
In the above collinearity equations, (x, y) are the image point coordinates, f is the camera focal length, (X_A, Y_A, Z_A) and (X_S, Y_S, Z_S) are the coordinates of the ground point and of the camera station in the ground coordinate system, and (φ, ω, κ) are the attitude angles of the image. In the formula:
R(φ, ω, κ) = [a1 a2 a3; b1 b2 b3; c1 c2 c3]
is the rotation matrix determined by the image attitude angles (φ, ω, κ); (X_S, Y_S, Z_S, φ, ω, κ) are called the exterior orientation elements of the image.
Expanding the collinearity equations in a Taylor series and keeping the first-order terms gives the error equations:
V_x = a11·ΔX_S + a12·ΔY_S + a13·ΔZ_S + a14·Δφ + a15·Δω + a16·Δκ − l_x
V_y = a21·ΔX_S + a22·ΔY_S + a23·ΔZ_S + a24·Δφ + a25·Δω + a26·Δκ − l_y
where ΔX_S, ΔY_S, ΔZ_S, Δφ, Δω and Δκ are the unknowns, representing the corrections to the exterior orientation elements (X_S, Y_S, Z_S, φ, ω, κ) of the image; a11, a12, …, a16, a21, a22, …, a26 are the error equation coefficients; l_x and l_y are the constant terms; and V_x, V_y are the residuals.
The error equations are written in matrix form: V = AX − l;
where V = [V_x V_y]^T, l = [l_x l_y]^T, A = [a11 a12 a13 a14 a15 a16; a21 a22 a23 a24 a25 a26], X = [ΔX_S ΔY_S ΔZ_S Δφ Δω Δκ]^T.
The normal equation of the above matrix form is: A^T·P·A·X = A^T·P·l;
where P is the weight matrix of the observations, reflecting their measurement accuracy. The solution for the unknowns is then:
X = (A^T·P·A)^(−1)·A^T·P·l;
The solution is computed iteratively; when the difference between the unknowns obtained in two successive iterations is smaller than a predetermined value, the computation stops.
Once ΔX_S, ΔY_S, ΔZ_S, Δφ, Δω and Δκ are computed, the corresponding exterior orientation elements (X_S, Y_S, Z_S, φ, ω, κ) of the image are obtained.
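As an illustration of the iterative solution in Step 11, the following Python sketch solves the resection by Gauss-Newton iteration. It is not the patent's implementation: it assumes unit weights (P = I), uses a numeric Jacobian in place of the analytic coefficients a11 … a26, and adopts one common φ-ω-κ rotation convention (φ about the Y axis, ω about the X axis, κ about the Z axis); the function names and the synthetic control points in any usage are illustrative.

```python
import numpy as np

def rotation(phi, omega, kappa):
    # R(phi, omega, kappa) = R_phi @ R_omega @ R_kappa (one common convention)
    cp, sp = np.cos(phi), np.sin(phi)
    co, so = np.cos(omega), np.sin(omega)
    ck, sk = np.cos(kappa), np.sin(kappa)
    R_phi = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    R_omega = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    R_kappa = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return R_phi @ R_omega @ R_kappa

def project(params, ground, f):
    # collinearity equations: image coordinates of ground points
    S, angles = np.asarray(params[:3]), params[3:]
    d = (ground - S) @ rotation(*angles)  # columns: (a_i, b_i, c_i) dot (A - S)
    return np.column_stack([-f * d[:, 0] / d[:, 2],
                            -f * d[:, 1] / d[:, 2]]).ravel()

def resection(image_pts, ground, f, init, tol=1e-10, max_iter=50):
    params = np.asarray(init, float)
    obs = np.asarray(image_pts, float).ravel()
    for _ in range(max_iter):
        proj = project(params, ground, f)
        l = obs - proj                     # constant terms of the error equations
        A = np.empty((l.size, 6))          # numeric Jacobian instead of a11..a26
        for j in range(6):
            step = np.zeros(6)
            step[j] = 1e-6
            A[:, j] = (project(params + step, ground, f) - proj) / 1e-6
        X = np.linalg.lstsq(A, l, rcond=None)[0]  # X = (A^T A)^-1 A^T l
        params = params + X
        if np.max(np.abs(X)) < tol:        # stop when corrections are negligible
            break
    return params
```

The loop mirrors the iteration described above: solve for the corrections X, apply them to the exterior orientation elements, and stop when the change between two successive solutions falls below a tolerance.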
Step 12: establish the correspondence between left and right image points by global matching of the left and right images, so that every image point has a match point, thereby obtaining disparity maps describing the left and right images, and compute the coordinates of the corresponding image points in the image space coordinate system from the disparity maps.
In the embodiment of the present invention, the disparity maps of the left and right images are dense disparity maps. The concrete steps are as follows:
First, the energy function of the left and right image disparities is established:
E(d) = Σ_{(p,q)∈N} V(d_p, d_q) + Σ_{p∈P} D_p(d_p);
where d is the disparity assignment over the whole image, N is the set of four-neighbourhood point pairs of all image points, d_p is the disparity assigned to image point p, and P is the set of image points in the image.
The data (non-similarity) term D_p(d_p) measures the dissimilarity when image point p is assigned disparity d_p and is computed as:
D_p(d_p) = λ·min(|I_l(x, y) − I_r(x + d_p, y)|, T),
where λ is a regulating factor, T is a truncation threshold, and I_l(x, y) and I_r(x + d_p, y) are the image intensities of the corresponding left and right image points.
The smoothness term V(d_p, d_q) is the discontinuity penalty when adjacent image points p and q are assigned disparities d_p and d_q, determined by:
V(d_p, d_q) = 0 if d_p = d_q; a1 if 0 < |d_p − d_q| ≤ 1; a2 if |d_p − d_q| > 1 and |I_p − I_q| > δ; a3 if |d_p − d_q| > 1 and |I_p − I_q| ≤ δ.
That is, if the disparities of image points p and q are equal, the discontinuity penalty is 0. If they differ by at most one pixel, a small penalty a1 is given. If they differ by more than one pixel, the discontinuity may be caused by occlusion and should also receive a small penalty; however, because segmentation may locate edges inaccurately, the intensity difference |I_p − I_q| of the two points is examined again: if it exceeds a threshold δ, the discontinuity is attributed to a real occlusion boundary and a small penalty a2 is given; if it does not exceed δ, the discontinuity is attributed to a disparity plane template that is not optimal, and a larger penalty a3 is given. Here a1, a2 and a3 are preset values satisfying a3 > a2 > a1 > 0, and I_p and I_q are the intensities of image points p and q.
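As a minimal sketch, the three-level penalty rule described above can be written directly; the constant values a1, a2, a3 and the threshold δ below are illustrative assumptions, not values from the patent.

```python
def smoothness_penalty(dp, dq, Ip, Iq, a1=1.0, a2=2.0, a3=4.0, delta=10.0):
    """Discontinuity penalty V(dp, dq) for adjacent points p and q
    with disparities dp, dq and intensities Ip, Iq (illustrative constants)."""
    if dp == dq:
        return 0.0                 # equal disparities: no penalty
    if abs(dp - dq) <= 1:
        return a1                  # difference within one pixel: mild penalty
    if abs(Ip - Iq) > delta:
        return a2                  # intensity edge: likely occlusion boundary
    return a3                      # smooth region: discourage large jumps
```

The ordering a3 > a2 > a1 > 0 makes large disparity jumps cheapest exactly where an intensity edge suggests a real occlusion boundary.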
An initial disparity map can then be obtained along epipolar lines: compute the fundamental matrix F; find the epipoles e1 and e2 at the intersections of the epipolar lines; choose a transformation matrix U2 that maps the epipole e2 to infinity; solve for the corresponding transformation U1; resample the images according to U1 and U2 to complete the rectification; the x coordinate difference of corresponding image points after rectification is the initial disparity.
Then, the energy function is minimized with a belief propagation optimization algorithm, and the final disparities of the left and right images are obtained from the disparity assignment that minimizes the energy function.
1) Message update (see Fig. 2), with the expression:
m^t_{p→q}(d_q) = min_{d_p∈Ω} [ V(d_p, d_q) + D_p(d_p) + Σ_{r∈N(p)\q} m^{t−1}_{r→p}(d_p) ];
where Ω is the disparity search range; m^t_{p→q} is the message passed from image point p to image point q at the t-th iteration; and N(p)\q denotes the three neighbourhood nodes of p other than q, namely x_1, x_2, x_3 in Fig. 2.
The second term on the right-hand side is:
Σ_{r∈N(p)\q} m^{t−1}_{r→p}(d_p) = m^{t−1}_{x1→p} + m^{t−1}_{x2→p} + m^{t−1}_{x3→p};
where m^{t−1}_{r→p} is the message passed from point r to point p at the (t−1)-th iteration.
The alternating message update proceeds as follows:
A. All image points are divided into two groups by adding the row and column numbers of each point: if the sum is odd, the point belongs to group A; otherwise it belongs to group B. The whole pixel set G is the union of A and B.
Table 1: division of image points
B. Because the message of an image point depends only on the values of the points in its first-order neighbourhood, the messages are updated alternately: when the iteration number t is odd, the messages from group B points to group A points are updated, and the messages of group B points keep their values from iteration t−1 without updating; when t is even, the messages from group A points to group B points are updated, and the messages of group A points remain unchanged:
m̄^t_{pq} = m^t_{pq} if p belongs to the group updated at iteration t; m̄^t_{pq} = m^{t−1}_{pq} otherwise.
C. The message update traverses the whole pixel set G.
D. Return to step B and continue the iterative computation; the computation ends when the difference between the last two message updates meets the requirement.
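The update scheme above can be sketched as min-sum belief propagation on the pixel grid. The toy implementation below is illustrative, not the patent's code: it uses a simplified Potts smoothness term (0 if disparities are equal, a1 otherwise) in place of the three-level a1/a2/a3 rule, synchronous directional sweeps in place of the odd/even checkerboard schedule, and border clamping for the data term; all parameter values are assumptions.

```python
import numpy as np

def bp_stereo(left, right, max_disp, lam=1.0, T=50.0, a1=2.0, iters=5):
    H, W = left.shape
    L = max_disp + 1
    # data term D_p(d) = lam * min(|I_l(x,y) - I_r(x+d,y)|, T), border clamped
    D = np.zeros((H, W, L))
    for d in range(L):
        xs = np.minimum(np.arange(W) + d, W - 1)
        D[:, :, d] = lam * np.minimum(np.abs(left - right[:, xs]), T)
    # inc[k][y, x, :]: message arriving at pixel (y, x) from its neighbour on side k
    inc = {k: np.zeros((H, W, L)) for k in ("up", "down", "left", "right")}
    for _ in range(iters):
        # each pass sends messages in one direction, excluding the return message
        for exclude, store, axis, shift in (("right", "left", 1, 1),
                                            ("left", "right", 1, -1),
                                            ("down", "up", 0, 1),
                                            ("up", "down", 0, -1)):
            h = D + sum(inc[k] for k in inc if k != exclude)
            # Potts min-sum: m(d_q) = min(h(d_q), min_d h(d) + a1)
            m = np.minimum(h, h.min(axis=2, keepdims=True) + a1)
            m -= m.min(axis=2, keepdims=True)      # normalize for stability
            new = np.zeros_like(m)
            if axis == 1 and shift == 1:
                new[:, 1:] = m[:, :-1]             # send rightward
            elif axis == 1:
                new[:, :-1] = m[:, 1:]             # send leftward
            elif shift == 1:
                new[1:, :] = m[:-1, :]             # send downward
            else:
                new[:-1, :] = m[1:, :]             # send upward
            inc[store] = new
    # belief b_p(d) = D_p(d) + sum of final incoming messages; pick the argmin
    b = D + sum(inc.values())
    return b.argmin(axis=2)
```

On a textured synthetic pair shifted by a constant disparity, the argmin of the belief recovers that disparity away from the image borders.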
2) Belief calculation. Fig. 3a and Fig. 3b show the belief propagation and the belief calculation respectively.
The belief is computed as:
b_p(d_p) = D_p(d_p) + Σ_{r∈N(p)} m^T_{r→p}(d_p);
where b_p(d_p) is the belief that image point p is assigned disparity d_p, and N(p) denotes the four neighbourhood nodes of p shown in Fig. 3a, namely q, x_1, x_2, x_3, where:
Σ_{r∈N(p)} m^T_{r→p}(d_p) = m^T_{x1→p} + m^T_{x2→p} + m^T_{x3→p} + m^T_{q→p};
3) Minimizing the belief minimizes the energy function; the minimizing disparity is:
d*_p = argmin_{d_p∈Ω} b_p(d_p).
Finally, the coordinates of the corresponding image points in the image space coordinate system are computed from the disparity maps of the left and right images.
The disparity map contains the difference of the corresponding image point coordinates (right image point coordinate − left image point coordinate = disparity). Since the disparities of the left and right images are computed with respect to the left image, the image point coordinates in the left image are known, so the corresponding image point coordinates in the right image can be computed from the disparity map. The coordinates of the corresponding image points in the image space coordinate system are thus obtained, denoted (x_1′, y_1, −f) and (x_2′, y_2, −f) respectively.
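A small sketch of this disparity-to-coordinates step, assuming an epipolar-rectified pair, pixel coordinates with origin at the top-left, and illustrative camera parameters (pixel size and principal point cx, cy); the sign flip on the image space y axis is an assumption of this sketch, not stated in the patent.

```python
def corresponding_points(x1_pix, y1_pix, disparity, pixel_size, f, cx, cy):
    """From a left-image pixel and its disparity (right x minus left x, as in
    the disparity map above), return the image space coordinates
    (x1', y1, -f) and (x2', y2, -f) of the corresponding point pair."""
    x2_pix = x1_pix + disparity          # right-image x from the disparity map
    y2_pix = y1_pix                      # epipolar-rectified pair: same row

    def to_image_space(xp, yp):
        # shift to the principal point, scale to metric units, plane z = -f
        return ((xp - cx) * pixel_size, -(yp - cy) * pixel_size, -f)

    return to_image_space(x1_pix, y1_pix), to_image_space(x2_pix, y2_pix)
```

The two returned triples are exactly the (x_1′, y_1, −f) and (x_2′, y_2, −f) fed into the forward intersection of Step 13.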
Step 13: compute, by forward intersection from the exterior orientation elements of the left and right images and the coordinates of their corresponding image points in the image space coordinate system, the three-dimensional coordinates of all ground points in the ground coordinate system, thereby generating a dense three-dimensional point cloud.
In the embodiment of the present invention, after the global stereo matching of Step 12, every image point has a match point, and Step 11 has provided the exterior orientation elements of the left and right images, denoted (X_S1, Y_S1, Z_S1, φ1, ω1, κ1) and (X_S2, Y_S2, Z_S2, φ2, ω2, κ2). The three-dimensional coordinates (X_A, Y_A, Z_A) of all ground points can then be obtained by forward intersection. From the collinearity equations:
(X_A − X_S1)/X_1 = (Y_A − Y_S1)/Y_1 = (Z_A − Z_S1)/Z_1 = N_1;
(X_A − X_S2)/X_2 = (Y_A − Y_S2)/Y_2 = (Z_A − Z_S2)/Z_2 = N_2;
where N_1 and N_2 are the projection coefficients of the left and right image points; (X_S1, Y_S1, Z_S1) and (X_S2, Y_S2, Z_S2) are the left and right camera stations; (X_A, Y_A, Z_A) are the three-dimensional coordinates of the ground point in the ground coordinate system; and (X_1, Y_1, Z_1) and (X_2, Y_2, Z_2) are the coordinates of the corresponding image points in the image space auxiliary coordinate system, that is:
[X_1 Y_1 Z_1]^T = R_1·[x_1′ y_1 −f]^T; [X_2 Y_2 Z_2]^T = R_2·[x_2′ y_2 −f]^T;
where R_1 and R_2 are the rotation matrices determined by the attitude angles (φ1, ω1, κ1) of the left image and (φ2, ω2, κ2) of the right image.
The three-dimensional coordinates (X_A, Y_A, Z_A) of the ground point in the ground coordinate system are computed as:
[X_A Y_A Z_A]^T = [X_S1 Y_S1 Z_S1]^T + N_1·[X_1 Y_1 Z_1]^T = [X_S2 Y_S2 Z_S2]^T + N_2·[X_2 Y_2 Z_2]^T;
where:
N_1 = (B_X·Z_2 − B_Z·X_2)/(X_1·Z_2 − X_2·Z_1), N_2 = (B_X·Z_1 − B_Z·X_1)/(X_1·Z_2 − X_2·Z_1),
B_X = X_S2 − X_S1, B_Y = Y_S2 − Y_S1, B_Z = Z_S2 − Z_S1.
The above computation again yields the three-dimensional coordinates of the ground points in the ground coordinate system, from which the dense three-dimensional point cloud of ground coordinates is generated.
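The projection-coefficient formulas above can be sketched as follows (illustrative Python, not from the patent; with noisy matches the two right-hand sides of the ground-point equation differ slightly, so the sketch averages them):

```python
import numpy as np

def forward_intersection(S1, S2, R1, R2, p1, p2):
    """Forward intersection via point projection coefficients, as in the
    formulas above. S1, S2: camera stations; R1, R2: rotation matrices;
    p1, p2: image space coordinates (x', y, -f) of corresponding points."""
    X1, Y1, Z1 = R1 @ np.asarray(p1, float)   # image space auxiliary coords
    X2, Y2, Z2 = R2 @ np.asarray(p2, float)
    BX, BY, BZ = np.asarray(S2, float) - np.asarray(S1, float)  # baseline
    denom = X1 * Z2 - X2 * Z1
    N1 = (BX * Z2 - BZ * X2) / denom          # projection coefficients
    N2 = (BX * Z1 - BZ * X1) / denom
    A1 = np.asarray(S1, float) + N1 * np.array([X1, Y1, Z1])
    A2 = np.asarray(S2, float) + N2 * np.array([X2, Y2, Z2])
    return (A1 + A2) / 2   # identical in the ideal case; averaging absorbs noise
```

Applying this per matched pixel over the whole disparity map yields the dense point cloud of Step 13.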
The above scheme of the embodiment of the present invention obtains the exterior orientation elements of the left and right images by photogrammetric resection, establishes the correspondence by per-pixel matching to obtain disparity maps describing the left and right images, and finally performs forward intersection to recover the three-dimensional position information and generate a dense three-dimensional point cloud. It not only solves the matching problem for image points in textureless image areas, but can also compensate holes in LiDAR point cloud data, laying a foundation for the construction of digital cities and smart cities.
From the above description of the embodiments, a person skilled in the art will clearly understand that the above embodiments can be implemented in software, or in software plus a necessary general hardware platform. Based on this understanding, the technical solutions of the above embodiments can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (such as a CD-ROM, USB flash drive or portable hard disk) and includes instructions for causing a computer device (a personal computer, a server, a network device or the like) to execute the methods described in the embodiments of the present invention.
The above is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement readily conceivable to a person familiar with the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (5)

1. A photogrammetric dense point cloud generation method, characterized by comprising:
obtaining the exterior orientation elements of the left and right images by photogrammetric resection;
establishing the correspondence between left and right image points by global matching of the left and right images, so that every image point has a match point, thereby obtaining disparity maps describing the left and right images, and computing the coordinates of the corresponding image points in the image space coordinate system from the disparity maps; and
computing, by forward intersection from the exterior orientation elements of the left and right images and the coordinates of their corresponding image points in the image space coordinate system, the three-dimensional coordinates of all ground points in the ground coordinate system, thereby generating a dense three-dimensional point cloud.
2. The method according to claim 1, characterized in that obtaining the exterior orientation elements of the left and right images by photogrammetric resection comprises:
establishing the error equations based on the collinearity equations:
V_x = a11·ΔX_S + a12·ΔY_S + a13·ΔZ_S + a14·Δφ + a15·Δω + a16·Δκ − l_x
V_y = a21·ΔX_S + a22·ΔY_S + a23·ΔZ_S + a24·Δφ + a25·Δω + a26·Δκ − l_y
where ΔX_S, ΔY_S, ΔZ_S, Δφ, Δω and Δκ are the unknowns, representing the corrections to the exterior orientation elements (X_S, Y_S, Z_S, φ, ω, κ) of the image; a11, a12, …, a16, a21, a22, …, a26 are the error equation coefficients; l_x and l_y are the constant terms; and V_x, V_y are the residuals;
the matrix form of the error equations is:
V = AX − l;
where V = [V_x V_y]^T, l = [l_x l_y]^T, A = [a11 a12 a13 a14 a15 a16; a21 a22 a23 a24 a25 a26], X = [ΔX_S ΔY_S ΔZ_S Δφ Δω Δκ]^T;
the normal equation of the above matrix form is:
A^T·P·A·X = A^T·P·l;
where P is the weight matrix of the observations; the solution for the unknowns is then:
X = (A^T·P·A)^(−1)·A^T·P·l;
the solution is computed iteratively, and when the difference between the unknowns obtained in two successive iterations is smaller than a predetermined value, the computation stops.
3. The method according to claim 1, characterized in that establishing the correspondence between the image points of the left and right images by global matching, so that every image point has a match point, obtaining the disparity maps describing the left and right images, and calculating the coordinates of the corresponding image points in the image space coordinate system based on the disparity maps comprises:
First, establishing the energy function of the left-right image disparity, expressed as:
$$E(d) = \sum_{(p,q)\in N} V(d_p, d_q) + \sum_{p\in P} D_p(d_p);$$
wherein $d$ denotes the disparity assignment of the whole image; $N$ denotes the set of four-neighbourhood point pairs of all image points; $d_p$ denotes the disparity value assigned to image point $p$; and $P$ denotes the set of image points in the image;
Dissimilarity (data) term $D_p(d_p)$: the dissimilarity measure when image point $p$ is assigned disparity $d_p$, computed as $D_p(d_p) = \lambda \min(|I_l(x, y) - I_r(x + d_p, y)|, T)$, wherein $\lambda$ is a regulation factor, $T$ is a truncation threshold, and $I_l$ and $I_r$ are the image intensities at the left and right image points, respectively;
Smoothness term $V(d_p, d_q)$: the discontinuity penalty incurred when two adjacent image points $p$ and $q$ are assigned disparities $d_p$ and $d_q$, determined by a piecewise function whose preset levels $a_1$, $a_2$ and $a_3$ satisfy $a_3 > a_2 > a_1 > 0$, wherein $I_p$ and $I_q$ denote the intensities of image points $p$ and $q$, respectively;
Secondly, minimizing the energy function by belief propagation optimization, and obtaining the final disparity maps of the left and right images from the disparity assignment that minimizes the energy function;
Finally, calculating the coordinates of the corresponding image points of the left and right images in the image space coordinate system according to the disparity maps.
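The energy of claim 3 can be sketched numerically as below. This is not the patent's implementation: since the claim only states the thresholds $a_1 < a_2 < a_3$ for the smoothness term, a simple truncated-linear penalty is used as a stand-in, and the function names are illustrative.

```python
import numpy as np

def data_term(Il, Ir, d, lam=1.0, T=20.0):
    """Truncated absolute intensity difference D_p(d_p) for every pixel,
    given a candidate integer disparity map d (same shape as Il)."""
    h, w = Il.shape
    xs = np.clip(np.arange(w)[None, :] + d, 0, w - 1).astype(int)
    diff = np.abs(Il - Ir[np.arange(h)[:, None], xs])
    return lam * np.minimum(diff, T)

def smoothness_term(d, a3=4.0):
    """Sum of pairwise penalties over 4-neighbour pairs, using a
    truncated-linear stand-in min(|d_p - d_q|, a3) for V(d_p, d_q)."""
    dv = np.minimum(np.abs(np.diff(d, axis=0)), a3)
    dh = np.minimum(np.abs(np.diff(d, axis=1)), a3)
    return dv.sum() + dh.sum()

def energy(Il, Ir, d, lam=1.0, T=20.0):
    """E(d) = sum of smoothness terms + sum of data terms."""
    return smoothness_term(d) + data_term(Il, Ir, d, lam, T).sum()
```

A perfectly matched pair with a constant disparity map scores zero, while any disparity discontinuity raises the energy through the smoothness term.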
4. The method according to claim 3, characterized in that minimizing the energy function by the belief propagation algorithm comprises:
Message updating, expressed as:
$$m^t_{p\to q}(d_q) = \min_{d_p\in\Omega}\Big[ V(d_p, d_q) + D_p(d_p) + \sum_{r\in N(p)\setminus q} m^{t-1}_{r\to p}(d_p) \Big];$$
wherein $\Omega$ denotes the disparity search range; $m^t_{p\to q}$ is the message passed from image point $p$ to image point $q$ at iteration $t$; $N(p)\setminus q$ denotes the three neighbourhood nodes of $p$ other than $q$, namely $x_1$, $x_2$, $x_3$; and $m^{t-1}_{r\to p}$ is the message passed from point $r$ to point $p$ at iteration $t-1$, with:
$$\sum_{r\in N(p)\setminus q} m^{t-1}_{r\to p}(d_p) = m^{t-1}_{x_1\to p} + m^{t-1}_{x_2\to p} + m^{t-1}_{x_3\to p};$$
Belief computation, expressed as:
$$b_p(d_p) = D_p(d_p) + \sum_{r\in N(p)} m^T_{r\to p}(d_p);$$
wherein $b_p(d_p)$ denotes the belief that image point $p$ is assigned disparity $d_p$, and $N(p)$ denotes the four neighbourhood nodes of $p$, namely $q$, $x_1$, $x_2$, $x_3$, with:
$$\sum_{r\in N(p)} m^T_{r\to p}(d_p) = m^T_{x_1\to p} + m^T_{x_2\to p} + m^T_{x_3\to p} + m^T_{q\to p};$$
Minimizing the belief minimizes the energy function; the minimizing disparity is:
$$d_p^* = \arg\min_{d_p\in\Omega} b_p(d_p).$$
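The min-sum updates of claim 4 can be sketched for a single node as follows. This is an illustrative fragment with hypothetical names, not the patent's code: it shows one message update, the belief, and the arg-min selection over the disparity search range.

```python
import numpy as np

def update_message(D_p, V, incoming):
    """Min-sum message m_{p->q}(d_q) = min over d_p of
    [ V(d_p, d_q) + D_p(d_p) + sum of previous messages m_{r->p} ].

    D_p: (L,) data costs at p over the L disparities in the search range;
    V: (L, L) pairwise costs, V[d_p, d_q];
    incoming: list of (L,) messages from p's neighbours other than q.
    """
    h = D_p + sum(incoming)              # per-label cost at p before V
    return (V + h[:, None]).min(axis=0)  # minimize over d_p per d_q

def belief(D_p, incoming):
    """b_p(d_p) = D_p(d_p) + sum of final messages from all neighbours."""
    return D_p + sum(incoming)

def best_disparity(D_p, incoming):
    """d_p* = argmin over the search range of the belief."""
    return int(np.argmin(belief(D_p, incoming)))
```

In a full matcher these updates run over every pixel and all four neighbour directions for a fixed number of iterations before the beliefs are read out.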
5. The method according to any one of claims 1-4, characterized in that calculating, by forward intersection, the three-dimensional coordinates of all ground points in the earth coordinate system according to the exterior orientation elements of the left and right images and the coordinates of the corresponding image points in the image space coordinate system, thereby generating the dense three-dimensional point cloud, comprises:
From the collinearity equations:
$$\frac{X_A - X_{S1}}{X_1} = \frac{Y_A - Y_{S1}}{Y_1} = \frac{Z_A - Z_{S1}}{Z_1} = N_1, \qquad \frac{X_A - X_{S2}}{X_2} = \frac{Y_A - Y_{S2}}{Y_2} = \frac{Z_A - Z_{S2}}{Z_2} = N_2;$$
wherein $N_1$ and $N_2$ are the projection coefficients of the left and right image points, respectively; $(X_A, Y_A, Z_A)$ are the three-dimensional coordinates of the ground point in the earth coordinate system; the exterior orientation elements of the left and right images are denoted $(X_{S1}, Y_{S1}, Z_{S1}, \varphi_1, \omega_1, \kappa_1)$ and $(X_{S2}, Y_{S2}, Z_{S2}, \varphi_2, \omega_2, \kappa_2)$, with $(X_{S1}, Y_{S1}, Z_{S1})$ and $(X_{S2}, Y_{S2}, Z_{S2})$ being the photographic centres and $(\varphi_1, \omega_1, \kappa_1)$ and $(\varphi_2, \omega_2, \kappa_2)$ the attitude angles of the left and right images; $(X_1, Y_1, Z_1)$ and $(X_2, Y_2, Z_2)$ are the coordinates of the corresponding image points in the image space auxiliary coordinate system, expressed as:
$$\begin{bmatrix} X_1 \\ Y_1 \\ Z_1 \end{bmatrix} = R_1 \begin{bmatrix} x_1' \\ y_1 \\ -f \end{bmatrix}, \qquad \begin{bmatrix} X_2 \\ Y_2 \\ Z_2 \end{bmatrix} = R_2 \begin{bmatrix} x_2' \\ y_2 \\ -f \end{bmatrix};$$
wherein $(x_1', y_1, -f)$ and $(x_2', y_2, -f)$ are the coordinates of the corresponding image points in the image space coordinate system, and $R_1$ and $R_2$ are the rotation matrices determined by the attitude angles $(\varphi_1, \omega_1, \kappa_1)$ of the left image and $(\varphi_2, \omega_2, \kappa_2)$ of the right image, respectively;
Calculating the three-dimensional coordinates $(X_A, Y_A, Z_A)$ of the ground point in the earth coordinate system as follows:
$$\begin{bmatrix} X_A \\ Y_A \\ Z_A \end{bmatrix} = \begin{bmatrix} X_{S1} \\ Y_{S1} \\ Z_{S1} \end{bmatrix} + \begin{bmatrix} N_1 X_1 \\ N_1 Y_1 \\ N_1 Z_1 \end{bmatrix} = \begin{bmatrix} X_{S2} \\ Y_{S2} \\ Z_{S2} \end{bmatrix} + \begin{bmatrix} N_2 X_2 \\ N_2 Y_2 \\ N_2 Z_2 \end{bmatrix};$$
wherein:
$$N_1 = \frac{B_X Z_2 - B_Z X_2}{X_1 Z_2 - X_2 Z_1}, \quad N_2 = \frac{B_X Z_1 - B_Z X_1}{X_1 Z_2 - X_2 Z_1}, \quad B_X = X_{S2} - X_{S1}, \quad B_Y = Y_{S2} - Y_{S1}, \quad B_Z = Z_{S2} - Z_{S1};$$
The above calculation yields the three-dimensional coordinates of all ground points in the earth coordinate system, from which the dense three-dimensional point cloud is generated.
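The forward intersection of claim 5 can be sketched for one matched point pair as below. This is an illustrative sketch: the rotation-matrix construction uses one common photogrammetric angle convention (conventions vary), and the function names are not from the patent.

```python
import numpy as np

def rotation(phi, omega, kappa):
    """Rotation matrix built from the attitude angles (phi, omega, kappa),
    under one common photogrammetric convention R = R_phi R_omega R_kappa."""
    Rphi = np.array([[np.cos(phi), 0, -np.sin(phi)],
                     [0, 1, 0],
                     [np.sin(phi), 0, np.cos(phi)]])
    Romega = np.array([[1, 0, 0],
                       [0, np.cos(omega), -np.sin(omega)],
                       [0, np.sin(omega), np.cos(omega)]])
    Rkappa = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                       [np.sin(kappa), np.cos(kappa), 0],
                       [0, 0, 1]])
    return Rphi @ Romega @ Rkappa

def forward_intersect(S1, S2, R1, R2, p1, p2, f):
    """Forward intersection of one matched point pair.

    S1, S2: left/right photographic centres; R1, R2: rotation matrices;
    p1, p2: image coordinates (x, y) of the corresponding image points;
    f: focal length. Returns the ground point (X_A, Y_A, Z_A).
    """
    v1 = R1 @ np.array([p1[0], p1[1], -f])   # (X1, Y1, Z1)
    v2 = R2 @ np.array([p2[0], p2[1], -f])   # (X2, Y2, Z2)
    BX, BY, BZ = np.asarray(S2, float) - np.asarray(S1, float)
    # Projection coefficient N1 = (B_X Z2 - B_Z X2) / (X1 Z2 - X2 Z1)
    N1 = (BX * v2[2] - BZ * v2[0]) / (v1[0] * v2[2] - v2[0] * v1[2])
    return np.asarray(S1, float) + N1 * v1
```

Running this over every matched pixel pair from the disparity maps, with the resected orientations, yields the dense point cloud.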
CN201510369691.7A 2015-06-29 2015-06-29 Photogrammetric dense point cloud generation method Pending CN104915988A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510369691.7A CN104915988A (en) 2015-06-29 2015-06-29 Photogrammetric dense point cloud generation method


Publications (1)

Publication Number Publication Date
CN104915988A true CN104915988A (en) 2015-09-16

Family

ID=54085024


Country Status (1)

Country Link
CN (1) CN104915988A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101226057A (en) * 2008-02-01 2008-07-23 武汉朗视软件有限公司 Digital close range photogrammetry method
CN101706263A (en) * 2009-11-10 2010-05-12 倪友群 Three-dimensional surface measurement method and measurement system
US20130322699A1 (en) * 2012-06-04 2013-12-05 Clicrweight, LLC Methods and systems for determining and displaying animal metrics
CN103606188A (en) * 2013-11-15 2014-02-26 南京师范大学 Geographical information on-demand acquisition method based on image point cloud


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
FELZENSZWALB P. F.: "Efficient belief propagation for early vision", Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004) *
H. HIRSCHMULLER et al.: "Evaluation of stereo matching costs on images with radiometric differences", IEEE TPAMI *
LÜ PEIYU: "Research and implementation of a dense matching algorithm for UAV images based on image segmentation", China Master's Theses Full-text Database, Basic Sciences *
LI BINBIN: "Research on a belief-propagation stereo matching algorithm based on image segmentation", China Master's Theses Full-text Database, Information Science and Technology *
QIAN YIXIANG: "Research and implementation of terrain information extraction from digital aerial images", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106123862A (en) * 2016-06-03 2016-11-16 北京林业大学 Flight unmanned plane understory species observation procedure in one elite stand
CN107067394A (en) * 2017-04-18 2017-08-18 中国电子科技集团公司电子科学研究院 A kind of oblique photograph obtains the method and device of point cloud coordinate
CN107314762A (en) * 2017-07-06 2017-11-03 广东电网有限责任公司电力科学研究院 Atural object distance detection method below power line based on unmanned plane the sequence monocular image
CN107314762B (en) * 2017-07-06 2020-05-08 广东电网有限责任公司电力科学研究院 Method for detecting ground object distance below power line based on monocular sequence images of unmanned aerial vehicle
CN110009675A (en) * 2019-04-03 2019-07-12 北京市商汤科技开发有限公司 Generate method, apparatus, medium and the equipment of disparity map

Similar Documents

Publication Publication Date Title
Qu et al. Vehicle localization using mono-camera and geo-referenced traffic signs
CN102506824B (en) Method for generating digital orthophoto map (DOM) by urban low altitude unmanned aerial vehicle
JP5389964B2 (en) Map information generator
CN105160702A (en) Stereoscopic image dense matching method and system based on LiDAR point cloud assistance
Maset et al. Photogrammetric 3D building reconstruction from thermal images
Tscharf et al. On the use of UAVs in mining and archaeology-geo-accurate 3D reconstructions using various platforms and terrestrial views
US20090154793A1 (en) Digital photogrammetric method and apparatus using intergrated modeling of different types of sensors
Olson et al. Visual terrain mapping for Mars exploration
Rumpler et al. Automated end-to-end workflow for precise and geo-accurate reconstructions using fiducial markers
Alidoost et al. An image-based technique for 3D building reconstruction using multi-view UAV images
Wang et al. Accuracy evaluation of 3d geometry from low-attitude uav collections a case at zijin mine
Zhou et al. LiDAR-guided dense matching for detecting changes and updating of buildings in Airborne LiDAR data
CN104915988A (en) Photogrammetric dense point cloud generation method
Jin et al. An indoor location-based positioning system using stereo vision with the drone camera
CN110889899A (en) Method and device for generating digital earth surface model
Cosido et al. Hybridization of convergent photogrammetry, computer vision, and artificial intelligence for digital documentation of cultural heritage-a case study: the magdalena palace
Ye et al. Integrated image matching and segmentation for 3D surface reconstruction in urban areas
Zhu et al. Fusing GNSS/INS/vision with a priori feature map for high-precision and continuous navigation
Marí et al. To bundle adjust or not: A comparison of relative geolocation correction strategies for satellite multi-view stereo
Brenner Scalable estimation of precision maps in a mapreduce framework
Jiang et al. Low–high orthoimage pairs-based 3D reconstruction for elevation determination using drone
Dal Poz et al. Three-dimensional semiautomatic road extraction from a high-resolution aerial image by dynamic-programming optimization in the object space
Javadnejad et al. An assessment of UAS-based photogrammetry for civil integrated management (CIM) modeling of pipes
Gao et al. Automatic geo-referencing mobile laser scanning data to UAV images
Li et al. Integration of aerial, MMS, and backpack images for seamless 3D mapping in urban areas

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20150916