CN108182722B - True ortho image generation method for three-dimensional object edge optimization - Google Patents

True ortho image generation method for three-dimensional object edge optimization

Info

Publication number
CN108182722B
Authority
CN
China
Prior art keywords
three-dimensional
points
line segment
three-dimensional object
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710623219.0A
Other languages
Chinese (zh)
Other versions
CN108182722A (en)
Inventor
赵海盟
王强
崔希民
孙山林
杨彬
王勇军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Aerospace Technology
Original Assignee
Guilin University of Aerospace Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Aerospace Technology filed Critical Guilin University of Aerospace Technology
Priority to CN201710623219.0A
Publication of CN108182722A
Application granted
Publication of CN108182722B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a true ortho image generation method for three-dimensional object edge optimization, which comprises the following steps: 1) performing point-feature processing on images containing a three-dimensional object and generating a dense point cloud; 2) extracting the edge straight lines of the three-dimensional object from a plurality of images containing it, matching homonymous straight lines, and then three-dimensionally reconstructing the matched lines under the coplanarity condition to obtain the three-dimensional straight lines of the object's edges; 3) selecting two end points on each three-dimensional straight line to obtain a three-dimensional line segment, and discretizing the line segment into three-dimensional points; 4) constructing a triangulated network from the points of the point cloud obtained in step 1) and the points obtained in step 3), generating a digital surface model of the image containing the three-dimensional object, and performing orthorectification based on this model to obtain a true ortho image with optimized object edge quality. The invention compensates for the point cloud missing at object edges, thereby eliminating the saw-tooth distortion effect.

Description

True ortho image generation method for three-dimensional object edge optimization
Technical Field
The invention relates to the field of photogrammetry and three-dimensional imaging, in particular to a true ortho image generation method for three-dimensional object edge optimization.
Background
With the rapid development of the social economy, digital ortho images have become an important component of photogrammetric spatial data. In photogrammetric three-dimensional digital imaging, three-dimensional solid objects are clearly the main subject of investigation. True orthorectification based on a Digital Surface Model (DSM) has therefore become a research focus in recent years: it takes the three-dimensional shape and elevation information of the object into account and corrects the object's tilt and projection (relief) displacement, yielding a true orthographic projection image.
However, producing a true ortho image is far more complex than producing an ordinary ortho image. True orthophoto products commonly suffer from phenomena that degrade their quality: stretching artifacts, twisting dislocations, ghosting, edge jaggies, and the like. Many factors influence true ortho image quality, including the quality of the dense point cloud and the DSM in the early stage, and subsequent processing such as seamline (mosaic line) extraction, dodging and color balancing, occluded-area filling, shadow elimination, and edge modification. When an image is true-orthorectified, obtaining an accurate DSM is the first key step. Since the DSM is generally derived from a dense point cloud, some methods start by improving the quality of the dense three-dimensional point cloud, for both laser-scanning and photogrammetric point clouds. Once the three-dimensional point cloud is available, much research has addressed reconstructing a mesh surface from the scattered points, with some progress. For tall objects, occlusion and shadowing of nearby short objects is another important factor affecting ortho image quality, so automatic detection of occluded and shadowed areas has also become a key issue in true orthorectification. By contrast, the saw-tooth distortion at the edges of regular objects in true ortho images has received little study; in the later production stage, many production units trim the jagged, twisted parts with Photoshop to achieve a presentable result.
Edge jaggies are largely caused by unevenness, local gaps, and vertical and horizontal misalignment in the three-dimensional point cloud along the reconstructed edge. Uneven and locally missing points leave holes in the triangulation there, and misaligned points mean the edge point cloud does not lie on a straight line. In the conventional true orthorectification method, these abnormal point clouds make the object's edge model, represented by a Triangulated Irregular Network (TIN), take on a "saw-tooth" shape, so the image projection is misaligned during orthorectification and the edge appears jagged. As the irregularity of the jaggies increases, they appear twisted. If a dense, uniform point cloud can be filled in where edge points are missing, and these points also participate in the triangulated surface reconstruction, the edge is projected correctly during orthorectification, reducing the "jaggy" and "distortion" effects.
Disclosure of Invention
The invention aims to address the twisted, jagged object edges arising in the current production of true orthophoto images by providing a method that extracts and refines the edges of the three-dimensional object and thereby improves true orthophoto quality.
In order to achieve the purpose, the invention adopts the following technical scheme:
a method for generating an optimized true projection image of an edge of a three-dimensional object, comprising the steps of:
1) based on a conventional photogrammetric method, performing point-feature processing on the captured images of a three-dimensional object to generate a dense point cloud, and filtering and thinning the dense point cloud to obtain a high-quality point cloud;
2) extracting the object's edge straight lines from the multiple images, matching homonymous straight lines, and reconstructing the three-dimensional straight lines;
3) selecting two end points from the three-dimensional straight line to obtain a three-dimensional line segment, and discretizing the three-dimensional line segment into three-dimensional points;
4) constructing a triangulated network from the points of the point cloud obtained in step 1) and the points obtained by line-segment discretization, generating a completed DSM (Digital Surface Model), and performing orthorectification to obtain a true ortho image free of edge saw-tooth distortion.
The step 1) of generating the dense point cloud comprises the following steps: firstly, extracting feature points from multiple images and matching homonymous points to obtain a sparse object-space point cloud; secondly, using these observations in a bundle adjustment to obtain accurate interior and exterior orientation elements of the camera; thirdly, performing dense matching on the images based on the high-precision camera poses (i.e., the interior and exterior orientation elements) provided by the second step to obtain a dense three-dimensional point cloud; fourthly, filtering and thinning the dense point cloud to obtain a high-quality dense point cloud.
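The filtering and thinning of the fourth sub-step can be realized in many ways; the detailed description below mentions an outlier criterion, a TIN-gradient thinning algorithm, and region-growing network construction. The following minimal Python sketch uses assumed stand-ins — a k-nearest-neighbour statistical outlier filter and uniform voxel thinning — purely to illustrate the kind of processing involved, not the patent's exact algorithms:

```python
# Minimal sketch of point-cloud denoising and thinning (assumed stand-ins
# for the outlier criterion and TIN-gradient thinning named in the text).
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(pts, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours is
    abnormally large (a simple statistical outlier criterion)."""
    tree = cKDTree(pts)
    d, _ = tree.query(pts, k=k + 1)       # column 0 is the point itself
    mean_d = d[:, 1:].mean(axis=1)
    keep = mean_d < mean_d.mean() + std_ratio * mean_d.std()
    return pts[keep]

def voxel_thin(pts, voxel=0.2):
    """Keep one representative point per voxel of the given size."""
    keys = np.floor(pts / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return pts[idx]
```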
The step 2) of reconstructing the three-dimensional straight line comprises the following steps: firstly, extracting the three-dimensional object's edges from two or more images with overlapping coverage (the overlap containing the object's edges) using a straight-line extraction algorithm; secondly, matching homonymous straight lines across the extracted images to obtain matched straight lines; thirdly, performing three-dimensional reconstruction of the matched straight lines based on the coplanarity condition to obtain the three-dimensional straight-line expression of the object's edges.
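The patent does not prescribe a particular straight-line extraction algorithm for the first sub-step. As one hedged illustration, the sketch below uses OpenCV's probabilistic Hough transform (an assumed stand-in; a detector such as LSD would serve equally) to obtain candidate edge segments together with their head and tail image points:

```python
# Sketch of per-image edge line-segment extraction for step 2); the
# probabilistic Hough transform is only an illustrative choice.
import cv2
import numpy as np

def extract_edge_segments(image_path, min_len=50):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, 50, 150)       # binary edge map
    segs = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                           minLineLength=min_len, maxLineGap=5)
    # Each row is (x1, y1, x2, y2): head and tail image points of a segment.
    return np.empty((0, 4)) if segs is None else segs.reshape(-1, 4)
```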
The step 3) of discretizing the three-dimensional line segment into three-dimensional points comprises the following steps: firstly, determining the two end points of the three-dimensional line segment to obtain its length; secondly, setting the number of points to be discretized and performing discrete sampling to obtain the three-dimensional point cloud at the object's edge.
In the step 3), the step of obtaining the three-dimensional line segment comprises: for two images that both contain the edge line segment of the three-dimensional object, selecting in the first image the line segment l_1 with head and tail image points x_1^(1), x_2^(1), and from l_1 reconstructing the three-dimensional line segment with X_1^(1) and X_2^(1) as head and tail end points, i.e.

$L_1 = \overline{X_1^{(1)} X_2^{(1)}}$

and selecting in the second image the line segment l_2 with head and tail image points x_1^(2), x_2^(2), and from l_2 reconstructing the three-dimensional line segment with X_1^(2) and X_2^(2) as head and tail end points, i.e.

$L_2 = \overline{X_1^{(2)} X_2^{(2)}}$

where l_1 and l_2 are homonymous line segments, and L_1, L_2 are collinear, both being segments on the three-dimensional straight line L. From the four end points X_1^(1), X_2^(1), X_1^(2) and X_2^(2), the two farthest-apart end points are selected as the end points of the three-dimensional line segment.
The step 4) of generating the completed DSM and performing orthorectification comprises the following steps: firstly, combining the three-dimensional points obtained in step 1) with the points obtained by line-segment discretization, constructing a triangulated network according to the Delaunay triangulation principle, and generating a DSM containing the object's edge features; secondly, generating a true ortho image free of saw-tooth distortion according to a true orthorectification algorithm based on this DSM.
By adopting the technical scheme, the invention has the following advantages:
1. The conventional true orthorectification method uses only point features, so the point cloud at object edges is often missing or inaccurate, causing the object's edges to appear jagged, or twisted in severe cases, during true orthorectification. The missing point cloud at the object's edges can be compensated through line-segment matching, eliminating the saw-tooth distortion effect.
2. Because line-segment three-dimensional reconstruction is performed on the object's edges, the object's three-dimensional edges are obtained. Therefore, when producing the digital three-dimensional model, the outline of the three-dimensional object can be completed and refined, making the model more presentable.
Drawings
FIG. 1 is a flow chart of the conventional ortho image production process;
FIG. 2 is a schematic diagram of obtaining the elevation values of a regular grid by TIN interpolation;
FIG. 3 is a schematic diagram of the triangulation of matched line segments;
FIG. 4 is a road map of orthorectification with additional three-dimensional line segments;
FIG. 5 is a schematic diagram of straight-line three-dimensional reconstruction.
Detailed Description
The invention is described in detail below with reference to the figures and examples. The description also covers the conventional true orthophoto generation method, so that its results can be compared with those of the newly proposed method.
The invention discloses a true ortho image generation method for three-dimensional object edge optimization, comprising the following steps:
1. True orthorectification is performed using the conventional photogrammetric method to generate a true ortho image, comprising the following steps, as shown in FIG. 1:
1) extracting characteristic points and matching homonymous points by using a multi-view image matching technology;
2) performing aerial triangulation based on the homonymous-point data obtained in step 1) to optimize the camera's interior and exterior orientation elements and the object-space point coordinates;
3) performing dense matching based on the optimized data from step 2) to obtain a dense point cloud;
4) filtering and thinning the dense point cloud and constructing a triangulated network.
Dense point clouds obtained by multi-view image matching generally contain many noise points that do not belong to ground objects; these must be removed in advance, or they will affect the subsequent construction of the triangulated network. In addition, photogrammetrically reconstructed point clouds are often very large, running to millions or tens of millions of points; using them directly in triangulation would be computationally expensive, so the dense cloud is thinned with a thinning algorithm, allowing densely sampled regions with little geometric variation to be represented by fewer points. Concretely, obvious noise points are removed with an outlier criterion, the cloud is simplified with a TIN-gradient-based thinning algorithm, and the network is finally constructed by region growing.
5) generating a DSM model from the triangulated network of step 4) and performing true orthorectification according to the collinearity equation to obtain a true ortho image.
True orthorectification in photogrammetry uses digital differential rectification: based on the DSM constructed from the photogrammetric point cloud, the geometric deformation of the original image is removed and the survey area is resampled pixel by pixel to generate an image free of projection displacement. The basic principle is to establish the projection relation between two-dimensional image point coordinates and three-dimensional object-space point coordinates, which can be expressed by the homogeneous-coordinate imaging equation of computer vision:
$[x_1, x_2, x_3]^T = K R [I \mid -T] \, [X_1, X_2, X_3, X_4]^T$   (1)

where [x_1, x_2, x_3]^T and [X_1, X_2, X_3, X_4]^T are the homogeneous vector representations of the 2D image point and the 3D space point respectively, K is the camera intrinsic parameter matrix, R and T are the rotation matrix and the translation (the projection center in world coordinates), and I is the identity matrix.
Digital differential rectification is typically performed by projecting the DSM, represented as a triangulated network, onto a two-dimensional regular grid of a set resolution that serves as the object-space coordinates, as shown in FIG. 2. Each grid point is back-projected through equation (1) to find its corresponding image point, whose color information is then assigned to the grid point, generating the rectified image. Interpolation from the DSM to the regular grid is required. (A minimal sketch of this back-projection resampling is given at the end of this subsection.)
6) Post-processing of the ortho image.
During single-image orthorectification, occluded areas are detected and the occluded information compensated; the complete ortho image of the whole survey area is then obtained through mosaicking and color-consistency processing.
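As noted above, the back-projection resampling can be illustrated with a minimal Python sketch applying the projection model of equation (1) to every DSM grid cell. K, R and the projection center T are assumed known; nearest-neighbour colour sampling is used, and the occlusion detection and compensation of step 6) are deliberately omitted — this is an illustration, not a production implementation:

```python
# Sketch of digital differential rectification: project each DSM grid
# cell into the image via x = K R (P - T) (eq. (1)) and copy its colour.
import numpy as np

def true_ortho(dsm_z, x0, y0, gsd, K, R, T, image):
    """dsm_z: (H, W) elevations on a regular grid with origin (x0, y0)
    and cell size gsd; K, R, T: camera intrinsics, rotation, centre."""
    H, W = dsm_z.shape
    ortho = np.zeros((H, W, 3), dtype=image.dtype)
    for r in range(H):
        for c in range(W):
            P = np.array([x0 + c * gsd, y0 + r * gsd, dsm_z[r, c]])
            x = K @ R @ (P - T)           # homogeneous image point
            if x[2] <= 0:                 # point behind the camera
                continue
            u, v = int(x[0] / x[2]), int(x[1] / x[2])
            if 0 <= v < image.shape[0] and 0 <= u < image.shape[1]:
                ortho[r, c] = image[v, u]
    return ortho
```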
2. Extract and match the object's edge straight lines, and reconstruct the three-dimensional straight lines.
Three-dimensional straight-line reconstruction is based on line matching, i.e., a line is constructed from homonymous straight lines matched across several images. To guarantee reconstruction accuracy, the straight-line extraction on each single image must itself reach a certain accuracy, so as to reduce mismatches. Once the matched lines are obtained (i.e., for each homonymous line the pixel coordinates of its head and tail points on each image are known), the 3D line can be solved according to the following principle (taking two images as an example).
As shown in FIG. 3, L represents the three-dimensional straight line to be reconstructed; t_m1 and t_m2 represent the projection centers of camera 1 and camera 2, whose coordinates in the world coordinate system are denoted T_1 and T_2. x_1^(1), x_2^(1) are the two end points of the matched line segment on image 1, and X_1^(1), X_2^(1) the corresponding three-dimensional space points; x_1^(2), x_2^(2) are the two end points of the matched line segment on image 2, and X_1^(2), X_2^(2) their corresponding space points. t_m1 and L define a plane Ω_1 with normal vector n_1; t_m2 and L define another plane Ω_2 whose normal vector is denoted n_2.
A three-dimensional straight line in space is determined by a direction vector and a point on the line. The equation of the 3D line is derived below.
Taking camera 1 as an example, assume the two-dimensional line l passes through the two points x_1, x_2 and the three-dimensional straight line L passes through the two corresponding three-dimensional points X_1, X_2, where x_i and X_i are expressed in homogeneous coordinates as (t u_i, t v_i, t) and (X_i, Y_i, Z_i, 1) respectively; t is a scale factor, and u_i, v_i are the pixel coordinates of the image points. From the correspondence between the two-dimensional and three-dimensional points, the following expression is obtained.
$x_i = K R X_i, \quad i = 1, 2$   (2)
where K and R denote the calibration matrix and the rotation matrix of the camera. Taking the cross product of the two equations in (2) gives:

$x_1 \times x_2 = (K R X_1) \times (K R X_2) = (KR)^{*}(X_1 \times X_2) = \det(KR)(KR)^{-T}(X_1 \times X_2)$   (3)
With camera t_m1 as the origin of the coordinate system, X_1 × X_2 in equation (3) is the normal vector of the plane through X_1, X_2 and the camera center, i.e., X_1 × X_2 = n_1; meanwhile, a two-dimensional line can be expressed as the cross product of two points on it, i.e., l_1 = x_1 × x_2. Substituting into equation (3) yields the expression of the two-dimensional line shown below.
$l_1 = \det(KR) \, K^{-T} R \, n_1$   (4)
In the above formula, det(KR) is a constant that does not affect the representation of the two-dimensional line and can be dropped, i.e.
$l_1 = K^{-T} R \, n_1$   (5)
The normal vectors of the planes Ω_1 and Ω_2 can thus be obtained as n_1 = R_1^T K^T l_1 and n_2 = R_2^T K^T l_2 (the transpose following from inverting equation (5)). n_1 and n_2 are then normalized and denoted N_1 and N_2.
Since L is the intersection line of the planes Ω_1 and Ω_2, its direction vector is obtained as S_L = N_1 × N_2.
It is now necessary to determine a point P_0(X_0, Y_0, Z_0) on the straight line.
The vectors joining P_0 to the two camera projection centers t_m1 and t_m2 are perpendicular to N_1 and N_2 respectively, so that

$(P_0 - T_1)^T N_1 = 0, \qquad (P_0 - T_2)^T N_2 = 0$   (6)

Rearranging gives:

$N_1^T P_0 = N_1^T T_1, \qquad N_2^T P_0 = N_2^T T_2$   (7)

Let $X_0 = 0$; then

$\begin{bmatrix} N_{1y} & N_{1z} \\ N_{2y} & N_{2z} \end{bmatrix} \begin{bmatrix} Y_0 \\ Z_0 \end{bmatrix} = \begin{bmatrix} N_1^T T_1 \\ N_2^T T_2 \end{bmatrix}$   (8)
(Y_0, Z_0) can then be solved, so the point P_0 has coordinates (0, Y_0, Z_0).
Thus, the equation of the three-dimensional straight line L can be expressed as:

$P = P_0 + \lambda S_L$   (9)

where λ is a scale factor; taking different values of λ gives every point on the line.
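Before moving on, a minimal numeric sketch of equations (5)-(9) may help. It assumes the intrinsic matrix K and each camera's rotation R_i and projection center T_i are known, takes the homogeneous 2D line vectors l_1 = x_1 × x_2 and l_2 of the homonymous segments, and assumes the X_0 = 0 system of equation (8) is well conditioned (a degenerate case would require fixing Y_0 or Z_0 instead):

```python
# Sketch of equations (5)-(9): reconstruct the 3D edge line from two
# matched 2D lines under the coplanarity condition.
import numpy as np

def reconstruct_line(l1, l2, K, R1, T1, R2, T2):
    """l1, l2: homogeneous 2D line vectors (x1 cross x2) of the
    homonymous line segments on images 1 and 2."""
    n1 = R1.T @ K.T @ l1                  # plane normals from eq. (5)
    n2 = R2.T @ K.T @ l2
    N1 = n1 / np.linalg.norm(n1)          # normalised normals N1, N2
    N2 = n2 / np.linalg.norm(n2)
    S = np.cross(N1, N2)                  # direction S_L of Omega1 and Omega2
    # Point P0 on L with X0 = 0: the 2x2 linear system of eq. (8).
    A = np.array([[N1[1], N1[2]],
                  [N2[1], N2[2]]])
    b = np.array([N1 @ T1, N2 @ T2])
    Y0, Z0 = np.linalg.solve(A, b)
    P0 = np.array([0.0, Y0, Z0])
    return P0, S                          # line L: P = P0 + lambda * S
```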
3. Determine the two end points of the three-dimensional straight line to obtain a three-dimensional line segment.
Since the straight edge of an object generally has a finite length, whereas the three-dimensional straight line reconstructed above is infinite in extent, and since the projection of the three-dimensional object's edge on an image is a two-dimensional line segment of finite length, the extent of the three-dimensional segment must be determined from the head and tail image points of the corresponding two-dimensional segments on the images of cameras 1 and 2. As shown in FIG. 3, in the first image the matched line segment has head and tail image points x_1^(1) and x_2^(1); the reconstructed three-dimensional line segment, with X_1^(1) and X_2^(1) as its head and tail end points, can be expressed as

$L_1 = \overline{X_1^{(1)} X_2^{(1)}}$

Similarly, the three-dimensional line segment reconstructed from the second image is

$L_2 = \overline{X_1^{(2)} X_2^{(2)}}$

When the images are matched, the homonymous line segments on them are not necessarily equal in length, and their end points are not necessarily homonymous image points, so the constructed three-dimensional line segments L_1, L_2 may intersect, partially overlap, or contain one another. It is therefore necessary to find the two end points that give the maximum segment length, namely the two farthest-apart points among the end points shown in FIG. 3.
According to equation (9), different three-dimensional points correspond to different λ values. Therefore, the maximum and minimum λ over the end points X_i^(j) should be found; these determine the two end points of the required maximum-length segment.
The calculation of the λ value corresponding to X_1^(1) is explained as an example. Subtracting T_1 from both sides of equation (9) gives:
$P - T_1 = P_0 + \lambda S_L - T_1$   (10)
which, expressed through the image point, becomes

$(K R_1)^{-1} x_1^{(1)} = t (K R_1)^{-1} (u_1^{(1)}, v_1^{(1)}, 1)^T = P_0 + \lambda S_L - T_1$   (11)
Rearranging gives:

$S_L \lambda - (K R_1)^{-1} (u_1^{(1)}, v_1^{(1)}, 1)^T t = P_0 - T_1$   (12)
where the left and right sides are both 3×1 vectors, and λ and t are the quantities to be solved. In matrix form,

$\left[ S_L \;\; -(K R_1)^{-1} (u_1^{(1)}, v_1^{(1)}, 1)^T \right] [\lambda \;\; t]^T = P_0 - T_1$   (13)

Let $A = \left[ S_L \;\; -(K R_1)^{-1} (u_1^{(1)}, v_1^{(1)}, 1)^T \right]$ and $b = P_0 - T_1$; by the least-squares principle,

$[\lambda \;\; t]^T = (A^T A)^{-1} A^T b$   (14)
λ is thus solved. In the same way, all the λ values can be found; the maximum and minimum λ then give the coordinates of the head and tail end points of the three-dimensional line segment.
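The least-squares step of equations (10)-(14) is small enough to sketch directly. The following Python fragment, with illustrative names and the sign convention of equation (12), computes the λ value at which the ray through one image end point meets the reconstructed line:

```python
# Sketch of equations (10)-(14): least-squares solution for the lambda
# value of one segment end point on the line P = P0 + lambda * S_L.
import numpy as np

def lambda_of_endpoint(uv, K, R, T, P0, S):
    """uv: pixel coordinates (u, v) of a segment end point on one image;
    K, R, T: that camera's intrinsics, rotation, projection centre."""
    d = np.linalg.inv(K @ R) @ np.array([uv[0], uv[1], 1.0])  # (KR)^-1 x
    A = np.column_stack([S, -d])      # eq. (13): [S_L, -(KR)^-1 x]
    b = P0 - T                        # right-hand side of eq. (12)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)               # eq. (14)
    return sol[0]                     # lambda; sol[1] is the scale t
```

Running this for all four end points X_1^(1), X_2^(1), X_1^(2), X_2^(2) and taking the minimum and maximum λ gives the two end points of the maximum-length segment.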
4. Discretize the three-dimensional line segment into three-dimensional points.
Since a three-dimensional line segment, being a line feature, cannot participate directly in the construction of the triangulated network, the fixed-length segment must be discretized into discrete three-dimensional points; these can then be triangulated together with the other three-dimensional points to generate a new DSM, as shown in FIG. 4.
Determining the two end points of the three-dimensional straight line and discretizing the three-dimensional line segment into three-dimensional points comprises the following steps:
1) firstly, determining two end points of a three-dimensional line segment to obtain the length of the three-dimensional line segment;
2) setting the number of points to be discretized and performing discrete sampling to obtain the three-dimensional point cloud at the edge of the object.
Determining the two end points of the three-dimensional line segment in step 1) to obtain its length comprises the following steps (refer to FIG. 3):

a) for two images that both contain the edge line segment of the three-dimensional object, selecting in the first image the line segment l_1 with head and tail image points x_1^(1) and x_2^(1); the reconstructed three-dimensional line segment, with X_1^(1) and X_2^(1) as its head and tail end points, can be expressed as

$L_1 = \overline{X_1^{(1)} X_2^{(1)}}$

Similarly, selecting in the second image the line segment l_2 with head and tail image points x_1^(2), x_2^(2); the reconstructed three-dimensional line segment, with X_1^(2) and X_2^(2) as its head and tail end points, can be expressed as

$L_2 = \overline{X_1^{(2)} X_2^{(2)}}$

Here l_1 and l_2 are homonymous line segments, and L_1, L_2 are both segments on the three-dimensional straight line L.

b) The two segments L_1, L_2 from step a) are collinear, both lying on the three-dimensional straight line L; from the four end points X_1^(1), X_2^(1), X_1^(2) and X_2^(2), the two farthest-apart end points are selected as the end points of the reconstructed three-dimensional line segment, and the segment length is calculated.
In step 2), different numbers of discrete points are obtained by setting different sampling intervals along the three-dimensional line segment obtained in step 3; the density of the generated discrete points can be set according to the actual situation. Assuming the number of sampling points is m, the sampling interval is Δt = (λ_max − λ_min)/m.

The coordinates of each discrete three-dimensional point on the segment are obtained from equation (15):

$P_i = P_0 + (\lambda_{min} + i \cdot \Delta t) S_L, \quad i = 1, \ldots, m$   (15)
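A minimal sketch of the sampling of equation (15) follows; it takes the λ values found for the four end points (function and argument names are illustrative) and returns the discretized edge points:

```python
# Sketch of step 3: span the farthest-apart end points via the min/max
# lambda values, then sample m discrete points along the segment (eq. 15).
import numpy as np

def discretize_segment(P0, S, lambdas, m=100):
    """lambdas: the lambda values of the four end points X1^(1), X2^(1),
    X1^(2), X2^(2); their min and max span the maximum-length segment."""
    lmin, lmax = min(lambdas), max(lambdas)
    dt = (lmax - lmin) / m                     # sampling interval Delta t
    i = np.arange(1, m + 1)
    return P0 + (lmin + i[:, None] * dt) * S   # (m, 3) array of points P_i
```

The sample count m controls how dense the compensated edge point cloud is and, per the text above, can be chosen according to the actual situation.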
5. Construct a triangulated network from the existing points and the points discretized from the line segments, generate the completed DSM, and perform orthorectification again.
The discretized three-dimensional points are specially tagged and added to the existing three-dimensional point cloud as additional points, and the network is reconstructed, as shown in FIG. 4, under the following networking principles:
1) the basic triangulation criteria are unchanged.
2) when the network grows to a specially tagged point, that point is never removed, wherever it is located; it forms the triangulated network together with the three-dimensional points of its neighborhood.
The newly constructed triangular mesh is taken as the new DSM model and re-interpolated onto the regular grid, and orthorectification is performed according to the flow of FIG. 4 to obtain the new true ortho image.
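The merge-and-triangulate step of this section can be sketched as follows. scipy's 2.5D Delaunay triangulation over the XY plane is an assumed stand-in for the region-growing network construction described earlier, and the "special tag" networking principle is reflected simply by always including every discretized edge point:

```python
# Sketch of step 5: merge the tagged edge points into the filtered dense
# cloud and rebuild a 2.5D Delaunay triangulation as the new DSM.
import numpy as np
from scipy.spatial import Delaunay

def rebuild_dsm(cloud_pts, edge_pts):
    """cloud_pts: (N, 3) filtered dense cloud; edge_pts: (M, 3) points
    discretized from the 3D edge segments (tagged: never filtered out)."""
    pts = np.vstack([cloud_pts, edge_pts])
    tri = Delaunay(pts[:, :2])            # triangulate on the XY plane
    return pts, tri.simplices             # vertices and triangle indices
```

The new triangulated network is then interpolated to a regular grid and orthorectified exactly as in the conventional flow, now with the edge geometry preserved.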

Claims (6)

1. A true ortho image generation method for three-dimensional object edge optimization, comprising the following steps:
1) performing point-feature processing on an image containing a three-dimensional object and generating a dense point cloud;
2) extracting edge straight lines of the three-dimensional object from a plurality of images containing the three-dimensional object, matching homonymous straight lines, and then performing three-dimensional reconstruction of the matched straight lines based on the coplanarity condition to obtain the three-dimensional straight lines of the three-dimensional object's edge;
3) selecting two end points on the three-dimensional straight line to obtain a three-dimensional line segment, and discretizing the three-dimensional line segment into three-dimensional points; wherein the step of obtaining the three-dimensional line segment comprises: a) for two images that both contain the edge line segment of the three-dimensional object, selecting in the first image the line segment l_1 with head and tail image points x_1^(1), x_2^(1), and from l_1 reconstructing the three-dimensional line segment with X_1^(1) and X_2^(1) as head and tail end points, i.e.

$L_1 = \overline{X_1^{(1)} X_2^{(1)}}$

and selecting in the second image the line segment l_2 with head and tail image points x_1^(2), x_2^(2), and from l_2 reconstructing the three-dimensional line segment with X_1^(2) and X_2^(2) as head and tail end points, i.e.

$L_2 = \overline{X_1^{(2)} X_2^{(2)}}$

wherein l_1 and l_2 are homonymous line segments, and L_1, L_2 are collinear, both being segments on the three-dimensional straight line L; b) from the four end points X_1^(1), X_2^(1), X_1^(2) and X_2^(2), selecting the two farthest-apart end points as the end points of the three-dimensional line segment;
4) constructing a triangulated network based on the points in the point cloud obtained in step 1) and the points obtained in step 3), generating a digital surface model containing the edge features of the three-dimensional object, and performing orthorectification based on the digital surface model to obtain a true ortho image with optimized three-dimensional object edge quality.
2. The method of claim 1, wherein the step of discretizing the three-dimensional line segment into three-dimensional points comprises:
firstly, obtaining the length of the three-dimensional line segment from its two end points; secondly, setting the number of points to be discretized and performing discrete sampling to obtain the three-dimensional point cloud at the edge of the three-dimensional object.
3. The method according to claim 1, wherein in step 4), the point cloud obtained in step 1) is filtered and thinned, and the triangulated network is then constructed from the processed points and the points obtained in step 3) to generate the digital surface model of the image containing the three-dimensional object's edge features; orthorectification is then performed based on the digital surface model to obtain the true ortho image of the image containing the three-dimensional object.
4. The method of claim 1, wherein the triangulated network is constructed according to the Delaunay triangulation principle.
5. The method of claim 1, wherein the step of generating the dense point cloud comprises: firstly, extracting feature points from the plurality of images containing the three-dimensional object and matching homonymous points to obtain a sparse object-space point cloud; secondly, using these observations in a bundle adjustment to obtain accurate interior and exterior orientation elements of the camera; thirdly, performing dense matching on the images based on the high-precision camera poses provided by the second step to obtain the dense point cloud.
6. The method according to claim 1, wherein in step 2), the edges of the three-dimensional object are extracted, using a straight-line extraction algorithm, from two or more overlapping images that each contain the three-dimensional object; homonymous straight lines on the extracted images are then matched to obtain the matched straight lines.
CN201710623219.0A 2017-07-27 2017-07-27 True ortho image generation method for three-dimensional object edge optimization Active CN108182722B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710623219.0A CN108182722B (en) 2017-07-27 2017-07-27 True ortho image generation method for three-dimensional object edge optimization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710623219.0A CN108182722B (en) 2017-07-27 2017-07-27 True ortho image generation method for three-dimensional object edge optimization

Publications (2)

Publication Number Publication Date
CN108182722A CN108182722A (en) 2018-06-19
CN108182722B true CN108182722B (en) 2021-08-06

Family

ID=62545127

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710623219.0A Active CN108182722B (en) 2017-07-27 2017-07-27 True ortho image generation method for three-dimensional object edge optimization

Country Status (1)

Country Link
CN (1) CN108182722B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109242862B (en) * 2018-09-08 2021-06-11 西北工业大学 Real-time digital surface model generation method
CN109978800B (en) * 2019-04-23 2021-01-19 武汉惟景三维科技有限公司 Point cloud shadow data removing method based on threshold
CN111159498A (en) * 2019-12-31 2020-05-15 北京蛙鸣华清环保科技有限公司 Data point thinning method and device and electronic equipment
CN112419443A (en) * 2020-12-09 2021-02-26 中煤航测遥感集团有限公司 True ortho image generation method and device
CN115200556A (en) * 2022-07-18 2022-10-18 华能澜沧江水电股份有限公司 High-altitude mining area surveying and mapping method and device, and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009143986A1 (en) * 2008-05-27 2009-12-03 The Provost, Fellows And Scholars Of The College Of The Holy And Undivided Trinity Of Queen Elizabeth Near Dublin Automated building outline detection
CN103017739A (en) * 2012-11-20 2013-04-03 武汉大学 Manufacturing method of true digital ortho map (TDOM) based on light detection and ranging (LiDAR) point cloud and aerial image
CN104778720A (en) * 2015-05-07 2015-07-15 东南大学 Rapid volume measurement method based on spatial invariant feature
CN105466399A (en) * 2016-01-11 2016-04-06 中测新图(北京)遥感技术有限责任公司 Quick semi-global dense matching method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
AMHAR, F. et al., "The generation of true orthophotos using a 3D building model in conjunction with a conventional DTM", International Archives of Photogrammetry and Remote Sensing, 1998, pp. 16-22. *

Also Published As

Publication number Publication date
CN108182722A (en) 2018-06-19

Similar Documents

Publication Publication Date Title
CN108182722B (en) True ortho image generation method for three-dimensional object edge optimization
CN107316325B (en) Airborne laser point cloud and image registration fusion method based on image registration
CN110363858B (en) Three-dimensional face reconstruction method and system
CN107977997B (en) Camera self-calibration method combined with laser radar three-dimensional point cloud data
CN107590825B (en) Point cloud hole repairing method based on SFM
KR101533182B1 (en) 3d streets
CN111629193A (en) Live-action three-dimensional reconstruction method and system
CN105243637B (en) One kind carrying out full-view image joining method based on three-dimensional laser point cloud
CN111563921B (en) Underwater point cloud acquisition method based on binocular camera
CN113192193B (en) High-voltage transmission line corridor three-dimensional reconstruction method based on Cesium three-dimensional earth frame
TW201724026A (en) Generating a merged, fused three-dimensional point cloud based on captured images of a scene
CN107123156A (en) A kind of active light source projection three-dimensional reconstructing method being combined with binocular stereo vision
CN109945841B (en) Industrial photogrammetry method without coding points
US8577139B2 (en) Method of orthoimage color correction using multiple aerial images
JP6534296B2 (en) Three-dimensional model generation device, three-dimensional model generation method, and program
KR101602472B1 (en) Apparatus and method for generating 3D printing file using 2D image converting
CN110738731A (en) 3D reconstruction method and system for binocular vision
CN111091076A (en) Tunnel limit data measuring method based on stereoscopic vision
CN106952262A (en) A kind of deck of boat analysis of Machining method based on stereoscopic vision
CN114463521B (en) Building target point cloud rapid generation method for air-ground image data fusion
CN107958489B (en) Curved surface reconstruction method and device
CN108830921A (en) Laser point cloud reflected intensity correcting method based on incident angle
CN114998399A (en) Heterogeneous optical remote sensing satellite image stereopair preprocessing method
CN110631555A (en) Historical image ortho-rectification method based on adjustment of second-order polynomial control-point-free area network
CN113393413B (en) Water area measuring method and system based on monocular and binocular vision cooperation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant