CN111141264B - Unmanned aerial vehicle-based urban three-dimensional mapping method and system

Unmanned aerial vehicle-based urban three-dimensional mapping method and system

Info

Publication number
CN111141264B
Authority
CN
China
Prior art keywords
dimensional map
dimensional
unmanned aerial
aerial vehicle
image
Prior art date
Legal status
Active
Application number
CN201911421322.2A
Other languages
Chinese (zh)
Other versions
CN111141264A (en)
Inventor
孟繁乐
王蓉
查文中
孟祥瑞
Current Assignee
CETC Information Science Research Institute
Original Assignee
CETC Information Science Research Institute
Priority date
Filing date
Publication date
Application filed by CETC Information Science Research Institute filed Critical CETC Information Science Research Institute
Priority to CN201911421322.2A priority Critical patent/CN111141264B/en
Publication of CN111141264A publication Critical patent/CN111141264A/en
Application granted granted Critical
Publication of CN111141264B publication Critical patent/CN111141264B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C15/00 - Surveying instruments or accessories not provided for in groups G01C1/00 - G01C13/00
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 - Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 - Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 - Determining position
    • G01S19/45 - Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S19/47 - Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial

Abstract

An unmanned aerial vehicle-based urban three-dimensional mapping method and system, the method comprising: planning a first route and acquiring images along the first route to construct a first three-dimensional map; recovering dense patches of the first three-dimensional map by rapid dense reconstruction to form a second three-dimensional map; calculating the incomplete and poor-quality areas of the second three-dimensional map, calculating the next optimal viewpoint from which those areas can be photographed, and generating a collision-free path to the next optimal viewpoint; and acquiring images at the next optimal viewpoint to update the second three-dimensional map, repeating these steps until a second three-dimensional map with complete coverage, namely the third three-dimensional map, is obtained. The invention can perform three-dimensional mapping autonomously, construct the map in real time, and finally form a complete three-dimensional map with texture information. Compared with the prior art, the method can automatically find and complete incomplete areas and form the three-dimensional map in real time.

Description

Unmanned aerial vehicle-based urban three-dimensional mapping method and system
Technical Field
The invention relates to the field of map surveying and mapping, and in particular to an unmanned aerial vehicle-based urban three-dimensional surveying and mapping method and system.
Background
Traditional three-dimensional mapping methods rely mainly on manual means and suffer from long construction periods, high cost and a single form of output. In recent years, with the improved performance of small unmanned aerial vehicles, cameras and related products, photogrammetry based on aerial images has been widely adopted. Such methods take images shot by a small unmanned aerial vehicle as input, analyze them, and compute the three-dimensional information of a city by computer-vision methods. Unmanned-aerial-vehicle-based three-dimensional mapping techniques, including oblique photography, simplify the surveying workflow and save labor cost.
However, the following problems remain. First, the mapping process often requires manual participation, such as manual remote control or setting target courses, and the quality of the three-dimensional result must be judged manually before follow-up operations can proceed; these factors greatly restrict the efficiency and cost of three-dimensional mapping. Second, the mapping model is usually generated from a large number of images taken by the unmanned aerial vehicle; for urban scenes in particular the data volume is large, and obtaining the three-dimensional map through offline processing is time-consuming.
Therefore, how to enable an unmanned aerial vehicle to carry out three-dimensional mapping autonomously and construct the map in real time, thereby reducing manual intervention and improving mapping efficiency, is a technical problem that urgently needs to be solved in the prior art.
Disclosure of Invention
The invention aims to provide an unmanned aerial vehicle-based urban three-dimensional mapping method and system that enable an unmanned aerial vehicle to carry out three-dimensional mapping autonomously, construct a map in real time, and finally form a complete three-dimensional map with texture information.
In order to achieve the purpose, the invention adopts the following technical scheme:
an unmanned aerial vehicle-based urban three-dimensional surveying and mapping method, wherein the unmanned aerial vehicle comprises an unmanned aerial vehicle body, a gimbal, an image acquisition module, a navigation positioning module and an inertial measurement unit;
the method is characterized by comprising the following steps:
a first route planning step S110: receiving a marking signal from a user, planning a first air route for the unmanned aerial vehicle, the first air route being a constant-height flight route, and transmitting the first air route to the unmanned aerial vehicle;
an image acquisition step S120: the unmanned aerial vehicle flies along the first air route, captures image signals through the image acquisition module, and records the navigation positioning and inertial measurement data corresponding to each captured image;
a first three-dimensional map generation step S130: calculating the current position and attitude of the unmanned aerial vehicle by a simultaneous localization and mapping technique, fusing the navigation positioning data and inertial measurement data, and constructing a first three-dimensional map, the first three-dimensional map being a sparse point cloud;
Second three-dimensional map generation step S140: recovering a dense patch of the first three-dimensional map in a rapid dense reconstruction mode to form a second three-dimensional map;
next optimal viewpoint calculating step S150: calculating the incomplete and poor-quality areas of the second three-dimensional map, and calculating a viewpoint from which those areas can be photographed, called the next optimal viewpoint, the height of the next optimal viewpoint no longer being limited to the height of the first route;
collision-free path generation step S160: planning and generating a collision-free path according to the movement from the current pose of the unmanned aerial vehicle to the next optimal viewpoint, wherein the collision-free path is a second air route and enables the unmanned aerial vehicle to fly along the second air route;
the second three-dimensional map updating step S170: and collecting an image at the next optimal viewpoint, recovering a dense patch by a fast dense reconstruction method, and adding the dense patch into the second three-dimensional map to obtain an updated second three-dimensional map.
a third three-dimensional map construction step S180: repeating steps S150 to S170 until a second three-dimensional map with complete coverage is constructed, which is the third three-dimensional map.
Optionally, in the first route planning step S110, the first route is a constant-height flight route following a zigzag polyline.
Optionally, in the first three-dimensional map generating step S130, the simultaneous localization and mapping technique is ORB-SLAM.
Optionally, the second three-dimensional map generation step S140 specifically includes: dividing each image used to form the first three-dimensional map into polygonal sub-regions by simple linear iterative clustering (SLIC) superpixel segmentation, and splitting each polygon into triangles by Delaunay triangulation; selecting the triangles into which points of the first three-dimensional map project, regarding each such triangle as a plane, and performing depth estimation on these planes, a process that forms a linear system which is solved, thereby performing rapid dense reconstruction.
Optionally, the second three-dimensional map generating step S140 specifically includes:
the three-dimensional reconstruction energy function construction sub-step S141 of a single triangle in a single image:
for a selected triangle {v_1, v_2, v_3} in the image, three-dimensional reconstruction of the triangle is equivalent to calculating the depths at v_1, v_2 and v_3; assuming p is a point of the first three-dimensional map whose projection on the current image falls inside the triangle {v_1, v_2, v_3}, the depth of the point p is denoted as $d_p = \alpha_1 d_1 + \alpha_2 d_2 + \alpha_3 d_3$, where $d_p$ and $d_k$ ($1 \le k \le 3$) are the depths of p and $v_k$ respectively, and the weights $(\alpha_1, \alpha_2, \alpha_3)$ are the barycentric coordinates of p in the triangle {v_1, v_2, v_3}; the triangle is reconstructed by solving for $d_k$ ($1 \le k \le 3$), and consistency with the first three-dimensional map points is ensured by minimizing the following three-dimensional reconstruction energy function:
$$E_{slam}(i) = (d_p - \alpha_1 d_1 - \alpha_2 d_2 - \alpha_3 d_3)^2$$
wherein i represents an index of the input image;
the sub-step S142 of constructing the local continuity constraint energy function of the single image three-dimensional reconstruction:
parameterizing two adjacent triangles, the second triangle being {v_4, v_5, v_6}; local continuity is achieved by minimizing the function:
$$E_{continuity}(i) = w_c\left((d_3 - d_5)^2 + (d_2 - d_6)^2\right)$$
where the weight $w_c$ controls the strength of the smoothing according to the difference between the two triangles;
multi-view constraint energy function construction sub-step S143:
for adjacent viewpoints i and j, assume the projection of a triangle {v'_1, v'_2, v'_3} in viewpoint i covers the vertex $v_1$, and the depths of its vertices are $d'_1$, $d'_2$ and $d'_3$; the depth of $v_1$ should then be consistent with the depth interpolated from $d'_1$, $d'_2$ and $d'_3$, i.e., the following multi-view constraint energy function should be minimized:
$$E_{fusion}(i, j) = (d_1 - \gamma_1 d'_1 - \gamma_2 d'_2 - \gamma_3 d'_3)^2$$
where $(\gamma_1, \gamma_2, \gamma_3)$ are the barycentric coordinates of $v_1$ in the triangle {v'_1, v'_2, v'_3};
total energy function three-dimensional reconstruction substep S144:
a total energy function combining the three terms is established,
$$E = \sum_i \Big( E_{slam}(i) + E_{continuity}(i) + \sum_j E_{fusion}(i, j) \Big),$$
and expressing the total energy function as a matrix form to carry out linear solution so as to obtain a second three-dimensional map.
Optionally, the next optimal viewpoint calculating step S150 specifically includes:
incomplete and poor-quality area calculation sub-step S151:
calculating a Poisson signal value of the second three-dimensional map, and further evaluating sampling points where the Poisson signal value is poor, wherein the small disc area corresponding to a sampling point p is $A_p$ and $a_p(v)$ is the projection of the sampling point p in image v, so the projection ratio of the sampling point is defined as $\gamma_p(v) = a_p(v)/A_p$; for each sampling point, a viewpoint with a projection ratio above a set threshold is marked as a good viewpoint; if a sampling point is visible in fewer than two good viewpoints, it is marked as "incomplete"; finally, if the angle between two good viewpoints is within the range of 2° to 30°, the point is marked "complete", otherwise "incomplete", thereby obtaining the incomplete and poor-quality areas;
a next optimal viewpoint solving substep S152 corresponding to an incomplete or poor quality region of reconstruction:
respectively calculating an occlusion term, a focus term, a parallax term and an incidence term; specifically, selecting the 10 nearest "incomplete" sampling points and recording the corresponding Poisson-disc sampling areas as a set $\mathcal{S}$; letting $\mathcal{V}$ be the set of vertices of the faces in $\mathcal{S}$, whose penalty parameter is recorded as $w$; to obtain forward energy, the faces of $\mathcal{S}$ are selected to enhance all terms in the energy function,
the occlusion term: to favor the visibility of this area from the pose P of the camera (the defining expression appears only as an image in the original document);
the focus term: expressing that the area around $\mathcal{V}$ tends to be projected at the center of the image; the vertices of $\mathcal{V}$ are weighted by a two-dimensional Gaussian centered on the image, specifically
$$E_{focus}(P) = \sum_{v \in \mathcal{V}} \exp\!\left(-\frac{(v_x^P - x_0)^2}{2\sigma_x^2} - \frac{(v_y^P - y_0)^2}{2\sigma_y^2}\right),$$
where $v_x^P$ and $v_y^P$ are the coordinates of v projected onto camera P, $(x_0, y_0)$ is the center of the image, and $\sigma_x$ and $\sigma_y$ are set from W and H, the width and height of the image;
the parallax term: used to represent a tendency to capture images of the surface with greater parallax than the other images, based on the B/H (base-to-height) constraint of aerial photogrammetry, where B is the baseline, i.e., the distance between the two poses, H represents the distance from the pose to be solved to v, i.e., the height of the unmanned aerial vehicle, and δ is a fixed threshold (the closed-form expression appears only as an image in the original document);
the incidence term: indicating that the camera tends to view the surface of interest at an incidence angle of 40° to 70°, given by the von Mises density
$$E_{incidence}(P) = \frac{e^{\kappa \cos(x - \mu)}}{2\pi I_0(\kappa)},$$
where x represents the angle between the normal vector of the surface and the ray from the surface centroid to the camera, μ represents the mode (the preferred incidence angle), κ denotes the concentration of the distribution, and $I_0$ is the zero-order modified Bessel function, expressed as
$$I_0(\kappa) = \sum_{m=0}^{\infty} \frac{(\kappa/2)^{2m}}{(m!)^2};$$
next optimal viewpoint energy: combining the occlusion term, the focus term, the parallax term and the incidence term, the energy for a single vertex is
$$E_{nbv}(v, P) = \mu_1 E_{occ} + \mu_2 E_{focus} + \mu_3 E_{parallax} + \mu_4 E_{incidence},$$
where $\mu_1$, $\mu_2$, $\mu_3$ and $\mu_4$ are the weights of the different terms; the energy over all vertices $\mathcal{V}$ is then defined as
$$E(P) = \sum_{v \in \mathcal{V}} E_{nbv}(v, P);$$
and minimizing the next-viewpoint energy function; since larger term values correspond to better poses, the solution is obtained by minimizing the negative of the sum.
Optionally, in the collision-free path generation step S160, the second route from the current pose of the unmanned aerial vehicle to the next optimal viewpoint is planned within the second three-dimensional map, combined with real-time images from the camera carried by the unmanned aerial vehicle.
Optionally, in the collision-free path generation step S160, the path plan is solved by the rapidly-exploring random tree (RRT) path planning method.
Optionally, the fast dense reconstruction in the second three-dimensional map updating step S170 is the same as that in the second three-dimensional map generation step S140.
The invention also discloses an unmanned aerial vehicle-based urban three-dimensional mapping system, comprising an unmanned aerial vehicle and a ground workstation, wherein the unmanned aerial vehicle comprises an unmanned aerial vehicle body, a gimbal, an optical camera, a navigation positioning module and an inertial measurement unit, characterized in that the above urban three-dimensional mapping method runs in the ground workstation.
Therefore, the invention can perform three-dimensional mapping autonomously and construct the map in real time, finally forming a complete three-dimensional map with texture information. Compared with the prior art, it can automatically find and complete incomplete areas and form the three-dimensional map in real time.
Drawings
Fig. 1 is a flow chart of a method for three-dimensional mapping of cities based on unmanned aerial vehicles according to the invention;
FIG. 2 is a schematic diagram of a first lane, in accordance with a specific embodiment of the present invention;
FIG. 3 is a parameterization of two adjacent triangles in accordance with a specific embodiment of the present invention;
FIG. 4 is a diagram illustrating a multi-view consistent constraint process according to an embodiment of the invention;
FIG. 5 is a flowchart of a sample point classification process according to an embodiment of the invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Referring to fig. 1, a flow chart of the unmanned aerial vehicle-based urban three-dimensional mapping method according to the invention is shown. The method specifically comprises the following steps:
the method is used in an unmanned aerial vehicle, and the unmanned aerial vehicle comprises an unmanned aerial vehicle body, a holder, an image acquisition module such as an optical camera, a navigation positioning module such as a GPS or Beidou navigation system, and an inertial measurement unit.
A first route planning step S110: receiving a marking signal from a user, planning a first air route for the unmanned aerial vehicle, the first air route being a constant-height flight route, and transmitting the first air route to the unmanned aerial vehicle.
Referring to fig. 2, the first air route is a constant-height flight route following a zigzag polyline.
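As an illustration, a constant-height zigzag route of this kind can be generated with a few lines of code. The sketch below is not from the patent; the rectangular region bounds, strip spacing and function name are illustrative assumptions.

```python
# Minimal sketch (assumption, not from the patent): generate a constant-height
# zigzag route over a rectangular survey region in parallel strips.
from typing import List, Tuple

def plan_first_route(x_min: float, y_min: float, x_max: float, y_max: float,
                     altitude: float, strip_spacing: float) -> List[Tuple[float, float, float]]:
    """Return (x, y, z) waypoints sweeping the region strip by strip."""
    waypoints: List[Tuple[float, float, float]] = []
    y, left_to_right = y_min, True
    while y <= y_max:
        x_a, x_b = (x_min, x_max) if left_to_right else (x_max, x_min)
        waypoints.append((x_a, y, altitude))
        waypoints.append((x_b, y, altitude))
        left_to_right = not left_to_right   # reverse direction on each strip
        y += strip_spacing                  # spacing chosen for image overlap
    return waypoints

# Example: a 200 m x 100 m block surveyed at 60 m altitude with 20 m strips.
route = plan_first_route(0, 0, 200, 100, altitude=60, strip_spacing=20)
```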
An image acquisition step S120: the unmanned aerial vehicle flies along the first air route, captures image signals through the image acquisition module, and records the navigation positioning and inertial measurement data corresponding to each captured image.
A first three-dimensional map generation step S130: the current position and attitude of the unmanned aerial vehicle are calculated by a simultaneous localization and mapping technique, fusing the navigation positioning data and inertial measurement data, and a first three-dimensional map is constructed; the first three-dimensional map is a sparse point cloud.
Optionally, in this step, the simultaneous localization and mapping technique is ORB-SLAM.
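The patent does not detail how the ORB-SLAM pose is fused with the navigation data. One common choice, shown here purely as an assumption, is to align the up-to-scale monocular SLAM trajectory to GPS positions with a similarity transform (Umeyama's method):

```python
# Hedged sketch: align an up-to-scale SLAM trajectory to GPS fixes with a
# similarity transform (Umeyama). The fusion strategy is an assumption; the
# patent only states that navigation and inertial data are fused with SLAM.
import numpy as np

def umeyama_alignment(slam_xyz: np.ndarray, gps_xyz: np.ndarray):
    """Return scale s, rotation R, translation t with gps ~= s * R @ slam + t."""
    mu_s, mu_g = slam_xyz.mean(0), gps_xyz.mean(0)
    src, dst = slam_xyz - mu_s, gps_xyz - mu_g
    cov = dst.T @ src / len(slam_xyz)       # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                      # keep a proper rotation
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / src.var(0).sum()
    t = mu_g - s * R @ mu_s
    return s, R, t
```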
Second three-dimensional map generation step S140: and recovering the dense patch of the first three-dimensional map in a rapid dense reconstruction mode to form a second three-dimensional map.
Due to the limited variety of viewpoints, the second three-dimensional map has low-quality or incomplete areas; this step prepares those areas for recovery and good-quality reconstruction in the next step.
Specifically, each image used to form the first three-dimensional map is divided into polygonal sub-regions by simple linear iterative clustering (SLIC) superpixel segmentation, and each polygon is split into triangles by Delaunay triangulation; the triangles into which points of the first three-dimensional map project are selected, each such triangle is regarded as a plane, and depth estimation is performed on these planes; the process forms a linear system that can be solved, thereby performing rapid dense reconstruction.
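A minimal sketch of this front end, assuming scikit-image and SciPy are available and using superpixel centroids as the triangulation sites (a simplification of the per-polygon triangulation described above):

```python
# Sketch under stated assumptions: SLIC superpixels partition the image,
# Delaunay triangulation builds triangles over the superpixel centroids, and
# triangles containing the projection of a sparse map point are selected.
import numpy as np
from skimage.segmentation import slic
from skimage.measure import regionprops
from scipy.spatial import Delaunay

def triangulate_image(image: np.ndarray, sparse_proj_xy: np.ndarray):
    """image: RGB array; sparse_proj_xy: (N, 2) projected sparse map points."""
    labels = slic(image, n_segments=400, compactness=10)   # superpixel map
    sites = np.array([r.centroid[::-1]                      # (row,col)->(x,y)
                      for r in regionprops(labels + 1)])
    tri = Delaunay(sites)                                   # triangles over sites
    containing = tri.find_simplex(sparse_proj_xy)           # triangle per point
    selected = np.unique(containing[containing >= 0])       # drop outside points
    return tri, selected
```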
Further, the method comprises the following substeps:
the three-dimensional reconstruction energy function construction sub-step S141 of a single triangle in a single image:
for a selected triangle {v_1, v_2, v_3} in the image, three-dimensional reconstruction of the triangle is equivalent to calculating the depths at v_1, v_2 and v_3. Assume p is a point of the first three-dimensional map whose projection on the current image falls inside the triangle {v_1, v_2, v_3}; the depth of the point p is denoted as $d_p = \alpha_1 d_1 + \alpha_2 d_2 + \alpha_3 d_3$, where $d_p$ and $d_k$ ($1 \le k \le 3$) are the depths of p and $v_k$ respectively, and the weights $(\alpha_1, \alpha_2, \alpha_3)$ are the barycentric coordinates of p in the triangle {v_1, v_2, v_3}. The triangle is reconstructed by solving for $d_k$ ($1 \le k \le 3$), and consistency with the first three-dimensional map points is ensured by minimizing the following three-dimensional reconstruction energy function:
$$E_{slam}(i) = (d_p - \alpha_1 d_1 - \alpha_2 d_2 - \alpha_3 d_3)^2$$
where i denotes the index of the input image.
The sub-step S142 of constructing a local continuity constraint energy function of the single image three-dimensional reconstruction is as follows:
Two adjacent triangles are parameterized as shown in fig. 3, the second triangle being {v_4, v_5, v_6}; local continuity is achieved by minimizing the function:
$$E_{continuity}(i) = w_c\left((d_3 - d_5)^2 + (d_2 - d_6)^2\right)$$
where the weight $w_c$ controls the strength of the smoothing according to the difference between the two triangles.
In this sub-step, although $v_3$ and $v_5$ (or $v_2$ and $v_6$) overlap in the image, they are still represented by two different depth values to allow for discontinuities in depth. This parameterization effectively models the case where two adjacent triangles lie at different depths.
Multi-view constraint energy function construction sub-step S143:
In order to keep the triangular patches consistent across different views, a multi-view constraint is further introduced. For the adjacent viewpoints i and j shown in fig. 4, assume the projection of a triangle {v'_1, v'_2, v'_3} in viewpoint i covers the vertex $v_1$, and the depths of its vertices are $d'_1$, $d'_2$ and $d'_3$; the depth of $v_1$ should then be consistent with the depth interpolated from $d'_1$, $d'_2$ and $d'_3$, i.e., the following multi-view constraint energy function should be minimized:
$$E_{fusion}(i, j) = (d_1 - \gamma_1 d'_1 - \gamma_2 d'_2 - \gamma_3 d'_3)^2$$
where $(\gamma_1, \gamma_2, \gamma_3)$ are the barycentric coordinates of $v_1$ in the triangle {v'_1, v'_2, v'_3}.
In particular, this constraint is constructed considering a fixed number of adjacent viewpoints (e.g. 6).
Total energy function three-dimensional reconstruction substep S144:
A total energy function combining the three terms is established,
$$E = \sum_i \Big( E_{slam}(i) + E_{continuity}(i) + \sum_j E_{fusion}(i, j) \Big),$$
and expressing the total energy function as a matrix form to carry out linear solution so as to obtain a second three-dimensional map.
In this step, to achieve incremental reconstruction, the depth estimation result of the current image needs to be fused with the existing series of images: the several adjacent viewpoints nearest to the current frame are found according to the camera pose, and the total energy function is optimized again, thereby realizing incremental rapid three-dimensional dense reconstruction and forming a second three-dimensional map composed of triangular patches.
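Since every energy above is the square of a linear function of the vertex depths, minimizing their sum is an ordinary linear least-squares problem. The sketch below shows one way to stack such residuals and solve them; the row layout and weights are illustrative assumptions, not the patent's exact matrix form.

```python
# Hedged sketch: stack the squared linear residuals (E_slam, E_continuity,
# E_fusion rows) into A d = b and solve for the depth vector d by least squares.
import numpy as np

def solve_depths(rows, n_vertices):
    """rows: list of (coeff_dict, rhs, weight); returns the depth vector d."""
    A = np.zeros((len(rows), n_vertices))
    b = np.zeros(len(rows))
    for r, (coeffs, rhs, w) in enumerate(rows):
        for k, c in coeffs.items():         # one residual row per energy term
            A[r, k] = np.sqrt(w) * c
        b[r] = np.sqrt(w) * rhs
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d

# An E_slam row (map point at depth 2.5 m over triangle vertices 0,1,2) and an
# E_continuity row tying vertex 3 to vertex 5; alpha weights sum to 1.
rows = [({0: 0.2, 1: 0.3, 2: 0.5}, 2.5, 1.0),
        ({3: 1.0, 5: -1.0}, 0.0, 0.8)]
d = solve_depths(rows, n_vertices=6)
```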
Next optimal viewpoint calculating step S150: the incomplete and poor-quality areas of the second three-dimensional map are calculated, and a viewpoint from which those areas can be photographed is computed; this viewpoint is called the next optimal viewpoint, and its height is no longer limited to the height of the first route.
Specifically, the method comprises the following steps:
incomplete and poor area calculation step S151:
A Poisson signal value of the second three-dimensional map is calculated, and areas of poor reconstruction quality are identified through the Poisson signal value and smoothness; a good Poisson signal value generally indicates high reconstruction quality, i.e., "completeness". The resulting surface is better when the image quality corresponding to the region is high. At the same time, a surface should be visible from at least two reasonable viewpoints to obtain a good three-dimensional reconstruction. According to these two points, the reconstructed surface is sampled uniformly by Poisson-disc sampling to obtain sampling points, which are then classified as complete or incomplete; the process is shown in fig. 5, yielding the incomplete and poor-quality areas.
Specifically: a Poisson signal value of the second three-dimensional map is calculated, and sampling points are further evaluated where the Poisson signal value is poor. The small disc area corresponding to a sampling point p is $A_p$, and $a_p(v)$ is the projection of the sampling point p in image v; the projection ratio of the sampling point is defined as $\gamma_p(v) = a_p(v)/A_p$. A large projection ratio indicates a better chance of obtaining a high-quality reconstruction. For each sampling point, a viewpoint with a sufficiently large projection ratio ($> \gamma_{min}$, a settable threshold typically obtained empirically or by calculation) is regarded as a good viewpoint, and a sampling point is considered "incomplete" if it is visible in fewer than two good viewpoints. Finally, if the angle between two good viewpoints is within a suitable range ($\theta_{min} = 2°$, $\theta_{max} = 30°$), the two viewpoints constitute stable stereo vision and the point is marked "complete"; otherwise it is marked "incomplete", thereby obtaining the incomplete and poor-quality areas.
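A sketch of this classification rule, using the 2° and 30° thresholds named in the text; the value of γ_min and the data layout are assumptions.

```python
# Sketch of the sampling-point classification of sub-step S151. gamma_min is
# an assumed threshold; theta_min/theta_max follow the 2-30 degree range above.
import numpy as np
from itertools import combinations

def classify_sample(proj_ratios, view_dirs, gamma_min=0.5,
                    theta_min=2.0, theta_max=30.0) -> str:
    """proj_ratios[i]: projection ratio of the sample in view i;
    view_dirs[i]: unit numpy vector from the sample toward camera i."""
    good = [i for i, g in enumerate(proj_ratios) if g > gamma_min]
    if len(good) < 2:
        return "incomplete"                 # visible in fewer than two good views
    for i, j in combinations(good, 2):      # any good pair with stable stereo?
        cos_a = np.clip(view_dirs[i] @ view_dirs[j], -1.0, 1.0)
        angle = np.degrees(np.arccos(cos_a))
        if theta_min <= angle <= theta_max:
            return "complete"
    return "incomplete"
```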
A next optimal viewpoint solving substep S152 corresponding to an incomplete or poor quality region of reconstruction:
the next optimal viewpoint is the next shooting pose of the camera. When the camera shoots at this pose, it is necessary to simultaneously satisfy the requirements of improving the existing reconstruction quality and expanding new parts in the environment.
The energy to be maximized consists of terms corresponding to the different requirements: an occlusion term, a focus term, a parallax term and an incidence term. Specifically, the 10 nearest "incomplete" sampling points are selected and their corresponding Poisson-disc sampling areas are recorded as a set $\mathcal{S}$; let $\mathcal{V}$ be the set of vertices of the faces in $\mathcal{S}$, whose penalty parameter is recorded as $w$. To obtain forward energy, the faces of $\mathcal{S}$ are selected to enhance all terms in the energy function.
The occlusion term: to favor the visibility of this area from the pose P of the camera, i.e., the faces in $\mathcal{S}$ are not occluded (the defining expression appears only as an image in the original document). In practice, the faces in $\mathcal{S}$ commonly lie on the boundary of the reconstructed surface, and the occlusion term generates new viewpoints in the area around $\mathcal{S}$.
The focusing term: is used for showing
Figure BDA0002352462000000124
The surrounding area tends to be projected in the center of the image. By capturing the area around the center at the same time as the tendency of projection around the center, it can be assumed that the reconstruction quality of the surrounding area needs to be improved since the plane with the worst reconstruction is selected. For vertices centred on the image, taking into account the displacement relative to the centre of the image
Figure BDA0002352462000000125
Two-dimensional gaussian distribution weighting is performed. We formalize this term of the energy function as follows:
Figure BDA0002352462000000126
wherein v isx PAnd vy PIs the coordinate of v projected onto camera P, (x)0,y0) Is the center of the image. Order to
Figure BDA0002352462000000127
W and H are the width and height of the image, and considering the case where v is projected out of the image, the above equation is rewritten as:
Figure BDA0002352462000000128
The parallax term: to represent a tendency to capture images of the surface with greater parallax than the other images. This term is based on the B/H (base-to-height) constraint of aerial photogrammetry, where B is the baseline, i.e., the distance between the two poses, H represents the distance from the pose to be solved to v, i.e., the height of the unmanned aerial vehicle, and δ is a fixed threshold. The parallax term of camera P is calculated relative to the other cameras C (the closed-form expression appears only as an image in the original document).
The incident term is as follows: indicating that the camera tends to view the surface of interest at an incident angle of 40 deg. to 70 deg.. The angle is selected based on experience with photogrammetric literature studies. The smaller the angle of incidence, the more distorted the image-captured information. Since the angular quantities are processed, they are expressed as von mises distributions in the direction statistics:
Figure BDA0002352462000000131
where x represents the angle between the normal vector of the surface and the ray from the surface centroid to the camera and μ represents the maximum likelihood
Figure BDA0002352462000000132
κ denotes the concentration of the distribution. I is0Is a zero order bessel function, expressed as:
Figure BDA0002352462000000133
Next optimal viewpoint energy: combining the previous four terms, the energy for a single vertex is
$$E_{nbv}(v, P) = \mu_1 E_{occ} + \mu_2 E_{focus} + \mu_3 E_{parallax} + \mu_4 E_{incidence},$$
where $\mu_1$, $\mu_2$, $\mu_3$ and $\mu_4$ are the weights of the different terms and need to be tuned experimentally. The energy over all vertices $\mathcal{V}$ is then defined as
$$E(P) = \sum_{v \in \mathcal{V}} E_{nbv}(v, P).$$
Although energy is usually treated as a cost, so that the optimal viewpoint would have the lowest energy, in this approach the terms composing the energy grow larger for better camera poses. Therefore, to solve for the next optimal viewpoint, the next-viewpoint energy function is minimized by taking the negative of the sum.
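A sketch of this selection loop, with the four term functions passed in as pluggable callables; their implementations and the weights are assumptions.

```python
# Sketch of next-optimal-viewpoint selection. The term functions e_occ, etc.
# are assumed callables scoring a candidate pose; minimizing the negated sum,
# as the text describes, is equivalent to maximizing the summed energy.
def next_best_view(candidates, e_occ, e_focus, e_parallax, e_incidence,
                   mu=(1.0, 1.0, 1.0, 1.0)):
    def energy(pose):
        terms = (e_occ(pose), e_focus(pose), e_parallax(pose), e_incidence(pose))
        return sum(m * t for m, t in zip(mu, terms))
    # minimize the negative of the summed energy over the candidate poses
    return min(candidates, key=lambda pose: -energy(pose))
```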
Collision-free path generation step S160: and planning to generate a collision-free path according to the movement from the current pose of the unmanned aerial vehicle to the next optimal viewpoint, wherein the collision-free path is a second air route, and the unmanned aerial vehicle flies along the second air route.
Specifically, the second route from the current pose of the unmanned aerial vehicle to the next optimal viewpoint is planned within the second three-dimensional map, and dynamic path planning is performed by combining real-time images from the camera carried by the unmanned aerial vehicle.
Furthermore, the rapidly-exploring random tree (RRT) path planning method is adopted to solve the path planning problem in the high-dimensional space.
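A minimal RRT sketch for this step; the collision check against the second three-dimensional map is abstracted behind an is_free() callable, and the step size and iteration budget are illustrative assumptions.

```python
# Minimal RRT sketch (assumptions noted above): grow a tree from the current
# pose toward random samples until the next optimal viewpoint is reached.
import random
import numpy as np

def rrt(start, goal, is_free, bounds, step=2.0, iters=5000, goal_tol=2.0):
    nodes = [np.asarray(start, float)]
    parent = {0: None}
    for _ in range(iters):
        sample = np.array([random.uniform(lo, hi) for lo, hi in bounds])
        near = min(range(len(nodes)),
                   key=lambda i: np.linalg.norm(nodes[i] - sample))
        direction = sample - nodes[near]
        new = nodes[near] + step * direction / (np.linalg.norm(direction) + 1e-9)
        if not is_free(nodes[near], new):   # segment must avoid map obstacles
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = near
        if np.linalg.norm(new - np.asarray(goal, float)) < goal_tol:
            path, i = [], len(nodes) - 1    # walk back to the root
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None                             # no collision-free path found
```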
The second three-dimensional map updating step S170: and collecting an image at the next optimal viewpoint, recovering a dense patch by a fast dense reconstruction method, and adding the dense patch into the second three-dimensional map to obtain an updated second three-dimensional map.
A third three-dimensional map construction step S180: and repeating the steps S150-S170 until a second three-dimensional map with complete area is constructed, namely a third three-dimensional map.
The invention further discloses an unmanned aerial vehicle-based urban three-dimensional mapping system comprising an unmanned aerial vehicle and a ground workstation. The unmanned aerial vehicle comprises an unmanned aerial vehicle body, a gimbal, an optical camera, a navigation positioning module and an inertial measurement unit, and has an automatic return-and-recharge function. The unmanned aerial vehicle carries the gimbal, the camera is mounted on the gimbal, and the above urban three-dimensional mapping method runs in the ground workstation.
For example, the above functions are realized by several modules: an image acquisition module, a simultaneous localization and mapping module, a rapid three-dimensional dense reconstruction module, a viewpoint planning module, a motion control and planning module, an interaction module and a communication module. The user only needs to mark the region of interest on a two-dimensional plane map of the survey area in the software; the system then performs autonomous three-dimensional mapping of the designated region and automatically generates a three-dimensional map with high precision and good completeness, without additional manual intervention.
Therefore, the invention can perform three-dimensional mapping autonomously, construct the map in real time, and finally form a complete three-dimensional map with texture information. Compared with the prior art, it can automatically find and complete incomplete areas and form the three-dimensional map in real time.
It will be apparent to those skilled in the art that the various units or steps of the invention described above may be implemented with a general-purpose computing device: they may be centralized on a single computing device, or implemented as program code executable by a computing device so that they can be stored in a memory device and executed by the computing device; alternatively, they may be fabricated as individual integrated-circuit modules, or multiple units or steps among them may be fabricated as a single integrated-circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
While the invention has been described in further detail with reference to specific preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (7)

1. A city three-dimensional mapping method based on an unmanned aerial vehicle,
the unmanned aerial vehicle comprises an unmanned aerial vehicle body, a gimbal, an image acquisition module, a navigation positioning module and an inertial measurement unit;
the method is characterized by comprising the following steps:
a first route planning step S110: receiving a marking signal from a user, planning a first air route for the unmanned aerial vehicle, the first air route being a constant-height flight route, and transmitting the first air route to the unmanned aerial vehicle;
an image acquisition step S120: the unmanned aerial vehicle flies along the first air route, captures image signals through the image acquisition module, and records the navigation positioning and inertial measurement data corresponding to each captured image;
a first three-dimensional map generation step S130: calculating the current position and attitude of the unmanned aerial vehicle by a simultaneous localization and mapping technique, fusing the navigation positioning data and inertial measurement data, and constructing a first three-dimensional map, the first three-dimensional map being a sparse point cloud;
Second three-dimensional map generation step S140: recovering a dense patch of the first three-dimensional map in a rapid dense reconstruction mode to form a second three-dimensional map;
next optimal viewpoint calculating step S150: calculating the incomplete and poor-quality areas of the second three-dimensional map, and calculating a viewpoint from which those areas can be photographed, called the next optimal viewpoint, the height of the next optimal viewpoint no longer being limited to the height of the first route;
collision-free path generation step S160: planning and generating a collision-free path according to the movement from the current pose of the unmanned aerial vehicle to the next optimal viewpoint, wherein the collision-free path is a second route from the current pose of the unmanned aerial vehicle to the next optimal viewpoint, and enabling the unmanned aerial vehicle to fly along the second route;
the second three-dimensional map updating step S170: collecting an image at the next optimal viewpoint, recovering a dense patch by a fast dense reconstruction method, and adding the dense patch into the second three-dimensional map to obtain an updated second three-dimensional map;
a third three-dimensional map construction step S180: repeating the steps S150-S170 until a second three-dimensional map with complete area is constructed, namely a third three-dimensional map is obtained;
the second three-dimensional map generation step S140 specifically comprises: dividing each image used to form the first three-dimensional map into polygonal sub-regions by simple linear iterative clustering superpixel segmentation, and splitting each polygon into triangles by Delaunay triangulation; selecting the triangles into which points of the first three-dimensional map project, each such triangle being regarded as a plane, and performing depth estimation on these planes, the process forming a linear system that is solved, thereby performing rapid dense reconstruction;
The next optimal viewpoint calculating step S150 specifically includes:
incomplete and poor area calculation step S151:
calculating a Poisson signal value of the second three-dimensional map, and further evaluating sampling points where the Poisson signal value is poor, wherein the small disc area corresponding to a sampling point p is $A_p$, $a_p(v)$ is the projection of the sampling point p in image v, and the projection ratio of the sampling point is defined as $\gamma_p(v) = a_p(v)/A_p$; for each sampling point, a viewpoint with a projection ratio larger than a set threshold is recorded as a good viewpoint; if a sampling point is visible in fewer than two good viewpoints, it is recorded as "incomplete"; finally, if the included angle between two good viewpoints is within the range of 2° to 30°, the point is marked "complete", otherwise "incomplete", thereby obtaining the incomplete and poor-quality areas;
a next optimal viewpoint solving substep S152 corresponding to a region where reconstruction is incomplete or poor in quality:
respectively calculating an occlusion term, a focus term, a parallax term and an incidence term; specifically, selecting the 10 nearest "incomplete" sampling points and recording the corresponding Poisson-disc sampling areas as a set $\mathcal{S}$; letting $\mathcal{V}$ be the set of vertices of the faces in $\mathcal{S}$, whose penalty parameter is recorded as $w$; to obtain forward energy, selecting the faces of $\mathcal{S}$ to enhance all terms in the energy function,
the occlusion term: to favor the visibility of this area from the pose P of the camera (the defining expression appears only as an image in the original document);
the focus term: expressing that the area around $\mathcal{V}$ tends to be projected at the center of the image, the vertices of $\mathcal{V}$ being weighted by a two-dimensional Gaussian centered on the image, specifically
$$E_{focus}(P) = \sum_{v \in \mathcal{V}} \exp\!\left(-\frac{(v_x^P - x_0)^2}{2\sigma_x^2} - \frac{(v_y^P - y_0)^2}{2\sigma_y^2}\right),$$
wherein $v_x^P$ and $v_y^P$ are the coordinates of v projected onto camera P, $(x_0, y_0)$ is the center of the image, and $\sigma_x$ and $\sigma_y$ are set from W and H, the width and height of the image;
the parallax term: used to represent a tendency to capture images of the surface with greater parallax than the other images, wherein B is the baseline, i.e., the distance between the two poses, H represents the distance from the pose to be solved to v, i.e., the height of the unmanned aerial vehicle, and δ is a fixed threshold (the closed-form expression appears only as an image in the original document);
the incidence term: indicating that the camera tends to view the surface of interest at an incidence angle of 40° to 70°, given by the von Mises density
$$E_{incidence}(P) = \frac{e^{\kappa \cos(x - \mu)}}{2\pi I_0(\kappa)},$$
wherein x represents the angle between the normal vector of the surface and the ray from the surface centroid to the camera, μ represents the mode (the preferred incidence angle), and κ denotes the concentration of the distribution; $I_0$ is the zero-order modified Bessel function, expressed as
$$I_0(\kappa) = \sum_{m=0}^{\infty} \frac{(\kappa/2)^{2m}}{(m!)^2};$$
next optimal viewpoint energy: combining the occlusion term, the focus term, the parallax term and the incidence term, the energy for a single vertex is
$$E_{nbv}(v, P) = \mu_1 E_{occ} + \mu_2 E_{focus} + \mu_3 E_{parallax} + \mu_4 E_{incidence},$$
wherein $\mu_1$, $\mu_2$, $\mu_3$ and $\mu_4$ are weights for the different terms, and the energy over all vertices $\mathcal{V}$ is then defined as
$$E(P) = \sum_{v \in \mathcal{V}} E_{nbv}(v, P);$$
minimizing the next-viewpoint energy function, wherein, since larger term values correspond to better poses, the solution is obtained by minimizing the negative of the sum;
the collision-free path generating step S160 specifically comprising: performing path planning in the second three-dimensional map for a second route from the current pose of the unmanned aerial vehicle to the next optimal viewpoint, and performing dynamic path planning by combining real-time images from the camera carried by the unmanned aerial vehicle.
2. The urban three-dimensional mapping method according to claim 1, characterized in that:
in the first route planning step S110, the first route follows a zigzag polyline.
3. The urban three-dimensional mapping method according to claim 1, characterized in that:
in the first three-dimensional map generating step S130, the simultaneous localization and mapping technique is ORB-SLAM.
4. The urban three-dimensional mapping method according to claim 1, characterized in that:
the second three-dimensional map generating step S140 specifically includes:
the three-dimensional reconstruction energy function construction sub-step S141 of a single triangle in a single image:
for a selected triangle {v_1, v_2, v_3} in the image, three-dimensional reconstruction of the triangle is equivalent to calculating the depths at v_1, v_2 and v_3; assuming p is a point of the first three-dimensional map whose projection on the current image falls inside the triangle {v_1, v_2, v_3}, the depth of the point p is expressed as $d_p = \alpha_1 d_1 + \alpha_2 d_2 + \alpha_3 d_3$, wherein $d_p$ and $d_k$ ($1 \le k \le 3$) are the depths of p and $v_k$ respectively, and the weights $(\alpha_1, \alpha_2, \alpha_3)$ are the barycentric coordinates of p in the triangle {v_1, v_2, v_3}; the triangle is reconstructed by solving for $d_k$ ($1 \le k \le 3$), and consistency with the first three-dimensional map points is ensured by minimizing the following three-dimensional reconstruction energy function:
$$E_{slam}(i) = (d_p - \alpha_1 d_1 - \alpha_2 d_2 - \alpha_3 d_3)^2$$
wherein i represents an index of the input image;
the sub-step S142 of constructing the local continuity constraint energy function of the single image three-dimensional reconstruction:
parameterizing two adjacent triangles, the second triangle being {v_4, v_5, v_6}; local continuity is achieved by minimizing the function:
$$E_{continuity}(i) = w_c\left((d_3 - d_5)^2 + (d_2 - d_6)^2\right)$$
wherein the weight $w_c$ controls the strength of the smoothing according to the difference between the two triangles;
multi-view constraint energy function construction sub-step S143:
for adjacent viewpoints i and j, assuming the projection of a triangle {v'_1, v'_2, v'_3} in viewpoint i covers the vertex $v_1$ and the depths of its vertices are $d'_1$, $d'_2$ and $d'_3$, the depth of $v_1$ should be consistent with the depth interpolated from $d'_1$, $d'_2$ and $d'_3$, i.e., the following multi-view constraint energy function should be minimized:
$$E_{fusion}(i, j) = (d_1 - \gamma_1 d'_1 - \gamma_2 d'_2 - \gamma_3 d'_3)^2$$
wherein $(\gamma_1, \gamma_2, \gamma_3)$ are the barycentric coordinates of $v_1$ in the triangle {v'_1, v'_2, v'_3};
total energy function three-dimensional reconstruction substep S144:
establishing a total energy function combining the three terms,
$$E = \sum_i \Big( E_{slam}(i) + E_{continuity}(i) + \sum_j E_{fusion}(i, j) \Big),$$
and expressing the total energy function as a matrix form to carry out linear solution so as to obtain a second three-dimensional map.
5. The urban three-dimensional mapping method according to claim 1, characterized in that:
in the collision-free path generation step S160, a path plan is solved by using the rapidly-exploring random tree (RRT) path planning method.
6. The urban three-dimensional mapping method according to claim 1, characterized in that:
the fast dense reconstruction in the second three-dimensional map updating step S170 is the same as the fast dense reconstruction in the second three-dimensional map generating step S140.
7. An unmanned aerial vehicle-based urban three-dimensional mapping system, the system comprising an unmanned aerial vehicle and a ground workstation, wherein the unmanned aerial vehicle comprises an unmanned aerial vehicle body, a gimbal, an optical camera, a navigation positioning module and an inertial measurement unit, characterized in that:
the ground workstation is used for operating the urban three-dimensional mapping method of any one of claims 1-6.
CN201911421322.2A 2019-12-31 2019-12-31 Unmanned aerial vehicle-based urban three-dimensional mapping method and system Active CN111141264B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911421322.2A CN111141264B (en) 2019-12-31 2019-12-31 Unmanned aerial vehicle-based urban three-dimensional mapping method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911421322.2A CN111141264B (en) 2019-12-31 2019-12-31 Unmanned aerial vehicle-based urban three-dimensional mapping method and system

Publications (2)

Publication Number Publication Date
CN111141264A CN111141264A (en) 2020-05-12
CN111141264B true CN111141264B (en) 2022-06-28

Family

ID=70522942

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911421322.2A Active CN111141264B (en) 2019-12-31 2019-12-31 Unmanned aerial vehicle-based urban three-dimensional mapping method and system

Country Status (1)

Country Link
CN (1) CN111141264B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111967339B (en) * 2020-07-27 2022-09-27 北京航空航天大学 Method and device for planning unmanned aerial vehicle path
CN114217618B (en) * 2021-12-14 2024-04-16 重庆富沛和科技有限公司 Method for automatically cruising in selected range in three-dimensional map
CN114485568B (en) * 2021-12-31 2023-06-13 广州极飞科技股份有限公司 Mapping method and device, computer equipment and storage medium
TWI809727B (en) * 2022-02-22 2023-07-21 國立陽明交通大學 Method for searching a path by using a three-dimensional reconstructed map
CN116380023B (en) * 2023-05-16 2023-07-25 深圳市长勘勘察设计有限公司 Land mapping system based on remote sensing technology
CN117288168B (en) * 2023-11-24 2024-01-30 山东中宇航空科技发展有限公司 Unmanned aerial vehicle city building system of taking photo by plane of low-power consumption

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107146200A (en) * 2017-04-25 2017-09-08 广西大学 A kind of unmanned aerial vehicle remote sensing image split-joint method based on image mosaic quality evaluation
CN107194989A (en) * 2017-05-16 2017-09-22 交通运输部公路科学研究所 The scene of a traffic accident three-dimensional reconstruction system and method taken photo by plane based on unmanned plane aircraft
CN107749079A (en) * 2017-09-25 2018-03-02 北京航空航天大学 A kind of quality evaluation of point cloud and unmanned plane method for planning track towards unmanned plane scan rebuilding
CN110097553A (en) * 2019-04-10 2019-08-06 东南大学 The semanteme for building figure and three-dimensional semantic segmentation based on instant positioning builds drawing system
CN110096360A (en) * 2019-04-17 2019-08-06 杭州临安大地规划勘测有限公司 A kind of real-time geographic mapping acquisition system
CN110148219A (en) * 2019-04-03 2019-08-20 南昌云虫科技有限公司 The creation method of 3D model
CN110211223A (en) * 2019-05-28 2019-09-06 哈工大新材料智能装备技术研究院(招远)有限公司 A kind of increment type multiview three-dimensional method for reconstructing
CN110379022A (en) * 2019-07-22 2019-10-25 西安因诺航空科技有限公司 Point cloud and grid method of partition in a kind of landform three-dimensional reconstruction system of taking photo by plane

Also Published As

Publication number Publication date
CN111141264A (en) 2020-05-12

Similar Documents

Publication Publication Date Title
CN111141264B (en) Unmanned aerial vehicle-based urban three-dimensional mapping method and system
US11461912B2 (en) Gaussian mixture models for temporal depth fusion
CN111275750B (en) Indoor space panoramic image generation method based on multi-sensor fusion
CN112132972B (en) Three-dimensional reconstruction method and system for fusing laser and image data
EP3242275B1 (en) Using photo collections for three dimensional modeling
US11748907B2 (en) Object pose estimation in visual data
US11783443B2 (en) Extraction of standardized images from a single view or multi-view capture
Pylvanainen et al. Automatic alignment and multi-view segmentation of street view data using 3d shape priors
CN108519102A (en) A kind of binocular vision speedometer calculation method based on reprojection
CN110176032A (en) A kind of three-dimensional rebuilding method and device
CN109900274B (en) Image matching method and system
CN110132242B (en) Triangularization method for multi-camera instant positioning and map construction and moving body thereof
Li et al. Dense surface reconstruction from monocular vision and LiDAR
Alsadik Guided close range photogrammetry for 3D modelling of cultural heritage sites
CN115855060A (en) Geometric primitive guided route planning method and device
Wang et al. Terrainfusion: Real-time digital surface model reconstruction based on monocular slam
Saxena et al. 3-d reconstruction from sparse views using monocular vision
CN111340942A (en) Three-dimensional reconstruction system based on unmanned aerial vehicle and method thereof
Braun et al. Visual terrain traversability estimation using a combined slope/elevation model
Buck et al. Capturing uncertainty in monocular depth estimation: Towards fuzzy voxel maps
CN109785421B (en) Texture mapping method and system based on air-ground image combination
CN113129422A (en) Three-dimensional model construction method and device, storage medium and computer equipment
Cigla et al. Gaussian mixture models for temporal depth fusion
Panek et al. Visual localization using imperfect 3d models from the internet
Roth et al. Reduction of the fronto-parallel bias for wide-baseline semi-global matching

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant