CN114581603A - Unmanned aerial vehicle modeling method and device - Google Patents

Unmanned aerial vehicle modeling method and device

Info

Publication number
CN114581603A
Authority
CN
China
Prior art keywords
unmanned aerial vehicle
modeling data
waypoint
modeling
Prior art date
Legal status
Pending
Application number
CN202210161276.2A
Other languages
Chinese (zh)
Inventor
程晔
Current Assignee
Zhejiang Huafei Intelligent Technology Co ltd
Original Assignee
Zhejiang Huafei Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Huafei Intelligent Technology Co., Ltd.
Priority to CN202210161276.2A
Publication of CN114581603A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/10 Geometric CAD
    • G06F 30/20 Design optimisation, verification or simulation

Abstract

The application relates to the technical field of unmanned aerial vehicles, and in particular to an unmanned aerial vehicle modeling method and device. In the method, the unmanned aerial vehicle acquires modeling data of its current position, optimizes the modeling data of the current position according to the modeling data of historical waypoint positions, and, when the current position is determined to belong to a waypoint position, sends the optimized modeling data of the current position to the base station, so that the base station combines the modeling data of a plurality of waypoint positions to construct a three-dimensional model of the area to be modeled. The modeling data are obtained by fusing image information and pose information of the area to be modeled; compared with modeling from image information alone, the method does not need to match feature points across images, which reduces image-processing time and effectively improves the real-time performance of unmanned aerial vehicle online modeling.

Description

Unmanned aerial vehicle modeling method and device
Technical Field
The application relates to the technical field of unmanned aerial vehicles, in particular to an unmanned aerial vehicle modeling method and device.
Background
Unmanned aerial vehicles are widely used in the fields of city management, emergency rescue and relief, landform survey and the like, and in these fields, the demand for rapid regional modeling using unmanned aerial vehicles is increasing.
Rapid regional modeling is an online modeling mode. In this mode, an unmanned aerial vehicle executes shooting tasks over a designated region, the captured image data are transmitted back to a ground base station after each shooting task is finished, and the ground base station performs incremental modeling in the order in which the image data are received, so that by the time the unmanned aerial vehicle has finished all shooting tasks, the base station has correspondingly established a three-dimensional model of the designated region. This modeling mode quickly yields the three-dimensional physical information of the designated area, and its rapidity and real-time performance are of great significance for various application scenarios in the emergency field. However, in existing unmanned aerial vehicle modeling schemes, the position information of the camera is mostly provided directly by GPS, while the attitude information of the camera is obtained by the ground base station by solving and optimizing the image information through Structure From Motion (SFM), which is time-consuming.
Based on this, there is a need for an unmanned aerial vehicle modeling method for improving the real-time performance of unmanned aerial vehicle online modeling.
Disclosure of Invention
The application provides an unmanned aerial vehicle modeling method and device, which are used for improving the real-time performance of unmanned aerial vehicle online modeling.
In a first aspect, the application provides an unmanned aerial vehicle modeling method. In the method, an unmanned aerial vehicle shoots an area to be modeled along a preset route on which a plurality of waypoint positions are arranged, capturing images between any two adjacent waypoint positions at a preset frame rate. After the unmanned aerial vehicle obtains the modeling data of its current position, it optimizes the modeling data of the current position according to the modeling data of historical waypoint positions; then, when the unmanned aerial vehicle determines that the current position belongs to a waypoint position, it sends the optimized modeling data of the current position to the base station, so that the base station combines the modeling data of a plurality of waypoint positions to construct a three-dimensional model of the area to be modeled. The modeling data comprise image information and pose information obtained by the unmanned aerial vehicle shooting the area to be modeled at the current position.
By this means, the unmanned aerial vehicle acquires both image information and pose information of the area to be modeled and obtains the modeling data by fusion calculation of the two. In addition, because the modeling data of the current position are optimized against historical modeling data, the accuracy of the modeling data is effectively improved, which in turn enhances the accuracy of the three-dimensional model.
In a possible implementation, the preset route comprises a plurality of sub-route segments. For two sub-route segments among them that have a shooting overlap area, before the unmanned aerial vehicle sends the optimized modeling data of the current position to the base station, it may determine whether it has reached a loop-back waypoint on the second sub-route segment. If so, the modeling data of all waypoint positions on the second sub-route segment before the loop-back waypoint are updated according to the modeling data of each waypoint position on the first sub-route segment, so that when the optimized modeling data of the current position are sent to the base station, the updated modeling data of those waypoint positions are sent as well.
By this means, global optimization can be performed on the modeling data when the unmanned aerial vehicle flies to the loop-back waypoint. The characteristic that adjacent sub-route segments have a shooting overlap area is exploited, so global optimization is possible without adding any other shooting pass, which effectively improves the precision of the modeling data and, in turn, the precision of the model.
In a possible implementation, when the preset route is a zigzag route, the two sub-route segments with the shooting overlap area are two adjacent rows of the zigzag route, and the loop-back waypoint is arranged at the end point of the second sub-route segment.
By setting the preset route as a zigzag route, the upper and lower rows of the zigzag have a shooting overlap area, so the two rows can be approximated as a loop; arranging the loop-back waypoint at the end point of the second sub-route segment means that global optimization can be triggered immediately after the unmanned aerial vehicle has flown the two sub-route segments.
In a possible implementation, before optimizing the modeling data of the current position according to the modeling data of historical waypoint positions, the unmanned aerial vehicle may encode the modeling data to obtain video stream data and transmit the encoded video stream data to the base station, where the video stream data are used by the base station to monitor the aerial photography operation.
By this means, video stream data can be generated while high-quality image data are acquired for modeling, the aerial photography operation of the unmanned aerial vehicle can be monitored, flight faults can be resolved in time, and modeling efficiency is improved.
In a second aspect, the application further provides an unmanned aerial vehicle modeling device, where the unmanned aerial vehicle moves in an area to be modeled according to a preset route, a plurality of waypoint positions are arranged on the preset route, and the unmanned aerial vehicle captures images between any two adjacent waypoint positions at a preset frame rate. The unmanned aerial vehicle modeling device comprises: an acquisition unit, configured to acquire modeling data of the current position, where the modeling data comprise image information and pose information obtained by the unmanned aerial vehicle shooting the area to be modeled at the current position; an optimization unit, configured to optimize the modeling data of the current position according to the modeling data of historical waypoint positions; and a sending unit, configured to send the optimized modeling data of the current position to the base station when the current position is determined to belong to a waypoint position, the base station being configured to combine the modeling data of a plurality of waypoint positions to construct a three-dimensional model of the area to be modeled.
In a possible implementation, the preset route comprises a plurality of sub-route segments, and for two sub-route segments among them that have a shooting overlap area: before the optimized modeling data of the current position are sent to the base station, the optimization unit may further determine whether the unmanned aerial vehicle has reached a loop-back waypoint on the second sub-route segment, and if so, update the modeling data of all waypoint positions on the second sub-route segment before the loop-back waypoint according to the modeling data of each waypoint position on the first sub-route segment; the sending unit may then send the updated modeling data of those waypoint positions to the base station.
In a possible implementation, the preset route is a zigzag route, the two sub-route segments with the shooting overlap area are two adjacent rows of the zigzag route, and the loop-back waypoint is arranged at the end point of the second sub-route segment.
In a possible implementation, the unmanned aerial vehicle modeling device further includes an encoding unit; before the optimization unit optimizes the modeling data of the current position according to the modeling data of historical waypoint positions, the encoding unit may encode the modeling data to obtain video stream data and transmit the video stream data to the base station, where the video stream data are used by the base station to monitor the aerial photography operation of the unmanned aerial vehicle.
In a third aspect, the present application provides a computer-readable storage medium storing a computer program which, when executed, performs the method in any one of the designs of the first aspect.
In a fourth aspect, the present application provides a computing device comprising: a memory for storing program instructions; and the processor is used for calling the program instructions stored in the memory and executing the method in any one of the designs of the first aspect according to the obtained program.
In a fifth aspect, the present application provides a computer program product for implementing the method as designed in any one of the first aspects above when the computer program product is run on a processor.
The advantageous effects of the second aspect to the fifth aspect can be found in any design of the first aspect, and are not described in detail herein.
Drawings
Fig. 1 schematically illustrates an application scenario provided in an embodiment of the present application;
fig. 2 illustrates an exemplary unmanned aerial vehicle modeling flow diagram provided in the industry;
fig. 3 schematically illustrates a modeling flow diagram of an unmanned aerial vehicle according to an embodiment of the present application;
fig. 4 schematically illustrates a layout of a preset route and waypoints provided by an embodiment of the application;
fig. 5 schematically illustrates an unmanned aerial vehicle modeling apparatus provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
Fig. 1 is a schematic diagram of an application scenario provided by an embodiment of the present application. As shown in fig. 1, the scenario includes a building having a spatial structure, an open ground having a planar structure, and an unmanned aerial vehicle flying in the air; two intersecting roads may be present on the open ground. In implementation, the drone may fly into the scene from some location, and end its flight after traversing the buildings and roads present in the scene. The motion trajectory of the unmanned aerial vehicle may be controlled by an operator in real time or realized automatically by a pre-programmed program, which is not specifically limited.
It should be noted that in the embodiment of the present application, the building and the surrounding road area illustrated in fig. 1 may be modeled by an unmanned aerial vehicle to assist city building planning, but this is only an exemplary scenario; the unmanned aerial vehicle modeling scheme of the embodiment of the present application may also be applied to other scenarios. For example, a mountain forest on fire may be modeled in real time by the drone in order to pinpoint the location of the fire and the terrain or elevation there, saving time for emergency rescue actions. For another example, an unmanned aerial vehicle may be used to model the ruins of buildings after an earthquake: the original three-dimensional models of some buildings can no longer be used, and new three-dimensional models need to be obtained in a short time so that a relief plan can be deployed as soon as possible.
Based on the application scenario illustrated in fig. 1, fig. 2 schematically illustrates a flow chart of an unmanned aerial vehicle modeling method provided in the industry; as shown in fig. 2, the flow includes:
step 201, the unmanned aerial vehicle executes a flight task according to a pre-planned air route.
The pre-planned route can be drawn up by relevant personnel according to the actual conditions of the area to be modeled; in this case, when the unmanned aerial vehicle executes a flight task, the personnel can operate the unmanned aerial vehicle so that it flies along the pre-planned route. Alternatively, the pre-planned route may be followed automatically by the unmanned aerial vehicle; for example, program code corresponding to the pre-planned route may be written into the unmanned aerial vehicle in advance, so that it flies along the route by itself.
Step 202: the unmanned aerial vehicle executes shooting tasks at preset waypoints.
Illustratively, a plurality of waypoints are arranged on the preset route. When the unmanned aerial vehicle flies along the pre-planned route, it executes a shooting task on reaching each preset waypoint and transmits the captured image data to the ground base station. To ensure modeling accuracy, the image data captured at two adjacent waypoints should have a certain overlap.
Step 203: the unmanned aerial vehicle transmits the captured image data back to the ground base station.
Step 204: the ground base station performs incremental modeling according to the image data to obtain a three-dimensional point cloud model of the area to be modeled.
In a specific implementation, after the ground base station receives a frame of image data, it can extract the feature points of that frame and of the previous frame using the SFM algorithm; because the two frames overlap to a certain extent, their feature points also overlap, and by matching the feature points of the two frames the SFM algorithm can solve the image data for three-dimensional modeling. The ground base station may then take the three-dimensional model obtained from historical modeling and add the solved image data to it through a multi-view stereo (MVS) algorithm, thereby updating the three-dimensional model of the region to be modeled by incremental modeling.
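For illustration only, the feature-matching step that this industry pipeline relies on can be sketched as follows, assuming OpenCV is available; the function name and parameters are illustrative and not part of this application:

```python
import cv2

def match_adjacent_frames(img_prev, img_curr, max_features=2000):
    """Extract feature points from two overlapping frames and match them pairwise."""
    orb = cv2.ORB_create(nfeatures=max_features)
    kp1, des1 = orb.detectAndCompute(img_prev, None)  # keypoints/descriptors, frame k-1
    kp2, des2 = orb.detectAndCompute(img_curr, None)  # keypoints/descriptors, frame k
    # Brute-force Hamming matching compares descriptors one by one; this
    # exhaustive matching is the costly step that the present application
    # avoids by fusing pose information instead.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return kp1, kp2, matches
```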
Although this scheme can obtain the three-dimensional model of the region to be modeled, the SFM algorithm solves the image data by feature-point matching: many feature points must be extracted from each image and matched one by one, which obviously consumes a long time and reduces the real-time performance of three-dimensional modeling.
In addition, the above scheme only has the drone capture image data at specific waypoints for modeling, and monitoring the aerial work requires transmitting video data to the ground base station. In a possible implementation, video stream data could be generated from the captured image data and transmitted to the ground base station; however, because the drone shoots only at waypoints, and the waypoints are scattered, the video stream generated from the images taken at each waypoint is not continuous, and the ground base station obviously cannot accurately monitor the aerial operation of the drone from a discontinuous video stream.
In view of this, an embodiment of the present application provides an unmanned aerial vehicle modeling method, which is used to improve the real-time performance of three-dimensional modeling and acquire continuous video stream data so as to monitor the aerial photography operation condition of an unmanned aerial vehicle.
Based on the application scenario illustrated in fig. 1, fig. 3 exemplarily shows a schematic flowchart of an unmanned aerial vehicle modeling method provided in an embodiment of the present application; as shown in fig. 3, the flow includes the following steps:
step 301, the unmanned aerial vehicle acquires modeling data of a current position, wherein the modeling data comprises image information and pose information obtained by shooting an area to be modeled by the unmanned aerial vehicle at the current position.
In the embodiment of the application, a preset route for the unmanned aerial vehicle can be planned in advance according to the area to be modeled, with waypoints set on it; the unmanned aerial vehicle is then driven to fly in the area to be modeled along the preset route so as to traverse the whole area. The preset route is a route that can traverse the entire area to be modeled, and a waypoint is a relatively critical position point on the preset route, specifically a point that plays a key role in modeling, such as the top of a building, a road intersection, or the line where a building meets the ground. The preset route and the waypoints can be set empirically by those skilled in the art and are not specifically limited.
For example, assuming the scene in fig. 1 is to be modeled, the area to be modeled may include one or more of the building, the open ground, and the road with the intersection in fig. 1. Exemplarily, assuming that the area to be modeled includes all three, then:
fig. 4 shows a schematic layout of a preset route and waypoints provided by an embodiment of the present application; the unmanned aerial vehicle can fly above the scene in fig. 1 along the route shown in fig. 4. As shown in fig. 4, in this example, waypoints A, B, C, D, E, F, G, H and I are set on the preset route, with waypoint A as the starting waypoint and waypoint I as the ending waypoint; in the order the unmanned aerial vehicle flies through them, these waypoints form route segment AC, route segment CD, route segment DF, route segment FG and route segment GI. Route segments AC, DF and GI are pairwise parallel, adjacent ones have a shooting overlap area, and they are the sub-route segments of the route. The distance between adjacent parallel sub-route segments may be set according to the size of the scene in fig. 1; for example, it may be set to 50 m.
Illustratively, with reference to fig. 1 and fig. 4, the starting waypoint A may be placed at a corner where two edges of the scene shown in fig. 1 meet, and the ending waypoint I at the opposite corner on the diagonal. Over the whole preset route, the distance between any two adjacent waypoints may be the same or different; preferably, it may be set to 50 m.
In a possible implementation, besides waypoints set at equal distances, some special waypoints may be added at specific locations. In one example, the specific locations are the turning positions of the route: when the preset route is the zigzag route shown in fig. 4, waypoint C and waypoint D lie at turns of the zigzag, so even if the distance between the two turning positions is less than 50 m, these two special waypoints may still be set, making the waypoint layout of the whole preset route more reasonable and comprehensive. In another example, the specific location is a position close to a road intersection; as shown in fig. 1, a waypoint may additionally be set at the position on the preset route closest to the intersection.
Further, to ensure that the image data used for modeling can cover every position of the whole scene to be modeled, a specific waypoint-setting rule requires the distance between waypoints to satisfy: the images captured by the unmanned aerial vehicle at adjacent waypoints have a certain image overlap rate. The value of the overlap rate may be set empirically by those skilled in the art; as an example it may be set to 75%. In that case, the images captured at any two adjacent waypoints both contain the same part of the area to be modeled, and that common part accounts for 75% of each image.
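The relation between the overlap rate and the waypoint spacing can be sketched numerically; the altitude and field-of-view values below are illustrative assumptions rather than values from this application, and a nadir-pointing camera is assumed:

```python
import math

def max_waypoint_spacing(altitude_m, fov_deg, overlap=0.75):
    """Largest waypoint spacing that preserves the required along-track overlap."""
    # Ground footprint of one image along the flight direction.
    footprint = 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)
    return footprint * (1.0 - overlap)

# Example: at 100 m altitude with a 60-degree along-track field of view,
# the spacing must stay below roughly 28.9 m to keep 75% overlap.
print(round(max_waypoint_spacing(100.0, 60.0), 1))
```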
Further, when the unmanned aerial vehicle flies along the preset route in the area to be modeled, it can capture images at a preset frame rate, where the preset frame rate must satisfy the condition that at least one frame is captured between any two adjacent waypoint positions. That is to say, the unmanned aerial vehicle captures images not only at the waypoint positions but also at positions between two waypoints, so more comprehensive image data can be acquired. In a specific example, the preset frame rate may be set to 25 frames/second or more. When it is set to 25 frames/second, the unmanned aerial vehicle captures images along the preset route at that rate; after each capture, the pose data of the unmanned aerial vehicle are acquired through a pose sensor arranged on it, and the captured image data and the pose data are fused as the modeling data of the current position.
In a possible implementation, an inertial measurement unit (IMU) and a vision sensor are arranged on the drone. The vision sensor may be a complementary metal-oxide-semiconductor (CMOS) image sensor, and the IMU and the CMOS image sensor may be connected to the same processor, in a wired or wireless manner. The CMOS image sensor is a typical solid-state imaging sensor and generally comprises an image-sensing cell array, a row driver, a column driver, timing control logic, an AD converter, a data-bus output interface and a control interface, usually integrated on the same silicon chip; its operation can generally be divided into reset, photoelectric conversion, integration and readout. The IMU is a device that measures the three-axis attitude angles (or angular velocities) and the acceleration of an object. In general, an IMU comprises three single-axis accelerometers and three single-axis gyroscopes: the accelerometers detect acceleration signals of the object on three independent axes of the carrier coordinate system, and the gyroscopes detect angular-velocity signals of the carrier relative to the navigation coordinate system; from the measured angular velocity and acceleration in three-dimensional space, the attitude of the object can be solved. Further, the processor may be a system on chip (SoC).
In a specific implementation, during the flight of the unmanned aerial vehicle, the CMOS image sensor acquires image information and reports it to the processor at the preset frame rate, while the IMU, referenced to the frame rate of the CMOS image sensor, acquires the pose information of the unmanned aerial vehicle at the same moment and reports it to the processor; after receiving the corresponding pose information and image information, the processor can fuse them into modeling data usable for modeling.
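A minimal sketch of this pairing step is given below; the sensor interfaces (read_frame, read_pose) and the field layout are hypothetical stand-ins, not interfaces defined by this application:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ModelingData:
    timestamp: float      # capture time in seconds
    image: np.ndarray     # frame reported by the CMOS image sensor
    position: np.ndarray  # (x, y, z) position estimate at capture time
    attitude: np.ndarray  # (roll, pitch, yaw) in radians, from the IMU

def capture_step(camera, imu, clock) -> ModelingData:
    """One iteration of the 25 frames/second loop: grab a frame, read the
    pose at the same timestamp, and fuse both into one modeling-data record."""
    t = clock.now()
    frame = camera.read_frame()  # hypothetical camera interface
    pos, att = imu.read_pose(t)  # hypothetical IMU interface
    return ModelingData(timestamp=t, image=frame, position=pos, attitude=att)
```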
In the embodiment of the application, considering that the IMU is disturbed by environmental noise, the pose information it acquires cannot be used directly as the pose of the camera, while pure image data acquired by the CMOS image sensor cannot provide absolute position information. Therefore, to obtain accurate pose information of the unmanned aerial vehicle, a visual-inertial odometry (VIO) algorithm is needed to fuse the data collected by the IMU with the data collected by the CMOS image sensor. Illustratively, the VIO may be of the filtering class or the optimization class. A filtering-class VIO fuses the IMU data and the CMOS image data using a Kalman filter. An optimization-class VIO optimizes the IMU data and the CMOS image data using bundle adjustment (BA), and may specifically include the monocular visual-inertial system (VINS-Mono), ORB-SLAM (simultaneous localization and mapping based on Oriented FAST and Rotated BRIEF features), and the like.
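To make the filtering-class idea concrete, here is a deliberately simplified, one-dimensional sketch in which the IMU acceleration drives the prediction and a vision-derived position drives the correction; the noise values are illustrative assumptions, and a real VIO such as VINS-Mono estimates the full six-degree-of-freedom pose:

```python
import numpy as np

def kalman_step(x, P, accel, z_vision, dt, q=0.05, r=0.5):
    """x = [position, velocity]; P = 2x2 covariance.
    Predict with the IMU acceleration, then correct with the vision position."""
    F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity motion model
    B = np.array([0.5 * dt * dt, dt])      # effect of the acceleration input
    x = F @ x + B * accel                  # prediction step (IMU)
    P = F @ P @ F.T + q * np.eye(2)
    H = np.array([[1.0, 0.0]])             # only position is observed
    y = z_vision - (H @ x)[0]              # innovation (vision)
    S = (H @ P @ H.T)[0, 0] + r
    K = (P @ H.T)[:, 0] / S                # Kalman gain
    x = x + K * y                          # correction step
    P = (np.eye(2) - np.outer(K, H)) @ P
    return x, P
```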
Further, optimization-class VIO introduces the concept of key frames, the frames that are decisive for optimization; illustratively, the frame data acquired at waypoint positions may be set as key-frame data. The optimization-class VIO uses local BA to fuse the frame data of the current moment with historical key-frame data to obtain higher-quality data. For example, when the unmanned aerial vehicle flies between waypoint A and waypoint B, the current frame data are those acquired at the current position, and the historical key-frame data are those acquired at waypoint A. The number of past key frames used in the fusion calculation can be set as required. In addition, when a loop is detected, the optimization-class VIO algorithm can also optimize all key frames using global BA to obtain more accurate pose data.
Step 302: the unmanned aerial vehicle optimizes the modeling data of the current position according to the modeling data of historical waypoint positions.
For ease of understanding, the optimization-class VIO fusion process is described below using the scenario in fig. 1. Illustratively, the modeling data of the current position may be optimized using an optimization-class VIO.
In a specific implementation, the unmanned aerial vehicle flies along the preset route and captures images at the preset frame rate while acquiring its pose information through the IMU sensor. After capturing the image of a position, it sends the acquired image data and pose information to the processor, which locally optimizes the currently received image data and pose information using the image data and pose information of historical waypoint positions to obtain optimized modeling data. Further, if the current position is not a waypoint position, the optimized data are simply discarded.
Further exemplarily, when the current position is a waypoint position, the processor may, after obtaining the optimized modeling data, further determine whether the current position is the loop-back waypoint; if so, it updates the modeling data of all waypoint positions on the second sub-route segment before the loop-back waypoint according to the modeling data of each waypoint position on the first sub-route segment, and sends the updated modeling data of those waypoint positions to the base station. This is called global optimization. The loop-back waypoint is a preset waypoint whose position can approximately close a loop with the starting waypoint, and the first sub-route segment is the segment the unmanned aerial vehicle flew before the second sub-route segment.
It should be understood that optimizing the modeling data first and then determining whether the current position is a waypoint is only one optional implementation. In another optional implementation, it may first be determined whether the current position is a waypoint position and, if so, the modeling data are then optimized; if not, the current data are not optimized.
In the embodiment of the application, the optimization-class VIO processes frames efficiently: when processing image data, by combining the IMU data it does not need to extract a large number of feature points from the image or match them one by one, but only needs to track the feature points. This saves the feature-matching time and effectively improves the real-time performance of unmanned aerial vehicle modeling.
Step 303: when the unmanned aerial vehicle determines that the current position belongs to a waypoint position, it sends the optimized modeling data of the current position to the base station.
In the embodiment of the application, the base station can combine the modeling data of a plurality of waypoint positions to construct a three-dimensional model of the area to be modeled. Illustratively, each time the base station receives one piece of modeling data, it performs incremental modeling with it; after receiving globally optimized modeling data, it synchronously updates the three-dimensional model obtained by the previous incremental modeling to obtain a more accurate model.
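A minimal sketch of this base-station bookkeeping is given below; the point-cloud generation step (e.g. MVS or depth fusion) is abstracted as a hypothetical to_points function, and the class name is illustrative:

```python
class IncrementalModel:
    """Keeps one point cloud per waypoint so that incremental additions and
    global updates can both be applied cheaply."""
    def __init__(self):
        self.per_waypoint_points = {}  # waypoint id -> point cloud

    def add_waypoint(self, waypoint_id, modeling_data, to_points):
        # Incremental modeling: derive points from one waypoint's data.
        self.per_waypoint_points[waypoint_id] = to_points(modeling_data)

    def apply_global_update(self, updated_waypoints, to_points):
        # After global optimization, re-derive the point clouds of the
        # affected waypoints so the model stays consistent.
        for waypoint_id, modeling_data in updated_waypoints.items():
            self.per_waypoint_points[waypoint_id] = to_points(modeling_data)
```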
The specific steps of the unmanned aerial vehicle modeling described above are detailed below with reference to fig. 3 and fig. 4.
Step one: the unmanned aerial vehicle takes off, flies along the preset route, and arrives at the preset shooting starting point on the route, namely waypoint A.
Step two: while flying along the preset route, the unmanned aerial vehicle captures images at a frame rate of 25 frames/second, simultaneously acquires its pose information with the IMU sensor, and performs local optimization in real time.
For example, on reaching the preset starting waypoint A, the unmanned aerial vehicle can start capturing images at 25 frames/second: it first captures the image of waypoint A, then after 1/25 s captures the first frame, after 2/25 s the second frame, and so on, until the preset shooting end point is reached. Each time an image is captured, the pose data of the unmanned aerial vehicle are acquired through the IMU sensor, and the image data and the pose data together form the modeling data of the current position.
Illustratively, local optimization may take one of three forms:
in the first mode, optimization is performed once every time one frame of image data is captured. As shown in fig. 3, in the process of flying from waypoint a to waypoint B, modeling data is acquired at waypoint a first, and since there is no other historical waypoint before waypoint a, the modeling data of waypoint a is not optimized, and after a first frame of image is taken from a position after waypoint a, the modeling data of the first frame of position is optimized according to the modeling data of historical waypoint a.
In the second mode, optimization is performed once after a specific number of frames has been captured. Continuing with fig. 4, after every five frames of image data are captured, the modeling data of those five frames can be optimized together according to the modeling data of the previous waypoint. In this way, the frequency of optimizing the modeling data is reduced, which effectively reduces processor occupancy.
In the third mode, optimization is performed once every specific time interval; for example, the most recent frame of image data may be optimized every 0.2 seconds. If images are captured at 25 frames per second, then counting from the capture of the first frame, after 0.2 seconds the unmanned aerial vehicle has just captured the fifth frame and reached the fifth-frame position; at this moment the modeling data collected at the fifth-frame position are optimized according to the modeling data of waypoint A. The three trigger policies are sketched below.
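A minimal sketch of the three local-optimization trigger policies, with the optimizer itself abstracted away; the class and mode names are illustrative and not taken from this application:

```python
import time

class LocalOptimizationTrigger:
    def __init__(self, mode="every_frame", n_frames=5, interval_s=0.2):
        self.mode, self.n_frames, self.interval_s = mode, n_frames, interval_s
        self.frame_count = 0
        self.last_time = time.monotonic()

    def should_optimize(self) -> bool:
        """Called once per captured frame; True means the current frame's
        modeling data should be locally optimized now."""
        self.frame_count += 1
        if self.mode == "every_frame":     # mode 1: optimize each frame
            return True
        if self.mode == "every_n":         # mode 2: optimize every n frames
            return self.frame_count % self.n_frames == 0
        if self.mode == "every_interval":  # mode 3: optimize on a fixed period
            now = time.monotonic()
            if now - self.last_time >= self.interval_s:
                self.last_time = now
                return True
            return False
        raise ValueError(f"unknown mode: {self.mode}")
```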
Step three: the unmanned aerial vehicle determines whether the current position belongs to a waypoint position. If yes, step four is executed; if not, the optimized data are discarded.
Illustratively, as shown in fig. 4, during the flight of the drone from waypoint A to waypoint B, assuming the modeling data are optimized in the first mode above: when the drone reaches the position where the second frame is captured, it determines that the current position does not belong to a waypoint and discards the optimized modeling data; when it reaches the position of waypoint B, it determines that the current position belongs to a waypoint position, sends the optimized modeling data to the base station, and at the same time stores them in the sliding window for the next optimization. The sliding window is a flow-control technique; in the embodiment of the present application it is used to store and pass on modeling data.
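A minimal sketch of such a sliding window is given below; the window size is an assumed parameter, as the embodiment above does not fix one:

```python
from collections import deque

class KeyframeWindow:
    """Holds the most recent waypoint (key-frame) modeling data so the next
    local optimization can be computed against them."""
    def __init__(self, max_keyframes=10):
        self.window = deque(maxlen=max_keyframes)  # oldest entries drop out automatically

    def add_waypoint_data(self, modeling_data):
        self.window.append(modeling_data)

    def history(self):
        # Historical key-frame data against which the current frame is optimized.
        return list(self.window)
```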
Step four: the unmanned aerial vehicle judges whether the current waypoint is the loop-back waypoint. If so, it updates the modeling data of all waypoint positions on the current sub-route segment before the loop-back waypoint according to the modeling data of each waypoint position on the previous sub-route segment, and then executes step five; otherwise it executes step five directly.
In general, when performing aerial photography, the unmanned aerial vehicle scans the region to be modeled along a zigzag preset route such as the one shown in fig. 4. The zigzag route comprises a plurality of sub-route segments, and a shooting overlap area exists between adjacent sub-route segments. Illustratively, there is a 75% overlap area between sub-route segment AC and sub-route segment DF, and their image features have strong similarity, so the two segments can be approximated as a loop, and waypoint F is set as the loop-back waypoint. According to the flight trajectory of the unmanned aerial vehicle, sub-route segment AC is the first sub-route segment and sub-route segment DF is the second sub-route segment.
Illustratively, as shown in fig. 4, when the drone reaches the loop-back waypoint F, the modeling data of waypoint D and waypoint E are updated according to the modeling data of waypoint A, waypoint B and waypoint C on the first sub-route segment AC; for example, the modeling data of the two rows may be optimized using global BA to update them. When the drone has not reached the loop-back waypoint F, the modeling data do not need to be globally optimized.
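Global BA itself is too large for a short sketch. As a toy stand-in for the loop-closure update, the snippet below distributes the drift detected when the loop closes linearly back along the waypoints of the second segment; this is a common first-order illustration, not the method of this application:

```python
import numpy as np

def distribute_drift(second_seg_positions, drift_at_loop):
    """second_seg_positions: (N, 3) estimated waypoint positions on the second
    sub-route segment, in flight order; drift_at_loop: (3,) position error
    detected at the loop-back waypoint. The correction grows linearly from
    zero at the first waypoint to the full drift at the loop-back waypoint."""
    n = len(second_seg_positions)
    weights = np.linspace(0.0, 1.0, n)
    return second_seg_positions - weights[:, None] * drift_at_loop
```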
Step five: the unmanned aerial vehicle sends the optimized or updated modeling data to the base station.
Step six: the base station combines the modeling data of the plurality of waypoint positions to construct a three-dimensional model of the area to be modeled.
Continuing with fig. 4, when the base station receives the modeling data obtained by the drone at waypoint B, it combines them with the modeling data obtained at waypoint A to construct the three-dimensional model of the area between waypoint A and waypoint B. When the drone flies to waypoint F, it performs global optimization on the modeling data using global BA and sends the updated modeling data to the base station. After receiving the globally optimized modeling data, the base station synchronously updates the three-dimensional model obtained by incremental modeling so as to obtain a more accurate model.
By this means, the unmanned aerial vehicle acquires the image information and pose information of the area to be modeled and obtains the modeling data through fusion calculation of the two, without needing to match feature points of the image information; this shortens image-processing time and effectively improves the real-time performance of unmanned aerial vehicle online modeling. In addition, because the modeling data of the current position are optimized against historical modeling data, the accuracy of the modeling data is effectively improved, which enhances the accuracy of the three-dimensional model.
In a possible implementation, the camera of the drone may generate video stream data while acquiring the image data required for modeling: after each piece of image data is acquired it is video-encoded, and multiple frames of encoded image data are combined into video stream data. Illustratively, every 25 frames of image data may be combined into a segment of video stream and transmitted to the ground base station in real time. For example, as shown in fig. 4, assuming the drone acquires image data at 25 frames/second, it starts video-encoding from the frame acquired at waypoint A and begins to form the video stream once the next 25 frames have been encoded: the initial segment of the stream is encoded from the first frame through the twenty-fifth frame, the second segment from the second frame through the twenty-sixth frame, and so on, forming the complete video stream. The drone then transmits the encoded video stream to the ground base station, so that an operator at the base station can monitor the aerial photography operation from the video stream.
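A minimal sketch of the encoding side, assuming OpenCV is available; the codec, resolution and output target are illustrative choices (a real link to the base station would more likely stream over RTP or a similar transport than write a local file):

```python
import cv2

def make_stream_writer(path="aerial_monitor.mp4", fps=25.0, size=(1920, 1080)):
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")  # MPEG-4 codec tag
    return cv2.VideoWriter(path, fourcc, fps, size)

writer = make_stream_writer()
# Inside the capture loop, every frame that is fused into modeling data is
# also appended to the video stream, so monitoring costs no extra shooting pass:
#     writer.write(frame_bgr)  # frame_bgr: HxWx3 uint8 image at the chosen size
# writer.release()             # finalize the container when the flight ends
```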
By this means, because the shooting frame rate of the unmanned aerial vehicle is high, continuous video stream data can be generated while high-quality image data are acquired for modeling; the aerial photography operation of the unmanned aerial vehicle can thus be monitored and flight faults resolved in time, improving modeling efficiency.
Based on the same technical concept, the embodiment of the application also provides an unmanned aerial vehicle modeling device, and the unmanned aerial vehicle modeling device can execute the flow of the unmanned aerial vehicle modeling method provided by the embodiment.
Fig. 5 schematically illustrates a structural diagram of an unmanned aerial vehicle modeling apparatus provided in an embodiment of the present application, and as shown in fig. 5, the unmanned aerial vehicle modeling apparatus includes:
the acquiring unit 501 is configured to acquire modeling data of a current position, where the modeling data includes image information and pose information obtained by shooting a region to be modeled by an unmanned aerial vehicle at the current position;
an optimizing unit 502, configured to optimize modeling data of a current position according to modeling data of a historical waypoint position;
a sending unit 503, configured to send the optimized modeling data of the current position to the base station when it is determined that the current position belongs to the waypoint position; the base station is used for combining modeling data of a plurality of waypoint positions to construct a three-dimensional model of the area to be modeled.
In a possible implementation, the preset route comprises a plurality of sub-route segments, and for two sub-route segments among them that have a shooting overlap area: before the optimized modeling data of the current position are sent to the base station, the optimization unit may further determine whether the unmanned aerial vehicle has reached a loop-back waypoint on the second sub-route segment, and if so, update the modeling data of all waypoint positions on the second sub-route segment before the loop-back waypoint according to the modeling data of each waypoint position on the first sub-route segment; the sending unit may then send the updated modeling data of those waypoint positions to the base station.
In a possible implementation, the preset route is a zigzag route, the two sub-route segments with the shooting overlap area are two adjacent rows of the zigzag route, and the loop-back waypoint is arranged at the end point of the second sub-route segment.
In a possible implementation, the unmanned aerial vehicle modeling device further includes an encoding unit; before the optimization unit 502 optimizes the modeling data of the current position according to the modeling data of historical waypoint positions, the encoding unit may encode the modeling data to obtain video stream data and transmit the video stream data to the base station, where the video stream data are used by the base station to monitor the aerial photography operation of the unmanned aerial vehicle.
Based on the same technical concept, an embodiment of the present invention further provides a computing device, including: a memory for storing program instructions;
and a processor, configured to call the program instructions stored in the memory and execute the method illustrated in fig. 3 according to the obtained program.
Based on the same technical concept, embodiments of the present invention also provide a computer-readable storage medium storing a computer program which, when run on a processor, implements the method illustrated in fig. 3.
Based on the same technical concept, embodiments of the present invention also provide a computer program product which, when run on a processor, implements the method illustrated in fig. 3.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. An unmanned aerial vehicle modeling method, wherein the unmanned aerial vehicle moves in an area to be modeled according to a preset route, a plurality of waypoint positions are arranged on the preset route, and the unmanned aerial vehicle captures images between any two adjacent waypoint positions at a preset frame rate; the method comprises the following steps:
the unmanned aerial vehicle acquires modeling data of a current position, wherein the modeling data comprises image information and pose information obtained by shooting the area to be modeled by the unmanned aerial vehicle at the current position;
the unmanned aerial vehicle optimizes the modeling data of the current position according to the modeling data of the historical waypoint position;
when the unmanned aerial vehicle determines that the current position belongs to the waypoint position, the optimized modeling data of the current position is sent to a base station; and the base station is used for constructing a three-dimensional model of the area to be modeled by combining the modeling data of the plurality of waypoint positions.
2. The method of claim 1, wherein the preset route comprises a plurality of sub-route segments, and wherein before sending the optimized modeling data of the current position to a base station, for two sub-route segments of the plurality of sub-route segments having a shooting overlap area, the method further comprises:
when the unmanned aerial vehicle reaches a loop-back waypoint on a second sub-route segment, updating the modeling data of all waypoint positions on the second sub-route segment before the loop-back waypoint according to the modeling data of each waypoint position on a first sub-route segment;
the sending the optimized modeling data of the current position to a base station includes:
and sending the updated modeling data of the waypoint position to the base station.
3. The method of claim 2, wherein the preset route is a "zigzag" route, the two sub-route segments having the shooting overlap area are two adjacent rows of the "zigzag" route, and the loop-back waypoint is disposed at the end point of the second sub-route segment.
4. The method of claim 1, wherein before the unmanned aerial vehicle optimizes the modeling data of the current position according to the modeling data of historical waypoint positions, the method further comprises:
the unmanned aerial vehicle encodes the modeling data to obtain video stream data;
and the unmanned aerial vehicle transmits the video stream data to a base station, and the video stream data is used for the base station to monitor the aerial photography operation condition of the unmanned aerial vehicle.
5. An unmanned aerial vehicle modeling device, wherein the unmanned aerial vehicle moves in an area to be modeled according to a preset route, a plurality of waypoint positions are arranged on the preset route, and the unmanned aerial vehicle captures images between any two adjacent waypoint positions at a preset frame rate; the device comprises:
the acquisition unit is used for acquiring modeling data of a current position, wherein the modeling data comprises image information and pose information obtained by shooting the area to be modeled by the unmanned aerial vehicle at the current position;
the optimization unit is used for optimizing the modeling data of the current position according to the modeling data of the historical waypoint position;
the sending unit is used for sending the optimized modeling data of the current position to a base station when the current position is determined to belong to the waypoint position; and the base station is used for constructing a three-dimensional model of the area to be modeled by combining the modeling data of the plurality of waypoint positions.
6. The apparatus of claim 5, wherein the preset route comprises a plurality of sub-route segments, and for two of the plurality of sub-route segments having a shooting overlap area:
before the optimization unit sends the optimized modeling data of the current position to the base station, the optimization unit is further configured to: when the unmanned aerial vehicle reaches a loop-back waypoint on a second sub-route segment, update the modeling data of all waypoint positions on the second sub-route segment before the loop-back waypoint according to the modeling data of each waypoint position on a first sub-route segment;
the sending unit is specifically configured to: and sending the updated modeling data of the waypoint position to the base station.
7. The apparatus of claim 6, wherein the preset route is a "zigzag" route, the two sub-route segments having the shooting overlap area are two adjacent rows of the "zigzag" route, and the loop-back waypoint is disposed at the end point of the second sub-route segment.
8. The apparatus of claim 5, wherein the apparatus further comprises an encoding unit;
before the optimization unit optimizes the modeling data of the current position according to the modeling data of the historical waypoint positions, the encoding unit is configured to:
and coding the modeling data to obtain video stream data, and transmitting the video stream data to a base station, wherein the video stream data is used for the base station to monitor the aerial photography operation condition of the unmanned aerial vehicle.
9. A computer-readable storage medium, characterized in that it stores a computer program which, when executed, performs the method of any one of claims 1 to 4.
10. A computing device, comprising:
a memory for storing program instructions;
a processor for calling program instructions stored in said memory to execute the method of any one of claims 1 to 4 in accordance with the obtained program.
Application CN202210161276.2A, filed 2022-02-22, priority date 2022-02-22: Unmanned aerial vehicle modeling method and device. Published as CN114581603A (en); status: Pending.

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202210161276.2A | 2022-02-22 | 2022-02-22 | Unmanned aerial vehicle modeling method and device

Publications (1)

Publication Number | Publication Date
CN114581603A | 2022-06-03

Family ID: 81774370

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202210161276.2A | Unmanned aerial vehicle modeling method and device | 2022-02-22 | 2022-02-22

Country Status (1)

Country | Link
CN | CN114581603A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115562332A (en) * 2022-09-01 2023-01-03 北京普利永华科技发展有限公司 Efficient processing method and system for airborne recorded data of unmanned aerial vehicle
CN115562332B (en) * 2022-09-01 2023-05-16 北京普利永华科技发展有限公司 Efficient processing method and system for airborne record data of unmanned aerial vehicle

Similar Documents

Publication Publication Date Title
US11218689B2 (en) Methods and systems for selective sensor fusion
JP7274674B1 (en) Performing 3D reconstruction with unmanned aerial vehicle
JP6387782B2 (en) Control device, control method, and computer program
Strydom et al. Visual odometry: autonomous uav navigation using optic flow and stereo
CN105973236A (en) Indoor positioning or navigation method and device, and map database generation method
KR101160454B1 (en) Construction method of 3D Spatial Information using position controlling of UAV
JP6138326B1 (en) MOBILE BODY, MOBILE BODY CONTROL METHOD, PROGRAM FOR CONTROLLING MOBILE BODY, CONTROL SYSTEM, AND INFORMATION PROCESSING DEVICE
US20190385361A1 (en) Reconstruction of a scene from a moving camera
US11726501B2 (en) System and method for perceptive navigation of automated vehicles
CN110687928A (en) Landing control method, system, unmanned aerial vehicle and storage medium
EP3799618B1 (en) Method of navigating a vehicle and system thereof
CN113156998B (en) Control method of unmanned aerial vehicle flight control system
CN111650962B (en) Multi-rotor unmanned aerial vehicle route planning and aerial photography method suitable for banded survey area
JP6001914B2 (en) Target position specifying device, target position specifying system, and target position specifying method
Cui et al. Search and rescue using multiple drones in post-disaster situation
Azhari et al. A comparison of sensors for underground void mapping by unmanned aerial vehicles
JP4624000B2 (en) Compound artificial intelligence device
CN114581603A (en) Unmanned aerial vehicle modeling method and device
WO2021250914A1 (en) Information processing device, movement device, information processing system, method, and program
WO2021079516A1 (en) Flight route creation method for flying body and management server
JP7437930B2 (en) Mobile objects and imaging systems
US20220221857A1 (en) Information processing apparatus, information processing method, program, and information processing system
JP7031997B2 (en) Aircraft system, air vehicle, position measurement method, program
JP7317684B2 (en) Mobile object, information processing device, and imaging system
Liu et al. A Monument Digital Reconstruction Experiment Based on the Human UAV Cooperation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination