CN118115670A - Unmanned aerial vehicle viewpoint generation method, device and equipment based on incremental point cloud data


Info

Publication number
CN118115670A
Authority
CN
China
Prior art keywords
point cloud
viewpoint
cloud data
data
grid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410260174.5A
Other languages
Chinese (zh)
Inventor
刘宇凡
孟伟
陈龙生
李俊杰
连仕康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN202410260174.5A priority Critical patent/CN118115670A/en
Publication of CN118115670A publication Critical patent/CN118115670A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The application relates to the technical field of unmanned aerial vehicles (UAVs), and in particular to a UAV viewpoint generation method, device and equipment based on incremental point cloud data. The method obtains incremental point cloud data, filters it through a three-dimensional grid occupation map to obtain a grid update map, computes a redirection normal vector for each point cloud data point, and uses a view cone model to generate a plurality of derivative viewpoints for each candidate viewpoint. It then traverses every viewpoint (both starting viewpoints and derivative viewpoints) in a loop, screens them against constraints to obtain optimized viewpoints, orders all the optimized viewpoints by minimum connection distance using the LKH algorithm, and finally outputs the optimized viewpoint sequence. The method solves the technical problems of long runtime and low viewpoint generation efficiency caused by the large computation load of existing UAV three-dimensional coverage viewpoint generation technology.

Description

Unmanned aerial vehicle viewpoint generation method, device and equipment based on incremental point cloud data
Technical Field
The application relates to the technical field of unmanned aerial vehicles, in particular to an unmanned aerial vehicle viewpoint generating method, device and equipment based on incremental point cloud data.
Background
In recent years, unmanned aerial vehicles (UAVs) have become increasingly popular owing to their agility and flexibility, and are used to collect target-scene information for various tasks such as building structure inspection and three-dimensional reconstruction. UAVs carrying laser radar, and UAV swarms carrying cameras that cooperate through a communication system, can rapidly perform autonomous exploration and three-dimensional reconstruction of unknown building areas. However, such swarm reconstruction demands high coverage, viewpoints that satisfy UAV flight constraints, and strong real-time performance; when multiple UAVs cooperatively explore and cover an area, erroneous viewpoints cause poor reconstruction quality and low coverage. Generating correct three-dimensional coverage viewpoints is therefore an important prerequisite for UAV swarm operations.
The viewpoint generation algorithm of the UAV is crucial. To generate correct coverage viewpoints, the prior art first acquires all point cloud information of the building's wall contour, computes the global point cloud normal vectors and the structural information of the building, and then computes viewpoint information subject to field-of-view and other constraints. In actual operation, however, the area to be mapped has no prior map information: coverage viewpoints can only be computed after a mapping UAV has surveyed the unknown area, which takes considerable time. If a conventional algorithm for computing global viewpoints is run while mapping, large amounts of computation are wasted on repeated recalculation, further reducing viewpoint generation efficiency. For example, one UAV three-dimensional coverage viewpoint generation technique uses rotational symmetry axis (ROSA) points to recognize the skeleton of a building, divides the space into subspaces using the skeleton information, and computes viewpoints in parallel by combining UAV and viewpoint information within each subspace. This technique relies on the building structure being highly symmetric in order to produce a reliable skeleton and hence viewpoints, which limits its applicability.
Another UAV three-dimensional coverage viewpoint generation technique acquires point cloud map information, builds a spatial occupancy grid map, constructs bounding boxes of different resolutions for known occupied regions, and dilates them into a feasible bounding set on which coverage viewpoints are uniformly sampled at equal distances. Because it samples equidistantly, viewpoint generation is insufficient when the covered object has a complex structure, and complete coverage is difficult to achieve. A further technique acquires point cloud map information, performs surface reconstruction by triangulation to obtain the task plane to be covered, and computes the viewpoints needed to fully cover that plane subject to viewpoint and UAV constraints; surface reconstruction by triangulation requires very large amounts of computation, so viewpoint generation based on incremental point cloud information is inefficient. Yet another technique samples viewpoints from point cloud map information, pre-filters some of them in view of spatial and other constraints, and then uses Monte Carlo tree search to iteratively select the viewpoint sequence with the highest coverage; because its iteration requires global point cloud information, it easily falls into local optima when generating viewpoints from incremental point cloud information, resulting in low viewpoint coverage efficiency.
Disclosure of Invention
The embodiment of the application provides an unmanned aerial vehicle viewpoint generation method, device and equipment based on incremental point cloud data, which are used to solve the technical problems of long runtime and low viewpoint generation efficiency caused by the large computation load of existing UAV three-dimensional coverage viewpoint generation technology.
In order to achieve the above object, the embodiment of the present application provides the following technical solutions:
In one aspect, an unmanned aerial vehicle viewpoint generating method based on incremental point cloud data is provided, which includes the following steps:
Constructing a three-dimensional grid occupation map of the viewpoint object to be three-dimensionally covered, and acquiring, through an acquisition device, incremental point cloud data of the object and the acquisition parameters of the acquisition device;
updating the three-dimensional grid occupation map according to the acquisition parameters and the incremental point cloud data to obtain a grid update map and acquiring filtered point cloud data in the grid update map;
Constructing a grid coordinate system for the grid updating map and acquiring point cloud coordinates corresponding to the point cloud data under the grid coordinate system; processing the point cloud coordinates to obtain a redirection normal vector corresponding to the point cloud data;
Constructing an actual coordinate system for the to-be-three-dimensional coverage viewpoint object, acquiring point cloud actual coordinates corresponding to the point cloud data under the actual coordinate system, and calculating according to the acquisition parameters, the point cloud actual coordinates and the redirection normal vectors to obtain viewpoint data of viewpoints corresponding to the point cloud data;
Taking the view point corresponding to each view point data as a starting view point, iteratively generating a plurality of derivative view points for each starting view point based on a view cone model under the actual coordinate system, and constructing a first view point data set by all the starting view points and the corresponding derivative view points;
Performing traversal screening and merging on all viewpoints in the first viewpoint data set to obtain optimized viewpoints; and ordering all the optimized viewpoints by using the LKH algorithm to generate a three-dimensional coverage viewpoint sequence matched to the viewpoint object to be three-dimensionally covered.
Preferably, updating the three-dimensional grid occupation map according to the acquisition parameters and the incremental point cloud data to obtain a grid update map includes:
Calculating according to the acquisition parameters and the incremental point cloud data to obtain grid width;
Updating the three-dimensional grid occupation map by taking the grid width as a unit window and adopting a filtering rule to obtain a grid updating map;
Wherein, the content of the filtering rule comprises:
traversing the three-dimensional grid occupation map by using the unit window, acquiring the quantity of the increment point cloud data in the unit window, and determining the state corresponding to the unit window according to the quantity of the increment point cloud data;
If the number of incremental point cloud data points in the unit window is 0, the unit window is in the unoccupied state;
if the number of incremental point cloud data points is not 0, the unit window is set to the occupied state;
if the unit window is already in the occupied state and the number of incremental point cloud data points is larger than a count threshold, all incremental point cloud data in the unit window are discarded.
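For illustration only, the filtering rule above can be sketched as a voxel hash map in Python; the cell layout, the exact semantics of the count threshold, and all identifiers below are assumptions rather than part of the filing:

```python
from collections import defaultdict

def update_occupancy(grid, points, width, count_threshold):
    """Illustrative sketch of the filtering rule: bucket incoming points into
    cube cells of side `width`, mark newly hit cells as occupied, and drop
    points that fall into cells that were already occupied and receive more
    than `count_threshold` points (treated as redundant).  Returns the kept
    (filtered) points; `grid` maps cell index -> occupied flag."""
    buckets = defaultdict(list)
    for p in points:
        cell = tuple(int(c // width) for c in p)
        buckets[cell].append(p)
    kept = []
    for cell, pts in buckets.items():
        already_occupied = grid.get(cell, False)
        grid[cell] = True                      # nonzero count -> occupied
        if already_occupied and len(pts) > count_threshold:
            continue                           # redundant points: discard
        kept.extend(pts)
    return kept
```

A cell that receives points for the first time is marked occupied and its points are kept; once a cell is known to be occupied, bursts above the threshold are dropped, which is what keeps the incremental update cheap.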
Preferably, processing the point cloud coordinates to obtain a redirection normal vector corresponding to the point cloud data includes:
Calculating according to the point cloud coordinates and the grid width to obtain point cloud center coordinates corresponding to the point cloud coordinates;
Searching the point cloud data under the grid coordinate system by using a KD-tree, with the point cloud center coordinate as the query point, to obtain a data set of the n nearest point cloud data points;
performing least-squares fitting on the n point cloud data points in the data set to obtain a local plane expression, and calculating a fitting parameter for each point cloud data point from its point cloud coordinates and the local plane expression;
selecting the fitting parameter with the smallest value among the n fitting parameters to construct the local plane normal vector;
And calculating the local plane normal vector and the unit direction vector to obtain a redirection normal vector of the point cloud coordinate corresponding to the point cloud center coordinate.
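As a rough sketch of the local plane fit and normal redirection described above (not the filing's exact computation): the code below substitutes an axis-aligned least-squares fit z = ax + by + c for the KD-tree search and fitting-parameter selection, and flips the resulting normal toward the sensor position; all identifiers are illustrative, and degenerate (vertical) planes are not handled.

```python
import math

def plane_normal(neighbors):
    """Least-squares fit z = a*x + b*y + c over the neighborhood and return
    the (unnormalised) plane normal (a, b, -1).  This is a simplification of
    the patent's local plane fit, assumed for illustration only."""
    n = len(neighbors)
    sx = sum(p[0] for p in neighbors); sy = sum(p[1] for p in neighbors)
    sz = sum(p[2] for p in neighbors)
    sxx = sum(p[0] * p[0] for p in neighbors)
    syy = sum(p[1] * p[1] for p in neighbors)
    sxy = sum(p[0] * p[1] for p in neighbors)
    sxz = sum(p[0] * p[2] for p in neighbors)
    syz = sum(p[1] * p[2] for p in neighbors)
    # Normal equations [[sxx,sxy,sx],[sxy,syy,sy],[sx,sy,n]] [a,b,c]^T =
    # [sxz,syz,sz]^T, solved for a and b by Cramer's rule.
    a11, a12, a13 = sxx, sxy, sx
    a21, a22, a23 = sxy, syy, sy
    a31, a32, a33 = sx, sy, n
    det = a11*(a22*a33 - a23*a32) - a12*(a21*a33 - a23*a31) + a13*(a21*a32 - a22*a31)
    da = sxz*(a22*a33 - a23*a32) - a12*(syz*a33 - a23*sz) + a13*(syz*a32 - a22*sz)
    db = a11*(syz*a33 - a23*sz) - sxz*(a21*a33 - a23*a31) + a13*(a21*sz - syz*a31)
    a, b = da / det, db / det
    return (a, b, -1.0)

def reoriented_normal(point, neighbors, sensor):
    """Unit normal flipped to face the sensor position (the 'redirection')."""
    nx, ny, nz = plane_normal(neighbors)
    norm = math.sqrt(nx*nx + ny*ny + nz*nz)
    nx, ny, nz = nx / norm, ny / norm, nz / norm
    vx, vy, vz = (sensor[0] - point[0], sensor[1] - point[1], sensor[2] - point[2])
    if nx*vx + ny*vy + nz*vz < 0:        # normal points away from sensor: flip
        nx, ny, nz = -nx, -ny, -nz
    return (nx, ny, nz)
```

Flipping toward the sensor is one standard way to make locally fitted normals point outward from the surface, which is what a downstream viewpoint placed along the normal requires.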
Preferably, obtaining viewpoint data of the viewpoint corresponding to each point cloud data point according to the acquisition parameters, the point cloud actual coordinates and the redirection normal vectors includes:
calculating first viewpoint coordinates of the viewpoint corresponding to each point cloud data point from the center distance in the acquisition parameters, the point cloud actual coordinates and the redirection normal vector;
calculating rotation data of the viewpoint corresponding to the point cloud data point from the fitting parameters of the redirection normal vector;
wherein the viewpoint data includes the first viewpoint coordinates and the rotation data.
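One plausible reading of the first viewpoint coordinates and rotation data, sketched in Python: the viewpoint is placed at the center distance along the redirection normal vector, looking back at the surface point, with yaw and pitch derived from the normal. This reading is an assumption for illustration, not the filing's exact formula:

```python
import math

def viewpoint_from_point(point, normal, center_distance):
    """Place a candidate viewpoint `center_distance` along the redirection
    normal of a surface point, looking back at the point.  Returns the
    viewpoint position plus yaw/pitch in radians derived from the normal --
    an assumed reading of the patent's 'rotation data'."""
    px, py, pz = point
    nx, ny, nz = normal
    position = (px + center_distance * nx,
                py + center_distance * ny,
                pz + center_distance * nz)
    yaw = math.atan2(-ny, -nx)       # look along -normal, back at the point
    pitch = math.asin(max(-1.0, min(1.0, -nz)))
    return position, yaw, pitch
```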
Preferably, iteratively generating a plurality of derivative viewpoints for each of the starting viewpoints based on a view cone model under the actual coordinate system includes:
obtaining set iteration parameters, wherein the iteration parameters comprise an iteration direction matrix, a stepping width and a stepping angle;
and placing the base surface of the view cone model on the starting viewpoint, and stepping gradually from the starting viewpoint toward the apex of the view cone model according to the iteration parameters to generate a plurality of derivative viewpoints.
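A hypothetical sketch of the view cone iteration: starting from the cone base centre at the starting viewpoint, step toward the apex by the step width, emitting viewpoints rotated by the step angle at each level. The iteration direction matrix is simplified here to a single direction vector, and all identifiers are illustrative:

```python
import math

def derive_viewpoints(start, view_dir, steps, step_width, step_angle):
    """Illustrative reading of the cone iteration: from the cone base centre
    at `start`, advance toward the apex along unit vector `view_dir` by
    `step_width` per step, and at each level emit viewpoints whose yaw is
    rotated around the cone axis in increments of `step_angle` (radians)."""
    sx, sy, sz = start
    dx, dy, dz = view_dir
    derived = []
    for k in range(1, steps + 1):
        cx = sx + k * step_width * dx
        cy = sy + k * step_width * dy
        cz = sz + k * step_width * dz
        n_ring = max(1, int(2 * math.pi // step_angle))
        for i in range(n_ring):
            derived.append(((cx, cy, cz), i * step_angle))
    return derived
```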
Preferably, performing traversal screening and merging on all viewpoints in the first viewpoint data set to obtain the optimized viewpoints includes: sequentially applying spatial constraint filtering and field-of-view constraint filtering to all viewpoints in the first viewpoint data set, followed by viewpoint merging, to obtain the optimized viewpoints.
Preferably, the performing traversal screening on all viewpoints in the first viewpoint data set by using spatial constraint filtering includes:
Mapping all viewpoints in the first viewpoint data set to the grid coordinate system to obtain point cloud data points corresponding to each viewpoint, and acquiring center coordinates, first expansion data and second expansion data of the grid coordinate system;
Under the grid coordinate system, dilating outward from the center coordinate in units of the first expansion data to obtain a first expansion area, and deleting all point cloud data points inside the first expansion area;
dilating from the center coordinate in units of the second expansion data to obtain a second expansion area, and deleting all point cloud data points outside the second expansion area, to obtain the screened point cloud data point set.
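The two-region dilation above can be sketched as keeping only candidates in the shell between the two dilation extents; the Chebyshev (cube) distance below approximates a grid dilation and is an assumption, as are all identifiers:

```python
def spatial_filter(points, center, r_inner, r_outer):
    """Keep only candidate viewpoints lying in the shell between the first
    and second dilation regions.  Regions are approximated as axis-aligned
    cubes via the Chebyshev distance, matching a grid-cell dilation; the
    radii are given in grid cells.  Illustrative sketch only."""
    kept = []
    for p in points:
        d = max(abs(p[i] - center[i]) for i in range(3))  # cube "dilation"
        if r_inner < d <= r_outer:        # inside first region or outside
            kept.append(p)                # second region -> dropped
    return kept
```

Deleting candidates inside the first region removes viewpoints too close to the structure (collision risk), while deleting those outside the second removes viewpoints too far away to observe it usefully.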
Preferably, the unmanned aerial vehicle viewpoint generation method based on incremental point cloud data further comprises performing traversal screening on all viewpoints in the point cloud data point set by using field-of-view constraint filtering, which includes:
Mapping all the point cloud data points in the point cloud data point set to the actual coordinate system to obtain the point actual coordinates of a second viewpoint corresponding to each point cloud data point; acquiring rotation data corresponding to each point cloud data point;
Converting according to the actual coordinates of the points and the rotation data to obtain second viewpoint coordinates of each corresponding second viewpoint under a viewpoint coordinate system;
filtering each screened viewpoint against a constraint condition according to the second viewpoint coordinates, to obtain a coverage viewpoint set formed by the second viewpoints that satisfy the constraint condition;
wherein, the constraint condition is:
where X_s, Y_s and Z_s are the X-, Y- and Z-axis coordinate values of the second viewpoint coordinates, HorzFOV is the horizontal field angle in the acquisition parameters, VertFOV is the vertical field angle in the acquisition parameters, and d is the center distance in the acquisition parameters.
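The constraint condition itself is given as a formula in the original filing; a standard view-frustum membership test that is consistent with the variables listed above would look like the following sketch (angles in radians, depth bounded by the center distance d). This reconstruction is an assumption, not the filing's exact inequality:

```python
import math

def inside_frustum(xs, ys, zs, horz_fov, vert_fov, d):
    """Assumed reconstruction of the field-of-view constraint: a point
    expressed in the viewpoint coordinate system is covered if its depth
    Z_s lies in (0, d] and its lateral offsets fit inside the horizontal
    and vertical field angles at that depth."""
    if not (0.0 < zs <= d):
        return False
    return (abs(xs) <= zs * math.tan(horz_fov / 2.0) and
            abs(ys) <= zs * math.tan(vert_fov / 2.0))
```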
Preferably, the unmanned aerial vehicle viewpoint generation method based on incremental point cloud data further comprises merging all the second viewpoints in the coverage viewpoint set, which includes:
Mapping each second view point in the coverage view point set into the grid coordinate system, and dividing all point cloud data in the grid coordinate system according to the grid width to obtain a plurality of grid areas;
acquiring, for each second viewpoint, the plane parameters it covers, and the first point cloud data quantity and second point cloud data quantity in the grid region where the viewpoint is located;
Calculating according to the plane parameters, the first point cloud data quantity and the second point cloud data quantity to obtain an index total score corresponding to the second viewpoint;
and merging all the second viewpoints in each grid region into one optimized viewpoint according to the index total score.
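A minimal sketch of the per-grid-region merge: among the second viewpoints falling in the same grid region, keep the one with the highest index total score. The scoring weights (plane parameters and point cloud quantities) are not specified here, so scores are taken as given; all identifiers are illustrative:

```python
def merge_viewpoints(scored_viewpoints):
    """Merge candidate viewpoints per grid region by keeping, for each
    region, the viewpoint with the highest index total score.  Input is an
    iterable of (region, viewpoint, score) triples; how the score is built
    from plane parameters and point counts is left to the caller."""
    best = {}
    for region, vp, score in scored_viewpoints:
        if region not in best or score > best[region][1]:
            best[region] = (vp, score)
    return [vp for vp, _ in best.values()]
```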
In still another aspect, an unmanned aerial vehicle viewpoint generating device based on incremental point cloud data is provided, which comprises a data acquisition module, an updating and filtering module, a vector calculation module, a viewpoint calculation module, a viewpoint iteration derivation module and a viewpoint optimization module;
the data acquisition module is used for constructing a three-dimensional grid occupation map of the viewpoint object to be three-dimensional covered, acquiring incremental point cloud data of the viewpoint object to be three-dimensional covered through acquisition equipment and acquiring acquisition parameters of the acquisition equipment;
The updating and filtering module is used for updating the three-dimensional grid occupation map according to the acquisition parameters and the incremental point cloud data to obtain a grid updating map and obtaining filtered point cloud data in the grid updating map;
The vector calculation module is used for constructing a grid coordinate system for the grid updating map and acquiring point cloud coordinates corresponding to the point cloud data under the grid coordinate system; processing the point cloud coordinates to obtain a redirection normal vector corresponding to the point cloud data;
The viewpoint calculating module is used for constructing an actual coordinate system for the viewpoint object to be three-dimensionally covered, acquiring the point cloud actual coordinates corresponding to the point cloud data under the actual coordinate system, and calculating according to the acquisition parameters, the point cloud actual coordinates and the redirection normal vectors to obtain viewpoint data of viewpoints corresponding to the point cloud data;
the viewpoint iteration derivative module is used for taking a viewpoint corresponding to each viewpoint datum as a starting viewpoint, carrying out iteration on each starting viewpoint based on a view cone model under the actual coordinate system to generate a plurality of derivative viewpoints, and constructing a first viewpoint dataset by all the starting viewpoints and the corresponding derivative viewpoints;
The viewpoint optimizing module is used for performing traversal screening and merging processing on all viewpoints in the first viewpoint data set to obtain an optimized viewpoint; and sequencing all the optimized viewpoints by using an LKH algorithm to generate a three-dimensional coverage viewpoint sequence matched with the object of the three-dimensional coverage viewpoint to be subjected to three-dimensional coverage.
In yet another aspect, a terminal device is provided that includes a processor and a memory;
The memory is used for storing program codes and transmitting the program codes to the processor;
The processor is configured to execute the unmanned aerial vehicle viewpoint generating method based on the incremental point cloud data according to the instruction in the program code.
The unmanned aerial vehicle viewpoint generation method, device and equipment based on incremental point cloud data construct a three-dimensional grid occupation map of the viewpoint object to be three-dimensionally covered, and acquire, through an acquisition device, incremental point cloud data of the object and the acquisition parameters of the device; update the three-dimensional grid occupation map according to the acquisition parameters and the incremental point cloud data to obtain a grid update map, and obtain the filtered point cloud data in the grid update map; construct a grid coordinate system for the grid update map, obtain the point cloud coordinates of each point cloud data point under the grid coordinate system, and process the point cloud coordinates to obtain the redirection normal vector of each point cloud data point; construct an actual coordinate system for the object, obtain the point cloud actual coordinates of each point cloud data point under the actual coordinate system, and calculate viewpoint data of the viewpoint corresponding to each point cloud data point from the acquisition parameters, the point cloud actual coordinates and the redirection normal vectors; take the viewpoint corresponding to each viewpoint datum as a starting viewpoint, iteratively generate a plurality of derivative viewpoints for each starting viewpoint based on a view cone model under the actual coordinate system, and construct a first viewpoint data set from all the starting viewpoints and their derivative viewpoints; perform traversal screening and merging on all viewpoints in the first viewpoint data set to obtain optimized viewpoints; and order all the optimized viewpoints by using the LKH algorithm to generate a three-dimensional coverage viewpoint sequence matched to the object.
From the above technical solutions, the embodiments of the present application have the following advantages: after the incremental point cloud data are obtained, the method filters them through the three-dimensional grid occupation map to obtain a grid update map, computes the redirection normal vector of each point cloud data point, generates a plurality of derivative viewpoints for each viewpoint using a view cone model, traverses every viewpoint (both starting viewpoints and derivative viewpoints) in a loop, screens them against constraints to obtain optimized viewpoints, orders all the optimized viewpoints by minimum connection distance using the LKH algorithm, and finally outputs the optimized viewpoint sequence. This solves the technical problems of long runtime and low viewpoint generation efficiency caused by the large computation load of existing UAV three-dimensional coverage viewpoint generation technology.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings described below show only some embodiments of the application, and that other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a step flowchart of an unmanned aerial vehicle viewpoint generating method based on incremental point cloud data according to an embodiment of the present application;
Fig. 2 is a schematic diagram of view cone model viewpoint generation in an unmanned aerial vehicle viewpoint generation method based on incremental point cloud data according to an embodiment of the present application;
Fig. 3 is a schematic diagram of iterative generation of derivative viewpoints in an unmanned aerial vehicle viewpoint generation method based on incremental point cloud data according to an embodiment of the present application;
fig. 4 is a schematic diagram of space constraint expansion filtering in an unmanned aerial vehicle viewpoint generating method based on incremental point cloud data according to an embodiment of the present application;
Fig. 5 is a schematic diagram of horizontal resolution calculation in an unmanned aerial vehicle viewpoint generating method based on incremental point cloud data according to an embodiment of the present application;
Fig. 6 is a schematic diagram of view merging in an unmanned aerial vehicle view generating method based on incremental point cloud data according to an embodiment of the present application;
fig. 7 is a schematic diagram of optimizing viewpoint sequencing in an unmanned aerial vehicle viewpoint generating method based on incremental point cloud data according to an embodiment of the present application;
Fig. 8 is a schematic diagram of an object being a building body in an unmanned aerial vehicle viewpoint generating method based on incremental point cloud data according to an embodiment of the present application;
Fig. 9 is a schematic diagram of generating a view point when a wall part of a building is subjected to point cloud in the unmanned aerial vehicle view point generation method based on incremental point cloud data according to the embodiment of the application;
Fig. 10 is a schematic diagram of a view point generation effect at a lower part of a building in an unmanned aerial vehicle view point generation method based on incremental point cloud data according to an embodiment of the present application;
fig. 11 is a schematic diagram of complete generation of a final view of a building in an unmanned aerial vehicle view point generation method based on incremental point cloud data according to an embodiment of the present application;
fig. 12 is a schematic frame diagram of an unmanned aerial vehicle viewpoint generating device based on incremental point cloud data according to an embodiment of the present application;
Fig. 13 is a schematic diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In order to make the objects, features and advantages of the present application more comprehensible, the embodiments are described below with reference to the accompanying drawings. It is apparent that the embodiments described below are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application.
In the description of embodiments of the present application, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the embodiments of the present application, the meaning of "plurality" is two or more, unless explicitly defined otherwise.
In the embodiments of the present application, unless explicitly specified and limited otherwise, the terms "mounted," "connected," "secured" and the like are to be construed broadly; for example, a connection may be a permanent connection, a removable connection, or an integral formation; it may be a mechanical or an electrical connection; and it may be a direct connection, an indirect connection through an intermediate medium, an internal communication between two elements, or an interaction between two elements. The specific meanings of the above terms in the embodiments of the present application can be understood by those of ordinary skill in the art according to the specific circumstances.
The embodiment of the application provides an unmanned aerial vehicle viewpoint generating method, device and equipment based on incremental point cloud data, which solve the technical problems of long time consumption and low viewpoint generating efficiency caused by large calculation amount of the existing unmanned aerial vehicle three-dimensional coverage viewpoint generating technology.
Embodiment one:
Fig. 1 is a step flowchart of an unmanned aerial vehicle viewpoint generating method based on incremental point cloud data according to an embodiment of the present application.
As shown in fig. 1, an embodiment of the present application provides a method for generating an unmanned aerial vehicle viewpoint based on incremental point cloud data, including the following steps:
S1, constructing a three-dimensional grid occupation map of a viewpoint object to be three-dimensional covered, and acquiring incremental point cloud data of the viewpoint object to be three-dimensional covered and acquisition parameters of acquisition equipment through the acquisition equipment.
In step S1, a three-dimensional grid occupation map of the viewpoint object to be three-dimensionally covered is first constructed, and the incremental point cloud data of the object and the acquisition parameters of the acquisition device are acquired. In this embodiment, the acquisition parameters include the center distance, the vertical field angle, the horizontal field angle, and a zoom adjustment coefficient. The center distance refers to the distance from the viewpoint center to the center of the cone base in the view cone model.
In the embodiment of the application, an acquisition device such as a laser radar carried by the unmanned aerial vehicle can be used to acquire the incremental point cloud data of the viewpoint object to be three-dimensionally covered.
It should be noted that the incremental point cloud data refer to the point cloud data of the environment at the unmanned aerial vehicle's current position, containing only the spatial coordinates of points near that position. For example, when the unmanned aerial vehicle moves to another position, its incremental point cloud data are refreshed because the environment at the new position is different.
S2, updating the three-dimensional grid occupation map according to the acquisition parameters and the incremental point cloud data to obtain a grid updating map and obtaining the filtered point cloud data in the grid updating map.
In step S2, the incremental point cloud data is filtered according to the acquisition parameters obtained in step S1, and the three-dimensional grid occupation map is updated to obtain a grid update map, and the filtered point cloud data is obtained.
S3, constructing a grid coordinate system for the grid updating map and acquiring point cloud coordinates corresponding to each point cloud data under the grid coordinate system; and processing the point cloud coordinates to obtain a redirection normal vector corresponding to the point cloud data.
In step S3, a grid coordinate system is constructed from the grid update map obtained in step S2, and the point cloud coordinates of each point cloud data are obtained under the grid coordinate system; each point cloud coordinate is then processed to obtain the redirection normal vector corresponding to each point cloud data.
S4, constructing an actual coordinate system of the three-dimensional coverage viewpoint object, acquiring point cloud actual coordinates corresponding to each point cloud data under the actual coordinate system, and calculating according to the acquisition parameters, each point cloud actual coordinate and each redirection normal vector to obtain viewpoint data of viewpoints corresponding to each point cloud data.
In step S4, an actual coordinate system is constructed based on the viewpoint object to be three-dimensionally covered, and the point cloud actual coordinates of each point cloud data are obtained under the actual coordinate system. Then, viewpoint data of the viewpoint corresponding to each point cloud data is calculated from the acquisition parameters acquired in step S1, the redirection normal vectors obtained in step S3, and the point cloud actual coordinates.
S5, taking the view point corresponding to each view point data as a starting view point, iteratively generating a plurality of derivative view points for each starting view point based on a view cone model under an actual coordinate system, and constructing a first view point data set by all the starting view points and the corresponding derivative view points.
It should be noted that, in step S5, iterative derivation is performed on the viewpoint data of each viewpoint obtained in step S4; each viewpoint generates a plurality of derivative viewpoints according to its corresponding viewpoint data, yielding the first viewpoint data set.
S6, performing traversal screening and merging processing on all the viewpoints in the first viewpoint data set to obtain optimized viewpoints; and ordering all the optimized viewpoints with an LKH algorithm to generate a three-dimensional coverage viewpoint sequence matched with the object to be three-dimensionally covered.
In step S6, each viewpoint obtained in step S5 (including the starting viewpoints and the derivative viewpoints) is traversed, screened, and merged to obtain the optimized viewpoints; all optimized viewpoints are then ordered with the LKH algorithm to generate the three-dimensional coverage viewpoint sequence matched with the object.
In the embodiment of the application, the unmanned aerial vehicle viewpoint generating method based on incremental point cloud data can be applied to three-dimensional coverage viewpoint generation by an unmanned aerial vehicle for an object in an unknown environment: viewpoints are generated while the three-dimensional grid occupation map is still being constructed, which greatly improves the efficiency of three-dimensional coverage work. The method updates the three-dimensional grid occupation map of the object with the incremental point cloud data, generates viewpoints based on a view cone model, filters, merges, and adjusts the viewpoints based on set constraint indexes and an attraction model to obtain optimized viewpoints, and finally orders the optimized viewpoints with the LKH algorithm, generating a three-dimensional coverage viewpoint sequence that meets the index requirements. In summary, the method obtains the incremental point cloud data, filters it with the three-dimensional grid occupation map to obtain the grid update map, calculates the redirection normal vector of each point cloud data, generates a plurality of derivative viewpoints for each viewpoint with the view cone model, traverses each viewpoint (including the starting and derivative viewpoints) in a loop, screens the viewpoints with constraints, adjusts and merges part of the viewpoints with a centroid model, orders them by minimum connection distance with the LKH algorithm, and finally outputs the viewpoint sequence.
The application provides an unmanned aerial vehicle viewpoint generating method based on incremental point cloud data, comprising: constructing a three-dimensional grid occupation map of the viewpoint object to be three-dimensionally covered, and acquiring incremental point cloud data of the object and acquisition parameters of the acquisition equipment; updating the three-dimensional grid occupation map according to the acquisition parameters and the incremental point cloud data to obtain a grid update map and the filtered point cloud data therein; constructing a grid coordinate system for the grid update map, acquiring the point cloud coordinates of each point cloud data under it, and processing the point cloud coordinates to obtain the corresponding redirection normal vectors; constructing an actual coordinate system, acquiring the point cloud actual coordinates of each point cloud data under it, and calculating the viewpoint data of the viewpoint corresponding to each point cloud data from the acquisition parameters, the point cloud actual coordinates, and the redirection normal vectors; taking the viewpoint corresponding to each viewpoint data as a starting viewpoint, iteratively generating a plurality of derivative viewpoints for each starting viewpoint based on the view cone model under the actual coordinate system, and constructing a first viewpoint data set from all starting viewpoints and their derivative viewpoints; performing traversal screening and merging on all viewpoints in the first viewpoint data set to obtain optimized viewpoints; and ordering all optimized viewpoints with the LKH algorithm to generate a three-dimensional coverage viewpoint sequence matched with the object to be three-dimensionally covered. In this way, after the incremental point cloud data are obtained, filtering with the three-dimensional grid occupation map yields the grid update map; the redirection normal vector of each point cloud data is calculated, a plurality of derivative viewpoints are generated with the view cone model, each viewpoint (including the starting and derivative viewpoints) is traversed in a loop and screened by constraints into optimized viewpoints, all optimized viewpoints are ordered by minimum connection distance through the LKH algorithm, and the optimized viewpoint sequence is output. The method thereby solves the technical problems of long computation time and low viewpoint generation efficiency caused by the large calculation amount of existing unmanned aerial vehicle three-dimensional coverage viewpoint generation technology.
In one embodiment of the present application, updating the three-dimensional grid occupation map according to the acquisition parameters and the incremental point cloud data to obtain the grid update map includes:
Calculating according to the acquisition parameters and the incremental point cloud data to obtain the grid width;
Updating the three-dimensional grid occupation map by taking the grid width as a unit window and adopting a filtering rule to obtain a grid updating map;
the content of the filtering rule comprises:
Traversing the three-dimensional grid occupation map by using a unit window to obtain the quantity of the increment point cloud data in the unit window, and determining the state of the corresponding unit window according to the quantity of the increment point cloud data;
If the number of the incremental point cloud data is 0, the unit window remains in an unoccupied state;
If the number of the incremental point cloud data is not 0, the unit window is set to an occupied state;
If the unit window is in an occupied state and the quantity of the increment point cloud data is larger than the quantity threshold value, eliminating all increment point cloud data in the unit window.
It should be noted that the method needs to construct the three-dimensional grid occupation map of the object before generating viewpoints, in order to determine the area still to be covered, avoid repeatedly computing viewpoints for already-covered areas, and exclude positions the unmanned aerial vehicle cannot reach (such as a cavity inside a building), thereby assisting viewpoint generation. After the three-dimensional grid occupation map is established, the size of its grid must be determined: a grid set too small makes the calculation amount too large, while a grid that is too large affects the generation of coverage viewpoints. The map is therefore updated according to the acquired incremental point cloud data and acquisition parameters to obtain the grid update map. In this embodiment, updating the map with the filtering rule means filtering the incremental point cloud data with occupancy flag bits stored in a three-dimensional array. Initially, the whole three-dimensional grid occupation map is set to unoccupied. For each unit window of one grid width, the flag bit at the corresponding index of the three-dimensional array is queried to judge whether the window is occupied; if it is occupied and the number of incremental point cloud data in it is greater than the number threshold, all incremental point cloud data in the window are eliminated, and if it is not occupied, the window is set to occupied. The number threshold may be set according to requirements and is not particularly limited here.
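The filtering rule above can be sketched as follows. This is an illustrative implementation (function and parameter names are hypothetical), using a set of occupied voxel indices in place of the patent's three-dimensional flag-bit array:

```python
import numpy as np

def filter_incremental_points(points, voxel_width, count_threshold, occupied=None):
    """Sketch of the voxel filtering rule: points of an increment falling into
    an already-occupied voxel are dropped when their count exceeds the
    threshold; otherwise they are kept and the voxel is marked occupied."""
    if occupied is None:
        occupied = set()                       # occupied voxel indices
    pts = np.asarray(points, dtype=float)
    keys = [tuple(k) for k in np.floor(pts / voxel_width).astype(int)]
    counts = {}
    for k in keys:                             # count new points per voxel
        counts[k] = counts.get(k, 0) + 1
    kept = [p for p, k in zip(pts, keys)
            if not (k in occupied and counts[k] > count_threshold)]
    occupied.update(keys)                      # non-empty voxels become occupied
    return np.array(kept), occupied
```

On a first pass every point survives (no voxel is occupied yet); on later passes, redundant points in saturated voxels are eliminated.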
In the embodiment of the application, the grid width is calculated with a grid width formula from the acquisition parameters and the incremental point cloud data, where Voxel_width is the grid width, N_Cloud is the number of incremental point cloud data, Voxel_p is a scaling factor, HorzFOV is the horizontal field angle of the acquisition parameters, VertFOV is the vertical field angle of the acquisition parameters, and d is the center distance of the acquisition parameters.
In one embodiment of the present application, processing the point cloud coordinates to obtain a redirection normal vector corresponding to the point cloud data includes:
Calculating according to the point cloud coordinates and the grid width to obtain point cloud center coordinates corresponding to the point cloud coordinates;
searching point cloud data with a KD-TREE under the grid coordinate system, taking the point cloud center coordinate as the reference point, to obtain a data set constructed from n point cloud data;
performing least-squares fitting on the n point cloud data in the data set to obtain a local plane expression, and calculating fitting parameters for each point cloud data from its point cloud coordinates and the local plane expression;
screening, from the n fitting results, the fitting parameters with the smallest fitting value to construct the local plane normal vector;
and calculating the local plane normal vector and the unit direction vector to obtain a redirection normal vector of the point cloud coordinate corresponding to the point cloud center coordinate.
It should be noted that the point cloud center coordinates corresponding to the point cloud coordinates are calculated from the point cloud coordinates and the grid width with the grid map center coordinate formula. Under the grid coordinate system, with the point cloud center coordinate as the reference point, a KD-TREE searches for point cloud data with search radius R and returns the K point cloud data closest to the reference point; the resulting data set is denoted P(R, K) = {p_1, p_2, ..., p_n}. Normal vector estimation is then performed in this data set by local surface fitting: provided the surface sampled by the data set is smooth, the local neighborhood of any point can be well fitted by a plane. For each point cloud data, all points within a certain radius are therefore searched, and the local plane of these points in the least-squares sense is calculated; this local plane can be represented by the local plane expression. A fitting value of the fitted plane is computed with the fitting formula from the local plane expression and each point cloud coordinate of the data set under the grid coordinate system, and the local plane normal vector is obtained from the fitting parameters. The grid map center coordinate formula gives the center of the grid cell containing the point, i.e. x_i = (floor(x / Voxel_width) + 1/2) * Voxel_width, and likewise for y_i and z_i.
the local plane expression is: z_1i = a*x_1i + b*y_1i + c,
the fitting formula is the least-squares objective: min over (a, b, c) of the sum over i = 1..n of (a*x_1i + b*y_1i + c - z_1i)^2,
where x, y and z are the point cloud coordinates under the grid coordinate system, x_i, y_i and z_i are the point cloud center coordinates corresponding to the point cloud coordinates under the grid coordinate system, x_1i, y_1i and z_1i are the point cloud coordinates of the data set under the grid coordinate system, a, b and c are the fitting parameters of the point cloud data in the local plane, and (a, b, c) are the fitting parameters from which the local plane normal vector is constructed.
In the embodiment of the application, the calculated local plane normal vector is ambiguous: only the line on which the normal vector lies is obtained, and its direction is not yet determined. The normal vectors must all point toward the outside of the building, because the unmanned aerial vehicle always flies around the building's periphery. It therefore suffices to take the dot product of the unit direction vector from the unmanned aerial vehicle to the point cloud data with the calculated normal vector, and to reverse the normal vector when the product is smaller than 0. The redirection normal vector of the point cloud coordinate corresponding to the point cloud center coordinate is calculated with the redirection normal vector formula:
n_r = n if n · u >= 0, and n_r = -n if n · u < 0,
where n_r is the redirected normal vector, n is the normal vector before redirection, and u is the unit direction vector pointing from the body of the unmanned aerial vehicle to the point cloud data.
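The plane fitting and normal redirection described above can be sketched as follows. This is a hedged illustration: the KD-tree search is replaced by a brute-force k-nearest-neighbour query, and the sign convention flips the normal so that it ends up facing back toward the drone (i.e. toward the outside of the building); all names are illustrative:

```python
import numpy as np

def redirected_normal(points, query, drone_pos, k=8):
    """Estimate a local-plane normal at `query` and redirect it.

    The plane z = a*x + b*y + c is fitted by least squares over the k
    nearest neighbours, giving the (unnormalised) normal (a, b, -1); the
    sign is then flipped whenever the dot product with the drone-to-point
    unit vector is positive, so the normal faces the drone."""
    pts = np.asarray(points, dtype=float)
    nbrs = pts[np.argsort(np.linalg.norm(pts - query, axis=1))[:k]]
    A = np.c_[nbrs[:, 0], nbrs[:, 1], np.ones(len(nbrs))]
    (a, b, c), *_ = np.linalg.lstsq(A, nbrs[:, 2], rcond=None)
    n = np.array([a, b, -1.0])
    n /= np.linalg.norm(n)
    u = (query - drone_pos) / np.linalg.norm(query - drone_pos)  # drone -> point
    if np.dot(n, u) > 0:       # pointing away from the drone: reverse it
        n = -n
    return n
```

For points sampled from a flat roof with the drone hovering above, the returned normal points upward, toward the drone.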
Fig. 2 is a schematic diagram of view cone model viewpoint generation in the unmanned aerial vehicle viewpoint generation method based on incremental point cloud data according to the embodiment of the application.
In one embodiment of the present application, according to the acquisition parameters, the actual coordinates of each point cloud, and each redirection normal vector, obtaining viewpoint data of a viewpoint corresponding to each point cloud data includes:
calculating according to the center distance of the acquisition parameters, the actual coordinates of each point cloud and each redirection normal vector to obtain first viewpoint coordinates of viewpoints corresponding to each point cloud data;
calculating rotation data of the viewpoint corresponding to the point cloud data according to the fitting parameters of the redirection normal vector;
Wherein the viewpoint data includes first viewpoint coordinates and rotation data.
As shown in fig. 2, each sampled point cloud data is mapped onto the center of the bottom plane of the view cone according to the filtered point cloud data and the corresponding redirection normal vector, and a coarse viewpoint is generated from the view cone model. The first viewpoint coordinate of the viewpoint corresponding to each point cloud data is calculated with the viewpoint position formula from the center distance of the acquisition parameters, the point cloud actual coordinates, and the redirection normal vectors:
[x_0, y_0, z_0] = [x, y, z] + d * n_r,
i.e. the viewpoint is placed at the center distance d along the redirected normal vector n_r. Denoting the fitting parameters of the redirection normal vector by (n_x, n_y, n_z), the rotation data of the viewpoint is calculated with the pitch-yaw angle formula:
yaw = arctan(n_y / n_x), pitch = arctan(n_z / sqrt(n_x^2 + n_y^2)), roll = 0.
Here [x_0, y_0, z_0] is the first viewpoint coordinate of the viewpoint, [x, y, z] is the point cloud coordinate, and yaw, pitch and roll are respectively the yaw angle, pitch angle, and roll angle of the acquisition device when performing three-dimensional coverage. In this embodiment the roll angle is set to 0, reducing the computational complexity. The viewpoint data may be represented as [x_0, y_0, z_0, yaw, pitch].
Fig. 3 is a schematic diagram of iterative generation of derivative viewpoints in an unmanned aerial vehicle viewpoint generation method based on incremental point cloud data according to an embodiment of the present application.
As shown in fig. 3, in one embodiment of the present application, iteratively generating a plurality of derivative viewpoints for each starting viewpoint based on a cone model under an actual coordinate system includes:
obtaining set iteration parameters, wherein the iteration parameters comprise an iteration direction matrix diag{d_1, d_2, ..., d_5}, a step width Step_pos, and a step angle Step_angle;
and placing the bottom end surface of the view cone model on a starting viewpoint, and gradually iterating the starting viewpoint to the vertex of the view cone model according to the iteration parameters to generate a plurality of derivative viewpoints.
It should be noted that the starting viewpoint serves as the iteration starting point, and denser, multi-view derivative viewpoints are generated in its neighborhood according to the set position and iteration parameters. This can be understood as follows: in the view cone model, the starting viewpoint is taken as the root viewpoint; translating the root viewpoint's position by the step width and rotating it by the step angle generates a new viewpoint, recorded as a derivative viewpoint. The derivative viewpoint formula is:
[x_0i,new, y_0i,new, z_0i,new, yaw_i,new, pitch_i,new]^T = [x_0i, y_0i, z_0i, yaw_i, pitch_i]^T + diag{d_1, d_2, ..., d_5} * [Step_pos, Step_pos, Step_pos, Step_angle, Step_angle]^T,
where [x_0i, y_0i, z_0i, yaw_i, pitch_i] is the starting viewpoint of the i-th iteration, i is a natural number other than 0, and [x_0i,new, y_0i,new, z_0i,new, yaw_i,new, pitch_i,new] is the derivative viewpoint of the i-th iteration.
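The iterative derivation can be sketched as a per-component update of the pose vector [x, y, z, yaw, pitch]; the concrete direction vectors and the step handling below are illustrative assumptions:

```python
import numpy as np

def derive_viewpoints(root, directions, step_pos, step_angle):
    """Derive viewpoints from a root pose [x, y, z, yaw, pitch]: each
    iteration scales the step vector component-wise by a direction vector
    d_1..d_5 (the diagonal of the iteration direction matrix) and adds it
    to the current pose, recording the result as a derivative viewpoint."""
    vp = np.asarray(root, dtype=float)
    step = np.array([step_pos, step_pos, step_pos, step_angle, step_angle])
    derived = []
    for d in directions:                    # one direction vector per iteration
        vp = vp + np.asarray(d, dtype=float) * step
        derived.append(vp.copy())
    return derived
```

Each derivative viewpoint becomes the base of the next iteration, so a sequence of direction vectors sweeps a small neighbourhood of poses around the root.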
Fig. 4 is a schematic diagram of spatial constraint expansion filtering in an unmanned aerial vehicle viewpoint generating method based on incremental point cloud data according to an embodiment of the present application.
In one embodiment of the present application, performing traversal screening and merging processing on all viewpoints in the first viewpoint data set to obtain the optimized viewpoints includes: sequentially applying space constraint filtering and field-of-view constraint filtering to perform traversal screening and viewpoint merging on all viewpoints in the first viewpoint data set. As shown in fig. 4, performing traversal screening on all viewpoints in the first viewpoint data set with space constraint filtering includes:
Mapping all viewpoints in the first viewpoint data set to a grid coordinate system to obtain point cloud data points corresponding to each viewpoint, and obtaining center coordinates, first expansion data and second expansion data of the grid coordinate system;
Under a grid coordinate system, expanding by taking the center coordinate as the center and taking the first expansion data as a unit to obtain a first expansion area; deleting all the point cloud data points in the first expansion area;
Expanding by taking the center coordinate as a center and taking the second expansion data as a unit to obtain a second expansion area; and deleting all the point cloud data points outside the second expansion area to obtain a screened point cloud data point set.
It should be noted that the first expansion data and the second expansion data may be set according to requirements and are not particularly limited here. In this embodiment, all viewpoints in the first viewpoint data set are first converted into the grid coordinate system with the coordinate conversion formula, and space constraint filtering is then performed. The coordinate conversion formula is:
[x_1i, y_1i, z_1i]^T = ([x_0i, y_0i, z_0i]^T - [x_min, y_min, z_min]^T) / Voxel_width,
where [x_0i, y_0i, z_0i] is the coordinate of the i-th viewpoint data in the first viewpoint data set, [x_1i, y_1i, z_1i] is the coordinate of the i-th point cloud data point in the grid coordinate system, and [x_min, y_min, z_min]^T is the minimum coordinate of the set task space. In this embodiment, to ensure that the unmanned aerial vehicle (the acquisition device) does not collide with the object during exploration, viewpoints generated inside the building or very close to it are removed by the first expansion, while the second expansion constrains the operating range of the unmanned aerial vehicle and removes ineffective viewpoints that are too far from the building. The expansion traverses the center points of the occupied areas with a KD-TREE, searches for unoccupied areas within the expansion radius of each such point, and sets them to occupied. As shown in fig. 4, the expansion area formula is applied under the grid coordinate system, so that the point cloud data points are filtered by the two expansions.
where OccupyMap(x_1i, y_1i, z_1i) is the occupancy state of the i-th point cloud data point in the grid coordinates, Kdsearch(OccupyMap, S_radius) searches the occupied-area center points within radius S_radius, Occupycent(x_1i, y_1i, z_1i) is the center coordinate of the i-th point cloud data point in the grid coordinates, and S_radius is the first expansion data or the second expansion data.
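The two-stage expansion filtering can be illustrated at the voxel-index level. This sketch uses a simple Chebyshev dilation of the occupied set instead of the KD-tree search, which is an assumption made purely for self-containment:

```python
def dilate(occupied, radius):
    """Grow the set of occupied voxel indices by `radius` (Chebyshev metric),
    a stand-in for the KD-tree expansion described above."""
    grown = set()
    offs = range(-radius, radius + 1)
    for (x, y, z) in occupied:
        for dx in offs:
            for dy in offs:
                for dz in offs:
                    grown.add((x + dx, y + dy, z + dz))
    return grown

def space_constraint_filter(view_voxels, occupied, r_near, r_far):
    """Two-stage rule: drop viewpoints inside the first (collision-margin)
    expansion, and drop viewpoints outside the second (range) expansion."""
    near, far = dilate(occupied, r_near), dilate(occupied, r_far)
    return [v for v in view_voxels if v not in near and v in far]
```

A viewpoint thus survives only in the shell between the two expansion radii around the occupied structure.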
In one embodiment of the present application, the method further includes performing traversal screening on all viewpoints in the point cloud data point set with field-of-view constraint filtering, which comprises the following steps:
mapping all the point cloud data points in the point cloud data point set to an actual coordinate system to obtain the point actual coordinates of the second view point corresponding to each point cloud data point; acquiring rotation data corresponding to each point cloud data point;
converting according to the actual coordinates and the rotation data of the points to obtain second viewpoint coordinates of each second viewpoint under a viewpoint coordinate system;
Filtering each filtered view point by adopting constraint conditions according to the second view point coordinates to obtain a coverage view point set formed by the second view points meeting the constraint conditions;
wherein the constraint condition is that the point lies inside the sensor frustum:
|X_s| <= Z_s * tan(HorzFOV / 2), |Y_s| <= Z_s * tan(VertFOV / 2), 0 < Z_s <= d,
where X_s is the X-axis coordinate value of the second viewpoint coordinate, Y_s is the Y-axis coordinate value of the second viewpoint coordinate, Z_s is the Z-axis coordinate value of the second viewpoint coordinate, HorzFOV is the horizontal field angle of the acquisition parameters, VertFOV is the vertical field angle of the acquisition parameters, and d is the center distance of the acquisition parameters.
It should be noted that, all the point cloud data points in the point cloud data point set are converted to the second viewpoint coordinates in the viewpoint coordinate system according to the viewpoint coordinate system conversion formula. The viewpoint coordinate system conversion formula is:
[xs,ys,zs]T=[x1-x0,y1-y0,z1-z0]T*R
where R is a rotation matrix converted from an actual coordinate system to a viewpoint coordinate system.
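A hedged sketch of the field-of-view constraint check follows. The frustum axis convention (Z forward in the viewpoint frame) and applying R on the left are assumptions, since the patent does not fix either explicitly:

```python
import numpy as np

def in_view_frustum(p_world, vp_pos, R, horz_fov, vert_fov, d):
    """Transform a point into the viewpoint frame via R (p - vp) and keep
    the viewpoint only if the point lies inside the sensor frustum bounded
    by the horizontal/vertical field angles and the center distance d."""
    xs, ys, zs = R @ (np.asarray(p_world, dtype=float)
                      - np.asarray(vp_pos, dtype=float))
    return (0.0 < zs <= d
            and abs(xs) <= zs * np.tan(horz_fov / 2)
            and abs(ys) <= zs * np.tan(vert_fov / 2))
```

With an identity rotation and 90-degree field angles, a point two units ahead is visible while points beyond the center distance or outside the cone are rejected.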
Fig. 5 is a schematic diagram of horizontal resolution calculation in the unmanned aerial vehicle viewpoint generating method based on incremental point cloud data according to the embodiment of the present application, and fig. 6 is a schematic diagram of viewpoint merging in the unmanned aerial vehicle viewpoint generating method based on incremental point cloud data according to the embodiment of the present application.
In one embodiment of the present application, the method further includes merging all the second viewpoints in the coverage viewpoint set, which comprises the following steps:
Mapping each second viewpoint in the coverage viewpoint set into a grid coordinate system, and dividing all point cloud data in the grid coordinate system according to the grid width to obtain a plurality of grid areas;
acquiring, for each second viewpoint, the plane parameters it covers, along with the first point cloud data quantity and the second point cloud data quantity in the grid area where the second viewpoint is located;
Calculating according to the plane parameters, the first point cloud data quantity and the second point cloud data quantity to obtain an index total score corresponding to the second viewpoint;
and merging all second viewpoints in each grid region into one optimized viewpoint according to the index total score.
It should be noted that, mapping each second view point in the coverage view point set to a grid coordinate system through a coordinate conversion formula, dividing all point cloud data in the grid coordinate system through a grid width to obtain a plurality of grid areas, and then obtaining data in each grid area to calculate an index total score of each second view point.
In the embodiment of the present application, the content for obtaining the index total score corresponding to the second viewpoint includes:
Calculating according to the first point cloud data quantity and the second point cloud data quantity by adopting a coverage rate formula to obtain coverage rate indexes of the view points;
calculating according to the plane parameters by adopting a resolution index formula to obtain the resolution of the viewpoint;
taking the product of the coverage index and the resolution of the viewpoint as the index score of the viewpoint;
if the index score is not less than the score threshold, the index total score is the index score, and if the index score is less than the score threshold, the index total score is 0;
The coverage formula is: S_cover = N_cover / N,
The resolution index formula is:
where N_cover is the first point cloud data quantity, which can be understood as the number of point cloud data covered by the second viewpoint; N is the second point cloud data quantity; S_cover is the coverage index of the viewpoint; S_res is the resolution of the viewpoint; L is the resolution constant of the plane parameters; c is the pixel width constant of the plane parameters; d_horz is the horizontal difference value; and d_vert is the vertical difference value. In this embodiment, the second viewpoint is taken as the point of interest and extended by the center distance d along both ends to obtain two boundary points, which are mapped onto the imaging plane as the plane of interest. On the plane of interest, the difference between the distances of the two boundary points to the center of the acquisition device is d_horz = |u_2 - u_1|, and the corresponding vertical difference d_vert = |v_2 - v_1| is calculated in the same way by intersecting the plane of interest with the vertical plane, as shown in fig. 5.
Note that the scoring threshold value may be set according to requirements, and is not particularly limited herein.
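The index-score computation can be sketched as below. The coverage term follows the text (covered count over total count); the exact resolution formula is not reproduced in the text, so a simple inverse-difference form is assumed here purely for illustration:

```python
def index_total_score(n_cover, n_total, d_horz, d_vert, L, c, threshold):
    """Score a candidate viewpoint: coverage index times resolution, zeroed
    out when the product falls below the score threshold. The resolution
    term (L * c) / (d_horz * d_vert) is an assumed stand-in for the
    patent's resolution index formula."""
    s_cover = n_cover / n_total                  # coverage-rate index
    s_res = (L * c) / (d_horz * d_vert)          # assumed resolution index
    score = s_cover * s_res                      # per-viewpoint index score
    return score if score >= threshold else 0.0  # below threshold: total = 0
```

The thresholding step means weak viewpoints contribute nothing to the subsequent centroid-model merge.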
In the embodiment of the application, all the second viewpoints in each grid area are combined into one optimized viewpoint according to the index total score. This can be understood as viewpoint merging based on a centroid model: the second viewpoints of each grid area are traversed, the second viewpoint with the highest index total score is taken as the center, and the second viewpoints within a certain radius are merged into one optimized viewpoint with the attraction model of the viewpoint merging formula; that is, as shown in fig. 6, all the second viewpoints in the grid area are combined with their index total scores as weights. The viewpoint merging formula is:
VP_merge = (S_center * VP_center + Σ S_VP * VP) / (S_center + Σ S_VP),
where VP_merge is the merged optimized viewpoint, VP_center is the second viewpoint with the highest index total score in the grid region, VP is a second viewpoint in the grid region, S_center is the index total score of the central second viewpoint in the grid region, and S_VP is the index total score of the second viewpoint VP in the grid region.
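A minimal sketch of the centroid-model merge, assuming a score-weighted centroid (the attraction model's formula is reconstructed from the description, so treat this as an assumption):

```python
import numpy as np

def merge_viewpoints(viewpoints, scores):
    """Merge the second viewpoints of one grid region into a single
    optimized viewpoint: a centroid weighted by the index total scores,
    so the highest-scoring viewpoint pulls the others toward it."""
    vps = np.asarray(viewpoints, dtype=float)
    w = np.asarray(scores, dtype=float)
    return (w[:, None] * vps).sum(axis=0) / w.sum()   # weighted centroid
```

With scores 1 and 3, the merged viewpoint lands three quarters of the way toward the higher-scoring pose, which is the intended attraction behaviour.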
Fig. 7 is a schematic diagram of optimizing viewpoint sequencing in the unmanned aerial vehicle viewpoint generating method based on incremental point cloud data according to the embodiment of the application.
As shown in fig. 7, in one embodiment of the present application, ordering all optimized viewpoints with the LKH algorithm to generate the three-dimensional coverage viewpoint sequence matched with the object to be three-dimensionally covered includes: acquiring the spatial position data of each optimized viewpoint under the viewpoint coordinates, calculating the spatial distance between each pair of optimized viewpoints from their spatial position data, and ordering all optimized viewpoints so as to minimize the total spatial distance, obtaining the three-dimensional coverage viewpoint sequence.
It should be noted that, when the method is applied to a three-dimensional coverage unmanned plane, the paths of the optimized viewpoints need to be ordered, so that the distance between the optimized viewpoints needs to be minimized, so as to ensure the efficiency of three-dimensional coverage, the problem can be considered as a multi-travel business problem, and the problem can be quickly solved by using an LKH heuristic solver, wherein the objective of the solver is to minimize the cost function:
J_all_dis = min Σ_{j=1}^{M−1} ‖VP_{j+1} − VP_j‖

where VP j is the spatial position data of the j-th optimized viewpoint, M is the total number of optimized viewpoints, and J_all_dis is the minimized total spatial distance. After all the optimized viewpoints are ordered by this rule, the output three-dimensional coverage viewpoint sequence provides navigation points for the unmanned aerial vehicle cluster to perform collaborative three-dimensional coverage. The ordering of the optimized viewpoints is shown in fig. 7: since the path between optimized viewpoints can be long, ordering them by the shortest-path rule reduces the distance the unmanned aerial vehicle needs to fly during coverage and improves coverage efficiency.
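The actual ordering is produced by the external LKH solver; as a self-contained stand-in, the sketch below (hypothetical names; a greedy nearest-neighbour construction followed by a 2-opt improvement pass replaces LKH) orders optimized viewpoints so the total spatial distance stays small:

```python
import math

def path_length(order, pts):
    # total open-path length over the given visiting order
    return sum(math.dist(pts[order[i]], pts[order[i + 1]])
               for i in range(len(order) - 1))

def order_viewpoints(pts):
    """Greedy nearest-neighbour ordering plus a 2-opt pass (a stand-in for LKH)."""
    n = len(pts)
    unvisited = set(range(1, n))
    order = [0]
    while unvisited:
        last = order[-1]
        nxt = min(unvisited, key=lambda j: math.dist(pts[last], pts[j]))
        unvisited.remove(nxt)
        order.append(nxt)
    # 2-opt: reverse segments as long as doing so shortens the path
    improved = True
    while improved:
        improved = False
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
                if path_length(cand, pts) < path_length(order, pts):
                    order = cand
                    improved = True
    return order
```

LKH applies far stronger k-opt moves and scales to large instances, but the objective is the same: minimize the total distance flown between consecutive optimized viewpoints.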
According to the unmanned aerial vehicle viewpoint generation method based on incremental point cloud data, viewpoint generation does not require global map information: viewpoints are generated incrementally directly from the incoming incremental point cloud, so the map can be built online in real time while viewpoints are generated simultaneously. The grid map is built according to the viewpoint constraints, viewpoints are generated from the incremental point cloud data, and viewpoint filtering is performed by two-stage expansion. Viewpoint evaluation indices based on resolution and coverage rate ensure the three-dimensional coverage effect and coverage efficiency of the optimized viewpoints; centroid-model-based viewpoint merging combines and adjusts multiple viewpoints, controlling viewpoint density while preserving viewpoint quality; finally, all optimized viewpoints are ordered by the LKH solver, guaranteeing a shortest-path traversal of the optimized viewpoints and improving efficiency.
Fig. 8 is a schematic diagram of the object being a building in the unmanned aerial vehicle viewpoint generating method based on incremental point cloud data according to an embodiment of the present application; fig. 9 is a schematic diagram of the viewpoints generated for the wall part of the building; fig. 10 is a schematic diagram of the viewpoint generating effect at the lower part of the building; and fig. 11 is a schematic diagram of the complete generation of the final viewpoints of the building.
As shown in figs. 8 to 11, the unmanned aerial vehicle viewpoint generating method based on incremental point cloud data was simulated in Gazebo: one unmanned aerial vehicle carrying a lidar scans a building, receiving point clouds and generating viewpoints covering the building at the same time. As shown in fig. 10, viewpoints are generated continuously as the map data grows, with some previously generated viewpoints being fine-tuned, finally yielding the complete viewpoint-generation result of fig. 11.
Embodiment two:
Fig. 12 is a schematic frame diagram of an unmanned aerial vehicle viewpoint generating device based on incremental point cloud data according to an embodiment of the present application.
As shown in fig. 12, an embodiment of the present application provides an unmanned aerial vehicle viewpoint generating device based on incremental point cloud data, which includes a data acquisition module 10, an update filtering module 20, a vector calculation module 30, a viewpoint calculation module 40, a viewpoint iterative derivation module 50 and a viewpoint optimization module 60;
the data acquisition module 10 is used for constructing a three-dimensional grid occupation map of the viewpoint object to be three-dimensional covered, acquiring incremental point cloud data of the viewpoint object to be three-dimensional covered through the acquisition equipment and acquiring acquisition parameters of the acquisition equipment;
The updating and filtering module 20 is configured to update the three-dimensional grid occupation map according to the acquisition parameters and the incremental point cloud data, obtain a grid update map, and obtain filtered point cloud data in the grid update map;
the vector calculation module 30 is configured to construct a grid coordinate system for the grid update map and obtain point cloud coordinates corresponding to each point cloud data under the grid coordinate system; processing the point cloud coordinates to obtain a redirection normal vector corresponding to the point cloud data;
The viewpoint calculating module 40 is configured to construct an actual coordinate system for the three-dimensional coverage viewpoint object, obtain actual coordinates of point clouds corresponding to the point cloud data under the actual coordinate system, and calculate according to the acquisition parameters, the actual coordinates of the point clouds and the redirection normal vectors to obtain viewpoint data of viewpoints corresponding to the point cloud data;
The viewpoint iteration deriving module 50 is configured to take the viewpoint corresponding to each viewpoint datum as a starting viewpoint, iteratively generate a plurality of derived viewpoints for each starting viewpoint based on the view cone model under the actual coordinate system, and construct a first viewpoint dataset from all the starting viewpoints and the corresponding derived viewpoints;
the viewpoint optimizing module 60 is configured to perform traversal screening and merging processing on all viewpoints in the first viewpoint data set to obtain an optimized viewpoint; and sequencing all the optimized viewpoints by using an LKH algorithm to generate a three-dimensional coverage viewpoint sequence matched with the object of the three-dimensional coverage viewpoint to be subjected to three-dimensional coverage.
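The module decomposition above can be read as a linear pipeline from data acquisition to viewpoint optimization; a minimal sketch (class and stage names hypothetical, each module reduced to a callable stage) of how the six modules might be composed:

```python
class ViewpointPipeline:
    """Hypothetical composition of the six device modules described above.

    Each stage is any callable taking the previous stage's output:
    data acquisition -> update/filtering -> vector calculation ->
    viewpoint calculation -> viewpoint iteration/derivation -> optimization.
    """

    def __init__(self, data_acq, update_filter, vector_calc,
                 viewpoint_calc, viewpoint_derive, viewpoint_opt):
        self.stages = [data_acq, update_filter, vector_calc,
                       viewpoint_calc, viewpoint_derive, viewpoint_opt]

    def run(self, scan):
        """Feed one incremental point-cloud batch through every stage in order."""
        out = scan
        for stage in self.stages:
            out = stage(out)
        return out
```

Because the pipeline is incremental, `run` would be invoked once per received point-cloud batch, with each module keeping its own internal state (grid map, viewpoint sets) between calls.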
It should be noted that the modules of the device in the second embodiment correspond to the steps of the method in the first embodiment. Since the unmanned aerial vehicle viewpoint generating method based on incremental point cloud data has been described in the first embodiment, the modules of the unmanned aerial vehicle viewpoint generating device based on incremental point cloud data are not described in detail again in this embodiment.
Embodiment three:
Fig. 13 is a schematic diagram of a terminal device according to an embodiment of the present application.
As shown in fig. 13, an embodiment of the present application provides a terminal device, including a processor and a memory;
a memory for storing program code and transmitting the program code to the processor;
and the processor is used for executing the unmanned aerial vehicle viewpoint generating method based on the incremental point cloud data according to the instructions in the program codes.
It should be noted that the processor is configured to execute, according to the instructions in the program code, the steps in the above embodiments of the unmanned aerial vehicle viewpoint generating method based on incremental point cloud data. Alternatively, when executing the computer program, the processor performs the functions of the modules/units in the above system/device embodiments.
For example, the computer program may be split into one or more modules/units, which are stored in the memory and executed by the processor to implement the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the instruction segments being used to describe the execution of the computer program in the terminal device.
The terminal device may be a computing device such as a desktop computer, a notebook computer, a palm computer, a cloud server, etc. The terminal device may include, but is not limited to, a processor, a memory. It will be appreciated by those skilled in the art that the terminal device is not limited and may include more or less components than those illustrated, or may be combined with certain components, or different components, e.g., the terminal device may also include input and output devices, network access devices, buses, etc.
The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory may be an internal storage unit of the terminal device, such as a hard disk or a memory of the terminal device. The memory may also be an external storage device of the terminal device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash memory card provided on the terminal device. Further, the memory may include both an internal storage unit and an external storage device of the terminal device. The memory is used for storing the computer program and other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (11)

1. An unmanned aerial vehicle viewpoint generation method based on incremental point cloud data, characterized by comprising the following steps:
Constructing a three-dimensional grid occupation map of a to-be-three-dimensional coverage viewpoint object, and acquiring incremental point cloud data of the to-be-three-dimensional coverage viewpoint object and acquisition parameters of acquisition equipment through the acquisition equipment;
updating the three-dimensional grid occupation map according to the acquisition parameters and the incremental point cloud data to obtain a grid update map and acquiring filtered point cloud data in the grid update map;
Constructing a grid coordinate system for the grid updating map and acquiring point cloud coordinates corresponding to the point cloud data under the grid coordinate system; processing the point cloud coordinates to obtain a redirection normal vector corresponding to the point cloud data;
Constructing an actual coordinate system for the to-be-three-dimensional coverage viewpoint object, acquiring point cloud actual coordinates corresponding to the point cloud data under the actual coordinate system, and calculating according to the acquisition parameters, the point cloud actual coordinates and the redirection normal vectors to obtain viewpoint data of viewpoints corresponding to the point cloud data;
Taking the view point corresponding to each view point data as a starting view point, iteratively generating a plurality of derivative view points for each starting view point based on a view cone model under the actual coordinate system, and constructing a first view point data set by all the starting view points and the corresponding derivative view points;
Performing traversal screening and merging processing on all viewpoints in the first viewpoint data set to obtain an optimized viewpoint; and sequencing all the optimized viewpoints by using an LKH algorithm to generate a three-dimensional coverage viewpoint sequence matched with the object of the three-dimensional coverage viewpoint to be subjected to three-dimensional coverage.
2. The unmanned aerial vehicle viewpoint generating method based on incremental point cloud data according to claim 1, wherein updating the three-dimensional grid occupancy map according to the acquisition parameters and the incremental point cloud data to obtain a grid update map comprises:
Calculating according to the acquisition parameters and the incremental point cloud data to obtain grid width;
Updating the three-dimensional grid occupation map by taking the grid width as a unit window and adopting a filtering rule to obtain a grid updating map;
Wherein, the content of the filtering rule comprises:
traversing the three-dimensional grid occupation map by using the unit window, acquiring the quantity of the increment point cloud data in the unit window, and determining the state corresponding to the unit window according to the quantity of the increment point cloud data;
If the number of the incremental point cloud data is 0, the unit window is in an unoccupied state;
If the number of the incremental point cloud data is not 0, the unit window is set to be in an occupied state;
if the unit window is in an occupied state and the number of the incremental point cloud data is greater than a number threshold, eliminating all the incremental point cloud data in the unit window.
3. The unmanned aerial vehicle viewpoint generating method based on incremental point cloud data according to claim 2, wherein processing the point cloud coordinates to obtain a redirection normal vector corresponding to the point cloud data comprises:
Calculating according to the point cloud coordinates and the grid width to obtain point cloud center coordinates corresponding to the point cloud coordinates;
Searching the point cloud data using a KD-tree under the grid coordinate system, with the point cloud center coordinate as the reference point, to obtain a data set constructed from n pieces of point cloud data;
Performing least square fitting processing on n point cloud data in the data set to obtain a local plane expression; calculating according to the point cloud coordinates corresponding to each piece of point cloud data and the local plane expression, and obtaining fitting parameters corresponding to each piece of point cloud data;
Screening the fitting parameters with the smallest numerical values from n fitting parameters to construct a local plane normal vector;
And calculating the local plane normal vector and the unit direction vector to obtain a redirection normal vector of the point cloud coordinate corresponding to the point cloud center coordinate.
4. The unmanned aerial vehicle viewpoint generating method based on incremental point cloud data according to claim 1, wherein obtaining viewpoint data of viewpoints corresponding to the point cloud data according to the acquisition parameters, the actual coordinates of the point clouds and the redirection normal vectors comprises:
calculating according to the center distance of the acquisition parameters, the actual coordinates of each point cloud and each redirection normal vector to obtain first viewpoint coordinates of viewpoints corresponding to the point cloud data;
calculating according to fitting parameters of the redirection normal vector to obtain rotation data of a viewpoint corresponding to the point cloud data;
wherein the viewpoint data includes first viewpoint coordinates and rotation data.
5. The unmanned aerial vehicle viewpoint generation method based on incremental point cloud data of claim 1, wherein iteratively generating a number of derivative viewpoints for each of the starting viewpoints based on the view cone model under the actual coordinate system comprises:
obtaining set iteration parameters, wherein the iteration parameters comprise an iteration direction matrix, a stepping width and a stepping angle;
and placing the bottom end surface of the view cone model on the starting viewpoint, and gradually iterating the starting viewpoint to the vertex of the view cone model according to the iteration parameters to generate a plurality of derivative viewpoints.
6. The unmanned aerial vehicle viewpoint generating method based on incremental point cloud data according to claim 2, wherein performing traversal screening and merging processing on all viewpoints in the first viewpoint data set to obtain an optimized viewpoint comprises: and performing traversal screening and viewpoint merging processing on all viewpoints in the first viewpoint data set by sequentially adopting space constraint filtering and view constraint filtering to obtain an optimized viewpoint.
7. The unmanned aerial vehicle viewpoint generation method based on incremental point cloud data of claim 6, wherein traversing the content screened by spatial constraint filtering for all viewpoints in the first viewpoint data set comprises:
Mapping all viewpoints in the first viewpoint data set to the grid coordinate system to obtain point cloud data points corresponding to each viewpoint, and acquiring center coordinates, first expansion data and second expansion data of the grid coordinate system;
Under the grid coordinate system, taking the center coordinate as a center and taking the first expansion data as a unit to expand to obtain a first expansion area; deleting all the point cloud data points in the first expansion area;
expanding by taking the center coordinate as a center and taking the second expansion data as a unit to obtain a second expansion area; and deleting all the point cloud data points outside the second expansion area to obtain a screened point cloud data point set.
8. The unmanned aerial vehicle viewpoint generating method based on incremental point cloud data of claim 7, wherein traversal screening is performed on all viewpoints in the point cloud data point set by field-of-view constraint filtering, and the traversal screening by field-of-view constraint filtering comprises:
Mapping all the point cloud data points in the point cloud data point set to the actual coordinate system to obtain the point actual coordinates of a second viewpoint corresponding to each point cloud data point; acquiring rotation data corresponding to each point cloud data point;
Converting according to the actual coordinates of the points and the rotation data to obtain second viewpoint coordinates of each corresponding second viewpoint under a viewpoint coordinate system;
filtering each filtered view point by adopting a constraint condition according to the second view point coordinates to obtain a coverage view point set formed by the second view points meeting the constraint condition;
wherein, the constraint condition is:
where X s is the X-axis coordinate value of the second viewpoint coordinate, Y s is the Y-axis coordinate value of the second viewpoint coordinate, Z s is the Z-axis coordinate value of the second viewpoint coordinate, HorzFOV is the horizontal field angle of the acquisition parameters, VertFOV is the vertical field angle of the acquisition parameters, and d is the center distance of the acquisition parameters.
9. The unmanned aerial vehicle viewpoint generating method based on incremental point cloud data of claim 8, wherein merging processing is performed on all the second viewpoints in the coverage viewpoint set, and the merging processing comprises:
Mapping each second view point in the coverage view point set into the grid coordinate system, and dividing all point cloud data in the grid coordinate system according to the grid width to obtain a plurality of grid areas;
Acquiring, for each second viewpoint, the plane parameters it covers, and the first point cloud data quantity and the second point cloud data quantity in the grid area where it is located;
Calculating according to the plane parameters, the first point cloud data quantity and the second point cloud data quantity to obtain an index total score corresponding to the second viewpoint;
and merging all the second viewpoints in each grid region into one optimized viewpoint according to the index total score.
10. The unmanned aerial vehicle viewpoint generating device based on the incremental point cloud data is characterized by comprising a data acquisition module, an updating and filtering module, a vector calculation module, a viewpoint iteration derivative module and a viewpoint optimization module;
the data acquisition module is used for constructing a three-dimensional grid occupation map of the viewpoint object to be three-dimensional covered, acquiring incremental point cloud data of the viewpoint object to be three-dimensional covered through acquisition equipment and acquiring acquisition parameters of the acquisition equipment;
The updating and filtering module is used for updating the three-dimensional grid occupation map according to the acquisition parameters and the incremental point cloud data to obtain a grid updating map and obtaining filtered point cloud data in the grid updating map;
The vector calculation module is used for constructing a grid coordinate system for the grid updating map and acquiring point cloud coordinates corresponding to the point cloud data under the grid coordinate system; processing the point cloud coordinates to obtain a redirection normal vector corresponding to the point cloud data;
The viewpoint calculating module is used for constructing an actual coordinate system for the viewpoint object to be three-dimensionally covered, acquiring the point cloud actual coordinates corresponding to the point cloud data under the actual coordinate system, and calculating according to the acquisition parameters, the point cloud actual coordinates and the redirection normal vectors to obtain viewpoint data of viewpoints corresponding to the point cloud data;
the viewpoint iteration derivative module is used for taking a viewpoint corresponding to each viewpoint datum as a starting viewpoint, carrying out iteration on each starting viewpoint based on a view cone model under the actual coordinate system to generate a plurality of derivative viewpoints, and constructing a first viewpoint dataset by all the starting viewpoints and the corresponding derivative viewpoints;
The viewpoint optimizing module is used for performing traversal screening and merging processing on all viewpoints in the first viewpoint data set to obtain an optimized viewpoint; and sequencing all the optimized viewpoints by using an LKH algorithm to generate a three-dimensional coverage viewpoint sequence matched with the object of the three-dimensional coverage viewpoint to be subjected to three-dimensional coverage.
11. A terminal device comprising a processor and a memory;
The memory is used for storing program codes and transmitting the program codes to the processor;
The processor is configured to execute the unmanned aerial vehicle viewpoint generating method based on incremental point cloud data according to any one of claims 1 to 9 according to instructions in the program code.
CN202410260174.5A 2024-03-07 2024-03-07 Unmanned aerial vehicle viewpoint generation method, device and equipment based on incremental point cloud data Pending CN118115670A (en)
Publication: CN118115670A, published 2024-05-31. Family ID: 91210299.


Legal Events: PB01 Publication; SE01 Entry into force of request for substantive examination.