CN115903543A - Simulation verification system for rapid traversal search control of unmanned aerial vehicle complex space - Google Patents


Info

Publication number
CN115903543A
Authority
CN
China
Prior art keywords
simulation
unmanned aerial vehicle
module
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211479541.8A
Other languages
Chinese (zh)
Inventor
赵小川
董忆雪
冯运铎
燕琦
李陈
史津竹
王子彻
樊迪
彭阳
董仕鹏
邵佳星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China North Computer Application Technology Research Institute
Original Assignee
China North Computer Application Technology Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China North Computer Application Technology Research Institute filed Critical China North Computer Application Technology Research Institute
Priority to CN202211479541.8A
Publication of CN115903543A
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Abstract

The invention relates to a simulation verification system for rapid traversal search control of an unmanned aerial vehicle in a complex space, comprising a space search module, a trajectory planning module and a simulated unmanned aerial vehicle. The space search module receives a dynamic scene depth map and performs the space search: it updates a probability occupancy map based on the dynamic scene depth map, finds the boundary set and the optimal observation point position corresponding to each boundary in the set, and selects one of the optimal observation point positions as the position of the next target point of the simulated unmanned aerial vehicle. The trajectory planning module plans a trajectory to the searched target point, generates a continuous expected trajectory and outputs it to the simulated unmanned aerial vehicle. The simulated unmanned aerial vehicle performs simulated flight according to the continuous expected trajectory, verifying the stability and safety of operation during flight. The invention provides a complete simulation platform capable of realizing environment perception, algorithm verification and maneuvering flight control.

Description

Simulation verification system for rapid traversal search control of unmanned aerial vehicle complex space
Technical Field
The invention belongs to the technical field of intelligent control of unmanned aerial vehicles, and particularly relates to a simulation verification system for fast traversal search control of an unmanned aerial vehicle in a complex space.
Background
The main existing method for indoor space search by an unmanned aerial vehicle is wall-following navigation: a distance sensor senses the distance between the drone and an adjacent indoor wall surface, the drone maintains a fixed distance from the wall, and it traverses and searches all indoor rooms along the wall surfaces. However, the method is not suitable for indoor spaces with complex environments, because in such scenes the distance sensor senses not only the distance to the wall surface but also the distances to other obstacles that do not belong to the wall; this greatly interferes with the logic of the algorithm and prevents it from running. In addition, even if only wall surfaces exist in the spatial environment, in a large-scale scene wall-following navigation may leave the middle of the space unexplored because of the limited sensing range of the sensor, resulting in an incomplete search. Moreover, when the drone is controlled by wall-following navigation, the direction and magnitude of its velocity at any moment are controlled mainly by reference to an artificial potential field method; this cannot constrain the overall running time or the average speed of the drone, so the drone cannot be guaranteed to complete the space search task quickly.
Meanwhile, most existing simulation verification systems for unmanned aerial vehicle space traversal search run in environments with poor visualization, modularization and portability, such as MATLAB/Simulink. They also attend only to simulation verification of the algorithm itself, ignoring the constraints of the real environment: space complexity, sensor sensing capability, data communication capability and flight control capability. If a space search algorithm is verified without considering these capability constraints, the verified algorithm will inevitably fail when verification is carried out in the real environment.
Disclosure of Invention
In view of the above analysis, the present invention aims to disclose a simulation verification system for fast traversal search control of an unmanned aerial vehicle complex space, which is used for solving the simulation verification problem of flight trajectory planning of an unmanned aerial vehicle in an indoor complex environment.
The invention discloses a simulation verification system for fast traversal search control of an unmanned aerial vehicle complex space, which comprises a space search module, a track planning module and a simulation unmanned aerial vehicle;
the space search module is connected with the simulation unmanned aerial vehicle and is used for performing space search according to the received dynamic scene depth map output frame by frame when the simulation unmanned aerial vehicle performs simulation flight; in the space search, updating a probability occupation map based on a dynamic scene depth map, finding out a boundary set for distinguishing unknown and known areas of the map and an optimal observation point position corresponding to each boundary in the boundary set, and selecting one from the optimal observation point positions as the position of the next target point of the simulated unmanned aerial vehicle;
the track planning module is connected with the space searching module and used for carrying out track planning according to a searched target point to generate a continuous expected track and outputting the continuous expected track to the simulation unmanned aerial vehicle;
the simulated unmanned aerial vehicle is used for performing simulated flight in a virtual simulation environment according to the continuous expected trajectory, and for verifying the stability and safety of operation during flight.
Further, the space searching module comprises a coordinate conversion module, a probability occupation map updating module, a boundary generating module, an observation point calculating module and a target point generating module;
the coordinate conversion module is used for converting a scene depth map output by the simulation unmanned aerial vehicle frame by frame into a three-dimensional obstacle space point cloud;
the probability occupation map updating module is used for establishing a probability occupation map composed of dynamic arrays and updating the current state of each unit element in the probability occupation map according to the three-dimensional obstacle space point cloud acquired frame by frame, wherein the current state comprises three states of unknown, unoccupied and occupied;
the boundary generating module is used for searching the current state of each element in the probability occupying map to find out a boundary set for distinguishing unknown areas and known areas of the map;
the observation point calculation module is used for determining the observation point position with the maximum observation efficiency when each boundary in the boundary set is observed;
and the target point generating module is used for carrying out global cost sorting on the observation point positions of all the boundaries and selecting one observation point position as the next target point of the path planning.
Further, the probability occupancy map updating module comprises a map initialization module, an updating module and a state determination module;
the map initialization module is used for initializing the probability occupation map and endowing each element in a map array for storing observed state values of each unit of the three-dimensional space with an initial value; the initial value is a probability value of an unknown state;
the updating module is used for dynamically updating the observed state numerical value of the probability occupation map according to the three-dimensional obstacle point cloud acquired frame by frame; according to the current frame three-dimensional obstacle point cloud, if a certain unit element is judged to be in an occupied state, updating the observed state value of the unit element according to the probability of the occupied state; if a certain unit element is judged to be in an unoccupied state, updating the observed state value of the unit element according to the probability of the unoccupied state;
and the state determining module is used for determining the current state of each unit element in the probability occupation map according to the current observed state value.
Further, in the update module, the observed state value of the map element is updated as follows:
map[x_i][y_i][z_i] = S_old + l_occ
map[x_k][y_k][z_k] = S_old + l_free
l_occ = log( p_hit / (1 - p_hit) )
l_free = log( p_miss / (1 - p_miss) )
wherein map[x_i][y_i][z_i] is the updated value of a map element currently observed in the occupied state; map[x_k][y_k][z_k] is the updated value of a map element currently observed in the unoccupied state; S_old represents the observed state value of the previous observation of the map element; l_occ is the value increment for an observation in the occupied state, and l_free the value increment for an observation in the unoccupied state; p_hit is the predetermined probability of a single observation of the occupied state, p_miss the predetermined probability of a single observation of the unoccupied state, with p_hit + p_miss = 1 and p_hit > p_miss.
Further, in the observation point calculating module, the determined optimal observation point position VP of the boundary Fro is:
VP = Mid + s · dir_normal
wherein pt_end1 and pt_end2 are the two spatial point positions, farthest apart at the two ends of the boundary Fro, obtained by traversal search from the boundary Fro mean point; Mid is the center point between them; dir_normal is the center normal of the boundary Fro; s is the distance along the center normal from the center point Mid to the optimal observation point position VP.
Further, in the target point generation module, a total observation cost composed of a distance cost, a speed cost and a speed change cost is established for the observation point position of each boundary; the observation point positions are sorted by this cost, and the observation point position with the minimum total observation cost is used as the next target point of the unmanned aerial vehicle.
Further, the total observation cost is:
cost_sum = cost_reach_dist + cost_vel_normal + cost_vel_change
wherein cost_reach_dist is the distance cost, i.e. the length of the path planned by global A* from the current position of the simulated unmanned aerial vehicle to the observation point position;
cost_vel_normal is the speed cost, i.e. the included angle between the direction of the path vector at the end of the path planned by global A* and the boundary center normal vector;
cost_vel_change is the speed change cost, i.e. the included angle between the current speed vector direction and the boundary center normal vector.
Furthermore, the simulated unmanned aerial vehicle is a simulated quad-rotor unmanned aerial vehicle and comprises a first type of simulation sensor, a second type of simulation sensor, a nonlinear simulation controller and four simulation motors;
the first type of simulation sensor comprises a simulated depth camera, and is used for acquiring front-end sensing data, including the dynamic scene, from the scene information of the virtual environment in which the simulated unmanned aerial vehicle is located, as input data for space search and trajectory planning;
the second type of simulation sensor comprises a pose measurement sensor including a simulated IMU, a GPS module, a barometer and a magnetometer and is used for sensing pose information of the simulated unmanned aerial vehicle from natural environment simulation data of a virtual environment where the simulated unmanned aerial vehicle is located;
the nonlinear simulation controller is used for performing nonlinear control according to the expected trajectory, obtained by space search and trajectory planning from the front-end sensing data of the first type of simulation sensor, and according to the pose information of the simulated unmanned aerial vehicle sensed by the second type of simulation sensor, and for outputting the corresponding motor speed to each simulation motor;
the simulation motor is used for simulating thrust, air resistance, rotation moment and air resistance moment generated by the unmanned aerial vehicle in the rotation process of the propeller by using the brushless direct current motor according to the motor rotating speed output by the nonlinear simulation controller; the resultant force and resultant moment of the simulated quad-rotor unmanned aerial vehicle in the simulation environment are obtained under the action of the four simulation motors.
Further, the nonlinear simulation controller comprises a track solver, a position loop controller, a nonlinear angle converter, a nonlinear attitude mapper and a hybrid controller;
the trajectory solver is used for converting the input simulation data of the unmanned aerial vehicle expected trajectory into the system expected state quantity at the current moment;
the position loop controller is used for performing position-loop PID control according to the expected state quantity of the system and outputting the total velocity control error;
the nonlinear angle converter is used for carrying out nonlinear SE (3) space angle conversion on the total error of the speed control to obtain a space expected rotation matrix;
the nonlinear attitude mapper is used for performing nonlinear attitude SO (3) space mapping on the current space expected rotation matrix and the measurement rotation matrix and outputting an attitude control error;
and the hybrid controller is used for controlling the motor speed according to the attitude control error and the total velocity control error, and for outputting the motor speed to the simulation motors.
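The nonlinear angle conversion and SO(3) attitude mapping steps above can be sketched as follows. This is an illustrative sketch following the standard geometric-control construction for quadrotors; the function names and the choice of error convention are assumptions, not the patent's exact controller:

```python
import numpy as np

def desired_rotation(a_des, yaw_des):
    """SE(3)-style conversion of a desired (gravity-compensated) acceleration
    from the position/velocity loop into a desired attitude rotation matrix."""
    b3 = a_des / np.linalg.norm(a_des)                      # thrust axis
    c1 = np.array([np.cos(yaw_des), np.sin(yaw_des), 0.0])  # yaw reference
    b2 = np.cross(b3, c1)
    b2 /= np.linalg.norm(b2)
    b1 = np.cross(b2, b3)
    return np.column_stack([b1, b2, b3])                    # desired R_d

def attitude_error(R_d, R):
    """SO(3) attitude mapping error e_R = 0.5 * vee(R_d^T R - R^T R_d)."""
    E = 0.5 * (R_d.T @ R - R.T @ R_d)
    return np.array([E[2, 1], E[0, 2], E[1, 0]])
```

For hovering flight the desired acceleration points straight up and the desired rotation reduces to the identity, with zero attitude error.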
Further, the system also comprises a virtual simulation environment module. The virtual simulation environment module is in data communication with the simulated unmanned aerial vehicle: it outputs the simulated virtual scene information to the first type of simulation sensor of the simulated unmanned aerial vehicle, so that the first type of simulation sensor can sense the scene information and obtain front-end sensing data including the dynamic scene; and, according to the resultant force and resultant moment of the quad-rotor unmanned aerial vehicle in the simulation environment output by the simulation motors, it outputs the simulated natural environment data, including the simulated force field, atmospheric field and magnetic field, to the second type of simulation sensor of the simulated unmanned aerial vehicle, so that the second type of simulation sensor can sense the pose information of the unmanned aerial vehicle from the natural environment simulation data.
The invention can realize one of the following beneficial effects:
the simulation system solves the problem that the unmanned aerial vehicle lacks a rapid, complete, concise and clear complex space traversal search strategy and a simulation verification environment in the field of intelligent control; a complete simulation platform capable of realizing environment perception, algorithm verification and maneuvering flight control is provided for researchers in the field, and the problem that influences of other module factors cannot be considered in space search algorithm verification of the unmanned aerial vehicle is solved.
In addition, according to the simulation verification space searching and trajectory planning method, the unknown area in the space is quantized by constructing the boundary and the optimal observation point thereof, and the space position with the maximized searching efficiency is determined, so that the problems of low efficiency, incomplete searching and complex logic in a complex space in the conventional indoor searching technology are solved; and then the global optimal observation point is determined as the next target point by calculating the observation point cost, so that the problem of low internal search speed of a complex space is solved.
Drawings
The drawings, in which like reference numerals refer to like parts throughout, are for the purpose of illustrating particular embodiments only and are not to be considered limiting of the invention.
FIG. 1 is a schematic block diagram of the connection of components of a simulation verification system in an embodiment of the present invention;
FIG. 2 is a schematic block diagram illustrating the connection of the spatial search module components in the embodiment of the present invention;
FIG. 3 is a schematic block diagram illustrating the connection of the components of the trajectory planning module in the embodiment of the present invention;
fig. 4 is a schematic diagram of the components and the internal and external connections of the simulated unmanned aerial vehicle in the embodiment of the invention.
Detailed Description
The preferred embodiments of the present invention will now be described in detail with reference to the accompanying drawings, which form a part hereof, and which together with the embodiments of the invention serve to explain the principles of the invention.
The embodiment of the invention discloses a simulation verification system for fast traversal search control of an unmanned aerial vehicle complex space, which comprises a space search module, a trajectory planning module, a simulation unmanned aerial vehicle and a virtual simulation environment module, wherein the space search module is used for searching a space;
the space search module is connected with the simulation unmanned aerial vehicle and used for performing space search according to a received dynamic scene depth map which is output frame by frame when the simulation unmanned aerial vehicle performs simulation flight; in the space search, updating a probability occupation map based on a dynamic scene depth map, finding out a boundary set for distinguishing unknown and known areas of the map and an optimal observation point position corresponding to each boundary in the boundary set, and selecting one from the optimal observation point positions as a next target point position for simulating the flight of the unmanned aerial vehicle;
the track planning module is connected with the space searching module and used for carrying out track planning according to the position of the next target point to be searched to generate a continuous expected track and outputting the continuous expected track to the simulation unmanned aerial vehicle;
the simulation unmanned aerial vehicle is used for carrying out simulation flight in a virtual simulation environment according to the expected continuous track, and verifying the stability and safety of operation in the flight process;
the virtual simulation environment module is in data communication with the simulated unmanned aerial vehicle: it outputs the simulated virtual scene information to the simulated depth camera of the simulated unmanned aerial vehicle, so that the simulated depth camera can sense the scene information and obtain front-end sensing data including the scene depth map at the current position; and it outputs the simulated natural environment data, such as the simulated force field, atmospheric field and magnetic field, to the simulated IMU, GPS module, barometer and magnetometer in the simulated unmanned aerial vehicle, so that they can sense the pose information of the unmanned aerial vehicle from the natural environment simulation data.
Specifically, as shown in fig. 2, the space search module includes a coordinate conversion module, a probability occupancy map update module, a boundary generation module, an observation point calculation module, and a target point generation module;
the coordinate conversion module is used for converting a scene depth map output by the simulation unmanned aerial vehicle frame by frame into a three-dimensional obstacle space point cloud;
the probability occupation map updating module is used for establishing a probability occupation map composed of dynamic arrays and updating the current state of each unit element in the probability occupation map according to the three-dimensional obstacle space point cloud acquired frame by frame, wherein the current state comprises three states of unknown, unoccupied and occupied;
the boundary generating module is used for searching the current state of each element in the probability occupying map to find out a boundary set for distinguishing unknown areas and known areas of the map;
the observation point calculation module is used for determining the observation point position with the maximum observation efficiency when each boundary in the boundary set is observed;
and the target point generating module is used for carrying out global cost sorting on the observation point positions of all the boundaries and selecting one observation point position as the next target point of the path planning.
More specifically, in the coordinate conversion module, based on a depth camera pinhole imaging model, pixels in a depth map shot by an unmanned aerial vehicle depth camera are converted into coordinates in a global coordinate system, so that depth information corresponding to the pixel coordinates is converted into a three-dimensional obstacle space point cloud in the global coordinate system.
Wherein, the internal parameter matrix of the unmanned aerial vehicle depth camera is expressed as:
K = [ f_x   0    c_x
      0     f_y  c_y
      0     0    1   ]
where f_x, f_y, c_x, c_y are the internal parameters of the camera.
The external parameter matrix of the depth camera of the unmanned aerial vehicle is:
T = [ R  t
      0  1 ]
where R is the rotation matrix and t the translation vector of the camera extrinsics, determined by the installation position of the camera on the drone.
Mapping the pixel plane to the camera coordinate system according to the camera's internal parameters:
Z · [u, v, 1]^T = K · [X, Y, Z]^T
where u and v are the two-dimensional pixel coordinates of the image, and X, Y, Z are coordinates in the camera coordinate system.
According to the external parameters of the camera, the coordinates in the camera coordinate system are transferred to the global coordinate system:
[X_w, Y_w, Z_w, 1]^T = T · [X, Y, Z, 1]^T
where X_w, Y_w, Z_w are the three-dimensional coordinates in the global coordinate system.
Therefore, the depth information corresponding to the two-dimensional pixel coordinates in the depth map can be converted into a three-dimensional obstacle space point cloud in the global coordinate system.
More specifically, the probability occupancy map updating module comprises a map initialization module, an updating module and a state determination module;
the map initialization module is used for initializing the probability occupation map and endowing each element in a map array for storing observed state values of each unit of the three-dimensional space with an initial value; the initial value is a probability value of an unknown state;
in the map initialization module, the maximum size x of the indoor map on the global coordinate system is determined lenght 、y lenght 、z lenght And a resolution res, establishing a map array representing the state:
map[x size ][y size ][z size ];
wherein, the first and the second end of the pipe are connected with each other,
Figure BDA0003960751300000091
Figure BDA0003960751300000092
Figure BDA0003960751300000093
each element in the map array stores a value representing the observed state of the corresponding three-dimensional space unit.
When initialization is carried out, each element in the array is initialized to the value S init Assignment, representing the initial state value of the space in which the element is located: map [ x ] size ][y size ][z size ]={S init }。
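The initialization step can be sketched as follows (illustrative; the choice S_init = 0 and ceiling rounding of the array sizes are assumptions):

```python
import math
import numpy as np

def init_occupancy_map(x_length, y_length, z_length, res, s_init=0.0):
    """Allocate the state array map[x_size][y_size][z_size] with every
    cell set to the initial (unknown-state) value S_init."""
    x_size = math.ceil(x_length / res)
    y_size = math.ceil(y_length / res)
    z_size = math.ceil(z_length / res)
    return np.full((x_size, y_size, z_size), s_init, dtype=np.float32)
```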
The updating module is used for dynamically updating the observed state numerical value of the probability occupation map according to the three-dimensional obstacle point cloud acquired frame by frame; according to the current frame three-dimensional obstacle point cloud, if a certain unit element is judged to be in an occupied state, updating the observed state value of the unit element according to the probability of the occupied state; if a certain unit element is judged to be in an unoccupied state, updating the observed state value of the unit element according to the probability of the unoccupied state;
in the updating module, specifically, when the map acquires a frame of obstacle point cloud, the occupied point in the point cloud data is represented as a point set P of the obstacle in a three-dimensional space obs ={pt 0 ,pt 1 ,...,pt i ,...,pt n },P obs Each obstacle in the set of points represents an obstacle point in the global coordinate system
Figure BDA0003960751300000101
The corresponding elements and the belonged sets in the map array are as follows:
map[x i ][y i ][z i ]∈Set occ
Figure BDA0003960751300000102
wherein Set occ In order for the set of elements to be occupied,
Figure BDA0003960751300000103
represents the map starting element map 0][0][0]Coordinates in a three-dimensional coordinate system.
Specifically, when the map acquires a frame of obstacle point cloud, the points of the point cloud data in the unoccupied state are the spatial points, in the global coordinate system, through which the connecting line between the position of the unmanned aerial vehicle and each obstacle point passes;
the resulting unoccupied points pt_k = (x_k^w, y_k^w, z_k^w) correspond to the map array elements:
{map[x_k,1][y_k,1][z_k,1], map[x_k,2][y_k,2][z_k,2], ..., map[x_k,m][y_k,m][z_k,m]} ∈ Set_free
wherein, corresponding to a certain frame of obstacle point cloud, a map element map[x_i][y_i][z_i] in the occupied element set Set_occ of the probability occupancy map is an element observed once at the current time in the occupied state, and a map element map[x_k][y_k][z_k] in the unoccupied element set Set_free is an element observed once at the current time in the unoccupied state;
then, the observed state values of the map elements are updated as:
map[x_i][y_i][z_i] = S_old + l_occ
map[x_k][y_k][z_k] = S_old + l_free
l_occ = log( p_hit / (1 - p_hit) )
l_free = log( p_miss / (1 - p_miss) )
wherein S_old represents the observed state value of the previous observation of the map element; l_occ is the value increment for an observation in the occupied state, and l_free the value increment for an observation in the unoccupied state; p_hit is the predetermined probability of a single observation of the occupied state, p_miss the predetermined probability of a single observation of the unoccupied state, with p_hit + p_miss = 1 and p_hit > p_miss.
And the state determining module is used for determining the current state of each unit element in the probability occupation map according to the current observed state value.
In the state determination module, the current state of each element is judged according to the current observed state value of any element in the map array:
state( map[x][y][z] ) = unknown,    if map[x][y][z] = S_init
                        occupied,   if map[x][y][z] > Threshold
                        unoccupied, otherwise
wherein map[x][y][z] is the observed state value of any element in the map array; Threshold is the boundary threshold between the unoccupied state and the occupied state, which can be preset according to experience; S_init is the initial value of the elements in the map array.
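The log-odds update and the state decision described above can be sketched together as follows. This is an illustrative sketch: the concrete values of p_hit, p_miss, S_init and Threshold are assumptions, since the patent only requires p_hit + p_miss = 1 and p_hit > p_miss:

```python
import numpy as np

# Assumed parameters; the patent only constrains p_hit + p_miss = 1, p_hit > p_miss.
P_HIT, P_MISS = 0.7, 0.3
L_OCC = np.log(P_HIT / (1.0 - P_HIT))      # increment for an "occupied" observation
L_FREE = np.log(P_MISS / (1.0 - P_MISS))   # increment for an "unoccupied" observation
S_INIT = 0.0                               # initial (unknown) state value (assumed)
THRESHOLD = 0.0                            # occupied/unoccupied boundary (assumed)

def update_cell(s_old, hit):
    """Log-odds update of one cell's observed-state value."""
    return s_old + (L_OCC if hit else L_FREE)

def cell_state(s):
    """Classify a cell from its observed-state value.
    Simplification: a cell whose updated value happens to equal S_INIT
    would be reported as unknown; a real implementation tracks 'seen' flags."""
    if s == S_INIT:
        return "unknown"
    return "occupied" if s > THRESHOLD else "unoccupied"
```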
More specifically, the boundary generation module searches the probability occupancy map for map elements at the junction of the unknown state and the unoccupied state as boundary elements map[x_j][y_j][z_j] of the searched map;
map[x_j][y_j][z_j] is itself in the unoccupied state but has adjacent elements in the unknown state, and the set of three-dimensional points represented by such map elements forms a boundary Fro = {pt_j, ...}.
A map may contain a plurality of boundaries Fro, expressed as a boundary set Fro ∈ Set_Fro.
For each boundary Fro, a reasonable observation point needs to be selected, so that the drone, observing the boundary from that point, maximizes observation efficiency. In general, observation efficiency is maximized when an unknown region is observed along the normal direction through its center.
More specifically, the process of determining, in the observation point calculation module, the observation point position at which the observation efficiency of each boundary Fro is maximized includes:
1) Acquiring the mean point of the boundary Fro;
specifically, the boundary Fro mean point is expressed as:
pt_mean = (1 / num_Fro) · Σ_{pt ∈ Fro} pt
wherein num_Fro represents the number of spatial points contained in the boundary Fro, and the sum runs over all points on the boundary.
2) Traversing and searching from the mean point of the boundary Fro to obtain two space point positions with the farthest distance between the two ends of the boundary Fro through the established target function;
the objective function is:

P_side1 = argmax_{P ∈ Fro} ( (P - P_mean) · dir_PCA )

P_side2 = argmin_{P ∈ Fro} ( (P - P_mean) · dir_PCA )

wherein dir_PCA is the principal component direction of all spatial points within the boundary Fro. Searching the boundary Fro for the points P_side1 and P_side2 satisfying the objective function determines the two spatial points within the boundary Fro that are farthest apart at its two ends.
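The mean-point and farthest-endpoint search can be sketched as a projection onto the principal component direction; here `numpy`'s SVD stands in for whatever PCA routine the system actually uses:

```python
import numpy as np

def frontier_endpoints(points):
    """points: (N, 3) array of frontier cells. Returns the two cells whose
    projections onto the principal component direction are extremal."""
    pts = np.asarray(points, dtype=float)
    mean = pts.mean(axis=0)          # the boundary mean point P_mean
    centered = pts - mean
    # principal component = right singular vector with the largest singular value
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    dir_pca = vt[0]
    proj = centered @ dir_pca
    return pts[int(np.argmax(proj))], pts[int(np.argmin(proj))]
```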
3) Determining a center normal of the boundary Fro according to two spatial point positions with the farthest distance between two ends of the boundary Fro;
the center normal direction is expressed as:

dir_line = (P_side2 - P_side1) / length

dir_normal = dir_line × z_w

wherein length represents the distance between the spatial points at the two ends of the boundary, and z_w is the z-axis of the global coordinate system.
4) Determining the optimal observation point position according to the center point calculated from the two farthest spatial points at the ends of the boundary and the perception field of view (FOV) of the depth camera.

The center point calculated from the spatial points at the two ends of the boundary is:

Mid = (P_side1 + P_side2) / 2

s = (length / 2) / tan(FOV / 2)

wherein s is the distance from the center point Mid to the optimal observation point VP along the center normal, so the optimal observation point VP can be expressed as:

VP = Mid + s · dir_normal
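The viewpoint construction can be sketched as follows. The stand-off distance s = (length/2)/tan(FOV/2), which places the whole boundary segment inside the camera's field of view, is a geometric assumption consistent with the FOV description but not spelled out in the text:

```python
import numpy as np

def observation_point(p1, p2, fov_rad):
    """Best observation point for a frontier with endpoint cells p1, p2.
    The center normal is taken as the segment direction crossed with the
    world z-axis (an assumption matching the center-normal description)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    length = np.linalg.norm(p2 - p1)
    mid = (p1 + p2) / 2.0
    dir_line = (p2 - p1) / length
    dir_normal = np.cross(dir_line, np.array([0.0, 0.0, 1.0]))
    dir_normal /= np.linalg.norm(dir_normal)
    s = (length / 2.0) / np.tan(fov_rad / 2.0)  # stand-off so the segment fits the FOV
    return mid + s * dir_normal
```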
in this embodiment, a plurality of boundaries Fro exist in a map, and the optimal observation point position corresponding to each boundary Fro can be obtained by searching each boundary through the above method.
More specifically, in the target point generation module, total observation costs composed of distance costs, speed costs, and speed change costs are established for the observation point positions of each boundary and are sorted, and the observation point position with the minimum total observation cost is used as the next target point of the unmanned aerial vehicle.
The observation cost for performing global cost sequencing on the observation point positions of each boundary comprises the following steps:
1) A distance cost: considering the reachable distance from the current position of the unmanned aerial vehicle to the optimal observation point of each boundary, obtained as the length of the path planned by the global A-star from the current position to the optimal observation point:

cost_reach_dist = length(A* path);
2) A speed cost: considering the degree of alignment between the terminal velocity of the unmanned aerial vehicle's path and the center normal through the optimal observation point, measured as the included angle between the direction of the terminal path-point vector of the path planned by the global A-star and the boundary center normal vector:

cost_vel_normal = cos^-1 (dir_normal · dir_path_end);
3) A speed change cost: considering the change in the unmanned aerial vehicle's current velocity required to reach the point, measured as the included angle between the current velocity vector direction of the unmanned aerial vehicle and the boundary center normal vector:

cost_vel_change = cos^-1 (dir_vel_current · dir_normal);
The total observation cost of an observation point, combining the three observation costs, is:

cost_sum = cost_reach_dist + cost_vel_normal + cost_vel_change
The observation point costs of all boundaries are then sorted, and the observation point with the minimum cost is selected for observation; this point serves as the target point of the unmanned aerial vehicle, and a continuous trajectory curve reaching it is solved by the trajectory planning method.
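The three-term cost and the minimum-cost selection can be sketched directly. The speed-change term here follows the textual description (angle to the boundary center normal); all inputs are assumed unit vectors:

```python
import math

def total_cost(path_length, dir_normal, dir_path_end, dir_vel_current):
    """Total observation cost = distance cost + speed cost + speed-change cost.
    Direction arguments are unit 3-vectors; angles come from the dot product."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    clamp = lambda v: max(-1.0, min(1.0, v))       # guard acos against rounding
    cost_reach = path_length
    cost_vel = math.acos(clamp(dot(dir_normal, dir_path_end)))
    cost_change = math.acos(clamp(dot(dir_vel_current, dir_normal)))
    return cost_reach + cost_vel + cost_change

def pick_target(candidates):
    """candidates: list of (viewpoint, cost); returns the minimum-cost viewpoint."""
    return min(candidates, key=lambda c: c[1])[0]
```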
As shown in fig. 3, the trajectory planning module includes: a discrete path planning module and a continuous track generation module;
the discrete path planning module is used for planning a path based on the global A star according to the target point position output by the space search module and outputting discrete path nodes;
specifically, a global A-star algorithm is used to search a discrete safe path from the current position to the target point position in the current map, and the minimum node of the path is the minimum volume unit (voxel) of the map.
In the A-star algorithm, the total cost of each node within the map is represented as the sum of the cost accumulated from the start point, g(n), and the estimated cost to the end point, h(n):
f(n)=g(n)+h(n);
Starting from the start point, the A-star algorithm repeatedly expands the adjacent node with the minimum total cost until the end point is found. The node costs are stored in a priority queue structure, which guarantees that the algorithm can quickly extract the node with the minimum total cost. The result is a safe path whose minimum unit is the map resolution, represented by path points at the voxel size of consecutive adjacent map units.
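The priority-queue search just described can be sketched in Python. For brevity the example is on a 2D grid with a Manhattan heuristic, whereas the patent's map is a 3D voxel grid:

```python
import heapq

def a_star(start, goal, free):
    """Grid A* with a priority queue; free(cell) says whether a cell is passable.
    Returns the list of cells from start to goal, or None if unreachable."""
    def h(c):  # Manhattan-distance heuristic to the goal
        return sum(abs(a - b) for a, b in zip(c, goal))
    open_q = [(h(start), start)]
    g = {start: 0}          # cost accumulated from the start point
    parent = {}
    while open_q:
        _, cur = heapq.heappop(open_q)
        if cur == goal:     # reconstruct the path by walking parents back
            path = [cur]
            while cur in parent:
                cur = parent[cur]
                path.append(cur)
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not free(nxt):
                continue
            ng = g[cur] + 1
            if ng < g.get(nxt, float("inf")):
                g[nxt] = ng
                parent[nxt] = cur
                heapq.heappush(open_q, (ng + h(nxt), nxt))
    return None
```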
And the continuous track generation module is used for generating a continuous track according to discrete path nodes output by the discrete path planning module and outputting a continuous expected track to the simulation unmanned aerial vehicle.
Specifically, the discrete path calculated by the global A-star is down-sampled at a suitable path distance: the consecutive adjacent path voxel points are sampled at appropriate intervals to form a sparser set of path points. Using the Bezier curve principle, with the down-sampled path points as Bezier curve control points, a parameterized trajectory curve equation with respect to time is established:

B(t) = Σ_{i=0}^{n} w_i · C(n, i) · (1 - t)^(n-i) · t^i · P_i

wherein w_i represents the weight of the i-th term, P_i is the i-th control point, and C(n, i) denotes the number of combinations (the binomial coefficient).
The safe discrete path is thus expressed as a continuous trajectory equation with respect to time using a parameterized Bezier curve, from which the continuous expected trajectory giving the unmanned aerial vehicle's expected position at each moment can be obtained. The continuous expected trajectory is output to the flight controller of the simulated unmanned aerial vehicle, so that the simulated unmanned aerial vehicle can fly along the expected trajectory in simulation.
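A Bezier evaluation over the down-sampled control points can be sketched as below, with the per-term weights w_i fixed to 1 for simplicity:

```python
from math import comb

def bezier(ctrl, t):
    """Evaluate an n-th order Bezier curve at t in [0, 1].
    ctrl: list of 3D control points (the down-sampled path points)."""
    n = len(ctrl) - 1
    point = [0.0, 0.0, 0.0]
    for i, p in enumerate(ctrl):
        b = comb(n, i) * (1 - t) ** (n - i) * t ** i  # Bernstein basis term
        for k in range(3):
            point[k] += b * p[k]
    return point
```

Sampling t over [0, 1] yields the continuous expected trajectory; the curve starts at the first control point and ends at the last.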
Specifically, as shown in fig. 4, the simulation unmanned aerial vehicle is a simulated quad-rotor unmanned aerial vehicle, and includes a first simulation sensor, a second simulation sensor, a nonlinear simulation controller, and four simulation motors;
The first type of simulation sensor comprises a simulated depth camera, used for acquiring front-end sensing data, including a dynamic scene depth map, from the scene information of the virtual environment where the simulated unmanned aerial vehicle is located, as input data for space search and trajectory planning;
the second type of simulation sensor comprises a simulated IMU, a GPS module, a barometer and a magnetometer, and is used for sensing the pose information of the simulated unmanned aerial vehicle from the natural environment simulation data of the virtual environment where the simulated unmanned aerial vehicle is located;
the nonlinear simulation controller is used for performing nonlinear control according to the expected trajectory, obtained by space search and trajectory planning on the front-end sensing data of the first type of simulation sensor, and the simulated unmanned aerial vehicle pose information sensed by the second type of simulation sensor, and outputting the corresponding motor rotation speeds to the simulation motors;
the simulation motor is used for simulating, with a brushless DC motor model and according to the motor rotation speed output by the nonlinear simulation controller, the thrust, air resistance, rotation moment and air resistance moment generated by the unmanned aerial vehicle during propeller rotation; under the action of the four simulation motors, the resultant force and resultant moment of the simulated quad-rotor unmanned aerial vehicle in the simulation environment are obtained.
Specifically, the nonlinear simulation controller comprises a trajectory solver, a position loop controller, a nonlinear angle converter, a nonlinear attitude mapper and a hybrid controller;
the trajectory solver is used for converting the input simulation data of the unmanned aerial vehicle expected trajectory into the system expected state quantity at the current moment;
the position loop controller is used for carrying out position-loop PID control according to the expected state quantity of the system and outputting a total position-velocity control error;

the nonlinear angle converter is used for carrying out nonlinear SE(3) space angle conversion on the total position-velocity control error to obtain a spatial expected rotation matrix;

the nonlinear attitude mapper is used for performing nonlinear SO(3) attitude space mapping between the current spatial expected rotation matrix and the measured rotation matrix and outputting an attitude control error;

and the hybrid controller is used for controlling the motor rotation speed according to the attitude control error and the total position-velocity control error and outputting the motor rotation speed to the simulation motors.
Specifically, the input signal of the trajectory solver is the unmanned aerial vehicle's expected trajectory, a polynomial equation with respect to time t:

F(t) = C_{3×5} · [t^4, t^3, t^2, t, 1]^T

wherein F(t) indicates the position that the drone should reach at time t, C_{3×5} is the 3×5 coefficient matrix of the three-dimensional polynomial, and [t^4, t^3, t^2, t, 1]^T represents the higher-order arguments with respect to time t.
Specifically, the trajectory solver converts the expected trajectory into the system state quantity at the current time t_k:

[P_des, V_des, A_des]

wherein P_des represents the expected position at time t_k, V_des the expected velocity at time t_k, and A_des the expected acceleration at time t_k, obtained by taking the 0th-, 1st- and 2nd-order derivatives of the expected trajectory at time t_k respectively:

P_des = F^(0)(t_k)

V_des = F^(1)(t_k)

A_des = F^(2)(t_k)
At the same time, considering the possibility of an obstacle appearing on the course of travel, the trajectory solver adjusts the expected yaw attitude ψ_des to point forward along the trajectory:

[Δx, Δy]^T = F^(0)(t_k) - F^(0)(t_{k-1})

ψ_des = atan2(Δy, Δx)

Thus, the trajectory solver resolves the expected trajectory into the expected state quantity input to the flight control: [P_des, V_des, A_des, ψ_des].
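The trajectory solver's work (evaluating the polynomial and its first two derivatives, and aligning yaw with the motion direction) can be sketched as follows; the row-major coefficient layout is an assumption for the example:

```python
import math

def solve_state(coeffs, t, t_prev):
    """coeffs: 3x5 matrix C of the polynomial trajectory
    F(t) = C @ [t^4, t^3, t^2, t, 1]^T. Returns desired position, velocity,
    acceleration and yaw (yaw points along the direction of travel)."""
    basis = [t**4, t**3, t**2, t, 1.0]
    d1 = [4 * t**3, 3 * t**2, 2 * t, 1.0, 0.0]       # first derivative basis
    d2 = [12 * t**2, 6 * t, 2.0, 0.0, 0.0]           # second derivative basis
    dot = lambda row, b: sum(c * x for c, x in zip(row, b))
    p = [dot(row, basis) for row in coeffs]
    v = [dot(row, d1) for row in coeffs]
    a = [dot(row, d2) for row in coeffs]
    prev_basis = [t_prev**4, t_prev**3, t_prev**2, t_prev, 1.0]
    p_prev = [dot(row, prev_basis) for row in coeffs]
    yaw = math.atan2(p[1] - p_prev[1], p[0] - p_prev[0])
    return p, v, a, yaw
```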
Specifically, in the position loop controller, the expected state quantity [P_des, V_des, A_des, ψ_des] output by the trajectory solver is converted by the position-loop PID into the total position-velocity control error A_input:

e_P = P_des - P_now

e_V = V_des - V_now

A_input = K_P·e_P + K_V·e_V + K_Vi·∫e_V + A_des + g

wherein e_P and e_V represent the error values of the expected position and expected velocity, g represents the gravitational acceleration, and A_input represents the total position-velocity control error; K_P is the proportional gain of the position error, K_V the proportional gain of the velocity error, and K_Vi the integral gain of the velocity error; P_now and V_now represent the position and velocity of the simulated unmanned aerial vehicle at the current time t_k.
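One step of this position loop can be sketched as below; the gains and time step are caller-supplied placeholders, and the integral is accumulated by simple rectangular integration:

```python
def position_loop(p_des, v_des, a_des, p_now, v_now, integ, kp, kv, kvi, dt, g=9.81):
    """One step of the position-loop controller: returns the commanded
    acceleration A_input and the updated velocity-error integral."""
    e_p = [d - n for d, n in zip(p_des, p_now)]
    e_v = [d - n for d, n in zip(v_des, v_now)]
    integ = [i + e * dt for i, e in zip(integ, e_v)]     # ∫e_V (rectangular rule)
    a_input = [kp * ep + kv * ev + kvi * ii + ad
               for ep, ev, ii, ad in zip(e_p, e_v, integ, a_des)]
    a_input[2] += g  # gravity compensation along the world z-axis
    return a_input, integ
```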
Specifically, in the nonlinear angle converter, nonlinear SE(3) space angle conversion is performed on the input total position-velocity control error A_input to obtain the spatial expected rotation matrix R_des, represented by [x_B,des, y_B,des, z_B,des], wherein:

z_B,des represents the vector direction of the acceleration PID value:

z_B,des = A_input / ||A_input||

y_B,des denotes the normal vector of the plane formed by z_B,des and the vector [cos ψ_des, sin ψ_des, 0]^T:

y_B,des = (z_B,des × [cos ψ_des, sin ψ_des, 0]^T) / ||z_B,des × [cos ψ_des, sin ψ_des, 0]^T||

x_B,des denotes the normal vector of the plane formed by y_B,des and z_B,des:

x_B,des = y_B,des × z_B,des
thereby, a desired rotation matrix R is obtained des
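The construction of R_des can be sketched with the standard geometric-control recipe the text describes (thrust direction plus yaw reference), assuming `numpy` for the vector algebra:

```python
import numpy as np

def desired_rotation(a_input, yaw_des):
    """Build R_des = [x_B,des  y_B,des  z_B,des] from the commanded
    acceleration A_input and the desired yaw angle."""
    z_b = np.asarray(a_input, float)
    z_b = z_b / np.linalg.norm(z_b)                        # thrust direction
    x_c = np.array([np.cos(yaw_des), np.sin(yaw_des), 0.0])  # yaw reference
    y_b = np.cross(z_b, x_c)
    y_b = y_b / np.linalg.norm(y_b)
    x_b = np.cross(y_b, z_b)
    return np.column_stack([x_b, y_b, z_b])
```

For a pure hover command (acceleration straight up, zero yaw) the result is the identity rotation.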
Specifically, in the nonlinear attitude mapper, nonlinear SO(3) attitude space mapping is performed between the spatial expected rotation matrix R_des and the measured rotation matrix R_B, and the attitude control error [e_p, e_q, e_r]^T is output.

The measured rotation matrix R_B = [x_B y_B z_B] is characterized by the orthogonal three-axis directions x_B, y_B, z_B of the measured body coordinate system.

Mapping the expected rotation matrix R_des and the current rotation matrix R_B into SO(3) space by the vee map, the attitude angle error e_R is expressed as:

e_R = (1/2) · (R_des^T · R_B - R_B^T · R_des)^∨

wherein the superscript "∨" represents the vee map of SO(3).
In the present embodiment, PID control is performed on the attitude angle error e_R to output the attitude control error [e_p, e_q, e_r]^T:

[e_p, e_q, e_r]^T = K_R·e_R + K_Ri·∫e_R

wherein K_R represents the proportional gain of the attitude angle error and K_Ri represents the integral gain of the attitude angle error.
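The vee-mapped attitude error can be sketched compactly; the vee map simply extracts the vector behind a skew-symmetric matrix:

```python
import numpy as np

def attitude_error(r_des, r_b):
    """e_R = 0.5 * vee(R_des^T R_B - R_B^T R_des): the SO(3) attitude error."""
    m = 0.5 * (r_des.T @ r_b - r_b.T @ r_des)  # skew-symmetric by construction
    # vee map: [[0,-c,b],[c,0,-a],[-b,a,0]] -> [a, b, c]
    return np.array([m[2, 1], m[0, 2], m[1, 0]])
```

For a small yaw offset the error reduces, as expected, to (0, 0, sin θ).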
Preferably, the position P_now, velocity V_now and measured rotation matrix R_B = [x_B y_B z_B] of the simulated unmanned aerial vehicle at the current time t_k are obtained by the pose measurement sensors of the simulated unmanned aerial vehicle, namely the simulated IMU, GPS module, barometer and magnetometer, sensing and measuring the natural environment simulation data in the virtual environment, including the simulated force field, atmospheric field and magnetic field.
Alternatively, the position P_now, velocity V_now and measured rotation matrix R_B = [x_B y_B z_B] of the simulated unmanned aerial vehicle at the current time t_k can also be obtained directly from the current virtual physics engine as the pose data of the simulated unmanned aerial vehicle.
Specifically, in the hybrid controller, the motor rotation speed is controlled according to the attitude control error [e_p, e_q, e_r]^T output by the nonlinear attitude mapper and the total position-velocity control error A_input output by the position loop controller, and the motor rotation speed ω is output to the simulation motors.
Wherein the thrust generated by the i-th simulation motor satisfies:

F_i = k_F · ω_i^2

and the torque generated satisfies:

M_i = k_M · ω_i^2

wherein k_F is the thrust coefficient and k_M is the moment coefficient;
For the unmanned aerial vehicle in the simulation system, the total thrust and the three body moments are related to the squared rotation speeds of the four motors through the control allocation matrix:

[F, M_1, M_2, M_3]^T = A(k_F, k_M, L) · [ω_1^2, ω_2^2, ω_3^2, ω_4^2]^T

The four motor rotation speeds of the simulated quad-rotor unmanned aerial vehicle are then obtained by inverting this allocation relation and taking square roots;

wherein L is the distance from each motor of the quad-rotor unmanned aerial vehicle to the unmanned aerial vehicle's center of mass, and m is the mass of the unmanned aerial vehicle.
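The allocation inversion can be sketched for a plus-configuration quadrotor; the motor numbering and geometry below are illustrative assumptions, not details from the patent:

```python
import numpy as np

def motor_speeds(thrust, moments, k_f, k_m, arm):
    """Invert a plus-configuration quadrotor mixing matrix: from total thrust
    and body moments (roll, pitch, yaw) to the four motor speeds."""
    mix = np.array([
        [k_f,         k_f,       k_f,        k_f      ],  # total thrust
        [0.0,         k_f * arm, 0.0,       -k_f * arm],  # roll moment
        [-k_f * arm,  0.0,       k_f * arm,  0.0      ],  # pitch moment
        [k_m,        -k_m,       k_m,       -k_m      ],  # yaw moment
    ])
    w_sq = np.linalg.solve(mix, np.array([thrust, *moments]))
    return np.sqrt(np.clip(w_sq, 0.0, None))  # clamp negatives before sqrt
```

At hover (total thrust balanced, zero moments) all four motors spin at the same speed.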
The four motor rotation speeds are input to the simulation motors, so that each simulation motor uses the unmanned aerial vehicle's brushless DC motor model to simulate the thrust F_i, air resistance D_i, rotation moment M_i and air resistance moment M_D,i generated during propeller rotation:

F_i = k_F · ω_i^2 · z_B

D_i = -k_D · ω_i · V_A

M_i = dir_turn · k_M · ω_i^2 · z_B

M_D,i = -μ_D · ω_i · V_A

wherein k_D is the air resistance coefficient and μ_D is the air resistance moment coefficient; V_A represents the air flow velocity at the propeller surface; dir_turn represents the positive or negative rotation direction of the simulation motor, and z_B is the vertical normal vector of the propeller plane.
Under the action of the four simulation motors, the resultant force and resultant moment borne by the quad-rotor unmanned aerial vehicle are expressed as:

F_total = Σ_{i=1}^{4} (F_i + D_i) - m·g·z_w

M_total = Σ_{i=1}^{4} (M_i + M_D,i + r_i × F_i)

wherein m is the unmanned aerial vehicle mass, g is the gravitational acceleration, and r_i is the position of the i-th motor relative to the center of mass.
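The force side of this summation can be sketched directly (the moment side would be summed analogously); gravity acts along the negative world z-axis:

```python
import numpy as np

def resultant(forces, drags, mass, g=9.81):
    """Resultant force on the airframe: per-rotor thrust vectors plus per-rotor
    air-drag vectors, minus the weight along the world z-axis."""
    f = (np.sum(np.asarray(forces, float), axis=0)
         + np.sum(np.asarray(drags, float), axis=0))
    f[2] -= mass * g
    return f
```

At hover, four equal thrusts summing to m·g cancel the weight exactly.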
Specifically, the virtual simulation environment module is in data communication with the simulated unmanned aerial vehicle: it outputs the simulated virtual scene information to the first type of simulation sensor of the simulated unmanned aerial vehicle, so that the first type of simulation sensor can sense the scene information and obtain front-end sensing data including the dynamic scene; and, according to the resultant force and resultant moment of the quad-rotor unmanned aerial vehicle in the simulation environment output by the simulation motors, it outputs the simulated natural environment data, including the simulated force field, atmospheric field and magnetic field, to the second type of simulation sensor, so that the second type of simulation sensor can sense the unmanned aerial vehicle's pose information from the natural environment simulation data.
Preferably, a simulation scene environment is built in the virtual simulation environment module by using a Gazebo simulation platform and a virtual physics engine.
Optionally, in the process of building the simulation scene environment, the main steps include:
1) Modeling a simulated obstacle;
By modeling in the Gazebo simulation platform, obstacles of different shapes and sizes are constructed to simulate typical structures in various scenes such as forests, buildings and rooms. To verify the adaptability of the unmanned aerial vehicles' obstacle avoidance capability in different environments, obstacle scenes with different densities are constructed, providing multiple test environments for obstacle avoidance.
2) Virtual physical engine configuration;
The virtual physics engine simulates the forces on the unmanned aerial vehicle in the natural environment, applying external forces such as gravity, aerodynamic force and drag, computing through the dynamic model, and updating the unmanned aerial vehicle's current kinematic state at each iteration step. Meanwhile, the physics engine applies a simulated force field, atmospheric field, magnetic field and the like to the sensor modules in the flight control, such as the IMU (inertial measurement unit), barometer and magnetometer, providing natural environment simulation data for the flight control.
3) Writing the simulated sensor interfaces;
Under the simulation environment and the virtual physics engine, the unmanned aerial vehicle model of the simulated unmanned aerial vehicle is constructed.
Using the motor dynamics model in the simulated unmanned aerial vehicle, in which the thrust generated by a single motor satisfies F_i = k_F · ω_i^2 and the torque generated satisfies M_i = k_M · ω_i^2, the aerodynamic forces and moments produced by the quad-rotor unmanned aerial vehicle's motors are simulated, and the motor speed input interface of the unmanned aerial vehicle is written.
4) Writing the data communication interface.

The data communication interface connects the sensor and motor data of the unmanned aerial vehicle model with the trajectory-tracking flight control. On the basis of the robot operating system ROS, UDP ports between the flight control and the simulation system are set up; the data of the simulated IMU, GPS, magnetometer and barometer sensors are fed into the flight control, and the motor speeds output by the flight control are transmitted to the simulation motors through the topic mechanism, generating the simulated aerodynamic forces.
In conclusion, the unmanned aerial vehicle simulation system of this embodiment overcomes the lack, in the field of intelligent unmanned aerial vehicle control, of a rapid, complete, concise and clear complex-space traversal search strategy and simulation verification environment; it provides researchers in the field with a complete simulation platform for environment perception, algorithm verification and maneuvering flight control, and solves the problem that unmanned aerial vehicle space search algorithm verification cannot account for the influence of other module factors.
In addition, the space searching and trajectory planning method of this simulation verification quantifies the unknown area in space by constructing boundaries and their optimal observation points, determining the spatial position that maximizes search efficiency; this solves the problems of low efficiency, incomplete search and complex logic of existing indoor search technology in complex spaces. The globally optimal observation point is then determined as the next target point by calculating observation point costs, solving the problem of slow search inside complex spaces. While the invention has been described with reference to specific preferred embodiments, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the following claims.

Claims (10)

1. A simulation verification system for fast traversal search control of an unmanned aerial vehicle complex space is characterized by comprising a space search module, a track planning module and a simulation unmanned aerial vehicle;
the space search module is connected with the simulation unmanned aerial vehicle and is used for performing space search according to the received dynamic scene depth map output frame by frame when the simulation unmanned aerial vehicle performs simulation flight; in the space search, updating a probability occupation map based on a dynamic scene depth map, finding out a boundary set for distinguishing unknown and known areas of the map and an optimal observation point position corresponding to each boundary in the boundary set, and selecting one from the optimal observation point positions as the position of the next target point of the simulated unmanned aerial vehicle;
the track planning module is connected with the space searching module and used for carrying out track planning according to a searched target point to generate a continuous expected track and outputting the continuous expected track to the simulation unmanned aerial vehicle;
the simulation unmanned aerial vehicle is used for carrying out simulation flight in a virtual simulation environment according to the expected track of the continuity, and verifying the stability and safety of operation in the flight process.
2. The simulation verification system according to claim 1, wherein the space search module comprises a coordinate conversion module, a probability occupancy map update module, a boundary generation module, an observation point calculation module, and a target point generation module;
the coordinate conversion module is used for converting a scene depth map output by the simulation unmanned aerial vehicle frame by frame into a three-dimensional obstacle space point cloud;
the probability occupation map updating module is used for establishing a probability occupation map composed of dynamic arrays and updating the current state of each unit element in the probability occupation map according to the three-dimensional obstacle space point cloud acquired frame by frame, wherein the current state comprises three states of unknown, unoccupied and occupied;
the boundary generating module is used for searching the current state of each element in the probability occupation map to find out a boundary set for distinguishing unknown areas and known areas of the map;
the observation point calculation module is used for determining the observation point position with the maximum observation efficiency when each boundary in the boundary set is observed;
and the target point generation module is used for carrying out global cost sorting on the observation point positions of all the boundaries and selecting one observation point position as the next target point of the path planning.
3. The simulation verification system of claim 2, wherein the probabilistic occupancy map update module comprises a map initialization module, an update module, and a status determination module;
the map initialization module is used for initializing the probability occupation map and endowing each element in a map array for storing observed state values of each unit of the three-dimensional space with an initial value; the initial value is a probability value of an unknown state;
the updating module is used for dynamically updating the observed state numerical value of the probability occupation map according to the three-dimensional obstacle point cloud acquired frame by frame; according to the current frame three-dimensional obstacle point cloud, if a certain unit element is judged to be in an occupied state, updating the observed state value of the unit element according to the probability of the occupied state; if a certain unit element is judged to be in an unoccupied state, updating the observed state value of the unit element according to the probability of the unoccupied state;
and the state determining module is used for determining the current state of each unit element in the probability occupancy map according to the current observed state value.
4. The simulation verification system of claim 3,
in the updating module, the observed state value of the map element is updated as follows:
map[x_i][y_i][z_i] = S_old + l_occ

map[x_k][y_k][z_k] = S_old + l_free

l_occ = log( p_hit / (1 - p_hit) )

l_free = log( p_miss / (1 - p_miss) )

wherein map[x_i][y_i][z_i] is the updated value of a map element currently observed in the occupied state; map[x_k][y_k][z_k] is the updated value of a map element currently observed in the unoccupied state; S_old represents the observed state value from the previous observation of the map element; l_occ is the value increment for a current observation of the occupied state, and l_free the value increment for a current observation of the unoccupied state; p_hit is the predetermined probability of observing the occupied state once, p_miss the predetermined probability of observing the unoccupied state once, with p_hit + p_miss = 1 and p_hit > p_miss.
5. The simulation verification system according to claim 3, wherein the optimal observation point position VP of the boundary Fro determined in the observation point calculation module is:

VP = Mid + s · dir_normal

wherein Mid = (P_side1 + P_side2) / 2, and P_side1 and P_side2 are the two spatial points at the two ends of the boundary Fro that are farthest apart, obtained by traversal search from the mean point of the boundary Fro; dir_normal is the center normal of the boundary Fro; and s is the distance from the center point Mid to the optimal observation point position VP along the center normal.
6. The simulation verification system according to claim 3, wherein in the target point generation module, a total observation cost composed of a distance cost, a speed cost and a speed change cost is established for the observation point positions of each boundary and sorted, and the observation point position with the minimum total observation cost is used as the next target point of the unmanned aerial vehicle.
7. The simulation verification system of claim 3, wherein the total observation cost is:
cost_sum = cost_reach_dist + cost_vel_normal + cost_vel_change
wherein cost_reach_dist is the distance cost, being the length of the path planned by the global A-star from the current position of the simulated unmanned aerial vehicle to the observation point position;

cost_vel_normal is the speed cost, being the included angle between the direction of the terminal path-point vector of the path planned by the global A-star and the boundary center normal vector;

cost_vel_change is the speed change cost, being the included angle between the current velocity vector direction and the boundary center normal vector.
8. The simulation verification system of claim 3, wherein the simulated drone is a simulated quad-rotor drone comprising a first type of simulated sensor, a second type of simulated sensor, a nonlinear simulation controller, and four simulated motors;
the first type of simulation sensor comprises a simulation depth camera; the system comprises a front-end sensing data acquisition unit, a data processing unit and a track planning unit, wherein the front-end sensing data acquisition unit is used for acquiring front-end sensing data including a dynamic scene depth map from scene information of a virtual environment where the simulation unmanned aerial vehicle is located and is used as input data of space search and track planning;
the second type of simulation sensor comprises a simulated IMU, a GPS module, a barometer and a magnetometer, and is used for sensing the pose information of the simulated unmanned aerial vehicle from the natural environment simulation data of the virtual environment where the simulated unmanned aerial vehicle is located;
the nonlinear simulation controller; the system is used for carrying out space search and track planning according to front-end sensing data of the first type of simulation sensor to obtain an expected track and simulated unmanned aerial vehicle pose information sensed by the second type of simulation sensor, carrying out nonlinear control and outputting a motor rotating speed corresponding to a simulated motor;
the simulation motor is used for simulating thrust, air resistance, rotation torque and air resistance torque generated by the brushless direct current motor used by the unmanned aerial vehicle in the propeller rotation process according to the motor rotation speed output by the nonlinear simulation controller; the resultant force and resultant moment of the simulated quad-rotor unmanned aerial vehicle in the simulation environment are obtained under the action of the four simulation motors.
9. The simulation verification system of claim 3 wherein the nonlinear simulation controller comprises a trajectory solver, a position loop controller, a nonlinear angle converter, a nonlinear attitude mapper and a hybrid controller;
the trajectory solver is used for converting the input simulation data of the expected trajectory of the unmanned aerial vehicle into the system expected state quantity at the current moment;
the position loop controller is used for carrying out position-loop PID control according to the expected state quantity of the system and outputting a total position-velocity control error;

the nonlinear angle converter is used for carrying out nonlinear SE(3) space angle conversion on the total position-velocity control error to obtain a spatial expected rotation matrix;
the nonlinear attitude mapper is used for performing nonlinear attitude SO (3) space mapping on the current space expected rotation matrix and the measurement rotation matrix and outputting an attitude control error;
and the hybrid controller is used for controlling the motor rotation speed according to the attitude control error and the total position-velocity control error and outputting the motor rotation speed to the simulation motors.
10. The simulation verification system of claim 3, further comprising a virtual simulation environment module in data communication with the simulated unmanned aerial vehicle; the virtual simulation environment module outputs simulated virtual scene information to the first type of simulation sensor of the simulated unmanned aerial vehicle, so that the first type of simulation sensor senses the scene information and obtains front-end sensing data including a dynamic scene; and, according to the resultant force and resultant moment of the quad-rotor unmanned aerial vehicle in the simulation environment output by the simulation motors, outputs natural-environment simulation data including a simulated force field, atmospheric field and magnetic field to the second type of simulation sensor of the simulated unmanned aerial vehicle, so that the second type of simulation sensor senses the pose information of the unmanned aerial vehicle according to the natural-environment simulation data.
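Claims 8-10 together describe a closed loop: the environment feeds the two sensor types, the controller plans and commands rotor speeds, the motors produce a wrench, and the environment integrates it forward. A hypothetical skeleton of one tick of that loop is sketched below; every class and method name is illustrative, not from the patent.

```python
# One step of the closed simulation loop described in claims 8-10:
# environment -> simulated sensors -> controller -> motors -> environment.

class Environment:
    def __init__(self):
        self.wrench_log = []
    def render_scene(self):            # virtual scene for the type-1 sensors
        return {"obstacles": []}
    def sample_fields(self):           # force / atmospheric / magnetic fields
        return {"gravity": -9.81}
    def integrate(self, wrench, dt):   # advance the rigid-body dynamics
        self.wrench_log.append((wrench, dt))

class Drone:
    def sense_scene(self, scene):  return scene         # type-1 sensing
    def sense_pose(self, fields):  return [0.0] * 6     # type-2 sensing
    def plan(self, perception):    return [[0, 0, 1]]   # search + trajectory
    def control(self, traj, pose): return [400.0] * 4   # nonlinear controller
    def motors(self, omegas):      return sum(omegas)   # toy resultant wrench

def simulation_step(env, drone, dt=0.01):
    perception = drone.sense_scene(env.render_scene())
    pose = drone.sense_pose(env.sample_fields())
    trajectory = drone.plan(perception)
    omegas = drone.control(trajectory, pose)
    env.integrate(drone.motors(omegas), dt)
```

Running `simulation_step` once records one wrench in the environment log; in a real verification system the stub methods would be replaced by the simulated sensors, planner, controller, and motor models of the claims.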
CN202211479541.8A 2022-11-24 2022-11-24 Simulation verification system for rapid traversal search control of unmanned aerial vehicle complex space Pending CN115903543A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211479541.8A CN115903543A (en) 2022-11-24 2022-11-24 Simulation verification system for rapid traversal search control of unmanned aerial vehicle complex space

Publications (1)

Publication Number Publication Date
CN115903543A true CN115903543A (en) 2023-04-04

Family

ID=86485633

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211479541.8A Pending CN115903543A (en) 2022-11-24 2022-11-24 Simulation verification system for rapid traversal search control of unmanned aerial vehicle complex space

Country Status (1)

Country Link
CN (1) CN115903543A (en)

Similar Documents

Publication Publication Date Title
CN112347840B (en) Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method
CN111256703B (en) Multi-rotor unmanned aerial vehicle inspection path planning method
CN109991636A (en) Map constructing method and system based on GPS, IMU and binocular vision
CN109683629B (en) Unmanned aerial vehicle electric power overhead line system based on combination navigation and computer vision
Hérissé et al. A terrain-following control approach for a vtol unmanned aerial vehicle using average optical flow
CN111596684A (en) Fixed-wing unmanned aerial vehicle dense formation and anti-collision obstacle avoidance semi-physical simulation system and method
Altug Vision based control of unmanned aerial vehicles with applications to an autonomous four-rotor helicopter, quadrotor
Mokrane et al. UAV path planning based on dynamic programming algorithm on photogrammetric DEMs
CN114488848A (en) Unmanned aerial vehicle autonomous flight system and simulation experiment platform for indoor building space
Zheng et al. An improved EKF-SLAM for Mars surface exploration
CN115639823A (en) Terrain sensing and movement control method and system for robot under rugged and undulating terrain
Tao et al. Modeling and control of swing oscillation of underactuated indoor miniature autonomous blimps
CN113741523B (en) Mixed unmanned aerial vehicle autonomous detection method based on boundary and sampling
CN115857372A (en) Control simulation system for distributed multi-unmanned aerial vehicle cluster collaborative space search scheduling
CN114529585A (en) Mobile equipment autonomous positioning method based on depth vision and inertial measurement
CN113741503A (en) Autonomous positioning type unmanned aerial vehicle and indoor path autonomous planning method thereof
Amidi et al. Research on an autonomous vision-guided helicopter
CN117406771A (en) Efficient autonomous exploration method, system and equipment based on four-rotor unmanned aerial vehicle
CN115903543A (en) Simulation verification system for rapid traversal search control of unmanned aerial vehicle complex space
CN117234203A (en) Multi-source mileage fusion SLAM downhole navigation method
Zhang et al. Indoor navigation for quadrotor using rgb-d camera
CN115903899A (en) Trajectory planning method, device and simulation system for multi-unmanned aerial vehicle obstacle avoidance simulation control
CN114647255A (en) Unmanned aerial vehicle sphere sensing and capturing device and method
Respall et al. Unmanned aerial vehicle path planning for exploration mapping
Andert et al. A fast and small 3-d obstacle model for autonomous applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination